title string | paper_decision string | review_1 string | rebuttals_1 string | review_2 string | rebuttals_2 string | review_3 string | rebuttals_3 string | review_4 string | rebuttals_4 string | global_rebuttals string | dataset_source string | conference_year int64 | review_5 string | rebuttals_5 string | review_6 string | rebuttals_6 string | review_7 string | rebuttals_7 string | review_8 string | rebuttals_8 string |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Uni-Med: A Unified Medical Generalist Foundation Model For Multi-Task Learning Via Connector-MoE | Accept (poster) | Summary: The paper introduces Uni-Med, a medical generalist foundation model designed for multi-task learning across six different medical tasks. The proposed CMoE module leverages a mixture of projection experts to align visual and language embedding spaces effectively. The model demonstrates significant performance improvements across diverse medical tasks, validated through extensive experiments and ablation studies.
Strengths: - The introduction of the CMoE module to address the tug-of-war problem at the connector level is novel and well-executed.
- The paper provides thorough ablation studies to validate the effectiveness of the proposed CMoE module under various configurations.
- Uni-Med achieves impressive performance with minimal training computational overhead, highlighting its efficiency in handling large-scale multi-modal medical data.
Weaknesses: - The ablation studies show that certain configurations (e.g., using a high number of projection experts) might lead to overfitting. This aspect could be discussed in more detail, including strategies to mitigate overfitting.
- While the interpretation analysis only focuses on visual features across different tasks, the analysis of visual features across different medical image modalities should also be considered.
- It would be better to add more work on the evaluation of medical vision-language models to the related work section, such as [1,2], to ensure that the relevant work is fully discussed.
[1] Yan Q, He X, Yue X, et al. Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA[J]. arXiv preprint arXiv:2405.20421, 2024.
[2] Xia P, Chen Z, Tian J, et al. CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models[J]. arXiv preprint arXiv:2406.06007, 2024.
Technical Quality: 3
Clarity: 4
Questions for Authors: - More detailed ablation studies.
- The complete interpretation analysis.
- More complete reference work.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments!
**Q1:** The ablation studies show that certain configurations (e.g., using a high number of projection experts) might lead to overfitting. This aspect could be discussed in more detail, including strategies to mitigate overfitting.
**A1:** Thank you for your suggestion.
- Choosing the optimal parameter configuration in multi-modal multi-task scenarios has always been a concern, as it aims to strike a balance between performance and computational efficiency [1-2]. It is a challenging problem: given the complexity of the scenario, a model can end up overfitting to simpler modalities or tasks, or underfitting complex ones.
- As shown in Table 2 (c), in the experiment exploring the key parameter of the number of experts, increasing the number of experts still brings performance gains on some datasets, but the **average gain tends to stabilize across all tasks and datasets**.
- To our knowledge, recent studies [3-5] have also conducted ablation experiments on these key parameters. However, there are different observations in different scenarios (number of modalities/tasks/data volumes). Therefore, we believe that **selecting the optimal parameter settings using the development set is a simple and effective method** to achieve the balance between performance and computational efficiency.
Thanks to your feedback, we will provide more detailed discussions in the revised version. We will continue to focus on this in our future research.
**Q2:** While the interpretation analysis only focuses on visual features on different tasks, the analysis of visual features on different medical image modalities should be considered.
**A2:** Thank you for your suggestion. We also use the t-SNE method to **visualize the distribution of visual features across medical image modalities** and provide the results in **Figure Re.1**.
- Specifically, we first observe the visual feature distribution of different modalities under the same task in **Figure Re.1 (a-c)**. We find that the feature distributions of CT and MRI modalities in the REG task have good discriminability after passing through the frozen visual encoder. After passing through the connector, the improvement in Silhouette score (from 0.3049 to 0.3335) is relatively limited.
- In addition, we select 100 samples from each of the 8 modalities and observe their visual feature distributions after passing through different visual encoders in **Figure Re.1 (d-f)**. It can still be observed that the majority of modality distributions are ordered and tightly packed.
Based on the above observations, the **distinction of medical image modalities is achieved effectively** through the visual encoder, while task differentiation requires the well-designed connector. We will add the analysis of visual features on different medical image modalities in the revised version.
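The t-SNE-plus-Silhouette analysis above can be sketched as follows. This is an illustrative toy example, not the authors' code: the `silhouette` function follows the standard definition of the Silhouette coefficient, while the hand-picked 2-D points stand in for t-SNE-embedded visual features of two hypothetical modalities.

```python
# Toy sketch of the Silhouette score used in the rebuttal to quantify how
# well (t-SNE-embedded) visual features cluster by modality. Pure stdlib.
import math

def silhouette(points, labels):
    """Mean silhouette coefficient: (b - a) / max(a, b) per point, where
    a = mean intra-cluster distance and b = mean distance to the nearest
    other cluster. Ranges in [-1, 1]; higher means better-separated clusters."""
    scores = []
    for i, (p, lp) in enumerate(zip(points, labels)):
        same = [math.dist(p, q) for j, (q, lq) in enumerate(zip(points, labels))
                if lq == lp and j != i]
        a = sum(same) / len(same)
        b = min(
            sum(math.dist(p, q) for q, lq in zip(points, labels) if lq == l)
            / labels.count(l)
            for l in set(labels) if l != lp
        )
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

# Two well-separated hypothetical "modalities" in a 2-D embedded space.
pts = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (5.0, 5.0), (5.1, 5.2), (5.2, 5.1)]
labs = [0, 0, 0, 1, 1, 1]
print(round(silhouette(pts, labs), 4))  # close to 1 for tight, distant clusters
```

In this framing, the rebuttal's reported improvement from 0.3049 to 0.3335 corresponds to modality clusters becoming slightly more compact and separated after the connector.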
**Q3:** It could be better to add more work on evaluation of medical vision-language models in the section of related work to make sure that the relevant work is fully discussed.
**A3:** Thank you for your suggestion. We attach great importance to your suggestions and will add cutting-edge developments [6-7] on evaluation of medical vision-language models to the related work section in the revised version.
**References**
[1] Liu Q, Wu X, Zhao X, et al. When MOE Meets LLMs: Parameter Efficient Fine-tuning for Multi-task Medical Applications[C]//Proceedings of the 47th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2024: 1104-1114.
[2] Chen S, Jie Z, Ma L. Llava-mole: Sparse mixture of lora experts for mitigating data conflicts in instruction finetuning mllms[J]. arXiv preprint arXiv:2401.16160, 2024.
[3] Chen T, Zhang Z, Jaiswal A K, et al. Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers[C]//The Eleventh International Conference on Learning Representations. 2023.
[4] Dou S, Zhou E, Liu Y, et al. Loramoe: Revolutionizing mixture of experts for maintaining world knowledge in language model alignment[J]. arXiv preprint arXiv:2312.09979, 2023, 4(7).
[5] Gou Y, Liu Z, Chen K, et al. Mixture of cluster-conditional lora experts for vision-language instruction tuning[J]. arXiv preprint arXiv:2312.12379, 2023.
[6] Xia P, Chen Z, Tian J, et al. CARES: A Comprehensive Benchmark of Trustworthiness in Medical Vision Language Models[J]. arXiv preprint arXiv:2406.06007, 2024.
[7] Yan Q, He X, Yue X, et al. Worse than Random? An Embarrassingly Simple Probing Evaluation of Large Multimodal Models in Medical VQA[J]. arXiv preprint arXiv:2405.20421, 2024.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer wK4x
Comment: Thank you for the rebuttal, which addressed some of my concerns. I have increased my score and look forward to reading your revision at a future venue. | Summary: The paper presents Uni-Med, a medical generalist foundation model designed to perform multiple medical tasks efficiently through multi-task learning. This model introduces a Connector-Mixture-of-Experts (CMoE) module to mitigate the tug-of-war problem in multi-modal, multi-task optimization, which is a common issue in current models. Uni-Med achieves competitive or superior performance across six medical tasks without requiring task-specific fine-tuning.
Strengths: 1. Multi-modal multi-task optimization is a complex and important problem for large multimodal models. The introduction of the Connector-Mixture-of-Experts (CMoE) module, which employs a mixture of projection experts to align visual and language embedding spaces, shows superior performance on multiple tasks.
2. The paper conducts a comprehensive interpretation analysis of the problem from the perspective of gradient optimization and parameter statistics.
3. Extensive experiments demonstrate Uni-Med's effectiveness across multiple tasks and datasets.
Weaknesses: 1. The model currently supports only 2D images, whereas most commonly used medical imaging modalities, such as CT and MRI, are in 3D.
2. For the report generation task, the evaluation should include metrics like RadGraph Score and RadCliQ, as BLEU and ROUGE cannot fully assess the semantic accuracy.
Technical Quality: 3
Clarity: 3
Questions for Authors: Is the number of projection experts correlated with the number of tasks or the number of image modalities?
Figure 5 shows that visual features of the same task are more tightly distributed. How would t-SNE behave for different modalities? Why are visual features related to tasks? A single image can be used for both classification and report generation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments!
**Q1:** The model currently supports only 2D images, whereas most commonly used medical imaging modalities, such as CT and MRI, are in 3D.
**A1:** Thank you for your advice. As with most medical MLLMs, we input **2D slices and corresponding questions for 3D images** such as CT and MRI. We acknowledge that Uni-Med has certain limitations in handling genuine 3D medical image inputs. The primary challenge lies in the need for different visual encoders to process 3D images effectively [1]. Replacing the visual encoder to handle 3D images would compromise our ability to process 2D image datasets effectively. This remains an area for future research.
**Q2:** For the report generation task, the evaluation should include metrics like RadGraph Score and RadCliQ, as BLEU and ROUGE cannot fully assess the semantic accuracy.
**A2:** Thank you for your suggestion. Firstly, we review the concepts of the two metrics mentioned above:
- RadGraph-based metrics. The RadGraph model [2] parses radiology reports into graphs containing clinical entities and relations between them. The RadGraph F1 metric computes the overlap in entities and relations separately, then reports their average.
- RadCliQ. Radiology Report Clinical Quality (RadCliQ) is a composite metric that integrates RadGraph F1 and BLEU score in a linear regression model to predict the total number of errors in a report [3].
Secondly, we calculate and report RadGraph entity F1, RadGraph relation F1, RadCliQ-v0 and RadCliQ-v1 on MIMIC-CXR dataset in **Table Re.1**, using the code released by Yu et al. [3]. The improvement of RadGraph-based metrics and the decrease of RadCliQ both indicate that **Uni-Med achieves better semantic accuracy in the report generation task**.
**Q3:** Is the number of projection experts correlated with the number of tasks or the number of image modalities?
**A3:** Thank you for your question. We hold the opinion that both are important, and the answer to this question needs to be **based on the actual situation**. We analyze the following three scenarios:
- **Single modal, multi-task.** A same image may need to complete different tasks, and the number of experts should be related to the number of tasks.
- **Multi-modal, single task.** The setting of the number of experts should consider the number of modalities.
- **Multi-modal, multi-task.** In this scenario, further analysis of the data is required. Taking Uni-Med as an example, when we try to visualize visual features separately by task and modality, we find that the distribution of features by task is more chaotic, while the distribution of features by modality is more orderly (Detailed in Q4 & A4). Therefore, CMoE needs to consider task information more, and the number of experts is more related to the task.
**Q4:** Figure 5 shows that visual features of the same task are more tightly distributed. How would t-SNE behave for different modalities? Why are visual features related to tasks? For a single image, it can be used for both classification and report generation.
**A4:** Thank you for your question.
- In **Figure 5**, we visualize the distribution of visual features by tasks. It can be clearly observed that the distribution of features by task is chaotic in **Figure 5 (a)**, which means that there is no obvious discrimination between different tasks after passing through the frozen visual encoder. Visual features of the same task are more tightly distributed after CMoE in **Figure 5 (c)** than MLP in **Figure 5 (b)**.
- We use t-SNE to visualize the distribution of visual features by modalities and provide the results in **Figure Re.1**. Specifically, we first observe the visual feature distribution of different modalities under the same task in **Figure Re.1 (a-c)**. We find that the feature distributions of CT and MRI modalities in the REG task have good discriminability after passing through the frozen visual encoder. After passing through the connector, the improvement in Silhouette score (from 0.3049 to 0.3335) is relatively limited. In addition, we select 100 samples from each of the 8 modalities and observe their visual feature distributions after passing through different visual encoders in **Figure Re.1 (d-f)**. It can still be observed that the majority of modality distributions are ordered and tightly packed.
- The above findings also provide a new perspective for explaining the explicit task conditioned projection in CMoE. When aligning visual and language embedding spaces through the connector in Uni-Med scenario, **task information is more difficult to distinguish than modality information**.
- As described in the question, a single image can be used for different tasks. If the visual features were not related to the task, then the tokens of the same image input into the LLM would be exactly the same (through linear, MLP, and token-level CMoE in Table 2). In this case, achieving multitasking relies entirely on the capabilities of the LLM. On the contrary, we assume that different tasks require attention to different image features. After passing through the connector, the features of the same image adaptively change for different tasks, which can alleviate the negative impact of the tug-of-war problem in multi-task learning on the LLM. The significant improvement in experimental results confirms the latter assumption.
**References**
[1] Bai F, Du Y, Huang T, et al. M3D: Advancing 3D medical image analysis with multi-modal large language models[J]. arXiv preprint arXiv:2404.00578, 2024.
[2] S Jain, A Agrawal, A Saporta, et al. RadGraph: Extracting clinical entities and relations from radiology reports. In Proceedings of the Neural Information Processing Systems Track on Datasets and Benchmarks, volume 1, December 2021.
[3] Yu F, Endo M, Krishnan R, et al. Evaluating progress in automatic chest x-ray radiology report generation[J]. Patterns, 2023, 4(9).
---
Rebuttal 2:
Comment: Thank you for the response. The authors have adequately addressed my primary concerns, and I have no further questions. I will maintain my previous rating. | Summary: The authors propose to build a medical generalist multi-modal foundation model using a novel "connector mixture of experts" module to solve the problem of "multi-task" learning. Their connector-MOE technique introduces a projection and routing module from the visual encoder into the LLM that is explicitly conditioned on the underlying task. The model is tested on a set of medical tasks, and benchmarked against several other multimodal medical models demonstrating improved performance.
Strengths: The explicit task conditioned projection is novel, and integrates an older concept of MoEs into a SOTA MLLM framework.
The authors do an excellent job sourcing and assembling a large, multi-task, multi-modal set of medical benchmarks and performing an extensive set of ablations.
The paper is benchmarked broadly across multiple tasks - fitting the definition of a foundation model.
Weaknesses: I disagree with the claim that there is limited research on connecting modalities in multi-modal models. This is an area of immense interest and extensive research broadly within the field of machine learning.
I don't think that it is a unique medical issue, and the benchmarking of a novel architecture would be better served using more common datasets to the ML community. This is particularly important because, as the authors note, the medical datasets and models involved were hard for them to control for data leakage. Why not compare the routing technique against non-medical models on non-medical datasets where this issue won't be the case?
The results are hard to follow in two very large Tables Table 2 and 3, particularly Table 3 which is the comparison to existing standards. The explanation of these comparisons is quite brief, and not well described in the text despite being the primary comparison for the paper.
The paper cites data leakage as being an issue, but isn't dataset shift an issue too? If I understand correctly, Uni-Med is being tested on held-out samples from datasets it was trained on, while the other LLMs are being tested on a mixture of data that in some cases was even in their training datasets? A fairer comparison would be to utilize the same datasets across model architectures.
Technical Quality: 2
Clarity: 3
Questions for Authors: In the discussion of MOEs and the paper motivation I think that it could be clarified substantially. Line 97-105 clearly disambiguates the usage of the term MoE, and it might be helpful to do this sooner in the introduction to help clarify this work for readers.
Generalist foundation model seems to be redundant? Isn't a "Foundation model" by definition "generalist"?
A simple linear projection and purely autoregressive design with visual instruction fine-tuning like LLaVA learns to implicitly condition the projected visual tokens on task and is the appropriate benchmark for this explicit routing framework.
I feel like the paper would, in general, benefit from framing it as an investigation into novel connectors with which to build any foundation model - medical or otherwise. Framing it as a novel medical foundation model focuses on the wrong thing, and makes me wonder why it isn't tested on a broader range of medical tasks and datasets, scaled up and down, and so forth.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No limitations are mentioned, and I think that this is a missed opportunity. There are clearly limitations with regards to the comparisons to other models, datasets involved with these comparisons, and overall size of the involved models utilized in Uni-Med.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments!
**Q1:** I disagree with the claim that there is limited research on connecting modalities in multi-modal models.
**A1:** There is no doubt that research on connecting modalities in multi-modal models is popular and extensive. As mentioned in lines 4-6, 50-52, and 104-105, our research focuses specifically on **connectors in MLLMs**. We will highlight this scope of our research in the revised version to avoid misunderstanding.
**Q2:** I don't think that it is a unique medical issue, and the benchmarking of a novel architecture would be better served using more common datasets to the ML community. As the authors note, the medical datasets and models involved were hard for them to control for data leakage. Why not compare the routing technique against non-medical models on non-medical datasets?
**A2:** Thank you for your suggestion.
- First, we agree that this is not a uniquely medical issue. However, when we construct medical MLLMs for multi-modal and multi-task scenarios, we observe the tug-of-war problem at the connector level within standard MLLM architectures. This is the **background** of our proposal of Uni-Med.
- Second, it is of great significance to research and develop a medical generalist foundation model. Providing a superior solution to the tug-of-war problem, which is particularly serious due to the diversity of image modalities and tasks in the medical field, is our basic **motivation**. We believe that **"solving problems in prominent fields" and "exploring generalization in general fields" are equally important contributions** to the machine learning community.
- Third, not only in the medical field but also in the general field, model evaluation is plagued by data leakage issues. In our work, the data leakage issue is only observed on MPx-Single using the model provided by RadFM. We ensure that all experiments of Uni-Med are free of data leakage issues and that the results are **reliable**.
- Fourth, we conduct a **preliminary exploration of the generalization** of our method in the general field. We fully follow the training strategy of LLaVA-1.5 and report metrics on 9 benchmarks with/without CMoE in **Table Re.2**. The results show that introducing CMoE brings significant improvements on all benchmarks.
**Q3:** The results are hard to follow in two very large Table 2 and 3. The explanation of these comparisons is quite brief, and not well described in the text despite being the primary comparison for the paper.
**A3:** Thank you for your suggestion. Due to the page limit of the paper, we acknowledge the inadequacies in our writing regarding the explanation of experimental results. We will **add these details** in the revised version. We hope to make the readers feel that the presentation is clear and easy to understand.
**Q4:** If I understand correctly, Uni-Med is being tested on held-out samples from datasets it was trained on, while the other LLMs are being tested on a mixture of data that in some cases was even in their training datasets?
**A4:** Your understanding is accurate. For other medical MLLMs, we use readily available model checkpoints for testing. We acknowledge that a completely fair comparison would utilize the same datasets across model architectures, but we have already achieved relative fairness:
- We use the **official test set split** for all datasets, except for Slake-VQA, as we utilize it to build data for other tasks. In this case, the comparison on Slake-VQA is unfair, but it is actually unfair to Uni-Med, because part of its test data is used for training in other models.
- As for the fact that we report RadFM's data leakage on the MPx-Single dataset (RadFM open-sources this dataset and provides the split), we conduct testing **strictly according to the split**, which only indicates that RadFM's model checkpoint may not have been trained according to this split.
- **None of the ablation experiments has data leakage issues**, and the effectiveness of CMoE in any configuration is **reliable**.
**Q5:** Line 97-105 clearly disambiguates the usage of the term MoE, and it might be helpful to do this sooner in the introduction to help clarify this work for readers.
**A5:** Thank you for your suggestion. As mentioned in line 50-53, current research to mitigate the tug-of-war problem mainly tailors the MoE approach to the language model components, overlooking the potential benefits of exploring and enhancing the connector. We will provide a clearer presentation of our motivation and the usage of MoE in the introduction section.
**Q6:** LLaVA learns to implicitly condition the projected visual tokens on task and is the appropriate benchmark for this explicit routing framework.
**A6:** In fact, all experiments with linear and MLP connectors in Table 2 have model architectures **consistent with LLaVA and LLaVA-1.5**, respectively. In addition, CMoE with the token-level router strategy is also an implicitly conditioned projection architecture. We hope these explanations help clarify our benchmark settings.
**Q7:** The paper would benefit from framing it as an investigation into novel connectors with which to build any foundation model. Framing it as a novel medical foundation model focuses on the wrong thing, and makes me wonder why it isn't tested on a broader range of medical tasks and datasets.
**A7:** We elaborate on the background and motivation for choosing the medical field in the first and second points of **A2**. With current computing resources, we have trained and tested on as wide a range of medical tasks as possible. **Compared to existing medical MLLMs**, Uni-Med has added more diverse tasks and datasets. We do indeed look forward to having more data and a wider variety of tasks to validate our method, and we will continue to focus on this in our future research. | Summary: This paper introduces Uni-Med, which applies mixture of experts at the connector level for efficient training toward a unified medical multi-modal foundation model. The contributions of this work include: 1) curation of indexes to quantify the tug-of-war problem in a multi-modal multi-task model; 2) a novel perspective of applying Connector-MoE for a multi-modal multi-task model, which enables efficient training; 3) comprehensive ablative studies to evaluate various configurations for different modules; and 4) a commitment to providing open-source code and weights for the proposed method.
Strengths: 1. It is a technically solid work. The mitigation of the tug-of-war problem is justified from multiple perspectives, including the developed indexes, parameter statistics scores, routing weights, and t-SNE feature visualization.
2. The proposed framework is evaluated thoroughly in ablation studies.
3. Consistent improvements over existing open-source medical foundation models are observed.
4. The presentation is clear and easy to follow.
Weaknesses: 1. While the reviewer appreciates the acknowledgment of several limitations of this work, it would be better to mention them in the main text, especially limitation no. 5 in lines 662-663. If the space is not sufficient, at least they should be briefly mentioned in the main text, and a reference to detailed limitations should be provided.
2. Confusion about the training/fine-tuning details: for models presented in Table 3, are they individually fine-tuned for each dataset based on the split introduced in the appendix? How is the fine-tuning implemented? Is it end-to-end or LoRA fine-tuning? For the Uni-Med, the reviewer understands that it is trained on all datasets appearing in Table 3, but there is no individual fine-tuning. Is this understanding correct?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Line 143 is missing an introduction about the soft router.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Most of the limitations are discussed in the Appendix, while some of them are not addressable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your insightful comments!
**Q1:** While the reviewer appreciates the acknowledgment of several limitations of this work, it would be better to mention them in the main text, especially limitation no. 5 in lines 662-663. If the space is not sufficient, at least they should be briefly mentioned in the main text, and a reference to detailed limitations should be provided.
**A1:** Thank you for your suggestion. We have realized that the limitations of our work and related references should be mentioned in the main text. We will add a limitation section in the main text of the revised version.
**Q2:** Confusion about the training/fine-tuning details: for models presented in Table 3, are they individually fine-tuned for each dataset based on the split introduced in the appendix? How is the fine-tuning implemented? Is it end-to-end or LoRA fine-tuning? For the Uni-Med, the reviewer understands that it is trained on all datasets appearing in Table 3, but there is no individual fine-tuning. Is this understanding correct?
**A2:** Thank you for your questions. We will elaborate on the implementation details of the models in Table 3 to reduce your confusion.
- The understanding of Uni-Med is basically accurate. Uni-Med achieves **joint training** on six distinct medical tasks and 12 datasets, requiring only **one-stage training** on a single A800 GPU and **no task/dataset fine-tuning**. It strictly follows the dataset split introduced in the appendix.
- For model comparison, we use readily **available model checkpoints for testing**. The details are as follows: (1) **About training data and model type.** Except for Med-Flamingo, the raw training data of the comparison models all contain some of the datasets in Table 3. For example, LLaVA-Med uses full-parameter fine-tuning on Slake-VQA and Path-VQA, respectively, which means it offers different dataset-specific model checkpoints. XrayGPT is a task-specific model trained on MIMIC-CXR. RadFM is a generalist foundation model whose training data includes Slake-VQA, MIMIC-CXR, and MPx-Single. (2) **About dataset splits.** We use the **official test set split** for all datasets, except for Slake-VQA, as we utilize it to build data for other tasks. In this case, the comparison on Slake-VQA is unfair, but it is actually unfair to Uni-Med, because part of its test data is used for training in other models.
- A completely fair comparison across different model architectures would use the **same dataset split for training and testing**. The medical MLLMs used for comparison all follow the **standard architecture** consisting of a vision encoder, a connector (e.g., XrayGPT: linear layer; LLaVA-Med: MLP), and an LLM. From this perspective, the experiments of the connector using a linear layer or MLP in Table 2, **to some extent, represent** the results of the XrayGPT and LLaVA-Med frameworks, respectively, under settings **fully consistent with the Uni-Med training and fine-tuning strategy**.
Thank you for recognizing our extensive experiments and analyses. To reduce the confusion of readers, we will add more implementation details in the revised version.
**Q3:** Line 143 is missing an introduction about the soft router.
**A3:** Thank you for your meticulous review and reminder. The soft router receives input tokens and calculates the routing weights for each expert. We will add the missing introduction about the soft router in the revised version.
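As a rough illustration (our assumption of a typical design, not the paper's exact implementation), a soft router can be a small linear scorer per expert followed by a softmax over the resulting logits; each token then receives a weight for every projection expert:

```python
# Hypothetical soft router sketch: per-expert linear scores -> softmax
# routing weights. All names and sizes here are illustrative placeholders.
import math
import random

random.seed(0)
NUM_EXPERTS, DIM = 4, 8

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

# One linear scorer (weight row) per expert.
router_w = [[random.gauss(0, 0.1) for _ in range(DIM)] for _ in range(NUM_EXPERTS)]

def route(token):
    """Return one routing weight per expert for a single input token."""
    logits = [sum(w * t for w, t in zip(row, token)) for row in router_w]
    return softmax(logits)

token = [random.gauss(0, 1) for _ in range(DIM)]
weights = route(token)
print(all(w > 0 for w in weights), round(sum(weights), 6))  # True 1.0
```

The expert outputs would then be mixed by these weights, so every expert contributes to every token (a "soft" rather than top-k assignment).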
---
Rebuttal Comment 1.1:
Title: Response to the rebuttal
Comment: I thank the author for their time and effort preparing for the rebuttal. After reading the rebuttal, I have a follow-up question about the comparison with open-source MLLM and Table 3 (correspond to Q2 and A2 above).
Other models, such as LLaVA-Med and Med-Flamingo, are never trained on some of the other datasets. For instance, the authors used the LLaVA-Med checkpoints that are not trained on MIMIC-CXR and also did not fine-tune it on the MIMIC-CXR dataset. Rather, that checkpoint was directly applied to the test set of MIMIC-CXR, and the results were reported in Table 3. Is this understanding correct?
---
Reply to Comment 1.1.1:
Comment: Thanks again for your patience and meticulousness! For the follow-up question mentioned above:
- Your understanding in the comment is correct. Taking the evaluation of LLaVA-Med as an example, for Slake-VQA and Path-VQA, we use the checkpoints of the third stage (dataset-specific fine-tuning) for each dataset separately; For other datasets, we use the checkpoints of the second stage (medical instruction tuning). Some open-source medical MLLMs have never been trained on some of the datasets. In these cases, we have annotated "zero-shot" (i.e. gray background) in Table 3.
- If you are concerned about the performance of other models using the same data and training strategy as Uni-Med, the experiments of the connector using a linear layer or MLP in Table 2, to some extent, represent the results of the XrayGPT and LLaVA-Med frameworks, respectively.
- Furthermore, we would like to clarify the purpose of Table 3: (1) Uni-Med has covered more medical tasks than existing open-source medical MLLMs; (2) Uni-Med has achieved competitive or superior evaluation metrics on various medical tasks compared to other "task-specific" MLLMs (e.g. LLaVA-Med for VQA, XrayGPT for report generation).
We hope these explanations help address your concerns. | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to express our heartfelt gratitude for your invaluable time, expertise, and meticulous attention in reviewing our manuscript. The insightful comments and constructive feedback have immensely enriched the quality and rigor of our work.
We appreciate that the reviewers acknowledge the advantages of our work:
- **About module designs.** The introduction of the CMoE module to address the tug-of-war problem at the connector level is **novel and well-executed** (Reviewer wK4x); The explicit task conditioned projection is **novel**, and integrates an older concept of MoEs into a SOTA MLLM framework (Reviewer Snz8);
- **About experiments.** The paper provides **thorough ablation studies** to validate the effectiveness of the proposed CMoE module under various configurations (Reviewer wK4x); **Extensive experiments** demonstrate Uni-Med's effectiveness across multiple tasks and datasets (Reviewer b87p); The authors do an **excellent job sourcing and assembling** a large, multi-task, multi-modal set of medical benchmarks and performing an **extensive set of ablations** (Reviewer Snz8); The paper is **benchmarked broadly** across multiple tasks—fitting the definition of a foundation model (Reviewer Snz8); The proposed framework is **evaluated thoroughly** in ablation studies (Reviewer mvVf).
- **About results and meaning.** Uni-Med achieves **impressive performance with minimal training computational overhead**, highlighting its **efficiency** in handling large-scale multi-modal medical data (Reviewer wK4x); Multi-modal multi-task optimization is a **complex and important question** for large multimodal models, and the introduced CMoE module, which employs a mixture of projection experts to align visual and language embedding spaces, shows **superior performance** on multiple tasks (Reviewer b87p); It is a **technically-solid** work (Reviewer mvVf); **Consistent improvements** over existing open-source medical foundation models are observed (Reviewer mvVf).
- **About analysis.** The paper conducts a **comprehensive interpretation analysis** of the problem from the perspective of gradient optimization and parameter statistics (Reviewer b87p); **The mitigation of the tug-of-war problem is justified from multiple perspectives**, including developed indexes, parameter statistics scores, routing weights, and tSNE feature visualization (Reviewer mvVf).
- **About writing.** The presentation is **clear and easy to follow** (Reviewer mvVf).
On the other hand, we actively adopt the suggestions put forward by the reviewers and diligently address all the issues. Allow us to summarize the revisions made in the rebuttal:
- **Exploring visual feature distributions across image modalities.** We conduct additional interpretation analysis focusing on visual features across different medical image modalities. We also use the t-SNE method for visualization and obtain instructive observations.
- **Adding metrics for the report generation task.** To fully assess semantic accuracy, RadGraph F1 and RadCliQ are used to evaluate the results of different models on the MIMIC-CXR dataset.
- **Emphasizing contribution to the ML community.** Although this is not a uniquely medical issue, we have observed that the tug-of-war problem is particularly serious due to the diversity of image modalities and tasks in the medical field. We believe that "solving problems in prominent fields" and "exploring generalization in general fields" are equally important contributions to the machine learning community. Through extensive experiments and interpretability analysis from multiple perspectives, Uni-Med has shown its effectiveness in mitigating the tug-of-war problem in the medical field, and the main idea is instructive for the general domain.
- **Clarifying fair comparison.** None of the ablation experiments has data leakage issues, and the effectiveness of CMoE in any configuration is reliable. For model comparison, we use readily available model checkpoints for testing. We use the official test set split for all datasets except Slake-VQA, as we utilize it to build data for other tasks. In this case, the comparison on Slake-VQA is not perfectly fair, but the disadvantage falls on Uni-Med, because part of its test data was used for training by the other models.
- **Preliminary exploration of generalization in the general domain.** We fully follow the training strategy of LLaVA-1.5 and report metrics on 9 benchmarks with and without CMoE. The results show that the introduction of CMoE brings significant improvements on all benchmarks.
- **Writing revision and explanation.** Thanks to the reviewers' feedback, in the revised version, we will (1) provide more detailed experimental discussions; (2) provide clearer explanation for the usage details of comparative models; (3) supplement, emphasize and refine some textual expressions; (4) adjust the placement of the limitation section to the main text.
- **Adding references.** We will add cutting-edge developments on evaluation of medical vision-language models to the related work section.
- **Future work.** Addressing the limitations in handling 3D image data, determining the optimal setting of key parameters such as the number of experts in different scenarios, and validating on more data and a wider variety of tasks will be part of our future work.
Pdf: /pdf/2eb7b4381a548784f54eff92fedfbd5b4dce653d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exploration by Learning Diverse Skills through Successor State Representations | Accept (poster) | Summary: This paper proposes a new skill-based exploration method that leverages successor states. The authors start by arguing the inadequacy of maximizing mutual information as objective for learning diverse skills that also encourage exploration. Then the authors define the mutual information in terms of successor state measures to derive a novel uncertainty measure that ranks states highly that encourage visitation within the current skill but that other skills haven’t visited. The authors provide comprehensive experiments showing that they outperform common intrinsic motivation methods and other well known skill diversity exploration methods.
Strengths: - The presentation of the paper is excellent. It is easy to read and follow.
- The authors motivate the paper very well and start from existing work to derive a novel method.
- The experiments seem fitting although somewhat simple.
- The authors perform thorough ablations and also analyse failure cases.
Weaknesses: - The first 2-D environment seems a bit simple. Although I think it is adequate to demonstrate exploration behaviour, it would be interesting to see how LEADS performs with more complex observations or tasks.
Technical Quality: 3
Clarity: 4
Questions for Authors: - have you tested your methods on any rgb observations? It would be interesting to see if C-Learning is still robust in this case.
- Could you think of any way to perform C-Learning in high dimensional environments? Is the bottleneck related more to the task complexity or observational complexity?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors have addressed the limiting components of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Answer: Reviewer 8kZQ
We thank Reviewer 8kZQ for their very positive comments on the paper presentation and for their insightful comments on C-learning. We hope to address their questions in our response.
We did find that the application of C-learning is limited based on the dimensionality of the problem. The application of C-learning to the high-dimensional state spaces of the MuJoCo environments was a challenging aspect of this work, as detailed in Appendix D. We therefore did not try RGB environments, as this would require further fine-tuning of C-learning on a convolutional architecture. However, recent contributions [1][2] building upon C-Learning illustrate that learning this measure in such environments is feasible, making this application an interesting direction for future work. We will include these references in the discussion on the limitations of C-learning in Appendix D.
[1] Eysenbach, Benjamin, et al. "Contrastive learning as goal-conditioned reinforcement learning." Advances in Neural Information Processing Systems 35 (2022): 35603-35620.
[2] Zheng, Chongyi, Ruslan Salakhutdinov, and Benjamin Eysenbach. "Contrastive Difference Predictive Coding." The Twelfth International Conference on Learning Representations.
---
Rebuttal Comment 1.1:
Comment: Thank you for the answer!
I think it would be interesting to try these paradigms on problems that do not require immediate skill learning such as in MuJoCo, but where skill also goes through many hierarchies. For example, the agent needs to learn how to pick up an object on a more detailed level, but also needs to understand how to use that object in a broader context. I do realise this is out of the scope for the paper.
Overall, I think this is a nice paper, with a simple and clear idea.
---
Reply to Comment 1.1.1:
Title: Official Comment to Reviewer 8kZQ
Comment: Thank you for your positive feedback and insightful suggestion. We will certainly take this direction into account for future research. We appreciate your valuable input and thank you for taking the time to review our paper. | Summary: The paper introduces LEADS, an algorithm to learn diverse skills that additionally encourages exploration. LEADS is motivated by the observation that common mutual information-based diversity-seeking algorithms cannot effectively encourage exploration. LEADS instead proposes a new objective based on successor state measures that explicitly encourage the coverage of state spaces. Experiments conducted on state-based control problems validate the effectiveness of LEADS in learning diverse skills that better cover the whole state space than baselines.
Strengths: 1. The work is well motivated with both illustrative examples and theoretical analysis.
2. This work provides extensive comparisons with baselines. The qualitative visualization is particularly helpful for readers to understand the learned exploratory behavior of the proposed algorithm.
Weaknesses: 1. In the experiment section, it would be better to briefly introduce the compared baseline methods and point out the main difference between LEADS and these methods before delving into the detailed discussion of results.
2. Besides better coverage of states, are there other advantages of learning more exploratory behaviors? For example, can LEADS achieve better success rates on hard-exploration problems?
3. How does LEADS perform with a different number of skills? Is there a best choice of the number of skills or is LEADS insensitive to this hyper-parameter?
Technical Quality: 3
Clarity: 2
Questions for Authors: See the weaknesses part.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: LEADS relies on the state occupancy measure estimator to learn diverse skills and encourage exploration. For behaviors that cannot be easily distinguished from the state occupancy measure, the algorithm might not be able to handle them properly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Answer: Reviewer axfE
We thank Reviewer axfE for their comments. In our general response, we address their specific concern regarding a more comprehensive comparison of LEADS with other baselines. Below, we will address the remaining questions and concerns about our study.
### Beyond the maximum coverage objective:
We appreciate the reviewer raising this insightful question, which opens up important avenues for discussion.
As supported by multiple studies [1][2][3][4], exploration represents a critical step in deriving effective policies, especially in hard exploration problems. We conjecture that initializing, for example, a goal-reaching algorithm using the replay buffer obtained via LEADS would yield better success rates, particularly in hard exploration problems. For instance, in Figure 2, in the Hard-maze environment, the transitions sampled by skill 4 are essential for training any goal-reaching policy over goals in that part of the maze. We also conjecture that another expected advantage could be the design of reusable/composable skills in a hierarchical setting, but we consider it out of the scope of this study and reserve it for future work. We propose mentioning this direction in the conclusion section.
### Different number of skills:
As with other skill-based algorithms, LEADS's coverage changes with the number of skills. However, the skills learned by LEADS evolve throughout the entire training procedure to visit unexplored areas. Hence, one can expect LEADS to require fewer skills to achieve the same final coverage performance than algorithms that learn static skills, like DIAYN. Choosing the number of skills involves a tradeoff between the acceptable computational runtime of the algorithm (which increases with the number of skills) and the desired efficiency in state space coverage. This choice is also highly problem-dependent. Dynamic adaptation of the number of skills is a challenging and open topic, covered by works such as [5]. Although this remains an open question, for LEADS we worked with a hand-chosen, predetermined number of skills and did not focus on this aspect, as is common in most works in the literature. We propose including this discussion in Appendix C on Hyperparameters.
We hope these answers address the concerns of the reviewer and lead to further exchanges.
[1] Pathak, Deepak, et al. Curiosity-driven exploration by self-supervised prediction. In International Conference on Machine Learning, pages 2778–2787. PMLR, 2017.
[2] Burda, Yuri, et al. Exploration by random network distillation. arXiv preprint arXiv:1810.12894, 2018.
[3] Badia, Adrià Puigdomènech, et al. Never give up: Learning directed exploration strategies. arXiv preprint arXiv:2002.06038, 2020.
[4] Guo, Zhaohan Daniel, et al. BYOL-Explore: Exploration by bootstrapped prediction. Advances in Neural Information Processing Systems, 35:31855–31870, 2022.
[5] Kamienny, Pierre-Alexandre, Jean Tarbouriech, et al. Direct then Diffuse: Incremental Unsupervised Skill Discovery for State Covering and Goal Reaching. In ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. My questions about further advantages beyond maximum coverage and the number of skills are mostly addressed. I would like to raise the score to 6.
---
Reply to Comment 1.1.1:
Title: Official Comment to Reviewer axfE
Comment: Thank you for your response and for raising the score. We appreciate your feedback, which has been instrumental in improving the readability of our paper. Thank you for your time and consideration. | Summary: The authors aim to solve the problem of exploring to learn a diverse set of skills. To do this, they modify a commonly used mutual information objective in two ways: first, they apply it to the successor measure, and second, they change the sampling distribution to focus on states with high uncertainty. They test this method on a number of challenging goal-conditioned environments, and show that it performs better than existing methods on several environments, as well as being the only method which achieves good state coverage on all environments.
Strengths: Originality:
There are two main original contributions to this method: the application of mutual information to state successor measures, and the substitution of the uncertainty measure as the sampling distribution. In my opinion, this is sufficient originality.
Quality:
The derivation of the objective is correct. The authors test on a varied set of environments in challenging high dimensional spaces.
Clarity:
The work is presented well and is easy to follow.
Significance:
The work is a decent improvement over the SOTA methods the authors compare to.
Weaknesses: Using the uncertainty measure instead of the actual distribution breaks most of the theoretical properties this algorithm may have had, but I can see why it was done. It would be nice if there were a more solid mathematical interpretation of this objective, such as a tradeoff between an explicit exploration objective and the mutual information objective, but I understand this is not always possible.
Technical Quality: 3
Clarity: 3
Questions for Authors: What hand environments are being used? I assume this is HandReach?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately address the limitations of their work
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Answer: Reviewer 98FT
We thank Reviewer 98FT for their comments. In the following, we address their specific questions and concerns about our study.
### Hand environment:
Indeed, the environment used in our tests is HandReach. We will make this clear in the experimental section.
### Theoretical study of LEADS
We agree with the reviewer's comment about the benefits of a deeper theoretical understanding of the LEADS algorithm and specifically the interplay between the mutual information objective and an explicit exploration objective. Through this study, we aim to demonstrate that mutual information maximization can lead to exploration when coupled with an appropriate objective, such as the maximization of the uncertainty measure we propose, along with a sufficient successor state measure. A theoretical understanding of this interplay motivated the writing of Appendix A, which we hope serves as a starting point for this discussion. More specifically, the analysis of the exploration term in Equation 7, detailed in Appendix A.1, draws a link between the uncertainty measure we use and a specific Kullback-Leibler divergence, providing a practical understanding of its maximization.
In future work, we hope to build upon this theoretical insight to derive more formally motivated objectives based on this study.
---
Rebuttal 2:
Comment: Thank you for your response.
The connection to the KL divergence does help motivate the method a bit better. I have increased my score to a 7
---
Rebuttal Comment 2.1:
Title: Official Comment to Reviewer 98FT
Comment: We thank you for your feedback and for the time you invested in reviewing our paper. | Summary: Having an agent learn a set of diverse skills potentially affords the agent better environment exploration. A lack of diversity among the learned skills may reduce the agent's ability to discover newer informative states in the environment. This paper formalizes a method for an agent to learn a set skills by enhancing previous defined mutual information between the states and skills so that it explicitly encourages diversity in exploration. They then evaluate their approach (LEADS) against other algorithms in a number of domains to demonstrate its ability to more effectively explore the state space.
Strengths: **Originality**
While the paper does not introduce any new concept or problem, I consider that the paper does propose an interesting variation to the use of mutual information between states and skills to learn exploratory options that maximize coverage of the state space at the same time that the options are diverse. Both successor features and using mutual information for exploration existed in the literature before this work. The combination done in this way seems interesting.
**Significance**
The algorithm represents progress towards the solution of a very relevant problem in deep reinforcement learning: continual exploration based on online observations of the state space.
**Clarity**
The explanation of the problem, the general idea of the solution, and the paper overall are easy to follow. However, there are some issues, explained later.
**Quality**
The intuitive concept behind combining successor state measures and mutual information is sound. While the resultant algorithm makes sense, I do not think it necessarily works as the authors explain. The experiments performed are reasonable for empirically backing their claims, but the results were not analyzed properly (more later)
Weaknesses: * While the explanation of the problem and the general idea of the solution are easy to follow. There is some mathematical imprecision in Section 3:
1. A measure is a function from a $\sigma$-algebra to non-negative reals. Here the measure is defined for states rather than sets of states. This is more accurately the successor representation.
2. The letter $p$ is used to denote multiple different probability density functions. This is confusing since, $s_1$ and $s_2$ correspond to different random variables: $p(s_1|z)$, the effective state distribution observed in the replay buffer is different from $p(s_2|z)$, which is the state distribution resulting from the discounted visitation.
* Besides the lack of mathematical precision, there is a statement that is not backed up by any argument: line 163 states that the lower bound results in eliminating the natural interplay between the skills that is embedded in the mutual information. There is no explanation for this assertion.
* The related work section does not reference work on options or Eigenoptions, which are very closely related to successor representations.
* The t-test seems an inappropriate statistical test --- paired t-tests are better than unpaired t-tests for comparing algorithms. (See https://arxiv.org/pdf/2304.01315#page=29)
* Sample standard deviation (as used in Fig. 5) is a measure of the variability of a distribution, NOT a confidence interval over a mean. Statements such as "One can note that LEADS outperforms other methods across almost all tasks" are unsubstantiated by the provided evidence. One should instead use standard error (as is commonly done in the literature) to obtain a confidence interval. However, a more appropriate choice would be either a Student-t confidence interval or a bootstrap confidence interval. Please see: https://arxiv.org/pdf/2304.01315#page=13
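For illustration, the distinction can be sketched in a few lines of Python (hypothetical per-seed scores, not from the paper; the standard deviation describes spread across seeds, while the standard error and bootstrap interval describe uncertainty about the mean):

```python
import random
import statistics

scores = [0.62, 0.71, 0.66, 0.74, 0.69]  # hypothetical per-seed coverage scores

sd = statistics.stdev(scores)        # variability of the score distribution
se = sd / len(scores) ** 0.5         # uncertainty of the estimated mean

def bootstrap_ci(xs, n_boot=10_000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    means = sorted(
        statistics.mean(rng.choices(xs, k=len(xs))) for _ in range(n_boot)
    )
    return means[int(alpha / 2 * n_boot)], means[int((1 - alpha / 2) * n_boot) - 1]

lo, hi = bootstrap_ci(scores)        # 95% CI for the mean score
```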
Technical Quality: 2
Clarity: 2
Questions for Authors: * Notation is imprecise. E.g. what is the domain of policy? $\pi_\theta: \mathcal{S} \times \mathbb{R}^d \rightarrow \mathcal{A}$ or $\pi_\theta: \mathcal{S} \rightarrow \mathcal{A}$?
* The problem setting of Fig 1 is confusing. Is each state reachable? If so the mutual information calculation is wrong. If we assume only the states denoted by the skills are reachable then the calculation is correct but this assumption should be explicitly stated.
* How hyperparameters were handled was not specified. Were baselines tuned? How was tuning done? (E.g. grid search? random search? etc.)
* The first bound was introduced to remove the need to sample a minibatch $s_1$ from $z$, but this is still the case after the introduction of the bound. What was gained out of the replacement?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Answer: Reviewer QVxs
We thank Reviewer QVxs for their comments. In our general response, we address several of the concerns raised by the reviewer. Below, we specifically address their additional questions and concerns about our study.
### Statement in line 164:
The sentence "natural interplays between skills" in the text refers to the interaction or coordination required among the different skills during the maximization of Mutual Information. Specifically, the reversed form of Mutual Information (MI) is:
$$
\mathcal{I}(S,Z) = \mathbb{E}_{\substack{z \sim p(z) \\ s \sim p(s|z)}} \left[\log \left(\frac{p(z|s)}{p(z)}\right)\right].
$$
To maximize this quantity, the distribution $p(z|s)$ in the log term must be as concentrated as possible on the skill $z$ that was used to sample this state. Therefore, the probability of any other skill on that state must be minimal. The term "natural interplays" thus informally describes the necessary coordination between skills to ensure that each skill is uniquely associated with specific parts of the environment, leading to maximal MI.
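As a toy numeric check (an illustrative example of ours, not from the paper): with two skills visiting disjoint sets of states, $p(z|s)$ is fully concentrated on the sampling skill and the reversed-form MI attains its maximum $\log|\mathcal{Z}|$:

```python
import math

# Two equiprobable skills, each visiting two disjoint states uniformly
p_z = [0.5, 0.5]
p_s_given_z = [
    [0.5, 0.5, 0.0, 0.0],  # skill 0 only visits states 0 and 1
    [0.0, 0.0, 0.5, 0.5],  # skill 1 only visits states 2 and 3
]

# Marginal p(s) = sum_z p(z) p(s|z)
p_s = [sum(p_z[z] * p_s_given_z[z][s] for z in range(2)) for s in range(4)]

# Reversed-form MI: E_{z ~ p(z), s ~ p(s|z)}[ log( p(z|s) / p(z) ) ]
mi = 0.0
for z in range(2):
    for s in range(4):
        if p_s_given_z[z][s] > 0:
            p_z_given_s = p_z[z] * p_s_given_z[z][s] / p_s[s]  # Bayes' rule
            mi += p_z[z] * p_s_given_z[z][s] * math.log(p_z_given_s / p_z[z])
# mi equals log(2), the maximum for two equiprobable skills
```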
In contrast, maximizing the lower bound given in Equation 4:
$$
\mathbb{E}_{\substack{z \sim p(z) \\ s_2 \sim p(s|z) \\ s_1 \sim p(s|z)}} \left[\log(m(s_1, s_2, z))\right]
$$
corresponds to increasing the probability mass on state $s_2$ (sampled in $p(s|z)$) starting from $s_1$ (also sampled in $p(s|z)$) for the skill $z$, independently of the probability mass of other skills on these two states. The maximization of this quantity does not account for the interactions between skills.
In Equation 5, we derive a new lower bound that reintroduces these interplays:
$$
\underset{\substack{z \sim p(z) \\ s_2 \sim p(s|z) \\ s_1 \sim p(s|z)}}{\mathbb{E}} \left[\log \left(\frac{m(s_1, s_2, z)}{1 + \sum_{z' \in \mathcal{Z}} m(s_1, s_2, z')}\right)\right]
$$
In this formulation, the interplays between skills are captured in the denominator of the log term. To maximize this log expression, the denominator must be minimal. This occurs when the probability of transitioning to state $s_2$ from state $s_1$ (which were sampled using skill $z$) is minimal for all other skills $z'$.
To conclude, the term "natural interplays" refers to whether the relationships and interactions between skills are preserved in the maximization process of these expectations. We propose to include the above reasoning as an explanation of the phrase in a new short appendix section.
### Eigenoptions in related work:
In the study of Eigenoptions, the authors derive a way to decompose a given task into different options for which they propose automatic learning. We agree that the study could be relevant in the related work subsection "Successor features for MI maximization". We will include the following reference [1] in the final version of the paper.
### Statistical Analysis of the Results:
We thank the reviewer for their suggestions and will modify the results accordingly in the final version. Specifically, we will use the standard error and a paired t-test in our final results. Below, we include p-values from the paired t-test, which we will add to Appendix B. We note that the significance indication in Table 1 does not change when using the paired t-test; LEADS is significantly superior to all other methods on the Umaze, Hard, and Fetch Slide environments, and no other method significantly outperforms all others.
| Comparison | p-value |
|----------------------------|----------------|
| Umaze : LEADS/CSD | 0.01522 |
| Umaze : LEADS/LSD | 0.01152 |
| Hard : LEADS/CSD | 0.00246 |
| Hard : LEADS/LSD | 0.00977 |
| Fetch Reach : LEADS/CSD | 0.06354 |
| Fetch Reach : LEADS/DIAYN | 0.08307 |
| Finger : CSD/LEADS | 0.07072 |
| Finger : CSD/LSD | 0.00753 |
| Fetch Slide : LEADS/CSD | 0.00070 |
| Fetch Slide : LEADS/LSD | 0.00055 |
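For reference, the paired t statistic behind such p-values is simply a one-sample t-test on the per-seed differences; a minimal sketch with made-up scores (in practice, `scipy.stats.ttest_rel` computes this and also returns the p-value):

```python
import math
import statistics

def paired_t_statistic(scores_a, scores_b):
    """t statistic of a paired t-test: a one-sample t-test on the
    per-seed differences between two methods."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample std of the differences
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Hypothetical coverage scores for two methods over four matched seeds
t = paired_t_statistic([0.90, 0.80, 0.85, 0.95], [0.70, 0.75, 0.80, 0.85])
```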
### Hyperparameter tuning:
We propose the inclusion of the following statement in the article: "Hyperparameters were determined through manual testing on the Hard-Maze environment. Resulting hyperparameters are displayed in Table 3. Default values from the literature were used as hyperparameters for other methods." We include hyperparameters of all methods in the included code (https://anonymous.4open.science/r/LDS-BE6B).
### Figure 1 clarification:
In Figure 1, only the states denoted by the skills are reachable.
We propose making the unreachable states gray and indicating this in the figure explanation.
### Clarification on the Introduction of the First Bound and its Benefits:
We believe the reviewer refers to the use of Jensen's inequality to derive the first lower bound in Equation 4. The mutual information can be written as $\mathbb{E}_{(s_2, z) \sim p(s, z)} \left[ \log \left( \mathbb{E}_{s_1} \left[ m(s_1, s_2, z) \right] \right) \right]$, with $s_1 \sim p(s_1|z)$.
Suppose we wish to maximize this quantity using SGD. For each sample $(s_2,z)$, we need to compute the expectation $\mathbb{E}_{s_1 \sim p(s|z)} \left( m(s_1,s_2,z) \right)$ (because the log is not linear). In practice, this is computationally expensive because it requires sampling a mini-batch of $s_1$ states for every single $s_2$ state. Instead, we use Jensen's inequality, leveraging the concavity of the log function to derive the lower bound:
$$
\mathbb{E}_{\substack{z \sim p(z) \\ s_2 \sim p(s|z) \\ s_1 \sim p(s|z)}}\left[\log(m(s_1,s_2,z))\right]
$$
Maximizing this new quantity can be done with SGD by sampling three independent mini-batches at each iteration. We propose including the above explanation in a new appendix section that expands on section 3.1
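The Jensen step itself, made explicit (a restatement of the argument above, using the concavity of $\log$), is:

$$
\log \mathbb{E}_{s_1 \sim p(s|z)}\left[ m(s_1, s_2, z) \right] \;\geq\; \mathbb{E}_{s_1 \sim p(s|z)}\left[ \log m(s_1, s_2, z) \right],
$$

and taking the outer expectation over $(s_2, z) \sim p(s, z)$ on both sides yields the lower bound above.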
[1] Machado, M. C., Rosenbaum, C., Guo, X., Liu, M., Tesauro, G., & Campbell, M. (2018, February). Eigenoption Discovery through the Deep Successor Representation. In International Conference on Learning Representations.
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I would like to thank the authors for their response. The changes made to paper will definitely improve it and they address my major concerns. I will be increasing my score.
---
Reply to Comment 1.1.1:
Title: Official Comment to Reviewer QVxs
Comment: Thank you for your response. We appreciate your acknowledgment of the changes made and are glad to hear that they address your major concerns. Your feedback has been instrumental in improving the paper. Thank you for your thorough review and valuable suggestions. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their insightful feedback, which has been crucial in refining our paper and pinpointing areas that require further clarification. In this general response, we address the concerns raised by reviewers regarding the paper's clarity. We believe these concerns will lead to the main changes in the text, and we include them in the general response to encourage further discussion if needed. Additionally, we address all remaining concerns raised by each reviewer in the respective rebuttal sections.
* **The use of the word measure in "Successor State Measure":**
(reviewer QVxs)
We agree that the term "Successor State Measure" can be misleading and that the Successor State (Representation in the more generic case) of a policy does not adhere to the strict definition of a measure. Considering a measurable space $(\mathcal{S}, \Sigma)$, where $\mathcal{S}$ is a set and $\Sigma$ is a $\sigma$-algebra on $\mathcal{S}$, our definition of Successor State Measure indeed refers to the density of a measure defined on that measurable space. Depending on the reviewer’s viewpoint, we propose to either clarify this in the definition of the Successor State Measure or to replace "Successor State Measure" with "Successor Representation" throughout the text.
* **The use of $p$ to denote distributions:**
(reviewer QVxs)
The reviewer’s remark is correct. In Equation 3, we use the law of total probability to obtain: $p(s_2|z) = \mathbb{E}_{s_1 \sim p(s|z)}\left[ p(s_2|s_1,z) \right]$. We then use our estimation of the Successor State as an approximation of $p(s_2|s_1,z)$. More specifically, just like the density of the random variable $s_1$ (the replay buffer) is an empirical approximation of the density $p(s|z)$ obtained via a Monte Carlo process, the density of the random variable $s_2$ is an approximation of $p(s|z)$ obtained by approximating our Successor State as $p(s_2|s_1, z)$ in the expectation. We could indeed propose new notations in the text, but we believe it would be more appropriate to explicitly clarify these approximations in the text to avoid overly complex notations.
* **Policy mapping:**
(reviewer QVxs)
The policy's domain is $\mathcal{S} \times \mathbb{R}^d$ and its image by the policy $\pi$ is the set of distributions over the action space $\Delta(\mathcal{A})$. We will make this more explicit in the text.
* **Providing a better distinction with other baselines:**
(reviewer axfE)
Due to lack of space, including such details and comparison was not possible in the main text of the paper. But we agree it would make the paper easier to read. We have added an Appendix to better cover the baseline methods and their differences with LEADS, and referred to it at the beginning of the experimental section in the main text. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Emergence of Hidden Capabilities: Exploring Learning Dynamics in Concept Space | Accept (spotlight) | Summary: The paper introduces a novel framework for understanding concept learning in text-conditioned generative models. The experiments are carefully designed (although most of them look a bit toyish) and support the analyses well. The three key findings, i.e., that concept signal levels determine the learning dynamics, that capability consistently emerges before observable behaviors, and that under-specification harms the system, are quite novel and could provide many insights to the community. The proposed theoretical formulation also has the potential to be further developed. I enjoyed reading this paper and suggest acceptance.
Strengths: - Good presentation, easy to follow.
- The discussions about the learning dynamics of concept memorization and compositional generalization are cool.
- The three probing methods that verify the claim that the model might learn the concept before generating the correct image are insightful.
- The under-specification discussion might bring practical suggestions to real systems.
Weaknesses: - In Section 3, it is a little hard to understand the relationships between different variables and functions. A diagram like Figure 1 in [2] might be helpful.
- Although the experiments and analysis of the paper are very insightful and persuasive, it is only verified in a relatively simple setting. There are experiments on CelebA, but the number of attributes and their possible values are still kind of small. One or two experiments considering more complex scenarios will strengthen the paper a lot.
- The paper claims that the model usually has the capability to compose the learned concepts earlier than this behavior shows up. How will this finding influence our understanding of (or provide insights to improve) the practical systems? Does this correlate to early stopping (the switching point from underfitting to overfitting)?
Technical Quality: 3
Clarity: 4
Questions for Authors: - The legends in Figure 2-bc (so as other learning dynamics figures) are a bit hard to read.
- In Figures 2 and 3, the learning curves exhibit a “zigzag” pattern, i.e., first pointing to the training example sharing the strongest concept value, and then converging to the memorization one or comp-gen one. A similar phenomenon is also observed in a general classification problem discussed in [1]. Is there any similar reason behind these two findings?
- Both the concept signal and sigmoid function are denoted by $\sigma$, which is confusing.
- What do (0.4, 0.4, 0.6) to (0.3, 0.3, 0.7) in line 216 mean, normalized RGB values?
- The color bar of Figure 6 might have the wrong legend (it is the percentage of masked $\alpha$, but the bar ranges from 1 to 3.5).
- The paper concludes that (in line 300) concept learning is a well-controlled phase transition, but the observed behavior can be arbitrarily delayed. Are there any methods that can reduce this delay?
- [2] studies a very similar compositional generalization problem (but in a representation learning setting). It would be helpful to also discuss that in the related work. The discussion of simplicity bias and Kolmogorov complexity in [2] might also be a potential explanation for the observations presented in this paper.
[1] Ren, Yi, Shangmin Guo, and Danica J. Sutherland. "Better supervisory signals by observing learning paths." ICLR 2022
[2] Ren, Yi, et al. "Improving compositional generalization using iterated learning and simplicial embeddings." NeurIPS 2023
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The experimental setting is kind of simple. But I think it is sufficient for a phenomena explanation (and theory) paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer o8es,
Thank you so much for thoroughly understanding and positively evaluating our work. We are glad to hear that you enjoyed our work and found all three of our main findings to be “quite novel and could provide many insights to the community”. We thank you also for your very insightful and actionable suggestions to robustify our claims. In response to your thoughtful suggestions, we addressed your concerns by adding:\
(i) a new schematic diagram inspired by [2] Ren, Yi et al.\
(ii) further experiments in a more complex scenario with three concept variables yielding Fig. R1 and R3\
(iii) a better grounding into the learning dynamics of compositional generalization(CG) literature [1,2]\
(iv) a general improvement of figure qualities.
---
* **1. Added New Diagram for the framework:** Inspired by [2] Ren, Yi et al., we have updated the framework section of the paper to be easier to understand. We have added a schematic figure describing the relation of $\mathcal{S}$, $\mathcal{G}$, $\mathcal{M}$, $\mathcal{X}$ in the appendix as well. We haven’t been able to fit this new figure in the attached pdf due to space constraints, but it will be included in the final version of this submission.
* **2. Concept space findings generalize to more complex scenarios:** We have now extended our results to more complex scenarios. In Fig. R1, we show that our *findings on hidden emergence and also our probing methodology generalizes to a compositional setup on CelebA*. In this experiment, the model did not show the CG behavior, while latent linear intervention clearly demonstrated the emergence of the capability. We have also generalized our findings on concept signals in Section 4.1 to a 2x2x2 setup with 3 concept variables. We again find that *concept signal controls the speed of learning in scenarios involving more than 2 concept variables*.
* **3. Practical Implications of Early Capabilities:** One implication of the hidden emergence of capabilities is that standard evaluation pipelines on a fixed dataset might be probing behavior, which we show is very seed dependent. Additionally, this motivates studies on enhancing current models by understanding their internals, as it suggests that *models can have capabilities which are not elicited unless intervened upon appropriately*. On your specific point regarding early stopping, we believe that these models do not explicitly overfit, nor do frontier diffusion models (see [3]), as they are in the under-parametrized regime and usually trained in an online setup. We also do not have evidence of any capabilities getting lost as we train far beyond fitting the training set.
---
Questions:
* **Figure 2-bc legend:** Thank you for this feedback! We have made significant revisions to explain the axes better in the captions and to improve the overall quality of the figure.
* **Zigzag Pattern:** Thank you for this reference [1]! Indeed, our current understanding is similar to [1], where higher difficulty corresponds to lower concept signal. In our case, a concept is harder when its concept signal contributes less to reducing the diffusion loss. We will make sure to cite [1] and discuss the similarity of the mechanisms underlying the shared trend in learning dynamics.
* **Usage of $\sigma$:** Thank you for this suggestion! We updated the sigmoid function to $g$ to avoid confusion.
* **Clarification of line 216:** Yes! We will clarify this in our updated manuscript.
* **Figure 6 Legend:** Thank you for this catch, we will edit this.
* **Controlling the phase transition:** We did find that many hyperparameters control the transition, such as weight decay, weight initialization, and the optimizer. However, we have not yet found a clear identifiable relationship which allows any predictive hyperparameter transfer. Testing the generalization of these observations to more complex text-to-image models is a great future direction. Currently, we are interested in following up with a theoretical model based on the concept signal values.
* **Relation to reference [2]:** Thank you for pointing out this work. We will definitely cite and discuss it in the final version! The simplicity bias explanation is very interesting, but it seems non-trivial to define it in our setup at the current stage. We briefly looked into weight decay to add an explicit “simplicity” enforcing term, however this did not qualitatively change our results.
---
**Summary**: Thank you again for carefully going through our manuscript and giving us such a concrete and insightful list of improvements! As clarified above, your suggestions inspired us not only to create new schematic figures, but also to run 8 more experiments resulting in 2 new experimental plots. With the above, we hope that we have fully addressed your concerns and that you can now strongly recommend acceptance of this paper.
---
[3] Kadkhodaie, Zahra, et al. "Generalization in diffusion models arises from geometry-adaptive harmonic representation." arXiv preprint arXiv:2310.02557 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks very much for the authors' hard work. The new results and discussions indeed strengthen the paper. I would like to increase my evaluation to 8. I am looking forward to seeing the final version of the paper. | Summary: This paper introduces a new framework, "Concept Space," designed to study the learning dynamics of capability and behavior in generative models. It uses the concept signal to measure the rate of concept learning. They trained a diffusion model on a simplified synthetic dataset to validate their hypotheses that transitions in model capability occur much faster than the corresponding changes in behavior. This work provides empirical insights into understanding and controlling the development of AI models' capability and behavior.
Strengths: 1. This paper is well-structured and clearly articulated.
2. This paper introduces the Concept Space framework and verifies several interesting combinatorial optimization results on synthetic datasets.
3. The experiment setting is meaningful and interesting.
Weaknesses: 1. The rationale for modifying the differences in attribute values to represent different levels of concept signals is unclear. How does this relate to the formula $\sigma_i = |\partial G(\vec{z}) / \partial z_i|$?
The derivation of equation (1) is also not clear.
2. Uncertainty in generalization: Despite the introduction of the Concept Space framework, there is still uncertainty about the model's ability to generalize to real-world data. While concept learning is easily validated on synthetic datasets, it is unclear how to effectively validate this approach on large-scale datasets and large models. Can you provide some insights?
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer uzj6,
We thank you for your detailed feedback! We are glad you found our concept space framework and the robust emergence of capability insightful and interesting. We are especially happy to hear that our work provides “insights into understanding and controlling the development of AI models' capability and behavior”, as this was one of the main goals of our study. In response to your comments, we have added:\
(i) An improved description of our framework in Section 3.\
(ii) experiments on frontier multimodal embedding (CLIP) and generative (Stable Diffusion v2.1) models yielding Fig. R2.
---
* **Concept Signal can be altered by color differences:** The concept signal $\sigma_i$ is the absolute change of the data generated when the corresponding concept variable is altered. Here, we are changing the raw RGB difference between the color concepts blue and red. Thus, a rescaling of this difference by $s$ is equivalent to the change of the data generating process $\mathcal{G}\to\mathcal{G}'$, where $|\frac{\partial\mathcal{G}'}{\partial\texttt{color}}|=s|\frac{\partial\mathcal{G}}{\partial\texttt{color}}|$, and thus results in an enhancement of the color concept signal by a factor of $s$. For a visual explanation, please see Fig. 7, where the distances are not schematic but calculated from the dataset, as we have access to the data generating process. We have updated Section 3, where we introduce our framework, so that this relation is clearer. We have also moved Fig. 7 to the main text. Thank you for prompting this important clarification!
* **Derivation of Equation 1:** Thank you for your question. Equation (1) is a phenomenological model designed to qualitatively reproduce concept learning dynamics, where our goal was to suggest a novel potential space for future theoretical study. The sigmoid function models sudden capability acquisition, while quadratic terms represent the energy landscape guiding the model towards correct concept values. Although not derived from a specific learning process, it introduces a new perspective of defining "concept variables" and describing their evolution via differential equations. We are currently working on a theoretical follow-up to rigorously derive this equation from a concrete model that further substantiates our hypothesis. We will clarify our motivation and limitations in an updated draft.
* **Your feedback has inspired us to validate how many of our observations generalize to real-world data scenarios!:** Thank you so much for your insightful suggestions to further explore whether the observations validated in our synthetic setup generalize to real-world data. While we've made the "simplicity-interpretability" trade-off to derive a set of hypotheses, you are right that it's very important to then go back to realistic scenarios to further validate those hypotheses.
1) Latent linear intervention elicits hidden capabilities in a real-world facial attribute dataset (CelebA) as well. [Fig R1] In Fig R1a, we first show how the diffusion model struggles to compose the unseen combination of the "female" and "with hat" concepts with naive prompting. Then, in Fig R1b, we demonstrate how applying latent linear intervention elicits this capability, as was seen earlier in the synthetic dataset!
2) Square/cubic structure of concept graphs can be found in CLIP embeddings. We show that the concept space framework is useful to understand the failures and success of CG. *In some cases*, the concept space can be constructed for real diffusion models since it only requires a feature detector (e.g. CLIP), which is much simpler compared to training a generative model. In Fig R2a, we show that CLIP is ready to serve as a feature detector to construct an orthogonal concept space.
3) One specific insight from overprompting and latent linear intervention is that one could use model interventions to evaluate model capabilities. In Fig. R2b, we show an explicit example where overprompting results in elicitation of a CG capability. We *speculate* that this could be scaled, as we show in Fig. R2a that CLIP can already serve as an orthogonal feature detector, and the lack of an expected feature can be used to provide feedback onto a method guiding generation by intervention.
4) Our work surfaces that controlling concept learning might be more difficult than expected. Our results indicate that *even* in a synthetic setup where the acquisition of capabilities is robust, the model's apparent ability to CG might appear limited and can show high variance across run seeds. Our contribution can be understood as attributing the high uncertainties observed in CG in real models to the variance of behavior seen in models with equivalent capabilities.
---
* **Summary:** Thank you again for your review. Your comments have motivated us to enhance the introduction of our framework in the manuscript and to demonstrate scalability to large models and practical setups. As a result, we have managed to show that our intervention strategies could be valid in Stable Diffusion v2.1 and that CLIP embeddings already encompass orthogonal concepts graphs, suggesting a promising direction for scalability. We hope your concerns have been addressed with these clarifications and experiments, and we would be very glad if you could recommend our paper more strongly.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. I really appreciate the idea and motivation behind this paper. The rebuttal has significantly improved the quality of the paper, and I now feel more confident in recommending its acceptance. As a result, I have raised both my score and confidence level. | Summary: This manuscript investigates the distinction between capabilities and behaviors in diffusion models. Using a toy task, the authors analyze at what point during training the model begins generating objects with correct specific features (in particular color and shape). They find that increasing the salience of certain attributes (“concept signal”) makes learning of those attributes faster and further causes the model's compositional generalization to collapse onto the nearest training point after intermediate training times. They replicate qualitative features of these learning curves with a mathematical toy model. They then use different interventions that enable the model to generalize compositionally much earlier than they would otherwise --- the fact that these interventions all yield generalization at the same time suggests a specific time point where the model has learned the underlying capabilities without this necessarily manifesting in behavior. Finally, they use this toy setup to analyze the impact of underspecification on compositional generalization.
Strengths: 1. The topic of compositional generalization and concept composition in generative models is important and generally still poorly understood. The authors did a good job motivating this and its connection to the distinction between capability and behavior/competence and performance.
2. The findings in section 4.4 are surprising and notable and the authors replicated this behavior across different model seeds and methodologies (though I have a couple of questions, see weaknesses, point 3).
3. The toy model for underspecification provided helpful intuition and appears to be a particularly simple task that gives rise to such underspecification.
4. The manuscript is generally well-written and the figures are generally easy to follow.
5. Concept memorization is a useful phenomenological finding.
Weaknesses: As noted above, I think there are several interesting findings in this manuscript. However, in its current stage, I believe that it falls short of its stated goals. More specifically:
1. As far as I can tell, sections 4.1-4.3 and section 4.4 get at different forms of "capability." Specifically, the concept space studied in sections 4.1-4.3 distinguishes whether the model is able to generate certain properties of the image (e.g. color and size). In contrast, section 4.4 demonstrates that certain interventions can improve the model's accuracy on the compositional task substantially. Since those interventions don't fundamentally change the model's capabilities, this suggests that the model has already learned to do the right thing and these interventions simply surface that capability. It's unclear to me whether that is necessarily apparent from the concept space behavior. Put differently, the model, in principle, could still be performing extremely badly according to the concept space but have already exhibited this transition. (You seem to be getting at a related point in section 6, "Why concept space?" and the supplementary figure, but it's unclear to me what the time of concept acquisition is. To the extent that it relates the findings in section 4.4 to the findings in the previous sections, I think you'd have to show this across different models rather than using a single example of a model.)
2. I did not understand the role played by equation (1). First, the defined energy landscape always has its minimum (for high t) at $c_1,c_2=1$, so why can these curves tend to different corners of the concept space? Second, I did not understand how this differential equation is grounded in model behavior, as it does not appear to be a simplified learning model. Rather, the goal seems to be to recapitulate the (rough) trajectories in concept space, so I'm not sure what insights are gained from that. In particular, the fact that the model first tends towards the correlated training point (e.g. the small red circle) during learning is built in by the definition of the sigmoidal function.
3. I think the related work section on compositional generalization should provide a better overview of existing insights into the questions you're asking. Right now, you're only citing a number of papers investigating these questions, but I think it would be important to actually give an overview of what these papers are presenting and investigating and how it relates to your own findings. In particular, it's unclear to me what was previously known about the impact of underspecification on compositional generalization.
4. I think it would be important to report standard errors or some sense of deviation across different model seeds. It is still important in my opinion to understand the reliability of these qualitative findings --- e.g. how similar are the concept space curves you're presenting across different initializations?
All in all, I think the paper presents several interesting findings, but, in its current state, leaves unclear how these findings fit together. On the one hand, the concept memorization finding is interesting and works well together with the underspecification finding. I think for both of those findings it would be important to more thoroughly evaluate how reliably the model actually ends up generalizing compositionally (e.g. across different seeds of initialization). In addition, it would also be helpful to give additional mathematical insights (or give an intuition in a different way) into why the observed behaviors emerge, as I don't see the current mathematical model as helpful on this end. On the other hand, the finding in section 4.4 is also interesting, but it remains unclear how it is related to the concept space framework and, if it can be explained in terms of learning both of these capabilities, why the presented intervention mechanism can help the model generalize compositionally.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Could you clarify how sections 4.1-4.3 and section 4.4 are related (see weaknesses, point 1)?
2. Could you clarify how you determined equations (1) and (2) and what we can learn from them (see weaknesses, point 2)?
3. As noted, I think the findings in section 4.4 are really intriguing, so I want to make sure I understand exactly what is going on there. a) Since you're using a binary classifier to assess performance, are the overprompted colors/sizes really identical to the original colors/sizes or is it possible that e.g. the color produced from overprompting is different from the ground-truth color and just more clearly on the correct red/blue side of the hyperplane? It would be useful to see a few examples of generated images here. b) It seems that you're only using one model seed in Figs. 4(c) and (d). For 4(d) you explained that you only used the one with "full capability" (I assume that's the one with an accuracy close to 1.0?). Why did you only use one model for Fig. 4(c) (or are these multiple lines that are just strongly overlapping?)? c) I think the fact that there's only one model that reaches full compositional generalization accuracy qualifies these findings a bit, in particular as it means that the latter methods only present a single sample. Would the other models improve in their performance as well if they were trained for longer? d) I would suggest providing an additional figure where you plot the different curves for each model seed on top of each other, as it is currently a bit difficult to judge how precisely the transition times overlap.
4. I'm not very familiar with the literature on underspecification --- have other papers previously noted its effect on compositional generalization (i.e. the "strawberry"/"yellow strawberry" effect)? If not, I think it'd be worth emphasizing that a bit more --- if yes, it would be good to emphasize that as well.
**A couple of minor notes**
L. 29: “pre-training on such models” -> “pre-training of such models”
L. 32: What is the “model experimental systems approach”?
L. 69: Space before citations missing.
L. 96: Is $S$ just the support of the probability distribution?
L. 99: Is $F$ a stochastic function?
L. 117-118: Isn’t $G^{-1}(Y)$ in the concept space, i.e. it should match $z$, not $h$?
Figure 2: Why do the two trajectories for 01 and 10 illustrate concept memorization? I would have thought that this was illustrated by the trajectories for 11 that end up near 01.
L. 247: What does it mean to mask the token? Set it to zero?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: As the authors acknowledge, they focus on toy synthetic data here. Furthermore, their analysis is largely empirical in nature, leaving unclear the exact reasons why the models generalize or don't generalize. I think the authors have adequately communicated the limitations of their work overall.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer G81k,
We thank the reviewer for their very detailed feedback. We are happy you found our findings in Section 4.4 “surprising and notable”, just the way we felt, and Sections 5 and 4.2 “helpful” and “useful”. In response to your comments and concerns, we have added:\
(i) A clear definition and clarification of “Capability” in Sec. 3.\
(ii) A demonstration that turns in concept space correspond to the emergence of capabilities, yielding Fig. R5 b,c\
(iii) A substantially more thorough related work section.\
(iv) A better quantification of uncertainties, yielding Fig. R4 and R5
---
* **1. Clarification of Capability in 4.1-4.3 vs 4.4:** The model’s capability is only discussed in 4.4. We do not mention capability in 4.1-4.3, as we are simply probing its behavior/execution of the task. Nevertheless, we now see that this can be confusing, so we have clearly defined capability and made this distinction in Sec. 3.\
Your comment also suggested that the relationship between concept space and capability should be strengthened. We have now added Fig. R5 b,c, which respectively demonstrate that 1) sudden turns in concept space do correspond to the emergence of the capability to compositionally generalize (CG) and 2) this correspondence is robust across different concept signal levels. We hope this clarifies the relation of Sections 4.1-4.3 (concept space) and 4.4 (hidden emergence).
* **2. Motivation of the phenomenological model was to postulate a concept learning dynamics hypothesis:** Eq. (1) is a phenomenological model designed to qualitatively reproduce learning dynamics, requiring different values of $c_1$ and $c_2$ to specify the model's generalization point. The initial tendency towards the correlated training point is a result of the sigmoidal function definition, illustrating how different concepts evolve over time. Although the model is not derived from a specific learning process, it introduces the idea of defining "concept variables" and describing their evolution with differential equations. We are currently working on a theoretical follow-up to rigorously derive this equation and further substantiate our hypothesis.
* **3. A more thorough related work section:** Thank you for the feedback! We have significantly expanded the related work section now, including discussion of prior work on CG, concept learning, use of interpretability tools to understand learning dynamics, and distinction between capabilities and behaviors. Related to CG, we briefly note that prior work has primarily focused on impossibility results, i.e., showing whether neural networks can express compositional solutions (e.g. [1]). To our knowledge, there is only one work focusing on learning dynamics of CG in a generative model [2], and does not include any intervention experiments or underspecified setups. Underspecification's influence on CG has been partially explored by a few papers on disentangled representation learning [3], but again these papers focus on possibility results and not learning dynamics---the target of our study. A position paper by Hutchinson et al. [4] mentions challenges in CG due to underspecification, but has a different goal compared to our work.
* **4. Errors quantified and visualized:** Please see Fig. R4, R5 for different quantifications of errors. In R4, we show that the std. of behavior across seeds is high while it remains very low for probes of capability. In R5a, we visualize the standard error of the mean on the trajectories in Fig. 2c.
* **5. Clarifications:** The intervention methods are meant more to investigate the model's internal capabilities and show that they emerge consistently before behavior. Although they do improve compositional generalization, that is more of a byproduct than a practical method we introduce.
---
Questions:
* **1, 2, 4:** We hope the replies above address these questions.
* **3:** a) We have now added example images on top of Figure 4. The generated colors depend on the exact intervention we apply, as we would expect for a steerable model.\
b,c) We have now repeated the Linear Latent Intervention and Embedder Patching for all 5 seeds. We found LLI to work perfectly on all seeds and EP on 2 seeds. We don't think these affect our findings as one intervention method is *sufficient* to demonstrate model capabilities. Embedder Patching is a more subtle technique (higher bar) since it requires the model to have minimal Embedder-CNN compensating weight changes late in training.\
d) This is indeed how we have plotted Figs. 4 a, b! If the question was directed towards Figs. 4 c, d, we note we have now added all individual seeds to the plots, as mentioned above.
---
Minor Notes
* **L.29,69,117-118:** Thank you for these catches.
* **L.32:** By “model experimental systems approach”, we mean a small synthetic setup in which we can control the data distribution and probe the resulting model's behavior, allowing one to study questions that are not feasible to do so in real data.
* **L.96,99:** Yes
* **Figure 2:** The two gray trajectories are taken from the training data curves to help visualize concept memorization.
* **L. 247:** Yes, masking the token means the corresponding element is set to zero.
---
* **Summary:** Enormous thanks to the reviewer for pointing out very important points to make our submission more persuasive. We believe our work became more clear and robust thanks to the suggestions. Your comments motivated us to substantially strengthen our theory and related work section. It has also motivated us to quantify uncertainties on our experiments yielding Fig. R4, R5, which are essential to join concept space and hidden emergence, the two big pillars of our work. We believe our manuscript greatly improved in quality, and would be very happy if you could recommend our work more strongly.
---
[1] https://arxiv.org/abs/2310.05327
[2] https://arxiv.org/abs/2310.09336
[3] https://arxiv.org/abs/2006.07886
[4] https://arxiv.org/abs/2210.05815
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for their careful rebuttal, which has addressed many of my concerns. In particular, I appreciate the authors adding a quantification of uncertainty to their figures and I will increase my score to 5. | Summary: The paper introduces "concept space", a framework for analyzing the learning dynamics of generative models, focused especially on compositional generalization. The key contributions are:
- Introducing the concept of "concept signal" that governs the rate of concept learning (and in particular determines learning speed), and is the key driver of the generalization dynamics
- Analyzing learning dynamics using the concept space framework on a synthetic dataset of 2D objects.
- Claims about how concept signals shape the geometry of learning trajectories.
- A proposal of interventional protocols to uncover hidden model capabilities.
The paper uses a conditional diffusion model trained on synthetic data to study how concepts like shape, color, and size are learned and composed together. They show that concept signal levels determine learning speed and generalization dynamics, and that model capabilities often emerge before observable behaviors. The paper also explores how underspecification hinders compositional generalization.
Strengths: Originality:
- Concept space and concept signals are a novel lens for analyzing the learning dynamics for diffusion models.
- The claim that "capability consistently emerges before behavior" is interesting and well-backed, and is thus a novel insight into how these models behave over the course of the training.
Quality:
- Operating on the activation space and embedder patching seem like appropriate tests for the claims being made.
- It shows the intuitive result that the model's generations for unseen combinations initially gravitate towards the training class with the strongest concept value.
Clarity:
- The claims are clearly presented with an explanation supported with figures.
Significance:
- The results suggest that the more distinguishable or salient a concept is in the training data (i.e., the stronger its concept signal), the faster the model will learn to recognize and generate that concept. If true for a broader class of generative models, the result could be widely applicable in data design choice for models.
Weaknesses: Significance:
- Overall the setup is too simplistic to have the claims be transferable to a broader set of generative models or even diffusion models. All the experiments use a simplified synthetic dataset with 2D objects and binary concept variables; while good for control, it limits the complexity of the kind of concepts real-world models learn by a massive amount. Real world models are learning more complex, hierarchical, and interdependent concepts, not simply characterizable by things like size or color, and use continuous concept variables rather than binary.
- The concept signal relies on knowing G, the data generating process. In real-world models, the true data generating process is often unknown, concepts might not have clear, differentiable manifestations in the input space.
- The toy model for learning dynamics is based on a simple energy function with sigmoid activation; it's not clear why "concept signals" should be able to capture any non-linear interactions between concepts.
Presentation of the claims:
- The authors use the word "capability" when they really mean the model's internal ability to understand and compose concepts. The wording is quite unclear and should be clarified.
- The graphs are pixelated, low-quality, poorly labeled, and hard to understand. In particular, you should clearly explain what the colors and the axes mean in Figure 3, analyze multiple random seeds with error bars for overprompting and linear latent intervention in Figure 4, and make Figure 5(b) higher quality.
- Linear latent intervention assumes linear separability of concepts in the latent space, which breaks down in more complex settings than the synthetic data setting.
- There should've been a higher focus on results on CelebA in the main sections of the paper.
Technical Quality: 3
Clarity: 1
Questions for Authors: - Would we expect overprompting to still work when concept manifestations are complex?
- Doesn't the embedder patching technique assume that the final checkpoint has disentangled concepts? Why should we expect that to be the case?
- The paper briefly mentions experiments with the CelebA dataset in the appendix. Could the authors elaborate on how well the findings from synthetic data translate to this more complex dataset?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: - The paper should've compared the concept space framework to other existing methods for analyzing representations in learning dynamics (e.g., basic linear probing techniques, representational similarity analysis) as baselines, to see how much "concept space" improves our understanding.
- The models are fairly simple and small, and the study should further look into whether any of the techniques (e.g. overprompting, linear latent intervention etc.) continue to hold in settings that more closely resemble real-world conditional diffusion models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer cQgr,
We thank the reviewer for their detailed feedback. We are excited that you found our work provides a novel lens for analyzing learning dynamics of diffusion models, yielding claims that are interesting, clear, and well-backed. In response, we have added:\
(i) A major experiment on CelebA generalizing our findings to more realistic data, Fig. R1\
(ii) A test on Stable Diffusion v2.1 (SD), Fig. R2b\
(iii) Experiments on a more complex 2x2x2 setup, Fig. R3
---
* **1. Simplicity of the setup:** Please note that our goal in this work was to understand how a diffusion model learns various concepts underlying the data-generating process and learns to compose them. The simple setup allowed us to perform a more precise analysis of the model’s learning dynamics and develop novel hypotheses on concept learning. However, our results on CelebA (R1) show that this simplicity did not bottleneck our claims: our results do transfer to more realistic settings! Orthogonally, prior work on compositional generalization (CG) often uses similar settings (e.g., toy shapes, colors), making our chosen setup a natural starting point.
* **2. Concept Signal relies on G:** Computing precise concept signal values is indeed non-trivial in real data. One approach would be to approximate the derivative of G by training a VAE to compute $dG/dz_i$. However, our primary contribution is a step towards understanding what controls CG when G is known. At the same time, our results show that we can retrospectively establish orders of concept signals (CS) for real data: e.g. we can infer CS for gender is stronger than hat from the CelebA experiment (R1).
* **3. The model for learning dynamics:** We clarify that Fig. 2 shows accuracies over training time, where x and y represent the accuracies for color and size, respectively. The key insight is that concept signals determine the learning dynamics' trajectory. Fig. 2(c) shows that curves bend toward specific directions reflecting the relative strengths of the concept signals for size and color. To model this behavior, we have defined conditions in Eq. 1: $\sigma_1 > \sigma_2$ and $\sigma_1 < \sigma_2$. We will provide further clarification on these points in the revised version.
* **4. The definition of capability:** Thank you for the suggestion! You correctly inferred our intended meaning: by capability, we mean “the model's internal ability to understand and compose concepts”. We have now formalized this in Sec. 3 of the paper.
* **5. Figures:** We apologize for the pixelated plots. We updated all figures with higher quality versions. We also labeled all axes and added generated images to aid understanding, showing how the model improves during training (e.g., see R4). As per your suggestions, we have clarified the axes in the caption and labeled the colorbar in Fig. 3. We updated Fig. 4 so that the curves for each seed are clearly visible. We updated Fig. 5b and labeled the color axis as $a$. We thank the reviewer again, as we really value high-quality plots, especially for a paper focused on sending a qualitative message.
* **6. Linear Latent Intervention (LLI) Breaking Down**\
The focus of our interventions is mainly to gain understanding of how capabilities are acquired in a model rather than to present a method to improve CG in practical settings. This is also why we presented 3 methods, so that we do not overfit to pitfalls of one specific method. Thus, our methods are *sufficient* to show a model has CG capabilities, but not *necessary* or efficient. Nevertheless, for LLI, we show in R1b that it can generalize to more realistic CelebA data.
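As a generic illustration of what a linear latent intervention does (a minimal sketch of our own with synthetic vectors, not the implementation from the paper): shift a latent along the mean-difference direction between two concept classes, assuming the concept is approximately linearly embedded in latent space.

```python
import numpy as np

# Synthetic latents (ours, for illustration only): two concept classes A and B
# occupying different regions of a 16-dimensional latent space.
rng = np.random.default_rng(1)
latents_A = rng.normal(loc=0.0, size=(100, 16))
latents_B = rng.normal(loc=2.0, size=(100, 16))

# Linear latent intervention: the concept direction is estimated as the
# difference between the class means.
direction = latents_B.mean(axis=0) - latents_A.mean(axis=0)

def intervene(z, direction, strength=1.0):
    """Shift latent z toward concept B by `strength` along the direction."""
    return z + strength * direction

z = latents_A[0]
z_steered = intervene(z, direction)
```

The steered latent ends up closer to class B's mean than the original latent, which is the sense in which the intervention elicits the target concept.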
* **7. Discussion of Celeb A in main text**\
Thank you for the suggestion! We moved some of the CelebA results to the main text.
---
Questions:
* **On overprompting (OP):** The short answer is “sometimes”, and just like LLI above, this intervention method was *sufficient* to show capability. While there will be cases where OP will not elicit CG, in R2b we show that OP elicits CG even in SD, demonstrating its generalizability. In fact, SD’s prompt encoder takes brackets [] around words which need to be enhanced, and internally scales the vectors corresponding to these tokens, implementing OP.
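To make the scaling idea concrete, a minimal hypothetical sketch (names, dimensions, and the weight value are ours, not from the paper): amplify the embedding of the token carrying the target concept before it conditions generation.

```python
import numpy as np

# Hypothetical token embeddings for a prompt like "blue triangle" (ours, for
# illustration). Overprompting scales the vector of the concept token to be
# enhanced, analogous to bracket-based prompt weighting in SD front-ends.
rng = np.random.default_rng(0)
tokens = {"blue": rng.normal(size=8), "triangle": rng.normal(size=8)}

def overprompt(embeddings, token, weight):
    """Return conditioning vectors with one token's embedding amplified."""
    out = {k: v.copy() for k, v in embeddings.items()}
    out[token] = weight * out[token]
    return out

boosted = overprompt(tokens, "blue", weight=1.5)
```

The other token embeddings are left untouched; only the targeted concept vector is amplified.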
* **Assumptions on Embedder Patching:** Yes, and also an even stronger one: weights in MLP and CNNs should not drift after convergence of training. Restating the point made in 6 above, our intervention methods are sufficient but not necessary to show a model’s capability.
* **CelebA Results:** Certainly! We reproduce both concept memorization (R1a) and hidden emergence (R1b) in CelebA. The former confirms the inverse concept-signal learning point made above, and the latter shows that this synthetic setup captures realistic concept learning well.
---
Limitations:
* **Comparisons:** To our knowledge, the listed tools (e.g., probing) are not standard protocols for studying learning dynamics in gen. models. The closest work to ours is Okawa et al. 2023, where the authors used probing to model learning dynamics but could not identify what determines the order of concept learning, that there is concept memorization, or that there is a hidden emergence of capabilities. Nonetheless, we will add an expanded discussion of these tools in the paper.
* **Generalizability:** We have added 2x2x2 (R3) and CelebA (R1) results to demonstrate that our result, at least, extends to systems which are a little bit more complex.
---
* **Summary:** We thank the reviewer again for detailed feedback. The reviewer’s comments motivated us to investigate further into the more realistic CelebA setup, and we are happy to show that our major finding on hidden emergence is well reproduced. We hope our changes and new experiments address the reviewer’s concerns adequately, and would be glad if our paper can be recommended more strongly.
---
[1] arxiv.org/abs/2311.03658 | Rebuttal 1:
Rebuttal: Dear Reviewers,
We would like to thank the reviewers for their thoughtful feedback and for recognizing the value of our work. We are pleased that all reviewers found our main contributions, particularly the introduction of the “concept space” framework and the studies of hidden capabilities, to be "a novel lens for analyzing the learning dynamics for diffusion models" [R cQgr], "surprising and notable" [R G81k], “meaningful and interesting” [R uzj6] and "quite novel and could provide many insights to the community" [R o8es]. We are trying to understand diffusion models using controlled synthetic experiments, and many reviewers pointed out that it is crucial to show how much of the findings generalize to real data/models. This feedback was very helpful, and we believe our submission became much more persuasive in this direction with the additional experiments we conducted.
We ran:
* 1 major experiment on CelebA in Fig R1
* 2 experiments with frontier multimodal models in Fig R2
* 27 new experiments with the synthetic setup distributed in Fig. R3, 5
* 8 new probing experiments for Fig. R4
Below are descriptions of the new experiments and the major changes made to our work. Please find the figures in the attached PDF.
Abbreviations: Compositional Generalization (CG)
---
**New experiments**:
* **[Figure R1] Hidden emergence of capabilities reproduced on CelebA:** Many reviewers asked about the generalizability of our results to a more *realistic* scenario. We trained a conditional diffusion model from scratch on CelebA using the compositional concepts With Hat and Female. Fig R1a shows the concept space dynamics evaluated on CelebA. In this case, CG was harder than the experiment in Fig 9 in the paper, and the model was not able to generate (Female, With Hat), and it was *concept memorizing* on (Female, No Hat). However, Fig R1b shows that latent linear intervention was able to elicit this capability and the model can in fact compose the two concepts it has internalized. Fig R1b clearly shows hidden emergence of capability on a realistic dataset with arbitrarily delayed behavior.
* **[Figure R2] CLIP and Stable Diffusion v2.1 suggest potential scalability of our work:** Reviewers also asked if the *assumptions made and the methods used* are only expected to work in simplistic settings. While we accept that there will be more subtleties in generalizing these to real data, we show promising results. In Fig. R2a we show that CLIP already embeds compositional concepts as a cube with roughly orthogonal axes, addressing the concern that concepts might be non-linearly embedded in realistic models, unlike our assumption in the synthetic setup. This also suggests that CLIP can be used as a feature critic to construct a real-life concept space. In Fig. R2b, we show that overprompting can be used to elicit CG in Stable Diffusion v2.1, demonstrating that our method does generalize to some extent to frontier models.
* **[Figure R3] Concept signal controls CG on 2x2x2 concept graph:** Reviewers also asked if the results would generalize to more *complex* scenarios than two binary concepts. On this end, we ran experiments with 3 concept variables and reproduced the findings in Fig. 2a, showing that concept signal controls CG timings by affecting concept learning speeds. Fig. R3a shows the concept accuracy versus training time and Fig. R3b shows that the speed of concept learning (defined by the average accuracy up to the final checkpoint) clearly depends on concept signal.
* **[Figure R4] Robust Capability Learning across random seeds:** We confirmed that overprompting and latent linear interventions show consistent acquisition of CG capabilities across model seeds, while the behavior (execution) has high variance. We show that the standard deviation of capability learning curves is negligible compared to behavior. We confirm this on multiple seeds as it is the main result of our paper.
* **[Figure R5] Quantifying uncertainties across random seeds and data distributions:** Our findings on hidden emergence are robust across random seeds and data distributions. We ran multiple seeds of the experiments under different data distributions. Fig. R5a visualizes the standard error of the mean of the models' concept space learning dynamics and demonstrates the level of uncertainty in the concept space trajectories. Fig. R5b shows that the sudden turn in concept space corresponds to the emergence of a capability across different seeds. Fig. R5c shows that this finding is also consistent across different data distributions, varying the level of concept signal.
---
**Major Edits:**
* **Figure Revisions:** We improved the quality and labeling of most of our figures. We made sure all figures have clearly labeled axes and non-pixelated graphs. In particular, for Fig. 4, the most important figure of our work, we added many seeds (c, d) and example images (a, b, c, d).
* **Related Works:** We substantially expanded the related work section to better ground our paper in the intersection of concept learning, interpretability, and cognitive science (competence vs. behavior).
* **Framework:** We reformatted the framework section (Sec. 3) so that the term capability is well defined. We explicitly distinguish capability from behavior (execution).
* **CelebA:** We added a section in the main text describing CelebA as many reviewers pointed out the importance of showing generalization of the findings to realistic data.
* **Appendix:** We added all our additional experiments to appendix.
Pdf: /pdf/6029fea1697a63bef0ff7c355cbedd7ccf4f337b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Direct Preference-Based Evolutionary Multi-Objective Optimization with Dueling Bandits | Accept (poster) | Summary: This article focuses on preference-based evolutionary multi-objective optimization. Such problems are widely found in real engineering application scenarios, making this one of the most popular research directions in the field of multi-objective optimization.
Overall, in terms of the presentation of the article, this article is well-structured and clearly expressed. From the aspect of methodological design, this article is somewhat innovative. However, from the aspect of experimental design, this article seems to have some defects.
Strengths: The designed method aims to address the issues exists in both consultation and preference elicitation modules.
Specifically, the authors design a clustering-based stochastic dueling bandits algorithm to overcome the convergence problem caused by a large number of preference comparisons. Additionally, the authors apply a Gaussian mixture distribution to leverage the preference learned in the consultation session.
Weaknesses: 1-In Section 4.1, I am curious as to what criteria the authors used to select the benchmark problems. For example, why didn't the authors choose the same series of benchmark problems as DTLZ7, WFG2, WFG4, etc.?
2-In Section 4.2, I am concerned about whether the comparison algorithms used in the experiments are SOTA in the domain.
3-Two variants, D-PBNSGA-II and D-PBMOEA/D, were designed in the article. However, the discussion of these two variants seems a bit thin. What are their respective strengths and weaknesses?
4-Similarly, I found the discussion in the experimental section to be overly simplistic, seeming to restate the experimental results at face value without much analysis of the reasons behind them.
5-Population size, number of search iterations, and other such parameter settings do not seem to be given by the authors.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1-In Section 4.1, I am curious as to what criteria the authors used to select the benchmark problems. For example, why didn't the authors choose the same series of benchmark problems as DTLZ7, WFG2, WFG4, etc.?
2-In Section 4.2, I am concerned about whether the comparison algorithms used in the experiments are SOTA in the domain.
3-Two variants, D-PBNSGA-II and D-PBMOEA/D, were designed in the article. However, the discussion of these two variants seems a bit thin. What are their respective strengths and weaknesses?
4-Similarly, I found the discussion in the experimental section to be overly simplistic, seeming to restate the experimental results at face value without much analysis of the reasons behind them.
5-Population size, number of search iterations, and other such parameter settings do not seem to be given by the authors.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The experimental chapter needs to be further upgraded.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to weaknesses and questions**
We address the reviewer’s concerns one by one as follows.
1. The choice of benchmark test problems follows the existing literature [1] and adheres to the criteria outlined in [2,3]. This ensures that the chosen benchmark test problems have different Pareto-optimal front (PF) shapes and various characteristics (e.g., multiple local optima and concave/convex PFs), while we omitted benchmark problems with duplicated properties. In addition, we included more real-world problems (protein structure prediction and RNA inverse design) to promote AI4Science.
We do not consider DTLZ7 and WFG2 because they have disconnected PF segments. As discussed in [1], it is difficult for decision-makers to elicit their preferences in the disconnected regions, which also represent infeasible parts. Further, since WFG4 shares the same characteristics as WFG7 (i.e., separable, unimodal, and with a concave PF), we consider WFG7 in our experiments.
2. In particular, according to the experimental results reported in a recent paper [4], we chose the three most competitive algorithms (i.e., I-MOEA/D-PLVF, I-NSGA-II/LTR, and IEMO/D) as our peer algorithms. We believe the chosen algorithms represent the current state-of-the-art in preference-based evolutionary multi-objective optimization (EMO). Further, to demonstrate the flexibility of our framework, we chose a preferential Bayesian optimization method [5], which was designed only for single-objective optimization problems, as a baseline and adapted it to our D-PBEMO framework by replacing its preference learning part with our consultation module. As the results in Section 4.3 show, this variant also works well for tackling multi-objective optimization problems.
3. Our proposed D-PBEMO framework is algorithm-agnostic. That is to say, any existing EMO algorithm can be used in the optimization module with minor adaptation. For proof-of-concept purposes, we chose NSGA-II and MOEA/D, two influential EMO algorithms in the literature, as two instances. As reported in the EMO literature [6], NSGA-II performs better on 2-objective problems, while MOEA/D scales well to many-objective problems ($m\ge3$). This characteristic is retained in our D-PBNSGA-II and D-PBMOEA/D algorithms.
4. We apologize for the succinct discussion of the experimental results. This is partially because our proposed D-PBNSGA-II and D-PBMOEA/D have shown consistently superior performance in almost all comparisons, as well as the strict page limit. We promise to strengthen this part in the camera-ready version. Specifically, we plan to focus on discussing the different characteristics of D-PBNSGA-II and D-PBMOEA/D for handling low- and high-dimensional problems, respectively. These characteristics are related to the internal mechanisms of NSGA-II and MOEA/D.
i) NSGA-II is a representative algorithm based on Pareto dominance in its environmental selection. Since its diversity maintenance strategy mainly depends on the distances among solutions, it is relatively robust and fast to find a reasonably good Pareto-optimal front (PF) approximation. However, because NSGA-II relies on Pareto dominance, it does not scale well to many-objective problems.
ii) In contrast, the search of MOEA/D is mainly determined by the weight vectors. It is naturally more scalable to many-objective cases.
The above properties are all reflected in the performance of D-PBNSGA-II and D-PBMOEA/D.
5. All experiment parameter settings, including population size, number of search iterations, and other parameter settings, are provided in our Appendix D.2.
[1] Li, Ke, et al. "Does preference always help? A holistic study on preference-based evolutionary multiobjective optimization using reference points." *IEEE Transactions on Evolutionary Computation* 24(6): 1078-1096, 2020.
[2] Huband, P. et al. "A review of multiobjective test problems and a scalable test problem toolkit," in IEEE Transactions on Evolutionary Computation, 10(5):477-506, 2006, doi: 10.1109/TEVC.2005.861417.
[3] Zapotecas-Martínez, Saúl, et al. "A review of features and limitations of existing scalable multiobjective test suites." *IEEE Transactions on Evolutionary Computation* 23(1): 130-142, 2018.
[4] Li, Ke, et al. "Interactive evolutionary multiobjective optimization via learning to rank." *IEEE Transactions on Evolutionary Computation* 27(4): 749-763, 2023.
[5] J. González, et al. "Preferential bayesian optimization." In ICML’17: Proc. of the 34th international conference on Machine learning, volume 70, pages 1282–1291. PMLR, 2017.
[6] Zhang, Qingfu, and Hui Li. "MOEA/D: A multiobjective evolutionary algorithm based on decomposition." *IEEE Transactions on evolutionary computation* 11(6): 712-731, 2007.
---
Rebuttal 2:
Comment: I think the author did a great job of responding to all of my comments. From my personal point of view, I have no other problems.
Of course, I hope the authors will place some of the replies to my comments in the final version, as it would prevent others from having similar doubts.
Based on the author's and other reviewers' comments, I think this article is of good quality and I would like to increase my rating for this article from 6 (Weak Accept) to 7 (Accept).
---
Rebuttal Comment 2.1:
Title: Response to the Reviewer dDig
Comment: We sincerely appreciate the reviewer's positive feedback on our efforts and the kind decision to raise the rating. Meanwhile, we will for sure carefully revise our final version to make sure it stands at the highest possible quality.
Last but not least, we would like to take this opportunity to thank you and the other reviewers for the constructive suggestions on our work. All of them give us insights into how to improve the quality of our work, as well as how to push this line of research forward.
---
Rebuttal Comment 2.2:
Title: Response to Official Comment by Reviewer dDig
Comment: Dear Reviewer dDig,
Thank you very much again for your positive feedback and confirmation of our effort. We would like to respectfully ask whether you would like to raise your score from 6 (Weak Accept) to 7 (Accept), as mentioned at the end of your comment? Really sorry for chasing this, but the deadline is approaching.
Thank you very much again.
Best regards
Authors | Summary: This paper focused on the problem of multi-objective optimization in the dueling bandits settings. A clustering-based stochastic dueling bandits algorithm was developed and analysis. The performance is further validated via experiments.
Strengths: - A new framework via combining dueling bandits and evolutionary multi-objective optimization was presented.
- The proposed algorithm has been applied to real-world problem, i.e., protein structure prediction, which is interesting and promising.
Weaknesses: - In the dueling bandits literature, one of the most widely assumed and most general winners is the Condorcet winner. In this paper, for example in Definition 2.1, the authors define the Copeland winner. What is the intuition behind this? In other words, does your framework require this specific winner, or can it be generalized to the general Condorcet winner (or other winners such as the Borda winner or Neumann winner that are widely used in the dueling bandits literature)?
- In your algorithms and theorem, $\alpha>0.5$ is assumed. Is this a natural option in practice?
- The proposed D-PBEMO framework consists of three modules as shown in Figure 1(a), while the regret is characterized for the consultation module. I am not sure if I missed anything; can this regret be claimed as that of D-PBEMO?
- The writing of the paper can be significantly improved. It is hard to follow the paper with many mathematical notations not rigorously defined, and some details are missing.
Technical Quality: 2
Clarity: 1
Questions for Authors: See weakness above.
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: N/A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to W1: Choice of winner**
We address the reviewer’s concerns from the following three aspects.
1. The Copeland winner is more universally applicable than the Condorcet winner: the Condorcet winner is always a Copeland winner, whereas a Copeland winner is not necessarily a Condorcet winner. In some scenarios a Condorcet winner may not even exist, but a Copeland winner always does.
2. Following the first bullet point, in our multi-objective optimization scenario, a Condorcet winner may not exist. Specifically, user preferences, which are represented as reference points or golden points in our experimental settings, may not be reachable because they sometimes lie beyond the PF (see examples in Figure A7 on the ZDT3 test problem in our Appendix E.1). In such cases, the non-dominated solution set found by the evolutionary multi-objective optimization (EMO) algorithm may contain multiple solutions that are equally closest to the golden point. These solutions are optimal arms and are Copeland winners but not Condorcet winners. Because the Condorcet winner is unique by definition, it is not suitable for our multi-objective optimization context.
3. As for the reviewer mentioned two other winners, our justifications are as follows.
- For the Borda winner: it belongs to the family of positional voting, where preferences are elicited as a full ranking over all solutions. As reported in [1], such a preference elicitation method can be cognitively intensive for humans. Note that preference-based EMO is itself an optimization-cum-decision-making process, which involves multiple rounds of consultations with humans. Therefore, one of our design principles is to reduce the human's cognitive load as much as we can during each consultation. Given this justification, we think the Borda winner can in principle be applied in our consultation module, but it is not recommended; at least it is not as good as the dueling bandits using pairwise comparisons in our D-PBEMO.
- For the Neumann winner: it is designed for scenarios involving potential clones of arms [2]. However, because each subset (i.e., arm) is different in our D-PBEMO, the preference elicitation module will suspend preference learning before subsets become clones. Further, since the Neumann winner is originally designed for contextual dueling bandits, it is not directly applicable to our EMO context, which is stochastic.
Given the above justifications, we believe our choice of Copeland winner in our current version of D-PBEMO is rational.
[1] Zintgraf, Luisa M., et al. "Ordered preference elicitation strategies for supporting multi-objective decision making." AAMAS'18: 1477-1485, (2018).
[2] Dudík, Miroslav, et al. "Contextual dueling bandits." *Conference on Learning Theory*. PMLR, 2015.
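To illustrate the distinction between the two solution concepts, a small self-contained sketch (our own illustration, not code from the paper): with a cyclic preference matrix no Condorcet winner exists, yet Copeland winners always do.

```python
import numpy as np

# Hypothetical pairwise preference matrix: P[i, j] = Pr(arm i beats arm j).
# This example is cyclic (0 beats 1, 1 beats 2, 2 beats 0), so no Condorcet
# winner exists, but Copeland winners do.
P = np.array([
    [0.5, 0.7, 0.4],
    [0.3, 0.5, 0.8],
    [0.6, 0.2, 0.5],
])

def condorcet_winner(P):
    """Return the arm beating every other arm, or None if none exists."""
    for i in range(len(P)):
        if all(P[i, j] > 0.5 for j in range(len(P)) if j != i):
            return i
    return None

def copeland_winners(P):
    """Arms maximizing the number of pairwise wins (Copeland score)."""
    scores = [(P[i] > 0.5).sum() for i in range(len(P))]
    best = max(scores)
    return [i for i, s in enumerate(scores) if s == best]

print(condorcet_winner(P))   # -> None: the cycle has no Condorcet winner
print(copeland_winners(P))   # -> [0, 1, 2]: every arm wins exactly one duel
```

This also shows why a Copeland winner need not be unique, matching the case of multiple equally close non-dominated solutions described above.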
**Response to W2: $\alpha$ setting**
We confirm to the reviewer that $\alpha > 0.5$ is a natural option in practice. The requirement $\alpha > 0.5$ in the double Thompson sampling proof can be traced back to the RUCB paper [3]. This parameter range ensures the exploration capability in Thompson sampling or UCB processes, which is a fundamental assumption to guarantee the algorithm’s convergence.
[3] Zoghi, Masrour, et al. "Relative upper confidence bound for the k-armed dueling bandit problem." *ICML'14*: 10-18, (2014).
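For intuition on where $\alpha$ enters, a minimal sketch (our own naming and data, following the RUCB-style bound in [3]) of the optimistic estimate $u_{ij} = w_{ij}/(w_{ij}+w_{ji}) + \sqrt{\alpha \ln t / (w_{ij}+w_{ji})}$: larger $\alpha$ widens the confidence radius, keeping under-compared pairs eligible for exploration.

```python
import numpy as np

# Sketch of an RUCB-style upper confidence bound over pairwise wins, where
# wins[i, j] counts how often arm i beat arm j. alpha > 0.5 is the exploration
# parameter discussed above.
def ucb_matrix(wins, t, alpha=0.51):
    n = wins + wins.T                    # total comparisons per pair
    with np.errstate(divide="ignore", invalid="ignore"):
        u = wins / n + np.sqrt(alpha * np.log(t) / n)
    u[np.isnan(u)] = 1.0                 # never-compared pairs stay optimistic
    np.fill_diagonal(u, 0.5)
    return u

wins = np.array([[0., 8., 2.],
                 [2., 0., 6.],
                 [3., 1., 0.]])
U = ucb_matrix(wins, t=25)
```

Each off-diagonal entry is the empirical win rate plus a positive exploration bonus, so an arm pair's optimistic estimate only shrinks toward its true preference as comparisons accumulate.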
**Response to W3: Regret analysis**
We confirm that the reviewer understood well. The regret analysis is for the consultation module, i.e., the clustering-based stochastic dueling bandits algorithm used for preference learning. We justify this further from the following three aspects.
1. The regret represents the uncertainty in the consultation module. In this paper, we do not yet provide an uncertainty quantification for the D-PBEMO algorithm as a whole. This is mainly attributed to the stochastic nature of the reproduction operators (i.e., crossover and mutation), which introduce additional uncertainty that is difficult to quantify.
2. Further, we believe our regret analysis for preference learning in the context of preference-based EMO is an original contribution that, to the best of our knowledge, has not been addressed so far.
3. Note that our proposed D-PBEMO framework is algorithm agnostic. That is to say, any EMO algorithm can be applied with minor modifications in our optimization module. In this paper, we applied the two most influential EMO algorithms as baselines for proof-of-concept purposes. Because the convergence property highly depends on the algorithmic behavior of the underlying EMO algorithm (partially justified in our first bullet point), it is hard to provide a universal convergence analysis that applies to all EMO algorithms. Yet, we believe this is part of our future endeavours.
**Response to W4: Writing of the paper**
We apologize for our presentation when tackling many mathematical notations. In the camera-ready version, we will take two actions to improve the readability of this paper.
1. Carefully double-check all mathematical notations used in this paper and prepare a lookup table at the beginning of the Appendix.
2. We will also augment some important definitions and preliminary knowledge statement about dueling bandits to be more self-contained.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: Thank you for the clarifications. I will increase the rating.
---
Reply to Comment 1.1.1:
Title: Response to the Reviewer FUFf
Comment: Thank you very much for confirming our effort. We also sincerely appreciate your constructive suggestions on improving the quality of our work. | Summary: Preference-based evolutionary multi-objective optimization (PBEMO) methods involve optimization (explore the space), consultation (learn human preference), and elicitation (guide evolutionary search). Existing PBEMO methods may suffer from inaccurate reward models, which is likely to happen given that human feedback exhibit a lot of randomness. The authors propose to directly leverage human feedback without using a reward model to guide the evolutionary search. Specifically, given that human feedback from relative comparison is better than absolute labels, the authors employ dueling bandits to compute preference metrics.
Strengths: The authors prove the regret bound of proposed algorithm, which is better than that of Thompson sampling based dueling bandit algorithms. Empirical evaluations on 33 settings showcase the effectiveness of proposed approach.
Weaknesses: In step 1 of consultation module, one needs to choose K, which is the number of subsets to partition S into. The choice of K would greatly affect how close the solutions are within a subset. Thus, it is important to discuss methods to choose K and provide justifications.
The runtimes are not reported.
Technical Quality: 2
Clarity: 2
Questions for Authors: Could the authors provide some insights on the advantage of starting with a coarse-grained representation, which may yield an initially inaccurate SOI, compared to having a set of Pareto-optimal candidate solution upfront?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The limitations are not clear. The scope of the claims should be discussed. I encourage the authors to create a separate limitations section (see NeurIPS paper checklist).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to W1: Choice of $K$**
We agree with the reviewer that $K$ can impact the crowdedness of solutions within a subset. In particular, the larger the $K$ is, the smaller the distances between solutions within each subset are. As for the choice and sensitivity of $K$, our justifications are presented in the **Author Rebuttal** attached with a PDF containing additional experiment results.
**Response to W2: Runtimes**
Because we are not sure whether the *runtimes* referred to by the reviewer mean the CPU wall-clock time (actual execution time) or the running time analysis used to analyze the convergence of an evolutionary algorithm, we provide discussions on both aspects.
1. In terms of CPU wall-clock time, our D-PBEMO algorithms are the fastest in the experiments. However, we are concerned about whether reporting CPU wall-clock time would constitute a fair comparison. This is mainly attributed to the programming languages used to implement the different algorithms. In particular, our clustering-based stochastic dueling bandits algorithm is implemented in C++ and runs very fast (usually within $2$ seconds). In contrast, the other peer algorithms are mainly implemented in Python and are therefore often much slower than ours. Further, algorithms like I-NSGA-II/LTR [1] involve neural network training that can make them even slower (often over $10$ minutes).
2. As for the run time analysis, while there have been some attempts on the evolutionary multi-objective optimization (EMO) algorithms (e.g., [2]), it is far from mature compared to the rich literature for single-objective optimization, not to mention the preference-based EMO. Further, the run time analysis highly depends on the baseline algorithm while our D-PBEMO framework is EMO algorithm agnostic. In this paper, our key theoretical contribution is the regret bound for the preference learning part (i.e., the consultation module) when considering multiple conflicting objectives. We believe this contribution is original and will be valuable to multiple communities including but not limited to preference learning, multi-objective decision-making, and EMO. As part of our future works, we plan to further analyze the complexity for EMO when using the preference learned from our consultation module.
[1] Li, Ke, et al. "Interactive evolutionary multiobjective optimization via learning to rank." *IEEE Transactions on Evolutionary Computation* 27(4): 749-763, 2023.
[2] Bian, Chao, et al. "A General Approach to Running Time Analysis of Multi-objective Evolutionary Algorithms." *IJCAI'18*: 1405-1411, (2018).
**Response to Q1: Advantage of starting with coarse-grained representation**
We respectfully checked whether the reviewer is concerned about the statement in our *Remark 1*. If this is the case, we address the reviewer's question from the following three aspects.
1. In EMO, solutions at the early stages of the evolutionary search are often far from the Pareto-optimal front (PF). Further, the search direction of EMO is not deterministic. Instead, given the diversity and spread requirements of an evolutionary population, the search direction is rather stochastic in the early stages. Therefore, it can be misleading and noisy if we ask decision-makers to elicit their preferences regarding such solutions.
2. On the other hand, we do not intend to wait until the end of EMO, i.e., until obtaining a set of solutions that well approximates the PF (this also corresponds to the reviewer's mention of "$\cdots$ *a set of Pareto-optimal candidate solution upfront*"). This represents a posteriori decision-making, whose drawbacks are discussed in Appendix A.1. In particular, given the dense characteristics of the PF, it is hard to guarantee that we can obtain solution(s) meeting the decision-makers' preferences upfront. It would be a paradox if the underlying EMO algorithm ended up with no solutions lying in the region of interest. This is not uncommon, since the range of the PF can be too large to cover fully when there are many objectives. In contrast, since preference-based EMO is an optimization-cum-decision-making process, it is designed to search for the solution of interest interactively, rather than to approximate the whole PF.
3. Our strategy in D-PBEMO is to start consulting decision-makers neither too early nor too late, as justified in the above two bullet points. This is because, at that stage: 1) the coarse-grained PF already sufficiently represents the PF shape, and 2) the search direction of the evolutionary population is largely determined. In this case, we believe decision-makers can already elicit meaningful preferences regarding such solutions. This hypothesis is also empirically validated through our experiments, i.e., our proposed D-PBEMO instances outperform the other peer algorithms. As suggested by experiments in [3, 4], it is advisable to elicit decision-makers' preferences in the later half of an EMO process. Here, to make our comparison fair, we heuristically set the timing for consulting decision-makers at the middle of the EMO process for all peer algorithms.
[3] Lai, Guiyu, et al. "Empirical studies on the role of the decision maker in interactive evolutionary multi-objective optimization." *CEC'21*: 185-192, (2021).
[4] Marquis, Jon, et al. "Impact of number of interactions, different interaction patterns, and human inconsistencies on some hybrid evolutionary multiobjective optimization algorithms." *Decision Sciences* 46(5): 981-1006, 2015.
---
Rebuttal Comment 1.1:
Title: Additional justification of limitations
Comment: Dear **Reviewer hDtX**,
We just realized that our response to your **Limitations** concern was missing. Please find our response as follows.
```
Due to the space limitations, we only briefly discussed the limitations in the conclusion section of our current manuscript. In the camera-ready version, we will add a dedicated section to discuss the limitations. We list some examples as follows.
1. The regret analysis of our proposed clustering-based stochastic dueling bandits is for the optimal subset, i.e., the region of interest on the PF. It is not yet directly applicable to identifying the exact optimal solution of interest. As part of our future work, we will work on efficient algorithms and a theoretical study of best arm identification in the context of preference-based EMO.
2. This paper only analyzes the regret of the consultation module. How to further analyze the convergence of the D-PBEMO as a whole remains unknown. This will also lead to the next step of our research. In particular, if it is successful, we may provide a radically new perspective to analyze the convergence of evolutionary multi-objective optimization algorithms.
```
Hope this can address your concern at this point. If you have any concerns and questions, we are more than happy to have further discussion. Thank you very much for your efforts and help. | Summary: One challenge and potential advantage of multi-objective optimization (MO) is to adapt the dynamic human preferences while outputting an optimum. Although Preference-based evolutionary MO (PBEMO) is a promising framework, current approaches are inefficient and may not interpret the decision makers' true intentions accurately. One reason is that the decision makers' true intentions were not precisely "expressed" and "acknowledged" by the predefined reward model. This paper, intending to solve this reason, designs a framework that directly leverages human feedback, using a clustering-based stochastic dueling bandits algorithm.
Strengths: 1. This paper tackles an interesting and important problem, and the method is novel. In particular, it is certainly valuable to directly express the human feedback into the framework, as human feedback is the critical information for preferences outputs.
2. The application of multiple arms and Copeland winner as the main decision-making criteria indeed makes sense to me. Although different, it resembles the rank computation in many RLHF schemes to some extent.
Weaknesses: Due to my very limited knowledge in this area, I might not be able to find valuable weaknesses in this paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: In Section 3.2 "Preference Elicitation Module", why is Equation (4) the resulted mixture distribution? If it was computed, were there any justifications that this shall be the result? More explanations on this may be appreciated because, to the best of my understanding, this part plays a critical role in justifying the framework since it directly works on the preferences.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Response to Q1: Is the Gaussian mixture model computed**
Thank you for this question. First of all, the Gaussian mixture model is not computed. Instead, it is a model assumption which we contend to be reasonable to represent user preference distribution within the solution of interest (SOI) region. We justify this from the following three aspects.
1. A key assumption of using the Gaussian mixture model is that the user preference distribution within the SOI region follows a Gaussian distribution. However, we do not enforce such a Gaussian distribution assumption in the other areas outside the SOI region.
2. Theoretically, a Gaussian mixture model with enough components can approximate any continuous distribution. Therefore, using a Gaussian mixture model to simulate user preference distributions is more general than other methods for expressing preferences, such as the Gaussian process for ranking used in Bayesian optimization [1].
3. In addition, the Gaussian mixture model can be used to derive the uncertainty of the preference elicitation model.
[1] Zintgraf, Luisa M., et al. "Ordered preference elicitation strategies for supporting multi-objective decision making." *AAMAS'18*: 1477-1485, (2018).
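As a concrete illustration of point 2 (a toy example of our own, not the paper's code), a one-dimensional Gaussian mixture can place most of a simulated user's preference mass on one or more regions of interest along the PF:

```python
import math

def gmm_density(x, weights, means, stds):
    """Density of a 1-D Gaussian mixture at x."""
    return sum(w * math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2 * math.pi))
               for w, m, s in zip(weights, means, stds))

# Simulated user whose preference concentrates near two regions of the PF.
weights, means, stds = [0.7, 0.3], [0.2, 0.8], [0.05, 0.05]
# The density peaks inside the regions of interest, not between them.
assert gmm_density(0.2, weights, means, stds) > gmm_density(0.5, weights, means, stds)
```

Adding components lets the mixture express multimodal preferences that a single Gaussian (or a unimodal utility model) could not.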
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: I appreciate the concise response by the authors. However, due to my very limited knowledge in this area, I will keep my score and confidence level.
---
Reply to Comment 1.1.1:
Title: Response to "Thanks for the response"
Comment: We appreciate the reviewer's support in this work. | Rebuttal 1:
Rebuttal: # Response to reviewer hDtX about the choice of $K$
We agree with the reviewer hDtX that $K$ can impact the crowdedness of solutions within a subset. In particular, the larger the $K$ is, the smaller the distances between solutions within each subset are. As for the choice of $K$, we justify this from two perspectives:
1. As shown in Theorem 3.3, $K$ is involved in the regret of our clustering-based stochastic dueling bandits algorithm. A larger $K$ results in a larger uncertainty in preference learning for the same number of comparisons, leading to a slower convergence rate. However, it also narrows the final region of interest (ROI). There is no definitive guideline for selecting $K$ for different problems. Users can adjust $K$ to an appropriate value based on the accuracy of preference learning.
2. Further, we have conducted a sensitivity study on $K$ for $2$-objective problems in Section 4.4. In particular, we set the population size as $100$ and $K\in\{2,5,10\}$. From the results therein we can see that our proposed D-PBEMO framework is not sensitive to the choice of $K$. In addition, we have conducted additional experiments on problems with more objectives. The results can be found in the attached PDF file.
As the dimensionality increases, the population size will increase. Generally, with larger populations, a higher $K$ tends to yield better results, aligning with our intuition. Furthermore, our significance analysis across 20 repeated experiments reveals that different choices of $K$ do not lead to significant differences in performance.
In summary, $K$ does not significantly impact the performance of our proposed D-PBEMO framework. For most problems, we do not recommend choosing a very small/large $K$ (e.g., $K=2$, $K=N$), as it may inefficiently narrow down the ROI.
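To make the role of $K$ tangible, here is a minimal k-means sketch (our own simplification; the paper's clustering procedure may differ) that partitions a population of objective vectors into $K$ subsets, each of which then acts as an arm in the consultation module:

```python
import random

def kmeans(points, K, iters=20, seed=0):
    """Partition a list of objective vectors (tuples) into K subsets."""
    rng = random.Random(seed)
    centers = rng.sample(points, K)
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(K)]
        for p in points:
            # Assign each point to its nearest center (squared distance).
            i = min(range(K),
                    key=lambda k: sum((a - b) ** 2 for a, b in zip(p, centers[k])))
            clusters[i].append(p)
        # Recompute centers; keep the old one if a cluster went empty.
        centers = [tuple(sum(c) / len(cl) for c in zip(*cl)) if cl else centers[k]
                   for k, cl in enumerate(clusters)]
    return clusters

# Four solutions forming two well-separated groups split cleanly with K=2.
population = [(0.0, 0.0), (0.1, 0.0), (1.0, 1.0), (0.9, 1.1)]
assert sorted(len(c) for c in kmeans(population, K=2)) == [2, 2]
```

A larger `K` leaves fewer solutions per subset, narrowing the final region of interest at the price of more pairwise comparisons — mirroring the trade-off described in point 1 above.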
Pdf: /pdf/b2d133a2cea40b982ca2eee3a59658e14fa9e0ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CALANet: Cheap All-Layer Aggregation for Human Activity Recognition | Accept (poster) | Summary: This paper designs CALANet, a cheap all-layer aggregation network designed for real-time sensor-based HAR. The main objective of CALANet is to improve the accuracy of HAR while maintaining a low computational costs on edge devices for real-time applications. The authors argue that existing CNN models for HAR often suffer from limited accuracy because they only use features from the last layer of the network. In contrast, CALANet allows the classifier to aggregate features from all layers. The authors theoretically prove that the computational cost of CALANet is equivalent to that of conventional CNNs. The authors utilize 7 datasets to demonstrate the effectiveness of CALANet. The results show that CALANet outperforms seven state-of-the-art methods, achieving superior performance on 7 datasets.
Strengths: [1] The topic of sensor-based HAR is interesting and important, aligning with the scope of the ML community.
[2] I appreciate authors for providing theoretical proofs, for the complexity and others.
[3] 7 datasets are used in evaluation, providing sufficient evaluation results with detailed analysis. Both efficiency and effectiveness are evaluated.
[4] It is good to see future studies are also discussed.
Weaknesses: - One of my concerns is the baselines. Why choose these models as baselines? I believe there are many more advanced models in sensor-based HAR. I suggest the authors go through some papers in AAAI, IMWUT, Sensys, Mobicom, where most HAR papers are published. Otherwise, current improvement might not justify the advantage of CALANet. Also, there are more advanced time series classification models from NIPS, ICML, etc.
- The goal of this study is improving the accuracy of HAR with the computational costs during the inference phase as the constraint. Therefore, the fundamental thing is the accuracy. However, why CNN become the choice for this study? While authors mention that CNN is popular for HAR recently in Line 27, this argument does not support the study of accelerating CNN for HAR in this paper.
- Also curious about the dataset partition, some are 7:3 and others are 8:2.
- More references added in the content will improve the manuscript further, e.g., from lines 51 to 55, references can be added to support the argument of the “straightforward approach” and the “computational costs”.
- In related work section, it seems the discussion of accelerating CNN or neural network is missing, which is a popular topic and has been widely studied.
- Writing: Writing can be further improved. For example, in Table 1, authors can put their own models at the same location (e.g., at the bottom). I believe this will lower the cognitive load of readers. In line 47, “under real-time response”.
- I suggest the authors to re-design Fig 2 for better readability. Fig 2 takes a lot of spaces but it is challenging to understand why the proposed model can utilize all features with low computational costs.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see above.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Please see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the constructive comments of the reviewer. Below, we provide specific answers and explanations regarding those comments.
**(W1) One of my concerns is the baselines. Why choose these models as baselines? I believe there are many more advanced models in sensor-based HAR. I suggest the authors go through some papers in AAAI, IMWUT, Sensys, Mobicom, where most HAR papers are published. Otherwise, current improvement might not justify the advantage of CALANet. Also, there are more advanced time series classification models from NIPS, ICML, etc.**
Thank you for your constructive comments. Although there are many advanced models in sensor-based HAR, unfortunately, we could not find advanced models targeting real-time HAR.
For example, generative models or semi-supervised HAR studies are difficult to compare fairly with our CALANet due to different evaluation scenarios.
Instead, we experimented with two models in time series classification. Please check the attached .pdf file in “Global(Author) Rebuttal”. Table 2 shows the comparison results for CALANet, MILLET, and DSN. MILLET is a TSC framework designed to provide inherent interpretability. Meanwhile, DSN was proposed so that the temporal receptive field is trained sparsely and selectively.
As shown in Table 2, CALANet exhibited comparable performance despite its significantly lower FLOPs.
**(W2) The goal of this study is improving the accuracy of HAR with the computational costs during the inference phase as the constraint. Therefore, the fundamental thing is the accuracy. However, why CNN become the choice for this study? While authors mention that CNN is popular for HAR recently in Line 27, this argument does not support the study of accelerating CNN for HAR in this paper.**
Thank you for your constructive comments. Popular architectures in sensor-based human activity recognition (HAR) literature include CNNs, RNNs, and Transformers.
However, our goal is to provide accurate feedback while satisfying real-time constraints. In real-time HAR, RNNs have some weaknesses, including poor parallelization and the lack of hardware accelerators compared to CNNs.
In addition, Transformers must compute relationships between individual timestamps, each of which carries little meaning on its own. Their search space is also larger than that of CNNs, resulting in slower training.
Although accuracy is the fundamental concern in this study, the other architectures struggle to satisfy the real-time constraint. Therefore, we adopted CNNs. We apologize for the confusing sentences and will revise them to avoid this confusion.
**(W3) Also curious about the dataset partition, some are 7:3 and others are 8:2.**
Thank you for your constructive comments. We used the split ratios recommended in the original papers or followed the settings adopted by prior studies.
**(W4) More references added in the content will improve the manuscript further, e.g., from line 51 to 55, references can be added to support the argument of the “” straightforward approach” and the “”computational costs”.**
Thank you for your constructive comments. We will add not only two references [1], [2] but also more references to improve the manuscript further.
**(W5) In related work section, it seems the discussion of accelerating CNN or neural network is missing, which is a popular topic and has been widely studied.**
Thank you for your constructive comments. We will add some sentences. For example: furthermore, model compression is commonly used to further accelerate inference on wearable or mobile devices. In particular, recent studies have preferred quantization methods over other compression methods, such as pruning [3]. Such post-processing can further accelerate and optimize CALANet.
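As a self-contained illustration of the kind of quantization post-processing mentioned above (our own toy example, not part of CALANet), symmetric int8 weight quantization looks roughly like:

```python
def quantize_int8(weights):
    """Symmetric int8 quantization: map floats to integers in [-127, 127]."""
    # Guard against all-zero weights, where the scale would be zero.
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    return [v * scale for v in q]

w = [0.5, -1.27, 0.02]
q, s = quantize_int8(w)
assert all(-127 <= v <= 127 for v in q)
# Reconstruction error is bounded by half a quantization step.
assert all(abs(a - b) <= s / 2 + 1e-12 for a, b in zip(w, dequantize(q, s)))
```

Storing 8-bit integers instead of 32-bit floats shrinks the model roughly 4x, and integer arithmetic is typically faster on edge hardware.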
**(W6) Writing: Writing can be further improved. For example, in Table 1, authors can put their own models at the same location (e.g., at the bottom). I believe this will lower the cognitive load of readers. In line 47, “under real-time response”.**
Thank you for your constructive comments. The entire manuscript will be reviewed and revised to improve readability.
**(W7) I suggest the authors to re-design Fig 2 for better readability. Fig 2 takes a lot of spaces but it is challenging to understand why the proposed model can utilize all features with low computational costs.**
Thank you for your constructive comments. Although a redesigned figure takes up a lot of space and cannot be attached here, we will further develop Fig. 2 to improve its readability.
> [1] Lee, Chen-Yu, et al. "Deeply-supervised nets." Artificial intelligence and statistics. Pmlr, 2015.
> [2] Yu, Fisher, et al. "Deep layer aggregation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018.
> [3] Kuzmin, Andrey, et al. "Pruning vs quantization: which is better?." Advances in neural information processing systems 36 (2024).
---
Rebuttal Comment 1.1:
Comment: I appreciate the detailed feedback from the authors, which addressed some concerns.
One thing about "real-time HAR" is that some studies might not exactly highlight "real-time" in the title or abstract, however, they are still highly efficient. Also, in (w2), there are multiple claims about the efficiency of RNN and transformers. I suggest that the authors provide better intuition or motivation for choosing CNN. Also, some studies or tutorials focusing on the computational complexities of different model architectures might also be helpful.
---
Reply to Comment 1.1.1:
Comment: Thank you for your constructive comments.
In this study, one of our primary missions is to achieve an accurate HAR without exceeding the model complexity of existing NNs for HAR.
Thus, increasing the complexity of the model is not an option in our study, because a larger model typically yields better accuracy at the cost of increased inference time.
As a result, we had to choose CNNs because CNNs are much lighter than RNNs or transformers, and CNNs for HAR already exist.
Please see the following detailed explanation:
Sensor-based HAR can be defined as a multivariate time series classification task.
To solve this problem, the classifier requires both local (via CNNs) and global (via RNNs or transformers) temporal representations [1].
Specifically, the locality of CNNs improves accuracy due to their translational invariance concerning the precise location of activity within a segment of time-series data [2].
On the other hand, RNNs or transformers have an advantage for global feature extraction because they can model long-term dependencies.
In this regard, many studies have attempted to integrate RNNs or transformers into CNNs [1,3-6], which has increased both accuracy and inference times.
The increase in inference time is primarily because of the lack of device-level optimizations compared with CNNs [7].
We noted that real-time or efficient HAR models using wearable sensors process the input signals with short segmentation lengths for rapid response.
If CNNs are sufficient to extract meaningful information from such short-term signals, the unnecessary increase in inference time due to integration with RNNs or transformers can be avoided.
In Table 1 of the manuscript, CALANet outperformed two CNNs with RNNs, i.e., Bi-GRU-I [5] and DeepConvLSTM [6], on all datasets.
In addition, we compare CALANet with RevAttNet [3] and IF-ConvTransformer [4], hybridizations of CNNs and transformers.
In the table below, CALANet exhibited comparable performance despite its significantly lower FLOPs.
These results indicate that CNNs are sufficient to model the temporal information for the real-time HAR dataset.
|(F1-score / FLOPs)|UCI-HAR|UniMiB-SHAR|DSADS|OPPORTUNITY| KU-HAR|PAMAP2|
|:---|:---|:---|:---|:---|:---|:---|
|CALANet|96.1 / 7.6M|78.3 / 8.8M|90.0 / 8.5M|81.6 / 19.3M|97.5 / 29.6M|79.4 / 74.9M|98.2 / 56.7M|
|RevAttNet|95.1 / 143.1M|76.7 / 168.7M|87.6 / 140.2M|78.6 / 101.5M|97.7 / 335.3M|79.7 / 573.5M|98.5 / 282.1M|
|IF-ConvTransformer|95.4 / 209.8M|77.0 / 183.5M|87.5 / 628.4M|82.2 / 986.2M|96.4 / 491.7M|80.1 / 1.7G|97.4 / 3.0G|
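For context on where such FLOPs figures come from, the cost of CNN-based HAR models is typically dominated by their convolution layers. A back-of-the-envelope sketch (our own, counting multiply-accumulates for an unpadded 1-D convolution; conventions vary, e.g. some papers count 2 FLOPs per MAC):

```python
def conv1d_macs(seq_len, in_ch, out_ch, kernel, stride=1):
    """Multiply-accumulates for one 1-D convolution layer (no padding)."""
    out_len = (seq_len - kernel) // stride + 1
    return out_len * kernel * in_ch * out_ch

# Hypothetical example: a 128-timestep, 9-channel IMU segment through a
# 64-filter, kernel-size-5 layer.
assert conv1d_macs(128, 9, 64, 5) == 357_120
```

Summing this quantity over all layers gives estimates on the same order as the megaFLOP figures reported in the table above.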
> [1] Zhao, Bowen, et al. "Rethinking attention mechanism in time series classification." Information Sciences 627 (2023): 97-114.
> [2] Hammerla, Nils Y., Shane Halloran, and Thomas Plötz. "Deep, convolutional, and recurrent models for human activity recognition using wearables." Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence. 2016.
> [3] Pramanik, Rishav, Ritodeep Sikdar, and Ram Sarkar. "Transformer-based deep reverse attention network for multi-sensory human activity recognition." Engineering Applications of Artificial Intelligence 122 (2023): 106150.
> [4] Zhang, Ye, et al. "IF-ConvTransformer: A framework for human activity recognition using IMU fusion and ConvTransformer." Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 6.2 (2022): 1-26.
> [5] Tong, Lina, et al. "A novel deep learning Bi-GRU-I model for real-time human activity recognition using inertial sensors." IEEE Sensors Journal 22.6 (2022): 6164-6174.
> [6] Ordóñez, Francisco Javier, and Daniel Roggen. "Deep convolutional and lstm recurrent neural networks for multimodal wearable activity recognition." Sensors 16.1 (2016): 115.
> [7] Mehta, Sachin, and Mohammad Rastegari. "MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer." International Conference on Learning Representations. 2022. | Summary: The CALANet describes a technique to aggregate the features from all the neural network layers for human activity recognition (HAR).
Because HAR is a common application for wearable devices, the model needs to be lightweight enough to be deployed on the edge, for example on an Apple Watch.
Existing studies are often limited by shallow networks. The activity prediction is done on the final layer without using the features from previous layers,
which may contain key information. Working with computational constraints, CALANet proposes to aggregate features from all the layers to improve the model performance.
The CALANet consists of two modifications: (1) a channel-wise transformation matrix to condense features from each layer, and (2) a ShuffleNet-like convolution to
aggregate all the layer features with computational efficiency. The authors also presented proof results on why CALANet is within the same computational complexity as a shallow network. The empirical results on 7 benchmarks against the state-of-the-art models supported the authors' claim.
Strengths: 1. This paper presented both theoretical guarantees and empirical evidence to validate the computational efficiency of the proposed model.
2. It was cool to see the authors start the paper with an empirical observation on how the features at different layers might differ to motivate the work.
3. CALANet addresses an important question for efficient mobile computing in the HAR space.
4. The manuscript reads well.
Weaknesses: 1. The notations of the proofs can be better clarified so that the readers don't need to refer to the appendix. For example, D is not explained in eq. 2 during its first occurrence.
2. This is a limitation of the field of HAR in general. Existing benchmarks are small, usually with fewer than 100 participants. It is fairly easy to overfit on the test set. I can see that you are just doing a simple train/test split. I know that some of the older benchmarks recommend this, but it would be interesting to see cross-fold validation results. It is probably ok if you don't have the time to do this. Furthermore, could you clarify how you selected the hyper-parameters for all the models? Again, I suspect that the results reported could overfit your current test set given your evaluation framework.
3. In your ablation study you are trying to show the effectiveness of your layer aggregation technique (Table 2), there are several changing variables including network depths and FLOPs in addition to the network tricks you introduced. I don't think we can conclude that the layer aggregation trick worked. To test this properly, we should do the ablation studies using the same network L and probably similar FLOPs across different baselines.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. L60: what do you mean by temporal resolution T?
2. The majority if not all of the benchmark datasets you used are lab-based, which is really not a realistic assessment. Could consider adding one of the free-living datasets to the baselines if time allows. But you don't have to do this.
1. Logacjov, Aleksej, et al. "HARTH: a human activity recognition dataset for machine learning." Sensors 21.23 (2021): 7853.
2. Chan, Shing, et al. "CAPTURE-24: A large dataset of wrist-worn activity tracker data collected in the wild for human activity recognition." arXiv preprint arXiv:2402.19229 (2024).
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the constructive comments of the reviewer. Below, we provide specific answers and explanations regarding those comments.
**(W1) The notations of the proofs can be better clarified so that the readers don't need to refer to the appendix. For example, $D$ is not explained in eq. 2 during its first occurrence.**
**(Q1) L60: what do you mean by temporal resolution $T$?**
Thank you for your constructive comments. We will modify or add sentences to aid understanding of the notation, including the kernel size $D_k$ and the sequence length $T$.
**(W2) This is a limitation of the field of HAR in general. Existing benchmarks are small, usually with fewer than 100 participants, so it is fairly easy to overfit on the test set. I can see that you are just doing a simple train/test split. I know that some of the older benchmarks recommend this, but it would be interesting to see cross-fold validation results. It is probably ok if you don't have the time to do this. Furthermore, could you clarify how you selected the hyper-parameters for all the models? Again, I suspect that the reported results could overfit your current test set given your evaluation framework.**
**(Q2) The majority if not all of the benchmark datasets you used are lab-based, which is really not a realistic assessment. Could consider adding one of the free-living datasets to the baselines if time allows. But you don't have to do this.**
Thank you for your constructive comments. We agree that the recommendations of the older benchmarks may cause overfitting on the test set. We are conducting additional experiments; the results of CALANet's 5-fold cross-validation on the KU-HAR dataset are as follows:
> KU-HAR dataset (5-fold)
> F1-score: 92.1, 95.7, 94.7, 97.3, 94.1
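As a quick summary of the fold scores above (pure arithmetic on the reported numbers, not part of the rebuttal's experiments), the 5-fold results can be reduced to a mean and sample standard deviation:

```python
import numpy as np

# 5-fold F1-scores on KU-HAR as reported above
f1_scores = np.array([92.1, 95.7, 94.7, 97.3, 94.1])

mean_f1 = f1_scores.mean()        # mean over the 5 folds, ~94.78
std_f1 = f1_scores.std(ddof=1)    # sample standard deviation, ~1.93

print(f"KU-HAR 5-fold F1: {mean_f1:.2f} +/- {std_f1:.2f}")
```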
Also, we largely used the hyper-parameters recommended in the original papers, except for the number of epochs, batch size, and optimizer.
**(W3) In your ablation study you are trying to show the effectiveness of your layer aggregation technique (Table 2), there are several changing variables including network depths and FLOPs in addition to the network tricks you introduced. I don't think we can conclude that the layer aggregation trick worked. To test this properly, we should do the ablation studies using the same network L and probably similar FLOPs across different baselines.**
Thank you for your constructive comments. Please check the attached .pdf file in “Global (Author) Rebuttal”. In Table 1 we compare two networks with similar FLOPs, with and without our cheap all-layer aggregation. Even with the layer aggregation applied, FLOPs were slightly reduced while the F1-score improved. In particular, Figure 1 shows the tradeoff between FLOPs and F1-score for varying numbers of layers in CALANet with/without LCTMs and SLAP. Tradeoff curves closer to the top-left are more efficient, with a higher F1-score per FLOP. As a result, we can conclude that the layer aggregation worked.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for addressing all of my concerns.
I can't upgrade my current rating from 6 to 7 because of the manuscript's limited relevance: the proposed model architecture is mostly applicable to bio-signals rather than time series in general, whereas a rating of 7 would require moderate impact on several domains. Perhaps the ubiquitous computing community will find this work more relevant.
---
Rebuttal 2:
Comment: Thank you for your recommendation and encouragement.
Human Activity Recognition has long been a research topic of interest at the NeurIPS conference. Below are representative works presented at NeurIPS.
* DelPreto, Joseph, et al. "ActionSense: A multimodal dataset and recording framework for human activities using wearable sensors in a kitchen environment." Advances in Neural Information Processing Systems 35 (2022).
* Cheng, Ricson, Ziyan Wang, and Katerina Fragkiadaki. "Geometry-aware recurrent neural networks for active visual recognition." Advances in Neural Information Processing Systems 31 (2018).
* Mahdaviani, Maryam, and Tanzeem Choudhury. "Fast and scalable training of semi-supervised CRFs with application to activity recognition." Advances in Neural Information Processing Systems 20 (2007).
Also, studies on model complexity with respect to inference time on resource-limited devices have recently attracted attention.
* Kuzmin, Andrey, et al. "Pruning vs quantization: which is better?." Advances in neural information processing systems 36 (2024).
* Zheng, Hong-Sheng, et al. "StreamNet: memory-efficient streaming tiny deep learning inference on the microcontroller." Advances in Neural Information Processing Systems 36 (2024).
These studies redesign models primarily based on FLOPs or inference time in seconds, so their analysis depends on the device's choice.
In this regard, one of this study's key contributions, and a difference from previous works, is improving model performance while maintaining the model's complexity in terms of theoretical time complexity.
Because these aforementioned works are closely related to our work and were presented at the NeurIPS, we believe that our work is within the scope of the NeurIPS conference.
---
Rebuttal Comment 2.1:
Comment: I agree with the authors that the work presented is of relevance to the NeurIPS community. This is well justified by the relevant work that you have cited.
I would like to thank the authors for your timely and clear responses to my comments.
My rating stands as 6 because the proposed work is specific to bio-signals instead of time series modelling in general.
Good luck :D | Summary: This paper proposes an all-layer aggregation network, CALANet, to improve model accuracy while maintaining the efficiency of lightweight models. Specifically, the authors have exploited the features from all layers for classification, as a kind of aggregation.
Strengths: 1. The motivation of this work is clear. It sounds reasonable that the features from all layers can provide more information for classification, compared to merely using the features from the last layer.
2. Extensive experiments have been conducted, including comparison results and ablation study.
3. The organization of paper is clear and the writing is easy to understand.
Weaknesses: 1. My major concern is about the aggregation. When I first read the title of this paper, I assumed that the authors had trained a neural network and then compressed its multiple layers via aggregation. However, it seems that the authors just add several components (LCTMs, SLAP) to a complete CNN. It looks like an advanced version of residual networks with channel weights.
2. The authors claimed that they wanted to improve model performance while keeping model efficiency. However, in the table, it seems that, in some cases (KU-HAR, PAMAP2), the proposed method did not have obvious F1 improvements but resulted in highly increased FLOPs.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. How to prove that the performance improvement is because of the aggregation, but not because of the more complicated network architecture? To be more specific, what if we do not use the proposed LCTMs and SLAP but directly increase the layers of CNNs? It would be convincing if the authors provide results with varying numbers of layers with/without LCTMs and SLAP.
2. The authors have used FLOPs to measure model efficiency, which may be inadequate, because the best hyperparameters for different models are different. If the authors directly use the same hyperparameters for all models, we may see a fair comparison in terms of efficiency, but it is not fair to compare their accuracy/F1. Can the authors provide the results in terms of training/testing time for intuitive efficiency comparison (in Table 1)?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: 1. This is not a model compression work but just proposed some aggregation modules. The authors should discuss the difference between their work and model compression works.
2. This work mainly focuses on CNNs. The authors can further discuss other network architectures.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the constructive comments of the reviewer. Below, we provide specific answers and explanations regarding those comments.
**(W1) My major concern is about the aggregation. When I first read the title of this paper, I assumed that the authors had trained a neural network and then compressed its multiple layers via aggregation. However, it seems that the authors just add several components (LCTMs, SLAP) to a complete CNN. It looks like an advanced version of residual networks with channel weights.**
**(L1) This is not a model compression work but just proposed some aggregation modules. The authors should discuss the difference between their work and model compression works.**
Thank you for your constructive comments. There is an important difference between our study and model compression works. The objective of model compression is to reduce model size and computations while minimizing loss of accuracy. Conversely, our goal is to improve accuracy while maintaining computations. Thus, we did not focus on compressing the model. Instead, we fixed its complexity because the computational capability of wearable devices is fixed and hard to change.
**(W2) The authors claimed that they wanted to improve model performance while keeping model efficiency. However, in the table, it seems that, in some cases (KU-HAR, PAMAP2), the proposed method did not have obvious F1 improvements but resulted in highly increased FLOPs.**
**(Q1) How to prove that the performance improvement is because of the aggregation, but not because of the more complicated network architecture? To be more specific, what if we do not use the proposed LCTMs and SLAP but directly increase the layers of CNNs? It would be convincing if the authors provide results with varying numbers of layers with/without LCTMs and SLAP.**
Thank you for your constructive comments. If we have understood the concern correctly, we apologize for the confusion caused by omitting a version of CALANet without layer aggregation from Table 2 of the submitted paper. To avoid this confusion, we will add Figure 1 and Table 2 from the attached .pdf file to the paper.
Please check the attached .pdf file in “Global(Author) Rebuttal”. Figure 1 shows the tradeoff between the FLOPs and F1-score with varying numbers of layers in CALANet with/without LCTMs and SLAP.
Tradeoff curves closer to the top-left are more efficient, with a higher F1-score per FLOP. As shown in Figure 1, CALANet with LCTMs and SLAP achieves a higher F1-score at a similar computational cost compared to the version without LCTMs and SLAP.
**(Q2) The authors have used FLOPs to measure model efficiency, which may be inadequate, because the best hyperparameters for different models are different. If the authors directly use the same hyperparameters for all models, we may see a fair comparison in terms of efficiency, but it is not fair to compare their accuracy/F1. Can the authors provide the results in terms of training/testing time for intuitive efficiency comparison (in Table 1)?**
Thank you for your constructive comments. We are measuring the training and test times for all baselines and datasets in Table 1. In our measurements, we found that the testing time of a network architecture depends not only on FLOPs but also on various environmental factors, including the device, available memory, and background apps. Interestingly, in most cases, we confirmed that applying LCTMs and SLAP to a given architecture hardly increased the time.
Also, the training time tended to be almost proportional to FLOPs, because we used the same number of epochs for all models and larger models incurred more memory accesses.
**(L2) This work mainly focuses on CNNs. The authors can further discuss other network architectures.**
Thank you for your constructive comments. In the sensor-based human activity recognition (HAR) literature, popular architectures include CNNs, RNNs, and Transformers.
Compared with CNNs, RNNs have some weaknesses for real-time HAR:
1) poor parallelization due to dependencies between computations, and
2) a lack of hardware accelerators for edge-device deployment.
Meanwhile, Transformers must compute relationships between individual timestamps, each of which is meaningless on its own.
Furthermore, hybridizing CNNs and Transformers can be a promising approach. We also want to emphasize a strength of CALANet here: self-attention is commonly computed over the outputs of convolutional layers because of their local temporal modeling capability. Therefore, our LCTMs and SLAP can improve CNN-Transformer hybrids.
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed response. Good luck. | Summary: The problem of Human Activity Recognition (HAR) is considered in this paper where the border between different activities can differ depending on the type of activity. For example, one activity can be “just sitting”, and another can be “sitting while speaking”. To this end, the authors argue that we need to leverage the features extracted in all the layers of a neural net. Thus, the authors propose a modification to ConvNet models where (called “all-layer aggregation”) is added to the model which takes its input from all the previous conv layers. To do so, the authors design learnable channel-wise transformation matrices that can be added to the model and provide fast aggregation. Evaluation results on seven HAR dataset shows that the proposed modification can improve the classification accuracy of HAR, compared to other alternatives.
Strengths: This paper spots a very interesting problem in HAR and offers an effective solution with a nice architectural modification. The paper is written well and easily understandable.
Weaknesses: The weakness of this paper is its relevance to the NeurIPS. This work gets more attention and appreciated by people in the Mobile or UbiComp community as the novelty and contribution is not much in the ML part but in systemic modification of the architecture.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It seems that such a modification is only applicable to ConvNets. It is not clear how similar things can be done to other architectures.
2. I could not understand whether this CALANet is only useful to HAR or if it can also help with other data types like audio data or bio signals.
3. It would be interesting and useful if the authors could show how this modification can be applied to some benchmark architectures that are built for mobile or wearable devices such as MobileNet or EfficientNet, and similar models.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The main limitation is that the solution is very specific to HAR datasets and it is only applied to basic ConvNets and not other benchmark models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the constructive comments. Below, we provide specific answers and explanations regarding those comments.
**(W1) The weakness of this paper is its relevance to the NeurIPS. This work gets more attention and appreciated by people in the Mobile or UbiComp community as the novelty and contribution is not much in the ML part but in systemic modification of the architecture.**
Thank you for your constructive comments. We respect this opinion, but we believe that the discovery of new architectural structures can also serve as a foundation for important research in the field of ML.
**(Q1) It seems that such a modification is only applicable to ConvNets. It is not clear how similar things can be done to other architectures.**
**(Q3) It would be interesting and useful if the authors could show how this modification can be applied to some benchmark architectures that are built for mobile or wearable devices such as MobileNet or EfficientNet, and similar models.**
Thank you for your constructive comments. Some constraints must be satisfied to apply our method to other architectures effectively: 1) the layers of the network should be computed sequentially and independently; 2) the forward pass should include a resolution-reduction operation, such as a pooling layer; and 3) the output of each layer should be expressible as a (temporal length × channel size) matrix.
To the best of our knowledge, most wearable sensor-based human activity recognition models can satisfy the above constraints. For example, in Inception-like models, LCTMs and SLAP can be effectively applied for each module rather than each operator.
Please check the attached .pdf file in “Global(Author) Rebuttal”. In Table 1, we applied our LCTMs and SLAP to SqueezeNet. Specifically, the output of a squeeze convolution layer in each fire module is fed into LCTMs and connected to the last layer.
As a result, our modification significantly improved the F1-score of SqueezeNet on 71% of all datasets while maintaining its FLOPs.
**(Q2) I could not understand whether this CALANet is only useful to HAR or if it can also help with other data types like audio data or bio signals.**
Thank you for your constructive comments. CALANet is useful when information lost in intermediate layers affects classification accuracy. In this regard, CALANet can be useful for some applications using bio-signals. On the other hand, most audio data is collected at a high sampling frequency, so those applications require the capability to model long-term temporal dependencies. In this regard, CALANet may not be appropriate.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and enthusiasm for improving this work. Considering additional datasets, such as biosignals similar to motion sensors, can improve the demonstration of this work's applicability to other ML applications. Also, a proper discussion on how the method presented in this work can be applied to different architectures is necessary. The new results should also be appropriately integrated into the paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your recommendation and encouragement.
We applied CALANet to the ECG heartbeat classification problem using the MIT-BIH arrhythmia dataset [1], which includes 24-hour ambulatory ECG recordings collected from inpatients and outpatients at Boston's Beth Israel Hospital. The dataset has 21,892 heartbeats, each with a signal length of 187. In the following Table, CALANet exhibited comparable performance with other networks designed to process ECG signals. This result shows that CALANet has promising applicability to other ML applications.
||CALANet|Pham et al. [2] |Ganguly et al. [3]|
|:---|:---:|:---:|:---:|
|Average Accuracy|98.2|98.5|97.3 |
In addition, we elaborate on how CALANet can be applied to other architectures.
1. Define the smallest unit of a set of adjacent operators repeated across the network architecture as the "layer," such as the fire module of SqueezeNet and the residual module of ResNet.
2. Check if the output of each layer can be expressed as a (temporal length $\times$ channel size) matrix.
3. Multiply the output matrix and LCTM for each layer, and concatenate the output vectors across layers.
4. Feed the concatenated features into the classifier to predict activity.
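The four steps above can be sketched in code. This is a minimal illustration only: the layer shapes, the common feature size `d`, and the time-averaging used in place of the paper's pooling are illustrative assumptions, not the exact CALANet implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
shapes = [(64, 16), (32, 32), (16, 64)]  # (temporal length T_l, channel size C_l) per "layer"

# Step 2: toy per-layer outputs, each expressible as a (T_l x C_l) matrix.
layer_outputs = [rng.standard_normal(s) for s in shapes]

# Step 3: one channel-wise transformation matrix (LCTM) per layer; here each
# maps C_l channels to a common feature size d (a simplifying assumption).
d = 8
lctms = [rng.standard_normal((c, d)) for _, c in shapes]

# Multiply each output by its LCTM, average over time (standing in for the
# paper's pooling), and concatenate the resulting vectors across layers.
features = np.concatenate(
    [(out @ m).mean(axis=0) for out, m in zip(layer_outputs, lctms)]
)  # shape: (len(shapes) * d,)

# Step 4: feed the concatenated features into a classifier (a linear one here).
n_classes = 6
w = rng.standard_normal((features.shape[0], n_classes))
pred = int(np.argmax(features @ w))
```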
> [1] Goldberger, Ary L., et al. "PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals." circulation 101.23 (2000): e215-e220.
> [2] Pham, Bach-Tung, et al. "Electrocardiogram heartbeat classification for arrhythmias and myocardial infarction." Sensors 23.6 (2023): 2993.
> [3] Ganguly, Biswarup, et al. "Automated detection and classification of arrhythmia from ECG signals using feature-induced long short-term memory network." IEEE Sensors Letters 4.8 (2020): 1-4. | Rebuttal 1:
Rebuttal: # General Response
We thank the reviewers for their detailed feedback and valuable comments. We are glad the reviewers find that
- Our paper deals with a novel and interesting question and has a clear motivation.
- "This paper spots a very interesting problem in HAR and offers an effective solution with a nice architectural modification" – QGqB
- "The motivation of this work is clear" - 1Z8p
- "CALANet addresses an important question for efficient mobile computing in the HAR space" – VmYr
- "It was cool to see the authors start the paper with an empirical observation on how the features at different layers might differ to motivate the work" – VmYr
- "The topic of sensor-based HAR is interesting and important, aligning with the scope of the ML community" - 2n6z
- "It is good to see future studies are also discussed" - 2n6z
- Our claim and approach are theoretically reasonable.
- "It sounds reasonable that the features from all layers can provide more information for classification, compared to merely using the features from the last layer" - 1Z8p
- "This paper presented both theoretical guarantees and empirical evidence to validate the computational efficiency of the proposed model" – VmYr
- "I appreciate authors for providing theoretical proofs, for the complexity and others" - 2n6z
- Our paper is well-organized and easily understandable.
- "The paper is written well and easily understandable" – QGqB
- "The organization of paper is clear and the writing is easy to understand" -1Z8p
- "The manuscript reads well" – VmYr
We agree that some aspects of the paper can be improved, and many suggestions will be incorporated in the paper. We respond to individual comments below but briefly provide some common responses here. If any questions are unanswered or our responses need clarification, we would appreciate the chance to engage further with our reviewers.
One of the key concerns that multiple reviewers raised during the review process was that the experiments and performance analysis needed to be improved. Attached is a file containing the additional experiments suggested by the reviewers.
1. Reviewer QGqB asked about whether our architectural modification can be applied to some benchmark models that are built for mobile devices. Table 1 in .pdf file shows the comparison results of SqueezeNets with/without our cheap all-layer aggregation (LCTMs and SLAP). We explain in detail in our response below.
2. Reviewer 1Z8p asked whether we could provide results with varying numbers of layers with/without LCTMs and SLAP. Figure 1 in the .pdf file shows the tradeoff between FLOPs and F1-score for varying numbers of layers; here, tradeoff curves closer to the top-left are more efficient, with a higher F1-score per FLOP. We explain in detail in our response below.
3. Reviewer VmYr suggested ablation studies using the same network depth and similar FLOPs across different baselines. Table 1 in the .pdf file compares two networks with similar FLOPs, with and without our cheap all-layer aggregation. In addition, Figure 1 shows the efficiency of our layer aggregation. We explain in detail in our response below.
4. Reviewer 2n6z suggested that we compare CALANet with more advanced models. We conducted an additional experiment with two larger models in Table 2. We explain in detail in our response below.
Once again, we are grateful for the time and effort put into reviewing this submission, and we firmly believe that these comments will strengthen the clarity of our manuscript.
Pdf: /pdf/5f8e23330a67510d467ee71988cd1940a22f4648.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
From an Image to a Scene: Learning to Imagine the World from a Million 360° Videos | Accept (poster) | Summary: The paper introduces a large-scale, real-world, multiview, 360-degree outward dataset designed for static novel view synthesis and 3D reconstruction. To capture these data, the study develops a pipeline capable of identifying corresponding multiview frames from 360-degree outward videos with a fixed camera trajectory. To demonstrate the dataset's contribution and impact, the work proposes a diffusion-based model that achieves state-of-the-art performance using this dataset.
Strengths: Strength
- The work introduces a real-world, multiview, 360-degree outward dataset that is significantly larger and more diverse than existing datasets for static novel view synthesis. This dataset is crucial for modern data-driven methods.
- The work presents a novel and efficient pipeline that utilizes Dust3R to find corresponding frames and graphs to maintain long-range correspondence for data collection. This innovative pipeline makes the data collection process scalable.
- The work proposes a diffusion-based model, ODIN, that leverages the video dataset for static novel view synthesis by incorporating motion masking and viewpoint conditions to achieve state-of-the-art performance. Additionally, the model can generate long-range novel views, further validating the dataset's positive impact.
Weaknesses: Dataset
- The dataset is collected from 360 YouTube videos, captured beforehand by the camera operator. The camera trajectory (x, y, z) is controlled by the camera operator, thereby limiting the flexibility to select diverse viewpoints from different positions.
- The quality of the data is unknown and can be unstable across different data points because the author cannot investigate each video one by one or frame by frame. This represents a tradeoff between capturing one's own data and crawling data from the Internet. For instance, the author acknowledges in lines 178-179 that it is infeasible to guarantee the uniqueness of video content.
- Although the work claims that 1 fps is sufficient in lines 126-128, this claim lacks experimental support. The author should consider trying different fps rates and show the performance gap and computational time between them.
- Larger dataset may need more training time and computational resource.
Method
- One of the most challenging aspects that prevent people from using video for static scene tasks is the impact of motion over time (multiview inconsistency). The work proposes a motion mask method to mitigate this impact. However, this aspect lacks analysis and discussion. For instance, the importance of the hyperparameter in Eq. 3 is unknown.
- In Fig. 3, in the second row, the method is shown to synthesize humans as well. However, I wonder whether the motion masking has a negative impact on synthesizing dynamic objects when they are static.
Writing
- In lines 5-6, the claim may be too arbitrary. As mentioned in lines 87-90, there are some large-scale real-world datasets, such as CO3D, albeit with limited diversity. A related work, OmniObject3D, should also be mentioned.
- In lines 8-12, the phrase "...diverse views" may be too broad and should be softened. The proposed dataset is collected from pre-captured 360 YouTube videos. The camera trajectory is decided by the camera operator beforehand, and only the camera orientation can be controlled after that. Hence, compared to general multiview datasets, which can have multiple viewpoints from different locations at the same timestamp, the views of the proposed dataset are limited.
- Eq. 2 does not match the equation in Sec. 5.1. The equation in Sec. 5.1 contains an undefined term.
- I hope the author can define their 360 multiview videos at the beginning since there are several kinds of 360 videos. For instance, DiVa360 captures 360-degree inward videos with multiple cameras at different locations at the same timestamp.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I hope the author can verify that the viewpoints are diverse enough by discussing the difference between a multiview dataset that can capture images from different locations at the same timestamp and a dataset that can only change the camera orientation at the same timestamp.
- The dataset contains a large amount of data, which may vary in quality. I hope the author can discuss the impact of the low-quality data in the dataset. Is it negligible?
- Lines 178-179 mention that it is infeasible to guarantee the uniqueness of video content. Hence, how do you make sure there is no overlap between the training and testing sets? How do you make sure the training dataset does not contain the testing cases of other datasets? Would that be unfair for the benchmark?
- The dataset contains diverse data. How is the data distribution in terms of motion? Will high-motion data, such as scenes with rain, have a negative impact?
- Lines 126-128 need experimental support. Please consider trying it on a subset of the dataset.
- The work uses Depth Anything as the depth estimator. This is a monocular depth estimator with two types of checkpoints pretrained on two different datasets. I wonder if this causes a domain gap in the proposed dataset when using different checkpoints. Do you finetune the estimator on your dataset? Additionally, it is unknown how good the multiview consistency is on your data with this estimator.
- What is the tradeoff between using this dataset and others? From the experiment section, we know that the dataset can increase performance. But how about the training time compared to existing datasets? Maybe the author can consider adding the training time from the appendix to the table.
- How sensitive and stable is the motion masking hyperparameter in Eq. 3? How should it be tuned?
- What is the performance when the input image contains many dynamic objects?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I agree with the limitation section in the paper.
- One potential limitation of the large-scale dataset is maintaining data quality and a high fps.
- Another potential limitation is ensuring there is no overlap between the training and testing sets.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review. We are happy that they find our dataset crucial for modern data-driven methods and find our correspondence search pipeline to be innovative. We have addressed comments and questions below and are happy to engage in further discussion. Note that we could not address all points within the character limit and will address the remaining questions during the discussion phase in the official comments.
**The camera trajectory (x, y, z) is controlled by the camera operator, thereby limiting the flexibility to select diverse viewpoints from different positions.**
We found that the increased scale and diversity of camera views in the 360-1M dataset compensates for this limitation, as shown qualitatively and quantitatively in our results. The 1 million in-the-wild videos provide camera movements with extremely diverse trajectories, for example people sky-diving, climbing mountains, and walking many blocks through cities. Though each video has a fixed (x, y, z) trajectory, the diversity of viewpoints across the data gives our model flexibility in generating from different viewpoints compared to models trained on previous datasets such as ObjaVerse and Co3D.
**The quality of the data is unknown and can be unstable across different data points. For instance, the author acknowledges that it is infeasible to guarantee the uniqueness of video content.**
While it’s true that we do not crawl every frame, we contend that the quality of the model is a strong indicator of the data quality. In our case, we show that the scale and quality of 360-1M lead to improved model performance compared to previous datasets.
**Although the work claims that 1 fps is sufficient in lines 126-128, this claim lacks experimental support.**
We initially ran small-scale proxy experiments at various FPS before scaling to the full dataset; performing such experiments at the 1-million-video scale was computationally infeasible. We found that sampling above 1 FPS brought no meaningful performance gain. We’ve also included the computational cost difference. Below we’ve included the results of our experiments on DTU and have added them to the appendix. For this experiment we kept the window size for correspondence searching fixed and used 50k videos from 360-1M.
| | LPIPS | PSNR | SSIM |
|----------|-------|-------|------|
| 0.5 FPS | 0.488 | 15.88 | 0.492|
| 1 FPS | 0.467 | 16.67 | 0.525|
| 5 FPS | 0.461 | 16.85 | 0.539|
| 10 FPS | 0.475 | 16.71 | 0.536|
**Larger dataset may need more training time and computational resource.**
We agree that considering computational cost is important. The task of finding corresponding pairs in video and estimating their relative pose was the main computational cost for us. By providing the image pairs and relative pose, other researchers can benefit from large-scale multi-view datasets without the computational burden of parsing a million videos.
**One of the most challenging aspects is the impact of motion over time. This aspect lacks analysis. For instance, the importance of the hyperparameter in Eq. 3 is unknown.**
We’ve included an ablation below over the hyperparameter lambda in Eq. 3 and have added discussion about choosing lambda to the appendix. We found that if the lambda value was too small, then the model would mask all objects in the scene and the reconstruction quality was poor. If the lambda value was too large, then smearing would occur near dynamic objects as the model was forced to predict the motion of the object.
| | LPIPS | PSNR | SSIM |
|--------------------|-------|-------|-------|
| $\lambda$ = 0.1    | 0.498 | 12.31 | 0.366 |
| $\lambda$ = 0.5    | 0.467 | 14.73 | 0.402 |
| $\lambda$ = 1      | 0.378 | 16.67 | 0.525 |
| $\lambda$ = 2      | 0.395 | 14.94 | 0.431 |
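For intuition, a generic form of such a motion-masked objective is the following (an illustrative sketch only, not necessarily the paper's exact Eq. 3):

$$\mathcal{L} = \big\| M \odot (\hat{x} - x) \big\|_2^2 + \lambda \, \big\| 1 - M \big\|_1$$

Here $M$ is a per-pixel mask that downweights dynamic regions, $\hat{x}$ is the predicted view, and the $\lambda$ term penalizes the amount of masking: with $\lambda$ too small the degenerate solution $M \to 0$ masks everything, while with $\lambda$ too large the model is forced to predict the motion of dynamic objects, consistent with the trade-off described above.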
**I wonder whether the motion masking has a negative impact on synthesizing dynamic objects when they are static.**
Qualitatively we found that motion masking helped in reconstructing dynamic objects even when they are static such as parked cars. We hypothesize that due to the scale of the dataset, objects which can be both dynamic and static are seen multiple times in both settings.
**As mentioned, there are some large-scale real-world datasets, such as CO3D, albeit with limited diversity. A related work, OmniObject3D, should be mentioned.**
Thanks, we have added OmniObject3D to the related works. We’ve changed lines 5-6 to be more concrete. Co3D is impressive and relatively large-scale, but orders of magnitude smaller than our proposed dataset (20 thousand vs. 1 million videos).
**In lines 8-12, the phrase "...diverse views" may be too broad.**
Often datasets such as DTU, MVImageNet and Co3D have images taken from positions close in space around an object (1-5 meters apart). By diverse views we meant varying differences in camera pose (up to 50 meters apart and 360 degree rotation) and from various locations, not only around objects or landmarks. We will make it more precise what we mean by diverse viewpoints in the abstract and throughout the paper.
**I hope the author can discuss the difference between a multiview dataset that can capture images from different locations at the same timestamp and one that can only change the camera orientation.**
One limitation of a multi-view video dataset is that camera poses at different points in the (x,y,z) trajectory will have different timestamps. Learning novel view synthesis from dynamic scenes is still an open problem which temporal masking addresses to an extent, but there is still significant progress to be made. Current novel view synthesis works (ZeroNVS, MegaScene) seem capable of training on datasets where time and spatial location change simultaneously. Datasets which capture images from different locations at the same timestamp are ideal, but we believe this would be extremely difficult to manually collect at scale, and harnessing existing video data is an appealing alternative.
---
Rebuttal Comment 1.1:
Title: Continued Response to Reviewer whPn
Comment: Below we've included responses to the remaining questions and comments from reviewer whPn. We appreciate their detailed feedback and questions.
**The dataset contains a large amount of data, which may vary in quality. I hope the author can discuss the impact of the low-quality data in the dataset. Is it negligible?**
We found that low quality video frames such as blank frames, blurry frames, low resolution frames, or those with minimal camera movement were automatically filtered out in the Dust3r scoring phase. If the frames were low quality they naturally received a low score because the depth could not be well estimated. We verified this through manual inspection by randomly sampling frames which were selected by the Dust3r scoring mechanism.
**Lines 178-179 mention that it is infeasible to guarantee the uniqueness of video content. Hence, how do you make sure there is no overlap between the training and testing set?**
We ran deduplication over video titles, URLs, and thumbnails for all 1 million videos, though this does not guarantee that videos contain no overlapping clips. The primary utility of a large multi-view dataset is for training models. We believe that datasets such as Mip-NeRF are better for evaluation.
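As an illustration of the kind of metadata-level deduplication described above, here is a minimal sketch; the field names and the use of thumbnail hashing are our own assumptions, not the paper's actual pipeline:

```python
import hashlib

def dedup_videos(videos):
    """Keep only videos whose title, URL, and thumbnail are all unseen.

    `videos` is a list of dicts with hypothetical keys 'title', 'url',
    and 'thumbnail' (raw bytes); illustrative only, not the real schema.
    """
    seen = set()
    kept = []
    for v in videos:
        keys = (
            v["title"].strip().lower(),               # normalized title
            v["url"],                                 # exact URL match
            hashlib.md5(v["thumbnail"]).hexdigest(),  # thumbnail content hash
        )
        if any(k in seen for k in keys):
            continue  # duplicate on at least one metadata field
        seen.update(keys)
        kept.append(v)
    return kept

videos = [
    {"title": "Tokyo Walk", "url": "u1", "thumbnail": b"aa"},
    {"title": "tokyo walk ", "url": "u2", "thumbnail": b"bb"},  # duplicate title
    {"title": "Ski Run", "url": "u3", "thumbnail": b"aa"},      # duplicate thumbnail
]
print(len(dedup_videos(videos)))  # → 1
```

As the rebuttal notes, matching only on titles, URLs, and thumbnails cannot detect videos that merely share overlapping clips.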
**Lines 126-128 [in theory we can align the views to look at the various regions of the scene to form multiple view correspondences] need experimental support.**
Figures 8 and 9 in the appendix show qualitative paired frames from the dataset. Quantitatively, we found 80,567,325 paired frames from the videos by aligning the views. Upon manual inspection we found that the frames were aligned to contain overlapping content. We will add more qualitative paired frames from the dataset to the appendix for readers to view.
**The work uses Depth Anything as the depth estimator. I wonder if this causes a domain gap in the proposed dataset.**
We used the outdoor version of Depth Anything off-the-shelf. We found that the depth estimation did not need to be extremely precise, as small errors in the scale of the scene did not impact training the model.
**What is the tradeoff between using this dataset and others?**
We believe our dataset is complementary to other datasets. For example, ZeroNVS combines multiple datasets to train their model. We see no reason why our dataset could not be similarly combined with other multi-view datasets to train even better models. In terms of training, the training time scales proportionally with dataset size so more data means longer training. We will add a FLOPS comparison to the appendix.
---
Rebuttal 2:
Title: Growth Rate (Open question)
Comment: I just realized that my discussion is not public to the authors. Anyway, I copy and paste it here...
I have an open question regarding data maintenance. The dataset is large-scale because it includes existing data accumulated over time from YouTube, in addition to the data collected through your pipeline. My concern is whether the data growth rate might slow down in the future. Perhaps the authors could consider plotting the total amount of data over time or by year, so that readers can understand the data growth rate. This could also help the authors estimate the expected time needed to collect a new large amount of data and update the dataset to a second version. | Summary: This paper considers novel view synthesis from a single image. The main contribution is a dataset sourced from 1 million 360 videos from YouTube, which is used to create about 380 million pairs of images with corresponding relative poses. With this dataset the authors train a diffusion model similar to zero-1-to-3 [24], but for general scenes instead of objects, and real-world scenes instead of synthetic ones. State-of-the-art results are shown for novel view synthesis from single images.
Strengths: - The main contribution of the paper is that it introduces a dataset for novel view synthesis that is significantly larger (380 million image pairs extracted from 1 million 360-videos) than existing datasets for the task, and considers whole scenes rather than single objects as e.g. objaverse.
- The authors train a diffusion model (ODIN) for novel view synthesis that is more general than existing work (zero-1-to-3 and follow up works which only consider single objects), and compared to scene based methods (zeroNVS), it is more accurate and general, permitting a larger set of possible relative poses (R,t). The improvements are mainly due to training on the larger dataset since the methodology, namely the diffusion model architecture and training, is very similar to zero-1-to-3.
- The proposed diffusion model, ODIN, along with the 3d reconstruction method dust3r can generate realistic-looking 3d reconstruction from a single image.
Weaknesses: - There is a lack of qualitative examples. Specifically, a few things that would have been useful to show are 1) The generated images along the trajectories in fig. 4 in the form of both images and videos. Only the final 3d reconstruction by dust3r and not the intermediately generated images by ODIN are shown, 2) Images of the renderings of different methods in table 1 and 2, and 3) Generated images and the corresponding 3d reconstructions on Google Scanned Objects, so qualitative comparison corresponding to table 3 for all methods.
- There are a few missing references and as a result too strong claims in the paper. 1) “Long-Term Photometric Consistent Novel View Synthesis with Diffusion Models” (Yu et al. ICCV 2023), 2) “Geometry-Free View Synthesis: Transformers and no 3D Priors” (Rombach et al. ICCV 2021), and 3) “Look Outside the Room: Synthesizing A Consistent Long-Term 3D Scene Video from A Single Image” (Ren et al. CVPR 2022). These works, and also zeroNVS (which is cited) all address novel view synthesis from a single image, though trained on smaller datasets. ZeroNVS also constructs a 3d scene by training a NeRF with Score distillation sampling. Due to this I think e.g. the claim on line 48 that ODIN is the first to reasonably reconstruct a 3d scene from a single image is too strong.
Technical Quality: 3
Clarity: 4
Questions for Authors: - I wonder how important the use of dust3r is for the 3d reconstruction. Is the confidence measure by dust3r used, and if so, how? I would imagine that it would be useful to get a consistent 3d reconstruction even if the views generated by ODIN are not consistent. By consistent I mean both that the views might not adhere to a specific camera model well enough for traditional sfm method to work well, or that the different views contain contradicting contents. Is that why there are some holes in the reconstructions in fig. 4 (e.g. the middle part in the leftmost 3d model of the cathedral), because it looks from the positions of the cameras that they should not be empty? What would happen if colmap was used on the generated images, would it still result in a reasonable 3d model?
- Did you notice any failure cases related to just conditioning on the previously generated image for the 3d generation? The camera motions in fig. 4 are fairly simple with little rotations and objects do not disappear and reappear, but if there would be more rotations I would imagine that it needs to be more carefully handled, e.g. as in the paper “WonderJourney: Going from Anywhere to Everywhere” (CVPR 2024).
- For 3d reconstruction, many methods (e.g. zeroNVS) use a NeRF model and score distillation sampling instead of direct 3d reconstruction from multiple generated images like it is done in this paper. Were any experiments like that performed?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: This is adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review. We are glad they appreciated the diversity and scale of our proposed dataset and the new capabilities of our resulting model. We provide responses to their comments and questions below and are happy to engage in further discussion.
**A few things that would have been useful to show are 1) The generated images along the trajectories in fig. 4 in the form of both images and videos. Only the final 3d reconstruction by dust3r and not the intermediately generated images by ODIN are shown, 2) Images of the renderings of different methods in table 1 and 2, and 3) Generated images and the corresponding 3d reconstructions on Google Scanned Objects, so qualitative comparison corresponding to table 3 for all methods.**
Great suggestion; we’ve included generated videos in the one-page pdf for better visualization and have added 3D reconstruction examples and further image examples to the paper.
**There are a few missing references and as a result too strong claims in the paper. ZeroNVS also constructs a 3d scene by training a NeRF with Score distillation sampling. Due to this I think e.g. the claim on line 48 that ODIN is the first to reasonably reconstruct a 3d scene from a single image is too strong.**
Thanks for the suggestion. We’ve removed this sentence from the introduction and softened the language overall. Additionally we’ve added the missing references you’ve pointed us to.
**I wonder how important the use of dust3r is for the 3d reconstruction. Is the confidence measure by dust3r used, and if so, how? I would imagine that it would be useful to get a consistent 3d reconstruction even if the views generated by ODIN are not consistent. By consistent I mean both that the views might not adhere to a specific camera model well enough for traditional sfm method to work well, or that the different views contain contradicting contents. Is that why there are some holes in the reconstructions in fig. 4 (e.g. the middle part in the leftmost 3d model of the cathedral), because it looks from the positions of the cameras that they should not be empty? What would happen if colmap was used on the generated images, would it still result in a reasonable 3d model?**
For the 3D reconstruction portion, colmap performs qualitatively the same. You’re correct that the hole in fig. 4 is due to low confidence from the Dust3r model in that region. In general, as long as enough images are generated, we found experimentally that colmap can perform similarly. To summarize, Dust3r paired with the graphical search is necessary for finding corresponding frames while constructing the dataset, but at inference the model is agnostic to the method for 3D reconstruction.
**Did you notice any failure cases related to just conditioning on the previously generated image for the 3d generation? The camera motions in fig. 4 are fairly simple with little rotations and objects do not disappear and reappear, but if there would be more rotations I would imagine that it needs to be more carefully handled, e.g. as in the paper “WonderJourney: Going from Anywhere to Everywhere” (CVPR 2024).**
Yes, as the reviewer mentioned the most common failure case is if an object comes in and out of occlusion it can be inconsistently generated by the model. With long-range camera trajectories more parts of the scene will come in and out of view which can lead to inconsistent generations, a typical failure mode of conditional NVS models. We found that the SDS anchoring introduced by ZeroNVS was effective in improving this consistency.
**For 3d reconstruction, many methods (e.g. zeroNVS) use a NeRF model and score distillation sampling instead of direct 3d reconstruction from multiple generated images like it is done in this paper. Were any experiments like that performed?**
In general the dataset and model are agnostic to the downstream 3D reconstruction method.
We originally chose direct 3D reconstruction so that the geometry of the scene could be inferred in real time and we found it better for large-scale scenes and inferring geometry compared to distilling into a NeRF model.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing these answers. I keep my accept rating. | Summary: This paper introduces a new approach for efficiently finding corresponding frames from diverse viewpoints from YouTube videos at scale. The resulting 360-1M dataset contains over eighty million frames from approximately one million videos. Additionally, the paper also builds on the 360-1M dataset by proposing the ODIN model, which synthesizes novel views and reconstructs 3D scenes from a single input image. The ODIN model is trained by leveraging the diverse viewpoints in the dataset to handle significant camera view changes and complex real-world scenes. The authors demonstrate the utility of the 360-1M dataset as well as the benefits of their proposed ODIN model by comparing it to state-of-the-art 3D reconstruction approaches, where it outperforms the latter by a significant margin.
Strengths: 1) The model figures are informative and especially helpful in helping the reader to understand the different stages of the data curation process as well as the intuition behind each stage. The paper is also well-organized and well-written.
2) The introduced algorithm to transform 360-degree videos into the required multi-view format is especially significant. It opens up the possibility of scaling up available 3D object and scene datasets, which are often plagued by a lack of large-scale data. Furthermore, it is especially helpful that this algorithm appears to generalize well to videos from diverse subject categories.
3) The ODIN model also introduces a novel modeling objective which is conditioned on both rotation and translation. This differs from prior work which is unable to do so. This may be beneficial for generating more realistic samples of real world objects and scenes for further training.
Weaknesses: 1) The paper relies heavily on the 360-degree videos collected from YouTube, which may contain varying video qualities and resolutions. The described preprocessing steps do not fully account for this diversity in video quality, which may negatively affect the quality of the extracted multi-view data.
2) It is mentioned in line 220 that optimizing Equation 2 directly results in a degenerate solution and Equation 3 is introduced to mitigate this problem. However, there is no ablation to demonstrate this. It may help make the paper more comprehensive.
3) In Tables 2 and 3, it is shown that ODIN outperforms other approaches on the LPIPS and PSNR metrics but not the SSIM metric consistently. However, there is no discussion of possible reasons for this discrepancy.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please look at the above-mentioned limitations.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review. We are glad they found our algorithm for transforming 360° video into multi-view data to be especially significant and see its potential for scaling 3D datasets. We provide responses to their comments and questions below and would be happy to engage in further discussion.
**The paper relies heavily on the 360-degree videos collected from YouTube, which may contain varying video qualities and resolutions. The described preprocessing steps do not fully account for this diversity in video quality, which may negatively affect the quality of the extracted multi-view data.**
We experimented with filtering techniques such as CLIP filtering, where we fine-tuned a CLIP model to distinguish low-quality frames from high-quality frames; for the fine-tuning data we hand-labeled 10k example frames. We found that the confidence scoring used to find image pairs was sufficient for removing low-quality videos, such as those with blank, corrupted, or blurry frames, and static videos. Intuitively, frames that are low in quality are unlikely to be paired, since their relative pose and depth are difficult to estimate. For resolution, we filtered out videos below a resolution of 2K. We also provide the available resolutions, view count, etc. for each video in the metadata so users can filter videos based on various criteria.
**It is mentioned in line 220 that optimizing Equation 2 directly results in a degenerate solution and Equation 3 is introduced to mitigate this problem. However, there is no ablation to demonstrate this. It may help make the paper more comprehensive.**
Thanks for the suggestion. We’ve included an ablation below over the hyperparameter lambda in Eq. 3 and have added discussion about choosing lambda to the appendix. We found that if the lambda value was too small, then the model would mask all objects in the scene and the reconstruction quality was poor. If the lambda value was too large, then smearing would occur near dynamic objects as the model was forced to predict the motion of the object.
| | LPIPS | PSNR | SSIM |
|--------------------|-------|-------|-------|
| $\lambda$ = 0.1    | 0.498 | 12.31 | 0.366 |
| $\lambda$ = 0.5    | 0.467 | 14.73 | 0.402 |
| $\lambda$ = 1      | 0.378 | 16.67 | 0.525 |
| $\lambda$ = 2      | 0.395 | 14.94 | 0.431 |
**In Tables 2 and 3, it is shown that ODIN outperforms other approaches on the LPIPS and PSNR metrics but not the SSIM metric consistently. However, there is no discussion of possible reasons for this discrepancy.**
On line 247 we briefly note that previous works [1,2,3] have shown that SSIM and PSNR do not correlate well with better novel view synthesis. For example, figure 7 of [3] shows that a frame of uniformly grey pixels can outperform far more reasonable generations. In general, PSNR is sensitive to low-level pixel statistics, which are tangential to the content of the image. We will clarify this point further in section 6.1.
1. Zero-Shot 360-Degree View Synthesis from a Single Image
2. Generative novel view synthesis with 3d-aware diffusion models
3. Nerdi: Single-view nerf synthesis with language-guided diffusion as general image priors
---
Rebuttal 2:
Comment: Thank you very much for your comprehensive efforts in addressing my concerns. In particular, I find your response on how and why the long-range image pairs are used particularly helpful. I also appreciate your efforts on the additional ablation experiments, given the time and computational constraints on your end as well as empirically quantifying the effects of using 1 FPS in other responses. After reading all of the reviewers' feedback as well as the corresponding responses from the authors, I will retain my original rating. | Summary: This paper addresses the challenge of training models with 3D object understanding using large-scale real data, proposing the use of 360-degree videos as a scalable and diverse data source. Its main contributions include a 360-1M video dataset, an efficient method to convert 360-degree videos into multi-view data, and a novel diffusion-based model for view synthesis. The proposed model is evaluated on NVS and 3D reconstruction tasks compared to prior approaches.
Strengths: 1. The paper proposes a novel large-scale real-world dataset collected from the Internet, along with an efficient pipeline to extract valid correspondences between frames. This dataset should be useful for future research in the community.
2. The idea of utilizing 360-degree video to provide more correspondences is interesting.
3. Experiments demonstrate the proposed model is able to predict novel view images along a long trajectory, which is an important capability for large-scale scene/object understanding.
4. The paper is overall well written and easy to follow.
Weaknesses: 1. Besides its major contribution as a large-scale and open-world training dataset compared to ZeroNVS, the model primarily aligns with Zero-1-to-3, which somewhat limits its technical contributions.
2. Experiments:
[-] Could you provide a reason why Zero-1-to-3 is not compared on the DTU benchmark, which is suitable for Zero-1-to-3 given its focus on single objects placed on tabletops?
[-] I wonder how Zero-1-to-3 is applied in Fig. 3, as its camera pose involves both elevation and azimuth angles, which may differ from the video’s camera poses. Could the authors clarify how they calculate the camera pose?
[-] This paper presents quantitative results only for experiments on DTU and MipNeRF360 (Tables 1 and 2). Including visual comparisons could help understanding.
[-] Why does Table 4 in the supplementary materials show a higher Chamfer Distance compared to ZeroNVS?
3. Others
[-] Considering the training requirements outlined in the appendix, it’s evident that training is highly resource-intensive, which is reasonable given the dataset’s scale. I wonder if there are potential methods to expedite this process, such as initializing from a pretrained Zero-1-to-3 model?
[-] Could the authors elaborate on how long-range image pairs contribute to the model? It’s challenging to distinguish correspondence in Fig. 9 from the appendix, as the images appear not to overlap. While such pairs may assist the model in extrapolating unseen regions, could they potentially hinder learning the viewpoint changes of observable content?
[-] The visualization in Fig. 3 could be enhanced by including relative changes in viewing angles, as it’s currently difficult for humans to find correspondence between the output and input view images.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above. Overall, I believe the proposed dataset and the demonstrated performance of the model hold promise for future research. I look forward to the rebuttal addressing my concerns.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thorough review. We are glad they find our model demonstrates important capabilities for large-scale scene understanding, the idea of using 360° video is interesting, and our dataset will be useful for future researchers. We provide responses to their comments and questions below and are happy to discuss further.
**Besides its major contribution as a large-scale and open-world training dataset compared to ZeroNVS, the model primarily aligns with Zero-1-to-3, which somewhat limits its technical contributions.**
We contend that the technical contribution of this work goes significantly beyond a new dataset. We've enumerated our contributions below.
- We present a new technique capable of finding image-pairs from in-the-wild 360° videos. Corresponding image pairs are crucial for training novel view synthesis (NVS) models and current approaches such as colmap and hloc are not capable of extracting such pairs from large-scale video.
- We introduce temporal masking, a method which enables training on dynamic scenes which has been a limitation of novel view synthesis works.
- We provide a new dataset consisting of over 1 million 360° videos, multi-view image pairs from the videos with their relative camera pose, and meta data for each video.
- Our model demonstrates novel capabilities in generating full, real-world scenes along long-range trajectories consisting of both rotation and translation, compared to Zero-1-to-3, which is limited to synthetic objects and camera rotations about the object.
**Could you provide a reason why Zero-1-to-3 is not compared on the DTU benchmark, which is suitable for Zero-1-to-3 given its focus on single objects placed on tabletops?**
We originally omitted Zero-1-to-3 since it is very similar to ZeroNVS and strictly worse in performance. We have added it back to Table 1 for completeness.
**I wonder how Zero-1-to-3 is applied in Fig. 3, as its camera pose involves both elevation and azimuth angles, which may differ from the video’s camera poses. Could the authors clarify how they calculate the camera pose?**
It’s true that Zero-1-to-3 is limited to specific relative camera poses. We fit the azimuth, elevation, and radius of rotation to minimize the L2 distance between the Zero-1-to-3 camera pose and the target pose. Due to this limitation of Zero-1-to-3, we primarily focused our comparisons on ZeroNVS, as it is designed for unconstrained camera movement and real-world scenes, and has a similar architecture.
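To make the fitting step concrete, below is a minimal sketch of how one could fit spherical camera parameters (azimuth, elevation, radius) to an arbitrary target camera position by minimizing L2 distance. We use a coarse grid search for simplicity; all function names and the discretization are our own illustrative choices, not the authors' implementation:

```python
import math

def spherical_to_pos(az, el, r):
    # Camera position on a sphere of radius r around the object (angles in radians).
    return (r * math.cos(el) * math.sin(az),
            r * math.sin(el),
            r * math.cos(el) * math.cos(az))

def fit_spherical_pose(target_pos, n=60, radii=(0.5, 1.0, 1.5, 2.0)):
    """Grid-search the (azimuth, elevation, radius) triple whose camera
    position has minimal squared L2 distance to `target_pos`."""
    best, best_d = None, float("inf")
    for i in range(n):
        az = 2 * math.pi * i / n
        for j in range(n):
            el = math.pi * (j / (n - 1) - 0.5)  # elevation in [-pi/2, pi/2]
            for r in radii:
                p = spherical_to_pos(az, el, r)
                d = sum((a - b) ** 2 for a, b in zip(p, target_pos))
                if d < best_d:
                    best, best_d = (az, el, r), d
    return best, best_d

# Target directly above the object at height 1: best fit is el = pi/2, r = 1.
(az, el, r), err = fit_spherical_pose((0.0, 1.0, 0.0))
print(r, round(err, 6))  # → 1.0 0.0
```

In practice one would refine the coarse grid with a continuous optimizer, but the sketch shows why some target poses (e.g. large translations off the sphere) cannot be represented exactly under this parameterization.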
**This paper presents quantitative results only for experiments on DTU and MipNeRF360 (Tables 1 and 2). Including visual comparisons could help understanding.**
We’ve included videos of our generation for better visual understanding in our one page pdf. Additionally we will add qualitative comparison for MipNeRF360 and DTU to the main paper.
**Why does Table 4 in the supplementary materials show a higher Chamfer Distance compared to ZeroNVS?**
Thanks for catching the typo in the appendix. That should be a comparison with Zero-1-to-3. We’ve corrected the error.
**I wonder if there are potential methods to expedite this process, such as initializing from a pretrained Zero-1-to-3 model?**
We tried initializing from Zero-1-to-3 and ZeroNVS. We found that it led to slightly faster convergence, about ~10% fewer training iterations. The performance at convergence was the same when starting from both pretrained models.
**Could the authors elaborate on how long-range image pairs contribute to the model?**
The long-range image pairs in the training data are crucial for generating long-range trajectories and modeling large-scale scenes with our model, such as in figure 1. Models are limited to the types of camera movements in their training distribution; we observe this with Zero-1-to-3 being limited to only rotating the camera around objects.
**It’s challenging to distinguish correspondence in Fig. 9 from the appendix, as the images appear not to overlap. While such pairs may assist the model in extrapolating unseen regions, could they potentially hinder learning the viewpoint changes of observable content?**
The trees and the steps in the background are shared by both images and Fig. 9 shows that Dust3r is capable of accurately finding the relative camera poses. Empirically we did not find that extrapolating to unseen regions decreased performance of the model for observable content.
**The visualization in Fig. 3 could be enhanced by including relative changes in viewing angles, as it’s currently difficult for humans to find correspondence between the output and input view images.**
Thanks for the suggestion, we will add this to figure 3 to improve the visualization.
---
Rebuttal Comment 1.1:
Title: Post-rebuttal
Comment: I thank the authors' efforts in addressing my concerns, especially over the experiment details. After reading the opinions from other reviewers, I raise my rating to weak accept. | Rebuttal 1:
Rebuttal: We thank the reviewers for their insightful reviews and feedback. We are glad that the reviewers found our dataset to be crucial for modern data-driven methods [whPn] and our proposed pipeline for constructing multi-view data from 360° video to be innovative [whPn], interesting [8VWi], and especially significant for scaling 3D datasets [Jg5T]. Additionally, we appreciate that the reviewers found that our model is important for large-scale scene understanding [8VWi], introduces a novel modeling objective [Jg5T], and achieves state-of-the-art results for novel view synthesis [whPn].
**In the one-page pdf we’ve included generated videos from our model to better visualize the outputs of ODIN. We recommend downloading the pdf and viewing it with Adobe Acrobat for the best viewing experience.**
Below we’ve addressed individual questions and comments and are happy to engage in further discussion with reviewers.
Pdf: /pdf/8c2e971c85de416cff175faec0a24879c8462835.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
FreqBlender: Enhancing DeepFake Detection by Blending Frequency Knowledge | Accept (poster) | Summary: The generalization of DeepFake detection can be addressed by enhancing training data with synthetic fake faces, known as pseudo-fakes. Traditional methods generate these faces by spatial blending operations. However, these methods fail to simulate the frequency distribution, where additional important forgery clues might be found.
This paper introduces an interesting method called FreqBlender, which attempts to blend proper frequency knowledge to enhance the effectiveness of pseudo-fake faces. This method involves identifying forgery clues in the frequency domain and blending the corresponding frequency components from fake faces into real faces. This process is challenging due to the variability and spread of forgery clues across different frequency ranges.
To achieve frequency decomposition, the authors propose a Frequency Parsing Network (FPNet) that can adaptively partition the frequency domain. The network, consisting of an encoder and three decoders, extracts semantic, structural, and noise information from the faces. Training FPNet is difficult due to the lack of ground truth for frequency distribution, so the authors devise a novel training strategy leveraging inner correlations among different frequency components.
Once trained, FPNet can extract the structural information of a fake face's frequency component and blend it with a real face to generate a pseudo-fake face. This method complements existing spatial blending methods and improves detection performance on multiple DeepFake datasets, demonstrating its effectiveness.
Strengths: 1. Enhancing the generalization of DeepFake detection, as this paper does, is a critical and current issue in AI safety, as real-world DeepFakes are likely generated by unknown models. This method addresses this challenge by taking an innovative approach, creating pseudo-fake faces through blending frequency knowledge rather than the conventional spatial knowledge used in existing methods.
2. This work identifies the limitations of existing methods, noting that spatial blending only mimics the visual similarity of wild fake faces but overlooks their frequency characteristics. By analyzing the frequency distribution, the authors describe a Frequency Parsing Network (FPNet) to parse the frequency range carrying forgery traces. Interestingly, this method doesn't require wild fake faces for blending; instead, it uses existing spatial-based pseudo-fake faces as surrogates.
3. The method's efficacy is validated across multiple recent DeepFake datasets, demonstrating its effectiveness with various backbones and complementing existing spatial-blending methods. This showcases its robustness and practical applicability in enhancing detection performance.
Weaknesses: This paper proposes an interesting and intuitive method that seems reasonable to me. Here are some recommendations for future improvements:
1. Investigate more fine-grained frequency ranges to uncover the detailed composition of artifacts. This could potentially enhance the model's performance by providing a more nuanced understanding of forgery clues.
2. Develop algorithms to enhance the explainability of the model’s decisions. This will help users understand how and why a particular face is identified as a DeepFake, increasing trust and usability in practical applications.
3. Extend the method to handle other types of forgery operations beyond face-swapping, such as whole-face synthesis and attribute editing. This will broaden the method's applicability and ensure it remains effective across a wider range of DeepFake techniques.
4. Utilizing LLMs may help you parse the frequency more precisely, as LLMs can provide the prior knowledge which may compensate for the lack of ground truth when training FPNet.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As described in Section 3, the statistics of the frequency distribution are calculated using an azimuthal average, which includes a logarithmic transformation and the calculation of azimuthally-averaged flux in circular annular apertures. Could you provide more details on why placing the center of the circular annular aperture at the top-left corner of the frequency map results in a one-dimensional spectrum diagram?
2. The FPNet uses three independent decoders to analyze the frequency domain. The output of the decoders is not clearly explained, as it is visualized in grayscale as the frequency map in Fig. 5. I understand that the output is a soft mask ranging from [0,1] for the frequency map. Therefore, I suggest using a color version to replace the DCT map in Fig. 5.
3. In the discussion on the loss of authenticity determination, the authors aim to create two sets of faces, with and without forgery traces. Does the set C_r represent the face sets that do not contain structural information of fake faces, while C_f corresponds to the face sets that have structural information of fake faces?
4. In Prior and Integrity Loss, the final term is designed to ensure that the combination of three frequency components covers the entire frequency domain. This term sums these frequency components and calculates the distance from a mask of 1. However, this term might not guarantee that the sum of these frequency components equals 1, but it can reduce deviations. What is the rationale behind this design?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This paper lists its limitations in the main body. Since the proposed method is designed to address the limitations of existing spatial-blending techniques, it operates under the assumption that faces are forged using face-swapping methods. It is unlikely to tackle other types of forgeries, such as whole-face synthesis.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review on the significance of our topic, novelty and experimental results.
**Q1: Could you provide more details on why placing the center of the circular annular aperture at the top-left corner of the frequency map results in a one-dimensional spectrum diagram?**
**R1:** Thanks for the question. We follow the process in [26,27], where we use the top-left corner of the frequency map as the center for several circular annular apertures. The radius of these apertures varies from 0 up to the side length of the frequency map, with a fixed interval. We then average the signals inside the interval area between adjacent circular annular apertures. In this way, we convert a two-dimensional frequency map into a one-dimensional frequency spectrum. We will revise the corresponding descriptions to improve clarity.
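For illustration, a minimal sketch of this azimuthal averaging (the function name, binning scheme, and use of NumPy are assumptions for exposition, not the exact implementation of [26,27]; the logarithmic transform mentioned in the question would typically be applied to the magnitudes beforehand):

```python
import numpy as np

def azimuthal_average(freq_map, n_bins):
    # freq_map: 2-D array of frequency magnitudes (e.g., a DCT map).
    h, w = freq_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance of each coefficient from the top-left corner (DC component),
    # which serves as the common center of the annular apertures.
    radius = np.sqrt(yy ** 2 + xx ** 2)
    # Annulus edges at fixed intervals, from 0 up to the largest radius.
    bins = np.linspace(0.0, radius.max() + 1e-6, n_bins + 1)
    idx = np.digitize(radius.ravel(), bins) - 1
    sums = np.bincount(idx, weights=freq_map.ravel(), minlength=n_bins)
    counts = np.bincount(idx, minlength=n_bins)
    # Average the signal within each annulus -> 1-D spectrum.
    return sums[:n_bins] / np.maximum(counts[:n_bins], 1)
```

Each entry of the returned vector is the mean magnitude over one annulus, so the 2-D map collapses into a 1-D spectrum indexed by distance from the DC corner.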
**Q2: I understand that the output is a soft mask ranging from [0,1] for the frequency map. Therefore, I suggest using a color version to replace the DCT map in Fig. 5.**
**R2:** Thanks for your suggestion. We will replace it with a color version.
**Q3: Does the set C_r represent the face sets that do not contain structural information of fake faces, while C_f corresponds to the face sets that have structural information of fake faces?**
**R3:** Yes, your understanding is correct.
**Q4: The Prior and Integrity Loss sums these frequency components and calculates the distance from a mask of 1. What is the rationale behind this design?**
**R4:** Thanks for the insightful question. We expect these frequency components to span the entire frequency domain (all elements in the frequency map) while minimizing overlap. Therefore, we calculate the distance between the sum of these frequency components and an all-one mask. We will enhance the clarity of the related descriptions in the revision.
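As an illustrative sketch only (the names and the L1 reduction are assumptions, not the paper's exact loss), such an integrity term could be written as:

```python
import numpy as np

def integrity_term(mask_semantic, mask_structure, mask_noise):
    # Each argument is a soft frequency mask in [0, 1] of the same shape.
    # Penalize the element-wise distance between their sum and an all-one
    # mask: gaps (sum < 1) and overlaps (sum > 1) both incur cost, though
    # the sum is encouraged, not strictly constrained, to equal 1.
    total = mask_semantic + mask_structure + mask_noise
    return np.abs(total - 1.0).mean()
```

The term is zero exactly when the three masks partition the frequency map without gaps or overlaps, which matches the stated rationale: deviations are reduced rather than hard-constrained.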
---
Rebuttal Comment 1.1:
Comment: Thank you for the reply. The author responded to my concern directly, resolving my doubts. I believe this article has a positive significance for the community and the idea is quite novel, so I keep my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your comments! I am very glad to hear that our response addresses your concerns and resolves your doubts. We appreciate that the novelty of our idea is highly recognized and its positive significance to the community is well acknowledged. | Summary: This work studies generalizable deepfake detection. The proposed method is motivated by a new data augmentation method that blends real and fake faces in the frequency domain. The paper claims the forgery can be found in three different frequency bands and proposes an unsupervised learning method, Frequency Parsing Network. Empirical results indicate that the proposed method's performance achieves results comparable to state-of-the-art.
Strengths: 1. unsupervised way of learning the frequency component, which can favor its usage.
2. I like the analysis from Fig 2 and 3, showing the difference between real and fake residing in the high-frequency band.
3. I enjoy the problem formulation as well as the learning object in section 4.2.
Weaknesses: 1. the proposed method does not achieve state-of-the-art detection performance in table 1.
2. table 2 only has a few methods on the FF++ dataset. It should not be the case that SBI is only a baseline to compare.
3. the proposed approach might be limited in standard face-swap fake face, or gan-generated face. How about diffusion model generated face?
4. in terms of approach. the proposed method works when capturing semantic, structural and noise information. What if it fails to capture any one of those?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Line 31 says the motivation is simulating various blending artifacts. However, these artifacts are quite "old" compared to those from current diffusion models. That said, how would your method perform on faces generated by Stable Diffusion, such as with InstantID or PhotoMaker?
2. did you compare with DIRE and HiFi-Net?
R1: DIRE for Diffusion-Generated Image Detection, ICCV2023
R2: Hierarchical fine-grained image forgery detection and localization, CVPR2023
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I am impressed by the learning objective formulation, but I am inclined to reject it because it does not achieve reasonable sota performance and does not show generalization to diffusion-generated images.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the valuable time and comments.
**Q1: The proposed method does not achieve state-of-the-art detection performance in table 1.**
**R1:** We would like to highlight that **our method achieves the highest number of top-1 rankings compared to all others (best performance on 3 out of 4 datasets)**, and ranks 3rd on the DFDC dataset among the 20 methods compared. Given the wide variety of detection strategies and deepfake datasets, we believe this achievement can demonstrate the comprehensive superiority of our method.
Moreover, we respectfully believe that both the AC and all reviewers would agree that the value and contribution of a work should not be judged solely by empirical results, but also by the innovative insights it offers for future research.
**Q2: Table 2 only has a few methods on the FF++ dataset. It should not be the case that SBI is only a baseline to compare.**
**R2:** The rationale for using SBI lies in two aspects:
* **Second-best performance**: SBI consistently performs better than other methods (**coming in second only to our method, achieving the second-best performance on 2 out of 4 datasets**). Therefore, we believe that comparing our method with SBI is representative and can reflect the efficacy of our method in cross-manipulation evaluation.
* **High relevance**: Our method adopts SBI as the substitute for fake faces in creating frequency-blending pseudo-fake faces. Thus, comparing our results with those of SBI further highlights the effectiveness of our method.
We will include these explanations in the revision for better clarity.
**Q3: The proposed approach might be limited in standard face-swap fake face, or gan-generated face. How about diffusion model generated face? This method does not show generalization to diffusion-generated images.**
**R3:** Thanks for the thoughtful question. We provide our responses in three aspects:
* **Our method is in the scope of face-swap deepfake detection**: As described in the Introduction (L29-L31) and stated in the limitation section (L322-L325), our method targets face-swap deepfake detection, **a significant topic in recent years, as evidenced by works such as Face X-ray [14] (CVPR2020), PCL [15] (ICCV2021), SBI [16] (CVPR2022), UCF [12] (ICCV2023), BiG-Arts [25] (PR2023), F-G [43] (CVPR2024), and LSDA [23] (CVPR2024)**. This scope limitation has been noted by other reviewers as well.
* **It is important to note that detecting diffusion-generated faces and face-swap deepfakes are typically two different tasks**: Detecting diffusion-generated images typically falls under the category of whole-image synthesis, as seen in works such as DIRE (ICCV2023, the work suggested by the reviewer) and AVG (ICASSP2023). These works do not validate themselves on the face-swap deepfake datasets studied in our paper. Therefore, **while detecting diffusion-generated images is also an important task, it falls outside the scope of our method.**
* **In response to the suggestion, we investigate whether our method can facilitate the detection of diffusion-generated images.** We conduct an additional scenario: **diffusion-based face-swap deepfake detection**. In this scenario, a recent diffusion model (Collaborative Diffusion (CVPR2023)) is used to synthesize a face, which is then blended into original videos. We use the dataset provided by this work to create 200 fake faces, due to time constraints. **Table A** presents the performance of our method in this context, demonstrating its effectiveness in detecting such forged faces. We will include this experiment in the revision.
**Table A: Performance of Diffusion-based face-swap deepfake detection.**
| | Diffusion-based |
|:-----------:|:----: |
| FreqBlender | 94.74 |
**Q4: The proposed method works when capturing semantic, structural and noise information. What if it fails to capture any one of those?**
**R4:** Thanks for the thoughtful question. The proposed method is capable of decomposing any input face into three frequency components. However, in certain exceptional cases (e.g., whole-face synthesis or attribute editing), the captured structural information may not fully cover the forgery traces, affecting the effectiveness of pseudo-fake faces and subsequently limiting the efficacy of deepfake detectors.
**Q5: Line 31 says the motivation of simulating various blending artifacts. However, these artifacts are quite "old" compared to those from current diffusion models. How would your method perform on faces generated by Stable Diffusion?**
**R5:** See response in R3.
**Q6: Did you compare with DIRE and HiFi-Net?**
**R6:** Thanks for the valuable question.
* **DIRE is intended for detecting fully synthesized faces, rather than specifically identifying face-swap deepfakes.** To compare with ours, we adapt this method under our scenario. **Table B** illustrates the performance of both DIRE and our method, demonstrating that our method significantly outperforms DIRE.
* **HiFi-Net, on the other hand, is built for general image manipulation localization (e.g., segmentation) and could not be used for face forgery detection.** Thus, HiFi-Net is not included in our comparisons.
**Table B: Comparison with DIRE and HiFi-Net.**
| | CDF | DFDC | DFDCP | FFIW |
|:-----------:|:-----:|:----:|:-----:|:-----:|
| DIRE | 46.71 | 51.97| 45.24 | 50.82 |
| FreqBlender | 94.59 | 74.59| 87.56 | 86.14 |
---
Rebuttal 2:
Comment: Dear Reviewer GPf8,
We sincerely appreciate the time and thoughtful comments you’ve provided. With the remaining time being limited, we are eager to receive your feedback, especially on the issues we've addressed in our rebuttal.
In our response, we have carefully reviewed your comments and provided the following summaries:
1. Clarified the concern regarding SOTA and its application to diffusion-based models.
2. Analyzed the suggested DIRE and HIFI-Net.
3. Addressed the questions about using SBI in Table 2 and the shortcomings of FPNet.
We hope the new experiments and analysis have demonstrated the merits of our work. We deeply appreciate your time and effort!
Best regards,
Authors
---
Rebuttal Comment 2.1:
Title: Review Comments
Comment: Thanks for the clarification from the authors. I am sorry that I mistakenly thought the method did not achieve the sota detection performance.
However, I still keep my scores for the proposed method's limitation on FF++ and lack of convincing experiments to show the generalization ability to the diffusion face-swap.
1. FF++ is the core of deep fake detection, and cross-manipulation evaluation is common in the community [R1,R2,R3,R4,R5]. However, table 2 from the main table merely reports 2 methods, and NONE of these references are discussed in the submission. This is insufficient to conclude that the proposed method is effective enough.
[R1] End-to-End Reconstruction-Classification Learning for Face Forgery Detection, CVPR2022
[R2] Thinking in Frequency: Face Forgery Detection by Mining Frequency-aware Clues, ECCV2020
[R3] UCF: Uncovering Common Features for Generalizable Deepfake Detection, ICCV2023
[R4] Spatial-Phase Shallow Learning: Rethinking Face Forgery Detection in Frequency Domain, CVPR 2021
[R5] Exploring Disentangled Content Information for Face Forgery Detection, ECCV 2022
2. Table 1 in the rebuttal is invalid, and I am not convinced by this experiment in **two** aspects:
- no method was used for the comparison, only showing your performance is not enough.
- why can't authors evaluate more commonly used diffusion-based methods when easily accessible tools are available, such as Stable Diffusion 1.5, Stable Diffusion 2.1, InstantID, and DALL-E 2? Being selective on the face generation method is not fair. For example, [R6] reports the generalization performance on StarGAN, DDPM, DDIM, and SD.
[R6] Transcending Forgery Specificity with Latent Space Augmentation for Generalizable Deepfake Detection, CVPR 2024
3. The proposed method is frequency-based, so what is the performance when the forgery trace occurs largely in the RGB domain but less in the frequency domain? For example, cartoon faces with large eyes generated by SD-based methods? Will the performance decline?
---
Rebuttal 3:
Comment: We sincerely appreciate your valuable time and additional comments you’ve provided.
**Q1. FF++ is the core of deep fake detection, and cross-manipulation evaluation is common in the community [R1,R2,R3,R4,R5]. However, table 2 from the main table merely reports 2 methods, and NONE of these references are discussed in the submission. This is insufficient to conclude that the proposed method is effective enough.**
**R1**. We would like to clarify that **the goal of our method is to create effective pseudo-fake faces solely using real faces, as in (Face X-ray [14], PCL [15], SBI [16])**. But the difference (novelty) is that we introduce FreqBlender to incorporate frequency information into these pseudo-fake faces.
Typically, the efforts in this direction are trained on **real faces in FF++ without using fake faces.** This allows them to be fairly validated across all four tracks in FF++, demonstrating effectiveness in cross-manipulation scenarios. Therefore, we follow the protocol of these methods and compare our approach with theirs. **Since SBI shows the second-best performance, we limit our comparison to it in Table 2. However, additional studies involving more methods, including [14,15,16], are presented in Table 7 (Supplementary).**
**Following your suggestion, we thoroughly reviewed these papers and found that R2 and R4 do not conduct cross-manipulation evaluations, while R3 employs a less challenging scenario (training on three tracks and testing on one). Thus, R2, R3, and R4 are not suitable for direct comparison.**
R1 and R5 are trained on the DF track of FF++. Comparing with these two methods is relatively fair. The results, shown in **Table A**, highlight the notable superiority of our approach. We will include this comparison in the revision.
**Table A: Cross-manipulation comparison.**
| | DF | F2F | FS | NT | Avg |
|:-----------:|:-----:|:----:|:-----:|:-----:|:-----:|
| R1 (trained on DF) | 99.65 |70.66 |74.29 |67.34 |77.99 |
| R5 (trained on DF) | 99.22 |60.18 |68.19 |61.17 |72.19 |
|FreqBlender | 99.18 |96.76 |97.68 |90.88 |96.13 |
**Q2: Table 1 in the rebuttal is invalid, and I am not convinced by this experiment in two aspects: 1) no method was used for the comparison, only showing your performance is not enough. 2) why can't authors evaluate more commonly used diffusion-based methods when easily accessible tools are available, such as stable 1.5, stable 2.1, instantiated, and Dalle2? Being selective on the face generation method is not fair. For example, [R6] reports the generalization performance on StarGAN, DDPM, DDIM, and SD**
**R2**: In the first round of rebuttal, we follow the suggestion to show the generalization of our method to diffusion models. The performance is 94.74, which we believe can demonstrate the efficacy. **As suggested, we evaluate more methods (I2G, Face X-ray, SBI) as in Table B**, which also demonstrate the efficacy of our method.
To create diffusion-based face-swap deepfakes, **Collaborative Diffusion (CVPR 2023) is more user-friendly and efficient than Stable Diffusion 1.5, Stable Diffusion 2.1, InstantID, and DALL-E 2**, as it allows for more effective editing of facial attributes compared to the suggested models.
**Please note that evaluating on StarGAN, DDPM, DDIM, and SD is not the primary focus or contribution of R6, which is why its performance on these models is not satisfactory (around 73% on average)**. Following the suggestion, we have made an effort to validate our method on StyleGAN and StyleGAN2 (which provide ready-made face sets) and present the results in **Table C**. **Our method outperforms others but achieves performance comparable to R6**.
**Table B: Performance of Diffusion-based face-swap deepfake detection.**
| | Diffusion-based |
|:-----------:|:----: |
| I2G | 63.51 |
| Face X-ray | 89.81 |
| SBI | 91.70 |
| FreqBlender | 94.74 |
**Table C: Results on GAN-generated images.**
|             | StyleGAN | StyleGAN2 |
|:-----------:|:---------:|:-------------:|
| I2G | 47.89 | 43.86 |
| Face X-ray | 59.11 | 66.54 |
| SBI | 63.99 | 72.88 |
| FreqBlender | 64.39 | 76.70 |
**Q3: The proposed method is frequency-based, then what is the performance when the forgery trace largely occurs in the RGB domain whereas less on the frequency domain. For example, these cartoon faces with large eyes generated from SD-based method? will the performance decline?**
**R3**: We would like to emphasize that our method does not rely solely on frequency information. **As highlighted in L241, our approach integrates frequency knowledge into the existing spatial-blending pseudo-fake faces, allowing it to address both spatial and frequency aspects effectively.**
We hope this explanation clarifies your concerns and encourages a re-evaluation of our work’s contribution. | Summary: This paper proposes FreqBlender, a new method to generate pseudo-fake faces that effectively simulate the frequency distribution of wild fake faces. Unlike common blending techniques done in the spatial domain, their method blends frequency knowledge. An analysis is conducted, showing that three frequency components are present in faces, namely semantic information, structural information, and noise information. They demonstrated that structural information contains forgery traces.
To this end, the first stage of their method employs FPNet, a novel architecture built to disentangle the input fake face into the three different frequency components. As no ground truth exists for this task, carefully crafted objectives provide the necessary supervision.
In the second stage, the trained network is used to parse the frequency components, and the structural component is extracted from a given fake face. It is then blended into a real face to obtain the pseudo-fake.
The method outperforms the state-of-the-art across different relevant datasets.
Strengths: Originality:
This method is the first to mine frequency structural information and propose a way of generating pseudo-fake faces that mimic the frequency distribution of fake faces.
Quality:
The method is well evaluated on different datasets, and the results are consistent.
Clarity:
The paper is well written, and the method is well explained. Figures, along with a preliminary analysis of the frequency components present in faces, are provided, which help to understand the method.
Significance:
This method provides a new pseudo-fake mechanism that complements spatial blending, which is the reference in the literature. The authors have shown that their method improves the results of different frame and spatial blending-based methods such as DSP-FWA, I2G, Face-Xray, and SBI. The authors claim that the code will be released in the future. To ease the reproduction and adoption of this technique, we encourage them to also release the pretrained weights of FPNet.
Weaknesses: References Missing:
Some references are missing, for example, methods that are frame-based and only use real faces during training [1, 2], as well as other recent detectors [3, 4].
Additional Overhead:
During the training of the detector, the inference of FPNet (1 encoder and 3 decoders) is required for generating a pseudo-fake. This adds an overhead during training. The authors should include an efficiency analysis.
[1] Li et al., Pixel bleach network for detecting face forgery under
compression, in IEEE Transactions on Multimedia 2023
[2] Larue et al., SeeABLE: Soft Discrepancies and Bounded Contrastive Learning
for Exposing Deepfakes, in ICCV 2023
[3] Dong et al., Implicit Identity Leakage: The Stumbling Block to Improving Deepfake Detection Generalization, in CVPR 2023
[4] Guo et al., AltFreezing for More General Video Face Forgery Detection, in CVPR 2023
Technical Quality: 3
Clarity: 3
Questions for Authors: Clarification Needed on FPNet Training:
While the authors explain that for deepfake detection, pseudo-fakes generated using SBI are used as a substitute for fake faces from FF++, the authors should include the results when real fake faces from FF++ are used in Table 1.
Use of PixelShuffle:
Why is a PixelShuffle used in the decoder of FPNet?
FPNet Direct Application:
Can the FPNet be used directly for deepfake detection? What happens when a real face is input into the FPNet?
Ablation Study and Frequency Knowledge:
The authors successfully conduct an ablation study on the backbone (Table 5) and show that results improve consistently when compared to SBI. Why does this method introduce frequency knowledge when all the tested backbones are spatial (e.g., EfficientNet-b4)? It would be interesting to compare the proposed method with a built-in frequency-based backbone (e.g., Face Forgery Network (F3-Net) [3], AFFGM [1], or the multi-scale high-frequency feature extractor from [2]).
Results on Highly Compressed Data:
What are the results of the method on highly compressed data, i.e., FF++ LQ?
[1] Li et al., Frequency-aware Discriminative Feature Learning Supervised by Single-Center Loss for Face Forgery Detection, in CVPR 2021
[2] Luo et al., Generalizing Face Forgery Detection with High-frequency Features, in CVPR 2021
[3] Thinking in Frequency: Face Forgery Detection by Mining Frequency-aware Clues, in ECCV 2020
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Acknowledgement of Limitations:
The authors acknowledge the usual limitations of pseudo-fake generation methods, i.e., the hypothesis that the test face is crafted using face-swapping techniques may not hold when the test face is generated using other techniques.
Discussion on Societal Impact:
The race between attackers and defenders is a well-known issue in the field of deepfake detection. The authors should discuss the potential negative societal impact of their work and how it could be exploited by attackers to generate more realistic deepfakes by simply training their generators to fool the proposed method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback regarding the originality, quality, clarity, and significance of our work, and are grateful to the constructive comments.
**Q1: Some references are missing, e.g., [1, 2, 3, 4].**
**R1:** Thanks for the suggestion. We will include these related references in the revision.
**Q2: Additional Overhead: FPNet (1 encoder and 3 decoders) adds an overhead during training. The authors should include an efficiency analysis.**
**R2:** Thanks for the constructive suggestion. Note that the encoder only contains four convolutional layers and each decoder contains four layers made up of a convolutional layer and a PixelShuffle operation (L173-L175). Thus, FPNet is lightweight and has small overhead. **Table A shows the efficiency analysis regarding FLOPs, Params and Run-time of FPNet**. Using an Nvidia 3080ti GPU, the run-time is 248 FPS (0.004 seconds per image), and creating a pseudo-fake face costs 0.074 seconds on average. Thus our method only introduces minimal overhead in training detectors.
**Table A: Efficiency analysis of FPNet.**
| Architecture| Params (M) | FLOPs(G)| Run-time (FPS) |
|:-----------:|:--------:|:-----:|:-----:|
| FPNet (Encoder * 1,Decoder * 3) | 20.14 | 29.10 | 248 |
**Q3: While the authors explain that pseudo-fakes generated using SBI are used as a substitute for fake faces from FF++, the authors should include the results when real fake faces from FF++ are used in Table 1.**
**R3:** Thanks for the constructive suggestion. We conduct an extra experiment accordingly, as shown in **Table B**. It can be seen that by using "real" fake faces to create pseudo-fakes with our method, the performance is notably degraded across all datasets. This is because, compared to SBI, using only the fake faces from FF++ lacks diversity, primarily reflecting the frequency distribution of FF++ instead of real-world fake faces. We will include these results in the revised version.
**Table B: Performance of using fake faces from FF++.**
| Method | Type | CDF | DFDC | DFDCP | FFIW |
|:-----------:|:----:|:-----:|:----:|:-----:|:-----:|
| FreqBlender |"real" fake | 75.79 | 66.30 | 82.01 | 67.70 |
| FreqBlender |SBI | 94.59 | 74.59| 87.56 | 86.14 |
**Q4: Why is a PixelShuffle used in the decoder of FPNet?**
**R4:** PixelShuffle is an effective upsampling operation that is widely used in generative models. This operation rearranges the channels and reshapes the features by a specified factor. Since the decoder is designed to generate frequency component masks, we employ PixelShuffle operations for upsampling. We will include these descriptions in the revision for better clarity.
**Q5: Can the FPNet be used directly for deepfake detection? What happens when a real face is input into the FPNet?**
**R5:** Thanks for the thoughtful questions. The responses are as follows:
1) **FPNet is not intended for direct deepfake detection**. This is because that FPNet is designed to analyze the frequency components, which serves as a preprocessing step to create effective pseudo-fake faces for training deepfake detectors.
2) **FPNet can decompose the frequency domain into three components for both real and fake faces.** But for real faces, the frequency component corresponding to structural information does not contain forgery traces.
**Q6: (Table 5) Why does this method introduce frequency knowledge when all the tested backbones are spatial (e.g., EfficientNet)?**
**R6:** Thanks for the thoughtful question. Our method creates effective pseudo-fake faces by blending frequency knowledge. These pseudo-fake faces contain essential frequency knowledge and are used for training deepfake detectors. Thus, the frequency knowledge can be introduced, even though the backbones are spatial.
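As a hedged illustration of frequency blending (the real method learns component masks with FPNet; here the mask is hand-made and all names are our own), one can splice a frequency band of one face into another and return to the spatial domain:

```python
import numpy as np

def blend_frequency(real, fake, mask):
    """Toy sketch: replace the masked frequency band of `real` with the
    corresponding band of `fake`, then invert back to pixels. The
    resulting pseudo-fake carries frequency cues even though a spatial
    backbone consumes it as an ordinary image.
    """
    fr = np.fft.fft2(real)
    ff = np.fft.fft2(fake)
    blended = fr * (1 - mask) + ff * mask
    return np.real(np.fft.ifft2(blended))

rng = np.random.default_rng(0)
real = rng.standard_normal((8, 8))
fake = rng.standard_normal((8, 8))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0  # a toy mid-frequency band, chosen arbitrarily
pseudo = blend_frequency(real, fake, mask)
```

The spatial detector never sees the frequency domain explicitly, yet the training samples embed the blended frequency statistics.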
**Q7: (Table 5) It would be interesting to compare the proposed method with a built-in frequency-based backbone (e.g., F3-Net [3], AFFGM [1], or the multi-scale high-frequency feature extractor from [2]).**
**R7:** Thanks for the constructive suggestion. As suggested, we attempt to explore the effect of the backbones in frequency-based methods. Since AFFGM [1] has not released its codes, we are unable to test our method on this backbone. Nevertheless, we retrain F3-Net [3] and GFFD [2] using their released codes. The results, **shown in Table C**, indicate that the performance of these frequency-based backbones also improved, demonstrating the effectiveness of our method on frequency-based backbones. We will include these results and analysis in the revision.
**Table C: Performance of our method with F3-Net [3] and GFFD [2].**
| Method | CDF | FF++ | DFDCP | FFIW | Avg |
|:-----------: |:-----:|:----:|:-----:|:-----:|:-----:|
| F3-Net [3] + SBI | 84.94 | 93.42| 79.29 | 73.42 | 82.77 |
| F3-Net [3] + Ours| 88.10 | 95.16| 84.32 | 74.49 | 85.52 |
| GFFD [2] + SBI | 81.34 | 91.81| 77.19 | 65.53 | 78.97 |
| GFFD [2] + Ours | 86.71 | 92.18| 78.25 | 77.45 | 83.65 |
**Q8: Results on Highly Compressed Data: What are the results on highly compressed data, i.e., FF++ LQ?**
**R8:** Thanks for the insightful question. **Table D** shows the results of our method when testing on the FF++ LQ set. It can be observed that the performance of all methods declines significantly on the LQ set, which aligns with our expectations, since compression can obscure forgery traces and make detection more challenging. However, our method still achieves the best performance compared to others, demonstrating better generalization ability on low-quality videos.
**Table D: Performance of our method on FF++ LQ.**
| Method | FF++ LQ |
|:-----------:|:-----: |
| I2G | 52.20 |
| Face x-ray | 65.41 |
| SBI | 76.11 |
| FreqBlender | 77.56 |
**Q9: We encourage them to also release the pretrained weights of FPNet**
**R9:** Thanks for the suggestion. We will release the weights along with the codes after acceptance.
---
Rebuttal 2:
Comment: Dear Reviewer Xqko,
Thank you once again for your insightful comments! We look forward to receiving your feedback. We hope the new experiments and additional explanations have demonstrated the merits of this paper. If you have any further questions, please do not hesitate to reach out.
Best regards, Authors | Summary: This paper introduces an effective way to improve the generalization of DeepFake detection by generating pseudo-fake faces that blend frequency knowledge. The proposed approach achieves state-of-the-art (SOTA) results on various deepfake detection datasets.
Strengths: 1) This paper attempts to combine frequency domain information and spatial domain information to deal with deepfake detection, which is very interesting. Spatial domain blending is very common, but it is still very rare to use it in the frequency domain, which is quite innovative.
2) The writing of this paper is easy to understand and the logic is clear.
3) The experiments in this paper are sufficient and effectively support the author's theoretical basis.
Weaknesses: 1) The idea of this paper is similar to some already published papers, such as [1], [2] and [3]. I hope this paper can cite and further analyze them:
[1] Tan, C., Zhao, Y., Wei, S., Gu, G., Liu, P., & Wei, Y. (2024, March). Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Domain Learning. In Proceedings of the AAAI Conference on Artificial Intelligence (Vol. 38, No. 5, pp. 5052-5060).
[2] Yu, B., Li, W., Li, X., Lu, J., & Zhou, J. (2021). Frequency-aware spatiotemporal transformers for video inpainting detection. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 8188-8197).
[3] Frank, J., Eisenhofer, T., Schönherr, L., Fischer, A., Kolossa, D., & Holz, T. (2020, November). Leveraging frequency analysis for deep fake image recognition. In International conference on machine learning (pp. 3247-3258). PMLR.
2) Moreover, Ref.30 is an important and widely cited work that introduces the frequency domain into deepfake detection. I don’t quite understand why the author does not compare and analyze it with this work.
3) The source information, page numbers, publishers, etc. of many references are incomplete, and some even are incorrect. For example, [4] should come from CVPR2024 instead of arXiv.
[4] Choi, J., Kim, T., Jeong, Y., Baek, S., & Choi, J. (2024). Exploiting Style Latent Flows for Generalizing Deepfake Video Detection. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 1133-1143).
4) The faces in Figure 6 should have been resized. It would be better if the author could show the original resolution image to help readers better understand the experimental results.
5) Why is the fluctuation of λ3 so much larger than that of other parameters? I hope the author can provide the corresponding theoretical basis and further detailed analysis.
6) If the author can answer and revise the relevant questions in the final version, I will consider increasing the final score in the next round.
Technical Quality: 3
Clarity: 3
Questions for Authors: Some images in the paper will be a little blurry after zooming in, so it is best to convert all images to pdf or eps format.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for the positive comments on the novelty, writing, and experiment configuration, and for the constructive suggestions.
**Q1: The idea of this paper is similar to some already published papers, such as [1], [2] and [3]. I hope this paper can cite and further analyze them.**
**R1:** Thanks for highlighting these references. We analyze them in the following:
* **Analysis of [1][3]**: References [1] and [3] focus on detecting deepfakes (e.g., GAN-generated faces) by learning frequency information. Specifically, [1] introduces a dedicated architecture to extract frequency information hidden in deepfake faces, while [3] directly analyzes the frequency spectrum of these faces. **Our method differs significantly from these methods:**
* **Motivation and methodology are different**: [1][3] focus on designing frequency-sensitive architectures or strategies to capture frequency information from input samples. In contrast, our method creates pseudo-fake faces for training deepfake detectors, achieved via a novel frequency-blending strategy that makes pseudo-fake faces resemble real-world fake faces.
* **Scope is different**: It should be noted that [1][3] mainly focus on detecting whole face image synthesis, whereas our method lies in the scope of face-swap deepfake detection.
* **Analysis of [2]**: Reference [2], on the other hand, explores the use of frequency knowledge for **general image forensic tasks (e.g., object inpainting), rather than deepfake face detection**. Thus, this method could not be directly adapted to our task.
In summary, while these references also employ frequency information, the motivation, methodology and scope are different. We will include these papers and their analyses in the revision.
[1] Frequency-Aware Deepfake Detection: Improving Generalizability through Frequency Space Domain Learning. AAAI 2024.
[2] Frequency-aware spatiotemporal transformers for video inpainting detection. ICCV 2021.
[3] Leveraging frequency analysis for deep fake image recognition. ICML 2020.
**Q2: [30] is an important and widely cited work that introduces the frequency domain into deepfake detection. Why the author does not compare and analyze it with this work.**
**R2:** Thank you for the question. Reference [30] is an earlier work (ECCV 2020) that investigated the use of the frequency domain. We cited this work in our main text to provide context for using DCT (L108). Since this approach is somewhat outdated, we did not include it in Table 1 of the main text. As suggested, we retrain the model using the released code and evaluate it rigorously according to their instructions. The results are shown in **Table A**. It can be seen that [30] performs only modestly on these recent datasets. We will include this analysis in the revised version.
**Table A: Performance of [30] and our method.**
| Method | CDF | DFDC | DFDCP | FFIW |
|:-----------:|:-----:|:----:|:-----:|:-----:|
| [30] | 72.93 | 61.16| 81.96 | 61.58 |
| FreqBlender | 94.59 | 74.59| 87.56 | 86.14 |
**Q3: Why is the fluctuation of λ3 so much larger than that of other parameters? I hope the author can provide the corresponding theoretical basis and further detailed analysis.**
**R3:** Thanks for the constructive suggestion. $\lambda_3$ is the coefficient of the Quality-agnostic Loss (L222), which helps regulate the intensity of noise information. Since the Prior and Integrity Losses also play an (implicit) role in controlling noise intensity, $\lambda_3$ should be relatively subtle compared to the other coefficients. To further illustrate this trend, we conduct an additional experiment with $\lambda_3 = 0.1$. Note that $\lambda_3 = 1, 0.01, 0.001$ has been studied in Table 6 of the Supplementary. The results are shown in **Table B**, indicating that our method is slightly improved with a lower $\lambda_3$. We will provide a more detailed analysis in the revision.
**Table B: Effect of our method on different loss proportions.**
| $\lambda_1$ | $\lambda_2$ | $\lambda_3$ | $\lambda_4$ | CDF | FF++ | DFDCP | FFIW | Avg |
|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:-----:|:-----:|:-----:|
|0.1 | 1 | 0.1 | 0.5 | 93.59 | 95.60 | 86.60 | 84.78 | 90.14 |
|0.1 | 1 | 0.01 | 0.5 | 94.27 | 96.11 | 87.81 | 85.54 | 90.93 |
**Q4: Many references need to be revised.**
**R4:** We will carefully check the references and revise them accordingly.
**Q5: It would be better if the author could show the original resolution image (Figure 6) to help readers better understand the experimental results.**
**R5:** We will revise Figure 6 accordingly.
**Q6: Some images in the paper are a little blurry after zooming in. It is best to convert all images to pdf or eps format.**
**R6**: We will update these figures accordingly to enhance clarity.
---
Rebuttal 2:
Comment: Dear Reviewer RzXt,
We sincerely appreciate your time and thoughtful comments. We eagerly look forward to your feedback, especially on the issues we've addressed in our rebuttal. Our main goal is to ensure that our response aligns closely with your suggestions. Your input is invaluable to improving our work.
Best regards, Authors | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their valuable time and professional comments. We are encouraged by the positive feedback and suggestions on the following aspects:
* **Soundness, Presentation, and Contribution**:
* **All reviewers** rate these sections as **Good** in their reviews.
* **Novelty of our method**:
* **Reviewer RzXt** acknowledges that our method is quite interesting and innovative in terms of frequency blending.
* **Reviewer Xqko** recognizes the originality and significance of our method, particularly in the new pseudo-fake mechanism.
* **Reviewer Z1eA** highlights that the method addresses an existing challenge by taking an innovative approach of frequency blending.
* **Extensive and sufficient experiments**:
* **Reviewer RzXt** notes that the experiments are sufficient and effectively support the theoretical basis.
* **Reviewer Xqko** comments that our method is well evaluated, with consistent results.
* **Reviewer Z1eA** emphasizes that our method is validated across various DeepFake datasets with different backbones.
* **Well-written and clear logic**:
* **Reviewer RzXt**, **Reviewer Xqko** and **Reviewer Z1eA** comment that the paper is well-written, the logic is clear and the method is well explained.
* **Reviewer Xqko**, **Reviewer GPf8** like the preliminary analysis in figures of frequency components in faces.
We appreciate that **Reviewer GPf8** has a positive view of our unsupervised way of learning frequency components, the analysis of frequency distribution, and the problem formulation along with the learning objectives. Meanwhile, **Reviewer GPf8** also raises two concerns: **"...but I am inclined to reject it because it does not achieve reasonable sota performance and does not show generalization to diffusion-generated images."**
We would like to highlight the responses to these concerns:
* **Concern of SOTA**: We would like to highlight that **our method achieves the highest number of top-1 rankings compared to all others (best performance on 3 out of 4 datasets)**, and ranks 3rd on the DFDC dataset among the 20 methods compared. Given the wide variety of detection strategies and deepfake datasets, we believe this achievement demonstrates the comprehensive superiority of our method. **Moreover, we respectfully believe that both the AC and all reviewers would agree that the value and contribution of a work should not be judged solely by empirical results, but also by the innovative insights it offers for future research.**
* **Concern of generalization to Diffusion-generated images**: We would like to thank the reviewer for this thoughtful comment.
* **Our method is in scope of face-swap deepfake detection**: As described in the Introduction (L29-L31) and stated in the limitation section (L322-L325), our method targets face-swap deepfake detection, **a significant topic in recent years, as evidenced by works such as Face x-ray [14] (CVPR2020), PCL [15] (ICCV2021), SBI [16] (CVPR2022), UCF [12] (ICCV2023), BiG-Arts [25] (PR2023), F-G [43] (CVPR2024), and LSDA [23] (CVPR2024)**. This scope limitation has been noted by other reviewers as well.
* **It is important to note that detecting diffusion-generated faces and face-swap deepfake are typically two different tasks**: Detecting Diffusion-generated images typically falls under the category of whole image synthesis, as seen in works such as DIRE (ICCV2023, the suggested work by reviewer), AVG (ICASSP2023). In these works, they do not validate themselves on face-swap deepfake datasets studied in our paper. Therefore, **while detecting diffusion-generated images is also an important task, it falls outside the scope of our method.**
* **In response to the suggestion, we investigate whether our method can facilitate the detection of Diffusion-based generated images.** We conduct an additional scenario: **Diffusion-based face-swap deepfake detection**, where the diffusion model is adapted to a face-swap scenario. In this scenario, a diffusion model (Collaborative Diffusion (CVPR2023)) is used to synthesize faces, which are then blended into original videos. The results are shown in **R3** of our response to **Reviewer GPf8**, demonstrating the efficacy of our method in detecting Diffusion-based face-swap deepfakes. We will include this experiment in the revision.
We hope above responses could address the concerns of **Reviewer GPf8**, and facilitate AC to make a more comprehensive assessment of the value and contribution of our work. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
OctreeOcc: Efficient and Multi-Granularity Occupancy Prediction Using Octree Queries | Accept (poster) | Summary: This paper proposes a new occupancy predictor that uses an octree structure to reduce the total number of occupancy queries. The octree is initialized with semantic information and recursively rectified so that it can accurately represent the structure of driving scenes.
Strengths: 1. This work gives a new way to represent occupancy instead of voxel feature or dense query, which is enlightening for later works.
2. The prediction precision of OctreeOcc exceeds FB-Occ and PanoOcc, indicating a new SOTA. The visualization results also demonstrate its ability to predict fine structures.
Weaknesses: 1. OctreeOcc requires extra segmentation model to provide the reference for the initialization of octree queries, which reduces the efficiency gain. More details of the segmentation model should be published.
2. The ablation study shows that the accuracy is largely improved after using the extra semantic information. A fair comparison between OctreeOcc without extra semantic information and other methods is missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It is not clear how the high-confidence and low-confidence regions are determined in Iterative Structure Rectification. Is the confidence related to the split probability?
2. How does OctreeOcc handle the conflict between the octree queries initialized by current semantic information and the octree queries aligned from the previous frame?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have described the limitation of this work, and there is no potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1. **More details of the segmentation model**
Segmentation models are trained using homologous data and then integrated into the network as fixed components, being inferred together rather than as additional separate modules. In this setup, a UNet architecture is used, where the encoder part is shared with the backbone's ResNet, and the decoder consists of a three-layer deconvolution. This design minimizes additional memory overhead.
Thank you for the suggestion. We will include these details in the revised version.
W2. **Comparison between OctreeOcc without extra semantic information and other methods**
The **input and output** of our network are **the same** as those of other methods (i.e., input surround-view images and output occupancy prediction result). We trained a sub-network using the same data and incorporated it into the overall framework without relying on additional semantic information. The segmentation network is an integral part of our model, not a separate component. Theoretically, **we do not use extra information**.
Semantic segmentation is useful for initializing the octree structure; however, even without this component, our network's performance remains comparable to the dense voxel method (as shown in the table below).
| Method | mIoU | Memory |
| ------------------------ | ----- | ------ |
| PanoOcc | 42.13 | 35000M |
| FB-OCC | 43.41 | 31000M |
| Ours | 44.02 | 26500M |
| Ours w/o sem-guided init | 42.08 | 25700M |
Q1. **How the high-confidence and low-confidence regions are determined**
As shown in Section 3.5 of the original paper, a split probability is maintained throughout the process; high- and low-confidence areas are divided using this probability via a top-k operation. In this context, confidence refers to the split probability.
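This top-k selection can be sketched as follows (a minimal illustration with a flat cell index; variable names are our assumptions, not the paper's code):

```python
import numpy as np

def select_splits(split_prob, k):
    """Mark the K octree cells with the highest split probability as
    high-confidence split candidates; all other cells are treated as
    low-confidence and remain unsplit.
    """
    order = np.argsort(split_prob)[::-1]  # indices sorted by descending probability
    high = np.zeros_like(split_prob, dtype=bool)
    high[order[:k]] = True                # top-k cells are subdivided
    return high

probs = np.array([0.9, 0.1, 0.7, 0.3, 0.05])
mask = select_splits(probs, k=2)
print(mask)  # cells 0 and 2 are selected for splitting
```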
Q2. **How does OctreeOcc handle the conflict between the octree queries initialized by current semantic information and the octree queries aligned from the previous frame?**
During temporal fusion, the octree query of the historical frame must first be converted into the same structure as the current frame during ego-motion conversion to ensure that fusion can be performed.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. Most of my previous concerns have been resolved. Has the code for this work been open sourced? I believe that open source will have a promoting effect on the development of this field.
---
Reply to Comment 1.1.1:
Comment: We are pleased to open-source our code to promote the field. Due to the rebuttal rules, we can only send link to the AC at this time. We have sent the code's anonymized link to the AC and promise to make the code public once the paper is accepted. | Summary: This paper introduces OctreeOcc, aiming to tackle the heavy computational demands of the dense and regular grid representations employed by the previous methods. Instead of randomly initializing the octree structure, OctreeOcc incorporates the semantic priors of images as guidance. The octree structure are further updated iteratively to correct the potential errors.
Strengths: 1. Octree representation is a good solution to the heavy computation burden of the dense grid representation. OctreeOcc presents the details of the network components clearly.
2. OctreeOcc provides detailed ablation experiments.
3. OctreeOcc achieves good balance between the accuracy and the efficiency.
Weaknesses: 1. The image resolution and visible mask should be marked for fairly comparing with other methods. To my understanding, using the visible mask or not results in significant performance differences.
2. This paper mentioned temporal fusion in line 131. Is the temporal information obtained similar to the bevformer? Besides, how many frames are employed? (e.g., 4 frames in the PanoOcc)
3. The metrics of Symphonies in the tables should be updated. The performance differences are acceptable because Symphonies uses stereo information (if OctreeOcc were to rely solely on monocular input).
4. Does the memory presented in the tables refer to training memory? To my understanding, the inference memory for most of the methods is much lower than this.
5. In Table 6, query form of octree attend to the image features in a hierarchical manner, what is the number of the attention layers of the 3D voxel?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see above weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1. **The image resolution and visible mask should be marked**
In Table 1 of original paper, we use "$\star$" to indicate whether a method uses a camera mask during training. Methods marked with "$\star$" are the latest SOTA methods.
The image sizes used in training for each method are as follows:
| Method | Image Resolution |
| --------- | ---------------- |
| BEVFormer | 900 × 1600 |
| TPVFormer | 900 × 1600 |
| OccFormer | 896 × 1600 |
| CTF-Occ | 900 × 1600 |
| RenderOcc | 512 × 1408 |
| BEVDet4D | 512 × 1408 |
| PanoOcc | 900 × 1600 |
| FB-OCC | 640 × 1600 |
| Ours | 900 × 1600 |
All methods use settings close to full-size images within their respective ranges, ensuring a fair comparison.
W2. **Temporal Fusion Details**
Yes, the temporal fusion method is the 3D version of BEVFormer’s TSA, using a fusion of four frames, consistent with PanoOcc.
W3. **The metrics of Symphonies in the tables should be updated**
Thank you for the reminder; we will follow the numbers in its updated version and revise ours accordingly in the final version.
W4. **Memory presented in the tables**
Yes, the table shows memory consumption during training.
W5. **The number of the attention layers of the 3D voxel**
As shown in the implementation details, the octree encoder consists of three layers, each composed of TSA, ICA, and ISR modules.
---
Rebuttal Comment 1.1:
Comment: Thank you for the rebuttal. My concerns have been addressed, and I am inclined to accept this work. The content of the rebuttal should be added to the revised version.
---
Reply to Comment 1.1.1:
Comment: Thank you for raising your score. We will include the details mentioned in the rebuttal in the revised version. | Summary: This paper introduces OctreeOcc, an innovative 3D occupancy prediction framework that leverages the octree representation to adaptively capture valuable information in 3D, offering variable granularity to accommodate object shapes and semantic regions of varying sizes and complexities. The authors incorporate image semantic information to improve the accuracy of initial octree structures and design an effective rectification mechanism to refine the octree structure iteratively. Extensive evaluations show that OctreeOcc not only surpasses state-of-the-art methods in occupancy prediction, but also achieves a 15% − 24% reduction in computational overhead compared to dense-grid-based methods.
Strengths: - OctreeOcc utilises octree to represent 3D space, which is novel and no one has done it before as far as my knowledge.
- Owing to the design of octree queries, OctreeOcc improves the accuracy of occupancy prediction with the latency also decreasing.
- Extensive experiments on Occ3D and SemanticKitti demonstrate the effectiveness of OctreeOcc.
Weaknesses: - The pipeline is quite complex and heavily relies on manual operations, such as the selection strategy in Iterative Structure Rectification Module. My concern is that the complicated design may reduce the generalization.
- The updating processing of octree queries is not very clear. In my understanding, if the octree mask changes, the newly generated octree queries cannot directly match the original octree queries.
- Different octree queries may have different granularity, but, as shown in Eql (3), all octree queries seem to interact with image features only referring to their centers, which ignoring the influence of granularity.
Technical Quality: 3
Clarity: 2
Questions for Authors: please refer to weaknesses.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: NaN
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1. **Too many manually set hyperparameters affect generalisability**
Octree-related hyperparameters mainly concern the choice of K in the top-k operations used for octree query sparsification and rectification.
These hyperparameters are statistically derived from the dataset and are not difficult to determine.
The experiments conducted on two datasets in the original paper used **the same set of octree hyperparameters**. Table 2 shows that applying the octree hyperparameters from nuScenes to SemanticKITTI resulted in SOTA performance on the IoU metrics and performance comparable to SOTA on the mIoU metrics. These experiments validate the generality of our approach.
W2. **Updating processing of octree queries**
When the octree mask changes, the strategy for adjusting queries in the affected region is to first convert to a dense form and then adapt to the new sparse structure based on the updated octree mask.
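Under an assumed flat cell indexing (the function and variable names below are illustrative), this densify-then-resparsify step could look like:

```python
import numpy as np

def restructure_queries(queries, old_idx, new_idx, num_cells, dim):
    """Sketch of the query update described above: scatter the sparse
    queries into a dense buffer using the old octree mask's cell
    indices, then re-gather with the new mask so the query set matches
    the rectified octree structure. Newly activated cells start at zero.
    """
    dense = np.zeros((num_cells, dim))
    dense[old_idx] = queries   # densify according to the old structure
    return dense[new_idx]      # re-sparsify according to the new mask

q = np.arange(6, dtype=float).reshape(3, 2)  # three sparse queries, dim 2
old_idx = np.array([0, 2, 5])
new_idx = np.array([2, 3, 5])  # cell 3 newly activated, cell 0 dropped
q_new = restructure_queries(q, old_idx, new_idx, num_cells=8, dim=2)
```

Surviving cells keep their features, while newly split cells receive fresh (here zero-initialized) queries.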
W3. **Octree queries focus only on their centers, neglecting granularity effects**
Thank you for the suggestion. We have compared the sampling scheme you mentioned (as shown in the Table below), where we assign different numbers of reference points to queries of varying granularity during cross-attention. For example, the finest granularity query uses only its centroid, while the medium granularity query samples four additional points around its centroid.
This approach can lead to performance gains but also increases memory overhead due to the effective increase in the number of queries. Our original design **balances memory overhead with performance**; regions with coarse query granularity are likely part of the same object, so sampling features only at the query's center is adequate. Overall, finding better ways to utilize octree queries will be our next step in enhancing our work.
| Sampling Method | mIoU | Memory |
| -------------------- | ----- | ------ |
| Original | 37.40 | 18500M |
| More sampling points | 38.12 | 23100M |
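As an illustration of the comparison above, a hedged sketch of granularity-dependent reference-point sampling (shown in 2D for brevity; the offsets and level numbering are our assumptions, not the paper's exact scheme):

```python
import numpy as np

def reference_points(center, level, cell_size):
    """Finest-level queries keep only their centroid; coarser queries
    add four offset points around the centroid, so larger cells attend
    to more image locations during cross-attention.
    """
    center = np.asarray(center, dtype=float)
    if level == 0:             # finest granularity: centroid only
        return center[None, :]
    r = cell_size / 4.0        # quarter-cell offsets inside the cell
    offsets = np.array([[r, r], [r, -r], [-r, r], [-r, -r]])
    return np.vstack([center[None, :], center + offsets])

pts = reference_points([0.0, 0.0], level=1, cell_size=2.0)
print(len(pts))  # 5 points: centroid plus 4 offsets
```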
---
Rebuttal Comment 1.1:
Comment: Thank the authors for responding to my concerns. Most of my concerns are addressed, but I still consider the pipeline too complicated in updating octree queries.
However, it will not affect I think this work is a solid work and I will maintain my positive assessment of this paper. | Summary: The authors aim to tackle the problem of high memory usage in dense occupancy prediction for 3D scenes. They introduce OctreeOcc, a method that uses octree structures to make predictions more efficiently. Experimental results show that OctreeOcc reduces computational load and achieves competitive performance.
Strengths: S1: The authors address the significant issue of high memory consumption in dense occupancy prediction, highlighting the importance of finding more efficient solutions.
S2: The use of octree representation for occupancy prediction is a novel and effective approach, offering adaptive granularity for various object shapes and regions.
S3: Experimental results show some reduction in memory usage and latency, demonstrating the efficiency of the proposed method.
Weaknesses: W1: The proposed method seems difficult to optimize and relies on segmentation and historical information.
W2: The results on the SemanticKITTI dataset do not achieve state-of-the-art performance, and the reduction in memory usage is not very significant.
W3: The training process is complex and costly, requiring three days on 8 A100 GPUs, which seems less efficient compared to previous methods.
Technical Quality: 3
Clarity: 3
Questions for Authors: Q1: The proposed method seems to be quite time-consuming to train. Can the authors provide a comparison of training times with other methods? This is especially relevant given the emphasis on efficiency throughout the paper.
Q2: There is some confusion regarding the reported memory consumption in the experiments. For instance, the original VoxFormer [1] paper reports memory usage of less than 16GB, but the authors here report around 23GB. What accounts for this significant discrepancy? Are there differences in the settings or configurations used?
Q3: The implementation details provided focus on the nuScenes dataset. Could the authors include the specifics of the implementation for the SemanticKITTI dataset as well?
Q4: The paper lacks results or discussions on other commonly used datasets, such as SSCBench-KITTI360 [2]. Including these would provide a more comprehensive evaluation of the proposed method.
[1] VoxFormer: Sparse Voxel Transformer for Camera-based 3D Semantic Scene Completion
[2] SSCBench: Monocular 3D Semantic Scene Completion Benchmark in Street Views
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have identified a significant limitation in their work: the dependency on the quality of the occupancy ground truth (GT). This reliance can lead to suboptimal performance if the GT data, derived from sparse LiDAR point clouds and surface reconstruction, is not accurate or complete.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: W1. **The proposed method seems difficult to optimize and relies on segmentation and historical information**
Regarding optimization, our method uses the same optimizer settings as other methods and is trained for 24 epochs as well. Under **the same training conditions**, our approach achieves SOTA performance on multiple benchmarks.
Our segmentation component is integrated into the model as a whole and is used to obtain a more accurate initialization. Importantly, **the input to the framework remains unchanged**, continuing to use surround-view images. As shown in Table 4 of the original paper, even without semantic-guided initialization, our approach still outperforms the baseline (34.17 vs. 35.88) while reducing memory consumption (27200M vs. 18500M).
Furthermore, incorporating historical information is **standard** in all occupancy prediction methods, including ours. Including historical information is optional, and using only the current frame does not affect the model's inference.
W2. **Results on the SemanticKITTI**
In our experiment on SemanticKITTI, we used the same set of octree hyperparameters as for nuScenes, achieving SOTA IoU, which highlights the effectiveness of our approach in capturing the overall spatial structure.
For smaller objects (e.g., bicyclists, poles), our IoU is limited by the three-level octree structure. The 32 × 32 × 4 resolution of the first level may not fully capture small objects, leading to early information loss. This represents a **trade-off** between performance and computational resources; increasing the number of queries generally improves performance but significantly raises memory overhead.
Regarding memory overhead reduction, since model sizes for different methods on SemanticKITTI are small, the potential for memory reduction is limited. However, our memory reduction percentage on SemanticKITTI remains considerable.
Overall, the current results sufficiently validate the effectiveness of our approach. We will continue to fine-tune the hyperparameters for SemanticKITTI and explore better trade-offs to achieve improved performance across different datasets.
W3&Q1. **The training time**
We compare the training time of each model on the nuScenes dataset; the results show that our method **does not significantly increase training time** compared to other methods.
| Method | Training Time |
| --------- | ------------- |
| BEVFormer | 61h |
| PanoOcc | 67h |
| FBOCC | 63h |
| Ours | 71h |
Q2. **Memory consumption of VoxFormer**
The memory consumption reported in the VoxFormer paper accounts only for stage-2. For a fair comparison, our calculations include both stage-1 and stage-2. Additionally, the memory consumption we report for our method also includes the segmentation part of the model.
Q3. **Implementation details for SemanticKITTI**
The setup for SemanticKITTI remains essentially the same as in the previous method. We use ResNet50 as the backbone with an image size of 1220x370. The Adam optimizer is employed, with a learning rate of 2e-4 and a weight decay of 0.01. The octree-related hyperparameters are consistent with those set for the nuScenes dataset. We will include these details in the final version.
Q4. **Results on SSCBench-KITTI360**
We provide experiment results on SSCBench-KITTI360, and with the same efficient framework and settings, our method also outperforms other methods in both metrics.
| Method | IoU | mIoU |
| --------- | ----- | ----- |
| TPVFormer | 40.22 | 13.64 |
| OccFormer | 40.27 | 13.81 |
| VoxFormer | 37.76 | 11.91 |
| Ours | 40.89 | 14.03 |
---
Rebuttal Comment 1.1:
Comment: I appreciate your efforts in addressing my concerns in the rebuttal. Based on your responses, I am increasing my score to weak accept. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Getting More Juice Out of the SFT Data: Reward Learning from Human Demonstration Improves SFT for LLM Alignment | Accept (poster) | Summary: This paper proposes an approach to aligning Large Language Models (LLMs) with human preferences by integrating Inverse Reinforcement Learning (IRL) into the supervised fine-tuning (SFT) stage. The proposed IRL-based method simultaneously builds a reward model and a policy model during the SFT stage, enhancing efficiency and robustness against low-quality data.
Strengths: 1. The paper introduces a novel IRL-based method for the SFT stage, providing a fresh perspective on improving LLM alignment.
2. The paper provides a strong theoretical basis for the proposed algorithms, ensuring that they converge to stationary solutions of the IRL problem.
3. The evaluation includes both theoretical analysis and empirical testing on large-scale models, ensuring the validity of the results.
Weaknesses: 1. The paper does not address the scalability of the proposed methods to even larger models or different types of LLMs beyond the ones tested.
2. The evaluation results only present the empirical performance but lack experiments on computation costs introduced by this method.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. It seems that RFT and IRFT perform equally. What's the takeaway from these two methods? Moreover, how to choose them wisely.
2. Typo in L226, there's an error in referring to the Appendix.
3. In L204-206 "In 204 practice however we take a relatively small T and large K, because frequent on-line sampling is time 205 consuming" How does this parameter choice affect the final performance, especially compared to large T and Large K?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are included in the conclusion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The paper does not address the scalability of the proposed methods to even larger models or different types of LLMs beyond the ones tested.
**Response**: We thank the reviewer for the suggestion. We believe a theoretical analysis suffices here, since we know exactly how much memory and time each proposed algorithm needs compared to SFT. Algorithm 1 maintains both a reward model and a policy model (the LLM), which doubles the footprint of standard LLM fine-tuning; in short, its memory consumption and computation time are similar to those of the standard RLHF process, which likewise involves reward learning and policy optimization. Algorithm 2 maintains only the policy (LLM) model, so its memory consumption is exactly the **same** as standard SFT, whereas its computation time additionally involves generating continuations for the entire training set, which is of a similar level to standard RLHF.
We summarize the memory consumption and computational time of the proposed methods in the table below, *assuming that the reward and policy models are of the same size*. Here *Forward* means the memory required to store a model in inference mode, and *Backward* the memory required to store a model in training mode, including weights, activations and gradients; *SFT* means the same computational time as standard SFT, and *Generation* the time to generate continuations for each input training prompt. *2SFT+Generation* is roughly the same time as standard RLHF.
| Method | Peak Memory | Computational Time |
| ------- | -------- | ------- |
| Algorithm 1 | Forward+Backward | 2SFT+Generation |
| Algorithm 2 | Backward | SFT+Generation |
> The evaluation results only present the empirical performance but lack experiments on computation costs introduced by this method.
**Response**: We thank the reviewer for the question. We refer to the answer to the previous question, where we include a computational-time analysis of the proposed methods. In short, Algorithm 1 is similar to RLHF, and Algorithm 2 is similar to a generation step plus a DPO process.
> It seems that RFT and IRFT perform equally. What's the takeaway from these two methods? Moreover, how to choose them wisely.
**Response**: The difference between RFT and IRFT is similar to that between RLHF and DPO. RFT produces an explicit reward model for further generalization, while IRFT is more memory- and time-efficient. We advocate using IRFT if you are not looking for an explicit reward model from the demonstration dataset (see our general response for details on the generalization ability of the reward model).
> Typo in L226, there's an error in referring to the Appendix.
**Response**: We thank the reviewer for the reminder. We have modified it accordingly.
> In L204-206 "In 204 practice however we take a relatively small T and large K, because frequent on-line sampling is time 205 consuming" How does this parameter choice affect the final performance, especially compared to large T and Large K?
**Response**: We did our test on a large T and small K, i.e. we generate samples for every training batch and do the update. The performance is largely similar to the SFT performance (less than 1% lift from the baseline), as shown below (see our general rebuttal to all reviewers for the implementation details):
| Task | Arc | TruthfulQA | Winogrande | GSM8k | Hellaswag | MMLU | Average |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| zephyr-7b-sft-full | 74.83 | 34.07 | 76.09 | 31.92 | 81.09 | 58.86 | 59.48 |
| SFT | 75.20 | 34.18 | 76.16 | 34.95 | 80.96 | 57.71 | 59.86 |
| IRFT (T=1, SPIN) | 75.31 | 35.67 | 75.85 | 34.5 | 81.98 | 57.46 | 60.13 |
| IRFT (T=5) | 74.92 | 37.96 | 76.95 | 35.25 | 82.48 | 57.66 | **60.87** |
| IRFT (T=#batch) | 75.23 | 33.58 | 75.37 | 33.13 | 82.26 | 57.68 | 59.54 |
where T=#batch is the case when $T$ is large and $K$ is small.
In short, we believe that generating too frequently is not only time-consuming but also detrimental to model performance (since it injects too much variance into each stochastic gradient step). We recommend a reasonable generation frequency, such as generating 5 to 10 times over the entire training dataset, which leads to the best performance in our experiments.
---
Rebuttal Comment 1.1:
Comment: Thank you for your clear response, I will keep my positive score. | Summary: This paper proposes to study if IRL techniques can be used for aligning large language models. They propose two different algorithms: one that explicitly learns a reward model and one that implicitly learns the reward in the policy. These reward models are learned by contrasting expert generations and the policy generations.
Strengths: - The paper writing and motivation is clear.
- The paper includes in depth theoretical analysis.
- They evaluate the trained LLMs on a wide variety of benchmarks.
Weaknesses: - The experiments are not very convincing. For the Open LLM leaderboard experiments, the gain is really small (around 1 or 2%). It seems likely that this gain is due to variance in the training process, especially since IRFT has ~1% variance in different hyperparameter settings.
- There isn’t much discussion of the computational cost of the algorithms.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How accurate is the learned reward model?
- The experimental setting is not very clear. Is Algorithm 1 used on the top 10K data selected by the reward model? If so this is really problematic: shouldn’t you fine tune on the whole dataset (or a randomly downsampled subset)? Can you assume that you will have access to 10K samples with high reward scores? And wouldn't just be better to use the reward model used for data selection for RL?
- How robust is IRFT to hyperparameter setting compared to SFT?
- Is 1-2% increase on the OpenLLM benchmark meaningful?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The experiments are not very convincing. For the Open LLM leaderboard experiments, the gain is really small (around 1 or 2%). It seems likely that this gain is due to variance in the training process, especially since IRFT has ~1% variance in different hyperparameter settings.
**Response**: To further show the strength of our algorithms, we ran two extra experiments and include the results in the general rebuttal to all reviewers. First, we test our algorithm for 7b models under parameter-efficient fine-tuning (PEFT) settings and show that the proposed method yields a **2.3% improvement** over the baselines. Additionally, we conduct a reward-accuracy analysis in the general rebuttal, where we show that the implicit reward learned by our method is more accurate than that of SFT and SPIN (please also see our answer to your third question). We hope this provides stronger evidence for the superiority of the proposed method.
> There isn’t much discussion of the computational cost of the algorithms.
**Response**: We thank the reviewer for bringing this issue up. We refer to the general rebuttal to all reviewers where we include a detailed computational cost analysis.
> How accurate is the learned reward model?
**Response**: We conduct a simple experiment to show how accurate the reward learned by our model is. Since we train our 7b model on a high-quality dataset (**ultrachat**), the corresponding implicit reward $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ should already be fairly accurate at distinguishing preferred from rejected continuations. We therefore construct the implicit reward $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ for different $\pi_{\mathbf{\theta}}$ (pretrained, SFT, SPIN and IRFT) on the **ultrafeedback** preference dataset (note that we **did not train on this dataset**). An accurate reward model should distinguish the preferred from the rejected continuations, so we compute the ratio of pairs with $r(\text{preferred})>r(\text{rejected})$ and obtain the following table:
| Model | SFT | SPIN (IRFT with $T=1$) | IRFT ($T=5$) |
| -------- | ------- | ------- | ------- |
| Ratio $r(\text{preferred})>r(\text{rejected})$ | 42.6% | 42.8% | 55.6% |
where we can see that IRFT **significantly improves the implicit reward's ability to distinguish chosen from rejected continuations**. We hope this addresses the reviewer's concern.
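To make the evaluation above concrete, here is a minimal sketch of how the $r(\text{preferred})>r(\text{rejected})$ ratio could be computed from per-token log-probabilities; it assumes those log-probabilities have already been extracted from the policy and reference models for each continuation, and the function names are ours, not from the paper:

```python
def implicit_reward(policy_logprobs, ref_logprobs):
    """Implicit reward r(x, y) = log pi_theta(y|x) - log pi_ref(y|x),
    given per-token log-probabilities of the continuation y under
    the policy model and the (frozen) reference model."""
    return sum(policy_logprobs) - sum(ref_logprobs)


def preference_accuracy(pairs):
    """Fraction of preference pairs where the implicit reward ranks the
    chosen continuation above the rejected one.

    Each element of `pairs` is ((pol_lp_c, ref_lp_c), (pol_lp_r, ref_lp_r)):
    the per-token log-prob lists for the chosen and rejected continuations.
    """
    wins = sum(
        implicit_reward(*chosen) > implicit_reward(*rejected)
        for chosen, rejected in pairs
    )
    return wins / len(pairs)
```

With toy numbers, a chosen continuation whose policy log-probs rose relative to the reference gets a positive reward, and `preference_accuracy` reproduces the kind of ratio reported in the table.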
> The experimental setting is not very clear. Is Algorithm 1 used on the top 10K data selected by the reward model? If so this is really problematic: shouldn’t you fine tune on the whole dataset (or a randomly downsampled subset)? Can you assume that you will have access to 10K samples with high reward scores? And wouldn't just be better to use the reward model used for data selection for RL?
**Response**: We thank the reviewer for the question. With the experiment on Algorithm 1, we want to demonstrate that, given **high-quality** demonstration data, our proposed Algorithm 1 is more capable of learning an accurate reward model that distinguishes good from rejected continuations by assigning higher scores to the good ones. Note that we need to construct a "high-quality dataset" in the first place. The setting is as follows: we use a well-established reward model (beaver-7b-v3.0-reward) to pick good demonstration data (the top 10k samples by reward score), **fairly** train the model with SFT and with our method on these 10k samples, and then compute the rewards of the continuations generated by each trained model.
We are not assuming we always have access to the top 10k samples; rather, the top 10k samples serve as a relatively simple way to obtain high-quality data and verify our premise that our method recovers a better reward when good demonstration data are available.
In practice, the implication of this experiment is that if you have good demonstration data (under some scoring criterion) at hand and would like your language model to align with it, our proposed method is more capable of extracting this underlying criterion and aligning with it. **We do not assume that a well-established reward model is always available in practice**.
> How robust is IRFT to hyperparameter setting compared to SFT?
**Response**: We thank the reviewer for the question, and we believe IRFT is robust to hyperparameters in general. We ran all our experiments on IRFT, SPIN and SFT with the same learning rate, number of epochs and batch size. We do introduce new hyperparameters $T$ and $K$; however, once $T$, the total number of epochs and the training dataset are fixed, $K$ is automatically determined. Therefore the only additional parameter to choose is $T$. In practice we found that $T=5$ (or multiples of 5, meaning training for more epochs) yields the best performance for both 1b and 7b models. In short, the only extra parameter is $T$, and we believe $T=5$ is a good default for most alignment tasks.
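The remark that $K$ is determined once $T$, the total epochs and the training dataset are fixed can be written out explicitly. A small arithmetic sketch, under our own assumption that each epoch is split into $T$ equal generation rounds (the function name is ours):

```python
import math


def inner_steps_per_round(dataset_size, batch_size, total_epochs, T):
    """Gradient steps K between consecutive generation rounds, assuming
    T equally sized generation rounds per epoch.

    Total gradient steps = total_epochs * ceil(dataset_size / batch_size);
    generation happens total_epochs * T times overall.
    """
    steps_per_epoch = math.ceil(dataset_size / batch_size)
    rounds = total_epochs * T
    return math.ceil(total_epochs * steps_per_epoch / rounds)
```

For example, 200k prompts with batch size 64 over 2 epochs and $T=5$ gives $K=625$ inner steps per round, while the $T=\#\text{batch}$ regime collapses to $K=1$ (regenerate every batch).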
> Is 1-2% increase on the OpenLLM benchmark meaningful?
**Response**: Yes, we do believe it is meaningful. The Open LLM Leaderboard is one of the most heavily used and influential benchmarks in the community (SPIN also primarily used it); it does not rely on large judge models such as GPT-4 but only on multiple downstream tasks to test the performance of different LLMs. Even models like Llama3-70B generally achieve only a 1 to 2% increase over the previous state of the art (their Hugging Face page also reports tasks such as MMLU, GSM8k and Winogrande). We therefore believe we are making meaningful progress on the same tasks as SPIN with 7b models.
---
Rebuttal Comment 1.1:
Title: Concern about OpenLLM benchmark and Comparison to SPIN
Comment: Thank you for providing more details on the computational cost of the algorithm and showing the accuracy of the obtained reward model. I am still concerned over the weak performance on the OpenLLM leaderboard. I checked the SPIN paper, and they reported a 8-9% improvement on the OpenLLM leaderboard. Do you know why there is a large discrepancy between the results reported in SPIN and those in this paper? Furthermore, is it possible to re-run some of your experiments with different seeds and conduct significance tests on the results?
---
Reply to Comment 1.1.1:
Comment: > **Concern about OpenLLM benchmark and Comparison to SPIN**:
Thanks for checking the rebuttal. TL;DR: (1) we suspect that a different version of the baseline "zephyr-7b-sft-full" was used in the SPIN paper, which led to its significantly higher lift of 8-9%, compared to our 2.6%; (2) when evaluated on the same baseline and codebase, each iteration of our proposed algorithm outperforms the corresponding iteration of SPIN.
The SPIN paper reported the following improvements:
*(base model: older version; evaluation codebase: v0.4.0; test without LoRA)*
| Iterations | zephyr-7b-sft-full | iter0 | iter1 | iter2 | iter3 |
| -------- | ------- | ------- | ------- | ------- | ------- |
| Average Performance | 58.14 | 60.80 | 62.12 | 62.97 | 63.16 |
| Absolute increase | | 2.66 | 1.32 | 0.85 | 0.89 |
| Relative increase | | 4.56% | 6.85% | 8.31% | 8.63% |
We believe the result should be interpreted in the following way:
1. First, the baseline in our paper differs from that in the SPIN paper. We believe a different baseline model was used in SPIN, as evidenced by GitHub discussions, and the SPIN paper was released before the baseline was fully trained (Jan 2 vs. Jan 10); see for example [this discussion](https://github.com/uclaml/SPIN/issues/12). In particular, Table 3 of the SPIN paper reports a GSM8k accuracy of 26.76 for the base model, whereas we observe 31.92, which is significantly higher. Our paper uses newer versions of both the zephyr-7b-sft-full model (see [their model commit history](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full/commits/main)) and the lm_eval evaluation package that both our paper and SPIN use for evaluation (see [their codebase](https://github.com/EleutherAI/lm-evaluation-harness)). We ran tests on the different versions of the base model and eval codebase and obtain the following table:
*(Performance of different version of zephyr-7b-sft-full, note we also change the first two tasks to make it fully aligned with SPIN paper)*
| Base model version| Eval package version | Arc_challenge | TruthfulQA_mc2 | Winogrande | GSM8k | Hellaswag | MMLU | Average |
| -------- | -------| ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| [Newest](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full) | [v0.4.0](https://github.com/EleutherAI/lm-evaluation-harness/tree/v0.4.0) | 58.02 | 40.40 | 76.16 | 34.19 | 80.89 | 57.46 | 57.85 |
| [Version in SPIN](https://huggingface.co/alignment-handbook/zephyr-7b-sft-full/commit/90e0792328bc522e1662a3a7c611b030d563bf5b) | [v0.4.0](https://github.com/EleutherAI/lm-evaluation-harness/tree/v0.4.0) | 60.84 | 43.74 | 78.69 | 26.23 | 82.79 | 58.97 | 58.54 |
where we indeed see that the new version of the base model performs significantly better on GSM8k than the old version. We remark that the SPIN paper observes its most significant gain on the GSM8k task (from 26.76 to 35.10). **Since we use the newest model in all our experiments, there is much less room for us to improve upon.**
2. Second, the iter3 increase of 8.63% should not be compared directly with our paper's 2.66% increase. When we take $T=5$ and epoch=2 as in Table 3 of our paper, we essentially split the data into 5 chunks and generate more frequently than SPIN, but still consume and generate over all the training data for **2 epochs in total** (SPIN iter0 is also trained for 2 epochs). So the fair comparison is against the first iteration of SPIN (iter0 in the SPIN paper). In view of Table 3 in our paper, a fair comparison is the following table; note that the baseline is the newest version of the model on the newest evaluation codebase (we follow the "iteration" notion of the SPIN paper):
*(base model: newest version; evaluation codebase: v0.4.3; test without LoRA)*
| Iterations | zephyr-7b-sft-full | iter0 | iter1 |
| -------- | ------- | ------- | ------- |
| SPIN | 59.48 | 60.32 | 61.02 |
| IRFT | 59.48 | 60.71 | 61.03 |
Essentially, **under our fair comparison setting**, each iteration of our proposed algorithm outperforms the corresponding iteration of SPIN. We did not run further experiments for iter2 and iter3 because, under our setting, both SPIN and IRFT show less significant improvement from iter0 to iter1 (less than 1 point in average accuracy), and because of our limited computing resources.
(To be continued) | Summary: The authors propose using inverse reinforcement learning (IRL) on demonstration data in place of supervised learning, as is typically done for LLMs. The intuition is that human preferences are also encoded in demonstrations collected for SFT. Concretely, the authors propose a bilevel optimization approach with policy learning at the lower level and reward learning at the upper level. Two methods are proposed for alignment: with either implicit or explicit reward learning. The authors demonstrate improved performance when used to finetune smaller language models, compared to existing SFT approaches.
Strengths: - The method seems (to the best of my knowledge) to be novel and interesting, tying together recent LLM literature and IRL methods.
- The authors compare against relevant baselines (normal SFT, or SPIN) across a suite of benchmarks and two model sizes.
- The overarching idea and question the paper strives to answer is of importance and relevance to the research community, as the proposed methods enable extracting more value out of SFT data.
Weaknesses: - The experimental results seem to support the efficacy of the proposed algorithms, however the gains across various tasks with IRFT seem fairly marginal (it would help if Table 2 and 3 contained error bars, perhaps with policies finetuned with different seeds). As it stands, it seems like different benchmarks results are sensitive to different hyperparameters (choice of T), and the baselines outperform IRFT on 3/6 of the benchmarks in Table 3, suggesting that the additional complexity of the method does not necessarily lead to improved performance across all tasks. For example, the best average performance of IRFT is 61.03, whereas the best baseline achieves 61.02.
- Furthermore, the results seem more mixed in Table 3 compared to Table 2, suggesting that it's not clear the method is still effective as model sizes increase.
- It would help to present the SFT results in Table 3. Even though the authors note that further SFT could degrade performance, it would help to see how effectively using the demonstration data with IRFT or SPIN compares against naive SFT.
- Minor note: line 157 seems incomplete, "even when the demonstration policy is"...(extreme)?
- Minor note: the notation is a bit confusing, with theta referring to both the model parameters and the reward model parameters. For clarity, it would help if separate variables were used.
- Minor note: missing reference to appendix in line 226.
- Minor note: grammar error in line 320 -- "SPIN and IRFT are both capable of further improv(ing) the performance.."
Technical Quality: 2
Clarity: 2
Questions for Authors: See Weaknesses section for suggestions.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The paper is missing more thorough discussion on the method’s limitations and weaknesses (e.g. sensitivity to T, more complicated training process compared to SFT, etc.).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > The experimental results seem to support the efficacy of the proposed algorithms, however the gains across various tasks with IRFT seem fairly marginal (it would help if Table 2 and 3 contained error bars, perhaps with policies finetuned with different seeds). As it stands, it seems like different benchmarks results are sensitive to different hyperparameters (choice of T), and the baselines outperform IRFT on 3/6 of the benchmarks in Table 3, suggesting that the additional complexity of the method does not necessarily lead to improved performance across all tasks. For example, the best average performance of IRFT is 61.03, whereas the best baseline achieves 61.02.
**Response**: To further show the strength of our algorithms, we tested our algorithm for 7b models under parameter-efficient fine-tuning (PEFT) settings and show that the proposed method yields a **2.3% improvement** compared to the baselines. The result tables are in our general rebuttal to all reviewers. We hope this provides stronger evidence for the superiority of the proposed method.
> Furthermore, the results seem more mixed in Table 3 compared to Table 2, suggesting that it's not clear the method is still effective as model sizes increase.
**Response**: Again please see our new results included in the general rebuttal to all reviewers. First, we show that for the LoRA setting, our proposed method significantly outperforms SFT. Additionally, we also conduct a reward accuracy analysis in the general rebuttal where we show that the implicit reward learned by our method is more accurate than that of SFT and SPIN. We believe the extra evidence should further support that our proposed method yields significant improvements over SFT.
> It would help to present the SFT results in Table 3. Even though the authors note that further SFT could degrade performance, it would help to see how effectively using the demonstration data with IRFT or SPIN compares against naive SFT.
**Response**: We thank the reviewer for this suggestion. Our claim that "SFT could degrade performance" largely comes from [1], page 8: "SFT on further epochs 2 and 3 fails to yield more than 1% improvement". We also verified this during the rebuttal period. Due to the limited time, we were only able to run the experiment with LoRA; the result can be seen in the general rebuttal to all reviewers, where we indeed see **less than a 1% lift of SFT over the baseline**, verifying the claim made by [1].
We will rephrase "SFT could degrade performance" to "SFT on further epochs 2 and 3 fails to yield more than 1% improvement" in the revised paper.
> Minor note: line 157 seems incomplete, "even when the demonstration policy is"...(extreme)?
**Response**: We thank the reviewer for the reminder. We meant to say "even when the demonstration policy is extreme" and will modify it accordingly.
> Minor note: the notation is a bit confusing, with theta referring to both the model parameters and the reward model parameters. For clarity, it would help if separate variables were used.
**Response**: We thank the reviewer for the reminder. For RFT (Algorithm 1), $\theta$ is the parameter of the reward, since the policy is determined by the reward; for IRFT (Algorithm 2), $\theta$ is the parameter of the policy. We will change the $\theta$ in Algorithm 1 to $\phi$ in our revised paper.
> Minor note: missing reference to appendix in line 226.
**Response**: We thank the reviewer for the reminder. We have modified it accordingly.
> Minor note: grammar error in line 320 -- "SPIN and IRFT are both capable of further improv(ing) the performance.."
**Response**: We thank the reviewer for the reminder. We have modified it accordingly.
> The paper is missing more thorough discussion on the method’s limitations and weaknesses (e.g. sensitivity to T, more complicated training process compared to SFT, etc.).
**Response**: We thank the reviewer for raising this issue. We believe IRFT is robust to hyperparameters in general. We ran all our experiments on IRFT, SPIN and SFT with the same learning rate, number of epochs and batch size. We do introduce new hyperparameters $T$ and $K$; however, once $T$, the total number of epochs and the training dataset are fixed, $K$ is automatically determined. Therefore the only additional parameter to choose is $T$. In practice we found that $T=5$ (or multiples of 5, meaning training for more epochs) yields the best performance for both 1b and 7b models. In short, the only extra parameter is $T$, and we believe $T=5$ is a good default for most alignment tasks.
Please refer to our general rebuttal to all authors for the computational cost analysis.
**References**:
[1] Chen, Zixiang, et al. "Self-play fine-tuning converts weak language models to strong language models." International Conference on Machine Learning (2024).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for responding to my questions. The additional experiments on the accuracy of the implicit reward model are interesting to see, and overall the proposed method presents an interesting perspective on introducing IRL in the SFT stage. However, the improvements in the LoRA setting do not fully address my concerns around the lack of clear improvement coming from the method. As such, I have adjusted my score accordingly. | Summary: This paper proposes two methods focusing on RLHF, namely RFT and IRFT. The takeaway message is that the SFT stage also significantly benefits from learning a reward model instead of using the human demonstration data directly via supervised learning.
Strengths: 1. The paper is clear, illustrating the differences between similar works.
2. The motivation is clear, and the method is reasonable. Most importantly, it provides parts of theoretical analysis for the convergence.
Weaknesses: 1. The training procedure is complicated, meaning there are many issues in tuning. Is there any computation cost analysis for better understanding the limitations of this method?
2. Though the author discusses SPIN, the truth is that equation 4.7 in SPIN is very similar to equation 6 in this paper.
3. The experimental results are not strong. It did not surpass the SPIN by a large margin. Are there any other baselines that can be incorporated such as DPO?
Technical Quality: 2
Clarity: 3
Questions for Authors: How can the reward model obtain the generalization power? Can the author say more about it, in my understanding, the main issue that affects the generalization is the scale of the demonstration dataset.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: See weaknesses. I have to say I am not an expert in this field, therefore, I am willing to change my attitude if I find I am wrong.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > 1. The training procedure is complicated, meaning there are many issues in tunning. Is there any computation cost analysis for better understanding the limitations of this method?
**Response**: We thank the reviewer for bringing this issue up. Algorithm 1 maintains both a reward model and a policy model (the LLM), which doubles the footprint of standard LLM fine-tuning; in short, its memory consumption and computation time are similar to those of the standard RLHF process, which likewise involves reward learning and policy optimization. Algorithm 2 maintains only the policy (LLM) model, so its memory consumption is exactly the **same** as standard SFT, whereas its computation time additionally involves generating continuations for the entire training set, which is of a similar level to standard RLHF.
We summarize the memory consumption and computational time of the proposed methods in the table below, *assuming that the reward and policy models are of the same size*. Here *Forward* means the memory required for storing a model in inference mode, and *Backward* the memory required for storing a model in training mode, including weights, activations and gradients; *SFT* means the computational time of standard SFT, and *Generation* the time to generate continuations for each of the input training prompts. *2SFT+Generation* is roughly the same time as standard RLHF.
| Method | Peak Memory | Computational Time |
| ------- | -------- | ------- |
| Algorithm 1 | Forward+Backward | 2SFT+Generation |
| Algorithm 2 | Backward | SFT+Generation |
> 2. Though the author discusses SPIN, the truth is that equation 4.7 in SPIN is very similar to equation 6 in this paper.
**Response**: As stated on Line 251, page 7, our Algorithm 2 includes the algorithm in SPIN as a special case, and our inverse reinforcement learning framework naturally introduces a difference-of-log-probabilities formulation. We believe the claim "the truth is that ... is very similar" is not very accurate; instead, we would say that the algorithms **coincide under certain conditions**. There is still a key difference between (4.7) in SPIN and our equation 6: in our equation 6, the negative sample $\tilde{y}$ is generated by our current model $p_{\mathbf{\theta}}$, whereas in SPIN the negative sample $\tilde{y}$ is sampled from the base model of the current training epoch $p_{\mathbf{\theta}_t}$.
> 3. The experimental results are not strong. It did not surpass the SPIN by a large margin. Are there any other baselines that can be incorporated such as DPO?
**Response**: We do not think it is necessary to compare with DPO. As we made clear in the abstract, our proposed framework is a method for the supervised fine-tuning stage of alignment, which means that we do not have access to a preference dataset but only a demonstration dataset. Direct preference optimization (DPO) is not designed for alignment with a demonstration dataset. In fact, our proposed method can be regarded as a DPO-type algorithm for the supervised fine-tuning stage, so we cannot compare our method fairly with DPO, at least not when DPO utilizes preference data.
However, to further show the strength of our algorithms, we tested our algorithm for 7b models under additional parameter-efficient fine-tuning (PEFT) settings and show that the proposed method yields a **2.3% improvement** compared to the baselines. The result tables are in our general rebuttal to all reviewers. We hope this provides stronger evidence of the superiority of the proposed methods.
> 4. How can the reward model obtain generalization power? Can the authors say more about it? In my understanding, the main issue that affects generalization is the scale of the demonstration dataset.
**Response**: The power of the proposed method is that we can learn a reward model even without a preference dataset, and we completely agree with the reviewer that the generalization power relies on the scale and quality of the demonstration dataset. For example, we train our 7b model on a high-quality dataset (**ultrachat**), and we believe that the corresponding implicit reward $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ should already obtain a certain generalization ability. Therefore, we ran a simple test to show the generalization power of the reward model learned from the proposed algorithm: we construct the implicit reward via $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ and compare different $\pi_{\mathbf{\theta}}$ (SFT, SPIN and IRFT) on the **ultrafeedback** dataset, which is a preference dataset (note that we **did not train on this dataset**). We believe that a reward model $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ with better generalization ability should be able to distinguish the preferred from the rejected continuations, so we compute the ratio of $r(\text{preferred})>r(\text{rejected})$ and obtain the following table:
| Model | SFT | SPIN (IRFT with $T=1$) | IRFT ($T=5$) |
| -------- | ------- | ------- | ------- |
| Ratio $r(\text{preferred})>r(\text{rejected})$ | 42.6% | 42.8% | 55.6% |
where we can see that IRFT **significantly improves the implicit reward's ability to distinguish chosen from rejected continuations, indicating superior generalization ability.**
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors' rebuttal, most of my doubts have been resolved, but I still don't think the performance gap is significant. I will raise this point in the discussion phase. Whatever the discussion result is, my final score will align with the other reviewers.
Rebuttal: We thank all the reviewers for their valuable comments. In this general rebuttal, we provide answers and results for questions raised by multiple reviewers. We hope our extra analysis and results help you better understand the contribution of the work, which we believe is significant to the community.
# Additional experiments to show the effectiveness of proposed algorithms
## LoRA experiments
We first sincerely apologize for an important typo in our initial draft. In Table 3, we claimed that we used LoRA for 7b models. This is not true, since **we actually performed full fine-tuning** with the proposed method. We did use LoRA to validate the method but did not collect the results. We have now collected the correct results, testing our algorithm with LoRA (r=64), as follows:
| Task | Arc | TruthfulQA | Winogrande | GSM8k | Hellaswag | MMLU | Average |
| -------- | ------- | ------- | ------- | ------- | ------- | ------- | ------- |
| zephyr-7b-sft-full | 74.83 | 34.07 | 76.09 | 31.92 | 81.09 | 58.86 | 59.48 |
| SFT | 75.20 | 34.18 | 76.16 | 34.95 | 80.96 | 57.71 | 59.86 |
| IRFT (T=1, SPIN) | 75.31 | 35.67 | 75.85 | 34.5 | 81.98 | 57.46 | 60.13 |
| IRFT (T=5) | 74.92 | 37.96 | 76.95 | 35.25 | 82.48 | 57.66 | **60.87** |
We compared standard SFT, SPIN (IRFT with T=1) and IRFT with T=5. In this setting we see a significant improvement over the pretrained model (zephyr-7b-sft-full): a **2.3% lift from the baseline and a 1% lift from SPIN**. SFT, in contrast, achieves less than a 1% lift from the baseline. The reason might be that, given the limited trainable capacity when using LoRA, contrastive training better helps the model distinguish preferred from non-preferred continuations, yielding better performance than standard SFT.
As a side note, we **do not** anticipate significantly outperforming SPIN, since algorithmically our proposed IRFT method includes SPIN as a **special case**. Rather, one of our main objectives is to provide a theoretical foundation for contrastive-type training algorithms, such as SPIN, which can all be studied under the bilevel inverse RL framework. The comparison with SPIN largely indicates that SPIN is still an RL-based fine-tuning method, suggesting an alternative interpretation that leads to provable convergence in lieu of the two-player game interpretation in [1].
## The accuracy/generalization ability of the learned reward model
We conduct a simple experiment to show how accurate the reward learned by our model is; this experiment also addresses the generalization ability of the reward. Since we train our 7b model on a high-quality dataset (**ultrachat**), we believe that the corresponding implicit reward $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ should already be quite accurate at distinguishing the preferred from the rejected continuations. Therefore, we ran a simple test: we construct the implicit reward via $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ and compare different $\pi_{\mathbf{\theta}}$ (pretrained, SFT, SPIN and IRFT) on the **ultrafeedback** dataset, which is a preference dataset (note that we **did not train on this dataset**). We believe that an accurate reward model $r=\log(\frac{\pi_{\mathbf{\theta}}}{\pi_{\text{ref}}})$ should be able to distinguish the preferred from the rejected continuations, so we compute the ratio of $r(\text{preferred})>r(\text{rejected})$ and obtain the following table:
| Model | SFT | SPIN (IRFT with $T=1$) | IRFT ($T=5$) |
| -------- | ------- | ------- | ------- |
| Ratio $r(\text{preferred})>r(\text{rejected})$ | 42.6% | 42.8% | 55.6% |
where we can see that IRFT **significantly improves the implicit reward's distinguishability of chosen over rejected**. We hope this addresses the reviewer's concern.
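For concreteness, the ratio above can be computed with a short script. The following is a minimal sketch, not the actual evaluation code; `logp_policy` and `logp_ref` are illustrative placeholders for the per-sequence (summed token) log-probabilities of a continuation under the fine-tuned and reference models, and the numbers in the toy example are made up:

```python
def implicit_reward(logp_policy, logp_ref):
    # r = log(pi_theta(y|x) / pi_ref(y|x)) = logp_policy - logp_ref
    return logp_policy - logp_ref

def win_ratio(chosen, rejected):
    """Fraction of preference pairs where the implicit reward of the
    chosen continuation exceeds that of the rejected one.

    `chosen` / `rejected` are lists of (logp_policy, logp_ref) tuples,
    one entry per preference pair."""
    wins = sum(
        implicit_reward(*c) > implicit_reward(*r)
        for c, r in zip(chosen, rejected)
    )
    return wins / len(chosen)

# toy example with two pairs: in the first pair the policy assigns
# relatively higher probability to the chosen continuation, in the
# second pair it does not
chosen = [(-10.0, -12.0), (-15.0, -14.0)]
rejected = [(-11.0, -11.5), (-13.0, -15.0)]
print(win_ratio(chosen, rejected))  # 0.5
```

In practice, the log-probabilities would be collected by scoring each ultrafeedback continuation with the fine-tuned and reference models, and the ratio averaged over the whole preference set.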
# Computational cost analysis
For Algorithm 1 in the paper, we need to maintain both a reward model and a policy model (the LLM), which doubles the memory of standard LLM fine-tuning. In short, the memory consumption and computation time of Algorithm 1 are similar to those of the standard RLHF process (RLHF = reward learning + policy optimization). For Algorithm 2, we only need to maintain the policy (LLM) model, so the memory consumption is exactly the **same** as standard SFT, whereas the computation time additionally involves generating continuations for the entire training set, which is at a level similar to the standard policy optimization process (the same computational time as SPIN). Note that the standard policy optimization process takes the time of standard SFT plus a generation pass over all training input prompts.
We thus summarize the memory consumption and computational time of the proposed methods in the table below, *assuming that the reward and policy models are of the same size*. Here *Forward* means the memory required for storing a model in inference mode, and *Backward* the memory required for storing a model in training mode, including weights, activations and gradients; *SFT* means the computational time of standard SFT, and *Generation* the time to generate continuations for each of the input training prompts. *2SFT+Generation* is roughly the same time as standard RLHF.
| Method | Peak Memory | Computational Time |
| ------- | -------- | ------- |
| Algorithm 1 | Forward+Backward | 2SFT+Generation |
| Algorithm 2 | Backward | SFT+Generation |
Reference:
[1] Chen, Zixiang, et al. "Self-play fine-tuning converts weak language models to strong language models." International Conference on Machine Learning (2024). | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Compositional 3D-aware Video Generation with LLM Director | Accept (poster) | Summary: This paper presents an LLM-involved three-stage pipeline for text-guided compositional 3D-aware video generation. In the first stage, an LLM is employed as director to decompose input textual prompts into sub-prompts covering scene, object and motion. Subsequently, it leverages a multi-modal LLM to make an initial estimation of the scales and trajectories for each object. Moreover, 2D diffusion priors are further leveraged to refine the 3D generation results with the SDS loss. Extensive experiments demonstrate the effectiveness of the proposed method in 3D-aware high-fidelity video generation.
Strengths: 1. The idea of decomposing the generation task into scene, object and motion in a more organized way with an LLM is interesting.
2. The method of generating trajectories in a step-by-step manner is a reasonable solution and is validated in the experiments.
3. Using the SDS loss to distill generative priors from pretrained LDM/SD models helps refine the quality of generation and is validated in the ablation study.
4. With an LLM as director and 3D as the structural representation, the proposed method is able to generate 3D-aware videos with diverse motion and high visual quality.
Weaknesses: 1. How are the pretrained expert models chosen? How do those models influence generation performance?
2. Since the proposed method adopts a divide-and-conquer strategy, I wonder how it performs in the case of complex scenes with multiple objects, especially with interactions among objects.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Rendering 2D video from the 3D representation is real-time, as stated in the paper; what about the time spent on 3D representation generation and trajectory generation?
2. What is the influence of the choice of pretrained expert models and of the open-source LLM/MLLM used as director?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer BLtC, thank you for taking the time to review our work and for your positive and insightful feedback. We are pleased to hear that you found our idea interesting and that it demonstrated improved performance. We hope the following comments address your concerns.
**Q: Criteria for choosing pretrained expert models.**
A: In this paper, for scene and object generation, we choose models based on 3D Gaussians considering their explicit structure, which is beneficial for composition and editing. For motion generation, we use models capable of leveraging the generated motions to drive the object's 3D Gaussians.
**Q: Performance with multiple objects.**
A: As shown in Fig. 3(b) in the paper and the results in the attached PDF file, we can still achieve satisfactory results when generating videos with multiple objects and complex scenes. By decomposing the video generation task into several sub-tasks, we can pre-generate multiple objects' motions and then compose them into the same scene to produce a coherent video. In the future, we plan to incorporate more physics-based priors to generate more complex interactions.
**Q: Time spent on 3D generation and trajectory generation.**
A: Since we use pre-trained expert models to generate corresponding 3D representations, the time needed depends on the model itself. Specifically, for trajectory estimation, it takes about one minute for GPT-4V to make an inference. For scene generation using LucidDreamer, it takes about 15 minutes to generate a scene. For object generation with HumanGaussian, it takes approximately one and a half hours to generate an object. However, as noted by Reviewer 6K4s, the performance can be improved by leveraging more powerful expert models, such as [1] for scene generation and [2] for object generation, which will significantly accelerate the generation process. We will explore this in future works.
[1] InstantSplat: Unbounded Sparse-view Pose-free Gaussian Splatting in 40 Seconds
[2] Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars
**Q: Influence on choosing different expert models.**
A: According to our criteria for selecting pretrained expert models, when using models based on 3D Gaussians, no significant performance gap is observed. For trajectory estimation, since we follow a step-by-step approach, we believe that reasonable trajectories can also be generated by open-source MLLMs. We will validate this in future works.
---
Rebuttal Comment 1.1:
Title: response
Comment: I appreciate the authors' efforts in responding to my comments. My concerns have been partially addressed, but the influence of expert models and LLMs/MLLMs is not investigated sufficiently. After reading the other reviewers' comments, I also think the quality of the generation should be further improved. Hence I will change my score to 5, borderline accept.
---
Reply to Comment 1.1.1:
Title: We hope that our response addresses your misunderstanding.
Comment: Dear Reviewer BLtC,
Thanks for your reply! We would like to thank you for your involvement and are happy to see that our response has partially addressed your concerns.
**Q: The influence of different expert models and LLMs/MLLMs.**
We aim to composite 3D scenes, objects and motion with an LLM director in this work. Therefore, **our focus is on how to guide the composition with prior knowledge from LLMs and diffusion models** (by generating transformations and trajectories step by step and refining them with 2D diffusion priors). To achieve this goal, we instantiate our idea with state-of-the-art expert models and LLMs. We have also deeply investigated the composition in Table 1 of the paper, as well as Table 1 and Table 2 in our response to Reviewer 7svJ, which is acknowledged by Reviewer 7svJ.
We truly appreciate the suggestions on the investigation of various expert models and LLMs/MLLMs, which may further enrich our work. However, as it is not the focus in this work, we are disheartened that this point may have an impact on your evaluation.
**Q: The quality of the generation.**
As detailed in our responses to Reviewer 7svJ and as shown in Table 1 of the paper, we generated **400 videos** featuring diverse scenes, objects, and motions. The average scores for both CLIP Score and Q-Align Score are presented below:
**Table. 1**: Quantitative Comparisons with Competitors.
| Metric | 4D-FY | Comp4D | VideoCrafter2 | Ours |
|---------------------------|----------------------------|----------------------------|--------------------------------------------|--------|
| QAlign-img-quality $\uparrow$ | 1.681 | 1.687 | 3.839 | **4.030** |
| QAlign-img-aesthetic $\uparrow$ | 1.475 | 1.258 | 3.199 | **3.471** |
| QAlign-vid-quality $\uparrow$ | 2.154 | 2.142 | 3.868 | **4.112** |
| QAlign-vid-aesthetic $\uparrow$ | 1.580 | 1.425 | 3.159 | **3.723** |
| CLIP Score $\uparrow$ | 30.47 | 27.50 | 35.20 | **38.36** |
The experimental results illustrate that our proposed method represents a significant advancement over existing state-of-the-art / concurrent approaches, achieving notable improvements in terms of quality, aesthetics, and alignment with input text prompts.
If you have any further questions, please feel free to reach out. | Summary: This paper proposes a method to synthesize dynamic 3D videos, with moving objects and camera. It treats the tasks compositionally, generating the (static) background and foreground figures separately. The generation process is orchestrated by an LLM, which provides prompts to separate models that specialize in different scene components (and which are pretrained); these components each use gaussian spats allowing assembly into a single dynamic 4D scene. This compositionality also enables controllable generation, where certain scene elements are replaced depending on user input. The method is demonstrated on several text prompts, and shown to out-perform three baselines.
Strengths: - The idea of treating 3D/4D video generation as a compositional task is elegant, and it is sensible to leverage existing strong domain-specific models; it is also a nice idea to use an LLM here to automatically determine a suitable sequence of domain models to apply, and how to combine them.
- The proposed pipeline is novel, and fairly natural for the task. The choice of stages/components is clearly motivated.
- The method successfully generates videos from at least two text prompts, showing somewhat plausible motion. Qualitative results from two prompts show significantly better visual quality than the selected baselines (VideoCrafter, Comp4D, 4Dfy)
- Quantitative results based on CLIP score (adherence of frames to prompt) and Q-Align again exceed the baselines
- As well as text-conditioned generation, the method also supports certain other kinds of controllability. Since the scene representation is compositional, foreground humanoids can be replaced by others, motion can be modified, and the background can be replaced. This affords a degree of precise control that is missing from 'monolithic' video generation models
- Ablation experiments were conducted, removing three components, aiming to establish their importance in the overall pipeline
- The writing is generally clear (modulo a few grammar issues); the paper is well organized.
Weaknesses: - Very few qualitative examples are given (and presumably these were cherry-picked rather than random). In particular, only two text-conditioned generations are shown (same in paper and supplementary). This makes it difficult for the reader to judge the visual quality of the model outputs, which is vital for such a task
- Even in the given two examples, prompt adherence is poor, with the "skeleton man" missing, incorrect positioning ("in front" of the stage vs at the front; "in" a cabin vs outside), and unrealistic lighting (no shadowing). This is problematic given that object compositionality being guided by the LLM is claimed as a key contribution of the work
- It is unclear how many videos were used for the quantitative evaluation, nor where the set of prompts was drawn from. This makes it hard to judge the significance of these results.
- The ablation experiments are based on a single prompt and are qualitative only. This means they are statistically meaningless, and the benefits of the different components need to be demonstrated more rigorously.
- The exact prompting strategy for the LLM is unclear, in particular the initial stage of creating the sub-tasks, and the creation of the scale/trajectory estimation prompt.
- The method is limited by the choice of 'experts' that synthesize parts of the scene (currently static background from LucidDreamer, humanoids from HumanGaussian, and humanoid motion from Motion-X). While 'delegating' generation subtasks is a neat idea, it seems that significant engineering work is required to incorporate each 'expert', and there is not a clear path to adding e.g. other dynamic object types such as quadruped animals.
Technical Quality: 2
Clarity: 3
Questions for Authors: - How many prompts were in the evaluation set? How were these selected?
- What are the quantitative results from the ablation study?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: There is adequate discussion of limitations. There is an exceedingly brief discussion of broader impacts, borderline adequate for this task.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 7svJ, thank you for your thoughtful feedback and for looking into every detail of our work. We are pleased that you found our idea elegant, novel, clearly motivated, and well-organized. We hope the following comments address your concerns.
**Q: More qualitative examples should be provided.**
A: Thanks for pointing this out. In fact, the two examples in this paper were randomly selected rather than cherry-picked. In the attached PDF file, we provide additional five examples with different 3D viewpoints and corresponding depth maps, demonstrating similar performance. We will release more results in the future version of our paper.
**Q: Prompt adherence is poor.**
A: Compared to competitors like 4D-fy, Comp4D, and VideoCrafter2, our method significantly outperforms them in generating results that more accurately adhere to the text prompts. However, despite this improvement, some inconsistencies in detail are currently unavoidable due to the limitations of the expert models adopted. For example, the generated "skeleton man" from HumanGaussian is indeed the actor on the right side of the stage (Fig. 3(b) in the paper), and it will be difficult for the LLM to estimate a trajectory "inside the cabin" when the 2D image it relies on represents scenes "outside the cabin." Additionally, this paper does not consider factors such as illumination, as this is the first work towards compositional 3D-aware video generation, and we only consider several basic properties. As mentioned by Reviewer 6K4s, our pipeline can be easily improved by incorporating better modules for each component, and we plan to address this in future works.
**Q: Ways to obtain prompts and number of videos used for quantitative evaluation.**
A: Since no public benchmarks are available for compositional 3D-aware video generation, we first obtain sub-prompts for each expert model following their schemes. These sub-prompts are then used as key terms and composed into a complete input prompt using LLMs. For quantitative evaluation, we **randomly generated 400 videos** with varied scenes, objects, or motions, avoiding cherry-picking, and reported the average value of CLIP Score and Q-Align Score.
**Q: Quantitative ablation studies should be provided.**
A: We provide the ablation experiments quantitatively as follows:
**Table. 1**: Quantitative comparisons of ablation studies on trajectory estimation with multi-modal LLM
| Methods | Direct estimation | Estimation using bounding box | Step-by-step estimation | Ours |
|-----------------------------------------|--------------------|-------------------------------|--------------------------|-------|
| **QAlign-img-quality** $\uparrow$ | 2.056 | 2.894 | 3.752 | **4.030** |
| **QAlign-img-aesthetic** $\uparrow$ | 1.568 | 2.156 | 3.047 | **3.471** |
| **QAlign-vid-quality** $\uparrow$ | 2.178 | 3.043 | 3.904 | **4.112** |
| **QAlign-vid-aesthetic** $\uparrow$ | 1.680 | 2.346 | 3.342 | **3.723** |
| **CLIP Score** $\uparrow$ | 25.68 | 29.84 | 36.73 | **38.36** |
**Table. 2**: Quantitative comparisons of ablation studies on composition with 2D diffusion models
| Methods | Without SDS | With scale refinement | With trajectory refinement | Ours |
|-----------------------------------|--------------|------------------------|-----------------------------|-------|
| **QAlign-img-quality** $\uparrow$ | 3.045 | 3.674 | 3.826 | **4.030** |
| **QAlign-img-aesthetic** $\uparrow$ | 2.752 | 3.046 | 3.341 | **3.471** |
| **QAlign-vid-quality** $\uparrow$ | 3.129 | 3.794 | 3.983 | **4.112** |
| **QAlign-vid-aesthetic** $\uparrow$ | 2.704 | 3.468 | 3.603 | **3.723** |
| **CLIP Score** $\uparrow$ | 31.35 | 35.27 | 37.04 | **38.36** |
As shown in the tables above, using a direct prompt with a multi-modal LLM for trajectory estimation results in clearly unsatisfactory outcomes. Relying solely on bounding boxes to indicate object locations within the scene yields improved but still limited performance. While the step-by-step estimation strategy offers noticeable improvements, the best results are achieved by combining both approaches. Similarly, for SDS-based refinement, applying SDS incrementally to adjust scale, location, and rotation results in substantial performance improvements.
**Q: The exact prompting strategy for the LLM.**
A: As demonstrated in Line 136, for an input prompt, we query the LLM with the instruction: *"Please decompose this prompt into several sub-prompts, each describing the scene, objects in the scene, and the objects' motion."* From this, we obtain the corresponding sub-prompts. The creation of the scale/trajectory estimation prompt is shown in Fig. 2 in the paper.
**Q: Method is limited by expert models.**
A: As recognized by Reviewer 6K4s, we can easily improve the performance of our method by leveraging more powerful expert models, without the need for significant engineering work. To add other dynamic objects such as animals, we can use methods such as [1], which is a 3D Gaussian-based method that can drive animals. We will explore this in future works.
[1] GART: Gaussian Articulated Template Models, CVPR 2024
---
Rebuttal Comment 1.1:
Title: Response to rebuttal
Comment: I thank the authors for their detailed responses. In particular the additional qualitative results are appreciated, as well as the ablations. While I still have concerns about quality (particularly in terms of prompt adherence), the rebuttal largely addresses my concerns.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response
Comment: Dear Reviewer 7svJ,
Thanks for your reply! We are pleased to hear that our response has largely addressed your concerns. We would be very grateful if you could raise your rating accordingly.
If you have any further questions, please feel free to reach out—we are open to continued discussion.
Best,
Authors
---
Rebuttal 2:
Title: We hope that our response addresses your concern
Comment: Dear Reviewer 7svJ,
We greatly appreciate the time you've invested in reviewing our paper. Having submitted our rebuttal, we are eager to know if our response has addressed your concern. As the end of the rebuttal phase is approaching, we look forward to hearing from you for any further clarification that you might require.
Best,
Authors | Summary: In this work, the authors propose a framework for 3D-aware video generation using guidance from LLM. Specifically, this work follows previous studies on LLM for video generation that takes language model as a director to do below sub-tasks:
1) expand and decompose the prompt into different aspects, and then use off-the-shelf expert models to generate objects/motions/scenes.
2) plan the trajectory and other scene configurations.
3) put all of them together and update with SDS loss.
Experiments are conducted to verify the effectiveness of this work.
Strengths: 1) The focused setting is novel, which enables LLM-guided text-to-video generation with 3D awareness.
2) The proposed pipeline generally makes sense to me.
Weaknesses: 1) The novelty of this work is limited. It seems to be a combination of 3D scene generation + 3D avatar generation + motion generation + LLM + SDS. It is subject to the performance upper bound set by each expert model, and combining them all together could make the quality even worse.
2) In TC4D, the trajectory can also be generated by LLMs, which should not be a drawback of that work.
Technical Quality: 2
Clarity: 3
Questions for Authors: How do the authors handle the situation where scene configurations are not properly estimated?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer N95p, thank you for taking the time to review our work and for providing thoughtful feedback. We are pleased that you found our setting novel and the pipeline coherent. We address your concerns as follows.
**Q: Novelty of this work.**
A: As noted by Reviewers 7svJ and BLtC, our work transcends a mere combination of expert models. It is rooted in the concept that our understanding of the world is inherently compositional, and we achieve this by leveraging priors from expert models. Specifically, to seamlessly combine different models into a cohesive whole, we introduce a novel method that employs LLMs as a step-by-step director, capable of generating plausible trajectories with only the background scene. Subsequently, to enhance the details of the composed dynamic scenes, we propose using 2D diffusion priors (i.e., SDS) to ensure that the rendered images more closely match natural image distributions. Additionally, as acknowledged by Reviewer 6K4s, our pipeline can be easily enhanced by developing better modules for each component, highlighting the potential of our method.
**Q: Trajectory in TC4D.**
A: Thank you for bringing this to our attention, and we apologize for any confusion caused. As an outstanding approach for text-to-4D generation, TC4D enables trajectory-conditioned 4D creation, accommodating trajectories that are either user-defined or estimated directly by LLMs, and facilitating applications such as generating 4D scenes from arbitrary trajectories and synthesizing compositional 4D scenes. Our method differs from TC4D in several key aspects: **1)** we use 3D Gaussians instead of a deformable NeRF for representing 4D scenes, as 3D Gaussians offer real-time rendering and easier editing; **2)** for trajectory estimation, we query the LLM in a step-by-step manner, which can estimate reasonable trajectories using only the background scene; **3)** in addition to object-level composition, our method achieves composition at the scene level, including interactions between complex scenes and objects, marking a novel advancement in scene-level composition. We will make the necessary corrections in a future version to address this issue and ensure that our work meets the highest standards of accuracy and clarity.
**Q: Solutions when scene configurations are not estimated properly.**
A: Based on the estimated scene configurations provided by LLMs, we will use SDS for refinement to ensure the composed 4D scene renders more realistic images that align with human intuition. Specifically, in this paper, we treat properties such as scale, location, and rotation as optimizable variables, refined by SDS. In the future, we plan to consider additional factors, such as illumination and spherical harmonic coefficients, to achieve more realistic video generation.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the reply and I acknowledge that I've read the rebuttal. Some of my concerns are resolved, but the novelty and contribution still seem somewhat limited to me after reading the other reviewers' comments. I hence maintain my score.
---
Reply to Comment 1.1.1:
Title: Thanks for Your Response
Comment: Dear Reviewer N95p,
Thanks for your reply! We truly value your involvement in this discussion. We are happy to know that our response has resolved some of your concerns.
If you have any further questions, please feel free to reach out—we are open to continued discussion.
Best,
Authors | Summary: The paper presents a pipeline for 3D-aware video generation by composing scenes, objects, and motions. One key idea is to use existing LLM to provide coarse guidance on the scale and trajectory of objects, and then refine the coarse rendering with SDS. The method is compared with multiple methods.
Strengths: The idea is explainable as it composes the scene, objects, and motions in an explicit manner. So potentially it is easy to improve the pipeline by building better modules for each component.
The paper shows both quantitative results and qualitative examples. The paper is overall easy to read.
Weaknesses: $\textbf{Method}$
1. It would be great to clarify what are considered to be the main contributions of the paper. The pipeline uses multiple existing modules for 3D generation, and then using LLM to generate a series of bounding boxes for the guidance of the rendering. The idea of using LLM to generate a trajectory does not look novel enough given the existing volume of literature.
2. For the refinement stage, are the scale and 3D location refined for each image individually? If so, how do the scale and location maintain temporal consistency?
3. How is the motion made compatible with the scene? Suppose it is rough terrain or a mountainous area; does the motion have to be adjusted?
4. To generate the bounding box trajectory, is it correct that a single image is used as input, meaning no 3D information is used by the LLM? If the 3D scene already exists, do you think adding 3D information and generating 3D locations could be a better option than 2D bounding boxes?
$\textbf{Experiments}$
1. One concern is that the overall quality of the human actors is still limited. Since the resolution is not high enough, it is hard to tell whether the method is harmonizing the image well given the examples presented.
2. It is also hard to see the 3D viewpoint changes in the given examples; for most examples, the viewpoints do not change much across the sequence of renderings. I imagine identity-preserving character inpainting methods can show similar examples? Maybe show more examples demonstrating the uniqueness of the proposed method.
3. The baselines do not include motion priors as strong as the proposed method's, so I am not sure the comparison is fair. How does the method compare to methods with explicit 3D priors [1]?
[1] Image Sculpting: Precise Object Editing with 3D Geometry Control. CVPR 2024.
Technical Quality: 3
Clarity: 3
Questions for Authors: I think the paper tackles an interesting problem but the effectiveness of the solution remains to be justified.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 6K4s, we are grateful for your careful review and the valuable feedback that you provided for our paper. We appreciate that you found our paper easy to read and our ideas explainable. We hope the following comments address your concerns.
**Q: Main contributions of this paper.**
A: As recognized by Reviewer 7svJ and Reviewer BLtC, the main contributions of this paper can be summarized as follows:
1. We approach video generation as a compositional task by leveraging existing strong domain-specific models. To the best of our knowledge, this is the first work that can realize video/4D generation from the perspective of compositional scene, object, and motion generation.
2. To compose different models into a harmonic whole automatically, we provide a novel method that utilizes LLMs as a director in a step-by-step manner, which is able to generate reasonable trajectories based solely on the background scene.
3. To further refine details of composed dynamic scenes, we propose leveraging 2D diffusion priors (i.e., SDS) to ensure the rendered images align more closely with natural image distributions.
4. We have conducted extensive experiments to demonstrate the effectiveness of our method, showing that high-fidelity videos with diverse motion and flexible control over each concept can be achieved from text.
**Q: Are the scale and 3D location refined for each image individually?**
A: No. Since the scale describes the relative size of the object within the 3D scene, it should remain consistent across different time steps. Therefore, we refine the scale for the first frame and apply it to all subsequent frames. For the 3D location, we optimize it for each image individually to achieve better composition.
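As an illustrative aside (our sketch, not the authors' code), the parameterization described here can be mimicked with one scale variable shared across frames and one 3D location per frame, each updated by gradient descent on a stand-in quadratic loss playing the role of the SDS objective; the targets below are hypothetical placeholders:

```python
# Toy sketch (illustrative only, not the paper's implementation): a single
# scale shared across all frames, plus a per-frame 3D location, both refined
# by gradient descent on a stand-in loss in place of the SDS objective.
import numpy as np

num_frames = 8
rng = np.random.default_rng(0)

scale = 1.0                              # shared: refined once, reused for every frame
locations = np.zeros((num_frames, 3))    # per-frame: optimized individually

# Hypothetical targets standing in for whatever the refinement loss prefers.
target_scale = 0.7
target_locs = rng.normal(0.0, 1.0, (num_frames, 3))

lr = 0.1
for _ in range(300):
    scale -= lr * 2.0 * (scale - target_scale)          # one gradient shared by all frames
    locations -= lr * 2.0 * (locations - target_locs)   # independent per-frame gradients

assert abs(scale - target_scale) < 1e-4
assert np.allclose(locations, target_locs, atol=1e-4)
```

Keeping the scale as a single shared variable enforces temporal consistency by construction, while the per-frame locations remain free to trace a trajectory.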
**Q: How to accommodate both scenes and motions?**
A: Since the motion is generated using expert models, complex object motions will be produced for more intricate scenes. We then use SDS to enhance the compatibility of these motions with the scenes.
**Q: Whether 3D information is used for LLM?**
A: No. In this paper, we use only a single image as input without 3D information available. However, we believe that injecting 3D information into LLM would be beneficial for better composition, which we will explore in the future. Thank you for pointing this out!
**Q: The resolution is limited.**
A: In this paper, all generated videos have a resolution of 512 × 512, which is higher than that of many previous methods (e.g., Comp4D, 4D-fy, etc.). We also provide more examples in the attached PDF file to demonstrate the effectiveness of our method.
**Q: Hard to see 3D viewpoints changing.**
A: In this paper, we focused on highlighting the composition result by cutting out the region of the moving object, which may have made viewpoint changes less noticeable. We have included additional examples with more varied viewpoints in the attached PDF. Furthermore, we believe that applying identity-preserving character inpainting methods directly may not be effective, as 3D consistency is difficult to ensure.
**Q: Comparisons with methods using different motion priors.**
A: Actually, methods like Comp4D and 4D-fy also use strong implicit motion priors derived from pre-trained video diffusion models, whereas we use explicit motion priors to achieve better composition and controllability. In Image Sculpting, 2D objects are projected into 3D for image editing; we share a similar concept of 3D structural representation, but focus on composing various 3D concepts into a 4D scene for video generation. Since Image Sculpting is not designed for video generation, a direct comparison is challenging. We plan to explore integrating methods from Image Sculpting in the future to incorporate more robust motion priors.
Rebuttal: We extend our sincere thanks to all the reviewers for their time and effort. We appreciate your positive feedback, noting that our work was described as "novel and elegant" (N95p, 7svJ, BLtC), "effective and achieving better results" (N95p, 7svJ, BLtC), and "easy to read and well-organized" (6K4s, 7svJ). In response to your comments, we have addressed each reviewer's concerns individually and uploaded a PDF file with additional qualitative examples showcasing the superiority of our proposed method. We would be grateful if you would consider increasing your score based on our responses.
Pdf: /pdf/3bc21068194ea68776738c5f60fcd2a871f94067.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Optimal ablation for interpretability | Accept (spotlight) | Summary: This paper presents an alternative approach to perform component ablations for neural network interpretability. Specifically, the authors consider a neural network as a causal graph and propose *optimal ablations* which simulate component removal by setting the value of a node in the computational graph to a constant value. In contrast to alternative task-agnostic ablation methods (e.g. zero and mean ablations), this constant is learned using gradient descent. The authors argue that alternative task-agnostic ablation methods could confuse downstream computations as these specific values might not have been observed in the training data and then demonstrate how optimal ablations can lead to improvements across a range of different application domains: 1. They evaluate single-component ablations on the indirect object identification (IOI) task and find that optimal ablations perform better than other subtask-agnostic methods (although still worse than subtask-specific approaches); 2. They perform circuit discovery on the IOI and Greater-Than tasks using a new method, which they term uniform gradient sampling (UGS), which outperforms existing circuit discovery approaches; 3. They study factual recall, or more specifically localize where factual associations are stored within a language model, and in contrast to previous studies observe that adjacent components often have very different patching losses which suggests that factual associations are likely stored in single layers, as opposed to being spread across multiple layers as previously hypothesized; 4. Finally, the authors apply optimal ablations to the domain of decoding latent predictions. Here, they propose the optimal constant attention (OCA) lens, which sets all following components to their optimal constant value, instead of skipping these by learning a linear map such as in the tuned lens. 
They find that OCA scores better on KL-divergence and in terms of faithfulness to the model computations.
Strengths: This paper made a strong impression on me as it combines methodological contributions with extensive empirical studies. The authors take a simple but well-motivated idea and then demonstrate how it can lead to improvements across a range of different interpretability application domains. To this end, the authors propose a range of novel methods that leverage this idea, including UGS, and OCA. I believe that these results could have a significant impact on the (mechanistic) interpretability field and should be of interest to various other subfields.
Weaknesses: - Some of the applications lack depth as a result of the number of different methods studied. For example, it would have been interesting to further investigate the observations on factual recall.
- Optimal ablations still appear to perform worse than subtask-specific ablation methods in single-component ablations. However, the authors make a reasonable argument that these are subjective and require manual human effort, making them hard to use in some settings.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Resample ablations are almost on par with optimal ablations in the single-component ablation setting (see Table 1), but perform significantly worse than mean and optimal ablations when it comes to making latent predictions (see Figure 3). Do you have a hypothesis as to why this might be the case?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: I believe the limitations are properly addressed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive and helpful feedback, and we’re glad you appreciated our work!
> Some of the applications lack depth as a result of the number of different methods studied. For example, it would have been interesting to further investigate the observations on factual recall.
## Comment on Figure 2
**We agree that it would be great to show further results on factual recall. We’ve extended our analysis to include localization results for different token positions and sliding window sizes as considered by Meng et al. (see PDF attached to global reply), which we will add to the appendix. We also made a few modifications, like increasing our dataset size for factual prompts from 500 to 4,000 and changing the y-axis to reflect a percent (%) difference between original and corrupted probability recovered rather than a difference in percentage points, and added appropriate standard errors to the figures, which is why the new Figure 2 looks different.**
There are more directions we could pursue, like human mechanistic interpretability analysis to investigate how the recovered probability occurs, but in general, we acknowledge that we have limited space, and want to show as many different applications of OA as possible. We hope we’ve done enough to show that OA is a promising and versatile method; depth will hopefully come from further studies.
> Optimal ablations still appear to perform worse than subtask-specific ablation methods in single-component ablations. However, the authors make a reasonable argument that these are subjective and require human and require manual effort, making them hard to use in some settings.
Note that the only place where OA performs worse than CF is the predictive power of single-component OA loss for inclusion in the manual circuit (the second line of Table 1 in the original submission). However, one element we should point out is that **the manual circuit was discovered by using counterfactual ablations to investigate the model,** which provides a strong inductive bias in the human studies toward selecting components that create high loss when ablated with CF. Thus, it’s not surprising that single-component CF has better predictive power for this particular circuit.
Instead, the point of this analysis is to confirm that OA produces measurements that are related to CF. To clarify, our stance is that, among *previous* methods, CF likely best reflects effect 1 (which we loosely consider “ground truth importance”) from the rewritten part of section 2.2 (see global reply), and we will clarify this point in the paper. Thus, it would make sense that, if OA were now the closest method to “ground truth importance,” then OA would also be the method closest to CF among existing methods, and we present the circuit prediction result in Table 1 to confirm this occurrence. However, after further reflection, we think this result may only confuse the reader, since we do not propose to use single-component ablation loss for circuit discovery, and propose to remove it from the paper (while keeping the correlation result).
> Resample ablations are almost on par with optimal ablations in the single-component ablation setting (see Table 1), but perform significantly worse than mean and optimal ablations when it comes to making latent predictions (see Figure 3). Do you have a hypothesis as to why this might be the case?
Yes – one general theme is that replacing activations from one sequence with those from another sequence produces the least loss of coherence when the sequences have a similar format or share many tokens in common. The single-component ablation experiment uses IOI, a dataset with 13 similar-looking prompt templates, so we are resampling between sequences that mostly look the same. But in the lens experiments, we are trying to perform latent prediction on the entire OpenWebText dataset, and we are resampling activations between sequences that may have completely different structure and content, hence the significantly worsened performance of resample ablation. This result furthers the notion that OA is more universally applicable than resample ablation and, in particular, can be used for more diverse datasets.
---
Rebuttal Comment 1.1:
Comment: Thank you for the additional experiments on factual recall and for clarifying some of the observations in the paper. I have read the rebuttal, as well as the other reviews. I believe that the ideas in the paper are well motivated and that it makes a strong contribution to the field. Thus, I strongly suggest to accept the paper for the conference and increase my score accordingly. | Summary: This paper explores a new approach to replacing features in the forward pass of a neural network, for purposes of interpretability. The approach is, rather than zeroing out features or replacing them with some pre-specified constant or random variable, to optimize a replacement constant to minimize the loss of the model over the subtask of interest. So when ablating an attention head or MLP layer, the ablated component’s outputs are replaced with a constant that is optimized to maximize model performance over some data of interest. This is done to be “minimally disruptive to model inference”, e.g. avoid disrupting model inference in the way that OOD replacement values are known to disrupt model inference, as has long been lamented by past work in interpretability. This new feature replacement method is demonstrated in three case studies, focused on circuit analysis, factual association localization, and something to do with the tuned lens that I could not quite understand. Results suggest that this method reveals clearer/sharper/stronger phenomena in circuit analysis and factual association localization, a promising result for interpretability research.
Strengths: - Very important: The paper’s main strength is that it does something new and interesting in what has been a very saturated space for years. For a long time — since at least 2007 by my count — people have wondered how to remove features (or the outputs of some feature extractor / neural model) from a model, in order to do something like estimate that feature’s effect on the model. The debate rages on, now re-enacted in the era of mechanistic interpretability. While I’m still not sure we have a general theory for this, this paper makes a reasonable proposal on how to do this in a task and model agnostic way, with some promising early results for downstream applications. More specifically, avoiding disrupting model computation “too much” has stood out as a goal of feature replacement methods, and this paper proposes a new method for doing so and provides some interesting case studies showing that this may produce more stronger results in different kinds of interpretability analyses.
- Important: The paper is ambitious in a way rarely seen in ML papers, with a new method being applied across three case studies. The paper covers a lot of ground quickly.
- Important: The paper is well-written and the notation is clear.
- Important: I am sympathetic to the view that subtask-specific interchange interventions can be difficult to construct (a view argued for in the paper, that motivates the proposed method). At the very least, they take some manual effort to construct and are subjective. A drop-in automated replacement could be very useful. For what it’s worth, an argument I would add to the paper is that task-specific counterfactuals can sometimes be difficult to interpret as well. Does changing a name from Alice to Carol have *only* the effect of removing task-relevant information? We can imagine some hidden effects appearing with counterfactuals that we try to construct to remove only one variable, especially for more complex tasks.
- Important: The results on circuit analysis and localization are pretty promising, in my view. (I have a very hard time interpreting the results regarding the tuned lens — I don’t understand that experiment’s setup or its broader purpose.)
- Important: Experiments appear to be conducted with great care. Many small design choices seem good and clever, like the ReLU in Eq5 preventing rewarding the optimization process for finding a value that leads to over-restoration of model predictions during the causal intervention.
- Important: Appendix D shows that the optimization process is unlikely to adversarially induce good task performance in the model, and therefore unlikely to produce interpretability artifacts, because the causal intervention does not seem expressive enough to do so.
Weaknesses: - Very important: In my opinion, the approach presented cannot be the right one in general, preventing the proposal from providing a generic solution that circumvents the need to design subtask-specific interchange interventions / counterfactuals. There are two reasons for this, one a criticism of the argument in the paper and the other a counterexample. First, the argument in the paper, while being mathematically clear, is not philosophically rigorous. It is said that we want to “simulate the removal of” model components. What does that mean? I could imagine removing a layer from a Transformer by not running it — by virtue of the residual connection between layers, this is equivalent to zero-ablating the outputs of that layer. Yet the authors take issue with zero ablation, so removing a component must mean something else. Then, to remove a component, it is said that we should be “minimally disruptive to model inference”. What does that mean? Evidently we are disrupting the model inference. What does it mean to not disrupt it too much? There are some other specific claims that are difficult to interpret precisely, such as avoiding an intervention that “disrupts this flow of information” or using an intervention that provides information that is “inconsistent with information the model derives from other vertices.” To be clear, I basically do like these kinds of arguments in ML papers, because we need arguments for what is right and wrong to do stated in plain English, but I am not sure this one is adequate for proving the point that optimal ablation is the right way to go about things.
Second, a counterexample can build on an intuition hinted at in the paper that “there may not exist a constant value that perfectly conveys a lack of information.” The problem is that it is not clear that optimal ablation is the answer to an intelligible question. For example, suppose we have a neural model that classifies people into “give loan” and “reject loan” categories, and that we know that a neuron in this model represents a person’s annual income in USD. So, we know what 0 means, and we know what 100,000 means (it’s their income). Now let’s say there’s a component that takes in this value, and we want to know this value’s “importance” to that component (or later model outputs), so we replace this value with the optimal ablation constant c. What value of c would be correct for this? The question doesn’t really make sense, and it’s because it doesn’t make sense to ask how “important” a component is. Importance is not a well-defined concept in causation. What would make sense is to ask what setting the income value to 100,000, when it was 60,000, for a particular datapoint $x$ does to the output of that downstream component. I would posit that the important thing about counterfactual patching / interchange intervention is not that it “selects uninformative counterfactual inputs“ but rather that it selects *known* counterfactual inputs, i.e. inputs we know the meaning of. This allows us to say precisely what the causal intervention means, as well as what its effect was. This is the argument in https://proceedings.neurips.cc/paper_files/paper/2021/file/4f5c422f4d49a5a807eda27434231040-Paper.pdf, as far as I can tell.
- Important: while results are plenty promising on the first two case studies, optimal ablation seems to hardly improve over marginal resampling based on Table 1, and the improvement is extremely marginal looking at the rank correlation in Fig. 6, where a linear correlation seems inappropriate for the log-log relationships. So why, then, does optimal ablation perform far better than counterfactual patching in Fig. 1, when counterfactual patching was just treated as something of a ground truth in Sec. 2? This apparent over-performance in circuit analysis seems like it could be an interpretability artifact, in spite of Appendix D suggesting overfitting is unlikely.
- Important: I really could not tell what the third case study was trying to show, or how it tried to show it. See next point.
- Of some importance: The paper suffers from trying to do so much in nine pages. I think it would be difficult for someone without a deep background in interpretability to read this paper and get as much out of it as the authors would hope. Unless the authors feel strongly about the third case study, I would totally remove that and focus on providing more detail to earlier sections of the paper.
- Of some importance: It is hard to give any credit to the paper for the introduction of the uniform gradient sampling (UGS) method because it is introduced without nearly enough detail, and without basically any motivation, in the main body of the paper. And it is evaluated only in one case study. I can’t tell if the UGS direction is big enough to split off into another paper, but it doesn’t feel like it belongs in this one.
- Of some importance: The organization of the paper is a little unorthodox, without a related work or conclusion section. While the introduction does a good job situating the paper in the literature, I would recommend discussing some earlier work on how this entire debate has played out with feature attribution methods before the intensified focus on “mechanistic” interpretability (i.e. methods focused on data space ablations and not only hidden feature ablations), including (1) https://arxiv.org/pdf/1806.10758 (2) https://arxiv.org/pdf/1910.13413 (3) https://arxiv.org/pdf/2106.00786.
Technical Quality: 3
Clarity: 3
Questions for Authors: - See questions in the above.
- A note on the references: I think it’s extremely non-standard for the references section to be full of references that do not appear in the paper. I would say that this should be fixed.
- In the intro, I think causal tracing is described as a noising ablation, when it is a denoising ablation.
- (Goldowsky-Dill et al., 2023) recommends → Goldowsky-Dill et al. (2023) recommend
- “constant to which to ablate” → constant for ablating?
- "confluence of conflicting information may cause model representations to spiral out-of-distribution” — are the hidden states during a forward pass in circuit analysis with optimal ablation in-distribution?
- The sentence beginning with “To assess the magnitude component” is not at all clear.
- The paper is not in correct submission format, as it is missing line numbers.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The limitations could discuss some higher level points around feature replacement and what it could be useful, vs. what the paper empirically shows it may be useful for. It could also include some of the caveats about the proposed method from Sec. 2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive and helpful feedback, and we appreciate your optimism about our work!
>First, the argument in the paper, while being mathematically clear, is not philosophically rigorous
We agree and have rewritten this section (see global reply). The term “total ablation” clarifies what we consider ablation: replacing the value of component A with a random variable that is independent from the input X, so the new value of A cannot provide any information about X. We think the revised section 2.2 crystallizes why it’s desirable to reduce ∆.
>it is not clear that optimal ablation is the answer to an intelligible question. For example, suppose we have a neural model that classifies people into “give loan” and “reject loan” categories, and that we know that a neuron in this model represents a person’s annual income in USD
Interesting example! In this case, we think OA does, in fact, answer an intelligible question: “How much worse does the model perform if it had to treat everyone as having the same income?” which is arguably extremely similar to “How much worse would the model perform if it had no information to distinguish anyone’s income?,” i.e. "How important is information about income to the model's performance?"
> it doesn’t make sense to ask how “important” a component is. Importance is not well-defined concept
We agree it’s not clear “importance” is uniquely defined, but we think it’s still helpful to discuss! If we know a component represents income, there are specific interventions we can assess (e.g. input-counterfactual pairs). But if we aren’t sure what it represents, as is typically the case in interpretability work, we may make mistakes like (as you mentioned) changing its value from Alice to Carol without changing a different component that stores the “word starts with A” feature to a “word starts with C” feature, making the model lose coherence (whereas OA could have changed the component from Alice to a value that maintains consistency with whatever other components reflect). Even if we know what each component represents, it’s often too complicated to assess causal interventions between every pair of inputs, and it’s helpful to have a simple conception of importance that somehow aggregates causal effects to guide our search for relevant components a model uses to perform an algorithm. Our claim is that the average causal effect of setting an activation for every input to the same constant adequately summarizes importance (similar claims are made to justify other ablation methods).
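As a concrete aside (our illustrative sketch, not code from the paper): in this framing, mean ablation fixes the constant at c = E[a], while optimal ablation treats c as a free parameter fit by gradient descent on the task loss, so the learned constant can only match or beat the mean. The downstream map, activations, and targets below are all hypothetical stand-ins:

```python
# Toy sketch of the learned-constant idea (illustrative only): replace a
# component's activation a with a single constant c, fit by gradient descent
# to minimize average task loss. Mean ablation (c = mean of a) is just one
# candidate value of c, so the optimized constant can only do as well or better.
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(2.0, 1.0, 500)                 # component activations (hypothetical)
w, b = 1.5, -0.5                              # fixed downstream linear map (hypothetical)
y = w * a + b + rng.normal(0.0, 0.1, 500)     # targets the model fits well

def ablated_loss(c):
    """Average squared error when every activation is replaced by constant c."""
    return np.mean((w * c + b - y) ** 2)

c = 0.0
for _ in range(500):
    c -= 0.05 * np.mean(2.0 * w * (w * c + b - y))   # analytic gradient step

assert ablated_loss(c) <= ablated_loss(a.mean()) + 1e-9
```

Because the ablated loss is convex in c here, gradient descent recovers the global optimum; in a real network the loss is non-convex, but the same "mean is one candidate" argument still bounds the optimized constant from above.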
> while results are plenty promising on the first two case studies, optimal ablation seems to hardly improve over marginal resampling based on Table 1
We were also confused, and when we thought about this experiment more carefully, we realized we implemented the ablation methods inconsistently and were not doing an apples-to-apples comparison. For each ablation method, we can either perform some prescribed intervention at the sequence level, or iterate an intervention over each token position in the sequence (note that Prop 2.3 only holds if we are consistent in this choice). For resample ablation, we were performing sequence-to-sequence replacement by resampling between corresponding token activations in a different input sequence, but for mean/optimal ablation, we were performing token-level replacement by setting activations to the same value at all sequence positions.
We updated Table 1 to adopt the first approach for every method (i.e. conditioning on sequence position), which shows a much larger improvement of OA over resampling.
| | Zero | Mean | Resample | CF-Mean | Optimal | CF |
|-|-|-|-|-|-|-|
| Log-log correlation with CF | 0.626 | 0.831 | 0.826 | 0.847 | 0.908 | 1 |
| Mean ∆ | 0.0584 | 0.0405 | 0.0559 | 0.0412 | 0.0035 | 0.0296 |
| Median ratio of ∆(opt) to ∆ | 11.1% | 33.0% | 17.7% | 31.7% | 100% | 88.9% |
For completeness, we’ll also show in the appendix that if we implement all ablation methods in the tokenwise fashion (e.g. resampling all tokens individually), OA still shows much higher correlation with CF.
> why then, does optimal ablation perform far better than counterfactual patching in Fig. 1, when counterfactual patching was just treated as something of a ground-truth in Sec. 2.
Just to clarify, our stance is that, among *previous* methods, CF may best reflect effect 1 (which we loosely consider “ground truth importance”) from the rewritten part of section 2.2, and we will clarify this point in the paper. Thus, it would make sense that, if OA were now the closest method to “ground truth importance,” then OA would also be the method closest to CF among existing methods. We present Table 1 to confirm this, and do not mean to present CF as a ground truth.
Even for single components, OA achieves lower ∆ on average than CF (0.0035 vs 0.0296), in line with what we see for circuits. The disparity may be larger for circuits due to ablating many components at once. Rushing and Nanda (2024) showed that models can self-repair if a few components contribute weird values; however, ablating many components with previous methods may contribute much more to effect 2 (“spoofing” the model with inconsistent values).
> I really could not tell what the third case study was trying to show
See global reply
> the uniform gradient sampling (UGS) method…is introduced without nearly enough detail
See global reply
> I would recommend discussing some earlier work on how this entire debate has played out with feature attribution methods
Good suggestion and thanks for the references! We will add discussion.
> it’s extremely non-standard for the references section to be full of references that do not appear in the paper
We will correct this, along with the phrasing issues mentioned in the questions section.
> The limitations could discuss some higher level points [and] could also include caveats about the proposed method from Sec. 2.
We agree and will add these points.
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thanks to the authors for their reply above! Some comments below:
> We agree and have rewritten this section (see global reply). The term “total ablation” clarifies what we consider ablation: replacing the value of component A with a random variable that is independent from the input X, so the new value of A cannot provide any information about X. We think the revised section 2.2 crystallizes why it’s desirable to reduce ∆.
Thanks! I think this is a step in the right direction. I still do not totally agree, because I think there is something missing from the list. The list includes deletion and spoofing, but it could also include “insertion”. Similar to our other income example, changing someone’s income from 100k to 40k might be like deletion (I don’t think it is like spoofing, unless this new feature vector is totally OOD / logically impossible), but I think it is more like “inserting a new value.”
For what it’s worth, some of my original criticisms still stand, like the idea of “inconsistency” with other information that is relied on in the “spoofing” definition. But I think the new 2.2 is clearer, and I don’t really expect this paper to present an unassailable definition of inconsistency anyway.
>Interesting example! In this case, we think OA does, in fact, answer an intelligible question: “How much worse does the model perform if it had to treat everyone as having the same income?” which is arguably extremely similar to “How much worse would the model perform if it had no information to distinguish anyone’s income?,” i.e. "How important is information about income to the model's performance?”
Interesting! I agree with your first claim here, and thanks for pointing it out. We do know the meaning of this operation, which is intervening on everyone’s income for it to be the same. I also understand that you treat the 2nd and 3rd question as the same. This is reasonable, and people used to do it years ago when estimating feature importance for decision trees. They would shuffle the column of a dataset, and check the accuracy of the model. That is, they would do marginal resampling. But the thing is, many ablation methods imply that the model has no information to distinguish anyone’s income, including marginal resampling, zero ablation, constant ablation and random ablation. So what is special about optimal ablation that it is the *right way* to make sure the model has no information about anyone’s income?
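(For concreteness, the column-shuffling procedure described here — marginal resampling, a.k.a. permutation importance — can be sketched as follows; this is a hypothetical minimal version with invented names:)

```python
import numpy as np

def permutation_importance(model_predict, X, y, col, rng, n_repeats=10):
    """Average drop in accuracy when column `col` is marginally resampled
    (shuffled), so the model retains no information distinguishing values
    of that feature across inputs."""
    base_acc = np.mean(model_predict(X) == y)
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, col] = rng.permutation(Xp[:, col])  # shuffle one feature column
        drops.append(base_acc - np.mean(model_predict(Xp) == y))
    return float(np.mean(drops))
```

A toy model that thresholds only column 0 will show a large importance for column 0 and exactly zero for any column it ignores.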
My other question continues to be, why are we trying to estimate “importance” when it is ill-defined? But I see that comes up next.
> We agree it’s not clear “importance” is uniquely defined, but we think it’s still helpful to discuss! If we know a component represents…
Ultimately, I agree with the whole argument here. Counterfactuals aren’t perfect, and I think we do need a method that “somehow aggregates causal effects to guide our search for relevant components.” This would be a nice big caveat to add to the paper when the word “importance” gets used, since it has haunted interpretability papers for years.
>We updated Table 1 to adopt the first approach for every method (i.e. conditioning on sequence position), which shows a much larger improvement of OA over resampling.
Ok great!
>…We present Table 1 to confirm this, and do not mean to present CF as a ground truth
Thanks, that makes sense.
> Even for single components, OA achieves lower ∆ on average than CF (0.0035 vs 0.0296), in line with what we see for circuits. The disparity may be larger for circuits due to ablating many components at once.
Thanks, this also makes sense. It would probably be worth adjusting some of the language in the paper to not present CF so much as a ground truth, but more as the closest thing we had to a ground-truth (to the extent the intro does this). Then it would help to present Table 1 and the single-components results on somewhat equal footing, to show that OA could even be better than CF in some ways, while also being easier/automatic. Right now the Table 1 result reads more like a sanity check/validation before moving into “real” results.
---
Based on this discussion, I plan on keeping my score at 8. I leave it to the authors to clean up some of the motivation/argument for the method, and to figure out how to present some of the extra UGS / OCA results in a better way, but the core of the paper is very good.
---
Reply to Comment 1.1.1:
Comment: Thank you for the insights and discussion! To touch on a few points:
>The list includes deletion and spoofing, but it could also include “insertion”. Similar to our other income example, changing someone’s income from 100k to 40k might be like deletion (I don’t think it is like spoofing, unless this new feature vector is totally OOD / logically impossible), but I think it is more like “inserting a new value.”
This is an excellent point! We thought we had accounted for this option in what we wrote, but upon rereading it we hadn’t, so we will definitely incorporate this category into this part!
>But the thing is, many ablation methods imply that the model has no information to distinguish anyone’s income, including marginal resampling, zero ablation, constant ablation and random ablation. So what is special about optimal ablation that it is the right way to make sure the model has no information about anyone’s income?
To be a bit more precise than what we wrote in the rebuttal, the question that optimal ablation answers is, “What is the best performance the model could have achieved on the task with no information to distinguish anyone’s income?” We think this is a good way to frame OA, and arguably component importance in general, and will make this question explicit in the paper.
We think that “best performance,” rather than just “performance,” is the relevant question to ask when considering a component’s importance. There are many ways to remove information about income (such as assuming that everyone’s income is zero) that achieve worse performance than the optimal-ablated model, but this underperformance cannot be attributed to *removing information about income* (since OA also totally ablates the information in the component), and thus must be attributed to some arbitrary choice made about how to remove this information. This question has a clear parallel to other questions we might ask in ML. If we’re trying to assess, for example, “how much does the assumption of linearity constrain our ability to predict Y from X,” we want to compare the performance of a nonlinear model to the *best* linear model rather than a random linear model. Another way to see the importance of the word “best” (and this descriptor is what makes OA distinct from other total ablation methods) in the definition of component importance is that we should expect an *unimportant* component to have a value of 0. But for this to be true, we must make sure that when we ablate an unimportant component, we don’t also do something that messes up the model in some other way, since if we do, then we’ll erroneously end up with a non-zero value for that component’s importance.
We’ll be sure to incorporate these and the other presentational changes you suggested into the paper. Thank you again for the helpful remarks! | Summary: - Introduces a notion of "optimal component ablation" for activation patching methods in mechanistic interpretability. Specifically, they propose an ablation method that involves setting a component’s value to the constant value that minimizes ablation loss over a chosen distribution.
- Shows that optimal ablation can improve algorithmic circuit discovery—the identification of sparse subnetworks that recover low loss on interpretable subtasks.
Strengths: - Simple approach to compute 'optimal' ablations that simulate the removal of a given component/vertex. This fares better than zero, mean, and resample ablation in terms of ablation loss.
- The OCA lens is a clever idea to evaluate layerwise representations and an alternative to linear probing / tuned lens.
Weaknesses: - Writing and paper structure can be significantly better (motivation, problem setup, describing the applications in a more cohesive manner, results, etc.).
- Justification for why ablation loss is the "right" way to measure "optimality" could be better. Also, given that the proposed method explicitly optimizes for ablation loss minimization, it is unclear why making this the primary evaluation metric in the experiments is insightful. Are there other downstream metrics for localization that also improve as a result of minimizing ablation loss? For example, does it find smaller circuits that induce the same effect on model behavior? Is the second-order effect on unrelated tasks less (as intended) due to optimal ablations? These evaluations would make the overall story significantly stronger.
- One of the main contributions (uniform gradient sampling method) is not described well. I would like to see a more step-by-step description of the proposed method. Most of the important details are deferred to the appendix
- These applications are niche and specific to mechanistic interpretability subroutines. To me, factual recall (the second application) seems like a special case of the first one, as the goal is again to localize a given task down to a subset of model components. I would be interested to see optimal ablations improve model editing (e.g., https://arxiv.org/abs/2404.11534 show that accurate localization via zero ablations can be directly used for model editing).
- Novelty concerns. I am not sure if the contributions in this paper are strong enough right now, especially given related work in this area, e.g., https://arxiv.org/abs/2309.16042 also focus on systematically evaluating activation patching
- I like the OCA lens idea, but it's a bit unfair to compare it to tuned lens because OCA lens is still using additional computation via learned ablated layers. I think this experiment requires fair-er baselines that are more of an apples-to-apples comparison (e.g., smaller models with fewer learned layers?)
Technical Quality: 2
Clarity: 2
Questions for Authors: See strengths and weaknesses
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See strengths and weaknesses
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback.
>Writing and paper structure can be significantly better
We’ve revised the writing so that the structure makes more sense, improving flow and clarifying differences between sections. In particular, we’ve substantially rewritten section 2 to better contextualize our motivation and problem setup, and edited later sections to clarify the motivation and description for each application.
>Justification for why ablation loss is the "right" way to measure "optimality" could be better.
We agree and have rewritten this section. See global reply.
>it is unclear why making [ablation loss] the primary evaluation metric in the experiments is insightful
The ablation loss gap ∆ is the primary metric in previous circuit discovery work. While it is true that OA is guaranteed to have lower ∆ than zero, mean, and resample ablation, studying ∆ still produces several surprising results.
1. The amount that OA reduces ∆ (67-89% for components, 85-97% for circuits) is **very large**. In light of our motivation for minimizing ∆ (see global reply), this suggests most of ∆ for these methods is accounted for by “spoofing,” or inserting inconsistent information into the model, not removing important information.
2. OA produces lower ∆ than CF, which is not guaranteed, since CF is input-dependent and removes less information than OA.
3. Manual circuits achieve close to Pareto-optimal ∆ when evaluated with OA, but for other ablation methods, there are similar-size circuits that get much lower ∆ than the manual circuit. This is likely because there are components that do not serve relevant functions according to the manual analysis but need to be included in the circuit to avoid spoofing the circuit components with conflicting information when ablated. If you believe that manual circuits from previous work capture the important mechanisms, then this is evidence that OA is a better evaluation metric for circuits.
We agree with the general idea of having more evaluations. We added a result for OCA lens that shows that it elicits more truthful predictions than tuned lens (see global reply).
>does it find smaller circuits that induce the same effect on model behavior?
We believe we show the answer is yes by looking at the Pareto frontier; for the same effect on model behavior, we can achieve smaller circuits. We will make this conclusion more clear in the paper.
>Is the second-order effect on unrelated tasks less (as intended) due to optimal ablations?
While this is an interesting question, we’re not sure we should expect circuits to generalize to unrelated tasks. We are not trying to find a small general-purpose circuit; we are deliberately trying to isolate a circuit that performs one particular task, and we might even prefer the circuit does not perform unrelated tasks because we’d have wasted circuit complexity.
>uniform gradient sampling method is not described well
See global reply
>These applications are niche and specific to mechanistic interpretability subroutines.
Each of our selected applications is the subject of extensive discussion in interpretability. We acknowledge that in the original submission, we only cited a few relevant works. We’ve added far more citations for each of the applications and previous ablation methods.
Furthermore, as other reviewers have noted, our work presents a new approach to the longstanding question of feature importance and has broad implications beyond mechanistic interpretability. We focus on this area specifically for the sake of cohesion and because it is a growing subfield in which the use of ablation is particularly common and for which our method results in concrete empirical improvements.
>factual recall (the second application) seems like a special case of the first one
We agree that these applications are mathematically similar, but they are answering substantively different scientific questions about a model, and they are separate lines of inquiry in previous work. Circuit discovery asks how a model performs an algorithmic task, and seeks to find components that perform steps of the algorithm. Factual recall asks where the model has memorized factual associations. We will clarify this point in the paper.
>I would be interested to see optimal ablations improve model editing
Thanks for pointing us to this paper, and we agree it would be an interesting result. Our results on localization suggest that our method could do well here. But we think it would require significant extra thought – we’d have to make choices about how to edit the model, compare to various baselines, etc. Given our space limitations, we defer this to future work.
>Novelty concerns. I am not sure if the contributions in this paper are strong enough right now, especially given related work in this area, e.g., https://arxiv.org/abs/2309.16042 also focus on systematically evaluating activation patching
As noted by other reviewers, our approach is fundamentally quite different from existing approaches, which manifests in dramatic differences in several important metrics. To be clear, unlike the linked paper, this paper is not about evaluating previous methods. Our evaluation is about showing our method outperforms others, including all methods that are evaluated in that paper.
>I like the OCA lens idea, but it's a bit unfair to compare it to tuned lens because OCA lens is still using additional computation via learned ablated layers
See global reply for more context about why it is a fair comparison – both methods elicit predictions from the last token position without using activations at previous token positions. Indeed, OCA lens is actually a more constrained elicitation model than tuned lens: it has a factor of $d_\text{model}/n_\text{layers}$ fewer learnable parameters, since we train a $d_\text{model}$-parameter constant at each layer for OCA lens, compared to a $d_\text{model}^2$-parameter weight matrix for tuned lens. | Summary: Different intervention techniques aim to "ablate" parts of the representation of the model to infer their causal function, e.g. by adding Gaussian noise. The paper suggests deriving a notion of "optimal" ablation. Particularly, instead of zeroing out or replacing the ablated part with its mean, it is proposed to optimize with GD the constant value that minimizes the loss, i.e., the "perfect" ablation would be replacing the element with the "best" constant value. It is shown that the proposed technique improves circuit discovery, identifying a subnetwork that reconstructs the original performance of the model in some task.
Strengths: The contribution is well defined and elegant. The proposed method is experimentally validated on several interpretability-related tasks and improves over commonly used ablation methods.
Weaknesses: I don't see major weaknesses in this work.
Technical Quality: 4
Clarity: 4
Questions for Authors: none.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your generous feedback, and for recognizing the value of our work! | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive feedback. We use this space to address questions shared between multiple reviewers.
**Reviewers note our motivation for minimizing ∆ could be clearer.** We agree and propose the following changes. We add the following definition to section 2.1:
>For any model component $A$, a **total ablation method** satisfies $M^{\setminus A}(X) = M_A(X,Z)$ for some random $Z$, where $Z\perp \mkern-18mu \perp X$.
This definition aligns with an intuitive notion of entirely removing $A$: applying a total ablation method prevents $A$ from providing any information that distinguishes between inputs $X$. Total ablation methods include zero, mean, resample, and optimal ablation.
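A minimal sketch of these four total ablation methods on a toy activation matrix (illustrative only, with assumed names; the optimal constant is found by gradient descent on a user-supplied loss gradient, not the paper's exact procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(128, 16))  # activations of component A over 128 inputs

def zero_ablate(A):
    # replace every activation with zero
    return np.zeros_like(A)

def mean_ablate(A):
    # replace every activation with the dataset-mean activation
    return np.broadcast_to(A.mean(axis=0), A.shape).copy()

def resample_ablate(A, rng):
    # replace each row with the activation from a permuted donor input
    return A[rng.permutation(A.shape[0])]

def optimal_ablate(A, loss_grad, a0, steps=100, lr=0.1):
    # gradient-descend a single constant a* that minimizes downstream loss;
    # loss_grad(a) returns the gradient of the task loss w.r.t. the constant
    a = a0.copy()
    for _ in range(steps):
        a -= lr * loss_grad(a)
    return np.broadcast_to(a, A.shape).copy()

# In every case the replacement value is independent of which input x
# produced the original activation, so Z ⫫ X holds.
```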
We also rewrite part of section 2.2:
>To see why lower ∆ is desirable, consider that intuitively, for ablation methods that involve intervening on M by replacing A(x) with a different value a, there are two potential contributors to ∆.
>1. Information deletion. The original value A(x) could carry informational content, specific to each x, that serves some mechanistic function in downstream computation and helps the model arrive at its prediction M(x). Replacing A(x) with some other value $a$ could delete this information about x, hindering the model’s ability to compute the original M(x).
>2. Information spoofing. The replacement value $a$ could insert information about the input that is inconsistent with information about x derived from retained components, “spoofing” the downstream computation. This confluence of conflicting information may cause later activations that combine A(x) with other information to become incoherent, leading to high ∆ because these abnormal activations were not observed during training and thus not necessarily regulated to lead to reasonable predictions.
>Measures of component importance seek to isolate the contribution of effect 1 from that of effect 2. Like other total ablation methods, optimal ablation captures a maximal information deletion effect since $a^*$ does not depend on x. However, compared to other methods, OA minimizes the contribution of information spoofing to ∆ by setting ablated components to constants $a^*$ that are maximally consistent with information from other components, e.g. by conveying a lack of information about the input or by hedging against a wide range of possible x rather than strongly associating with a particular input other than the original x. Optimal ablation does not entirely eliminate information spoofing, since it may be the case that every possible value of A conveys at least weak information about the input. However, the excess ablation gap ∆ − ∆(opt) for ∆ measured with ablation methods that replace A(x) with a (random) value A is *entirely* caused by information spoofing, since replacing A(x) with the constant $a^*$ achieves lower loss without giving away any more information about x. In practice, ∆ − ∆(opt) for prior ablation methods is typically very large compared to ∆(opt). In Appendix C.3, we show that on average, ∆(opt) accounts for only 11.1% of ∆(zero), 33.0% of ∆(mean), and 17.7% of ∆(resample) for attention heads and MLP blocks on a prototypical language modeling subtask. Section 3 extends this finding to circuits: circuits manually selected in prior work to capture important mechanisms on language subtasks incur 85-97% lower loss with OA than with other ablation methods. This disparity indicates that effect 2 dominates these other ∆ measurements, making them poor estimators for effect 1 compared to OA.
**Reviewers note that we do not spend much time explaining UGS.** While UGS may be of interest to some, it is an auxiliary contribution. As some pointed out, we packed a lot of content into this paper, which does not leave us room to delve separately into UGS in the main text. We choose to de-prioritize it, but wish to acknowledge that it is novel and could be useful to others, so we prefer to describe it briefly in the main text and defer additional context to the appendix for interested readers.
**Reviewers found the presentation of OCA lens confusing.** We rewrite some of section 5 to better contextualize the motivation for OCA lens and why the comparison to other methods is fair. We add references to logit attribution, a popular precursor to tuned lens. Latent prediction methods, e.g. logit attribution and tuned lens, ascribe semantic meaning to the activation at the last token position (LTP) at an early layer $i$ by making next-token predictions using only that activation.
>Tuned lens allows researchers to study *when* important information is transferred to LTP: if replacing $\ell_N(X)$ with $\hat\ell_N(X)$ achieves low loss, then $\ell_i(X)$ already contains crucial context for computing $M(X)$, and key information was transferred prior to layer $i$. Similar to tuned lens, OCA lens reveals whether the LTP activation at layer $i$ contains sufficient context to compute $M(X)$ by eliminating information transfer from previous token positions to LTP after layer $i$.
Tuned lens and OCA lens share the information-theoretic limitation that they are functions of *only* the LTP activation at layer $i$ and do not depend on activations at other token positions. Tuned lens is a linear map, while OCA lens is a function that involves applying later MLP layers and adding constants, and has far fewer learnable parameters: $O(Nd_{model}) < O(d_{model}^2)$. A key insight is that ablation is a more parameter-efficient way to elicit latent information than training a simple auxiliary model.
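To make the parameter-count comparison concrete (GPT-2-small-like numbers, assumed here purely for illustration):

```python
# Tuned lens learns one d_model x d_model matrix per probed layer;
# OCA lens learns at most one d_model-dimensional constant per later layer.
d_model = 768    # assumed, GPT-2-small-like
n_layers = 12

tuned_lens_params = d_model * d_model   # O(d_model^2) per probed layer
oca_lens_params = n_layers * d_model    # O(N * d_model) per probe

ratio = tuned_lens_params / oca_lens_params  # = d_model / n_layers
print(ratio)  # prints 64.0
```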
**Updates to results.** We update Figure 2 by augmenting our dataset and slightly modifying the metric. (See “Comment on Figure 2” in our reply to review Nr7M for more.) Additional figures show more added results for factual recall, varying token position and window size. We also add a new evaluation for OCA lens demonstrating that elicited responses on news and wiki datasets are more truthful than those for tuned lens.
Pdf: /pdf/76291ef7af5d8a6b5f989fdf4d5e45daf6f2cec2.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AlignedCut: Visual Concepts Discovery on Brain-Guided Universal Feature Space | Reject | Summary: The authors proposes a method to create a universal feature space using brain fMRI response prediction as a training objective. The key idea is that deep networks trained with different objectives share common feature channels that can be clustered into sets corresponding to distinct brain regions, revealing visual concepts. By tracing these clusters onto images, semantically meaningful object segments emerge without a supervised decoder. The paper employs spectral clustering on the universal feature space to produce hierarchical visual concepts, offering insights into how visual information is processed through different network layers. The two main insight being the localization of emerge of foreground/background features, as well as interesting visualization of class-specific concepts using the top spectral-tsne egeinvectors.
Strengths: First, congratulations to the authors, I liked reading this paper, and I think the experiments and the core is great:
1. Using brain voxel response prediction to find the common space between models is interesting and novel
2. The authors propose visualizations of what we could interpret as the brain-activations subspace
3. I like the idea of visualizing how visual concepts emerge and transition through different layers of various models. However, I have concerns about its validity (see weaknesses)
4. The Nystrom-like approximation shows the authors thought about scaling their method
Weaknesses: Nevertheless, this paper has problems, some more important than others.
So I will separate them into major problems (**M**) and minor problems (_m_). I want to make it clear that for me, all these problems are solvable and do not detract from the quality of the paper.
Let's start with what I think are the Major problems (**M**):
**M1**. Related Work Quality (Page 9, Section 4):
- The related work section is critically weak: it has **only 24 references** and lacks depth in discussing relevant literature. The paper misses entire bodies of work on (1) concept XAI, (2) alignment of brain and network activations, (3) the study of representations, and (4) attribution methods, which are either crucial or should be mentioned for this study. **A significant rewrite is necessary** to properly position the paper within the existing body of work.
**M2**. Validity of t-SNE for Distance Measures (Figure 9):
- The paper uses t-SNE for analyzing bifurcation in feature space, but t-SNE is known to distort distances. This raises concerns about the validity of conclusions drawn from t-SNE plots regarding feature bifurcation.
Now for the minors problems:
_m1_. Redundancy of Discovery Claim (Page 2, Line 25):
- The claim that channel feature correspondence exists across networks is not new, as it has been extensively studied in major works (e.g., using CKA, RSA, ...); please update and compare your work to this literature.
_m2_. Reliance on Channels (General):
- Channels are not necessarily the best basis for analysis, as recent research suggests there are better ways to represent features, such as directions (a neuron is not a great basis). The paper should address why it continues to rely on channels, considering these limitations.
_m3_. Orthogonality Assumption (General):
- The paper's assumption of orthogonality in feature decomposition does not align with current understanding, especially regarding the neural collapse phenomenon in late layers. Said otherwise, all points for the class tench nearly collapse in the last layer, so relaxing orthogonality (e.g., dictionary learning) may be a good idea (although, if I am correct, the Nystrom approximation should not yield perfectly orthogonal vectors anyway). This should be discussed.
_m4_. Parameter Sensitivity (Appendix):
- As always when there are hyperparameters, I expect a small discussion of the effect of changing the parameters (λ eigen, λ zero, and λ cov). This would help understand the robustness of the method to these hyperparameters.
_m5_. Direct t-SNE Application (General):
- The paper uses eigenvectors for t-SNE. It would be more straightforward to apply t-SNE directly to the data, and the paper should justify the chosen approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper presents a novel and promising approach, but it has several critical issues that need addressing. The major problems, particularly regarding the related work quality, assumptions of linear transformations, and the validity of t-SNE for distance measures, are significant and currently undermine the paper's contributions. Addressing these concerns would substantially improve the paper. The minor issues, while less critical, also need attention to enhance the clarity and robustness of the presentation.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations identified by the authors are accurate and well-documented. Regarding the weakness I mentioned, I reserve the right to increase the score if the authors adequately address my major concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | null | Summary: The paper proposes a method to align different vision models' features to a common space, and to discover interpretable features as clusters in this space.
The alignment is done by learning linear mappings from features to fMRI activations, the intuition being that the human visual cortex provides a meaningfully structured space, in which locations have known properties (e.g. different regions are known to respond to specific concepts) and are thus readily interpretable. Once the features have been linearly aligned to this common space, they are treated as a weighted, fully connected graph, wherein each image patch is a node, and edge weights (affinities) are computed based on the cosine similarity between the features in each node. A standard spectral clustering method (Normalized Cut) is used to compute a soft partition of this graph into sub-graphs (clusters). As performing this clustering on the full graph would be computationally infeasible, the authors propose to cluster a sub-sample of the graph, and then propagate the resulting clusters to the K nearest neighbors of each subsampled node. Finally, in order to learn linear mappings that preserve the quality of the clusters, a regularization term is added to the reconstruction loss, which ensures that spectral clustering eigenvectors are preserved across the mapping, based on a subsample of nodes.
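A minimal sketch of this subsample-then-propagate scheme (function and variable names are assumed, not the authors' code): spectral-embed a random node subsample using a cosine-similarity affinity, then give every node the average embedding of its k nearest subsampled neighbors.

```python
import numpy as np

def subsampled_spectral_embedding(X, n_sub, n_eig, k, rng):
    """Spectral embedding of a random node subsample, propagated to all
    nodes via k nearest neighbors (assumes nonzero feature rows)."""
    n = X.shape[0]
    sub = rng.choice(n, size=n_sub, replace=False)
    Xs = X[sub]
    # cosine-similarity affinity on the subsample, clipped to be nonnegative
    Xn = Xs / np.linalg.norm(Xs, axis=1, keepdims=True)
    W = np.clip(Xn @ Xn.T, 0, None)
    d = W.sum(axis=1)
    L = W / np.sqrt(np.outer(d, d))        # normalized affinity D^-1/2 W D^-1/2
    vals, vecs = np.linalg.eigh(L)         # eigenvalues in ascending order
    emb_sub = vecs[:, -n_eig:]             # top n_eig eigenvectors
    # propagate: each node averages the embeddings of its k nearest subsampled nodes
    Xf = X / np.linalg.norm(X, axis=1, keepdims=True)
    nn = np.argsort(-(Xf @ Xn.T), axis=1)[:, :k]
    return emb_sub[nn].mean(axis=1)
```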
This method is used to visualize, for each layer of three different models (MAE, DINO and CLIP), the concept that each image patch is assigned to, coded as a color. The 20 top eigenvectors are reduced to 3 dimensions using t-SNE, and these 3D vectors are shown as RGB colors. This visualization reveals that CLIP and DINO produce maps that are close to uniform in the first 4 layers, suggesting that figure-ground segmentation only emerges in later layers. MAE, on the other hand, shows signs of segmentation from earlier layers. The segmentations extracted from the models in this way are evaluated on the ImageNet-segmentation benchmark, confirming that in CLIP (the model that showed the strongest segmentation) the segmentation emerges at layer 4, and plateaus afterwards. Using the PASCAL VOC benchmark, which also includes category labels, CLIP is also found to encode categorical information, which peaks at layers 9 and 10. In another analysis, a discovered "figure/ground" concept is visualized by averaging its activation within the "figure" and "ground" regions (based on the ImageNet-segmentation ground truth labels) and plotting it on the surface of the brain, showing that areas known to encode objects, faces or bodies tend to respond more to the foreground, while scene-selective areas more to the background. This figure/ground concept is found to be agnostic to object category, and to an extent, consistent across models. In the next section, concepts corresponding to different object categories are visualized on images, and on the surface of the brain. Finally, a 2D t-SNE visualization of the evolution of the features across layers shows a bifurcation between figure and background as the layer depth increases, in both CLIP and DINO.
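The final eigenvector-to-color step can be sketched as a generic per-channel min-max normalization of any 3-D embedding (the paper's exact mapping may differ; this is only an assumed minimal version):

```python
import numpy as np

def embedding_to_rgb(emb3):
    """Min-max normalize a 3-D embedding (e.g. t-SNE of the top spectral
    eigenvectors) per channel into [0, 1] so it can be rendered as RGB."""
    lo = emb3.min(axis=0)
    hi = emb3.max(axis=0)
    return (emb3 - lo) / np.maximum(hi - lo, 1e-12)
```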
Strengths: - The paper proposes to use the human brain as a shared space in which to evaluate different models: this is a clever intuition, which might prove useful for interpreting differences between models.
- It shows that spectral clustering can be a well-suited method for grouping the features of vision models, and in particular ViTs, into meaningful clusters.
- It proposes a clever modification of an existing subsampling-based method for graph clustering, by using K nearest neighbors.
Weaknesses: - The overall concept is not made clear in the Introduction. The method is relatively simple conceptually, consisting of a step in which multiple models' features are aligned to the brain to provide a common reference frame, followed by clustering of the features within this common space. The Introduction does not make this pipeline clear. Specifically, while the alignment into a common space is clear, the clustering procedure is not explained: first, the authors evoke neuroscientific ideas on lines 39-41, without discussing how these relate to the problem of clustering features, nor even that the goal of the current work is to find clusters of features. Subsequently, on lines 42-47, they discuss the problem in terms of "graph edges incidents on each pixel", and "channel grouping hypothes[e]s". While this is partly a matter of subjective taste, I believe that discussing the problem at hand in terms of reducing the dimensionality of features at each location (image patch), thus finding a small number of dimensions (combinations of features) which can explain the affinity structure between patches, would be more easily understood by most readers.
- Several key details about the methods are left out: for example, almost no information about training (learning rate, optimizer used, batch size, number of epochs) is provided.
- Related to the previous point, as the Appendix contains several important methodological details (such as the use of additional regularization losses) but is never referred to in the main text, the authors should add references to it where relevant.
- A key component of the proposed method is the learning of a mapping of different models' features into a common space, and as shown in Figure 3, this does indeed result in the cosine similarities of different channels' activations becoming more similar across models. At the same time, the goal of the method is to uncover differences between models. The authors should include an explicit discussion of what kind of model differences are likely to be preserved, and which are likely to be destroyed in the alignment process. As a possible suggestion in this direction, the paper makes several references to the models' features before alignment (for example comparing their segmentations' evaluations with the aligned features in Figure 5), but these features are never visualized. A direct comparison of each model's (clustered) features before and after alignment would be very informative.
- The feature clusters are visualized by reducing their dimensionality to 3D using t-SNE, and visualizing the resulting 3D features as RGB colors. This visualization, however, is not easily interpretable, as different channels are conflated together by the additional dimensionality reduction. Visualizing single channels separately might be more useful to understand the nature of the discovered clusters. Was there a specific reason for choosing the 3D t-SNE visualization rather than showing single channels?
- Overall, it is not clear what the discovered channels can tell us about the models. The single interpretable channel that is discussed in depth in the paper is figure/ground. While this provides a good sanity check on the meaningfulness of some of the discovered features, the ability of vision transformers to segment objects has been observed in several papers (e.g. Melas-Kyriazi et al. 2022, Xu et al. 2023), and the different responsiveness of different regions in the visual cortex to figure and background is well established. Other concepts revealed by the method (the category concepts in Figure 8) are shown on the surface of the brain, but it is hard to interpret what these brain maps mean. The authors should make the questions that can be answered using the proposed method clear and explicit.
- The paper fails to cite closely related work in the Related Works section. Particularly, it cites mechanistic interpretability work in other fields, such as language, but not the recent rich line of work that has specifically looked at vision transformers' ability to perform specific visual tasks. The two papers cited above (Melas-Kyriazi et al. 2022, Xu et al. 2023) are a good example, the former in particular as it proposes a segmentation method based on spectral clustering which is very close to the present one. Another relevant paper is El Banani et al. (2024), which looks at 3D-related tasks. As this is not my field of expertise, I am not aware of papers that look for meaningful directions in the space of vision transformers' channels, but I would be surprised if this didn't exist. I would recommend the authors to do a more exhaustive literature search to find papers that more closely relate to the method proposed here.
- In Figure 7, the figure-ground visual concepts discovered by different models are plotted on brain maps. In the text, the authors write that "the foreground or background pixels activates similar brain ROIs across the three models". However, a glance at the brain maps reveals similarities, but also differences. A statistical evaluation of the similarity between different models' brain maps would be recommended.
**References**
El Banani, M., Raj, A., Maninis, K. K., Kar, A., Li, Y., Rubinstein, M., ... & Jampani, V. (2024). Probing the 3d awareness of visual foundation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 21795-21806).
Melas-Kyriazi, L., Rupprecht, C., Laina, I., & Vedaldi, A. (2022). Deep spectral methods: A surprisingly strong baseline for unsupervised semantic segmentation and localization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 8364-8375).
Xu, J., Liu, S., Vahdat, A., Byeon, W., Wang, X., & De Mello, S. (2023). Open-vocabulary panoptic segmentation with text-to-image diffusion models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 2955-2966).
Technical Quality: 3
Clarity: 1
Questions for Authors: - As I wrote in the "weaknesses" section, I found the explanation of the clustering procedure given in the Introduction, in terms of a bipartite graph connecting pixels and feature channel, misleading. However, I want to be absolutely sure that this is not due to my misunderstanding of the method. The graph on which the clustering is performed is the graph wherein each node is a vector with the features in a given image patch, every node connects with every other node, and the weights of the edges are the affinities between feature vectors, is that correct? What was the reasoning, then, behind discussing a graph comprising channels and pixels as different nodes in the Introduction?
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: As I wrote in the "weaknesses" section, I believe the precise scope of the method (what kinds of questions can and cannot be answered with it) has not been properly acknowledged and discussed by the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | null | Summary: In the domain of interpretability research, this paper aims to make a mark by proposing AlignedCut, a method to discover shared and expressive visual feature spaces across networks by aligning those spaces with neural responses in human brains. The method is quite interesting - channel-wise responses to images are aggregated and "feature clusters" are formed on the basis of the functional connectivity between the pixels and linear combinations of channels. These linear combinations are acquired by predicting neural responses to the same images. The feature space spanned by the neural responses is considered the universal feature space. The eigenvectors corresponding to those feature clusters help us visualize what parts of the image the networks rely on to encode which concept, thus providing an interesting interpretability lens. The most striking example presented is how figure-ground segmentation can be interpreted as a mapping between the input and specific channels in various networks.
Strengths: Originality
- The AlignedCut method is new to me - and is super interesting - however, I am not an expert in that specific sub-field so I am not sure of its novelty.
- The interpretability lens on figure-ground segmentation is very informative, however, again I cannot judge its novelty.
Quality
- The authors present plenty of analysis to demonstrate the power of their method, which helps in inspiring some confidence in the claims.
Clarity
- The methods and results are relatively clear and the authors provide useful context at the start of each section.
Significance
- Linking pixels to visual features, parameterized through network activations and neural activations, opens doors in interpretability research.
Weaknesses: I see three major weaknesses:
1. The necessity of the brain is unclear to me. Instead of aligning features to the brain, you could have aligned the features of the different networks to each other, creating an "emergent" universal feature space. Would your results, e.g. w.r.t. the figure-ground segmentation, change much if you did so? If not, what does bringing the brain into play buy us here in terms of network response interpretability? This is unclear to me.
2. Most of the results need robustness checks. For example, in Fig. 6 you show the foreground-vs-background difference in neural-response associations. Presumably, that's an average across many (all?) images. Could you indicate some sign of robustness, for example by running a permutation test to assess how likely the differences you see would have been expected given the data statistics alone? The same holds for Figs. 7 and 8. We need to know whether these differences are flukes or not.
3. Reliance solely on ViTs. To make your point more general, showing that a high-performing CNN shows the same results would be very informative. ViTs have more expressivity in terms of patches interacting with each other - perhaps figure-ground segmentation isn't as strong in CNNs (although if previous research is to be trusted, CNNs should have some notion of figure-ground segmentation; see Hong et al. NatNeuro 2016 and Thorat et al. SVRHM 2021).
Refs:
- Hong, Ha, et al. "Explicit information for category-orthogonal object properties increases along the ventral stream." Nature Neuroscience 19.4 (2016): 613-622.
- Thorat, Sushrut, Giacomo Aldegheri, and Tim C. Kietzmann. "Category-orthogonal object features guide information processing in recurrent neural networks trained for object categorization." SVRHM 2021 Workshop @ NeurIPS.
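The permutation test suggested in weakness 2 could look like the following sketch; the data layout (one mean response value per image for foreground and for background) is a hypothetical assumption, not the paper's actual analysis.

```python
import numpy as np

def permutation_test(fg_vals, bg_vals, n_perm=10000, seed=0):
    """Two-sided permutation test for the foreground-vs-background
    difference in mean response.

    fg_vals, bg_vals: 1-D arrays, one value per image (hypothetical).
    Returns a p-value: how often a random relabeling of the pooled
    values produces a difference at least as large as the observed one.
    """
    rng = np.random.default_rng(seed)
    observed = fg_vals.mean() - bg_vals.mean()
    pooled = np.concatenate([fg_vals, bg_vals])
    n_fg = len(fg_vals)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabeling of fg/bg
        diff = pooled[:n_fg].mean() - pooled[n_fg:].mean()
        if abs(diff) >= abs(observed):
            count += 1
    # Add-one correction keeps the p-value strictly positive.
    return (count + 1) / (n_perm + 1)
```

A small p-value would indicate that the foreground/background difference is unlikely under the data statistics alone, addressing the "fluke" concern raised above.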
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Is there a reason to start the paper with Yang et al. 2024 as a reference for "mapping b/w brain and deep nets"? These types of mappings have been studied since 2014 (see Khaligh-Razavi et al. PLOS Compbio 2014) and reviews such as Doerig et al. NatRev 2023 might be better suited for this.
2. Correction for 2.1: In NSD, the plan was to collect responses to 10k images per participant - however many participants did not complete the experiment.
3. In Sections 2.2 and 2.3, would it be possible to provide a layperson summary to drive the assumptions home? Esp. the interpretation of Eqs. 3 and 4
4. In Section 3.1 you claim figure-ground segmentation emerges before categories based on the accuracy hitting ceiling for figure-ground segmentations earlier. I found it hard to digest. Usually, if we want to say some information is present before another, we look for hints of that information through "time"/layers. That would amount to something like the first layer where the performance is above chance.
Refs:
- Khaligh-Razavi, Seyed-Mahdi, and Nikolaus Kriegeskorte. "Deep supervised, but not unsupervised, models may explain IT cortical representation." PLoS computational biology 10.11 (2014): e1003915.
- Doerig, Adrien, et al. "The neuroconnectionist research programme." Nature Reviews Neuroscience 24.7 (2023): 431-450.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors mentioned methodological limitations. It is sufficient.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | null | Summary: The paper introduces a new method to interpret deep learning models using brain data. The two apparent contributions are that (1) this new model is able to align channel activations from different layers of different models into a universal feature space, and (2) a Nystrom-like approximation is introduced to speed up the spectral clustering analysis.
Strengths: I have to be very transparent in this review. Unfortunately, I found it extremely difficult to follow this paper, and I was not able to understand its different components; as a consequence, I do not feel capable of evaluating what the possible strengths of this work are.
Weaknesses: As I said in the previous section, I was not able to understand the methodological details of this paper. A lot of terms and concepts are used throughout the paper without a proper explanation, and as a consequence I cannot honestly understand what is going on with the methods of this paper well enough to properly evaluate it. This paper seems to have a lot of work in it, so I want to believe that what's happening here is that (1) this is one of the first - if not the first - paper from the authors, and thus they lack the experience to explain what they did in a way that their peers can understand, (2) a lot of the concepts used are seen by the authors as very obvious jargon from the subfield, and thus the problem is that I'm not familiar with this subfield, or (3) both. I'm leaving my doubts in the next section, in the hope that they will allow us to understand better whether my understanding difficulties are related to points (1), (2), or (3). As a consequence, I'm rating this paper as a borderline reject, hoping that during the rebuttal period the authors will have time to tackle these readability issues, so that I will be able to properly reassess this work.
Technical Quality: 1
Clarity: 1
Questions for Authors: 1. The work by Yang and collaborators from 2024 seems to be very important, as it is mentioned immediately in the first paragraph of Introduction. Indeed, the one-sentence summary seems to indicate that the authors are presenting something very similar to Yang et al.'s work. What are the similarities and the differences between this paper and Yang et al.'s work?
2. In Figure 2's caption, what is an "early" and "late" brain? What are the different "levels" of segmentation, and how is this done? What is the "spectral-tSNE" method, an adaptation of t-SNE? If we have image pixels, how can they be coloured by some sort of "3D" method?
3. Between lines 42-49, the authors try to explain some sort of "hypothesis". What hypotheses are we talking about here?
4. Between lines 42-47, it is said that each channel produces a per-pixel response. In later/deeper layers of a deep learning model, the receptive field is larger, and thus there might be different ways to connect an input pixel to different channels. Can the authors clarify this point?
5. If the Nystrom-like approximation is presented as a key contribution of this paper, why don't we have experimental evaluations showing the computational speed-ups in practice?
6. What do the authors mean by "using brain encoding as supervision" in line 74?
7. What do the authors mean by the term "meanings" in line 76? Can they be more precise about what this means in computational terms?
8. What are the actual contributions from points 2 and 3 between lines 77-81? The discovery of these points?
9. What are the "features across different models" mentioned in section 2? Channel activations?
10. When the authors mention "brain response prediction" or "brain prediction" or "brain prediction target" in section 2, what are these "predictions" and how and where do they come from?
11. What do the authors mean by "all levels of semantics" and how do they relate to "rich representations" in line 94?
12. Shouldn't the "Brain Dataset" in lines 103-106 be moved to the section where experiments are being characterised, rather than in the methods section? When the authors say that they've used the "first subject's (...)" does this mean they've only used one person's brain scan in this paper? Why and how was this data preprocessed?
13. What training procedures were done in this paper? Was the NSD dataset used to train CLIP, DINOv2, and MAE mentioned in section 3? What train/validation/test splits were used? How were hyperparameters selected? How was the overall training procedure? How are the NSD, ImageNet, and PASCAL datasets mentioned in this paper used together? Why is NSD presented in section 2.1, but then it's not mentioned again?
14. In section 2.1, I believe the transformation $V_iW_i$ is what the authors name "channel alignment". The way these transformations are presented, they only seem to be learnable linear transformations almost as if we were using an MLP. In this sense, how is this an "alignment"? Assuming there is some complex loss function (though nothing seems to be mentioned in the main paper) applied to the final $Y$ in equation 2, then it means that the learned $W_i$ will be some complex transformation learned by the network, but not necessarily aligned to a "universal" space, right?
15. fMRI signals are supposed to be 4D, so how did the authors get to the 3D brain voxels mentioned in line 115?
16. In lines 125 and 126 the authors mentioned "the" graph. Is this a specific graph? Or just "a" graph to explain how spectral clustering works?
17. What are the "brain scores" and "brain prediction scores" mentioned in section 2.3?
18. It is mentioned that equation 5 is "added". Added to what?
19. When the authors say that they "extracted features" in line 165, do they perhaps mean channel activations? If not, can they be specific about how and what is being extracted?
Confidence: 2
Soundness: 1
Presentation: 1
Contribution: 1
Limitations: Limitations of this work are mentioned in the Conclusion section, but no discussion about the potential negative impact of this work is presented.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Forgetting, Ignorance or Myopia: Revisiting Key Challenges in Online Continual Learning | Accept (poster) | Summary: This paper revisits the core challenges of Online Continual Learning (OCL) in high-speed data stream environments, identifying two significant obstacles: the model's ignorance and myopia. It then introduces a non-sparse classifier evolution framework (NsCE) designed to effectively address these issues. Additionally, the authors offer some theoretical guarantees from a PAC-Bayes perspective, providing insights into the robustness and generalization capabilities of the proposed method.
Strengths: 1. The analysis of factors beyond forgetting in OCL is enlightening, especially in highlighting the suboptimal performance of current approaches.
2. The inclusion of model throughput as a factor in OCL is crucial, and the analysis from a PAC-Bayes perspective offers compelling insights.
3. The examination of sparsity and the proposed Non-sparse Classifier Evolution (NsCE) in continual learning is noteworthy. The method is straightforward, easy to implement, and has proven effective in experiments.
4. The paper is generally well-structured and easy to understand.
Weaknesses: 1. While the authors have imposed constraints on the number and frequency of memory buffer accesses, these conditions still mimic laboratory settings rather than real-world applications. It would be beneficial to provide examples of real-world scenarios where such restrictions are applicable and realistic.
2. Although the analysis of model throughput as a factor in OCL is intriguing, the authors do not provide a complete strategy. More analysis of how to improve model throughput, and of its relationship with pre-trained models, is needed.
3. Some experiments in the Appendix would be better included in the main text, as they also serve as important validations of the proposed methods, such as the results in Tables 6, 11 and 12.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. See the weakness.
2. There are some typos and grammatical errors, for example, "Plus, we also evaluate NsCE on real-world domain incremental datasets and large-scale image classification datasets." An "and" is missing here.
3. Some existing literature, such as [A], has also discussed training delay in OCL. It would be better to cite it in the main text.
4. The performance improvements appear relatively marginal in terms of the last accuracy reported in Tables 4 and 5, yet there is a significant boost in performance on the AUC metric. Could there be specific reasons behind this discrepancy?
[A] Y. Ghunaim, A. Bibi, K. Alhamoud, M. Alfarra, H. A. Al Kader Hammoud, A. Prabhu, P. H.Torr, and B. Ghanem. Real-time evaluation in online continual learning: A new hope. ICCV2023
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Respected Reviewer abk8,_ We first thank you for your valuable and insightful feedback, and for recognizing our analysis from a Pac-Bayes perspective and proposed method. Below, we address your concerns in a point-by-point manner and welcome further discussion if anything remains unclear.
Q: _Discussion on more real-world scenarios where such restrictions are applicable and realistic._
A: We agree with your perspective on the importance of realistic experimental settings and hope to address your concerns from two perspectives.
1. Evaluations in real-world settings: In fact, our experiments already include two real-world continual image classification benchmark datasets that capture a natural, decade-long (2004-2014) temporal evolution of visual concepts, covering both class-incremental and domain-incremental settings.
2. Real-world applications: Similar to our answer to reviewer u1kw, our initial intuition for limiting request frequency to the memory buffer is based on practical considerations. In real-world scenarios where OCL could be applied, such as autonomous vehicles and sensor-network data classification, ensuring real-time accessibility of the memory buffer without incurring training delays is typically impractical.
Q: _More analysis on how to improve the model throughput and the relationship with pre-trained models._
A: To further improve model throughput, we provided an NsCE Lite version in Appendix C.2.2, which enhances throughput by not fine-tuning the entire network. We found that, in most cases, this lightweight framework achieves results comparable to NsCE, particularly on relatively simple datasets. However, we must acknowledge that such approaches rely heavily on the choice of pre-trained model. For detailed guidelines on selecting pre-trained models, please refer to Appendix C.3.
Q: _Some typos, missing citations and experimental results in main text._
A: We apologize for any confusion or inconvenience. We will cite the mentioned paper in the main text and carefully proofread our text to ensure that no grammar mistakes or typos remain.
Q: Reasons behind the discrepancy between $A_{AUC}$ and Last accuracy.
A: It should be noted that $A_{AUC}$ and last accuracy actually reflect different aspects of the model's performance. $A_{AUC}$ mainly assesses the real-time performance of the model, while the last accuracy measurement reflects the model's performance after processing the entire data stream. Our focus is on achieving a high-performance, anytime inference OCL model with minimal time and memory costs. Therefore, last accuracy is not our first priority. But even without employing data augmentation and knowledge distillation, our NsCE framework still achieves comparable results in last accuracy with SOTA OCL methods. | Summary: This paper identifies two previously overlooked challenges in online continual learning (CL): model ignorance and myopia. In response, it introduces a new framework called Non-sparse Classifier Evolution (NsCE). NsCE features non-sparse maximum separation regularization and targeted experience replay techniques designed to quickly learn new globally discriminative features. Experimental results show significant enhancements in both model performance and throughput.
Strengths: 1. The paper is well-motivated by the shortcomings of existing OCL methods. It tackles a significant learning problem, and the focus on model throughput from both empirical and theoretical perspectives is a pertinent challenge in many practical applications involving data streams.
2. The strength of the paper lies in its thorough empirical evaluation, which convincingly illustrates the concepts of model ignorance and myopia. Additionally, the introduction of the term "myopia" is intriguing, and the accompanying theoretical analysis offers convincing insights into this issue.
3. The analysis of model parameter sparsity is interesting, and the proposed method with pre-trained initialization aims to address the issues of model ignorance and myopia, ultimately achieving good empirical performance.
4. It's the first time theoretical results have included discussions of model throughput, which represents a noteworthy contribution.
Weaknesses: 1. Regarding the model's ignorance: given the general expectation in transfer learning that pre-trained models facilitate fast learning, it is not surprising that using pre-trained models improves the performance of OCL. However, how the pre-trained model improves learning speed is still not clear.
2. As highlighted in Section 4.1 of the article, many existing methods struggle with large-scale volatile datasets. While employing pre-trained models can partially mitigate the issue of model ignorance, it is still uncertain whether this approach offers a definitive solution. Although Appendix B.2 addresses this concern, a definitive and clear solution to fully resolve this problem has not yet been established.
3. It would be better to illustrate why the $max()$ function is claimed to be easily affected by a small number of outliers in the parameters.
4. The paper claims that a smooth classifier can help mitigate the model's myopia, but it hampers the model's ability to perform rapid classification on the current task. Are there any insights into why this occurs?
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Is there any insight into how the size of the memory buffer relates to the number of learning tasks? The sizes of the memory buffers seem quite arbitrary across the datasets.
2. Does performance depend on the order of the tasks/domains? Sometimes it will influence the results as different data distributions come from different tasks.
3. Is there a concern about overconfidence or underconfidence with this approach? Given that modern deep neural networks are particularly prone to miscalibrated confidence, an analysis of this aspect would be valuable.
4. See the weakness.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Respected Reviewer u1kw,_ We first thank you for your valuable and insightful feedback, and for recognizing our empirical evaluation and theoretical insights. Below, we address your concerns in a point-by-point manner and welcome further discussion if anything remains unclear.
Q: _How the pre-trained model improve the learning speed_
A: We first appreciate your insightful question. As stated in Section 2, we highlighted that there is a significant trade-off between effective learning and model throughput. Meanwhile, Figure 1 clearly shows that a proper pre-trained initialization enables the model to rapidly achieve good performance on continual tasks. This, in turn, reduces the need for extensive training steps to mitigate ignorance, thereby increasing the learning speed. Certainly, fully exploiting pre-trained models to improve learning speed in OCL is a very important topic, and we hope to address this in future works.
Q: _A clear solution on how to resolve the issue of model ignorance when using pre-trained models._
A: First, we must acknowledge that, up until now, using an appropriate pre-trained initialization is the only efficient way we have found to resolve ignorance. However, our research also indicates that the efficacy of pre-training is influenced by various factors, including the domain alignment between the pre-training and target tasks, the volume and quality of the pre-training dataset, specific attributes of the target dataset, and the architecture of the backbone network. Generally, we posit that having insight into the expected data distribution of upcoming tasks allows for the selection of a pre-trained model trained on a similar distribution, which is a prudent and dependable approach.
Q: _Why to claim that using $max()$ function is easily affected by a small number of outliers in the parameters._
A: Thanks for your suggestion. By definition, the max() function selects only the single highest value in a set; it does not consider the frequency or distribution of the other values, so a single outlier can dominate it and cause unexpected problems in optimization or evaluation.
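A tiny numpy illustration of the point made above (the parameter values are invented for illustration): a single outlier fully determines `max()`, while a distribution-aware summary such as a high quantile is barely affected.

```python
import numpy as np

# 1000 parameters of typical magnitude 0.1; a second copy with one outlier.
w_clean = np.full(1000, 0.1)
w_outlier = w_clean.copy()
w_outlier[0] = 10.0

# max() is fully determined by the single outlier ...
print(w_clean.max(), w_outlier.max())  # 0.1 10.0
# ... while the 99th percentile, which reflects the distribution, is not.
print(np.quantile(w_clean, 0.99), np.quantile(w_outlier, 0.99))
```

This is why an objective or evaluation built on `max()` over parameters can behave erratically when a few weights are extreme.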
Q: _Why a smoother classifier sometimes hampers the model's ability to perform rapid classification on the current task._
A: We have observed this phenomenon in the sensitivity analysis and ablation study. The reason is straightforward: compared to a sparse classifier, a smoother one tends to give vaguer predictions, which often degrade performance on the current task. Therefore, it is important to ensure that $\mathcal{L}_{ce}$ remains the dominant term in the optimization, while $\mathcal{L}_s$ serves more as a regularizer.
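As a hedged sketch of the "dominant term plus regularizer" balance described above: the entropy-style smoothness penalty and the weight `lam` below are illustrative stand-ins of my own, not the paper's actual $\mathcal{L}_s$.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def combined_loss(logits, labels, lam=0.1):
    """Illustrative objective: cross-entropy plus a small smoothness
    regularizer (here, the negative prediction entropy), with `lam`
    kept small so that the cross-entropy term stays dominant."""
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    # Negative entropy: minimizing it pushes predictions toward smoother,
    # less overconfident distributions.
    smooth = (p * np.log(p + 1e-12)).sum(axis=-1).mean()
    return ce + lam * smooth
```

With a small `lam`, the regularizer gently discourages overconfident (sparse) predictions without overwhelming the classification signal, mirroring the balance the authors describe.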
Q: _How the size of the memory buffer relates to the number of learning tasks?_
A: Since each dataset comprises a varying number of classes, it is typically the case that datasets with more classes require the storage of more data, leading to a need for a larger memory buffer. Our experimental setup largely adheres to the parameters established by [1].
[1] Wei Y, Ye J, Huang Z, et al. Online prototype learning for online continual learning. ICCV, 2023.
Q: _Is there a concern about over confidence or under confidence with this approach?_
A: We appreciate your insightful question and agree that the overconfidence or underconfidence of a model's predictions is a promising lens for understanding the model's behavior during continual learning, especially in online scenarios. In fact, model myopia, recency bias in the classifier, and even ignorance can be considered from the perspective of the model's prediction confidence. For example, the bias towards the current task can be perceived as a form of overconfidence, and the $\mathcal{L}_s$ we proposed can help alleviate it. However, how to evaluate the confidence level, and how existing techniques addressing issues like overconfidence would perform in OCL, remains unclear and warrants further exploration. We hope to address this in future work.
Strengths: - The experiments seem to have been realized with care
- The paper is well written
- The experimental results are compelling
- I appreciate the defined challenges as I would also agree that Ignorance is an important topic in OCL as the focus is shifting away from forgetting in recent studies
- I appreciate the effort of providing a theoretical analysis
- the code is available
Weaknesses: **Major Weaknesses**
1. What is the justification behind this limited request on the memory buffer? Why not include experiments in the traditional setup with $Freq=1$?
2. Some references to related work on budget CL are missing. I would suggest the authors to clarify what is the difference between their analysis with regard to the throughput and the analysis of computationally budgeted continual learning [1]. It would be beneficial to include such methods in the comparison table.
3. What is the justification behind the currently used metric $A_{AUC}$? I would like the authors to report the Average Accuracy, which should be the metric of interest in continual learning.
4. A graph showing the weight sparsity with and without the sparsity-regularization term could also showcase better the impact of the loss. At the moment, there is no clear demonstration of how much myopia has been solved. Overall the presentation of how the proposed approach solves the introduced challenges could be improved.
5. The authors should clarify the relation between the defined challenges and the usual stability-plasticity trade-off challenges of continual learning [6]. What is the relation between ignorance and plasticity? Is Myopia related to stability?
6. Figure captions must be enlarged. Confusion matrices are also barely readable.
**Minor Weaknesses**
7. I believe the findings of figure 2 right hand side have been discussed in previous studies such as [5]. Such references could be included in the discussion.
8. Table 1 readability could also be improved (number size specifically).
9. Figure 2, right-hand side, has incorrect markers.
10. If I understand correctly, the "single task setting" is just training for one epoch. If so, I would advise introducing it as such.
11. Equation 3 should be referenced in the Figure 4 caption.
12. l184: "This trend towards simplification is illustrated in Figure 4 (Right), where there is a noticeable increase in the sparsity of parameters associated with older tasks as new ones are introduced." You might want to define the sparsity as $1-s(w)$, as the current definition can be misleading: a high sparsity is obtained with a low $s(w)$ value.
13. I wonder how would the sparsity be affected by methods such as SS-IL [2], ER-ACE [3] or GSA [4] , which focus on re-arranged last layer weight updates.
**Typos**
- caption of Figure 4 : "class 0 corresponding to class0"
- l167: Appednix
**References**
[1] Prabhu, Ameya, et al. "Computationally budgeted continual learning: What does matter?." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[2] Ahn, Hongjoon, et al. "Ss-il: Separated softmax for incremental learning." Proceedings of the IEEE/CVF International conference on computer vision. 2021.
[3] Caccia, Lucas, et al. "Reducing representation drift in online continual learning." arXiv preprint arXiv:2104.05025 1.3 (2021).
[4] Guo, Yiduo, Bing Liu, and Dongyan Zhao. "Dealing with cross-task class discrimination in online continual learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[5] Buzzega, Pietro, et al. "Rethinking experience replay: a bag of tricks for continual learning." 2020 25th International Conference on Pattern Recognition (ICPR). IEEE, 2021.
[6] Wang, Maorong, et al. "Improving Plasticity in Online Continual Learning via Collaborative Learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The limitations have been correctly discussed in appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: _Respected Reviewer WX6u,_ We first thank you for your valuable and insightful feedback, and for recognizing our motivation and theoretical analysis. Below, we address your concerns in a point-by-point manner and welcome further discussion if anything remains unclear.
Q: _Justification behind the limitation on the request frequency on the memory buffer_
A: We address this question from three aspects.
1. Our initial intuition for limiting request frequency to the memory buffer is based on practical considerations. In real-world scenarios, such as autonomous vehicles and sensor network data classification where OCL could be applied, ensuring real-time accessibility of the memory buffer without incurring training delays is typically impractical. This is detailed in Appendix D.1.
2. When taking the traditional setting ($Freq=1$, replaying a small part of the data in memory at each time step), the training time is typically much longer and the model's throughput is significantly reduced, as illustrated in Figures 9 & 10, let alone when replaying the whole memory buffer at $Freq=1$.
3. Additionally, we find that using a proper pre-trained initialization and setting $Freq=1$ enables all baselines to achieve very high performance, as shown in the following table. A high-frequency replay on a moderately sized memory buffer naturally resolves most issues like forgetting, myopia and even ignorance, because it makes the setting more akin to an offline, non-continual one. This, to some extent, contradicts the intuition of the OCL setting in our view. Here, we take the performance $A_{AUC}$ on EuroSat as an example; at each time step we replay $10$ percent of the data in the memory buffer:
| | M=0.1K | M=0.2K | M=0.5K | M=1K |
| ---- | ---- | ---- | ---- | ---- |
| ER |86.4|89.5|89.8|91.3|
| OnPro |84.6|87.6|90.0|90.9|
| NsCE w/o target replay|87.8|89.4|90.4|91.0|
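To make the limited-request setting concrete, here is a schematic sketch (not the authors' implementation; all names, the reservoir-sampling buffer, and the replay fraction are illustrative assumptions) of an ER-style loop where incoming samples are seen once and the memory buffer is only requested every $1/Freq$ steps:

```python
import random

def train_stream(stream, memory_size=100, freq=1/100, replay_frac=0.1, seed=0):
    """Schematic OCL loop: each incoming sample is seen once, and the
    memory buffer is only requested every 1/freq steps (limited-frequency
    replay). Names and numbers are illustrative, not the paper's code."""
    rng = random.Random(seed)
    memory, seen = [], 0
    period = int(round(1 / freq))   # freq = 1/100 -> request every 100 steps
    replay_events = 0
    for step, sample in enumerate(stream, start=1):
        # ... online model update on `sample` would happen here ...
        seen += 1
        if len(memory) < memory_size:           # reservoir sampling keeps the
            memory.append(sample)               # buffer a uniform subsample
        else:
            j = rng.randrange(seen)
            if j < memory_size:
                memory[j] = sample
        if step % period == 0 and memory:       # the only buffer requests
            batch = rng.sample(memory, max(1, int(replay_frac * len(memory))))
            replay_events += 1
            # ... replay update on `batch` would happen here ...
    return replay_events

print(train_stream(range(1000)))  # 10 buffer requests over a 1000-sample stream
```

At $Freq=1$ the buffer would instead be requested at every step, which is exactly the near-offline regime discussed above.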
Q: _Comparison with computationally budgeted continual learning_
A: We address this question from three aspects.
1. _Some missing references:_ We first thank the reviewer for pointing out the missing related work on budgeted continual learning. We will cite those references in the revised version.
2. _Difference between our analysis and theirs:_ The main difference is that in our setting, data is strictly treated as an online stream, meaning no revisit is allowed except for the data in the memory buffer. The starting points of the two papers are different: they aim to address the CL problem with limited training iterations, whereas we strive to train a model to match the original speed of the input data stream. Notably, in our experiments, the computational budget is much lower than in the mentioned paper. In our setting ($Freq=1/50$ or $Freq=1/100$), the computational budget $\mathcal{C}$ allows revisiting less than $0.1$ percent of all observed data at a given step (on average), whereas in their paper, this proportion is $25$-$50$ percent.
3. _Including computationally budgeted continual learning methods in the comparison table:_ We take the suggestion and add some baseline methods from computationally budgeted continual learning, denoted as CBCL with ACE (we find the implementation of CBCL is very close to ER). Results for larger datasets like CLEAR and ImageNet will be left for a future version. It appears that the method performs slightly better than ER in our original Table 1, but it is less competitive compared to our method.
| | CIFAR10 M=0.1K Freq=1/100 | CIFAR100 M=0.5K Freq=1/50 | EuroSat M=0.1K Freq=1/100 |
| ---- | ---- | ---- | ---- |
| CBCL w/ ACE |84.0 $\pm$ 1.0|62.4 $\pm$ 0.8|60.9 $\pm$ 0.6|

| | CIFAR10 M=0.5K Freq=1/10 | CIFAR100 M=2K Freq=1/10 | EuroSat M=0.5K Freq=1/10 |
| ---- | ---- | ---- | ---- |
| CBCL w/ ACE |89.2 $\pm$ 0.4|72.7 $\pm$ 1.2|85.1 $\pm$ 0.8|
Q: _The relation between the defined challenges and the usual stability-plasticity trade-off_
A: The stability-plasticity trade-off is well known in continual learning: techniques to prevent forgetting often limit new knowledge acquisition. We thank the reviewer for giving us the chance to clarify the relationships between ignorance vs. plasticity and myopia vs. stability.
1. We first acknowledge that both ignorance and plasticity emphasize the need to improve the model’s capability to acquire new knowledge. However, our concept of ignorance extends beyond plasticity by incorporating the model's throughput. As illustrated in Figure 2 and Figure 11, although the method in [6], referred to as CCL or Distillation Chain in our work, enhances plasticity for OCL, it significantly reduces the model's throughput. This reduction potentially harms its practicality and performance, as detailed in Section 2.
2. Our examination of model myopia aims to refine the concept beyond its interpretation in the context of stability. Previous literature focusing on model stability attributes the model's decreasing performance to the interference or overlap in the feature representations or gradient contention of new and old classes, advocating for safeguarding existing knowledge to mitigate forgetting. Conversely, we contend that this performance degradation arises more from the model's limited exposure to subsets of classes during training, which causes it to favor features unique to these subsets. This leads to a myopic classification perspective with poor generalization. As we note in our theoretical analysis (Section 4), our proposed concept of model myopia offers a fresh perspective.
Overall, our defined new challenges serve as an extension or an independent aspect (the other side of the coin) compared to the traditional stability-plasticity trade-off.
**Note: Due to strict space limits, our responses to the other review questions are consolidated in the Author Rebuttal above. We apologize for any confusion caused by not responding individually. Please refer to the Author Rebuttal or let us know if you have any other questions to discuss further.**
---
Rebuttal 2:
Title: Additional response
Comment: Q: _How the proposed approach solves the introduced Myopia could be improved_
A: We first appreciate your insightful question and suggestion to create a graph showing the weight sparsity with and without the sparsity-regularization term. We have included this in Figure 1 of the attached PDF. It can be seen that the proposed sparse regularization term $\mathcal{L}_s$ prevents the classifier from becoming excessively sparse during training. Additionally, we want to clarify that myopia is addressed by the overall design of our proposed method including the sparsity-regularization term, NsCE, as illustrated in Figure 5 of our main text. From the visualization, it can be seen that our model quickly learns the current task while minimizing confusion between past categories and those in the current task.
Q: _The relation between the defined challenges and the usual stability-plasticity trade-off_
A: The stability-plasticity trade-off is one of the most well-known concepts in continual learning. It is recognized that techniques to mitigate catastrophic forgetting often constrain a model’s capability to acquire new knowledge. Some recent studies also perceive these as independent challenges, as the reviewer mentioned in [6]. We thank the reviewer for giving us the opportunity to further clarify the relationship and differences between ignorance vs. plasticity and myopia vs. stability.
1. ignorance vs. plasticity: We first acknowledge that both ignorance and plasticity emphasize the need to improve the model’s capability to acquire new knowledge. However, our concept of ignorance extends beyond plasticity by incorporating the model's throughput, a critical factor in high-speed data stream environments. We observe that the single-pass nature of OCL challenges models to learn effective features within constrained training time and storage capacity, leading to a trade-off between effective learning and model throughput. As illustrated in Figure 2 and Figure 11, although the method in [6], referred to as CCL or Distillation Chain in our work, enhances plasticity for OCL, it significantly reduces the model's throughput. This reduction potentially harms its practicality and performance, as detailed in Section 2.
2. myopia vs. stability: Our examination of model myopia aims to refine the concept beyond its interpretation in the context of stability. Previous literature focusing on model stability attributes the model's decreasing performance to the interference or overlap in the feature representations or gradient contention of new and old classes, advocating for safeguarding existing knowledge to mitigate forgetting. Conversely, we contend that this performance degradation arises more from the model's limited exposure to subsets of classes during training, which causes it to favor features unique to these subsets. This leads to a myopic classification perspective with poor generalization. As we note in our theoretical analysis (Section 4), our proposed concept of **model's myopia** offers a fresh perspective for understanding the performance degradation in OCL.
Overall, our defined new challenges serve as an extension or an independent aspect (the other side of the coin) compared to the traditional stability-plasticity trade-off.
[6] Wang M, Michel N, Xiao L, et al. Improving Plasticity in Online Continual Learning via Collaborative Learning. CVPR, 2024.
Q: _Whether the sparsity would be affected by methods that focus on re-arranged last layer weight updates_
A: After the reviewer's reminder, we are also very interested in how sparsity would be affected by methods focusing on re-arranged last layer weight updates. After implementing ER-ACE and ER-AML, we found that the phenomenon of parameters rapidly becoming sparse is indeed somewhat mitigated, though not as significantly as with our proposed regularization term $\mathcal{L}_s$, as illustrated in Figure 1 of the attached PDF. Additionally, incorporating ACE or AML can also boost performance for baselines like ER and SCR. We believe that when ACE and AML nudge the learned representations to be more robust to new future classes, they indirectly decrease the sparsity of the model parameters.
For GSA, the sparsity is not affected. Due to very limited time, we are not entirely sure whether this part is perfectly embedded or if further tuning would help, as the authors only provide hyperparameters for CIFAR-100 and there are some differences between the original paper's general settings and ours. For SS-IL, we did not find its implementation, so it may be left for future works.
Overall, re-arranging the last-layer weight updates is an interesting and important problem in the area of OCL. We genuinely appreciate this insightful suggestion and plan to conduct a detailed exploration of the dynamics of last-layer weight parameters during training in future work. This may lead to more intuitive designs.
---
Rebuttal 3:
Title: Additional response
Comment: Q: Justification behind $A_{AUC}$ (Area under the Accuracy Curve) but not Average Accuracy $A_{avg}$
A: At this point, we respectfully disagree with the reviewer's opinion. Compared to $A_{avg}$, $A_{AUC}$ is typically perceived as a more suitable and modern evaluation metric for the OCL scenario, especially in boundary-free settings like ours. Specifically, $A_{AUC}$ was first proposed by [1] and has been widely adopted as a crucial metric in the existing OCL literature to replace the previously common $A_{avg}$, e.g., in [2], [3], [4]. This metric addresses the limitation of $A_{avg}$, which evaluates the model's performance only at specific task transition points, usually occurring only 5-10 times in most OCL setups.
In contrast to [1], we measure accuracy more frequently—at least 20 times—by evaluating it after every $\Delta n$ samples ($\Delta n=500$ for EuroSat). Instead of taking an average of these observed accuracies, we compute the area under the accuracy-to-number-of-samples curve (AUC), $A_{AUC}=\frac{1}{n}\sum_{i=1}^t acc(i \cdot \Delta n) \cdot \Delta n$ (actually similar to the average taken over more observations, $A_{avg}=\frac{1}{t}\sum_{i=1}^t acc(i\cdot \Delta n)$), providing a more precise evaluation than simply taking the average accuracy ($n$ denotes the total number of training data). As illustrated by the following table (we directly copy the performance of RM [5] from [1]), a higher $A_{AUC}$ typically induces a higher $A_{avg}$ and good real-time inference performance, but a high $A_{avg}$ does not necessarily represent good real-time inference performance. We highly respect the reviewer's suggestion and plan to include a more detailed discussion comparing these two metrics (including both numerical results and illustrations) in our revised version.
| | CIFAR10 | CIFAR100 | TinyImageNet | ImageNet |
| ---- | ---- | ---- | ---- |---- |
| $A_{AUC}$ |23.00 $\pm$ 1.43|8.63 $\pm$ 0.19|5.74 $\pm$ 0.30|6.22|
| $A_{avg}$ |61.52 $\pm$ 3.69 |33.27$\pm$ 1.59|17.04 $\pm$ 0.77|28.30|
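For concreteness, the two quantities discussed above can be sketched in a few lines (the accuracy values are illustrative, not from the paper); with evenly spaced evaluations they coincide numerically, the substantive difference being how frequently accuracy is measured:

```python
def a_auc(accs, delta_n):
    """(1/n) * sum_i acc(i * delta_n) * delta_n, per the formula above."""
    n = len(accs) * delta_n            # total number of training samples
    return sum(a * delta_n for a in accs) / n

def a_avg(accs):
    """Plain average of the same frequent accuracy measurements."""
    return sum(accs) / len(accs)

accs = [0.25, 0.5, 0.5, 0.75]          # accuracy after every 500 samples
print(a_auc(accs, 500), a_avg(accs))   # -> 0.5 0.5
```

The classical $A_{avg}$ in the literature is instead computed from only the handful of task-boundary evaluations, which is why it can mask poor real-time performance.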
[1] Koh H, Kim D, Ha J W, Choi J. Online continual learning on class incremental blurry task configuration with anytime inference. ICLR, 2022.
[2] Moon J Y, Park K H, Kim J U, et al. Online class incremental learning on stochastic blurry task boundary via mask and visual prompt tuning. ICCV, 2023.
[3] Koh H, Seo M, Bang J, et al. Online boundary-free continual learning by scheduled data prior. ICLR, 2023.
[4] Seo M, Koh H, Jeung W, et al. Learning Equi-angular Representations for Online Continual Learning. CVPR, 2024.
[5] Bang J, Kim H, Yoo Y, Ha J W, Choi J. Rainbow memory: Continual learning with a memory of diverse samples. CVPR, 2021.
Q: _Readability and typos (especially on figure 2 and table 1)_
A: We apologize for any confusion or inconvenience. We will take your suggestions on readability and enlarge the figure captions to ensure better legibility. In addition, we will carefully proofread our text to ensure that no grammar mistakes or typos remain.
---
Rebuttal 4:
Title: Thank you
Comment: I really appreciate the time and effort the authors have put into their work and rebuttal. I genuinely found this discussion very interesting and I believe this work to be of high quality.
**About the $Freq=1$ scenario**
I appreciate the authors' honesty. My intuition was indeed that the proposed work's performance gain might not be as significant in this setup. That being said, I fully agree that limiting the frequency of replay makes perfect sense and should be a prior focus compared to the $Freq=1$ scenario.
**Comparison with BudgetCL**
I agree with the authors' comments. Thank you for the clarification.
**Discussion on the link with stability-plasticity**
Thank you for the clarifications.
**Extra experiments**
Thank you for sharing your findings. I also agree with your interpretation. I found these experiments particularly interesting and I believe them to be valuable to the community. I think such experiments would be worth including in the main draft.
**Sparsity metric**
Thank you for defining $\frac{1}{s(w)}$, which I believe makes more sense and is more easily understandable. I apologize for suggesting $1-s(w)$, which of course was not suited.
**On the usage of $A_{AUC}$**
Thank you for the interesting reference. I still believe that the *final* average accuracy is valuable and could be reported in appendix, and I thank the authors for including it in the rebuttal. If I am not mistaken, I believe it is not currently included. In any case, I was convinced by the authors arguments and the usage of $A_{AUC}$ now makes perfect sense to me. Again, I appreciate the effort of clarifying and justifying the choices made in this work. After carefully checking the manuscript, I realize such choices were already justified in the appendix.
For all the above reasons, **I will increase my score to 8.**
---
Rebuttal Comment 4.1:
Comment: Thank you for your encouraging feedback and insightful suggestions.
Regarding the final accuracy (the model's accuracy on all seen tasks after the entire training process), we have included it in Appendix C.2.1 (Table 4 and Table 5). We apologize for any potential confusion and will highlight this in the revised version. | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely appreciate your time and effort in reviewing our manuscript and offering valuable suggestions. Based on some of the reviews, we provide a PDF including a figure that shows the redefined weight sparsity value during training, with and without the sparsity-regularization term, and that displays, for comparison, the impact of several methods (GSA, ACE, AML) which focus on re-arranged last-layer weight updates.
Due to the strict space limits, we provide a unified answer here to three shared questions: how the demonstration that the proposed approach solves the introduced myopia could be improved, whether the sparsity would be affected by methods that focus on re-arranged last-layer weight updates, and the choice of evaluation metrics.
Q: _How the proposed approach solves the introduced Myopia could be improved_
A: We first appreciate your insightful question and suggestion to create a graph showing the weight sparsity with and without the sparsity-regularization term. We have included this in Figure 1 of the attached PDF. It can be seen that the proposed sparse regularization term $\mathcal{L}_s$ prevents the classifier from becoming excessively sparse during training. Additionally, we want to clarify that myopia is addressed by the overall design of our proposed method including the sparsity-regularization term, NsCE, as illustrated in Figure 5 of our main text. From the visualization, it can be seen that our model quickly learns the current task while minimizing confusion between past categories and those in the current task.
Q: _Whether the sparsity would be affected by methods that focus on re-arranged last layer weight updates_
A: After the reviewer's reminder, we are also very interested in how sparsity would be affected by methods focusing on re-arranged last layer weight updates. After implementing ER-ACE and ER-AML, we found that the phenomenon of parameters rapidly becoming sparse is indeed somewhat mitigated, though not as significantly as with our proposed regularization term $\mathcal{L}_s$, as illustrated in Figure 1 of the attached PDF. Additionally, incorporating ACE or AML can also boost performance for baselines like ER and SCR. We believe that when ACE and AML nudge the learned representations to be more robust to new future classes, they indirectly decrease the sparsity of the model parameters.
For GSA, the sparsity is not affected. Due to very limited time, we are not entirely sure whether this part is perfectly embedded or if further tuning would help, as the authors only provide hyperparameters for CIFAR-100. For SS-IL, we did not find its implementation, so it may be left for future works.
Overall, re-arranging the last-layer weight updates is an interesting and important problem in the area of OCL. We genuinely appreciate this insightful suggestion and plan to conduct a detailed exploration of the dynamics of last-layer weight parameters during training in future work. This may lead to more intuitive designs.
Q: Justification behind $A_{AUC}$ (Area under the Accuracy Curve) but not Average Accuracy $A_{avg}$
A: At this point, we respectfully disagree with the reviewer's opinion. Compared to $A_{avg}$, $A_{AUC}$ is typically perceived as a more suitable and modern evaluation metric for the OCL scenario, especially in boundary-free settings like ours. It can be seen as a refined version of $A_{avg}$ that prevents misinterpretation of the model's real-time performance. $A_{AUC}$ was first proposed by [1] and has been widely adopted as a crucial metric in the existing OCL literature to replace the previously common $A_{avg}$, e.g., in [2], [3], [4]. It addresses the limitation of $A_{avg}$, which evaluates the model's performance only at specific task transition points, usually occurring only 5-10 times in most OCL setups.
$A_{AUC}$ measures accuracy more frequently—at least 20 times—by evaluating it after every $\Delta n$ samples ($\Delta n=500$ for EuroSat). Instead of taking an average of these observed accuracies, $A_{AUC}$ computes the area under the accuracy-to-number-of-samples curve, $A_{AUC}=\frac{1}{n}\sum_{i=1}^t acc(i \cdot \Delta n) \cdot \Delta n$ (actually similar to the average taken over more observations, $A_{avg}=\frac{1}{t}\sum_{i=1}^t acc(i\cdot \Delta n)$), providing a more precise evaluation than simply taking the average accuracy ($n$ denotes the total number of training data). As illustrated by the following table (we directly copy the performance of RM [5] from [1]), a higher $A_{AUC}$ typically induces a higher $A_{avg}$ and good real-time inference performance, but a high $A_{avg}$ does not necessarily represent good real-time inference performance. We highly respect the reviewer's suggestion and plan to include a more detailed discussion comparing these two metrics (including both numerical results and illustrations) in our revised version.
| | CIFAR10 | CIFAR100 | TinyImageNet | ImageNet |
| ---- | ---- | ---- | ---- |---- |
| $A_{AUC}$ |23.00 $\pm$ 1.43|8.63 $\pm$ 0.19|5.74 $\pm$ 0.30|6.22|
| $A_{avg}$ |61.52 $\pm$ 3.69 |33.27$\pm$ 1.59|17.04 $\pm$ 0.77|28.30|
[1] Koh H, Kim D, Ha J W, Choi J. Online continual learning on class incremental blurry task configuration with anytime inference. ICLR, 2022.
[2] Moon J Y, Park K H, Kim J U, et al. Online class incremental learning on stochastic blurry task boundary via mask and visual prompt tuning. ICCV, 2023.
[3] Koh H, et al. Online boundary-free continual learning by scheduled data prior. ICLR, 2023.
[4] Seo M, Koh H, Jeung W, et al. Learning Equi-angular Representations for Online Continual Learning. CVPR, 2024.
[5] Bang J, Kim H, Yoo Y, Ha J W, Choi J. Rainbow memory: Continual learning with a memory of diverse samples. CVPR, 2021.
Q: _Definition of sparsity $s(w)$._
A: We redefine the sparsity score as $1/s(w)$ so that parameters with higher sparsity have a higher sparsity score, as shown in the attached PDF.
Pdf: /pdf/0b06a1969cf6866b85d065b1633669b5df1b5676.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Learning to Mitigate Externalities: the Coase Theorem with Hindsight Rationality | Accept (spotlight) | Summary: This work considers a two player game with externalities imposed from one agent (“upstream”) to the other (“downstream”) agent. Equilibrium in the absence of taxation or payments between players would result in heavy efficiency loss due to the externalities.
The Coase theorem tells us that in these situations we should expect to see payments from the downstream agent to the upstream agent in order to restore the socially optimal outcome.
This work formulates this scenario as a multi-armed bandit problem, in which the payment term forms a part of the players payoffs and options. Namely, the downstream agent can put a conditional payment on the arms played by the upstream bandit. In this situation, the paper asks whether or not normal bandit learning algorithms can learn the optimal outcome.
The paper puts forwards an assumption that must be satisfied on the upstream agents learning behavior, which is satisfied for many common bandit learning algorithms. A translation is given for the downstream agent to go from the raw bandit problem to a transformed bandit problem. With these, sublinear regret is realized relative to the socially optimal outcome.
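The dynamic summarized above can be illustrated with a toy simulation (this is not the paper's BELGIC algorithm; the utilities, payment, and ε-greedy upstream learner are made-up assumptions): the upstream agent maximizes its own utility plus whatever payment the downstream agent attaches to each arm, and a payment on the socially optimal arm flips the upstream agent's preference, as the Coase theorem predicts.

```python
import random

def simulate(T=5000, eps=0.1, seed=1):
    """Toy two-player interaction: the upstream agent runs eps-greedy on
    its own utility plus the payment the downstream agent attaches to
    each arm. All numbers are illustrative, not from the paper."""
    rng = random.Random(seed)
    u_up = [1.0, 0.6]     # upstream prefers arm 0 ...
    u_down = [0.0, 1.0]   # ... which imposes an externality on downstream
    # socially optimal arm is 1: u_up[1] + u_down[1] > u_up[0] + u_down[0]
    payment = [0.0, 0.5]  # downstream pays 0.5 when upstream plays arm 1
    est, cnt = [0.0, 0.0], [0, 0]
    pulls_of_1 = 0
    for _ in range(T):
        if cnt[0] == 0 or cnt[1] == 0 or rng.random() < eps:
            a = rng.randrange(2)
        else:
            a = 0 if est[0] > est[1] else 1
        reward = u_up[a] + payment[a] + rng.gauss(0, 0.1)
        cnt[a] += 1
        est[a] += (reward - est[a]) / cnt[a]
        pulls_of_1 += (a == 1)
    return pulls_of_1 / T

# With the payment, arm 1 yields 1.1 > 1.0 for the upstream agent, so it
# ends up playing the socially optimal arm most of the time.
print(simulate())
```

The paper's setting is harder than this sketch: the downstream agent must also learn the right payment levels online, which is exactly where the learnability question arises.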
Strengths: The work takes a well-known economic theory and asks whether a given type of ML can recover the expected phenomenon without knowing the specific externality a priori. Rather than showing an exact algorithm for recovering it, mappings are given so that any suitable bandit learning algorithm can successfully learn the results. This is a nice addition over prior work, and helps in suggesting that this is something with broad learnability. In the process, it is interesting to see the manner in which the externality needs to enter the payoffs and how that space needs to be structured.
Overall the paper is well written and has good fit at NeurIPS. Relative to prior work, it is able to show that as long as the upstream agent is using a suitable approach, any bandit learning algorithm will work. This is a much stronger guarantee for the downstream agent as they are less dependent on exactly what the upstream agent is doing.
Weaknesses: The paper and its setting are quite interesting, but it studies one specific (classic) example scenario, so applicability, while helped by the example's classic status, remains limited. A natural follow-up question is how far we should expect these situations to generalize: can we recover general externalities in multi-agent situations?
Technical Quality: 4
Clarity: 4
Questions for Authors: How broad do you expect the BELGIC routine to be applicable? In a general $M$ agent situation with less obvious structure on the externalities would you expect optimal transfers to still be learned efficiently?
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes the authors have adequately addressed limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback. All the comments will be taken into account in the new version of the paper. You can find below our answers to each of your concerns.
> It will be nice to see a discussion about possible notions of optimality of mechanisms that incentivize participation in this context.
As explained in the general answer, with the current assumption on the upstream player, we cannot hope for a better bound than $T^{\frac{\kappa+1}{2}}$. Indeed, in the almost "ideal" scenario where the downstream player proposes a payment $\tau_{a}^{\star}+\varepsilon$ on the (socially) optimal arm $a$ at each round, within our assumptions, the upstream player might still not pull $a$ for up to $\frac{T^{\kappa}}{\varepsilon}$ rounds while keeping an upstream regret smaller than $T^{\kappa}$. This would then imply a total downstream regret of order $\frac{T^{\kappa}}{\varepsilon}+T\varepsilon$, which is minimized at $\varepsilon=T^{\frac{\kappa-1}{2}}$ with value $T^{\frac{\kappa+1}{2}}$. Note that this is also the reason we add an extra $T^{\frac{\kappa-1}{2}}$ term to the proposed payment in the final phase of the algorithm.
Yet, it might be possible that stronger assumptions on the upstream player strategy (that remain realistic) could lead to better regret bounds in the end.
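The balancing step in the argument above (minimizing $\frac{T^{\kappa}}{\varepsilon}+T\varepsilon$ over $\varepsilon$) can be checked numerically; the sketch below uses arbitrary illustrative values of $T$ and $\kappa$:

```python
# f(eps) = T^k / eps + T * eps is minimized at eps* = T^((k-1)/2),
# where it takes the value 2 * T^((k+1)/2) (same T-exponent as claimed).
def f(eps, T, k):
    return T ** k / eps + T * eps

T, k = 10_000, 0.5
eps_star = T ** ((k - 1) / 2)          # = 0.1 for these values
candidates = [eps_star * c for c in (0.25, 0.5, 1.0, 2.0, 4.0)]
assert abs(f(eps_star, T, k) - 2 * T ** ((k + 1) / 2)) < 1e-6
assert all(f(eps_star, T, k) <= f(e, T, k) + 1e-9 for e in candidates)
```

The constant factor 2 is absorbed in the order notation, so the regret order $T^{\frac{\kappa+1}{2}}$ follows.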
> How broad do you expect the BELGIC routine to be applicable?
We invite the reviewer to look at our general answer on the use of our work for more general settings. To summarize, if externalities go both ways (i.e. downstream action also influences upstream utility), the problem becomes much more complex and BELGIC shouldn’t learn the optimal equilibrium. This would then become a general repeated game, for which learning is hard and a long line of research already exists.
Our goal in this work is instead to consider a simpler game structure, where learning is easier. Extending this kind of problem to more than two agents is an interesting question as explained in our general answer, and we believe that BELGIC can pave the way towards designing satisfying learning strategies in those games and understanding how they behave when increasing the number of involved agents. | Summary: The paper explores the field of mitigating externalities in economic interactions by applying the Coase Theorem with hindsight rationality. The Coase Theorem, a fundamental concept in economics, suggests that in the presence of externalities, property rights, and bargaining strategies can be utilized to achieve an optimal outcome for all parties involved. However, traditional applications of the Coase Theorem assume that players have perfect knowledge of the game, which may not hold true in real-world scenarios.
In this paper, the authors introduce the concept of hindsight rationality, where players learn from past interactions to improve their decision-making processes. By incorporating hindsight rationality into the Coase Theorem framework, the paper aims to provide a mechanism through which players can adapt their strategies over time to maximize social welfare, even in the presence of uncertainty. The theoretical foundation of the paper is built upon a series of theorems supported by assumptions and proofs.
Overall, the paper offers a novel perspective on addressing externalities in economic settings by integrating hindsight rationality into the Coase Theorem framework. Through a rigorous theoretical analysis, ethical considerations, and a discussion of limitations, the paper contributes to the ongoing discourse on optimizing social welfare in the presence of externalities.
Strengths: S1. One of the main strengths of the paper is its solid theoretical foundation. The authors create a clear framework based on well-known economic principles, like the Coase Theorem, and expand it by adding the idea of hindsight rationality. The thorough theoretical analysis, backed by detailed proofs and assumptions, strengthens the credibility and robustness of the proposed approach.
S2. By introducing the concept of hindsight rationality within the context of mitigating externalities, the paper offers a novel perspective on addressing economic inefficiencies.
Weaknesses: W1. One significant weakness of the paper is the lack of empirical validation or experimental results to back up the proposed theoretical framework. Although the theoretical analysis is good, empirical evidence would improve the practical applicability and real-world relevance of the research findings.
W2. It would have been insightful to see a comprehensive discussion on the potential challenges and complexities associated with implementing the proposed approach in practical economic settings. Addressing implementation hurdles could provide valuable insights for policymakers and practitioners.
W3. The setting (extension of Coase theorem in a two-player bandit setting where the actions of one of the players affect the other player) considered in the paper is narrow and limited. The results are also exemplified for a simple setting of two firms, an upstream and a downstream. Do the observations apply to potentially broader settings?
W4. The mathematical notations provided in the paper sometimes become too difficult to follow and correlate with the text.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please clarify the points in the Weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the limitations of the proposed method are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the detailed feedback. All the comments will be taken into account in the new version of the paper. You can find below our answers to each of your concerns.
> W1. One significant weakness of the paper is the lack of empirical validation or experimental results to back up the proposed theoretical framework.
We will add experiments on a toy model in the revised version. Please see the attached pdf for the corresponding, unpolished, figure. In a few words, we consider a simple environment with two firms having continuous (quadratic) profit functions, similarly to Example 1. We discretize these functions so that each level corresponds to an arm. We consider the two cases where (i) no property rights are defined and both firms play UCB; (ii) property rights are defined, with the downstream firm applying BELGIC (and the upstream firm playing UCB). In both cases, we plot the empirical frequency of the action chosen by the upstream firm.
As expected by our theory, we observe in this example that the learning agents will quickly converge to the social optimum when using transfers and implementing the algorithm BELGIC (for the downstream player). On the other hand, there is a social inefficiency in the absence of transfers, as the upstream player ends up choosing his best action, regardless of the downstream utility.
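A minimal sketch of case (i) is given below; it is an illustration under assumptions and not the exact setup of our figure (the quadratic profit functions, noise level, and all constants are hypothetical choices). It shows the upstream firm, running UCB on its own rewards only, converging to its privately optimal arm rather than the socially optimal one:

```python
import numpy as np

# Illustrative sketch (hypothetical profit functions, not the paper's exact
# experiment): without transfers, the upstream firm runs UCB on its own rewards
# and ignores the downstream utility, so it converges to argmax u_up even when
# a different arm maximizes social welfare u_up + u_down.
rng = np.random.default_rng(0)
K, T = 5, 20_000
arms = np.linspace(0.0, 1.0, K)
u_up = 1.0 - (arms - 0.2) ** 2           # upstream profit peaks near a = 0.2
u_down = 1.0 - (arms - 0.8) ** 2         # downstream profit peaks near a = 0.8

counts, sums = np.zeros(K), np.zeros(K)
for t in range(T):
    if t < K:
        a = t                             # pull each arm once to initialize
    else:
        ucb = sums / counts + np.sqrt(2 * np.log(t) / counts)
        a = int(np.argmax(ucb))
    counts[a] += 1
    sums[a] += u_up[a] + rng.normal(scale=0.1)   # noisy upstream reward only

chosen = int(np.argmax(counts))           # upstream's most-played arm
social_opt = int(np.argmax(u_up + u_down))
print(chosen, social_opt)
```

With these hypothetical profit functions the two arms differ, illustrating the social inefficiency described above.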
> W2. Addressing implementation hurdles could provide valuable insights for policymakers and practitioners.
Addressing the implementation of our approach in real-world situations for policymakers and practitioners requires an extensive line of research that is generally run over multiple years in economics. We believe that studying these questions is out of our scope and leave them for future work. Before that, we also believe that building a more solid theoretical understanding of the problem is necessary, through extensions to more realistic cases (as in our answer below to W3).
As an example, Abildtrup et al (2012) observed that the predictions of Coase theorem are not verified for an interaction between farmers and waterworks in Denmark. They suggested that the uncertainty on the reward functions might be a reason for that. We can see our work as an illustration of the cost of learning in the presence of uncertainty: this cost might be an obstacle for practical implementation.
> W3. Do the observations apply to potentially broader settings?
Please see our general answer on the use of our work for more general settings.
To summarize, extending our work to broader settings is a very interesting direction for future work. We believe that our work can pave the way towards tackling more general settings.
> W4. The mathematical notations provided in the paper sometimes become too difficult to follow and correlate with the text.
We realize that we build on a quite heavy set of notations and have faced several times the challenge of writing something clear but still rigorous. The latter forced us to rely on several different mathematical notations but we can either introduce clearer environment definitions or a table of notations.
If the reviewers agree, we would love to include a table of notations after the references to help the reader. We can also insist on the notations and their definitions each time we introduce a new one to ease the reading.
--------
We hope that we answered all your different concerns. If you still have any, we would be happy to answer any further questions.
---------
**References**
Abildtrup, J., Jensen, F., & Dubgaard, A. (2012). Does the Coase theorem hold in real markets? An application to the negotiations between waterworks and farmers in Denmark. Journal of environmental management, 93(1), 169-176.
---
Rebuttal Comment 1.1:
Comment: Thank you for your responses. The authors address some of my concerns and I am happy to increase my score to 6. | Summary: The paper presents a two-player sequential game with bandit feedback where one player's actions create externalities that affect the other player's outcomes. In this model, the utility of only one player (the downstream player) is impacted by the collective actions of both the upstream and downstream players. They consider two scenarios: one where players act independently to maximize individual utility, and another where players can interact via transfers.
In the first scenario (without transfers), the authors show that individual utility maximization can lead to suboptimal social welfare in some instances. In the second scenario, where the downstream player can choose a transfer, they demonstrate that social welfare can reach its optimum. They propose an algorithm for the downstream player that achieves this outcome, provided the upstream player follows any no-regret policy. This effectively presents an online version of the Coase theorem, establishing that even with imperfect information, the two-player sequential game converges to the global optimum outcome when transfers are allowed.
Strengths: 1. The algorithm does not assume a specific policy for the upstream player and achieves sublinear regret for any no-regret upstream policy (with regret scaling as $T^\kappa$).
2. Lemma 1 provides a clean problem formulation, showing that bounds on regret in individual utility (with transfers) lead to a bound on the social welfare regret.
3. For the specific instance where the upstream player follows the Upper Confidence Bound (UCB) policy (or any other algorithm with $\sqrt{T}$ regret) the proposed algorithm achieves regret scaling as $T^\frac{3}{4}$.
Weaknesses: Please see the questions section.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. It is not clear whether the regret incurred by the downstream player is optimal. Even if the upstream player is assumed to follow UCB, it's not evident whether a regret scaling as $\sqrt{T}$ policy is achievable for the downstream player as well.
2. It is stated (on lines 111-112) that the assumption that the rewards are bounded is "without loss of generality" (WLOG). Can you please explain why? Would this work even in the case of other standard reward distributions in bandit literature, such as sub-Gaussian rewards?
3. In the current setup, it is assumed that transfers occur only on a single action, with the transfer function taking a value of 0 for all but one action. Is there a particular reason for assuming such a transfer function? Can we work with more general transfer functions where the downstream player can choose to make transfers on multiple actions? How would this affect the regret of the downstream player?
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: This paper does not have any negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful feedback, that will be taken into account in the new version of the paper. We answer below your different questions.
> It is not clear whether the regret incurred by the downstream player is optimal.
See our general answer on optimality of the regret bound for BELGIC for more details.
To summarize, we believe that the $T^{\frac{1+\kappa}{2}}$ bound is optimal with our set of assumptions. Yet, it might be possible that stronger assumptions on the upstream player strategy (that remain realistic) could lead to better regret bounds in the end.
> It is stated (on lines 111-112) that the assumption that the rewards are bounded is "without loss of generality"
Note that our boundedness assumption is about the expected rewards $v^{\mathrm{up}}(a)$. On the other hand, we assume nothing about the random reward realizations, except 1-sub-Gaussianity in Corollary 1.
H1 is only needed to know the range over which we should run the binary search (to estimate the optimal transfers $\pi^{\star}$). When the range is unknown, we could first run a range search procedure before the binary search, but we omitted this point for the sake of simplicity and presentation.
> In the current setup, it is assumed that transfers occur only on a single action, with the transfer function taking a value of 0 for all but one action
More generally, we could indeed define transfers as a vector of size $K$ depending on the choice of the upstream player. However such a possibility does not bring any improvement in the considered solution for the downstream player, as we cannot infer more information from the choice of action by the upstream player in that case.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I have no further questions. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their detailed and insightful feedback. All their comments will be taken into account in the revised version of our work. We answer each reviewer's concerns individually. Besides, a couple of points were raised by multiple reviewers: we answer these points below.
## About the optimality of the regret bound for BELGIC
This is a very interesting question. With the current assumption on the upstream player, we cannot hope for a better bound than $T^{\frac{\kappa+1}{2}}$. Indeed, even in the almost “ideal” scenario where the downstream player proposes a payment $\tau_{a}^{\star}+\varepsilon$ on the (socially) optimal arm $a$ at each round, within our assumptions, the upstream player might fail to pull $a$ for up to $\frac{T^{\kappa}}{\varepsilon}$ rounds while still having a regret smaller than $T^{\kappa}$. This would then imply a total regret of order $\frac{T^{\kappa}}{\varepsilon}+T\varepsilon$ for the downstream player, which is minimized for $\varepsilon=T^{\frac{\kappa-1}{2}}$ with value $T^{\frac{\kappa+1}{2}}$. Note that this is also the reason for which we add an extra $T^{\frac{\kappa-1}{2}}$ term to the proposed payment in the final phase of the algorithm.
Yet, better bounds might be reachable under stronger assumptions on the upstream regret (e.g., instance dependent bounds), which is open for future work.
## Can our algorithm be used for more general problems?
Naturally, if externalities go both ways (i.e., downstream action also influences upstream utility), the problem becomes much more complex and BELGIC is no longer guaranteed to learn the optimal equilibrium. This would then become a general repeated game, for which learning is hard and a long line of research already exists.
Our goal in this work is instead to consider a simpler game structure, where learning is easier. Extending our work to more than two agents is indeed a very interesting direction that we plan to pursue in future work. A potential extension of the setting could be, for instance, to consider a chain of agents, whose actions all influence the following agent in the chain.
A stronger set of assumptions would be needed in that case, as we would need to control the behaviors of all agents. We believe that our work can pave the way towards designing satisfying learning strategies in such games and it is a promising direction for future work.
Pdf: /pdf/45a83d94dd4d16470f76dda7c6c7290af8793e3b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Policy Mirror Descent with Lookahead | Accept (poster) | Summary: The paper studies policy mirror descent (PMD) where the policy improvement step is modified to include an action-value with $h$-step lookahead (contains $h-1$ applications of the Bellman optimality operator on the value of the policy from the previous iteration). In the exact tabular setting where the value functions can be computed exactly, this leads to an improvement of the linear convergence rate of standard PMD from $\gamma$ to $\gamma^h$. In the inexact tabular setting where the lookahead Q-functions are estimated through Monte-Carlo rollouts, this gives a sample complexity for finding an $\varepsilon$-optimal policy of $\tilde{O}(1/\varepsilon^2(1-\gamma)^7)$, improving by a factor of $1/(1-\gamma)$ with respect to prior work. Finally, they extend their results beyond the tabular setting to a linear function approximation setting.
Strengths: - The paper is quite well written and easy to follow.
- The paper presents an interesting extension of PMD to include look-ahead leading to faster convergence. The algorithm and results cover different settings from exact tabular to inexact with linear function approximation. Numerical experiments support these claims both in terms of iterations and runtime.
Weaknesses: - The Analysis of Theorem 4.1 closely follows that of [16] while the remaining results (Theorem 5.4 and Theorem 6.3) follow using analyses similar to prior works (except for the estimation of the lookahead Q-function). Nevertheless, I recognise the merit and novelty in combining these with the idea of lookahead from [9]. Note - it would be good to explicitly state where the proof of Theorem 6.3 can be found in the Appendix.
- It seems the only disadvantage of $h$-PMD over $1$-PMD is in the computational cost of computing a more complex value function (lines 212-215) but it is unclear exactly what this cost is (other than what is conveyed in the experiments). It would be nice to explicitly quantify the additional computation (e.g. in the tabular setting, computing the true value function can be done by finding the fixed point of $\mathcal{T}^\pi$ which involves inverting a $S\times S$ matrix, then how much extra computation is required to achieve the value function with lookahead ?). Note: in lines 194-195, the assumption is not only that the value function $V^{\pi_k}$ can be computed exactly but also the value function with lookahead $V^{\pi_k}_h$ right ?
- The rationale behind how to choose $h$ is not clear to me (see also questions below). For example, in Theorem 5.4 (line 270), if the overall sample complexity is $\tilde{O} (\frac{S}{h\varepsilon^2(1-\gamma)^6(1-\gamma^h)^2} + \frac{SA}{\varepsilon^2 (1-\gamma)^7})$, this seems to suggest that we should take $h \rightarrow \infty$. Besides the computational infeasibility of this, if the lookahead Q-functions are estimated using Algorithm 1, then taking $h \rightarrow \infty$ should also result in a sample complexity going to $\infty$, it seems odd that this does not appear in the sample complexity. It would be nice to have an explanation of this.
Typos:
- Line 128: you are missing a “the” in front of squared Euclidean distance.
- Line 564-565, in equation (12), $\mathcal{T}^{\pi_{k+1}}$ should be to the power of $n+2$ not squared.
- Line 344: succesfully -> successfully
Technical Quality: 3
Clarity: 3
Questions for Authors: These questions are related to choosing $h$, as discussed in the weaknesses.
- It seems that one step of $h$-PI is equivalent to $h$ steps of value iteration with a policy produced at the end by acting greedily with respect to the last value. Is this the case ? If so, is there a way to choose a good value of $h$ that balances the benefits of VI and PI/PMD ?
- In the simulations, it seems like taking $h$ bigger provides better convergence without the cost of longer runtime. Is this the case for all values of $h$ ? If not, it would be nice to see this reflected in the experiments (i.e. taking much larger values of $h$ which begin to impede on the runtime). If it is true for all values of $h$, is it essentially saying that value iteration is computationally more efficient than PI/PMD in the setting you consider ? And then it would be nice to consider settings where this perhaps is not the case ?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback on our work and for their useful comments. We are glad the reviewer found the paper "quite well written and easy to follow" and the idea of using lookahead in PMD "interesting". We address their concerns in the following.
**It would be good to explicitly state where the proof of Theorem 6.3 can be found in the Appendix.**
Thank you for the suggestion, we will add a pointer to the proof (in appendix C.3 p. 26) in the main part.
**It seems the only disadvantage of $h$-PMD over $1$-PMD is in the computational cost of computing a more complex value function (lines 212-215) but it is unclear exactly what this cost is (other than what is conveyed in the experiments). It would be nice to explicitly quantify the additional computation...**
Thank you for the question which we now answer in detail in different settings:
- Deterministic setting: in this case, the computational overhead of h-PMD over 1-PMD is given by the cost of computing the lookahead action values. These can be computed by performing h steps of VI as the reviewer mentioned. The total cost of this computation can be decomposed into the cost of computing a single value function plus the cost of applying the Bellman optimality operator h times. Overall, the total cost is $O(S^3 + h AS^2)$ which only scales linearly with h compared to h=1 (no lookahead) without any additional dependence on the state nor action space sizes.
- Stochastic setting: in this case, the computational overhead of h-PMD vs PMD is given by the cost of computing the Monte Carlo lookahead action values estimates. Following our procedure in section 5, the induced computational cost for any single state-action pair is that of performing h steps of approximate value iteration which gives a cost of the order of $O(h M S A + H M_0 S)$ where $h$ is the lookahead depth, $M$ is the minibatch size for the sampled transitions for planning, $M_0$ is the number of trajectories for value estimation and $H$ is their horizon length.
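To make the deterministic-setting cost decomposition concrete, here is a hedged NumPy sketch on a hypothetical random tabular MDP (all sizes and the policy are illustrative assumptions): one $O(S^3)$ linear solve for $V^{\pi}$, then $h-1$ applications of the Bellman optimality operator at $O(AS^2)$ each, then one final greedy step:

```python
import numpy as np

# Hypothetical random tabular MDP; the cost decomposition above is:
#   O(S^3) solve for V^pi + (h-1) applications of the Bellman optimality
#   operator T at O(A S^2) each + the final greedy Q computation.
rng = np.random.default_rng(1)
S, A, gamma, h = 10, 4, 0.9, 3
P = rng.random((S, A, S)); P /= P.sum(axis=2, keepdims=True)
r = rng.random((S, A))
pi = rng.integers(0, A, size=S)           # a fixed deterministic policy

# V^pi = (I - gamma * P_pi)^{-1} r_pi : one O(S^3) solve
P_pi = P[np.arange(S), pi]                # S x S transition matrix under pi
r_pi = r[np.arange(S), pi]
V = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)

# (h-1) applications of the Bellman optimality operator, O(h A S^2) in total
for _ in range(h - 1):
    V = np.max(r + gamma * P @ V, axis=1)

Q_h = r + gamma * P @ V                   # lookahead action values
greedy = np.argmax(Q_h, axis=1)           # the h-greedy policy used by h-PMD
```

By monotonicity of the Bellman optimality operator, $\max_a Q_h(s,a) = (\mathcal{T}^h V^{\pi})(s) \ge V^{\pi}(s)$, which the sketch can be checked against.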
Thanks again for the question, we will add this discussion to the paper.
**Note: in lines 194-195, the assumption is not only that the value function $V^{\pi_k}$ can be computed exactly but also the value function with lookahead $V^{\pi_k}_h$ right ?**
In these lines, notice that we suppose access to a “greedy policy with respect to the value function” (and not the lookahead one) because greediness refers here to $h$-greedy as defined in l. 149. Another way to formulate it is to suppose access to the lookahead value function from which the 1-step greedy policy can be computed.
**The rationale behind how to choose $h$ is not clear to me (see also questions below). For example, in Theorem 5.4 (line 270), if the overall sample complexity is $\tilde{O} (\frac{S}{h\varepsilon^2(1-\gamma)^6(1-\gamma^h)^2} + \frac{SA}{\varepsilon^2 (1-\gamma)^7})$, this seems to suggest that we should take $h \rightarrow \infty$. Besides the computational infeasibility of this, if the lookahead Q-functions are estimated using Algorithm 1, then taking $h \rightarrow \infty$ should also result in a sample complexity going to $\infty$, it seems odd that this does not appear in the sample complexity. It would be nice to have an explanation of this.**
The number of samples used for lookahead action value function estimation (at each iteration of h-PMD) is of the order of $O(h M S A + H M_0 S)$ where $h$ is the lookahead depth, $M$ is the minibatch size for the sampled transitions used for planning, $M_0$ is the number of trajectories for value estimation and $H$ is their horizon length. Indeed, this per-iteration sample complexity explodes with a growing lookahead depth $h$. However, our Theorem 5.4 shows that fewer and fewer iterations $K$ are needed as $h$ grows, namely $K > \frac{1}{h(1-\gamma)} \log\left(\frac{4}{\epsilon(1-\gamma)(1-\gamma^h)}\right)$ (see l. 263). Altogether, the total sample complexity is $O(K (h M S A + H M_0 S))$, which implies that the $1/h$ factor in $K$ cancels the multiplicative $h$ in the first term of the overall sample complexity. However, notice that the number of iterations $K$ of h-PMD has to be at least 1 to run our algorithm, and this condition translates into a condition on the lookahead depth, which needs to be no larger than the effective horizon. Increasing $h$ speeds up the convergence rate of the algorithm, which allows us to afford a smaller number of iterations $K$; this number of iterations needs to be lower bounded by 1 though.
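As a purely numerical illustration of this cancellation (all constants below are hypothetical placeholders, not tuned values), one can check that the planning term $K(h)\cdot hMSA$ does not grow linearly in $h$:

```python
import math

# Hypothetical numeric illustration of the cancellation described above:
# K(h) ~ log(4 / (eps*(1-gamma)*(1-gamma**h))) / (h*(1-gamma)) shrinks like
# 1/h, cancelling the factor h in the per-iteration planning cost h*M*S*A.
eps, gamma = 0.01, 0.9
M, S, A = 100, 50, 5                       # illustrative constants

def total_planning_samples(h):
    K = math.log(4 / (eps * (1 - gamma) * (1 - gamma**h))) / (h * (1 - gamma))
    return K * h * M * S * A               # first term of the total complexity

base = total_planning_samples(1)
for h in (2, 5, 10, 20):
    # only the bounded log(1/(1-gamma^h)) factor varies with h, and it shrinks
    assert total_planning_samples(h) < base
```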
**Typos:** Thank you for spotting these, we will make sure to correct them.
**It seems that one step of $h$-PI is equivalent to $h$ steps of value iteration with a policy produced at the end by acting greedily with respect to the last value. Is this the case ? If so, is there a way to choose a good value of $h$ that balances the benefits of VI and PI/PMD?**
Thank you for this interesting comment and question. Indeed, this is correct in the deterministic (exact) setting. It has been shown in Efroni et al. 2018 [9] (see their Theorem 3) that h-PI enjoys a finite iteration policy convergence guarantee which is monotonically improving with $h$ increasing. In the stochastic setting, we are not aware of any policy convergence result in a finite number of iterations for h-PI, let alone h-PMD. We believe it would be interesting to investigate if we can both guarantee value function convergence as well as policy convergence with a suitable lookahead value.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response - these clarify most of the points I had raised. I still wonder about the cost of taking h larger: from the new figure 1 in the pdf attached to the rebuttal, it still seems like using h = 100 leads to convergence in one step without any cost with respect to runtime, which, as I mentioned, suggests that value iteration is computationally more efficient than PI/PMD in this setting. It would be nice to consider an experiment where this is not the case / where we see a trade-off between the improved convergence and the computational cost of a larger h, and in particular where at some value of h the convergence benefit is outweighed by the computational cost.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you very much for your response and for your interesting follow up comment.
This is a good point, we agree. Please see Figure 6 p. 29 in the appendix for a setting where a large lookahead depth becomes slower and does not perform better. Note that in this more challenging practical setting the best performance is not obtained for higher values of h: intermediate values of h perform better, illustrating the tradeoff in choosing the depth h. We will add a discussion in the main part regarding this interesting point.
Strengths: - The proposed combination of PMD and $h$-step greedy updates implies interesting theoretical results, such as reducing sample complexity with increasing the planning horizon in a very explicit way;
- Strong result with linear functional approximation that gives an implementable algorithm with a polynomial (in $h$) running time;
Weaknesses: - The experimental part of the paper might be improved by running continuous control experiments.
Technical Quality: 3
Clarity: 3
Questions for Authors: - I think the paper may benefit from a theoretical example of h-PI with greedy updates not convergent whereas h-PMD converges.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper is mostly of theoretical nature and thus does not imply any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive assessment of our work and for appreciating our theoretical contributions. We reply to their remaining comments in the following.
**The experimental part of the paper might be improved by running continuous control experiments.**
Thank you for the suggestion, we have performed simulations in this setting in the rebuttal phase. Please see Figure 3 in our rebuttal which shows the results of running h-PMD in continuous control settings (CartPole-v1 and Acrobot-v1). We will make sure to include these in our paper.
**I think the paper may benefit from a theoretical example of h-PI with greedy updates not convergent whereas h-PMD converges.**
Thank you for this interesting comment. It has been shown in Efroni et al. 2018 [9] that h-PI converges in the deterministic setting. We have observed that h-PI is unstable in the stochastic setting when the lookahead action values are not very accurately estimated, and that h-PMD fixes this instability. Such an instability has also been observed in prior work for $h=1$. We have performed simulations to illustrate this in an example in the attached pdf using the same lookahead action value estimation precision (see Figure 2 in our rebuttal). It has been shown in Winnicki and Srikant 2023 [48] that PI with lookahead might converge even in the stochastic setting thanks to using a larger lookahead. However, such a result requires a sufficiently large lookahead (see their assumption 1.(b)), whereas our results for $h$-PMD hold for any lookahead depth value $h$. We believe it would be possible to find an instance where stochastic h-PI does not converge (for a fixed, not too large $h$) whereas stochastic h-PMD does: this interesting question remains open.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. I find the explanations satisfactory and will keep my decision as accept.
---
Reply to Comment 1.1.1:
Title: Thank you
Comment: Thank you very much again for your time, for acknowledging our response and for supporting our paper acceptance. | Summary: The author propose a multi-step greedy approach for Policy Mirror Descent. Combing PMD with multiple greedy policy updates results in a faster $\gamma^{h}$ rate improved the previously thought optimal $\gamma$ rate. Additionally, the authors extend their analysis to the stochastic setting and when using function approximation (with linear function approximation).
Strengths: The idea of combining lookahead with PMD is well-motivated and interesting. The motivation and presentation of the results are explained well. All results also do not rely on distribution mismatch coefficients, which is something that appears quite often in prior work.
Weaknesses: It seems that the analysis is done in the functional representation of the policy. It would be good to also discuss how the methods could be extended to specific policy parameterizations such as softmax policies.
Since the resulting bounds do not rely on concentrability, it would be nice to see experiments where the state space is much larger.
Technical Quality: 3
Clarity: 4
Questions for Authors: How well does $h$-PMD perform against PI in the deterministic setting?
Could you provide some intuition on why distribution mismatch coefficients do not appear in the stochastic setting? It's surprising to me since there isn't any explicit exploration.
Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: No concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback and for acknowledging the novelty and well-founded motivations of our paper as well as its presentation. We hope that the following discussion fully answers your questions.
**It seems that the analysis is done in the functional representation of the policy. It would be good to also discuss how the methods could be extended for specific policy parameterization such as softmax policies.**
Thank you for your comment. While softmax tabular policies are covered by our analysis, we acknowledge that it would be interesting to consider more general policy parametrization. When introducing (neural network) policy parametrization and considering policy parameters as main variables, we lose part of the structure of the policy optimization problem and we introduce additional non-convexity (on top of the non-convexity of the value function as a function of the policy) into the problem when the objective is seen as a function of the policy parameters. Therefore, we expect to obtain weaker results using a different analysis based on gradient dominance properties of the policy optimization objective rather than contraction arguments.
**Since the resulting bounds do not rely on concentrability, it would be nice to see experiments where the state space is much larger.**
We have performed additional experiments during the rebuttal phase, please see Figure 3 in our rebuttal for experiments showing the feasibility of h-PMD in even continuous state spaces. In these experiments we test our algorithm in continuous control problems (CartPole-v1 and Acrobot-v1). Please note that our algorithm converges in a comparable number of iterations to the variant without lookahead.
**How well does $h$-PMD perform against PI in the deterministic setting?**
Thank you for this question. In the deterministic setting, h-PMD enjoys a faster $\gamma^h$ convergence rate compared to PI without lookahead in terms of value function gap. h-PMD and h-PI enjoy similar guarantees in terms of value function gap when using our adaptive step sizes for h-PMD. In terms of policy convergence (rather than value function gap convergence), note that PI also enjoys strong convergence guarantees: it is guaranteed to converge to an optimal policy in a finite number of iterations. As noted in (Efroni et al., 2018, Theorem 3), this result generalizes to $h$-PI as well, yielding a finite iteration complexity guarantee that improves monotonically with $h$. It would be interesting to see whether h-PMD can also enjoy such a guarantee in future work, given the relationship between h-PI and h-PMD (see the discussion after Theorem 4.2). That said, note that the true strength of $h$-PMD is in the stochastic setting, where PI is known to be unstable. Please see Figure 2 in our rebuttal, which illustrates this in a practical setting.
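The $\gamma^h$ rate invoked here can be checked numerically. The sketch below is our own toy construction (a small random tabular MDP, not the authors' code): it verifies that applying the Bellman optimality operator $h$ times contracts the sup-norm distance to $V^*$ by at least a factor $\gamma^h$, which is the mechanism behind the faster rate.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 20, 4, 0.9

# Random tabular MDP: P[s, a] is a distribution over next states.
P = rng.dirichlet(np.ones(S), size=(S, A))
R = rng.random((S, A))

def bellman_opt(V):
    # One application of the Bellman optimality operator T.
    return (R + gamma * P @ V).max(axis=1)

# Optimal values via plain value iteration (T is a gamma-contraction).
V_star = np.zeros(S)
for _ in range(2000):
    V_star = bellman_opt(V_star)

# T^h is a gamma^h-contraction: h lookahead steps shrink the distance
# to V* by at least a factor gamma^h.
V0 = rng.random(S)
err0 = np.abs(V0 - V_star).max()
ratios = []
for h in (1, 3, 5):
    Vh = V0.copy()
    for _ in range(h):
        Vh = bellman_opt(Vh)
    ratios.append(np.abs(Vh - V_star).max() / err0)
    print(h, ratios[-1] <= gamma**h + 1e-9)  # True for every h
```

Larger $h$ gives a proportionally faster per-iteration contraction, which is exactly the source of the $\gamma^h$ convergence rate discussed in this exchange.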
**Could you provide some intuition on why distribution mismatch coefficients do not appear in the stochastic setting? It's surprising to me since there isn't any explicit exploration.**
From the technical viewpoint, we remove the dependence on distribution mismatch coefficients by avoiding the use of the performance difference lemma (as previously shown by Johnson et al. 2023) and by conducting an analysis which builds on the strong connection between h-PI and h-PMD.
Notice that the performance error bound in Theorem 5.3 (for inexact h-PMD) features two terms: the first, which stems from the $\gamma^h$-contractiveness of the $h$-step Bellman operator, is the same as in the exact setting (Theorem 4.1) and does not involve any distribution mismatch coefficient, whereas the second is a bias term due to lookahead value function estimation that can be made arbitrarily small by choosing the right minibatch size. In our stochastic setting, we rely on the standard generative model assumption. The (inexact) value function is computed at every state in the tabular setting, precluding the need for exploration. In this setting, our results are consistent with the work of Johnson et al. 2023 [16] focusing on the particular case of $h=1$. We expect that exploration will play an important role in the online setting where the value function is estimated using online trajectories (i.e. generated according to the current policy at each time step).
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for their thorough response. My initial questions and concerns have been addressed. I will keep my decision as accept.
---
Reply to Comment 1.1.1:
Title: Thank you for your response
Comment: Thank you very much for your time, for your response and for supporting our paper acceptance. We are glad that our rebuttal has addressed your questions and concerns. | Summary: The paper introduces h-PMD, an extension of the Policy Mirror Descent (PMD) algorithm, which incorporates multi-step lookahead to improve policy updates in reinforcement learning. PMD is a general framework that includes several policy gradient methods and relates to advanced algorithms like TRPO and PPO. Recognizing that multi-step greedy policies often outperform their single-step counterparts, the authors propose h-PMD, which integrates multi-step lookahead depth into PMD. This new class of algorithms achieves a faster convergence rate for solving discounted infinite horizon MDPs, under both exact and inexact settings. The paper also extends these results to linear function approximation, demonstrating improved sample complexity that depends on the dimension of the feature map space rather than the state space size.
Strengths: 1. The paper introduces a novel extension to the PMD framework by incorporating multi-step lookahead, enhancing the algorithm's performance and convergence rate.
2. The proposed h-PMD algorithm achieves a faster 𝛾^ℎ-linear convergence rate, which is an improvement over standard PMD and Policy Iteration methods.
3. The paper addresses both exact and inexact settings, providing a sample complexity analysis that demonstrates improved performance over previous methods, especially with increasing lookahead depth.
4. By extending h-PMD to linear function approximation, the authors make the algorithm applicable to large state spaces, which is crucial for practical applications in complex environments.
5. The theoretical findings are supported by empirical results from simulations on the DeepSea RL environment, illustrating the benefits of the h-PMD approach.
Weaknesses: 1. The h-PMD algorithm, while improving convergence rates, also introduces higher computational complexity due to the multi-step lookahead, which can be demanding for large-scale problems.
2. The paper might lack detailed guidance on implementing h-PMD in various practical scenarios, making it challenging for practitioners to adopt and utilize the algorithm effectively.
3. The results are contingent on certain assumptions, such as the availability of multi-step greedy policies and the use of a generative model. These assumptions might limit the generalizability of the findings to all RL problems.
4. The benefits of the h-PMD algorithm are closely tied to the lookahead depth ℎ. Determining the optimal ℎ in practice could be non-trivial and might require extensive experimentation.
5. While the paper demonstrates the theoretical and empirical advantages of h-PMD, it may not provide a comprehensive comparative analysis against a wide range of existing RL algorithms, which would strengthen the case for its superiority.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Can you provide more detailed guidelines on how to implement the h-PMD algorithm in various practical settings, including both exact and inexact scenarios?
2. How does the computational complexity of h-PMD compare to standard PMD and other state-of-the-art RL algorithms like TRPO and PPO in practice? Are there strategies to mitigate the increased computational burden?
3. What methods or heuristics do you recommend for determining the optimal lookahead depth in different environments? How sensitive is the performance of h-PMD to the choice of ℎ?
4. How critical is the assumption of a generative model for the theoretical guarantees provided in the paper? Can h-PMD be effectively applied in settings where a generative model is not available?
5. How well does the h-PMD algorithm with linear function approximation perform in very high-dimensional or continuous state spaces? Have you explored other types of function approximation beyond linear?
6. Can you provide more extensive empirical results across a variety of RL environments to demonstrate the robustness and general applicability of h-PMD?
7. How does h-PMD compare to other advanced RL algorithms like AlphaZero, in terms of performance and computational efficiency? Have you conducted any comparative studies?
8. What are the scalability limits of h-PMD in terms of state and action space sizes? Are there practical cases where h-PMD might not be feasible due to its complexity?
9. Can you elaborate on the trade-offs involved in the inexact setting of h-PMD? How does the estimation of lookahead action values impact the overall performance and convergence rate?
10. Have you applied h-PMD to any real-world RL problems or industrial applications? If so, what were the outcomes and challenges faced?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. The multi-step lookahead in h-PMD increases the computational burden significantly, potentially making it impractical for real-time or resource-constrained applications.
2. The performance of h-PMD is dependent on the chosen lookahead depth ℎ. Finding the optimal depth can be challenging and may require considerable computational resources for tuning.
3. The sample complexity improvements are based on the assumption of a generative model, which may not always be available or practical in many real-world scenarios.
4. While the extension to linear function approximation addresses scalability to large state spaces, the practical implementation and effectiveness of this approach in very high-dimensional or continuous spaces remain uncertain.
5. The empirical validation is limited to the DeepSea RL environment. Broader validation across diverse and more complex environments would be necessary to confirm the general applicability and robustness of the proposed h-PMD algorithm.
6. The paper might not provide sufficient practical implementation details, making it difficult for practitioners to apply the h-PMD algorithm to their specific problems without further guidance and experimentation.
7. Some theoretical results assume exact policy evaluation, which may not be feasible in many practical settings where only approximate evaluations are possible.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time assessing our work, and for their feedback and questions. We reply to each one of their questions (also covering their weaknesses and limitations comments) in what follows. We will be happy to address any further concerns.
**1. Can you provide more detailed guidelines on how to implement the h-PMD algorithm in various practical settings, including both exact and inexact scenarios?**
To support the description of our simulations in section 7, please note that the code for a full implementation of our algorithm is available in the anonymous code repository provided in l. 915-916 (link in section D of the appendix). We have also implemented a version of our $h$-PMD algorithm with a Monte Carlo Tree Search (MCTS) algorithm to compute the lookahead Q-function using DeepMind’s MCTS implementation (MCTX) (see section D.5 l. 945-950 in the appendix and the code provided for additional details). All our experiments can be replicated using this code. We will be glad to provide any further details that the reviewer might want to know regarding the implementation.
**2. How does the computational complexity of h-PMD compare to standard PMD and other state-of-the-art RL algorithms like TRPO and PPO in practice? Are there strategies to mitigate the increased computational burden?**
We have compared the performance of our h-PMD algorithm to the standard PMD algorithm corresponding to $h=1$ (for fair comparison) in terms of running time in our simulations (see e.g. Figure 1 right and additional experiments in appendix D.3 Figure 4). We observe that the algorithms with higher values of $h$ still converge faster in terms of runtime; recall that they require fewer iterations.
Beyond our simulations, exploiting parallel computing is an interesting strategy to further speed up the computation. For instance, this has been used in e.g. [8,15] (to name a few) for implementing tree search methods more efficiently. A version of our algorithm based on DeepMind's implementation of MCTS supports such parallelization, which can be useful in very large scale settings.
In principle, lookahead value functions can also be used in combination with TRPO and PPO. Our focus in this work is on showing the potential of lookahead combined with PMD methods (as a general framework encompassing several PG methods as particular cases) and providing theoretical guarantees supporting our algorithm design. We have nevertheless performed additional experiments; please see Figure 3 in our rebuttal, which illustrates the effect of modifying the PPO algorithm to use lookahead value functions.
**3. What methods or heuristics do you recommend for determining the optimal lookahead depth in different environments?...**
The lookahead depth is a hyperparameter of the algorithm and can be tuned similarly to other hyperparameters such as the step size of the algorithm. Of course, the value would depend on the environment and the structure of the reward at hand. Sparse and delayed reward settings will likely benefit from lookahead with larger depth values. We have performed several simulations with different values of $h$ for each environment setting and the performance can potentially improve drastically with a better lookahead depth value (see section 7 and appendix D for further simulations).
**4. How critical is the assumption of a generative model for the theoretical guarantees provided in the paper? Can h-PMD be effectively applied in settings where a generative model is not available?**
The generative model assumption is important for our theoretical analysis. Relaxing this assumption is an interesting direction for future work. We would like to highlight that (a) this assumption is standard in RL theory, and for analyzing PMD ($h=1$) in particular, see e.g. Xiao 2022 section 5.1, Lan 2023 section 5.1, Johnson et al. 2023, Yuan et al. 2023, Alfano et al. 2023, Li et al. 2023, Zhan et al. 2023; and (b) even under this standard assumption, the result requires a careful and involved analysis (see 5.2 and appendix B). Prior work considering lookahead policy iteration algorithms has even focused on the far less challenging deterministic setting, assuming that lookahead value functions are accessible (see e.g. [9]).
In practice, we refer the reader to section D.5, l. 950-956 for a discussion about how to relax it: we only evaluate the value function (and so only update the policy) in states that we encounter using the current policy. We have also implemented our PMD algorithm with a Monte Carlo Tree Search (MCTS) algorithm to compute the lookahead Q-function using DeepMind's MCTS implementation (MCTX) (see section D.5 l. 945-950 in the appendix and the code provided for additional details). The design and analysis of a fully online algorithm is an interesting research question that requires further investigation and that we leave for future work.
**5. How well does the h-PMD algorithm with linear function approximation perform in very high-dimensional or continuous state spaces? Have you explored other types of function approximation beyond linear?**
Thank you for this question. We believe this is an interesting question to address in the future to cover even larger settings, as we briefly mention in the conclusion. Using linear function approximation requires designing state-action features, which is a notoriously delicate task in general, especially in high-dimensional spaces. That being said, we have provided a theoretical performance bound for our algorithm in that setting. We are working on using neural networks to approximate the lookahead values in order to implement an actor-critic style algorithm with lookahead. Please see our experiments on continuous control tasks in Figure 3 of our rebuttal, which illustrate the use of neural net function approximation with h-PMD.
**Please see response to remaining questions (6-10) in the global rebuttal.**
---
Rebuttal Comment 1.1:
Title: Reminder to address the authors' rebuttal
Comment: Dear t6qC, please try to respond to the authors' feedback today and elaborate if your opinion changed in light of it. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable time and feedback. We address the remaining concerns of the reviewers in our individual responses below. We have also performed additional experiments to support our responses; please find figures relating to these experiments attached. These simulations include: a) testing the effect of a much larger lookahead depth (Reviewer Pv8t), b) comparing h-PI to h-PMD (Reviewers vPQx and SGix), c) testing h-PMD on two additional continuous control environments. Due to space constraints in our individual response to reviewer t6qC, we include the rest of our response below in this general rebuttal. We hope that our responses address all of the reviewers' questions; we also welcome any additional questions in the discussion period.
--------
**End of response to reviewer t6qC**
**6. Can you provide more extensive empirical results across a variety of RL environments…?**
We have provided additional simulations in appendix D, p. 27-31, including tests in the 'Catch' environment from DeepMind's bsuite and a grid world with variable size. As for the robustness of h-PMD, we have investigated (in the appendix) the use of a different PMD regularizer (Figure 2) as well as varying lookahead depths, chain lengths, grid world sizes, and discount factors (p. 30-31). That being said, please note that our main contributions are theoretical and our simulations primarily serve an illustrative purpose.
During the rebuttal phase we performed additional experiments in continuous control tasks to illustrate the general applicability of our algorithm.
**7. How does h-PMD compare to other advanced RL algorithms like AlphaZero, in terms of performance and computational efficiency?..**
Our h-PMD algorithm can be linked to the class of AlphaZero algorithms (see l. 180-187). It has been shown in [13] that AlphaZero can be seen as a regularized policy optimization algorithm, drawing a connection to the standard PMD algorithm (with $h = 1$). We argue that AlphaZero is even more naturally connected to our h-PMD algorithm with lookahead values. We have also implemented a version of our h-PMD algorithm with DeepMind's MCTS implementation (see section D.5 for details). Conducting further experiments to compare our algorithm to AlphaZero in similarly large scale settings would be interesting and would deserve its own separate study. Note that we are not aware of any theoretical convergence guarantee for AlphaZero, which relies on many heuristics. In contrast, our $h$-PMD algorithm enjoys strong theoretical guarantees ranging from the exact to the inexact, stochastic and function approximation settings.
**8. What are the scalability limits of h-PMD in terms of state and action space sizes?...**
We prove in theorem 5.4 p. 7 that the sample complexity of h-PMD scales with the product of the state and action space sizes in the tabular setting. Then, we employ linear function approximation to show a value function gap performance bound that only scales with the dimension of the state action feature map without any dependence on the state action space size. Please see the response to reviewer QSY1 for a more detailed discussion.
Tree search methods (used as a subroutine in h-PMD for lookahead value estimation) have been successfully used in practice in a number of works, including the most successful applications of RL such as AlphaGo, AlphaZero and MuZero, to name a few. Notice also that we rely on DeepMind's efficient MCTS implementation in our experiments in appendix D.5. Hence, we do not foresee any feasibility issue for h-PMD given all the existing efficient tree search implementations that can be readily used as subroutines. As PG methods are also notoriously suitable for high-dimensional settings, our h-PMD inherits this potential. Our additional experiments show that our algorithm can be modified to work smoothly in large or even continuous state spaces (see Figure 3 in the rebuttal pdf), and remains reasonable when the lookahead depth is scaled up dramatically (see Figure 1). We will add a more detailed discussion along these lines to our paper.
**9. Can you elaborate on the trade-offs involved in the inexact setting of h-PMD? …**
Comparing Theorem 4.1 (exact setting) and Theorem 5.3 (inexact setting), there is an additive bias term in the value function bound due to inexact approximation of the lookahead value functions. In Theorem 5.4, we control this bias and make it arbitrarily small by choosing an adequate mini-batch size (of the order of $1/\epsilon^2$ where $\epsilon$ is the desired accuracy) for our Monte-Carlo lookahead estimator. We then derive the overall improved sample complexity of Theorem 5.4.
Hence the convergence rate (in terms of the number of iterations) is still geometric, up to the aforementioned bias, and the overall sample complexity to reach an approximately optimal value function is improved. This comes at the cost of computing approximate lookahead action values, as discussed.
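The minibatch scaling discussed in this answer can be illustrated with a simple stand-in for the Monte-Carlo lookahead estimator (the Gaussian return model and the specific constants below are our assumptions for illustration, not the paper's estimator): averaging $m \approx 1/\epsilon^2$ i.i.d. rollout returns brings the root-mean-square estimation error down to roughly $\epsilon$.

```python
import numpy as np

rng = np.random.default_rng(1)
true_q = 0.5        # hypothetical lookahead action value (assumed)
sigma = 1.0         # spread of individual rollout returns (assumed)

def mc_estimate(m):
    # Average of m independent noisy rollout returns.
    return (true_q + sigma * rng.standard_normal(m)).mean()

def rmse(m, trials=300):
    # Empirical root-mean-square error of the minibatch estimator.
    errs = np.array([mc_estimate(m) - true_q for _ in range(trials)])
    return float(np.sqrt((errs ** 2).mean()))

for eps in (0.2, 0.1, 0.05):
    m = int(1 / eps ** 2)      # minibatch size of order 1/eps^2
    print(f"eps={eps}  m={m}  rmse~{rmse(m):.3f}")
```

The printed RMSE tracks the target accuracy $\epsilon$, matching the $1/\epsilon^2$ minibatch size used to control the bias term in the bound.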
**10. Have you applied h-PMD to any real-world RL problems or industrial applications? …**
Thank you for this question which is definitely important. To support our contributions which are mainly theoretical, we have performed simulations in standard widely used RL benchmarks in the literature beyond existing simulations for h-PI which were restricted to the simple grid world environment (see e.g. [9]). These initial experiments are promising and we believe it will be interesting to test our algorithm on real-world RL problems.
**Some theoretical results assume exact policy evaluation, which may not be feasible …**
We completely relax exact policy evaluation in section 5. We do not assume access to multi-step greedy policies; rather, we provide a procedure to compute them approximately: see section 5.1 for lookahead Q-function estimation and its use in $h$-PMD in Eq. (5), as well as its extension to the function approximation setting (see Eq. (7) and section C).
Pdf: /pdf/09861d8b6e3fd8390f3cb081b95818bbdb6cfb4c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper applies the idea of lookahead in policy improvement procedures under the PMD setting. They prove the $\gamma^h$-linear convergence rate. They also propose the inexact version of h-PMD and extend to the function approximation case.
Strengths: The writing of the paper is pretty good. The main idea and the results of the paper are easy to follow.
The paper is also complete: theoretical analysis is provided in both the tabular and function approximation settings.
Weaknesses: Although the paper proves the $\gamma^h$-contraction property, this result is not unexpected.
I'm still not completely convinced that $h$-horizon lookahead policy improvement could bring additional benefit in the function approximation case. Even in tabular cases, additional computational effort is required.
Minor problems:
Line 70: bsuite.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It's interesting to see that larger lookahead leads to faster training speed based on Figure 1. How about the sample complexities? Could you provide the number of samples used by each method?
- The meaning of the dotted lines should be clarified in Figure 1 (Left).
- Based on the theoretical result of the paper, does that mean larger $h$ leads to better performance? For Figure 1, what if we have a larger $h$?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper discusses some potential improvements in future work on the practical algorithm. However, they didn't discuss the limitations of the existing theoretical/practical results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments. We are glad that the reviewer finds the writing "pretty good" and the "main idea and the results of the paper easy to follow". We address their remaining concerns in the following. Please let us know if you have any further questions, we will be happy to answer them.
**I'm still not completely convinced that $h$-horizon lookahead policy improvement could bring additional benefit in the function approximation case. Even in tabular cases, additional computational effort are required.**
We would like to add that multi-step greedy policy improvement contributed to the empirical success of some of the most impressive RL applications, including AlphaGo, AlphaZero and MuZero. These practical algorithms have successfully used lookahead (via MCTS) and function approximation jointly in very large state-action space settings, although they do not enjoy theoretical value function performance bound guarantees as our work does. The benefit of using lookahead policy improvement has also been reported in Winnicki et al. 2021, 2023 [47, 48] for approximate policy iteration with linear function approximation. Using lookahead requires some additional computational effort, and we show that it leads to a provably better sample complexity. We illustrate in our simulations how the benefit of using lookahead can greatly outweigh this overhead. Please see also e.g. Figures 6 and 7 in the appendix, in which it is very clear that $h = 1$ (no lookahead) is extremely slow to converge, whereas all versions with lookahead converge much faster.
**Minor problems: Line 70: bsuite.**
'DeepMind's bsuite' (as written in l. 70) is the RL benchmark's name as introduced in the reference [30]. We will set it in italics to avoid any confusion.
**It's interesting to see that larger lookahead leads to faster training speed based on Figure 1. How about the sample complexities? Could you provide the used samples for each method?**
As we mentioned in l. 325-327 p. 8 in section 7, 'we also observed in our simulations that h-PMD uses less samples overall (see Appendix D for the total number of samples used at each iteration)'. The number of samples used at each iteration increases, but the algorithm requires far fewer iterations for higher values of the depth $h$. This results in a more sample-efficient algorithm overall. Please see Fig. 3 (p. 28) and appendix D.2 (p. 27, l. 926-932) for a discussion along these lines.
**The meaning of the dotted lines should be clarified in Figure 1 (Left).**
The meaning of the dotted lines is given in the text in l. 317-318 in section 7: '(a) in dotted lines in Fig. 1 (left), $\eta_k$ equal to its lower bound in sec. 4, with the choice $c_k := \gamma^{2h(k+1)}$ (note the dependence on $h$); and (b) in solid lines, an identical stepsize schedule across all values of $h$ with $c_k := \gamma^{2(k+1)}$ to isolate the effect of the lookahead.' We will add this to the caption of Figure 1 for clarity.
**Based on the theoretical result of the paper, does that mean larger $h$ leads to better performance? For Figure 1, what if we have a larger $h$?**
We prove that a larger lookahead depth results in a better sample complexity and a faster suboptimality gap convergence rate. This improvement comes at the cost of an additional computational effort (compared to $h=1$ for 1-step policy improvement) to compute the lookahead value function at each iteration. However, our experiments suggest that the benefits of the faster convergence rate greatly outweigh the extra cost of computing the lookahead, in terms of both overall running time until convergence and sample complexity (in the inexact case). See section 7 and Appendix D for evidence and additional experiments.
We have performed additional experiments with larger $h$ ($h=100$); see Figure 1 in the attached pdf. In this case, the algorithm converges in a single iteration. This is theoretically expected, as computing the lookahead values with a very large $h$ boils down to computing the optimal values, as in value iteration with a large number of iterations.
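This single-iteration behavior for very large $h$ can be reproduced on a toy tabular MDP (our own sketch, not the authors' experiment): one greedy update with respect to a depth-100 lookahead value is already near-optimal, because $T^{h-1}(0)$ with $h=100$ essentially coincides with the output of value iteration.

```python
import numpy as np

rng = np.random.default_rng(0)
S, A, gamma = 15, 3, 0.9
P = rng.dirichlet(np.ones(S), size=(S, A))  # transition kernel
R = rng.random((S, A))                      # rewards

def T(V):
    # Bellman optimality operator.
    return (R + gamma * P @ V).max(axis=1)

# Reference optimal values via long value iteration.
V_star = np.zeros(S)
for _ in range(3000):
    V_star = T(V_star)

# A single greedy update with deep lookahead (h = 100), starting from V = 0.
h, V = 100, np.zeros(S)
for _ in range(h - 1):
    V = T(V)
Q = R + gamma * P @ V          # lookahead Q-values
pi = Q.argmax(axis=1)          # greedy policy after one update
P_pi = P[np.arange(S), pi]
R_pi = R[np.arange(S), pi]
V_pi = np.linalg.solve(np.eye(S) - gamma * P_pi, R_pi)
print(np.abs(V_star - V_pi).max() < 1e-2)  # near-optimal after one update
```

The suboptimality of the greedy policy is bounded by $2\gamma\epsilon/(1-\gamma)$ when the lookahead value is $\epsilon$-accurate, and here $\epsilon$ is of order $\gamma^{99}$, hence the one-step convergence.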
**The paper discusses some potential improvements in future work on the practical algorithm. However, they didn't discuss the limitations of the existing theoretical/practical results.**
We thank the reviewer for the comment. We will further improve our discussion of the limitations of our work along the lines of investigating more general function approximation in even larger scale environments, adaptive lookahead depth selection as well as fully online estimators for the lookahead action values. We will also add further details regarding the computational tradeoff we mentioned for our lookahead algorithm.
---
Rebuttal Comment 1.1:
Title: Reminder to address the authors' rebuttal
Comment: Dear Pv8t, please try to respond to the authors' feedback today and elaborate if your opinion changed in light of it. | null | null | null | null | null | null
Shaping the distribution of neural responses with interneurons in a recurrent circuit model | Accept (poster) | Summary: This paper proposes a normative recurrent circuit to solve an optimal transport problem, focusing on the problem of Gaussianization of natural image statistics.
Strengths: This paper was a pleasure to read. The writing is clear, the formulation of the problem elegant, and the results represent a clear advance relative to past works on whitening circuits.
Weaknesses: I have only a few small concerns regarding the biological realism of the model, but these are mostly already acknowledged by the authors. One point that I believe warrants further discussion is the plausibility of the activation functions (6). These are somewhat reminiscent of two-sided versions of the rectified-power law activations used in cortical models, no? I also have a handful of miscellaneous questions (see below), but on the whole these do not dampen my enthusiasm.
Technical Quality: 3
Clarity: 4
Questions for Authors: - Along with the work of Van Hateren, the authors might consider citing Laughlin, "A simple coding procedure enhances a neuron's information capacity," and Juusola and Hardie, "Light Adaptation in Drosophila Photoreceptors." I leave it to their discretion whether or not to do so.
- In that vein, it might be nice to provide further evidence for adaptation of activation functions in biology. If additional references occur to me, I will update my review.
- In Line 271, should "activations" be plural? I think this is a typo.
- The legibility of Figure 3A could be improved; as it stands the text is a bit too small and the lines a bit too faint.
- It could be nice to provide further support for the approximate form of the activation functions. Can you numerically evaluate the expression below Line 630?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: As noted above, the authors clearly discuss the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review, we are pleased that you enjoyed reading our work!
### Weaknesses
We agree the plausibility of the activation functions warrants further discussion. The interneuron activation functions indeed resemble two-sided rectified-power law activations. A potentially more plausible variant would have interneurons responding with a *rectified* power law, although implementing the same computation would then require twice as many interneurons.
Alternatively, our model provides a precise relation between the input distribution, target distribution and interneuron activation function. Therefore, fixing two of these determines the third. We fixed the input distribution and target distribution and derived the optimal interneuron activation function. One could instead fix the input distribution and interneuron activation function (e.g., based on experimental measurements) and estimate the target distribution.
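The input/target/activation relation described here can be made concrete with a simplified 1-D sketch (our illustration, not the paper's recurrent circuit): mapping a heavy-tailed marginal through its empirical CDF composed with the inverse Gaussian CDF is exactly the 1-D optimal transport map to the Gaussian target. The Laplace input and explicit CDF matching are our assumptions for illustration.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
x = rng.laplace(size=5000)   # heavy-tailed stand-in for filter responses

# Rank-based empirical CDF pushed through the inverse Gaussian CDF;
# in 1-D this composition is exactly the optimal transport map.
ranks = (np.argsort(np.argsort(x)) + 0.5) / len(x)
inv_cdf = NormalDist().inv_cdf
z = np.array([inv_cdf(r) for r in ranks])

def excess_kurtosis(v):
    u = (v - v.mean()) / v.std()
    return float((u**4).mean() - 3.0)

print(excess_kurtosis(x) > 1.0)        # heavy-tailed input: True
print(abs(excess_kurtosis(z)) < 0.2)   # mapped output is near-Gaussian: True
```

Fixing any two of input distribution, target distribution, and transformation determines the third, which is the degree of freedom the authors propose exploiting to infer the target distribution from measured activation functions.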
### Questions
- Thank you for the suggestion. We will include references to the works by Laughlin and Juusola & Hardie in our revision.
- We will also add a reference to Sharpee et al. "Adaptive filtering enhances information transmission in visual cortex", which showed that neurons in cat primary visual cortex adapt to higher-order statistics beyond mean and covariance.
- Thanks for catching this typo.
- We will improve the legibility of Figure 3A.
- Yes, numerical evaluations of the expression below Line 630 are shown as thick gray curves in Figure 2C. To emphasize this, we will add text below Line 630 pointing the reader to the figure.
---
Rebuttal Comment 1.1:
Comment: Thanks for your thoughtful response to my comments and those of the other reviewers. I maintain my positive assessment.
A comment: I agree with Reviewer Voaj that it is important to acknowledge past work on normalization and predictive coding with spiking networks. However, I am familiar with these works, and in my opinion the mathematical framing of the present manuscript is certainly sufficiently novel and interesting to warrant publication. I'm not too concerned with violations of Dale's law; this is already an interesting first step. | Summary: This paper investigates a crucial question in neuroscience: how do neural circuits convert inputs into a target distribution, specifically focusing on how local interneurons transform natural image statistics into a spherical Gaussian distribution. The authors approach this problem through the lens of optimal transport. By using an integral probability metric, the task of transforming the input distribution into a spherical Gaussian distribution is framed as a minimax optimization problem. This constrained minimax optimization translates to recurrent neural dynamics (representing the minimization part) combined with gain modulation, activation function adaptation, and plasticity (representing the maximization part), with plasticity being local and Hebbian. Notably, the study's principled adjustment of activation functions is a unique contribution. The authors present experimental results on natural images using wavelet transformations.
Strengths: Note: As I am not well-versed in wavelet transformations, which are extensively utilized in the experimental section, my review focuses primarily on the theoretical aspects prior to the experiments (Section 4).
**Originality**: This paper tackles a significant question in neuroscience: how neural circuits transform input signals into a target distribution. The authors frame this problem as an optimal transport problem. Utilizing the integral probability metric, they demonstrate that this problem is equivalent to a constrained minimax optimization, which can be interpreted through neural circuit dynamics. The paper's theoretical contributions are both solid and original.
**Quality**: The work is theoretically robust, with well-justified methods and experiments that effectively validate the theoretical model. The use of spherical Gaussian distributions, which might initially appear to be a limitation, is convincingly justified in Section 4.
**Clarity**: The paper is clearly written overall.
**Significance**: This paper introduces a normative model for normalizing inputs, offering potentially valuable insights for the theoretical neuroscience community.
Weaknesses: - The authors should define $p_{target}$, $p_{marginal}$, and $p_r$ explicitly instead of using descriptive terms for better clarity.
- The proposed model necessitates different $g$, $\theta$, and $w$ for different stimuli. While neurons can potentially adjust their gain and activation function (i.e., $g$, $\theta$) relatively quickly, the need for weight updates to handle varying inputs significantly limits the model's applicability and biological plausibility.
- As noted in the limitations section, the weights are not sign-constrained. Distinguishing between excitatory and inhibitory neurons in this model is therefore not meaningful.
Technical Quality: 3
Clarity: 2
Questions for Authors: The constraint $g_i > 0$ is not included in Algorithm 1. Would imposing these constraints ($w > 0$, $g > 0$) negatively impact the experimental results?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have adequately addressed the limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments. We will revise our paper in accordance with your suggestions.
### Weaknesses
- We will explicitly define these distributions in our revision.
- This is a great point. Interestingly, the learned weights $W$ are approximately shared across images, whereas the optimal $g$ and $\theta$ vary between images, suggesting that there are structural properties shared across images. This is consistent with rapid adjustments of $g$ and $\theta$ and slow adjustments of $W$. We did not focus on this point because Duong et al. [NeurIPS 2023] have shown similar results: specifically, they showed how a multi-timescale (linear) circuit with fast gain adjustments and slow synaptic updates can effectively whiten circuit responses to changing contexts. Here, our focus is on the nonlinear aspects of the circuit, so we did not emphasize this point, but we will note it in our revision.
- We can enforce $W>0$ at the cost of a slight degradation in the performance of the algorithm; see our general author response above and the attached PDF.
### Questions
Indeed, we did not include the constraint $g_i>0$ in Algorithm 1. However, the optimal solutions all have $g_i>0$ and including the constraint would not have changed the experimental results. As mentioned above, enforcing $W>0$ results in a slight degradation in performance.
---
Rebuttal Comment 1.1:
Comment: I've read the other reviewers' comments and appreciate the authors' thoughtful responses. I'll be keeping my score unchanged. | Summary: Authors propose an online algorithm that solves the problem of optimal transport. In particular, assuming a spherical distribution of stimuli, the goal of the algorithm is to generate neural responses such that their distribution best approximates the distribution of stimuli. Authors find that a neural network with excitatory and inhibitory neurons can solve such optimization problem, by learning a non-linear transformation that reduces dependency between neural responses. This is done through Hebbian learning on synapses and by adjusting activation functions of single neurons.
Strengths: The paper presents a novel combination of well-known techniques and provides an alternative to existing approaches. The work seems technically sound, and the authors describe some interesting relations to similar approaches. The proposed algorithm is a nonlinear extension of existing algorithms for data whitening using a neural network.
Weaknesses: The paper has the ambition of finding a neural implementation of generalized whitening such that it could be also implemented by biological neural circuits. However, a major concern is that inhibitory neurons do not obey Dale's law, and the model is therefore not biologically plausible. In general, it is not clear what the paper brings to understanding of biological or artificial neural networks.
Since the model violates Dale's law, the discussion about the function of interneurons seems out of scope.
Moreover, significant parts of the paper are unclear. A number of details about the methods are given, but the main mechanism that supports learning remains unclear. The authors mention that the circuit "learns directions that are least matched to the target distribution", but it remained unclear to me how this is achieved and why it helps to Gaussianize the distribution of output firing rates of principal neurons.
The authors motivate their model by claiming that divisive normalization has not yet been solved with a neural network. This seems false (see, for example, Chalk, Masset, Deneve & Gutkin, PLOS CB 2017). Gain control is also captured by available models, such as efficient spiking models and normative models in which changes in gain control are captured by modulating the metabolic cost on spiking in the model's objective (see Gutierrez and Deneve, eLife 2019; Koren and Deneve, PLOS CB 2017). In general, the paper does not sufficiently take into account closely related previous work on efficient coding with spikes.
There are typos in several places, for example: lines 75, 77, 100, 234.
Technical Quality: 3
Clarity: 2
Questions for Authors: Authors motivate their modelling by claiming that it is currently unclear how neural networks could implement nonlinear transformations. Which non-linear transformations are not captured by the listed models?
The paper by Alemi et al. AAAI 2018, proposes efficient spiking models that implement non-linear transformation of stimuli at the population level. In Koren & Panzeri, NeurIPS 2022, neural populations perform linear transformation of stimuli on the level of neural populations, but on the level of single neurons these transformations are strongly non-linear. What do authors mean by non-linear transformations? Non-linear on what level? This should be clarified.
Authors say that approximating the target distribution is useful because it facilitates efficient transmission of signals. It is, however, not entirely clear why making the distribution of responses Gaussian-like facilitates efficient transmission. Is this improvement in efficient transmission attributed to redundancy reduction?
Is inhibition required to solve this optimization problem, e.g., could the same operation be achieved through plasticity of E neurons?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Authors addressed the limitations adequately.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading of our paper. We take your concerns quite seriously and have revised our paper accordingly. Please find our responses to your listed weakness and your questions below.
### Weaknesses
- We tested a modified version of our algorithm in which we enforce Dale's law. The modified version performs well, although there is a noticeable degradation in performance when compared with the original algorithm. For more details, please see our general author response and the attached PDF.
- We appreciate your feedback that parts of the paper are unclear. We'll do our best to improve the revised description.
- Thank you for pointing out these omissions. We take this quite seriously and plan to revise our paper to make it clear that there are a number of existing nonlinear neural circuits for efficiently encoding their inputs (see our general author response).
- Thank you for pointing out these typos.
### Questions
- To be precise, we mean that the circuit transform is nonlinear. Specifically, the function $T:{\bf s}\mapsto{\bf r}$, which maps the vector of circuit inputs to the vector of circuit responses, is nonlinear. We will clarify this in our revision.
- Yes, the primary reason is attributed to redundancy reduction. There are other factors that make Gaussian distributions appealing, but this is the main reason.
- It is likely that plasticity of excitatory neurons can contribute to the reshaping of neural responses; however, redundancy reduction likely requires local inhibitory interneurons. For example, we are unaware of how plasticity of EE connections could reduce correlations between the responses of two primary neurons that receive highly correlated inputs.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply to my questions. In light of improvements made in the rebuttal I am increasing my score to 5.
I appreciate that you implemented and tested the model that obeys Dale's law - this is an important proof of concept in spite of reduced performance of the Dalian network. Also, I find that your work can be better appreciated in light of clarifications and improvements described in the general rebuttal. I suggest that these improvements and clarifications are carefully incorporated into the revised paper, in particular, the points about the batch training and symmetry of synaptic weights seem important.
I have two more questions.
1) The optimisation problem is formulated with a quadratic regulariser. Is it instead possible to use a linear regulariser? If so, do solutions obtained with a linear regulariser differ from those obtained with a quadratic regulariser?
2) The model uses normalisation of weights after each update. I suppose this is necessary for convergence and I do not have a problem with it. Nevertheless, my question is what are the consequences of not normalising the weights? Also, can good performance be achieved if weights are normalised only once every n>1 updates instead of every update?
---
Reply to Comment 1.1.1:
Comment: Thank you for the score bump. We're glad that our proposed revisions have improved the clarity of our work and we will incorporate them in our revised paper.
In response to your questions:
1. This would not change the learned response distribution, as this is set by $p_\text{target}$; however, it would change the learning algorithm. Specifically, replacing the $L^2$ (quadratic) regularizer $\lambda||T({\bf s})||^2$ in equation (1) with an $L^1$ (linear) regularizer of the form $\lambda|T({\bf s})|$ would encourage sparser neural responses. However, to compensate, the interneurons would adapt to offset the $L^1$ regularizer. If the goal is to encourage sparsity, this could instead be built into $p_\text{target}$: for example, rather than choosing $p_\text{target}$ to be Gaussian, it could be chosen to have heavier tails. In this case, adding an $L^1$ regularizer to the objective would help the circuit achieve this goal.
2. The main reason for normalization is to prevent the weights from diverging (or collapsing), which is a notorious problem when the synapses update according to a Hebbian update without a homeostatic compensation mechanism; see, e.g., [Abbott & Nelson "Synaptic plasticity: Taming the beast", Nature Neuro. 2000]. As you suggest, this can be achieved by normalizing every $N>1$ steps. Alternatively, this step can be replaced by a dynamic normalization process similar to the one shown in appendix E.1 of [Duong et al. NeurIPS 2023].
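The dynamic just described — an unchecked Hebbian update whose weight norms would otherwise grow, kept in check by normalizing only once every $N$ steps — can be illustrated with a toy sketch. This is not the paper's Algorithm 1; the linear interneuron drive and all sizes below are illustrative assumptions.

```python
import numpy as np

# Toy illustration: a bare Hebbian update lets weight norms grow without bound;
# renormalizing the rows once every N steps (rather than every step) keeps them
# stable, as in the homeostatic-compensation discussion above.
rng = np.random.default_rng(0)
n_in, n_int = 2, 3                   # input dimension, number of interneurons (toy sizes)
W = rng.normal(size=(n_int, n_in))
eta, N = 0.05, 10                    # learning rate; normalize once every N updates

for t in range(200):
    s = rng.normal(size=n_in)        # one input sample (fully online, batch size 1)
    n = W @ s                        # interneuron drive (linear stand-in for the dynamics)
    W += eta * np.outer(n, s)        # Hebbian update: outer product of post- and pre-activity
    if (t + 1) % N == 0:
        W /= np.linalg.norm(W, axis=1, keepdims=True)  # homeostatic renormalization

print(np.linalg.norm(W, axis=1))     # rows end at unit norm
```

Between normalizations the row norms drift upward, which is exactly the divergence the homeostatic step prevents.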
---
Rebuttal 2:
Comment: Likewise, we've appreciated your questions and engagement during the discussion period.
Request: can you please edit your official score so that your score increase is reflected in the reported average? Thank you!
---
Rebuttal Comment 2.1:
Comment: I updated the official score.
---
Reply to Comment 2.1.1:
Comment: Thank you! | Summary: This paper introduces a method that utilizes multiple inhibitory neurons and the Hebbian learning rule to transform a signal into a representation that conforms to a specified target distribution. Specifically, the model incorporates Hebbian synaptic plasticity to establish connections that optimally match this target distribution. Interneurons within the model are adapted in terms of their gain and activation directions to respond most effectively to the input signals, mirroring the potential optimization mechanisms found in biological neural processing.
Utilizing this approach, the study processes natural images from the Kodak dataset through a wavelet transform. This extracts 2-dimensional signals, specifically pairs of wavelet coefficients for images at fixed horizontal spatial offsets ranging from d = 2 to d = 64. The model then transforms the distribution of this two-dimensional information into a Gaussian distribution. This transformation demonstrates the model's ability to handle complex data structures and align them with statistically predictable patterns, enhancing both the analysis and interpretation of natural images.
Strengths: 1. Clear Presentation: The paper articulates its concepts with exceptional clarity, facilitating a deep understanding of complex models and their applications.
2. Effective Transformation of Distributions: The model is highly effective at converting input signals into specific, targeted distributions, optimizing data for further processing and analysis.
3. Online Operational Capability: One of the model's significant advantages is its ability to operate online. This feature significantly enhances its practicality for real-world applications, allowing for real-time data processing and continuous learning without the need for retraining.
Weaknesses: 1. A main concern is that the model is not biologically realistic. In addition to the authors' admission that the feedforward synaptic weights $W^T$ and feedback weights $-W$ are symmetrically constrained, there is also a question about the biological plausibility of using pairs of wavelet coefficients as inputs. It's unclear how biological systems would naturally derive such information and whether the distribution of these inputs is something that biological systems need to transform.
2. The setup of online learning with a batch size of 10 also raises questions about its biological feasibility. It's uncertain how biological systems could implement a similar mechanism.
3. The paper only demonstrates the method's effectiveness on two-dimensional inputs, which might not sufficiently prove its efficacy. In real-world scenarios, we often deal with inputs that are high-dimensional ($N>>2$), and the model's performance in such conditions remains untested.
4. The paper incorporates many specific settings, such as the choice of activation functions based on the Gaussianization of scalar signals. These particular choices may limit the generalizability and applicability of the model to different datasets or broader applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As mentioned in the Weaknesses section, the model is not biologically realistic. Given this, my primary question is: What is the purpose of using neural networks to implement this function? Furthermore, if it does not realistically mimic biological processes, should it be compared with more engineering-oriented approaches?
2. Scaling of Inhibitory Neurons with Input Dimensionality: The model currently demonstrates with two-dimensional inputs, requiring three inhibitory neurons. How does the number of required inhibitory neurons scale as the dimensionality of the input increases? This is crucial for understanding the feasibility and complexity of the model when applied to higher-dimensional data.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: Yes, the authors have addressed some of the limitations of their work in the Discussion section of the paper. They specifically mention the model’s lack of biological realism and its primary demonstrations within low-dimensional settings. These acknowledgments align with the checklist guidelines on discussing limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading of our paper and for your thoughtful comments. We are pleased that you find the paper clear and appreciate the model's online capabilities! We understand your concerns about the biological realism of our model and have addressed these concerns in our general author rebuttal above. Although there are aspects of our model that are not matched to underlying biological details, we do believe it offers a novel perspective for thinking about circuit computation that can provide a framework for interpreting experimental measurements.
In response to weakness #4: We actually think this points to an interesting aspect of sensory signals. In some respects, it's quite surprising that the model can effectively Gaussianize two-dimensional signals just by reshaping the response marginals along a few axes. This is likely due to the structure of natural signals, and the efficient coding hypothesis posits that sensory circuits take advantage of this as well. Therefore, while we agree there are aspects of our circuit that may not generalize to arbitrary datasets, we do not think this reduces its efficacy as a model of circuit computation.
---
Rebuttal Comment 1.1:
Comment: I have carefully read the comments from other reviewers and appreciate the authors' detailed responses. I have a minor question regarding your discussion on the "Symmetry of synaptic weights" where you mention, "When the weights are decoupled, they converge asymptotically toward a symmetric solution due to the symmetry of their learning rules." While this conclusion is derived from both analytical and numerical analyses, I wonder if such symmetric connections are also observed in real biological systems, especially considering that this section addresses the question of biological plausibility.
---
Rebuttal 2:
Comment: The most compelling example is the olfactory bulb, an early stage of olfactory processing in vertebrates, where excitatory mitral cells form dendrodendritic connections with local inhibitory granule cells [Shepherd 2003], leading to symmetric *connectivity* matrices (though not necessarily symmetric *weight* matrices). Within cortical circuits, it's unclear, though somatostatin interneurons that form local, reciprocal connections with pyramidal cells are promising candidates. We view the symmetry constraint as a testable prediction, which can potentially be measured, for example, in recently reported connectomics datasets from mouse visual cortex [Schneider-Mizell bioRxiv 2024].
---
Rebuttal Comment 2.1:
Comment: Thank you to the authors for your thoughtful reply, which has deepened my understanding of this work. I will raise my score to a 5. | Rebuttal 1:
Rebuttal: Thank you for your careful reading and helpful comments. Here we respond to two important concerns and provide individual responses below.
## Biological realism
Reviewers **zp8S**, **Voaj** and **txmi** listed the biological realism of our model as a primary concern that limits the applicability of our model in understanding neural circuits. We appreciate your concerns, though we still believe that our model represents a useful advance.
First, some of the concerns can be addressed with minor adjustments that do not affect the overall circuit computation. We list these concerns with the most easily addressed concerns first.
1. **Wavelet coefficients.** Apologies for the jargon. These are the responses of local oriented filters (similar to Gabor filters) applied to natural images. They are qualitatively similar to simple cell responses in primary visual cortex [Field 1987], so they are representative of natural inputs to cortical circuits in the visual cortex. We will edit the text and refer to the inputs as "local filter responses".
2. **Batch training.** Using a batch size of 10 was an optimization choice and is not required. We could have implemented a fully online optimization algorithm with batch size 1. To demonstrate this, we've run Algorithm 1 in the fully online setting and reproduced Figure 3B of the main text; see the attached PDF.
3. **Symmetry of synaptic weights.** While Algorithm 1 enforces symmetry between the primary neuron-to-interneuron weights $W^\top$ and the interneuron-to-primary neuron weights $-W$, this is not required. When the weights are decoupled, they converge asymptotically toward a symmetric solution due to the symmetry of their learning rules. This has been demonstrated both analytically and numerically in previous works; see, e.g., appendix E.2 of [Duong et al., NeurIPS 2023].
4. **Scalability.** We have strong reason to believe that the number of inhibitory interneurons required will scale reasonably with the dimension of the input. Visual inputs are highly structured; e.g., statistical dependencies between inputs rapidly decay with the distance between the inputs. Therefore, local interneurons only need to connect to neurons with overlapping or adjacent receptive fields, which greatly reduces the number of interneurons that are required as the size of the model increases.
5. **Violation of Dale's law.** This is the most serious concern (although note that Dale's law may not be as hard a constraint as previously believed; see [Saunders et al. 2016, Granger et al. 2020]). There are two possible solutions. One approach is to seek a circuit implementation of the algorithm that obeys Dale's law. This could potentially be achieved by introducing additional (excitatory or inhibitory) neural populations, though the details of this construction need to be worked out. An alternative approach is to replace the current optimization problem with a *constrained* optimization problem that respects Dale's law and can be optimized via a projected gradient descent algorithm. As a preliminary test of this, we optimized the constrained optimization problem (in the context of Gaussianization of visual inputs) and found that the solution still significantly reduces the mutual information between neural responses (more than linear whitening, but less than the unconstrained model that violates Dale's law); see the attached PDF.
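The projected-gradient idea in point 5 — take an ordinary gradient step, then project the weights back onto the sign-constrained set — can be sketched on a stand-in objective. The least-squares problem below is purely illustrative (the paper's actual minimax Gaussianization objective is not reproduced here); only the projection step is the point.

```python
import numpy as np

# Projected gradient descent sketch: enforce a Dale's-law-style sign constraint
# W >= 0 by projecting after each gradient step. The objective here is a toy
# least-squares fit, not the paper's objective.
rng = np.random.default_rng(2)
X = rng.normal(size=(3, 200))             # toy inputs
W_true = np.abs(rng.normal(size=(2, 3)))  # a nonnegative target weight matrix
Y = W_true @ X

W = np.zeros((2, 3))
eta = 0.1
for _ in range(500):
    grad = 2 * (W @ X - Y) @ X.T / X.shape[1]  # gradient of the mean squared error
    W = np.maximum(W - eta * grad, 0.0)        # projection onto the constraint set W >= 0

print(np.abs(W - W_true).max())  # near 0: here the constraint is compatible with the optimum
```

When the unconstrained optimum violates the constraint, the projected iterates instead converge to the best feasible point, which is the source of the performance gap mentioned above.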
Second, we believe that our normative model strikes an effective balance between simplicity and biological realism that provides a useful framework for interpreting experimental measurements of neural circuits. In particular:
1. Our model conceptualizes the circuit computation as transforming the input distribution to achieve a target distribution (or set of distributions). This can potentially be tested experimentally by measuring distributions of circuit responses (and how these change when the input distribution shifts). We think this is especially useful when recording from large populations of neurons and there is some experimental work along these lines [Benucci et al. 2013].
2. We establish a link between the circuit level computation and physiological processes such as neural adaptation in local interneurons. This can be experimentally tested by measuring how interneuron FI curves adapt in response to changes in the distribution of circuit inputs.
## Relation to existing work
Reviewer **Voaj** pointed out that our original submission does not cite salient works related to circuit models that implement nonlinear transformations to efficiently encode information. We take this point quite seriously and will work to improve our description of the relationship of our work to related literature. The following will be added to the paper:
> There are a number of existing computational models that explain how neural circuits can implement nonlinear transformations to efficiently encode their inputs. For example, there are several neural circuit models that implement forms of divisive normalization [Rubin et al. 2015, Chalk et al. 2017, Malo et al. 2024], a transformation that is optimal for efficient encoding of natural signals [Schwartz & Simoncelli 2001, Lyu 2011]. In addition, there is a body of work on normative spiking models derived from objectives which maximize the information encoded per spike [Koren & Deneve 2017, Alemi et al. 2018, Gutierrez & Deneve 2019], which can account for neural adaptation mechanisms such as gain control. Our work differs from these by proposing a novel framing of sensory circuit computation in terms of transformations of probability distributions, which can be viewed as a population-level version of the seminal work by Laughlin. We then demonstrate in a normative circuit model how interneurons can play a critical role in optimizing this objective by measuring the marginal distribution of circuit responses and adjusting their feedback accordingly.
Pdf: /pdf/79b0c3b0c857520614a1a6c58f5406c9c0691f10.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficient Discrepancy Testing for Learning with Distribution Shift | Accept (poster) | Summary: The paper extends the recently introduced Testable Learning with Distribution Shift (TDS) model and makes contributions in three main aspects: Universal TDS Learners, Optimal Error Guarantees via L1 Sandwiching, and Fully Polynomial-Time Testing. It presents universal TDS learners that perform well across a wide range of test distributions, achieves nearly optimal error rates by leveraging L1 sandwiching techniques, and provides efficient testing algorithms that run in fully polynomial time, particularly for intersections of halfspaces. The paper also explores the implications of these methods for various concept classes and demonstrates their application in scenarios involving distribution shifts
Strengths: - This paper extends the recently introduced Testable Learning with Distribution Shift (TDS) model by introducing Universal TDS Learners and employing L1 sandwiching techniques to achieve nearly optimal error rates. These advancements provide new methods for handling distribution shifts, particularly for intersections of halfspaces.
- The theoretical foundations are robust and well-supported by detailed mathematical formulations. The use of extensive equations and rigorous proofs shows the depth of the research.
- The paper is well-organized and clearly written, with each section logically building upon the previous one. Definitions, lemmas, and theorems are clearly presented, aiding in reader comprehension.
- The development of efficient testing algorithms that run in fully polynomial time, especially for intersections of halfspaces, addresses significant limitations in prior work.
Weaknesses: - Lack of Experimental Validation: The paper presents extensive theoretical work but lacks empirical validation, making it difficult to assess practical performance.
- Assumptions on the need for efficient implementations of the testing phase: The paper assumes that efficiency is crucial for the testing phase, especially for large pre-trained models. However, it should address how much improvement this efficiency brings to current large models and if they are substantial enough to benefit from it.
- Generalization to Other Concept Classes: The focus on specific concept classes like intersections of halfspaces limits the applicability of the methods to a broader range of problems.
- Clarity in Algorithm Description: The algorithm descriptions could benefit from the inclusion of pseudocode or flowcharts to enhance understanding and reproducibility.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Can you provide evidence showing that the efficiency improvements in the testing phase are significant for current large pre-trained models, or specify at what scale of models these improvements become substantial?
- Generalization to Other Concept Classes: The paper focuses on specific concept classes like intersections of halfspaces. Do you have plans or ideas for extending these methods to other concept classes? It would be helpful to understand the potential for broader application.
- Clarity in Algorithm Description: The descriptions of your algorithms are mathematically detailed but could benefit from pseudocode or flowcharts. Could you provide these to enhance clarity.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, the authors devote a section to discussing the limitations, future work, and broader impact of the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the reviewer for their constructive feedback and for appreciating our work.
- The fully polynomial-time testers we propose in this work apply to the class of balanced halfspace intersections. There are two important implications of our results related to current large pre-trained models: (1) we can experimentally evaluate our proposed efficient testers through applying them to the last layers of large networks, since the last layers correspond to simple classes like halfspaces or halfspace intersections and (2) our localized discrepancy tester for halfspace intersections demonstrates that fully polynomial-time discrepancy testing is possible, even for classes for which there is no known fully polynomial-time provable learning algorithm. The second implication motivates further study of the localized discrepancy testing problem for more complex concept classes, like deep neural networks.
- We agree with the reviewer that generalizing our results to broader concept classes is an interesting open direction. That said, our results capture several fundamental concept classes like constant-depth circuits, low-degree PTFs (see Table 1), low-dimensional convex sets (section 4) and halfspace intersections (section 5). As mentioned above, TDS learning of these classes can also be useful for neural networks, since the last layers typically correspond to some simpler class. Moreover, our work hints at potential end-to-end extensions to other classes by extracting the exact properties we used to obtain our results (see for example Remark 5.4 and lines 394–400).
- Due to space limitations, we only provided pseudocode for our algorithms in the appendix (see Algorithms 1,2 and 3). Given the additional space for the camera-ready version, we will consider adding some pseudocode in the main paper for clarity. We can also add pseudocode for the boundary proximity tester in Appendix E.
---
Rebuttal Comment 1.1:
Comment: Thank you for response. | Summary: Discrepancy distance is crucial in domain adaptation. The paper proposes the first set of provably efficient algorithms for testing localized discrepancy distance. This approach can generalize and improve prior work on TDS learning, and further extend to semi-parametric settings. By separating learning and testing phases, the authors obtain algorithms that run in fully polynomial time at test time.
Strengths: 1. The article is well-written with clear logic, though it could benefit from improvements in some parts.
2. The authors put great effort into delivering theorems on time and sample complexity for several fundamental concept classes: 1) classes with low sandwiching degree; 2) non-parametric low-dimensional classes; 3) the class of balanced intersections of halfspaces.
3. The paper introduces testing algorithms that run in fully polynomial time.
Weaknesses: See the questions below.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Does Definition 1.1 of localized discrepancy match the one with 0-1 loss described in (Zhang et al., 2020)?
2. The abstract mentions, "Our methods also extend to semi-parametric settings and yield the first positive results for low-dimensional convex sets." However, I couldn't locate this information in the main text of the paper.
3. In line 65, it is stated, “we give the first TDS learners that are guaranteed to accept whenever the test marginal falls in a wide class of distributions that are not necessarily close to the training distribution (in say statistical distance) but, instead, share some mild structural properties.”
This raises the question of why emphasis is placed on some mild structural properties over statistical distance, and what exactly these mild structural properties are.
4. In line 142, it is stated, “$\lambda$ is the standard (and necessary) benchmark for the error in domain adaptation when the training and test distributions are allowed to be arbitrary.” I would appreciate it if you could provide further insight about $\lambda$. For example, considering that the training and test distributions are allowed to be arbitrary, is there an upper bound on $\lambda$?
[1] Zhang Y., Long M., Wang J., and Jordan M., On localized discrepancy for domain adaptation. arXiv. 2020.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the anonymous reviewer for their feedback.
1. Our Definition 1.1 is a generalization of the localized disparity discrepancy w.r.t. 0-1 loss as described in (Zhang et al., 2020). In particular, they use a specific notion of neighborhood (which corresponds to the disagreement neighborhood of definition E.2 we define in the appendix). However, we can also define the localized discrepancy with respect to other notions of neighborhood, like the global neighborhood (see definition C.2) or the subspace neighborhood (see definition D.2). Different notions of neighborhood correspond to different localized discrepancy testers.
2. For the results on semi-parametric classes see section 4, where we propose TDS learners for low-dimensional convex sets (which is a semi-parametric class of functions).
3. Line 65 emphasizes that our testers will certify low test error even in the presence of large amounts of distribution shift, as long as the distribution shift is benign in the sense that the test distribution is structured. The appropriate structural properties are described in lines 273–275 for Theorem 4.2 and in lines 324–326 for Theorem 5.1 and essentially correspond to (1) an upper bound on the fourth moment and (2) anti-concentration. In other words, while distributions with small statistical distance look similar, our testers will accept distributions that are very different from the training distribution, as long as they satisfy some structural properties.
4. The error parameter $\lambda$ encodes the relationship between the training and test labels. It is small when there is some classifier $f^*$ in the given concept class that has both low training and low test error, i.e., both the training and the test labels are approximately generated by $f^*$. Conversely, if the test labels are opposite from the training labels, $\lambda$ will always be $1$. Since there are no test labels available, we cannot hope for an error guarantee better than $\lambda$ (see proposition 1 in [KSV’24]).
*[KSV’24] Adam R Klivans, Konstantinos Stavropoulos, and Arsen Vasilyan. Testable learning with distribution shift. The Thirty Seventh Annual Conference on Learning Theory, 2024*
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response. My concerns are addressed well and I have decided to increase the rating. Please revise the paper accordingly if it gets in. | Summary: This paper considers the problem of designing Testable Learning with Distribution Shift (TDS learning) algorithms. Through proposed algorithms for testing localized discrepancy distance, the authors give a set of efficient TDS learning algorithms. These algorithms improve on all prior work in the sense that they give optimal error rates, provide universal TDS learners and have polynomial runtime.
Strengths: This work considers the novel framework of TDS learning, under which provably efficient algorithms for learning with distribution shift for certain concept classes are introduced. This shows the high novelty of this work.
From a significance perspective, this work generalizes beyond prior work in many aspects.
Also, the proofs are solid, with clearly defined notation.
Weaknesses: Although "discrepancy testing" appears in the title, the discussion of testing is relatively limited. Most of the results are given in the form of TDS learners rather than discrepancy testers. Readers need to refer to the appendix to see what is actually going on in the testing. I would suggest that the authors explain their testing algorithm in more detail, and give more intuition for why this testing may work. Also, it would be good if the authors could explain more about why their testers can induce TDS learners in sections 4 and 5.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weakness. My main questions are regarding the intuition behind your tester, and how your theorems in section 4 and 5 come from proposition 3.2.
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: No significant limitations are stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback.
We would like to clarify that we propose 3 different discrepancy testers, one for each of sections 3, 4 and 5 (see proposition 3.2, appendix D.1 and appendix E.1). The relevant discussion can be found in lines 103–109 and high-level descriptions of the three testers can be found in lines 110–116 (for the Chow matching tester), lines 117–123 (for the cylindrical grids tester) and lines 124–130 (for the boundary proximity tester). Moreover, in lines 285–316 we provide the intuition behind the cylindrical grids tester (used for the results of section 4) and in lines 354–378 we provide the intuition behind the boundary proximity tester (used for the results of section 5).
Due to space limitations, we had to move the formal statements regarding our discrepancy testers to the appendix but, given the additional space of the camera-ready version, we will add some more details in sections 4 and 5 accordingly.
Moreover, to avoid confusion, we will explicitly state in section 1.2 that we propose three different testers.
We are happy to provide further clarifications regarding the testers of sections 4 and 5 if the reviewer has any questions not addressed in lines 285–316 and lines 354–378.
---
Rebuttal Comment 1.1:
Comment: Thanks for the explanation. | Summary: The paper investigates the problem of learning under distribution shift in the recently introduced framework of testable learning with distribution shifts (TDS) [Klivans et al. '24]. In this framework, the learner receives labeled samples from the train distribution D and unlabeled samples from the test distribution D’, and is expected to either output a hypothesis with low error on D’ or correctly identify that D’ is not equal to D when that is the case. In other words, the learner runs a test to check whether D=D’ and outputs a hypothesis with low error on D’ when the test accepts. She can abstain from outputting a hypothesis when the test rejects, but the test needs to accept whp when D=D’.
This paper introduces the notion of localized discrepancy test and uses it for efficient TDS learning. The localized discrepancy test estimates the discrepancy between two distributions with respect to a fixed hypothesis which makes it efficient to compute in many cases. This is in contrast to the prior notion of (non-local) discrepancy which is computed with respect to worst pair of hypotheses within a class and is therefore significantly harder to compute.
Some of the salient results from the paper are as follows.
1. The prior work by Klivans et al. ('23) showed that any class admitting low-degree polynomial approximators in the L2 sense can be efficiently TDS learned. This paper extends this result to show that approximation in the L1 sense is enough for TDS learning, and uses this result to obtain improved algorithms for constant-depth circuits.
2. Gives universal TDS learners for convex sets with low intrinsic dimension and intersections of halfspaces. The learner is universal in the sense that the corresponding test is guaranteed to accept even when the distribution D’ is not equal to D but has mild structural properties.
Strengths: The paper makes concrete improvements over the existing work on TDS learning.
It is reasonably well written.
Weaknesses: While the TDS learning model was introduced from the practical motivation of learning under distribution shifts, the results in this paper (as well as the prior work) seem to be in settings far removed from practice, often relying on strong distributional assumptions. I am unsure about the significance and relevance of these results to general machine learning researchers concerned with distribution shifts in the real world.
Technical Quality: 3
Clarity: 3
Questions for Authors: My main concern is regarding the practical applicability of this line of work. While practical applicability is not the goal for all the theory work, in this case, the motivating question comes from practice, so connecting back to practical usefulness seems important. Could the authors explain if they see this research helping with learning under distribution shifts in real-world situations? It's okay if this paper doesn't immediately offer practical algorithms, but if the authors can discuss how this work might eventually lead to something practically useful, I would be happy to reconsider my evaluation of the paper. I might be missing something and am willing to seriously reconsider my assessment.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See the weaknesses section and the question for the authors above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your question about "strong assumptions". The goal of this line of work is precisely to remove the strong assumptions inherent in all prior work in the fields of distribution shift and domain adaptation. More precisely, all prior work requires an assumption on both the train distribution D and test distribution D’, as it requires the discrepancy distance between D and D’ to be small to obtain meaningful generalization bounds. Instead, we are able to come up with the first efficient algorithms to test a (localized) version of this quantity, without making any assumptions on D’. Additionally, we can remove assumptions on D as well by combining our work with so-called testable learning algorithms (effectively, we can test the assumptions we need for D and reject if they are not satisfied). Finally, it is possible to relax the assumptions we make on D to, say, bounded distributions, which is quite reasonable to assume in practice (this observation will appear in forthcoming work).
Let us further explain why we think making an assumption on D (the training distribution) is practically reasonable. Consider the following, now extremely common, scenario: someone trains a foundation model on a labeled data set. In this scenario, the learner has a lot of control over the training set; for example, in the health or bio domains the training data can be subsampled from large databases to satisfy various properties. What we want to avoid is making an assumption about an unseen test set, as it may come from a radically different distribution. The recent line of work on TDS learning, and this paper in particular, actually gives efficient algorithms with meaningful guarantees in this scenario, whereas all prior work requires an assumption on some notion of distance between D and D’ that cannot be efficiently verified.
Another practical example: assume we have built a foundation model and wish to determine if it will perform well on a new unseen test set. We can view the last layer of this network as a linear classifier with respect to a distribution of weights on the next to last layer of the network. We can even verify certain properties of this distribution (such as boundedness). Then determining if this foundation model will succeed is precisely a TDS learning problem as described in the submission.
Our work, in particular, makes three important contributions towards more practically relevant algorithms, by giving TDS learners that: (1) can handle more classes (i.e., degree-2 PTFs, constant-depth circuits and convex sets), (2) accept more often, since they are guaranteed to accept wide classes of benign distribution shifts (see the universal TDS learners of Theorems 4.2 and 5.1) and (3) run in fully polynomial time at test time, even for classes for which no fully polynomial learning algorithms are known (see section 5).
To summarize, we have removed the onerous assumptions prior work made on both the train and test distributions and have given a path forward for new efficient algorithms to determine if a foundation model will succeed on unseen test sets. We therefore feel these ideas will have a major practical impact. Distribution shift is certainly one of the most critical issues for trustworthy ML.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed and convincing response. Based on this, I have increased my score. I would encourage the authors to include a thorough discussion on practical applicability in the final version, clearly outlining areas where practitioners can learn from this line of results, as well as remaining major bottlenecks. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: Distribution shift is a well-known problem where a classifier trained on a particular data distribution encounters an input distribution far from the training set (OOD). In such a scenario it is desirable that the model's performance not degrade too much, and one way to ensure this is to estimate the discrepancy of the concept class w.r.t. the training and test distributions in the standard manner.
Reasoning over the exponentially many functions in a reasonable concept class is naturally not tractable in most cases, hence the authors define a localized variant that computes the discrepancy only over a small neighbourhood of a particular reference function. The intuition behind this is that the learned functions are well behaved enough to be near the reference hypothesis with high probability.
To make the learned models robust to distribution shifts, it is essential to have learning processes that can give discrepancy guarantees and the paper takes a step towards this by providing an optimal test for a broad class of hypotheses.
Strengths: 1) the problem relaxation of localized discrepancy is original and well motivated, and is likely to influence further research in the area.
2) the examples are chosen well and make the paper easy to read
3) The paper makes a timely and significant contribution to learning with discrepancy guarantees
Weaknesses: 1) Since the main claim is a polytime algorithm, in addition to various speedups in the learning process, it might be interesting to look at a toy experiment to get an idea of the performance.
2) The localized discrepancy relaxation likely does not make sense in a few situations. This needs to be a part of the paper, along with the list of cases where the notion does work.
Minor typos -- 1399 (threshol)
115 (chow)
Technical Quality: 4
Clarity: 4
Questions for Authors: --
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to thank the reviewer for their comments and for appreciating our work.
- We agree that experimental evaluation of our algorithms would be interesting, but we believe that a thorough, dedicated evaluation would be preferable and more suitable for future work, since the scope of this paper is theoretical.
- The appropriate localized discrepancy relaxation of the testing phase depends on the guarantees one can ensure during training. For example, if the training algorithm is guaranteed to approximately recover the relevant subspace (see Theorem D.13), then testing the localized discrepancy with respect to the subspace neighborhood (see Definition D.2) is appropriate. We refer the reviewer to lines 103–109 for a relevant discussion and lines 110–130 for examples of the appropriate relaxation in different scenarios. We will clarify this point in future revisions. | null | null | null | null | null | null |
NRGBoost: Energy-Based Generative Boosted Trees | Reject | Summary: The authors propose an energy-based generative boosting method. They try to maximize the log-likelihood functional delta with a second-order expansion. This leads to a boosting algorithm with deltas as steps in the log-likelihood. Instead of scaling the steps with a fixed predefined value (as an anti-overfitting hyperparameter), they rely on line search to get better performance. They initialize f0 as uniform and use trees as weak learners to learn the deltas. They derive the objective with trees; it is well explained and looks similar to other second-order method objectives like XGBoost. They use MCMC to sample from Q(x), as is typical of energy-based approaches. They use Gibbs sampling so they only need to sample from one dimension while keeping the rest constant. However, since there are t trees, this is quadratic in t. They use some form of accept/reject to sometimes accept previous-iteration samples in order to not have to re-sample new samples given the quadratic cost. They include a probability of refresh in order to not just always accept old samples. They propose interesting ways of regularizing the approach. They provide a good literature review of related methods. It seems like Section 4 could be integrated with the related work section.
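As a rough illustration of the one-dimension-at-a-time Gibbs scheme described in this summary (not the authors' implementation), the sketch below resamples each coordinate from its exact conditional over a discrete grid while the others are held fixed; the helper name `gibbs_sweep` and the grid discretization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_sweep(log_density, x, grids):
    """One Gibbs sweep: resample each coordinate from its conditional
    (density proportional to exp(log_density)) over a discrete grid,
    keeping the other coordinates fixed."""
    x = x.copy()
    for d, grid in enumerate(grids):
        # candidates differ from x only in coordinate d
        cand = np.repeat(x[None, :], len(grid), axis=0)
        cand[:, d] = grid
        logits = np.array([float(log_density(c)) for c in cand])
        p = np.exp(logits - logits.max())  # stable softmax over the grid
        p /= p.sum()
        x[d] = grid[rng.choice(len(grid), p=p)]
    return x
```

For an energy model built from t boosted trees, evaluating `log_density` on every grid point of every dimension is what makes a naive sweep expensive, hence the sample-reuse trick described above.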
The figure may appear unimpressive to a generative expert unfamiliar with trees. But being able to generate good samples from MNIST using only decision trees is an impressive feat (even if MNIST is downsampled). This paves the way for trees being used on more complex data.
Table 2 shows nice prediction results, and they do extensive hyperparameter tuning. However, ML Efficiency is not the best metric; it only focuses on prediction. A generative model could produce low-diversity fake data that leads to good classifiers. I would recommend adding some distribution metrics such as the Wasserstein distance (which is computable in low dimension and is used for high-dim data such as images/videos in the form of the widely popular FID/FVD). There are also other useful metrics for quality and diversity; I recommend looking at the extensive choice of tabular-data metrics described in https://proceedings.mlr.press/v238/jolicoeur-martineau24a.html. However, the only essential one in my opinion is having a distance in distribution. It can be the Wasserstein distance or something like the MMD distance.
Except for the missing distribution metric, everything else in the results is good and the methodology is sound with good hyperparameter tuning. I would encourage the authors to add a distribution metrics.
The method is sound, novel and quite interesting. The presentation is very well made. Being the first tree method achieving good image data samples is impressive in my opinion. This is a very high quality paper.
Strengths: The method is sound, novel and quite interesting.
The presentation is very well made.
Being the first tree method achieving good image data samples is impressive in my opinion.
This is a very high quality paper.
Weaknesses: Missing a distribution metric; ML efficiency is not an adequate metric on its own.
Technical Quality: 4
Clarity: 4
Questions for Authors: What are the memory requirements, especially when processing large datasets such as MNIST? Could you provide sampling time and RAM requirements for the datasets? How hard is scaling with respect to N_samples, and also with respect to N_features?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: If the authors had to downsample MNIST from 28x28 to a lower resolution, then there are scaling limitations that should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their valuable feedback and encouraging words about our work, as well as for providing helpful context about our method's ability to produce passable image data as a tree-based generative model.
We address the reviewer's two concerns below.
# Distribution Metric
While we did not include a conventional distribution metric, our original intention was to have the discriminator measure reported in the paper serve a similar purpose.
Our reasoning was that estimation of integral probability metrics (like the Wasserstein-1 distance or kernel MMD) between two distributions can be interpreted as solving a binary classification problem to distinguish between the two sets of samples (see e.g., [[Sriperumbudur et al., 2012]](https://projecteuclid.org/journals/electronic-journal-of-statistics/volume-6/issue-none/On-the-empirical-estimation-of-integral-probability-metrics/10.1214/12-EJS722.full)).
Therefore, instead of picking a classifier belonging to a specific family of functions (such as continuous functions with Lipschitz constant <= 1 for the $W_1$ metric), we chose instead to directly employ a classifier that we know to be state of the art for binary classification in tabular data and that has no issue with the data being mixed categorical and numerical with very different scales.
That being said, we agree that this evaluation is perhaps unconventional and our approach has downsides. Namely, that XGBoost is too effective and can nearly perfectly distinguish real from synthetic data on the larger datasets where it has enough training data.
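As a loose sketch of this kind of discriminator-based evaluation (using scikit-learn's GradientBoostingClassifier as a stand-in for XGBoost; `discriminator_auc` is a hypothetical helper, not the paper's code), one can train a boosted-tree classifier to tell real rows from synthetic ones and report its cross-validated AUC, where ~0.5 means the sets are indistinguishable:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

def discriminator_auc(real, synth, seed=0):
    """Cross-validated AUC of a boosted-tree two-sample classifier;
    values near 0.5 mean real and synthetic data look alike."""
    X = np.vstack([real, synth])
    y = np.r_[np.ones(len(real)), np.zeros(len(synth))]
    clf = GradientBoostingClassifier(random_state=seed)
    return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()
```

This is the sense in which the discriminator measure acts as a learned surrogate for an integral probability metric: the classifier family plays the role of the IPM's function class.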
We therefore decided to follow the reviewer's suggestion and evaluate the Wasserstein distance between samples using a similar setup to the linked paper:
- Numerical variables are min-max scaled and categoricals are one-hot encoded and scaled by 1/2.
- A $L_1$ distance is used in the optimal transport formulation.
- Due to the worst case cubic scaling with the sample size of solving the optimal transport problem we sub-sample a maximum of 5000 samples from the original test set and use an equal number of synthetic samples.
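For two equal-size samples with uniform weights, the optimal transport problem above reduces to a minimum-cost perfect matching, so a minimal sketch of this evaluation (assuming the preprocessing above, i.e. min-max scaled numericals and halved one-hot categoricals, has already been applied; `empirical_w1` is an illustrative name, not our actual code) could be:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

def empirical_w1(real, synth):
    """Exact W1 between two equal-size empirical samples under an L1
    ground cost, computed as a minimum-cost perfect matching."""
    cost = cdist(real, synth, metric="cityblock")  # pairwise L1 distances
    rows, cols = linear_sum_assignment(cost)       # optimal matching
    return cost[rows, cols].mean()
```

The Hungarian-style assignment has worse-than-quadratic scaling in the sample size, which is why we sub-sample to at most 5000 points per side as described above.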
The full results for the test set are added to the PDF attached to the author rebuttal but below we summarize the average rank over all datasets and 5 experiments per dataset for convenience:
|method|$\hat{W}_{train}$|$\hat{W}_{test}$|
|-|-|-|
|TVAE|3.114|3.143|
|TabDDPM|2.743|2.771|
|ARF|2.743|2.771|
|DEF (KL)|4.090|4.057|
|NRGBoost|2.314|2.257|
We note, however, an almost complete inversion of the ranking of the methods compared to our discrimination measure results in the abalone and miniboone datasets.
Another somewhat concerning observation is the instability of the results with respect to different, seemingly equally plausible choices of how to normalize or rescale the data and to the distance metric used in the OT formulation.
Despite this, NRGBoost still ranks best on average even though these results don't favor it as much as the discriminator measure.
A possible explanation for the comparatively worse results is that NRGBoost (and DEF KL) are trained to minimize the KL-divergence between data and model distributions which behaves quite differently from a Wasserstein distance.
Regardless, we believe that both types of evaluation are valuable and provide different perspectives of the results and trade-offs between the different methods.
We will add the full results, together with the above discussion to the paper as we agree it would improve it significantly. We kindly ask the reviewer to please let us know if they have any further feedback regarding this evaluation.
# Scalability Concerns
Memory was never a concern during our experiments with any of our tree-based methods. Besides the memory required to store the original data itself and the similarly sized sample pool, **NRGBoost uses no significant additional memory** besides storing the trees themselves which use a very compact format.
The choice to downsample MNIST was simply one of computational time.
Both the tree-fitting and the sampling scale linearly with the number of features. Downscaling by 4x therefore allowed us to run the experiments with cross-validation on MNIST in a reasonable time-frame.
Note that training a NRGBoost model with 200 trees of 4096 leaves used to take roughly 2.5 hours and we trained 6 of those (+ smaller ones) per cross-validation fold (i.e. x5).
We have since made significant improvements to the efficiency of our tree-fitting implementation which was previously responsible for the bulk of the training time and are now able to train the same model on downsampled MNIST in under 1 hour.
Gibbs sampling now represents roughly 70% of training time which we report for each dataset in the PDF attached to the author's rebuttal.
This **sampling time scales linearly with both `N_samples` and `N_features`.**
We note however that downsampling also helps with the curse of dimensionality when it comes to density estimation.
Tabular models like NRGBoost do not benefit from any of the helpful inductive biases of convolutional neural networks and treat every pixel as a feature that is potentially correlated with any other feature, without any awareness of spatial separation between pixels.
Increasing the resolution by 4x means that the density over 4x as many features needs to be learned and to achieve similar visual accuracy we expect that this would require deeper trees as there are more features to split on and partition the input space.
Because our primary goal was to model tabular data we did not explore directions that would improve NRGBoost on image data specifically by exploiting the biases mentioned above but that could be interesting future work!
# Summary
Again, we thank the reviewer for their time and their suggestions and hope to have adequately addressed their concerns.
---
Rebuttal Comment 1.1:
Title: response
Comment: I agree that the Discriminator score can be loosely linked to an IPM metric, but having a true metric like the Wasserstein distance is better. I agree that there is no perfect way to normalize/scale the data when you have continuous and categorical data; this is a well-known problem. The Gower distance specifically tries to tackle this problem and this is the distance that you used for the Wasserstein distance (by min-max scaling continuous features and halving one-hot encodings), which at least makes the choice rational.
Thank you for addressing my requests for a distribution metric and memory concerns. Given this, I am increasing my score by 1.
---
Rebuttal 2:
Title: Thank you!
Comment: Thank you once again for your time in reviewing our rebuttal and for increasing your score.
We are glad we were able to address your concerns. | Summary: This paper proposes a boosted tree algorithm that performs distribution learning using an energy-based formulation. Inspired by methods like XGBoost, it is claimed to achieve high performance not only in generative ability but also in discriminative performance.
Strengths: The proposed method incorporates techniques that can be leveraged because they inherit from tree boosting models, such as the approximate sampling algorithm. Moreover, as a model that possesses not only generative quality but also discriminative performance, it has a wide range of applications.
Weaknesses: This study is not theoretical; therefore, the validity of the proposed method must be confirmed through experiments. However, there are some questions regarding the experimental settings.
Also, from an algorithmic perspective, since methods unique to tree ensembles are incorporated, they could potentially offer advantages in terms of computational complexity compared to other methods. However, evaluations regarding efficiency have not been conducted. If there are advantages, this seems like a missed opportunity.
Please refer to the Questions section.
Minor:
The evaluation of variance is presented separately in Appendix G, which makes comparison difficult. Therefore, I would like it to be summarized in one table.
Technical Quality: 2
Clarity: 2
Questions for Authors: **(1) Robustness**
In the discussion, it is stated that hyperparameters are not critical compared to deep learning, but it seems that there is no experimental evidence to support this. If you claim robustness to hyperparameters as a strength, please provide evidence for this.
**(2) Computational cost**
Please describe the computational cost and compare it with benchmarks. In the discussion, it is mentioned that DEF does not require sampling for training and that the processing for sampling is light, but please quantify how much of a difference there is.
**(3) Dataset**
How do you select datasets? It seems that only a few are picked from among the many datasets available, which may give the impression that you are cherry-picking the data that works well.
**(4) Fixed parameters**
It seems that the hyperparameters are fixed in advance as shown in Table 9, but how were these decided? It is mentioned that the hyperparameter search can be covered by only 24 combinations, but this also gives the impression that you are setting them in advance to work well and narrowing the apparent search space.
**(5) Parameter range**
It was also mentioned that DEF tends to have a large number of leaves, reaching 2^14, but isn't this because the search range is set that way? Even if you align this setting with NRGBoost, which has a maximum of 2^12, will it lead to the same conclusion?
**(6) Benchmark performance**
In Figure 1, the performance of the benchmark methods seems too poor. Is this figure created with the same settings as those measured in Table 2, for example? TabDDPM seems to show better performance than the proposed method in the ML Efficiency result, but it is hard to imagine getting similar results from the generated results in Figure 1.
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Computational cost of sampling is larger than existing methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed feedback. We will try to address all their concerns below.
Regarding the minor point about separately reporting averages and the respective standard deviations, we agree that this was a bad choice and will change the paper to present them in the same format used in the PDF attached to the author's rebuttal.
# (1) Robustness
We concede to the reviewer's point that we did not provide evidence for the claim of robustness as it was based on our own analysis and interpretation of the results of the hyperparameter tuning.
We can add an appendix with this analysis, which shows that, for example, models with 256 or 1024 leaves and a shrinkage factor around 0.15 can both achieve reasonable performance across most datasets for NRGBoost.
# (2) Computational Cost
We have added a comparison of the time taken to fit the best models found by hyperparameter tuning for each dataset to the PDF attached to the author rebuttal above.
Regarding the difference between sampling from NRGBoost and DEF models we are afraid that any numbers we can present for DEF would be misleading.
Since sampling is required during training for NRGBoost, we have spent considerable effort re-writing it in C and optimizing it.
In contrast, for DEF, sampling is a one-off cost that we incur for a single model after hyperparameter tuning and as a result it is implemented in single threaded Python code.
When we claim that sampling is simpler for DEF models, it is because it requires only picking a tree in the ensemble uniformly at random, then a leaf according to a stored vector of leaf probabilities, and finally sampling a point uniformly from the support of that leaf. None of these steps requires any complex calculation, unlike Gibbs sampling, and an optimized implementation should be **many** times faster.
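To make this concrete, the three-step sampling procedure described above can be sketched in a few lines. This is a hypothetical illustration (the tree data layout, names, and toy ensemble are assumptions, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_def(trees, n_samples):
    """Ancestral sampling from a DEF-style ensemble: pick a tree uniformly,
    a leaf by its stored probability, then a point uniformly in the leaf box."""
    out = []
    for _ in range(n_samples):
        tree = trees[rng.integers(len(trees))]                   # uniform over trees
        leaf = rng.choice(len(tree["probs"]), p=tree["probs"])   # stored leaf probs
        lo, hi = tree["boxes"][leaf]                             # axis-aligned leaf support
        out.append(rng.uniform(lo, hi))                          # uniform in the box
    return np.array(out)

# Toy ensemble of 2 trees over the 2-D unit square
trees = [
    {"probs": np.array([0.7, 0.3]),
     "boxes": [(np.zeros(2), np.full(2, 0.5)), (np.full(2, 0.5), np.ones(2))]},
    {"probs": np.array([0.5, 0.5]),
     "boxes": [(np.zeros(2), np.ones(2)), (np.zeros(2), np.full(2, 0.25))]},
]
samples = sample_def(trees, 1000)
```

No rejection, no Markov chain: each draw is independent, which is why it parallelizes and vectorizes trivially.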
# (3) Dataset
Running our cross-validation setup for all density methods with hyperparameter tuning for each method/fold takes considerable time.
As a result, we had to limit the number of datasets used and we believe that 7 datasets is a reasonable number and in line with other papers on the subject of generative modelling (e.g., [[Xu et al., 2019]](https://arxiv.org/abs/1907.00503) and
[[Watson et al., 2023]](https://arxiv.org/abs/2205.09435)).
We tried to strike a good balance between tasks, #samples and #features. Furthermore, almost all of our datasets have appeared previously in other works on synthetic sample generation such as those mentioned above and [[Kotelnikov et al., 2022]](https://arxiv.org/abs/2209.15421). We can provide more specific reasoning behind the choice of each particular dataset in case the reviewer is interested.
# (4) Fixed Hyperparameters
For training NRGBoost we tune only the maximum number of leaves per tree and the shrinkage factor since they have the largest impact on the overall results.
The remaining parameters reported in Table 9 are indeed kept fixed because we wanted to have a simpler and less time-consuming hyperparameter tuning.
However, note that some of these entries are not really parameters but rather algorithmic choices inspired by regular boosting.
The remaining are parameters that we found to work well enough, be somewhat redundant or otherwise not have a large impact in results.
This was mostly determined over the course of developing the method, either in early experiments on toy data or one-off runs in a single dataset/fold.
While we acknowledge that we do not report this extra experimentation, it is because of its limited nature and the fact that once the method was crystallized we never really tried to experiment with changing these parameters.
Tuning some of these (like the maximum ratio in each leaf or increasing the number of rounds of boosting) may actually improve results at the cost of a more expensive tuning.
Due to space limitations, we fully explain the rationale behind the choice of each of these parameters in a separate comment in case the reviewer is interested.
# (5) Parameter Range
We believe that the description of our tuning setup in section F.5 may have been misleading and we will make it clearer in the paper.
In fact, we always train DEF models with $2^{12}$ leaves (varying all the inner loop parameters) and only stop hyperparameter tuning if none of these models is better than the best model with $2^{14}$ leaves (the early stopping works per parameter, exiting only the respective inner parameter loop). As a result, we know that decreasing the number of leaves did not yield a better DEF model.
Please let us know if this answers your question or if you would like us to further clarify this point.
# (6) Benchmark Performance
The samples shown in Figure 1 for TabDDPM were generated from the same model and were part of the samples used to compute the ML efficiency results in Table 2 (**note that TabDDPM is second to last on the MNIST dataset**).
The problem is that we have simply not been able to achieve reasonable results with TabDDPM on MNIST even after trying different normalization strategies, significantly increasing the number of training steps or the number of diffusion steps.
Most pixel features on this dataset are bimodal with most samples being either 0 or 255 and very few falling in between. The numerical data generated by TabDDPM is consistently outside the expected range, and is squashed into the two extremes of the range when inverting the input normalization leading to random looking images.
That being said, we share your concerns that the results as presented in the paper may be misleading and we plan to add a disclosure of our inability to obtain a reasonable model with TabDDPM for this dataset.
# Summary
We would like to thank the reviewer for their time and we hope to have adequately addressed all of their concerns. We would kindly ask them to let us know if anything wasn't clear and also to check the other improvements made to the paper listed in the author rebuttal above.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I would like to discuss complexity.
I believe that differences due to variations in implementation (e.g., programming languages) are not important. If all benchmarks are not implemented consistently during evaluation, the comparison becomes unequal, and no claim of superiority can be made. Additionally, it seems that implicit parallelization is also being done through techniques like sampling, so it is difficult to objectively understand the actual complexity by only looking at the computation time.
Since your paper is empirical, I believe a fair assessment can only be made by properly evaluating the trade-off between performance and cost.
Could you please describe the complexity of the proposed method and the benchmarks using Big-O notation and compare them? This might better demonstrate the validity of the proposed method rather than just computational speed.
According to the Rebuttal PDF, there are significant variations across different datasets. What is the cause of this?
---
Rebuttal 2:
Title: Fixed Hyperparameters
Comment: Below we explain our rationale for the choice of the fixed hyperparameters for NRGBoost:
- `num_rounds`: In earlier versions of the method, before we introduced importance sampling and rewrote our Gibbs sampling implementation in C, going above 200 rounds was prohibitively expensive. While increasing this will tend to always improve results as long as the shrinkage parameter is also being tuned (and using early stopping), we decided to keep 200 rounds as the overall cost of sampling during training still scales quadratically with this number. Furthermore, this value is in line with values that already work well for discriminative boosting algorithms like XGBoost or LightGBM (which has a default of 100).
- `splitter`: Inspired by LightGBM's default approach to always split the leaf that leads to the best improvement in the objective. This allows more expressive models with fewer leaves than alternative approaches like depthwise splitting (XGBoost default), which wastes splits on leaves that don't really lead to much improvement.
- `line_search`: As explained in the paper, this typically allows the model to converge faster by taking larger steps in the beginning of training where the quadratic approximation we optimize is farther from the likelihood. Incidentally, we find that the step size tends to settle closer to 1 as training progresses, which is the optimal step size predicted by the quadratic approximation we try to optimize.
- `max_ratio_leaf`: We found early on with toy data that limiting this to small values tends to work better for regularization. Since this also limits the growth of the energy function at each round and therefore reduces the number of samples rejected from the pool we found no reason to increase it. In the end, this plays a similar role to the shrinkage parameter which we already tune so it was a bit redundant to tune both.
As for the sampling parameters, we have always used a similar number of samples to the original training set and this has always worked as a reasonable rule of thumb. Going over that rarely improved the results by much. But since it also never hurts to have more samples and the smaller datasets trained fast enough already, we kept the number 80000 fixed as a rough upper bound to the training set size for all datasets except for covertype (for which we used 4 times more to roughly match its #samples in training).
We chose to use 16 MCMC chains since these run in parallel and that is the number of virtual cores in the CPU used to run the experiments. For covertype we also multiplied the number of chains by 4 to keep the #samples/chain similar.
For the `burn_in` and `p_refresh` parameters we just picked seemingly reasonable values as they don't seem to have a large impact (other than `p_refresh` increasing the sampling time which is why we chose to set it at 10%).
We hope that this clarifies the choices made. Please let us know if you have any further questions.
---
Rebuttal 3:
Title: Additional comments on MNIST results
Comment: We thought we should provide some additional comments on the performance of benchmark methods on MNIST which we could not fit in the rebuttal due to space limitations.
NRGBoost and the other methods we compare to are meant for **tabular data**. As such, they do not benefit from the helpful inductive biases of convolutional neural networks and have no awareness of the spatial relation between pixels.
They are in fact invariant to a permutation of the pixels and need to learn the complex relationships between every pixel and all other pixels.
The dimensionality of this dataset is also higher than other typical benchmarks for tabular data.
All these properties make this problem very challenging for tabular models and (as reviewer **fQFZ** also seems to agree) we consider it a big achievement for NRGBoost to do as well as it does.
We do not believe that other methods underperforming on this dataset is unexpected as it's not really the type of data that tabular models are designed for. Still, we thought it would be interesting to include it as it allows us to visually assess actual samples from the model.
Still, regarding the performance of competing methods, we note that downsampled MNIST was also used as a benchmark in [[Xu et al., 2019]](https://arxiv.org/abs/1907.00503) which introduces TVAE and CTGAN. However, samples from neither of the methods are ever shown in the paper.
At the request of reviewer **trn6**, we have also included a comparison to a tree-based generative model (ARF) which seems to perform better than our original baselines (please refer to the PDF attached to the author's rebuttal).
Finally, ML Efficiency can be very deceptive about the ability of the model to accurately capture the input distribution, $p(x)$.
The most important property to achieve a high score in ML efficiency is that the model captures well the relationship between the target $y$ variable and the remaining input variables $x$ (i.e., has a good internal model for $p(y \vert x)$ since this is what the downstream discriminative model is trying to estimate).
Modeling well the remaining $p(x)$ (which is what is responsible for the images themselves) is secondary because a wrong $p(x)$ only means that the samples are inefficiently placed for optimal learning of the discriminative function.
Reviewer **fQFZ** makes a similar point in their review that models can produce low diversity data that still lead to good classifiers.
We note also that it is surprisingly easy to score well in **classification** on MNIST. Input features (i.e., pixels) are highly redundant for predicting $y$: even modeling the relationship between a small subset of pixels and the target variable can be enough to achieve much better than random accuracy. As an example, using only the **8 pixels** in a central vertical line, XGBoost is already able to achieve an accuracy of 0.649. A model therefore presumably does not even need to model well the relationship between $y$ and all pixels, just a small subset, in order to achieve a good ML efficiency score.
We hope this clarifies some of the reviewer's doubts about the results on this dataset.
---
Rebuttal 4:
Title: Response to Reviewer Comment (1/2)
Comment: We thank the reviewer for their time in reviewing our rebuttal and for being willing to discuss our paper.
Below we outline the computational complexities of all methods but note that these provide only asymptotic guidance.
## Tree-based methods
All tree based methods have to fit the ensemble of trees. The time complexity of this should be similar across all non-random methods (ARF, DEF, NRGBoost). Essentially, it scales as $O(NFTD)$ where $N$ is the number of data points, $F$ the number of features, $T$ the number of trees in the ensemble and $D$ the depth of the trees with the following caveats:
- For NRGBoost the factor $N$ should be replaced by $N + M$, where $M$ is the total number of samples in a pool of samples (see below). As mentioned previously in our comment on fixed hyperparameters, choosing $M$ similar to $N$ works well in practice, so the overall scaling **with the data characteristics** is still $O(NF)$.
- For ARF there is an additional multiplicative factor which is the number of times the adversarial loop runs. This is a complex function of the dataset as the algorithm stops when no improvement can be made. In practice we observe that it is somewhere between 2 and 6 iterations with smaller datasets generally stopping earlier.
### Sampling in NRGBoost
To fit a tree at each round of boosting, NRGBoost requires samples from the model. We keep a pool of samples of size $M$ and before each round of boosting we need to resample a portion of those, which we will call $m$ and assume to be roughly constant across rounds. Note that this sampling has nothing to do with implicit parallelization, as the reviewer's comment seems to imply. It is merely an additional required step of our algorithm which we happen to be able to parallelize effectively.
The total cost of sampling is on average $O(mFT^2 D)$. As can be seen this process also scales linearly with $F$ and $N$ (through $m$). The quadratic scaling with the $T$ hyperparameter is the reason we only train NRGBoost models with 200 trees vs the DEF models that are trained with 1000 trees.
**To summarize, the computational complexity of all tree-based methods scales linearly with both number of data points and number of features as far as the characteristics of the data are concerned.**
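As a back-of-the-envelope illustration of the formulas above (all numeric values below are made up for illustration, not the settings used in the paper), the quadratic dependence on $T$ is why sampling, not tree fitting, dominates the training cost for large ensembles:

```python
# Rough cost model for the complexities above (illustrative constants only).
N, F, T, D = 50_000, 20, 200, 10   # data points, features, trees, depth
M = N                              # sample pool size, chosen similar to N
m = M // 10                        # samples refreshed per round (an assumption)

fit_cost = (N + M) * F * T * D     # O((N+M) F T D): fitting the ensemble
sampling_cost = m * F * T**2 * D   # O(m F T^2 D): total Gibbs sampling cost

# ratio = m*T / (N+M): sampling overtakes fitting once T grows past (N+M)/m,
# which is why NRGBoost uses fewer boosting rounds than the DEF models.
ratio = sampling_cost / fit_cost
```

With these toy numbers the sampling term is an order of magnitude larger than the fitting term, and doubling $T$ would double that gap again.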
As for the variation across different datasets, this is mostly due to these characteristics, both directly and indirectly:
- **Directly:** Datasets with a larger $NF$ factor will be slower which is the main reason that MNIST is the slowest followed by either miniboone or covertype.
- **Indirectly:** The characteristics of the dataset also change what are the best hyperparameters. For tree-based models, the number of leaves (depth $D$) is perhaps the biggest factor where datasets that favor larger trees will have slower training times for the best model, but other hyperparameters also have an effect.
## Neural Networks
Computational complexity for neural network methods depends on the architecture of the network and type of method. While we are not experts on the matter we can make a few educated guesses.
As an example, just the matrix multiplication in an input layer for $T$ training steps with a batch size $B$ would be $O(TBFH)$ where $H$ is the number of hidden output units of this layer.
This is similar to tree-based methods' scaling with $NF$ when we assume that a fixed number of epochs are used to train the model (i.e., $T$ is proportional to $N/B$).
Depending on the method there would be other multiplicative factors such as the number of diffusion timesteps and there are also the other layers of course.
These other layers may be responsible for most of the training time in practice and their input and output sizes are hyperparameters that may be chosen not to scale with $F$.
These models can also leverage massive parallelism from the use of GPUs which makes these comparisons based on computational complexity not very helpful in our opinion.
# Summary
We believe all methods scale similarly with the characteristics of the data under reasonable assumptions.
However this does not tell the full picture as they also scale differently with the very different hyperparameters of each method.
Controlling for computational time is difficult because there is hyperparameter tuning in the loop which might favor hyperparameters that lead to slower or faster models depending on the dataset.
We will follow up with an additional comment justifying our benchmarking approach and our statements due to space limitations.
---
Rebuttal 5:
Title: Response to Reviewer Comment (2/2)
Comment: We agree with the reviewer that a fair benchmarking of all methods is important for any claim of superiority.
In regards to tree-based models, we believe that we gave every possible advantage to our original baselines:
- We proposed DEF models in section 4 of our paper as an improvement over the original Density Estimation Trees (DET), introducing two algorithmic improvements (bagging and changing from ISE metric to KL divergence) in order to get them to perform well and to have a worthy baseline.
- Overall, DEF models took **longer** to run hyperparameter tuning and single models are usually slower to train. Despite this they still achieve lower performance consistently.
- NRGBoost tends to scale much better with depth of the trees than DEF models and in many datasets we could reduce the number of leaves with only a small performance penalty.
In contrast, we had to push DEF models to the limit of what our implementation would allow to even have reasonable performance. If we trained DEF models with trees of only 1024 leaves (typical for NRGBoost) performance would be abysmal in general.
Generally, the performance gap between NRGBoost and DEF models is so large that even if we were to further handicap NRGBoost by, e.g., reducing the number of trees and the maximum number of leaves it would still come out on top.
As a result we believe that our original statement that NRGBoost performs better than the additive tree-based ensemble models is accurate.
For neural network methods, because we are not experts, we chose to rely on guidance from the existing literature when it comes to hyperparameter tuning.
But since these are such different methods, running on different types of hardware and with very different hyperparameters and tuning requirements, it can be particularly challenging to achieve an experimental setup that everyone will agree is fair. We try to find a setup that we believe is representative of what a practitioner may get when using these methods and, as for our tree-based baselines, generally try to err on the side of caution. As an example, we give neural network methods the benefit of using ML Efficiency directly as the hyperparameter tuning metric, which is one of the metrics they are later evaluated on.
Nonetheless, we are committed to being as fair as possible when presenting our results and we value the feedback from the reviewer on this issue. We welcome any further suggestions or recommendations on this matter that could improve our paper.
Best regards,
The authors
---
Rebuttal Comment 5.1:
Comment: Thanks for your response. I have read your response.
---
Reply to Comment 5.1.1:
Comment: Thank you for acknowledging our response. We want to reiterate that we are open to discussing any remaining concerns you might have. | Summary: This paper proposed an energy-based generative boosting algorithm analogous to XGBoost, which can be used as a generative model as well as be applied to discriminative tasks.
Strengths: The energy-based boosting is novel. The proposed method is capable of both generative sampling and discriminative tasks, enabling broad methodological applications.
Weaknesses: My concerns regarding the proposed method and the experiments are detailed in the questions section.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. For Table 1:
Why is NRGBoost shown in bold when it is outperformed by XGBoost? In other words, why is XGBoost used as a baseline?
Is there a particular reason for the SEs being reported separately in the supplement?
2. For the sampling experiments in section 6.2, is the authenticity of synthetic samples measured in some ways? (In an extreme case, if the synthetic samples are almost replicating training data, they will achieve high ML efficiency and good discriminator measures)
3. How is the scalability?
4. Can the proposed method be used on (unordered) categorical data, which is very common in tabular data?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I do not identify significant limitations other than the ones discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. Below are our responses to each individual question.
# Question 1
Our main goal is to compare generative methods and we mark as bold the best **generative** model on each dataset. We will add a note to the caption of Table 1 to make this clear since XGBoost was only meant to provide a reference for the best discriminative performance that could reasonably be achieved by a good **discriminative** model on these datasets.
In practice we expect generative models to have worse performance because discriminative models are trained to estimate $p(y \vert \mathbf{x})$ directly and not the more general joint distribution $p(y, \mathbf{x})$ which is a harder problem but also one that yields more flexibility in applications.
We therefore believe that NRGBoost coming close to the XGBoost baseline in many datasets is already a positive result.
Regarding the SEs being reported separately, we agree that this was a poor decision on our part. We will update all the tables in the main paper to report SE alongside the average values. Please check the PDF attached to the author's rebuttal for examples of the new formatting. Thank you for raising this issue.
# Question 2
In the context of generative models, memorizing the training data is an extreme case of overfitting where the model learns the empirical training distribution instead of approximating the idealized distribution that generates the data.
Regarding the ML efficiency metric reported in Section 6.2, we share the reviewer's concern that it is susceptible to favoring overfitted models: in the extreme case where the training data is memorized, a method would simply achieve the same performance as that obtained by training XGBoost directly on the original training data.
We note however that it is one of the most commonly used metrics when evaluating synthetic data (e.g.,
[[Xu et al., 2019]](https://arxiv.org/abs/1907.00503),
[[Kotelnikov et al., 2022]](https://arxiv.org/abs/2209.15421)).
This is why we believe is important to have other metrics that are not as susceptible to overfitting and our discriminator measure setup was designed with this in mind.
We train the discriminator (XGBoost) to distinguish between synthetic samples from the model and samples from held out data (validating also using held out data).
Since the probability that the same sample appears both in the training set for the generative model and the held out data should be very small (or effectively zero if the distribution is at least in part continuous), this discriminator can simply learn to predict 1 for all synthetic data in its training set.
This might seem like an overfitted discriminator, but if the generative model was simply outputting a subset of the training samples, such a discriminator would be expected to also perform well in our held-out test set comprising synthetic data and real held-out data, and therefore yield a poor discriminator measure.
Finally, we note also that for density models, overfitting to the empirical training distribution would also likely cause poor single variable inference results in the same way an overfitted discriminative model would.
As a result, at least for density models, we can check overfitting by comparing train and test single variable inference results and we can also check for methods that over-perform in ML Efficiency when compared to their single variable inference results.
# Question 3
The computational and memory cost of fitting the trees in NRGBoost should, in theory, be the same as for any other tree-based model such as XGBoost, with the caveat that we have a larger dataset comprising both real data and a pool of samples from the model. The scaling of this tree-fitting with the number of samples is linear as well as with the number of features, rounds of boosting and depth of the trees.
The main computational cost of our algorithm, however, is the Gibbs sampling that needs to be performed before every round of boosting to resample a fraction of the samples in our sample pool.
As mentioned in the paper, at round $t$ of boosting this cost scales in the worst case as $O(tNFm)$, with $m$ being the number of samples, $F$ the number of features and $N$ the number of internal nodes in each tree (in practice only a small subset of nodes needs to be traversed for each sample).
The total cost of sampling over $T$ rounds of boosting therefore scales as $O(T^2NFm)$, assuming that $m$ samples are drawn per round.
The memory cost of sampling is not a real concern as besides the space required to store the samples (and the trees) we make no significant use of extra memory.
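For readers unfamiliar with the resampling step, the Gibbs update resamples one feature at a time from its conditional under the unnormalized model density. A generic sketch over a toy discrete energy (not the authors' optimized C implementation; names and the toy energy are ours):

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_step(x, energy, domains):
    """One Gibbs sweep: resample each coordinate j from its conditional
    p(x_j | x_-j) proportional to exp(energy(x)), over a finite domain."""
    x = x.copy()
    for j, domain in enumerate(domains):
        logits = np.array([energy(np.r_[x[:j], v, x[j + 1:]]) for v in domain])
        p = np.exp(logits - logits.max())           # stabilized softmax
        x[j] = domain[rng.choice(len(domain), p=p / p.sum())]
    return x

# Toy energy on a 4x4 grid that rewards the two coordinates agreeing
domains = [np.arange(4), np.arange(4)]
energy = lambda x: 2.0 * (x[0] == x[1])

x = np.array([0.0, 3.0])
samples = []
for _ in range(500):
    x = gibbs_step(x, energy, domains)
    samples.append(x)
agree = np.mean([float(s[0] == s[1]) for s in samples[100:]])  # well above 4/16
```

Each sweep must evaluate the energy for every candidate value of every feature, which is where the per-sample cost discussed above comes from.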
In the author's rebuttal we add the training times required for each of the models selected by hyperparameter tuning on each dataset which can be used to gauge how this scaling plays out in practice as larger datasets and datasets with more features typically also benefit from larger models with deeper trees.
# Question 4
Naturally handling categorical data is one of the advantages of tree-based models, and both NRGBoost and DEF models are able to do it without requiring any tricks such as one-hot encoding or other forms of encoding.
In all of our experiments we used many-vs-many categorical splits at the tree nodes similar to how LightGBM splits categorical data.
We have also implemented one-vs-all splits in our code but have found that these tend to perform slightly worse when using trees of similar depth. 4 of the 7 datasets we use in the experiments section have (multi-class) categorical variables.
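A hedged sketch of how a many-vs-many categorical split can be found efficiently, in the LightGBM spirit mentioned above: sort categories by their mean target and scan a single threshold over that order rather than enumerating all subsets. The gain function and names below are illustrative, not the authors' code:

```python
import numpy as np

def best_categorical_split(categories, targets):
    """Sort categories by mean target, then scan the sorted order for the
    binary partition with the best (illustrative) variance-style gain."""
    cats = np.unique(categories)
    means = np.array([targets[categories == c].mean() for c in cats])
    order = cats[np.argsort(means)]          # scan only len(cats)-1 cut points
    best_gain, best_left = -np.inf, None
    for k in range(1, len(order)):
        left = set(order[:k])
        mask = np.isin(categories, list(left))
        n_l, n_r = mask.sum(), (~mask).sum()
        gain = n_l * targets[mask].mean() ** 2 + n_r * targets[~mask].mean() ** 2
        if gain > best_gain:
            best_gain, best_left = gain, left
    return best_left

# Toy data: categories {0, 1} have low targets, {2, 3} high
cats = np.repeat([0, 1, 2, 3], 50)
y = np.r_[np.zeros(100), np.ones(100)]
left = best_categorical_split(cats, y)   # recovers the {0, 1} vs {2, 3} partition
```

The key point is that the sort reduces the search from $O(2^{k})$ subsets to $O(k)$ threshold positions, which is what makes many-vs-many splits practical inside a tree learner.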
# Summary
We hope to have answered all of the reviewer's questions about our paper in a satisfactory manner and are happy to provide additional clarifications if necessary.
We would also like to kindly ask the reviewer to consider the other improvements made to the paper that were enumerated in the author's rebuttal above and thank the reviewer for their time.
---
Rebuttal 2:
Title: Additional Experiment on Sample Memorization
Comment: We apologize for the additional comment but the reviewer's question about memorizing the training data gave us an idea about an additional experiment that we believe supports our argumentation above and might interest the reviewer.
We introduce a **memorizing** baseline that simply memorizes a fraction of the training set.
When prompted for synthetic samples, it samples from this memorized set with replacement.
We try two settings for this baseline:
- Memorizing 10% of the training set
- Memorizing the full training set
Finally we evaluate the same discriminator setup we use in the paper.
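The memorizing baseline itself is trivial to implement; a sketch with hypothetical names (only the sampler is shown, the discriminator setup is unchanged):

```python
import numpy as np

rng = np.random.default_rng(0)

class MemorizeBaseline:
    """Memorize a fraction of the training set; 'generate' by resampling
    the memorized rows with replacement."""
    def __init__(self, X_train, fraction=0.1):
        n_keep = max(1, int(fraction * len(X_train)))
        idx = rng.choice(len(X_train), size=n_keep, replace=False)
        self.memorized = X_train[idx]

    def sample(self, n):
        return self.memorized[rng.integers(len(self.memorized), size=n)]

X_train = rng.normal(size=(1000, 4))
baseline = MemorizeBaseline(X_train, fraction=0.1)
synthetic = baseline.sample(500)   # every row is an exact copy of a training row
```

By construction every "synthetic" sample is a verbatim training row, making this the worst-case overfitting scenario the discriminator measure should penalize.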
This experiment allows us to gauge how well learning the training empirical distribution (with different sample sizes) performs.
Below we report the average discriminator measure results we obtained when repeating each experiment 5 times as in the paper.
We also include the results for the two best methods (TabDDPM and NRGBoost) for ease of comparison.
Note that these results are ROC AUC for distinguishing synthetic from real (held out) samples and as such **lower is better**.
| Dataset | AB | CH | PR | AD | MBNE | MNIST | CT |
|-----------------|---------|------------|---------|---------|-----------|---------|-----------|
| TabDDPM | 0.818 | 0.667 |**0.628**| 0.604 | 0.789 | 1.000 | 0.915 |
| NRGBoost |**0.625**| **0.574**| 0.631 |**0.559**| 0.993 | 0.943 | 0.724 |
| Memorize (10%) | 0.996 | 1.000 | 0.995 | 1.000 | 0.959 | 0.960 | 0.958 |
| Memorize (100%) | 0.763 | 0.799 | 0.762 | 0.769 | **0.607**|**0.611**| **0.606**|
For the smaller datasets, even the best possible memorization baseline fails to beat the best method on each respective dataset.
For the larger datasets (MiniBooNE, MNIST and Covertype), while memorizing the full training set does beat all other methods, memorizing a more sensible fraction of it is not enough.
As we increase the memorized set, the empirical distribution becomes closer to the data generating distribution and the discriminator measure results therefore improve. We hope, however, that this shows that simply memorizing samples is often not enough to achieve the best performance on this metric, due to the failure to generalize.
We would like to kindly ask the reviewer if their concerns have been resolved and if not, what remaining concerns they have.
We would greatly appreciate this feedback in order to further improve our paper.
Best regards,
The authors
---
Rebuttal Comment 2.1:
Comment: Thanks for the rebuttal and additional experiments. This addressed my concerns and I increase my score.
---
Reply to Comment 2.1.1:
Title: Thank you
Comment: We are glad we were able to address your concerns. Thank you for raising your score. | Summary: The paper proposes to extend the success of tree-based methods in discriminative tasks to generative modelling, which is implemented via an energy-based generative boosting algorithm (NRGBoost). Specifically, NRGBoost directly extends the tree-based tabular models by replacing the discriminative objectives with a generative one, which seems novel to me.
Strengths: 1. The paper has well-founded rationales: (1) Tree-based models are performant in discriminative tasks, and thus they are highly likely to also be performant in generative tasks on tabular data, and (2) existing tree-based generative methods do not preserve the tree structures well.
2. The paper is well-written, especially the notations.
Weaknesses: 1. **[Important]** Some highly relevant benchmark methods are missing, including ARF (tree-based) [1], GOGGLE (diffusion) [2] and TabPFGen (energy-based) [3].
2. In Line 325, the authors claim that the proposed method “significantly” outperforms other methods, while the significance test seems missing.
3. I would suggest the authors add comparison results on the computation efficiency. Because NRGBoost basically employs the same architecture as traditional gradient boosting trees, the computation efficiency should be higher than most other network-based generative models.
4. There seem to be some typos throughout the main text: “I.e.” (Line 272)
5. **[Important]** Code is not provided. I remain conservative about the results claimed in the paper.
[1] Watson, David S., et al. "Adversarial random forests for density estimation and generative modeling." International Conference on Artificial Intelligence and Statistics. PMLR, 2023.
[2] Liu, Tennison, et al. "GOGGLE: Generative modelling for tabular data by learning relational structure." The Eleventh International Conference on Learning Representations. 2023.
[3] Ma, Junwei, et al. "TabPFGen–Tabular Data Generation with TabPFN." NeurIPS 2023 Second Table Representation Learning Workshop.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. How does NRGBoost perform single variable inference (Lines 271-277)?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors detail the limitations of NRGBoost in Section 7.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful and careful review of our work and for the detailed feedback provided.
We will try to address all the weaknesses pointed out below.
# Missing Methods
**ARF:**
As another tree-based density method, we agree that this is a very valuable comparison to make. We have therefore implemented single variable inference for this method ourselves, since the official Python version of ARF does not implement density evaluation.
We have included the main results for this method in the PDF attached to the authors' rebuttal above.
**GOGGLE:**
We were not aware of GOGGLE or TabPFGen and we thank the reviewer for bringing these methods to our attention.
We agree that GOGGLE would make an interesting comparison. We briefly tried to use the implementation available through the `synthcity` python library for the convenience of having a library interface but quickly ran into issues with package versions even after installing the library in a fresh virtual environment.
Given the short time-frame of the rebuttal period, the uncertainty of what other issues we might encounter and our unfamiliarity with the method we decided it was best to focus our efforts on other areas of improvement rather than rush to have results ready in only a few days. So while we can't have these results ready for the rebuttal we will do our best to do this comparison as well, provided we do not run into any further serious issues.
**TabPFGen:**
As for TabPFGen, we did not find an available implementation which makes it difficult to obtain a comparison in such a short time frame. However, after taking a closer look at the paper, we also believe that this method, while interesting, would not be a good fit for our current evaluation setup for a couple of reasons:
- It requires a categorical target which makes it incompatible with our regression scenarios.
- The authors of [[Ma et al., 2024]](https://arxiv.org/abs/2406.05216) acknowledge that dataset size is a limitation due to the inability of TabPFN's inference step to handle large datasets as input. The largest datasets used in [Ma et al., 2024] have 2000 instances, which leads us to believe that this method would be impractical on the datasets that we use, as it would require significant downsampling.
- Finally, as explained in our experiments section, our setup hinges on the training of the generator being agnostic to what the target variable is. For all the generative models that we train, $y$ is not conditioned on and is treated as any other input variable. But in TabPFGen, the $y$ variable is generated directly from the $p(y|x)$ of a pre-trained classifier (TabPFN) which would make it unfair when evaluating single variable inference or ML efficiency over the same $y$ variable compared to the other methods. We also did not compare to [[Correia et al., 2020]](https://arxiv.org/abs/2006.14937) due to the same concern.
Note also that, similarly to TabPFGen, when a preferred $y$ variable exists, a generative model can always be used to learn only $p(x)$ while a state of the art discriminative model can be used to provide the $p(y|x)$ (e.g., XGBoost). This is another reason why we chose to focus on general fitness of the $p(y, x)$ instead.
# Significance Test for Single Variable Inference
While our original intention with the statement was merely to comment on the large gap in the results we agree that it is easy to misinterpret.
We have therefore verified that a paired t-test rejects the null hypothesis of equal means when comparing NRGBoost to all the other density methods (including now ARF) in Table 1 on all datasets (at a confidence level of 95%).
Of additional note is that the same test fails to reject this null hypothesis when comparing NRGBoost to XGBoost for the abalone (p=0.329), california (p=0.679) and protein (p=0.709) datasets.
We can add additional tables to appendix G with the p-values if the reviewer believes it adds value.
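For concreteness, a paired t-test of this kind compares matched per-fold scores of two methods. A minimal sketch (the fold scores below are hypothetical illustrations, not our actual results):

```python
import math

def paired_t_statistic(a, b):
    """t statistic and degrees of freedom for a paired t-test on matched scores."""
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((x - mean_d) ** 2 for x in d) / (n - 1)  # sample variance of differences
    return mean_d / math.sqrt(var_d / n), n - 1

# Hypothetical per-fold scores for two methods over 5 cross-validation folds.
t, df = paired_t_statistic([0.91, 0.89, 0.92, 0.90, 0.93],
                           [0.85, 0.84, 0.88, 0.83, 0.87])
# Reject the null hypothesis of equal means at the 95% level if |t| exceeds
# the two-sided critical value of the t-distribution (2.776 for df = 4).
```

The test is paired rather than independent-samples because both methods are evaluated on the same cross-validation folds, so fold-to-fold variation cancels in the differences.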
# Computational Efficiency
We have added a comparison of the time taken to fit the best models found by hyperparameter tuning for each dataset to the PDF attached to the authors' rebuttal above.
Note that while the tree fitting part of the algorithm should, in principle, be comparable to other tree-based discriminative algorithms such as XGBoost, NRGBoost requires additional Gibbs sampling at the beginning of each boosting round.
After recent improvements to our tree fitting code, sampling now represents the biggest cost of training an NRGBoost model (~70% on average by our estimates). Note however that:
- There is still margin for optimization of our tree-fitting code which is not nearly as optimized as a mature implementation such as XGBoost.
- Gibbs sampling can extract more benefit from parallelization if a higher core count CPU is used.
# Typos
We thank the reviewer for raising this point. We spell-checked the text before submission but have detected a few typos and other small issues since then. We will thoroughly check the text again.
# Code
We have sent an anonymized link with our code to the AC so that they may share with reviewers as per the instructions we have received for code sharing.
# Single Variable Inference
Given an energy function $f(y,\mathbf{x})$ for $q_f(y,\mathbf{x})$, $q_f(y\vert\mathbf{x})$ can be computed as:
$$q_f(y\vert\mathbf{x})=\frac{\exp f(y,\mathbf{x})}{\sum_{y^\prime}\exp f(y^\prime,\mathbf{x})}$$
This is essentially a softmax, involving only computing $\exp f(y, \mathbf{x})$ for all possible values of $y$ (which can be done efficiently for a tree based model) and normalizing.
This could definitely be clearer in the paper, but we had to cut this explanation to meet the page limit. We will try to restore it in a future version of the paper if given the opportunity.
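To make the normalization concrete, here is a minimal sketch of computing $q_f(y\vert\mathbf{x})$ from per-value energies (for a tree ensemble, each $f(y, \mathbf{x})$ is a sum of leaf values; the energy list below is purely illustrative):

```python
import math

def conditional_from_energies(energies):
    """Normalize energies f(y, x) over all candidate values of y into q(y|x)."""
    m = max(energies)                         # subtract the max for numerical stability
    exps = [math.exp(e - m) for e in energies]
    z = sum(exps)                             # the partition function over y
    return [e / z for e in exps]

# Energies f(y, x) for three candidate values of y at a fixed x (illustrative).
probs = conditional_from_energies([2.0, 1.0, 0.0])
```

Since only the values of $y$ are enumerated (with $\mathbf{x}$ held fixed), the normalization sum is cheap whenever $y$ takes finitely many values.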
We would like to thank the reviewer for their time and we hope to have adequately addressed all of their concerns. We are happy to provide additional clarifications if necessary.
---
Rebuttal 2:
Title: GOGGLE comparison
Comment: We apologize for the extra comment, but given the fast-approaching end of the discussion period, we thought it better to anticipate any questions the reviewer might have and to provide an update on our ongoing efforts to add GOGGLE as a baseline, as the reviewer originally requested.
We have managed to get a working version of the synthcity library with the GOGGLE plugin (and we can't stress enough that this was not an easy task).
So far, we have only managed to train models with (mostly) the library's default hyperparameters on 5 of the datasets but, in the process, have faced significant challenges which we outline below.
# Slow training
The method is considerably slower than all the other methods we compare:
- As an extreme example, training a model for 500 epochs on `covertype` took **more than 12 hours** at the maximum batch size used in the paper (128).
Note that the paper (and the library default) uses 1000 epochs (with early stopping) which could lead to more than 1 full day to train a single model!
- We haven't been able to run GOGGLE on the `MiniBooNE` or `MNIST` datasets as it seems that it **scales poorly with the number of features in the dataset**.
For context, **a single epoch on the `MiniBooNE` dataset takes 17 minutes to complete**.
- Even for the datasets where we are able to train models, the slow training times render us unable to do hyperparameter tuning in a reasonable timeframe like we do for all the other methods.
# Lackluster Performance
While we would like to explore the hyperparameter space for this model better, the results we have obtained so far with the default hyperparameters have yielded poor performance in general.
On all of our regression datasets we consistently get $R^2$ values below 0 for the ML efficiency metric.
While we are suspicious of these results and will need to investigate further we note that the paper appears to focus on binary classification tasks so we have no frame of reference.
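As a reminder of why $R^2 < 0$ signals such poor performance: the score drops below zero exactly when the downstream regressor does worse than always predicting the mean of the targets. A self-contained illustration (toy numbers, not GOGGLE outputs):

```python
def r2_score(y_true, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot.
    Negative whenever the predictions are worse than the constant mean predictor."""
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Predictions far from the true targets yield a strongly negative score.
r2 = r2_score([1.0, 2.0, 3.0, 4.0], [5.0, 5.0, 5.0, 5.0])
```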
On classification datasets, trained models tend to only generate data from majority classes:
- On the adult dataset, using our training setup, the trained model exclusively produces samples of the majority class.
- On the covertype dataset, it also rarely produces samples of any of the classes outside of the top two (out of 7 classes). This makes performance on this dataset poor as well.
- Even if we could conditionally generate based on the class, this goes against the idea of our setup which, as we explained above, is meant to produce models that are good for any inference task over any potential variable, not just a specific one. It would, furthermore, make it unfair to all other methods for which we don't use conditional generation (even though we could for some of them, including NRGBoost).
# Conclusion
We will continue to investigate our options and explore different choices of hyperparameters to try to achieve a setup that we believe is fair, both to GOGGLE and the other methods.
As it stands, given the long training times, we do not believe we will be able to provide a meaningful comparison with this baseline; at least not in any of the larger datasets.
However we would very much welcome any suggestions or recommendations from the reviewer on this subject.
We would also be very thankful if the reviewer could provide any feedback on whether their concerns have been adequately addressed and offer again to clarify any remaining questions that they might have.
We have made a significant effort to comply with every request of the reviewer to the best of what we believe is reasonably possible and we think that this has significantly improved our paper as a result. In light of this we would kindly ask the reviewer to consider re-evaluating their score.
Best regards,
The authors
---
Rebuttal Comment 2.1:
Title: Response to rebuttal
Comment: Thank you for your detailed response and the efforts you've put into addressing the concerns raised. After carefully considering your points and other reviewers' comments, I still believe that my initial evaluation remains valid: a fair and complete benchmark should be necessary before acceptance, so my score remains unchanged.
---
Rebuttal 3:
Comment: We thank the reviewer for their time and regret that our addition of the ARF comparison was not sufficient to address their concerns about a complete benchmark.
We decided to focus our efforts on ARF because it is also a density estimator and thus, the more interesting baseline given that it could be used in the same types of applications as NRGBoost. Unfortunately, this required implementing single variable inference for this method ourselves which consumed a significant portion of our rebuttal week (together with running hyperparameter tuning on 5 cross-validation folds for every dataset).
For the reasons we explained above, we don't believe that TabPFGen would be an appropriate comparison for our setup, even if we were able to run it on datasets of the size that we use.
Furthermore, assuming that we could resolve the performance issues we are currently experiencing with GOGGLE, due to the method being so slow to train we would have to severely limit the number of epochs relative to what was used in the original paper in order to bring training times in line with other methods. We will try to add a comparison along these lines in the future but we are not sure how valuable it will be given these constraints.
Best regards,
The authors | Rebuttal 1:
Rebuttal: We thank all reviewers for taking the time to review our work.
We have been working diligently to incorporate their feedback in order to improve the paper.
Below is a list of the main changes we have done as a result.
We kindly ask the reviewers to also check the attached PDF which includes additional results reflecting these changes.
# Comparison to ARF
We added a comparison to the tree-based density method **ARF** [[Watson et al., 2023]](https://arxiv.org/abs/2205.09435) as requested by reviewer **trn6**.
We implemented our own density evaluation for this method in order to be able to compare to the remaining density methods on our single variable inference task.
The hyperparameter tuning performed is similar to DEF models where the maximum number of leaves per tree and minimum number of samples per node are tuned using grid search.
Because ARF's single variable inference results were close to NRGBoost's on the covertype dataset we also made the effort to run 5-fold cross-validation on this dataset (for all density methods).
We believe that the overall results obtained (reported in the attached PDF) help solidify NRGBoost's claim as the top performing generative model capable of density estimation.
# Wasserstein Distance
We added an additional metric for comparing synthetic samples (Wasserstein distance) as recommended by reviewer **fQFZ**.
We follow the same setup used in [[Jolicoeur-Martineau et al., 2024]](https://proceedings.mlr.press/v238/jolicoeur-martineau24a.html) for its evaluation.
While the results (reported in the attached PDF) don't favor NRGBoost as heavily as our reported discriminator measure, it still ranks the best on average across all generated datasets.
Please refer to the rebuttal to reviewer **fQFZ** for a more in-depth discussion about this metric.
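For a single numeric feature, the 1-Wasserstein distance between two equal-size empirical samples reduces to matching sorted order statistics. A minimal sketch (the full evaluation setup of [Jolicoeur-Martineau et al., 2024] aggregates over features and may differ in details):

```python
def wasserstein_1d(a, b):
    """W1 distance between two equal-size 1-D empirical samples."""
    assert len(a) == len(b)
    # With equal sample sizes, the optimal transport plan pairs sorted values.
    return sum(abs(x - y) for x, y in zip(sorted(a), sorted(b))) / len(a)

# Shifting every point of a sample by 1 gives distance exactly 1.
d = wasserstein_1d([0.0, 1.0, 2.0], [1.0, 2.0, 3.0])
```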
# Standard Deviation Reporting
As recommended by reviewers **QQ7t** and **UJVb** we will change all tables in the paper to report standard deviations alongside mean values (in the same format as the attached PDF).
# Computational Efficiency Improvements and Comparison
We have made significant improvements to our tree-fitting code, greatly reducing the training times originally reported for NRGBoost in Table 13.
Fitting the trees was previously the most time consuming part of training a NRGBoost model but now it represents only ~30% of the training time on average across all the datasets.
While we believe there is still plenty of margin for improvement, we are now much more confident that the new training times are representative of what an optimal implementation can achieve.
We report these new training times for NRGBoost as well as for every other method for the best model selected by hyperparameter tuning in the attached PDF.
A few caveats about these results though:
- We do not report the training times for the RFDE method because our implementation is sub-optimal and it should, in theory, be virtually free when compared to the other methods. This is because the splitting process is random and depends only on the input domain. The data itself is only required for computing leaf probabilities which is inexpensive.
- Many of the optimizations done for tree-fitting of NRGBoost models unfortunately do not extend to our implementation of DEF models. As a result, we don't believe the numbers we report for this method are very representative of what an optimal implementation could do.
- Currently, the biggest computational cost for training a NRGBoost model is the Gibbs sampling. This process can efficiently leverage higher parallelism than what we used in the experiments (16 virtual cores).
# Fix erroneous results for TVAE method
We realized that due to a configuration change while debugging, only the first 10 trained models were considered during model selection for the TVAE method in the MNIST and covertype datasets.
We have fixed this issue resulting in better performance for this method in the ML Efficiency metric but no (visible) change in our discriminator measure to the precision that is reported.
We hope that these changes resolve most of the remaining concerns about our work.
Many thanks,
The authors
Pdf: /pdf/6e67ce876785eed81b6989c4fffa211280701bf1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
CSPG: Crossing Sparse Proximity Graphs for Approximate Nearest Neighbor Search | Accept (poster) | Summary: This paper proposes CSPG as a novel graph index for vector similarity search. CSPG divides the dataset into sub-datasets with some common routing vectors. It builds a separate graph for each sub-dataset; during search, graph traversal is first conducted on one sub-dataset to quickly approach the neighborhood of the query, and then switches via the routing vectors across all sub-datasets to explore all vectors. Experiments show that CSPG improves QPS at the same query recall for three existing graph indices.
Strengths: S1: Vector similarity search is an important problem, and graph indices provide state-of-the-art performance. Thus, improving graph indices is a hot topic.
S2: Theoretical analysis is conducted for CSPG.
S3: The experiment results are promising, i.e., CSPG seems a general technique that can accelerate different graph indices.
Weaknesses: W1: The relation to HNSW needs to be clarified. In particular, HNSW shares similar idea with CSPG, i.e., the upper-level graphs contain vectors sample from the dataset and used to quickly approach the neighborhood of the query; moreover, the upper-level graphs are searched with ef=1 while the bottom level graph is searched with much larger ef (corresponding to the two-stage search in the paper). In principle, HNSW can also change the graph index used for each layer once the vectors for each layer are determined. Thus, the question becomes the difference between a hierarchical graph (as in HNSW) and some parallel graphs with overlaps. Can HNSW match the performance of CSPG by tuning its parameters?
W2: Experiment should be enhanced. (1) Larger datasets should be used. Even if billion-scale datasets do not fit in common machines, datasets with 100M scale should be used. (2) Include NSG, as it is also widely used and shown to achieve good performance of the experimented datasets. (3) Try harder datasets. It is reported that the SIFT and Deep datasets are relatively easy [16] while Text-to-Image and Turing ANNS are more difficult. These datasets can be found at the NeurIPS’21 vector search challenge.
W3: Requiring a larger overlap (i.e., \lamada=0.5) between the sub-datasets and can only use a small number of sub-datasets.
Technical Quality: 2
Clarity: 3
Questions for Authors: I will raise the overall rating if the relation between this work and the hierarchical graph of HNSW is clarified.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank you for carefully reviewing our work and for providing such insightful suggestions. We have provided a Rebuttal PDF in the global Rebuttal, which contains all additional experimental setup and results.
## For Q1: The Relation between HNSW and CSPG
Our framework shares similarities with HNSW in utilizing redundancy to connect different graphs and in varying candidate sizes (in HNSW, $ef_1$ is always set to 1, while in CSPG, $ef_1$ can be tuned to a value larger than 1). More importantly, they differ in the following three aspects:
- **From the perspective of redundancy.** For the standard HNSW index, there are redundant vectors between different levels to connect the adjacent levels. But in the CSPG framework, redundant vectors are shared by independent sparse proximity graphs built for different partitions. When using CSPG with an underlying HNSW index, there are redundant vectors between different levels in an HNSW, as well as redundant vectors between different HNSWs. The redundancy in an HNSW is used to connect the smaller upper-level graphs with the larger lower-level graphs within the partition, whereas the redundancy between HNSWs is used to connect completely different HNSWs (one vertically, the other horizontally). The roles of these redundancies are not the same.
- **From the perspective of structure.** While HNSW is a hierarchical graph structure, CSPG serves as a graph framework rather than a specific index structure. CSPG is not confined to a particular structure (hierarchical or flat), allowing it to enhance query performance across a broad range of mainstream graph indices. HNSW constructs a structure that transitions from sparse to dense unidirectionally. In contrast, CSPG builds a framework horizontally with smaller, sparser proximity graphs.
- **From the perspective of searching.** HNSW searches each level from top to bottom, and the final results are obtained from the lowest-level graph. CSPG, after approximating the query on one sparse proximity graph (First Stage), traverses back and forth between different sparse proximity graphs (Second Stage), collecting final results from each sparse proximity graph. We might traverse graph $\mathcal{G}_1$ for a while, then *jump* to another graph $\mathcal{G}_2$. After some time, we may return to $\mathcal{G}_1$ again. However, in HNSW, there is no *jump* from a lower level to an upper one, since the upper levels are merely subsets of the bottom level. This difference enhances CSPG's searching accuracy: since HNSW's candidate results only come from the large bottom-level graph, the accuracy of the results is closely tied to the quality of the bottom-level graph. CSPG, however, traverses between different sparse proximity graphs and collects candidates from them, making the answers more diverse and robust. Additionally, Algorithm 1 has been proven to have an expected search length equivalent to that of a single sparse proximity graph (Theorem 2), while the expected search length for a smaller sparse proximity graph (sub-dataset) is shorter than that of a complete large graph, which means that CSPG achieves higher accuracy with fewer computations.
## For W2: Additional Experiments
Following your suggestions, we have added the experiments as shown in Figure 2 (Rebuttal PDF) and Figure 6 (Rebuttal PDF).
**Larger dataset.** We compared the query performance (k=10) of HNSW and CSPG-HNSW on the SIFT100M dataset. We kept the hyperparameters consistent with those in Appendix D. The final results show that CSPG maintains its advantage even at the 100M scale.
**Inclusion of NSG and harder dataset.** We included the NSG and added comparisons for the Text2image (1M base vectors, 200 dimensions) and MSTuring (1M base vectors, 100 dimensions). The parameter settings were kept consistent with those described in Appendix D. The comparison results demonstrate that we still maintain an advantage even in more complex datasets.
## For W1: Can HNSW be better than CSPG by parameters-turning?
In principle, CSPG partitions the dataset to make each proximity graph sparser and smaller to traverse, with a shorter expected search length, as proven in Section 5. As a result, **CSPG-HNSW should always be better than HNSW**. To demonstrate this, we conducted experiments on SIFT1M, GIST1M, and DEEP1M, focusing on baselines such as Vamana, HNSW, NSG, and HCNNG. The parameter selection sets for the different indices are shown in Table 2 (Rebuttal PDF). We obtained the optimal parameters for HNSW from all combinations, as shown in Figure 4 (Rebuttal PDF). We then chose the parameters with the highest QPS when Recall@10 equals $0.9 \pm 5e^{-3}$, as shown in Table 3 (Rebuttal PDF), and ran the comparison (QPS-Recall@10) between the graph-based methods (baselines) and CSPG (our solution) with these optimal parameters. The results are shown in Figure 5 (Rebuttal PDF). CSPG-HNSW outperforms HNSW even with HNSW's best parameters.
## For W3: Partition number $m$
We require a similar distribution between different partitions, and we achieve this by vector redundancy. However, increasing the number of partitions does not necessarily lead to better query performance (please refer to Figure 5 in the submitted paper). The reason is that large $m$ may decrease the similarity of distribution among different partitions, which contradicts the assumptions discussed in Section 5.4.
---
Rebuttal Comment 1.1:
Title: Remaining concerns for the paper
Comment: Thank you for the detailed feedback!
For the relation to HNSW, I agree that (c) HNSW search can only move from upper level to lower level, but CSPG can move between different graphs. However, (a) that HNSW and CSPG use redundancy differently and (b) that CSPG is a general framework are still questionable. For (a), how does such a difference in redundancy affect performance, and why is using redundancy in the way CSPG does better? For (b), HNSW can also be a general framework if we use different indexes for different levels of the graphs.
For parameter tuning, I am thinking about adjusting the data sampling ratios for different layers of HNSW. Sorry for the confusion in my initial review. In the extreme, we may use only 2 levels and sample 50-75% of the vectors for the top index (like CSPG) in HNSW. Then the only difference corresponds to (c) discussed by the authors, i.e., HNSW search can only move from upper level to lower level, but CSPG can move between graphs. Comparing with such a baseline can establish the advantage of CSPG.
For partition number m, I am not convinced that if we use too many partitions (e.g., m=10), the data distribution will be different from the original dataset. If we sample the dataset uniformly, the distribution will be similar to the dataset; vector datasets are very large, and thus a small sampling ratio (e.g., 10%) still produces a sufficiently large sample set. I think the reason is that the initial dataset should be large enough that the starting points found in it serve as good starting points for the other datasets.
To summarize, my remaining concern is direct comparison with the hierarchical index baseline discussed above.
---
Rebuttal 2:
Comment: We thank you for feedback about your concerns.
## For Q1 (Relation between HNSW)
**(a)** In HNSW, redundancy is characterized by the lower level containing all vectors from the upper level (i.e., $\mathcal{D} _{upper} \subset \mathcal{D} _{lower}$). However, in CSPG, there is a common overlap between partitions, and the remaining points are distinct (i.e., $\mathcal{D}_1 \cap \mathcal{D}_2 = RV, (\mathcal{D}_1 / RV) \cap (\mathcal{D}_2 / RV) = \emptyset$), where $RV$ denotes the redundant vectors. This difference allows HNSW to only jump downward unidirectionally, while CSPG can traverse across multiple subgraphs. This enables CSPG to collect more diverse and robust candidates from multiple subgraphs, leading to higher recall with less computation.
**(b)** CSPG is a general framework that can optimize mainstream graphs. The idea of viewing HNSW as a framework is interesting. We agree that by replacing different levels with other indexes, HNSW could also be seen as a framework. To the best of our knowledge, no one has modified HNSW in this way yet. We are willing to further discuss in the revised paper.
## For Q2 (Parameter Tuning)
Following your suggestions, we conducted experiments using the parameters of $M=32$ and $efc=128$ for NSW, setting $m=2$ for CSPG and $h=2$ for HNSW. We then varied the sampling ratios $\lambda \in [0.2, 0.5]$. The experimental results of QPS-Recall@10 on SIFT1M are shown in the table below.
| Recall@10 ($\lambda = 0.2$) | 0.86 | 0.91 | 0.95 | 0.96 | 0.99 |
| --------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| QPS (HNSW) | $5.84 \times 10^4$ | $4.45 \times 10^4$ | $3.20 \times 10^4$ | $2.79 \times 10^4$ | $1.45 \times 10^4$ |
| QPS (CSPG-NSW) | $6.35 \times 10^4$ | $4.88 \times 10^4$ | $3.49 \times 10^4$ | $3.05 \times 10^4$ | $1.45 \times 10^4$ |
| Recall@10 ($\lambda = 0.3$) | 0.75 | 0.95 | 0.96 | 0.98 | 0.99 |
| --------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| QPS (HNSW) | $9.19 \times 10^4$ | $3.43 \times 10^4$ | $2.96 \times 10^4$ | $2.20 \times 10^4$ | $1.46 \times 10^4$ |
| QPS (CSPG-NSW) | $1.02 \times 10^5$ | $4.26 \times 10^4$ | $3.69 \times 10^4$ | $2.34 \times 10^4$ | $1.53 \times 10^4$ |
| Recall@10 ($\lambda = 0.4$) | 0.91 | 0.95 | 0.96 | 0.98 | 0.99 |
| --------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| QPS (HNSW) | $4.55 \times 10^4$ | $3.76 \times 10^4$ | $3.25 \times 10^4$ | $2.28 \times 10^4$ | $1.50 \times 10^4$ |
| QPS (CSPG-NSW) | $6.60 \times 10^4$ | $5.41 \times 10^4$ | $4.44 \times 10^4$ | $2.97 \times 10^4$ | $1.63 \times 10^4$ |
| Recall@10 ($\lambda = 0.5$) | 0.91 | 0.95 | 0.96 | 0.98 | 0.99 |
| --------------------------- | ------------------ | ------------------ | ------------------ | ------------------ | ------------------ |
| QPS (HNSW) | $4.45 \times 10^4$ | $3.52 \times 10^4$ | $3.22 \times 10^4$ | $2.33 \times 10^4$ | $1.51 \times 10^4$ |
| QPS (CSPG-NSW) | $6.81 \times 10^4$ | $5.44 \times 10^4$ | $4.51 \times 10^4$ | $3.43 \times 10^4$ | $1.69 \times 10^4$ |
As $\lambda$ increases, CSPG increasingly outperforms HNSW. Moreover, on the SIFT1M dataset, CSPG exhibits better optimal performance.
## For Q3 (Partition Number $m$)
We agree with your analysis of m. **More accurately, we not only need to consider the size of the dataset but also the dimensionality and value range (i.e., the size of the vector space).** It is harder to keep consistent distributions across different sub-datasets in high-dimensional spaces. Therefore, if we consider both factors while still maintaining consistent distributions across sub-datasets, that would be effective, which means that the different sub-datasets can then provide good starting points for each other. | Summary: The article considers the task of graph-based approximate nearest neighbor (ANN) search. It provides a general method for building several sparse overlapping proximity graphs, and a method for searching them. The proposed algorithm can be used in combination with any kind of existing proximity graph. The experimental results demonstrate that the proposed method improves performance of the state-of-the-art graph methods (HNSW, HCNNG, Vamana).
Strengths: The method is novel and the explanation why it work is intuitive. It is also widely applicable, since it can be used in combination with any kind of proximity graph. The ablation experiments are comprehensive.
Weaknesses: The only issue I find is the experimental verification. While the methodology is otherwise sound, I would like to see a complete hyperparameter sweep for all the baselines, as in the ANN-benchmarks project that is used to select the data sets. Since DiskANN and HNSW have hyperparameter grids in ANN-benchmarks, I see no excuse for not running the experiments using these grids (as the data sets used in the experiments are small benchmark data sets with sample sizes of 1M, this should be computationally feasible) and reporting the results with the optimal hyperparameters. This would (1) verify that the performance improvement holds at the optimal hyperparameters, i.e., that the proposed algorithm really improves the state of the art; and (2) rule out the possibility that the observed performance improvement of the proposed method over the baseline is due to the additional hyperparameter exploration enabled by its additional hyperparameters. Also, the experimental code should be provided for reproducibility (at least I did not find the code or a link to it anywhere).
Technical Quality: 3
Clarity: 3
Questions for Authors: The improvement provided by your method is smallest for HNSW. Do you think that this is because also HNSW uses overlapping graphs with randomly selected overlap nodes (in HNSW the overlapping graphs are hierarchical and searched in a fixed order, whereas in your method flat, and searched in a different fashion), and thus there is more redundancy? It would be interesting to see an analysis of similarities and differences between your method and HNSW.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: I see no inherent limitations, except the issues with the empirical verification that I addressed above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for carefully reviewing our work and for your insightful suggestions and interesting topics. We have provided a Rebuttal PDF in the global Rebuttal, which contains all additional experimental setups and results. Regarding reproducibility, we have provided our source code for all experiments and algorithms to the PC/AC/SAC.
## For W1: Hyperparameters grid
Regarding the hyperparameter sweep on baseline parameters that you mentioned, we performed the sweep on the datasets SIFT1M, GIST1M, and DEEP1M, focusing on baselines such as Vamana, HNSW, NSG, and HCNNG. The parameter selection sets for the different indices are shown in Table 2 (Rebuttal PDF). We obtained a parameter grid (all combinations) and evaluated the QPS-Recall@10 benchmark as shown in Figure 4 (Rebuttal PDF). Next, we chose the parameters with the highest QPS when Recall@10 is within $0.9 \pm 5 \times 10^{-3}$, as shown in Table 3 (Rebuttal PDF). Finally, we ran the comparison (QPS-Recall@10) between the graph-based methods (baselines) and CSPG (our solution) with the optimal hyperparameters obtained in this way. The results are shown in Figure 5 (Rebuttal PDF).
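The selection rule described above — among all grid points whose Recall@10 falls within the $0.9 \pm 5 \times 10^{-3}$ band, take the one with the highest QPS — can be sketched as follows (the parameter names and numbers are illustrative, not the paper's results):

```python
def pick_operating_point(runs, target_recall=0.9, tol=5e-3):
    """runs: list of (params, qps, recall_at_10) tuples from a grid sweep.
    Returns the run with the highest QPS whose recall lies within
    target_recall +/- tol, or None if no run qualifies."""
    eligible = [r for r in runs if abs(r[2] - target_recall) <= tol]
    if not eligible:
        return None
    return max(eligible, key=lambda r: r[1])

# Illustrative sweep results: (params, QPS, Recall@10)
runs = [
    ({"M": 16, "ef": 100}, 9500.0, 0.887),   # recall below the band
    ({"M": 16, "ef": 200}, 7200.0, 0.901),
    ({"M": 32, "ef": 200}, 6100.0, 0.903),
    ({"M": 32, "ef": 400}, 3900.0, 0.942),   # recall above the band
]
best = pick_operating_point(runs)
print(best)  # → ({'M': 16, 'ef': 200}, 7200.0, 0.901)
```

The same rule is then applied per index and per dataset to fix each baseline's operating point before the head-to-head comparison.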
## For Q1: The Relation between HNSW and CSPG
Our framework shares similarities with HNSW in utilizing redundancy to connect different graphs. However, there are some differences.
- **From the perspective of redundancy.** CSPG-HNSW does generate more redundancy. On the one hand, there are redundant vectors between different levels within each sparse proximity HNSW graph built on a partition. On the other hand, there are also redundant vectors between different sparse proximity HNSW graphs. However, the redundancy in HNSW is used to connect the smaller upper-level graphs with the larger lower-level graphs within a single partition, whereas the redundancy between sparse proximity HNSW graphs is used to connect two completely different HNSWs (one vertically, the other horizontally). The roles of these redundancies are not the same. Moreover, this cost is converted into further optimization of HNSW, with not only the hierarchical structure enhancing the searching of each HNSW, but also the faster and more precise searches brought about by traversing across sparser proximity graphs.
- **From the perspective of structure.** While HNSW is a hierarchical graph indexing structure, CSPG serves as a graph framework rather than a specific index. CSPG is not confined to a particular structure (hierarchical or flat), allowing it to enhance query performance across a broad range of mainstream graph indices. HNSW constructs a structure that transitions from sparse to dense unidirectionally. In contrast, CSPG is a framework that horizontally builds smaller, sparser proximity graphs.
- **From the perspective of searching.** HNSW explores each level from top to bottom, and the final results are obtained from the lowest-level graph. CSPG, after approximating the query on one sparse proximity graph (First Stage), traverses back and forth between different sparse proximity graphs (Second Stage), collecting final results from each sparse proximity graph. We might traverse graph $\mathcal{G}_1$ for a while, then *jump* to another graph $\mathcal{G}_2$. After some time, we may return to $\mathcal{G}_1$ again. However, in HNSW, there is no *jump* from a lower level to an upper one, since the upper levels are merely subsets of the bottom level. This difference enhances CSPG's searching accuracy: since HNSW's candidate results only come from the large bottom-level graph, the accuracy of the results is closely tied to the quality of the bottom-level graph. CSPG, however, traverses between different sparse proximity graphs and collects candidates from them, making the answers more diverse and robust. Additionally, Algorithm 1 has been proven to have an expected search length that is equivalent to that of a single sparse proximity graph (Theorem 2), while the expected search length for smaller sparse proximity graphs (sub-datasets) is shorter than that of a complete large graph, which means that CSPG achieves higher accuracy with fewer calculations.
---
Rebuttal Comment 1.1:
Comment: Thanks for a detailed answer. I would still like to see a comparison where your method is compared to the baselines so that the Pareto frontiers with the optimal hyperparameters are shown (like in ANN benchmarks), but acknowledge the limits set by the one week author response period. However, the provided additional experimental results where more hyperparameters are explored improve the experimental validation. In view of this, I upgrade my score from "borderline accept" to "accept".
I assume that you add the discussion clarifying the differences to HNSW, and the results in the ANN-benchmarks framework to the revised version of the article when you are done running the experiments.
---
Reply to Comment 1.1.1:
Comment: Thank you for the valuable feedback, which helps to strengthen the presentation and the evaluation of the proposed method. Following your suggestions, we will use ANN-Benchmarks to conduct more comprehensive experiments with a wider range of hyperparameters to show the different Pareto frontiers of the various methods. Also, the differences and relation between HNSW and CSPG will be discussed more thoroughly in the revised version. | Summary: This paper studies the problem of approximate nearest neighbor search (ANNS), and proposes an optimized solution for existing proximity graph-based solutions. The main idea of this solution is to decompose the original dataset into several partitions, build a proximity graph for each partition, and consider crossing proximity graphs during the searching procedure. Finally, experiments are conducted on three public datasets.
Strengths: S1. The paper studies an important problem.
S2. The proposed solution has theoretical analysis.
S3. Experiments are conducted on public datasets.
Weaknesses: W1. The novelty is limited.
W2. The experimental evaluation is inadequate.
W3. Important issues are not well considered in the algorithm design.
W4. Recent studies are not carefully reviewed.
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1. The idea of random partitioning is quite simple, and the technical novelty is quite limited.
Q2. Since the idea is simple, I wish the authors could consider more on practical issues, such as dynamic updates (eg insertions and deletions) that are widely considered in recent studies. However, they are ignored in the algorithm design. Moreover, due to the usage of random partitions, the proposed solution can be less flexible to handle dynamic updates.
Q3. More recent solutions from the related studies in the references, which have been demonstrated to be more efficient than the chosen ones (eg the standard HNSW), should be compared.
Q4. In the experimental setup, it is unclear whether the index and dataset are stored in main memory, SSD, or disk. By contrast, the selected baseline work (eg [45]) has considered more than one kind of data storage.
Q5. In the experimental study, the impact of k should be also tested.
Q6. Based on the results in Table 1, it seems that the proposed solutions trade construction time and index size for better query efficiency. However, the rationale of this trade-off is not well discussed.
Q7. Since the experimental results have shown that the improvement is 1.5x to 2x, which is marginal, the theoretical result of speedup analysis is not meaningful enough.
Q8. Since several efficient solutions to ANNS have been proposed in recent years, they should be more carefully discussed in terms of the pros and cons. Besides, the following studies are also related:
[1] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search. ACM SIGMOD 2024
[2] Towards Efficient Index Construction and Approximate Nearest Neighbor Search in High-Dimensional Spaces. VLDB 2023
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Please refer to the weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reading our work and providing detailed suggestions. We have provided a Rebuttal PDF in the global Rebuttal, which contains all additional experimental setup and results.
## For Q1 and W1: Novelty and Contributions
Most existing methods that adopt the idea of partitioning data employ clustering-based partitioning (e.g., k-means). For example, DiskANN and IVF-based approaches use k-means, which disrupts the original distribution of the given dataset and puts similar vectors into the same buckets. In this paper, we propose a totally different paradigm, i.e., making the distributions of all sparse proximity graphs (SPGs) as similar as possible, which allows us to divide and conquer the entire dataset. It also ensures that the load is distributed across different groups. To achieve this, we employ simple and efficient random partitioning. This strategy is simple yet effective, as shown through both theoretical and empirical analysis. We believe the technical novelty is sufficient, mainly lying in the following:
- We are the first to present a novel framework named CSPG, that utilizes random partitioning for ensuring a consistent distribution of graph indices, producing a smaller SPG for each partition. Moreover, this framework allows mainstream graph indices to be embedded within our framework.
- We develop an efficient two-stage approach for exploring CSPG, with fast approaching and cross-partition expansion. The approach avoids many unnecessary explorations.
- We theoretically prove that with the consistent distribution produced by the random partitioning, CSPG can accelerate the existing graph-based ANNS algorithms by reducing unnecessary explorations.
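To make the partitioning idea concrete, here is a minimal, hypothetical sketch (ours, not the authors' code) of random partitioning with a shared sample of route vectors; each partition then holds roughly $\lambda n + n(1-\lambda)/m$ points, the effective graph size that appears in the analysis:

```python
import random

def random_partition(vector_ids, m, lam, seed=0):
    """Randomly split vector IDs into m partitions with similar distributions,
    then replicate a lam-fraction random sample into every partition as shared
    'route vectors' that bridge the per-partition proximity graphs."""
    rng = random.Random(seed)
    ids = list(vector_ids)
    rng.shuffle(ids)
    parts = [ids[i::m] for i in range(m)]          # uniform random split
    k = max(1, int(lam * len(ids)))
    routes = set(rng.sample(ids, k))               # shared route vectors
    return [sorted(set(p) | routes) for p in parts], routes

parts, routes = random_partition(range(1000), m=4, lam=0.1)
# Every partition holds about n/m own vectors plus the shared routes,
# i.e. roughly lam*n + n*(1-lam)/m points in total.
assert all(routes <= set(p) for p in parts)
```

Because the split is uniformly random, each sub-dataset preserves the distribution of the whole dataset, which is the property the theoretical analysis relies on.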
## For Q4: In-memory Evaluations
We appreciate you pointing out the differences between the two versions of DiskANN. Regarding this concern, we mentioned in Section 6.1 that all evaluations were conducted in memory.
## For Q2 and W3: Dynamic Updates
Handling updates (insertions and deletions) is an interesting topic. Our framework can be easily extended to deal with updates.
- CSPG is a framework based on mainstream proximity graphs (PGs), and current updating methods of underlying graphs are applicable to our framework.
- The random partitioning makes our framework highly flexible for updates. For instance, when inserting a vector, it is randomly assigned to a SPG $\mathcal{G}_i$, while the first stage of Algorithm 1 is only conducted on another SPG $\mathcal{G}_j$. By ensuring $i \neq j$, our framework enables concurrent querying and updating.
- Due to our use of random partitioning, CSPG ensures that each partition fairly shares the load of updates, which is a capability that other partitioning methods cannot achieve.
## For Q8 and W4: Mentioned Recent Studies
Thanks for suggesting two recent works. Among them, RaBitQ [5] is a quantization method with a sharp theoretical error bound while providing good accuracy. LSH-APG [6] also utilizes quantization and designs an efficient graph index, achieving a well-balanced performance between construction and query efficiency. We will include a discussion of the two works in the revised paper.
## For Q5, Q6 and W2: Additional Experiments
Based on our experiments and those of LSH-APG, HNSW, NSG, HCNNG, and DiskANN stand out as the most representative graph indexing algorithms. CSPG outperforms these representative methods across all recall ranges.
**For Q5.** Following the experimental setup of LSH-APG, we compare the average search time of LSH-APG, CSPG-HNSW, CSPG-NSG, CSPG-Vamana, and CSPG-HCNNG on the Audio dataset used in LSH-APG. We fix the recall above 0.99 and vary the value of k, as shown in Figure 1 (Rebuttal PDF). Our framework's construction time ranges from 3.8 to 9.6 seconds, whereas LSH-APG requires about 1.6 seconds to build.
**For Q6.** According to [2,4], graph-based indices are indeed the best among various index types for querying, yet the construction cost of a graph is higher. LSH-APG [6] does achieve a good balance between query and build. With the development of technologies (e.g., RAG), there are increasing demands on latency, and our framework specifically addresses these needs. Additionally, the time and space costs for construction with CSPG are controllable. By tuning the sampling ratio $\lambda$ and the number of partitions $m$, we can effectively manage the trade-off between query performance and construction costs.
**For W2.** We also added harder datasets to the evaluation, such as Text2Image, MSTuring, and SIFT100M, which can be found in NeurIPS'21 [7]. The results are shown in Figure 2 (Rebuttal PDF) and Figure 6 (Rebuttal PDF).
## For Q3: Representative Baselines
While there are some approaches that outperform our baselines on certain datasets, their performance varies across datasets. For example, NGT-panng performs better than Vamana on MNIST-784, but worse than Vamana on glove-100-angular. Since we propose a framework for mainstream PGs and aim for a more general comparison, we selected HNSW, NSG, Vamana, and HCNNG as our baselines, all of which rank highly in ANN-Benchmark [2]. Previous studies [1,4,6] also chose these indices, further demonstrating that our baselines are among the most representative ones.
## For Q7: Speedup Analysis
Our method is not a specific index but a framework that is applicable and beneficial to mainstream PGs. The speedup analysis in Section 5 proves that our framework can consistently provide optimization, lending strong support to our approach. Since the experiments may be influenced by various factors like dataset, graph and parameters, such theoretical analysis is necessary and general enough to explain why CSPG requires less computation.
On the other hand, Subramanya et al. [3] demonstrate that Vamana slightly surpasses HNSW in terms of QPS. Our framework outperforms mainstream PGs by 1.5-2x, making our improvement substantial (not marginal) in the field of ANNS.
---
Rebuttal Comment 1.1:
Title: Response to the author feedback
Comment: Dear authors,
I have read the rebuttal, and thank you for considering my suggestions. Based on the rebuttal, some of my concerns have been addressed, but some of them still remain:
(1) As explained in my Q1, since the idea is not entirely new, I wish more efforts could be made on practical issues, such as dynamic updates. However, it is not enough to simply claim that "your framework is flexible," since quite a few studies have already considered dynamic updates. The claim should be verified through experimental evaluations.
(2) Recent studies should also be compared in the experiments (if possible). Besides, based on the current results (including the supplemental PDF), it seems that the proposed framework has its own trade-off among query latency, space cost, and recall (potentially). When considering dynamic updates, this problem could be much more complex. Furthermore, when applying your framework into RAG, dynamic updates seem to be even more important to be considered.
Overall, the rebuttal helped address some concerns, and I will consider the rebuttal when making the final decision.
Best regards,
---
Rebuttal 2:
Comment: We greatly appreciate the reviewer's suggestions and feedback for improving our article.
**For Q1.** On the one hand, our idea is new and not a simple combination of existing methods. To the best of our knowledge, we are the first to utilize random partitioning to enhance the similarity in the distribution of all sparse proximity graphs (SPGs). This approach allows us to search across different SPGs in the Crossing SPG (CSPG) framework and accelerate a wide range of graph-based methods. On the other hand, we have not abandoned dynamic updates, even though many existing approaches, such as LSH-APG, have effectively addressed this issue and can also be integrated into our framework. As discussed in the Rebuttal, we will further consider improvements to our framework that fully leverage the isolation of different subgraphs to separate queries and updates (please refer to the pseudocode below), and will provide a more comprehensive evaluation to validate its flexibility. However, due to the time limit, we may not be able to fully complete the evaluation. We will include the work you mentioned, all additional evaluations, and your valuable suggestions in our revision.
> We use the original insert and delete functions of the underlying subgraphs with atomic locks to update CSPG, and preserve a set $RV$ to maintain all route vectors’ IDs. Searching on CSPG has a low probability of conflicting with dynamic updating operations, which allows the separation of queries and updates, as we only search or update on a certain subgraph at a time while the others are idle.
>
> **function** $Insert$ (vector $x$)
>
> $\quad$assign a random $i$, insert $x$ to underlying graph $\mathcal{G}_i$
>
> $\quad$if $x$ is selected as route vector then insert $x$ to $\mathcal{G}_j$ if $i \neq j$, for $j \in \{1,2,...,m\}$ and insert $x$ into $RV$
>
> **function** $Delete$ (vector $x$)
>
> $\quad$remove $x$ from $\mathcal{G}_i$, for $i \in \{1,2,...,m\}$
>
> $\quad$if $x$ is a route vector then remove $x$ from $RV$
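A runnable toy version of the pseudocode above (our illustrative sketch: subgraphs are modeled as plain ID sets, and the atomic locks are omitted) could look like:

```python
import random

class CSPGUpdater:
    """Toy model of the Insert/Delete pseudocode: m subgraphs are plain
    sets of vector IDs; route vectors are replicated into every subgraph."""
    def __init__(self, m, route_prob):
        self.graphs = [set() for _ in range(m)]
        self.route_prob = route_prob   # chance an insert becomes a route vector
        self.RV = set()                # IDs of all route vectors

    def insert(self, x):
        i = random.randrange(len(self.graphs))
        self.graphs[i].add(x)                      # insert into a random subgraph
        if random.random() < self.route_prob:      # promoted to route vector
            self.RV.add(x)
            for j, g in enumerate(self.graphs):
                if j != i:
                    g.add(x)                       # replicate into the others

    def delete(self, x):
        for g in self.graphs:
            g.discard(x)                           # remove from every subgraph
        self.RV.discard(x)

u = CSPGUpdater(m=3, route_prob=0.2)
for x in range(100):
    u.insert(x)
u.delete(5)
assert all(5 not in g for g in u.graphs) and 5 not in u.RV
```

In a real index, `add`/`discard` would call the underlying graph's own insert/delete routines, so the subgraphs' existing update machinery is reused unchanged.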
**For Q2.** In our additional experiments, due to time constraints and the limited relevance of RaBitQ to graph-based methods, we did not include a comparison with it. However, we have provided comparisons with LSH-APG in terms of query performance and construction costs on the Audio dataset; please refer to the relevant analysis in the Rebuttal and Figure 1 (supplemental PDF). On the other hand, **our framework never limits the underlying graphs’ advantages (e.g., dynamic updating)**, so their original algorithms and strategies are also employed within our framework, which is why we did not pay much attention to dynamic updating before. We will incorporate your valuable suggestion, consider dynamic updates more from the overall framework side, and conduct a more comprehensive evaluation across different goals and trade-offs.
To sum up, we sincerely appreciate your constructive feedback on the dynamic updates and your valuable time. We will include the related studies you mentioned, our additional experiments, and your valuable suggestions in our revision.
---
Rebuttal Comment 2.1:
Title: Response to the rebuttal
Comment: Dear authors,
I acknowledge your further responses. When discussing with the other reviewers on the final recommendation, I will consider the rebuttal.
Thanks, | Summary: The manuscript proposes a novel schema called "Crossing Sparse Proximity Graphs" (CSPG) which defines an proximity graph (PG) for the graph-based approximate nearest neighbor search (ANNS) problem. Searching on top of the CSPG is provably faster than existing proximity graphs (which derived from other ANNS algorithms, such as the relative neighborhood graph or the MST graph). The authors supported their work by extensive experimental results.
Strengths: The experiments are convincing. And the idea of a two-stage search process to create the CSPG is novel and may open new doors for more innovations.
I liked that the authors say (see Line 149), "When the number of partitions m = 1, CSPG falls into the special case that builds a PG index over all vectors, which is consistent with most state-of-the-art graph-based ANNS algorithms." This makes their proposal a generalized counterpart of existing works.
Weaknesses: The theoretical analysis is daunting (see Section 5.4). It is not clear why CSPG is always faster.
Technical Quality: 3
Clarity: 2
Questions for Authors: It would be nice if the authors can compare their method with other ANNS algorithms. Is their method memory efficient? Is it more suitable for high-dim or low-dim vectors? Is their method better when the vectors are clustered or uniform?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors may optionally consider a complete benchmark with ann-benchmarks.com or use https://github.com/erikbern/ann-benchmarks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for reading our work and providing helpful suggestions. We have provided a Rebuttal PDF in the global Rebuttal, which contains all additional experimental setup and results.
## For Weakness: Additional Explanation for Section 5.4
CSPG is a framework designed to accelerate graph-based ANNS approaches. We randomly partition the dataset into multiple groups, where **the data distribution within each group is similar (and similar to that of the original dataset)**; we then construct sparse proximity graph indexes on the different subsets. We design a crossing-group retrieval algorithm (Algorithm 1) for this framework. The algorithm initially searches within a specific sparse proximity graph (First Stage) to get sufficiently close to the query. In the second stage, the algorithm uses routing vectors (redundancy) as bridges to traverse between multiple sparse proximity graphs. This traversal allows the query results to be derived from multiple sparse proximity graphs rather than a single large graph, resulting in greater diversity and robustness. Intuitively, as each sparse proximity graph is sparser and smaller with a shorter expected search length, we can achieve higher accuracy with fewer distance computations.
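To illustrate the two-stage idea, here is a toy sketch (ours, not the paper's Algorithm 1: the greedy graph walk is replaced by exact search inside each partition, which keeps the example short while preserving the candidates-from-every-partition structure):

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=(1000, 16))
m, lam = 4, 0.05
perm = rng.permutation(len(data))
parts = [perm[i::m] for i in range(m)]                  # random partitions
routes = rng.choice(len(data), int(lam * len(data)), replace=False)
parts = [np.union1d(p, routes) for p in parts]          # shared route vectors

def search(ids, q, k):
    """Stand-in for a greedy graph walk: exact k-NN inside one partition."""
    d = np.linalg.norm(data[ids] - q, axis=1)
    return ids[np.argsort(d)[:k]]

q = rng.normal(size=16)
# Stage 1: approach the query inside one sparse partition.
cands = list(search(parts[0], q, k=5))
# Stage 2: cross to the other partitions (via route vectors in the real
# algorithm) and gather candidates from every sparse graph.
for p in parts[1:]:
    cands.extend(search(p, q, k=5))
best = search(np.array(sorted(set(cands))), q, k=1)[0]
true = search(np.arange(len(data)), q, k=1)[0]
assert best == true   # candidates from all partitions recover the true NN
```

The final answer is assembled from candidates found in every partition, which is the source of the diversity and robustness described above.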
Section 5 provides the theoretical explanation and analysis by introducing a novel concept, AMSNET, a more practical and robust theoretical graph model based on MSNET [1]. While MSNET requires the graph search process to continually approximate the query, AMSNET allows the search process to take detours, provided that these detours are as small as possible. Mainstream graph methods generally satisfy this model under suitable parameters, since a good graph model cannot afford to have too many meaningless detours in its search process. The expected search path length of MSNET [1] is $\mathcal{O}(n^{\frac{1}{d}} \log n^{\frac{1}{d}} / \Delta r)$, and based on this, we derive that AMSNET's expected search length is $\mathcal{O} \left( \frac{w}{\Delta r} \left(\lambda n + \frac{n (1 - \lambda)}{m} \right)^{\frac{1}{d}} \log \left(\lambda n + \frac{n(1- \lambda)}{m}\right)^{\frac{1}{d}} \right)$, where $\lambda < 1$ is the sampling ratio of CSPG, $m$ is the number of partitions, and $w$ reflects the probability of detours occurring in the search process, increasing as more detours occur. It is proven in Theorem 3 that $w$ does not decrease as $n$ increases.
**The bottleneck of the ANNS problem lies in the distance comparisons**, which corresponds to the node expansions in graph search. The Speedup of a query can be expressed as
$$
Speedup = \frac{\text{PG's comparison}}{ \text{CSPG's comparison}} = \frac{\text{PG's search length} \times \text{PG's degree}}{ \text{CSPG's search length} \times \text{CSPG's degree}} = \frac{\text{PG's search length}}{ \text{CSPG's search length}}.
$$
Since we fix the degree of a node in both PG and CSPG to be no more than $\sigma$ (in ANNS, the degree of each node in the graph tends to be around the maximum degree of the graph), the speedup ratio is equal to the ratio of the expected search lengths of PG and CSPG. As $n$ varies, this ratio is no less than 1 (even as $n \rightarrow \infty$), demonstrating that CSPG can always work. Figure 9 in the submitted paper also indicates that CSPG requires fewer comparisons to achieve the same accuracy.
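As a quick numeric illustration (ours; the detour factor $w$ is set to 1 and all constants in the $\mathcal{O}(\cdot)$ expressions are dropped), plugging representative values into the two expected-length terms gives a ratio above 1:

```python
import math

def expected_length(n, d):
    # n^{1/d} * log n^{1/d} term from the MSNET-style analysis (constants dropped)
    r = n ** (1.0 / d)
    return r * math.log(r)

n, d, lam, m = 10**6, 16, 0.05, 4
n_eff = lam * n + n * (1 - lam) / m        # effective size of one sparse graph
speedup = expected_length(n, d) / expected_length(n_eff, d)
print(round(speedup, 2))  # → 1.19
```

The illustrative ratio stays above 1 because each sparse graph is built on only $\lambda n + n(1-\lambda)/m < n$ points; the actual guarantee in Section 5 additionally accounts for $w$.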
## For Questions: Comparison with other approaches
**Memory efficiency.** According to [4], graph-based indices are indeed the best among various index types for query performance, yet the memory footprint of a graph index is correspondingly larger. Though CSPG has a larger index size, it is still affordable, since the memory consumption of the vector data is much larger than that of the indices, which is also the reason why Vamana [3] puts the index in memory.
**Dimension.** Whether dealing with high-dimensional or low-dimensional vectors, graph algorithms generally provide better support compared to other methods (such as quantization-based methods), as shown in the ANN-Benchmark [2]. This is precisely why graph-based approaches stand out among various ANNS methods. Since CSPG optimizes mainstream graph algorithms, it also performs well in both high-dimensional and low-dimensional scenarios. For example, as detailed in Section 6, our method effectively supports datasets like SIFT1M with 128-dimensional vectors and GIST1M with 960-dimensional vectors.
**Clustered/Uniform dataset.** To analyze the performance of CSPG on clustered and uniform data, we conduct comparative experiments on SIFT1M, GIST1M, DEEP1M, Text2Image, MSTuring, a generated clustered dataset (GCD), and a generated uniform dataset (GUD). The Hopkins statistic for these datasets (which tends toward 0 when the data is uniform and toward 1 when the data is clustered) is shown in Table 1. The performance over the different distributions is shown in Figure 3 (Rebuttal PDF), where CSPG performs well in optimizing both uniform and clustered data.
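For reference, one common formulation of the Hopkins statistic can be sketched as below (our illustrative code, not the paper's exact computation; conventions for the statistic vary, but higher values consistently indicate more clustered data):

```python
import numpy as np

def hopkins(X, n_samples=50, seed=0):
    """One common form of the Hopkins statistic: H = sum(u) / (sum(u) + sum(w)),
    where u are NN-distances from uniform random probes to the data and
    w are NN-distances from sampled data points to the rest of the data.
    Higher values indicate a more clustered data set."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    lo, hi = X.min(axis=0), X.max(axis=0)
    probes = rng.uniform(lo, hi, size=(n_samples, d))
    samples = X[rng.choice(n, n_samples, replace=False)]
    u = [np.linalg.norm(X - p, axis=1).min() for p in probes]
    # sort and take index 1 to exclude the sampled point's zero self-distance
    w = [np.sort(np.linalg.norm(X - s, axis=1))[1] for s in samples]
    return sum(u) / (sum(u) + sum(w))

rng = np.random.default_rng(1)
uniform = rng.uniform(size=(500, 2))
clustered = np.vstack([rng.normal(c, 0.02, size=(250, 2)) for c in ([0, 0], [1, 1])])
print(hopkins(uniform) < hopkins(clustered))  # clustered data scores higher
```

Generated datasets like the GCD/GUD mentioned above can be validated the same way before running the index comparisons.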
## For Limitations: ANN-Benchmark
We are conducting experiments using ANN-Benchmark [2] to compare CSPG with other ANNS methods on various datasets. To ensure fairness, we run all the programs on the same machine (please refer to Appendix D), but completing all of them requires considerable time. We will include all the results after finishing these experiments. | Rebuttal 1:
Rebuttal: We sincerely appreciate all four reviewers for their instructive suggestions on our submitted paper. Their suggestions have been immensely helpful in improving our work and enhancing our experiments. To respond to the relevant points of concern, we have conducted additional experiments and included the results in the Rebuttal PDF. We will carefully reflect on and correct the shortcomings of our methodology to enhance its readability and comprehensibility in the final version. Detailed responses to each reviewer's comments are provided below. Due to space constraints, we have listed all references here.
## References
[1] Fu C, Xiang C, Wang C, et al. Fast Approximate Nearest Neighbor Search With The Navigating Spreading-out Graph[J]. Proceedings of the VLDB Endowment, 12(5).
[2] Aumüller M, Bernhardsson E, Faithfull A. ANN-Benchmarks: A benchmarking tool for approximate nearest neighbor algorithms[J]. Information Systems, 2020, 87: 101374.
[3] Jayaram Subramanya S, Devvrit F, Simhadri H V, et al. Diskann: Fast accurate billion-point nearest neighbor search on a single node[J]. Advances in Neural Information Processing Systems, 2019, 32.
[4] Wang M, Xu X, Yue Q, et al. A comprehensive survey and experimental comparison of graph-based approximate nearest neighbor search[J]. Proceedings of the VLDB Endowment, 2021, 14(11): 1964-1978.
[5] RaBitQ: Quantizing High-Dimensional Vectors with a Theoretical Error Bound for Approximate Nearest Neighbor Search. ACM SIGMOD 2024
[6] Towards Efficient Index Construction and Approximate Nearest Neighbor Search in High-Dimensional Spaces. VLDB 2023
[7] https://github.com/harsha-simhadri/big-ann-benchmarks
Pdf: /pdf/63eb59f9ce662cb6d956943ddded6596cedb6f59.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Neuro-Vision to Language: Enhancing Brain Recording-based Visual Reconstruction and Language Interaction | Accept (poster) | Summary: This paper proposes a framework that integrates 3D brain structures with visual semantics using a Vision Transformer 3D. By aligning fMRI features with multiple levels of visual embeddings, it eliminates the need for subject-specific models and allows extraction from single-trial data. The extractor consolidates multi-level visual features into one network, simplifying integration with LLMs. The topic is intriguing, and the proposed method offers practical applications.
Major concerns and minor comments include:
1. Benchmarking with Real-World Datasets: As a novel machine learning approach, the performance of the proposed method should be benchmarked with more real-world fMRI datasets to evaluate the generalizability of the results. The current study may be limited in scope or sample size, and using a wider variety of datasets will help demonstrate the robustness and scalability of the approach across different subjects and conditions. It would be beneficial to include datasets with diverse characteristics, such as different brain regions, tasks, and populations, to ensure comprehensive evaluation.
2. Discussion on Advantages and Disadvantages: The author(s) should discuss the advantages and disadvantages of the proposed method in the field of neuroscience and brain decoding. This discussion should include a comparison with existing approaches, highlighting the unique contributions and potential limitations of the new method. Additionally, insights into the interpretability and explainability of the proposed method would be valuable. For instance, how does this method enhance our understanding of brain activity patterns? Are there any trade-offs between model complexity and interpretability? Addressing these questions will provide a clearer picture of the method’s potential impact and areas for improvement.
3. Details on Cross-Validation: Please provide more details about the cross-validation used in the empirical studies, perhaps in Section 4. It is essential to specify the type of cross-validation technique employed (e.g., k-fold, leave-one-out) and the rationale behind its selection. Detailed information on the partitioning of the data, the number of folds, and any stratification strategies used will help in understanding the robustness of the validation process. Additionally, discussing the metrics used for evaluation and how they were computed across different folds will add clarity to the reported results.
4. Pseudocode for the Proposed Method: The proposed method can be summarized in the form of pseudocode (algorithm). Providing a step-by-step algorithmic representation will make the methodology more transparent and easier to reproduce. The pseudocode should outline the key steps involved in data preprocessing, feature extraction, model training, and inference. Including comments within the pseudocode to explain the purpose of each step and any critical hyperparameters or configurations will further enhance understanding.
5. Minor Typos and Grammar Mistakes: There are some minor typos and grammatical mistakes throughout the paper. A thorough proofreading and editing process is recommended to improve the overall readability and professionalism of the manuscript.
Strengths: Please see the Summary section
Weaknesses: Please see the Summary section
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the Summary section
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see the Summary section
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank you for the thorough review and positive feedback on our work. Your comments are invaluable for the improvement of our method and the revision of the manuscript. Let us address each individual comment/question below.
1. **Benchmarking with Real-World Datasets:**
We fully recognize the importance of demonstrating our model's robustness across diverse datasets. The Natural Scenes Dataset (NSD) that we utilized is the largest available for natural scene visual stimulation and includes data from multiple subjects. While there are other datasets, their smaller size and variations in acquisition equipment often limit their suitability for benchmarking our method comprehensively. Our framework is specifically designed to be scalable across different subjects without requiring subject-specific parameters, a feature that greatly enhances its potential for broader application.
This scalability is a fundamental strength of our approach, as demonstrated by consistent performance across various subjects, which is detailed in Section 4.2 and Appendix A of the manuscript. We are actively planning to extend our evaluations to additional datasets in the future, aiming to further validate and refine our model’s capabilities across an even wider array of conditions and populations.
2. **Discussion on Advantages and Disadvantages:**
Thank you for prompting a deeper exploration of the strengths and potential limitations of our approach. We have detailed our discussions on these aspects in the Related Work section of the manuscript. Our framework's ability to efficiently reconstruct visual stimuli from a single input across diverse subjects is comprehensively evaluated, with results available in Table 2 on page 7 and Figure 4 on page 8.
Additionally, the integration of a large language model has expanded our system's functionality, enabling it to perform a range of natural language processing tasks, from simple descriptions to complex reasoning. This not only demonstrates the versatility of our approach but also ensures its adaptability to various research and clinical settings.
In addressing interpretability, we utilize Gradient-weighted Class Activation Mapping (Grad-CAM) to localize semantic concepts within specific brain regions effectively. This technique enhances our understanding of how cognitive functions correlate with neural activity, allowing us to map textual information directly to brain signals. Our approach's accuracy is further validated by ablation studies where the removal of concept-related brain signals confirms the efficacy of our localizations, as discussed extensively in Section 4.4.
Following your suggestion and insights from Reviewer k95k, we are exploring additional interpretability techniques and plan further experiments to enhance our model’s explainability without compromising its effectiveness. We appreciate your insights and are committed to continuously refining our method to enhance its robustness and applicability.
3. **Details on Cross-Validation:**
We have detailed our dataset preprocessing and validation strategy in Section 4.1 on page 6 of the manuscript and the first three sections of the appendix. The NSD uses a standard 9:1 split for the training and validation sets, a common methodology that ensures a fair comparison with other methods. This split was determined during the dataset's compilation, where a fixed set of images shown to all subjects was designated as the validation set, allowing for consistent and comprehensive comparisons across studies. We appreciate your attention to this aspect of our work and are committed to ongoing improvements in our research methodologies.
4. **Pseudocode for the Proposed Method:**
To enhance clarity and reproducibility, we've included a pseudocode summary of our method in the appendix. Here's the markdown-formatted pseudocode:
```markdown
### Pseudocode for Neuro-Vision to Language Framework
- **Input**: fMRI data, Visual Stimuli
- **Output**: Visual reconstructions, Semantic interactions
**Procedure**:
1. **Preprocess fMRI Data**:
- Normalize and segment fMRI data to prepare it for feature extraction.
2. **Feature Extraction with Vision Transformer 3D (ViT3D)**:
- Apply ViT3D to the preprocessed patches to extract spatial feature embeddings.
3. **Align Features with Visual Embeddings**:
- Align fMRI features with visual embeddings from CLIP and VAE, optimizing with MSE loss.
4. **Integrate with Large Language Model (LLM)**:
- Feed aligned features into LLM for fine-tuning; generate natural language responses.
5. **Visual and Semantic Reconstruction**:
- Convert semantic embeddings back to visual space; perform concept localization and dialogue-based interactions.
```
This pseudocode comprehensively outlines the steps from data preprocessing to final inference, detailing critical hyperparameters and methodological choices, such as feature alignment techniques and the integration of visual and language models. This structured algorithmic representation should facilitate researchers in replicating our approach and exploring its potential adaptations. Additionally, we have provided key code for all experiments in the supplementary materials, ensuring that other researchers can directly replicate and validate our results.
5. **Minor Typos and Grammar Mistakes:**
We have rigorously proofread the manuscript to correct all typographical and grammatical errors.
We thank you again for your insightful critique, which has significantly enhanced the quality and clarity of our research presentation. We remain receptive to further feedback and are prepared to refine our manuscript as needed to meet the expectations of the readership.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their feedback. After reviewing the comments from other reviewers and the authors’ responses, I still find this paper interesting with potential applications. Therefore, I am maintaining my current score and support publishing this work.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback. We’re pleased that you find our work interesting and support its publication. | Summary: This paper presents an innovative framework that leverages Vision Transformer 3D (ViT3D) to integrate 3D brain structures with visual semantics, enhancing visual reconstruction and language interaction from fMRI data. By aligning fMRI features with visual embeddings through a unified feature extractor and integrating with LLMs, the framework enhances decoding capabilities, enabling tasks such as brain captioning, complex reasoning, concept localization, and visual reconstruction. The framework demonstrates exceptional performance and holds significant potential for applications in neuroscience and human-computer interaction.
Strengths: Provides a multi-task visual neural decoding framework that is simpler and more elegant compared to previous methods, offering a robust example for integrating visual and neural signals with unified modeling. Introduces a method for precise localization of language-based concepts within brain signals simply using the GradCAM method.
Weaknesses: The Dual-Stream fMRI Feature Extractor is already a common method in the field, but the combination of the two features appears suboptimal. The authors thoroughly discuss the trade-off between low-level and high-level features. However, based on the results in fig 4, the reconstructed images’ low-level features are inferior to previous methods.
Technical Quality: 4
Clarity: 4
Questions for Authors: I am curious if the method of combining low-level and high-level features in eq4 is the most effective. While other works also use this approach, it seems to force the model to trade off between the two types of features. Are there potentially better methods?
In visual reconstruction, the method with LLM appears effective only when the beta value is low. Does it overlap in function with the CLIP embedding?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discuss the limitations well in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and insightful review. Your positive feedback on the soundness, presentation, and contribution of our work is highly encouraging. We are pleased that you found our framework innovative and robust in integrating visual and neural signals. We will now list your concerns one by one and provide specific responses.
1. **The Dual-Stream fMRI Feature Extractor is already a common method in the field, but the combination of the two features appears suboptimal. The authors thoroughly discuss the trade-off between low-level and high-level features. However, based on the results in fig 4, the reconstructed images’ low-level features are inferior to previous methods.**
Thank you for highlighting this issue. While the dual-stream fMRI feature extractor is indeed common, one of our main innovations is maintaining the spatial structure of the fMRI signals while achieving high-quality cross-subject feature extraction. Our method integrates these features with Vision Transformer 3D (ViT3D) and LLMs. This integration aims to balance the trade-off between low-level and high-level features, which is a known challenge in the field.
In Figure 4a, the difference in decoding quality for low-level features can be attributed to our model being cross-subject, meaning a single model can process brain signals from different subjects without the need for subject-specific training. Other methods require training separate models for each subject and often include additional low-level features such as depth and color (as seen in Ref [6]). Our approach, while compatible with these methods, focuses on reconstructing high-level feature representations and cross-subject generalization, thereby achieving better overall performance in tasks requiring complex semantic understanding. This trade-off is crucial for advancing multi-task neural decoding. For Figure 4b, our method shows better reconstruction quality at different levels when compared to single visual stimulus reconstruction tasks. Detailed quantitative analysis can be found in Table 2 on page 7 of the manuscript.
2. **I am curious if the method of combining low-level and high-level features in eq4 is the most effective. While other works also use this approach, it seems to force the model to trade off between the two types of features. Are there potentially better methods? In visual reconstruction, the method with LLM appears effective only when the beta value is low. Does it overlap in function with the CLIP embedding?**
Thank you for your insightful question. The trade-off between different levels of features is indeed a challenge. We have systematically discussed how to balance these features by regulating the initialization noise in the Diffusion latent space in Figure 5. While our current method shows significant benefits, we recognize it may not be the optimal approach. We are actively exploring alternative fusion techniques, such as hierarchical feature integration and dynamic weighting mechanisms, to alleviate this trade-off.
Regarding the beta value, our experiments indicate that the method using the LLM is more effective at lower $\beta$ values. This suggests that a low $\beta$ value better integrates high-level semantic information. Our method aligns brain signals directly with the Diffusion latent space, and if $\beta$ is too low, the diffusion process becomes unstable due to the large discrepancy between the latent space and the Gaussian distribution. In this case, the LLM provides additional high-level semantic information, stabilizing the reconstruction through extra guidance.
While CLIP embedding indeed overlaps in function with LLM by providing high-level semantic information during the reconstruction process, combining both can enhance reconstruction quality, as illustrated in Figure 5 on page 8 of the manuscript and **Table R1** of the attachment.
We are committed to continuously refining our methodology and appreciate your valuable feedback, which has been instrumental in enhancing the quality and clarity of our manuscript. Please let us know if further details are required or if additional modifications are needed.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for their response, which has addressed my concerns. After reading the other reviews and rebuttals, I generally support the publication of this work and will maintain my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your support. We appreciate your recommendation for publication. | Summary: The paper introduces a new, subject-agnostic visual reconstruction pipeline. They introduce a way to integrate across fMRI readings from different subjects and enhance their integration using LLMs. Through this integration they see a consistent improvement of high level semantic feature baselines in their reconstruction.
Strengths: * An interesting new approach to align fMRI feature extraction with ViTs and LLMs
* Brain captioning with LLMs is highly novel. The results are promising in Table 1.
* Promising results on NSD reconstruction.
* Interesting set of analyses using GradCAM.
Weaknesses: * Concerns about preprocessing and claims about cross-subject analysis
* The authors propose a trilinear interpolation to combine BOLD signals across patches across subjects. To me, this makes many assumptions that may not be true or might miss several problems. The authors need to address these questions to substantiate the claim of removing subject-specific modeling or using some kind of mixed subject pretraining to combine subjects.
* For example, this method ignores inter-subject variability such as anatomical differences or functional differences, which can vary among individual subjects.
* The interpolation makes a linear assumption on BOLD responses which may not be true. I realize this will follow with alignment to a ViT later but this imposes strong assumptions about BOLD signal space.
* My main concern here is whether this kind of preprocessing will generalize to modeling over new subjects. Is this approach applicable to general subject populations? Could the authors add some clarification and expand on this point?
* LLM Interaction details
* My understanding is that there is a finetuning objective when using the LLM interaction. I think section 3.4 was cut off somehow and this wasn’t very clear to me.
* Some details were missing making Table 1 a bit difficult to understand. See questions
Technical Quality: 2
Clarity: 3
Questions for Authors: * What does fine-tuned model refer to in line 204? Is it the LLM (just making sure)
* When you use BLIP2 generated captions, are you passing the original NSD image and generating a caption? Is this in comparison to your LLM that uses an fMRI embedding in place of [image]? I think this was confusing.
* Although GradCAM is easily interpretable, I do wonder if this would benefit from something more recent for interpretability like [1].
* Could MindEye2 (https://medarc-ai.github.io/mindeye2/) be used as a baseline?
[1] Materzynska et. al. Disentangling visual and written concepts in CLIP. CVPR 2022.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: * Limitations are addressed but may want to consider toning down some of the claims about subject variability (see weaknesses).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your insightful comments. Your feedback has helped us enhance the quality of our manuscript. Below, we will provide detailed responses to your comments.
### **Concerns about Preprocessing and Inter-Subject Variability:**
We appreciate your concern regarding inter-subject variability. Developing a universal model for all subjects avoids the need for separate models for each brain, reducing computational and storage costs. Our cross-subject approach, validated by concurrent studies at CVPR 2024, ICML 2024, and ECCV 2024 (References [3-5]), uses a voxel patching technique to capture essential brain structures without additional subject-specific parameters. This enhances robustness, generalizability, and spatial integrity, resulting in superior performance over subject-specific models (Table 2, page 7). By using a trilinear interpolation method, we ensure effective alignment of cross-subject characteristics and achieve high-quality reconstruction and interpretability at low computational costs. Our model's ability to generalize without extra parameters for each subject sets it apart from existing approaches and is a critical step toward universal applicability.
1. **Trilinear Interpolation and Assumptions:**
We acknowledge your concerns regarding the assumptions underlying our use of trilinear interpolation. To address this, we compared various interpolation methods, including `nearest`, `area`, and `nearest-exact`, finding minimal impact on performance (detailed in **Table R3**). Trilinear interpolation effectively manages inter-subject variability and supports efficient cross-subject data integration while preserving essential spatial structures, which is critical for robust and generalizable model performance.
2. **Anatomical and Functional Differences:**
Our approach includes a voxel patching technique that captures local anatomical and functional differences, which are vital for accurately modeling BOLD signals. This method significantly enhances the performance over models that do not maintain these structures (results shown in Table 2).
3. **Linear Assumption on BOLD Responses:**
Although a linear assumption is involved, our results confirm that trilinear interpolation preserves the spatial integrity of BOLD signals. This integrity is crucial for effective alignment with the ViT structure and thus enhances our model's performance, especially when compared with models lacking the ViT3D structure (`w/o C_Subj & ViT3D`).
4. **Generalization to New Subjects:**
Our model has shown significant superiority over subject-specific models (Table 2), indicating robust generalization capabilities that make it highly suitable for broader subject populations. This advantage underscores the scalability and robustness of our approach, which does not require tailoring to individual subjects, thus simplifying the application to a diverse range of subjects. We intend to further validate our method on a larger dataset and have enriched our manuscript with discussions that clarify the universal applicability and potential of our approach to extend to more generalized settings.
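As a concrete illustration of the trilinear interpolation discussed in point 1, here is a minimal pure-Python sketch of interpolation within a single voxel cell; the function and cell layout are illustrative only (in practice the whole volume is resampled at once, e.g. via `torch.nn.functional.interpolate(..., mode='trilinear')`):

```python
def trilinear(c, x, y, z):
    """Trilinearly interpolate inside one voxel cell.

    c[i][j][k] holds the BOLD value at cell corner (i, j, k);
    (x, y, z) are fractional coordinates in [0, 1]^3.
    """
    # Interpolate along x on each of the four (y, z) edges.
    c00 = c[0][0][0] * (1 - x) + c[1][0][0] * x
    c10 = c[0][1][0] * (1 - x) + c[1][1][0] * x
    c01 = c[0][0][1] * (1 - x) + c[1][0][1] * x
    c11 = c[0][1][1] * (1 - x) + c[1][1][1] * x
    # Then along y, then along z.
    c0 = c00 * (1 - y) + c10 * y
    c1 = c01 * (1 - y) + c11 * y
    return c0 * (1 - z) + c1 * z

# A toy cell whose corner value at (i, j, k) is i + j + k.
cell = [[[i + j + k for k in (0, 1)] for j in (0, 1)] for i in (0, 1)]
print(trilinear(cell, 0, 0, 0))        # corner value -> 0
print(trilinear(cell, 0.5, 0.5, 0.5))  # cell center -> 1.5
```

At the cell corners the interpolation returns the original values, so resampling back onto the native grid is lossless there; between corners it makes exactly the linear assumption on BOLD responses discussed above.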
### **LLM Interaction details:**
1. **Fine-Tuning Objective:**
The fine-tuning of the LLM using fMRI embeddings aligns the penultimate hidden state of the fMRI feature extractor with the LLM’s text embeddings. This integration allows the LLM to interpret fMRI data as natural language inputs, facilitating complex reasoning and language-based tasks. The fine-tuning occurs in two phases: initially training only the MLP between the fMRI feature extractor and the LLM (with the LLM frozen) and subsequently training both components together. Detailed process and results are documented on lines 234-241 and in Appendix A.
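The two-phase schedule described above can be sketched as follows; the parameter-group names are illustrative, not the actual module names in our code:

```python
def trainable_flags(phase):
    """Which parameter groups receive gradients in each training phase.

    Phase 1: only the MLP bridging the fMRI feature extractor and the
    LLM is trained (the LLM stays frozen).
    Phase 2: the MLP and the LLM are trained jointly.
    Group names here are illustrative placeholders.
    """
    groups = ["fmri_extractor", "mlp", "llm"]
    if phase == 1:
        return {g: (g == "mlp") for g in groups}
    return {g: (g in ("mlp", "llm")) for g in groups}

phase1 = trainable_flags(1)
assert phase1 == {"fmri_extractor": False, "mlp": True, "llm": False}
phase2 = trainable_flags(2)
assert phase2 == {"fmri_extractor": False, "mlp": True, "llm": True}
```

In a PyTorch implementation these flags would be applied as `p.requires_grad = flag` over each group's parameters before constructing the optimizer for the corresponding phase.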
### **Responses to Questions:**
1. **What does fine-tuned model refer to in line 204? Is it the LLM (just making sure).**
The 'fine-tuned model' mentioned refers specifically to the LLM. This model has been adapted to align with fMRI embeddings, enhancing our ability to interpret complex neural data as described in Table 7 on page 16.
2. **BLIP2-Generated Captions:**
Regarding the use of BLIP2-generated captions, we clarify that we utilize the original NSD images as input to BLIP2 to generate target captions. These captions are then compared with those generated by our LLM, which uses fMRI embeddings in place of direct image inputs. This comparison is vital for assessing our model's capability in natural language tasks, particularly considering the differences introduced by the cropping process applied to the NSD images, as illustrated in Figure 9.
3. **Interpretability Methods:**
Thank you for the suggestion regarding interpretability methods. While Materzynska et al. [2] provide valuable insights into visual semantic interpretation, our focus on localizing textual information within brain signals benefits from the established efficacy of GradCAM, which we employ to pinpoint text concepts in neural data. Nonetheless, we are open to integrating more recent techniques to potentially enhance our model's interpretability. Our manuscript includes experiments, specifically in Figures 7-8 and 12, that demonstrate the robustness and effectiveness of our approach in localizing these concepts.
4. **MindEye2 as Baseline:**
Thank you for suggesting MindEye2 as a baseline. We have conducted a comparative analysis between MindEye2 and our method, presented in **Table R3** of the attachment. Our method outperforms MindEye2 in most metrics, especially for models not fine-tuned on specific subjects. Even compared to MindEye2 models fine-tuned on specific subjects, our method shows superior performance on high-level semantic metrics. This underscores our framework's scalability and effectiveness across subjects.
We trust that the above responses comprehensively address your queries. Please inform us of any additional details you would require.
---
Rebuttal Comment 1.1:
Comment: Thank you for the discussion on the inter-subject variability. This is really helpful for positioning the preprocessing step and giving me more confidence in this step as well. The extra citations are also very useful. The extra experiments are highly appreciated!
Thank you for the clarification on LLM tuning as well.
I believe my concerns have been addressed. I will increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and for increasing the score. We're glad the discussion on inter-subject variability and LLM tuning was helpful. | Summary: - This model focuses on the task of reconstructing image stimuli from fMRI readings
- Instead of training a subject specific model, a subject-generic model is trained
- The model is built around a pre-trained LLM core, which is finetuned to take the fMRI as input and then:
- engage in dialogue about the image stimulus
- reconstruct the image stimuli by generating a language prompt which is then passed to UnCLIP
- create a heatmap of activations in the brain associated with a given concept
Strengths: - The multi-subject aspect of the training appears novel
- The use of an LLM for image reconstruction and dialogue about the stimulus also appears novel
- The image reconstructions are of comparable or better quality than existing approaches
- Ultimately, I think the evaluation could be improved (see Weaknesses), but given the novelty of the approach (using an LLM to do decoding and reconstruction), I argue for acceptance.
Weaknesses: - It's unclear how much the image reconstructions reflect visual processing in the brain. It would be good to do an ablation to see how much the reconstruction depends on $\hat{z}_c$. What if $\hat{z}_c$ were replaced with random noise in the same way that an ablation was done on $\hat{z}_v$? Basically, I am curious about how much decoding can be done from the prompt $a_r$ only.
- In the same vein, it would be interesting to do an experiment where the generated prompt $a_r$ is given to a generative model that does not require $z_c$ and $z_v$, i.e., a model that can take a text prompt and generate an image.
- It's claimed that the model can engage in complex dialogue about the semantic content of the stimuli, but from the example prompts, it seems that many of the answers to the dialogue questions may be available in the prompt. For example, figure 11 seems to suggest that many of the complex reasoning prompts contain enough information to answer the question in the prompt. E.g. ``What is the distribution of the cows in the field based on the description and their positions in the image.'' A good ablation test would be to perform the complex reasoning task without the fMRI inputs.
- Some technical details are hard to follow, especially with regards to concept localization (see questions).
Technical Quality: 2
Clarity: 2
Questions for Authors: - line 293 --- how are concepts "extracted"?
- line "Finally, patches containing non-task-relevant information are filtered out" -- how are the masked portions chosen?
- line 256-258: I think there is a typo somewhere. I'm not sure what the sentence is supposed to say.
- Fig 1: From the figure, it seems that one can query the LM for concept localization in the brain? But where is that described? What do annotations for that dataset look like?
- A related question: line 533: what does it mean to localize the results using GradCAM? Can you describe how that was done?
- Fig 5 and 6: Equation (4) says that $\beta=1$ corresponds with all noise input. But increasing noise seems to correspond with better image quality?
- Line 484: Typo: "Language Extension"
- Table 2: what is `C_Subj`?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Limitations and potential negative societal impact are discussed in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your detailed and insightful review. Below, we provide a brief summary of your questions and our responses:
1. **Impact of $\hat{z}_c$ on image reconstructions: How much does the reconstruction depend on $\hat{z}_c$? What if $\hat{z}_c$ were replaced with random noise?**
We conducted an ablation study where $\hat{z}_c$ was replaced with random noise, and another experiment where $\hat{z}_c$ was set to zero. The results, summarized in **Table R1** of the attachment, indicate that both $\hat{z}_c$ and $\hat{z}_v$ significantly contribute to reconstruction quality. Using $a_r$ alone, derived solely from LLM prompts, achieved some level of reconstruction quality, particularly maintaining high-level semantic information. However, it affected the performance of lower-level image details more significantly. We observed that setting $\hat{z}_c$ to zero yielded better results than replacing it with random noise, suggesting that random fMRI embeddings might negatively impact reconstruction quality.
2. **Experiment with text prompt only: Can a generative model create an image from $a_r$ without using $z_c$ and $z_v$?**
We conducted this experiment using Stable Diffusion 2-1 [1], which generates images from text prompts alone. The performance, detailed in **Table R1**, demonstrates that even without directly using the fMRI feature extractor, the text prompts $a_r$ can achieve notable reconstruction quality. This highlights that LLMs can effectively interpret brain signals, and this method avoids the negative impact of missing fMRI embeddings on reconstruction quality.
3. **Does the prompt contain enough information to answer complex reasoning questions without fMRI inputs?**
Your concern is valid. In multimodal scenarios, information leakage through the question is possible. Thus, we performed an ablation experiment by removing the `<image>` tag from the question $q$, effectively testing without fMRI embeddings. The results, shown in **Table R2** of the attachment, indicate that for tasks like Brain Caption and Detail Description, where contextual information is not leaked through the question, removing fMRI embeddings results in random-level performance, validating the effectiveness of LLMs in extracting brain signals. For Complex Reasoning tasks, while some scene information is leaked through the question $q$, the inclusion of fMRI inputs significantly enhances response quality and accuracy, confirming the effectiveness of our method.
4. **How are concepts "extracted"?**
As shown in Table 6 on page 15, we fine-tuned the LLM to recognize concept localization instructions. Upon identifying such instructions, the LLM calls functions to perform concept localization. Specifically, the `<concept>` is input into the CLIP Text Encoder. Our fMRI Feature Extractor, aligned through training, uses the CLIP Embedding of `<concept>` and applies GradCAM for concept localization and visualization, identifying the brain regions corresponding to the textual concept.
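For concreteness, here is a minimal sketch of the generic Grad-CAM computation involved; the exact hook placement in our pipeline differs, and the gradients are assumed to come from the similarity between the fMRI features and the CLIP text embedding of `<concept>`:

```python
def grad_cam(activations, gradients):
    """Generic Grad-CAM over K feature maps.

    activations, gradients: K lists of N spatial values each
    (flattened maps from the feature extractor in our case).
    Returns one relevance value per spatial position.
    """
    n = len(activations[0])
    # Channel weight = spatially averaged gradient of the target score.
    weights = [sum(g) / n for g in gradients]
    # ReLU-rectified weighted combination of the activation maps.
    return [max(0.0, sum(w * a[i] for w, a in zip(weights, activations)))
            for i in range(n)]

acts = [[1.0, 2.0], [0.5, -1.0]]
grads = [[1.0, 1.0], [-2.0, -2.0]]
print(grad_cam(acts, grads))  # [0.0, 4.0]
```

The resulting per-position relevance values are what we visualize over the voxel grid to highlight brain regions associated with the queried concept.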
5. **How are the masked portions chosen?**
We used the `nsdgeneral` labels from the Natural Scenes Dataset (NSD) to filter out non-task-relevant brain regions. Specifically, the union of `nsdgeneral` regions across subjects in the NSD dataset was considered task-relevant, and all other regions were removed. Detailed processing of the dataset is described on line 473, page 13 of the appendix.
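The filtering step can be sketched as taking the union of per-subject boolean `nsdgeneral` masks and dropping voxels outside it; the helper names and toy masks below are our own for illustration:

```python
def union_mask(subject_masks):
    """Union of per-subject boolean ROI masks (True = task-relevant)."""
    return [any(vals) for vals in zip(*subject_masks)]

def filter_voxels(volume, mask):
    """Keep only voxels inside the unioned task-relevant region."""
    return [v for v, keep in zip(volume, mask) if keep]

masks = [[True, False, False, True],
         [False, True, False, True]]
mask = union_mask(masks)  # [True, True, False, True]
print(filter_voxels([1.0, 2.0, 3.0, 4.0], mask))  # [1.0, 2.0, 4.0]
```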
6. **Line 256-258: I think there is a typo somewhere. I'm not sure what the sentence is supposed to say.**
Apologies for the confusion. `w/o ViT3D` refers to not using the 3D structure-preserving fMRI preprocessing method described in Section 3.1 on page 4; instead, the fMRI data are flattened and patched as in other literature, keeping the fMRI feature extractor structure unchanged. We will clarify this in the revised manuscript. Thank you for your correction.
7. **Fig 1: From the figure, it seems that one can query the LM for concept localization in the brain? But where is that described? What do annotations for that dataset look like?**
Yes, in our method, the LLM can be queried for concept localization. The specific implementation is detailed in our response to question 4. The method is described on lines 211-214, page 6. Concept localization experiments are discussed on page 9, section 4.4, and in the appendix, section D.3 on page 18. The dataset annotations used are from the NSD, which includes visual stimuli images and fMRI data from subjects viewing these images, along with corresponding COCO captions.
8. **What does it mean to localize the results using GradCAM? Can you describe how that was done?**
We aim to use GradCAM to visualize the contributions of different natural language representations of concepts within fMRI signals. This involves querying the LLM for concept localization as described in our response to question 4.
9. **Increasing noise seems to correspond with better image quality?**
Equation (4) describes the visual stimulus reconstruction process using UnCLIP. Specifically, $\hat{z}_c$ represents fMRI features aligned with CLIP, and $a_r$ represents LLM-generated captions. Noise is added to the VAE features used to initialize the diffusion process. Due to the nature of the diffusion process, insufficient noise can destabilize the process and affect the injection of high-level semantic information $\hat{z}_c$ and $a_r$, impacting reconstruction quality. Adequate noise helps stabilize the diffusion process, improving reconstruction quality. This is discussed in Figure 5 on page 8.
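Assuming a simple variance-preserving mixing consistent with the convention that $\beta = 1$ corresponds to an all-noise initialization (the exact form of Equation (4) may differ), the role of $\beta$ can be sketched as:

```python
import random

def init_latent(z_v, beta, rng=random):
    """Initialize the diffusion latent as a beta-weighted mix of the
    VAE-derived fMRI latent z_v and Gaussian noise.

    beta = 0 keeps z_v untouched; beta = 1 discards it entirely, so
    larger beta hands more control to the high-level semantic guidance
    (the CLIP-aligned features and the LLM caption) during denoising.
    """
    noise = [rng.gauss(0.0, 1.0) for _ in z_v]
    return [(1.0 - beta) ** 0.5 * z + beta ** 0.5 * e
            for z, e in zip(z_v, noise)]
```

With this convention, too small a $\beta$ starts the reverse process from a latent far from the Gaussian prior, which is the instability discussed above.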
10. **Typo: "Language Extension"**
The typo has been corrected.
11. **What is C_Subj?**
`C_Subj` refers to cross-subject training, where the model is trained on data from multiple subjects simultaneously, rather than training separate models for each subject as done in other studies.
We hope these clarifications address your concerns and enhance the quality of our work. Thank you for your constructive feedback.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: I thank the authors for their detailed response to my questions. I will keep my score and continue to recommend acceptance. I found the author's ablation studies with randomized inputs to be convincing, both for reconstruction as well as dialogue. One interesting point is raised by the finding that the intermediate text prompt $a_r$ is enough to achieve good reconstruction quality. This calls into question how much of the reconstruction actually reflects _visual_ processing in the brain, as opposed to some other higher-level concept processing. But this is not really a weakness of the paper, and more something to explore further.
---
Reply to Comment 1.1.1:
Comment: Thank you for your continued support. We appreciate your insights on the reconstruction process and agree that it’s an area worth further exploration. | Rebuttal 1:
Rebuttal: We sincerely thank you for your insightful feedback. We have carefully reviewed your comments and provided specific responses to each point raised. We appreciate the opportunity to discuss our research in more depth and clarify any concerns through these responses.
In the attached response document, we have conducted supplementary experiments to address the specific doubts raised, including:
- **Table R1**: Analyzes the contributions of $\hat{z}_c$ and $a_r$ in stimulus reconstruction, demonstrating the best performance when both are utilized together.
- **Table R2**: Discusses the performance drop in question-answering tasks without fMRI embeddings, underlining the essential role of these embeddings.
- **Table R3**: Evaluates different cross-subject alignment methods and compares our approach with MindEye2, showcasing our method's robustness and effectiveness.
---
The references cited in our specific responses include:
[1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
[2] Materzynska, Joanna, et al. "Disentangling Visual and Written Concepts in CLIP." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2022.
[3] Wang, Shizun, et al. "Mindbridge: A cross-subject brain decoding framework." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024.
[4] Scotti, Paul Steven, et al. "MindEye2: Shared-Subject Models Enable fMRI-To-Image With 1 Hour of Data." Forty-first International Conference on Machine Learning, 2024.
[5] Xia, Weihao, et al. "UMBRAE: Unified Multimodal Brain Decoding." European Conference on Computer Vision (ECCV), 2024.
[6] Xia, Weihao, et al. "Dream: Visual decoding from reversing human visual system." Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, 2024.
Pdf: /pdf/4df792dbc247d39b965941363e9a5cdc410eb939.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
One-Step Diffusion Distillation through Score Implicit Matching | Accept (poster) | Summary: This paper proposes a distillation-based accelerated sampling method for various score-based diffusion models, such as EDM and Stable Diffusion. The authors have named this method Score Implicit Matching (SIM), which is designed to compress information from a diffusion-based teacher into single-step generator models. This method builds on SID, and SIM can be regarded as an extended version of SID to some extent. In my opinion, the crucial contributions of this paper are its application of SIM to Stable Diffusion and the more general design choice of the distance function.
Strengths: 1. This paper applies SIM to text-to-image (t2i) generators such as Stable Diffusion, an important experiment validating the effectiveness of accelerated sampling algorithms.
2. This paper enhances SID by generalizing the design choice of the distance function to six different versions, as shown in Table 4 of the Appendix.
3. This paper provides proofs for their method, although the contribution of this point is relatively small since most of the proofs are based on previous work. To be specific: (1) the proof in SID uses the data distribution as the target, but SIM uses the score function as the target; (2) SIM uses the derivative form for its derivation, whereas SID draws its conclusions through $\nabla_{x_t}\log p_\theta(x_t)=[x_g-x_t]/\sigma^2_t$ (in EDM, $\alpha_t \equiv 1$); (3) the only remaining difference is in the distance function, but this does not affect the derivation in any way.
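The quoted relation is Tweedie's formula in the EDM convention; as a quick sanity check (ours, with toy values for the data and noise scales, not from either paper's code), it can be verified in closed form for a one-dimensional Gaussian:

```python
import numpy as np

s0, sigma = 2.0, 0.5   # toy data std and noise level (assumed values)
x_t = 1.3

# With x_0 ~ N(0, s0^2) and x_t = x_0 + sigma*eps, the marginal of x_t is
# N(0, s0^2 + sigma^2), whose score is available in closed form:
score = -x_t / (s0**2 + sigma**2)

# Tweedie's formula gives the posterior mean x_g = E[x_0 given x_t]:
x_g = (s0**2 / (s0**2 + sigma**2)) * x_t

# The quoted identity: score = (x_g - x_t) / sigma^2
assert np.isclose(score, (x_g - x_t) / sigma**2)
```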
Weaknesses: 1. The performance of SIM does not appear particularly strong, although the authors state that they reproduced the SID baselines themselves because SID had no publicly available code at the time. The performance gap may originate from a few implementation tricks, and it is not yet known whether these tricks can further improve the effectiveness of SIM. Thus, I hope the authors can re-run several experiments related to CIFAR10 to further evaluate the effectiveness of SIM through SID's official implementation [1].
[1] https://github.com/mingyuanzhou/sid
2. The author spends a lot of space in the ``Score Implicit Matching'' section introducing the principle of Algorithm 1, but I think the biggest difference between SIM and SID is that SIM adopts matching based on the score function while SID adopts matching based on the sample. The rest of the logic is basically the same. Therefore, I think it is a bit unreasonable to present this part as a contribution of the paper.
3. The authors of SID recently released a new paper named ``Long and Short Guidance in Score identity Distillation for One-Step Text-to-Image Generation'' (LSG) which is also an extended version of SID. I wish the authors would add LSG into related work (unfortunately, different metrics were used between SIM and LSG, so SIM cannot simply compare it with LSG).
Technical Quality: 3
Clarity: 3
Questions for Authors: No
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful feedback. We will address your concerns one by one. Before that, we first give a summary of our work.
In this work, we introduce score implicit matching (SIM), a novel distillation algorithm that achieves competitive performance on CIFAR10 generation tasks and **lossless aesthetic performance** in the one-step distillation of a **DiT-based text-to-image diffusion model**. Beyond the strong empirical results, the theoretical foundation of SIM is also sound: we prove that the **SIM loss shares the same $\theta$ gradient as our proposed score-based divergence**. Therefore, using an SGD-type optimization algorithm to minimize the SIM loss is secretly equivalent to minimizing the intractable score-based divergence. This **distinguishes SIM from SiD's theoretical loss functions**, which are proven equal to the Fisher divergence in value rather than in parameter gradient when an online-learned score replaces the unknown generator score.
Next, we address your concerns one by one.
**Q1**. The relation between SIM and SiD
**A1**. We appreciate the SiD paper and have drawn inspiration from it. SiD represents a solid step toward minimizing the Fisher divergence rather than the KL divergence. However, SIM and SiD are essentially different in the following aspects.
**1)** The SIM loss has the same parameter gradient as the intractable score-based divergence, whereas SiD's theoretical losses only have an equal value, not an equal parameter gradient.
In Section 3.2, we show that the SIM loss has the same parameter gradient (equations 3.5-3.7) as the target score-based divergence (equation 3.4), even when substituting the score function with an online-learned score model (please see Theorem 3.1). This means that using an SGD-type optimization algorithm to minimize the SIM loss is equivalent to minimizing the score-based divergence. On the contrary, SiD's theoretical loss functions (equations 13 and 20 in the SiD paper) only have a value equal to the target Fisher divergence, not an equal parameter gradient, when an online score function replaces the intractable generator score function. This means a gradient-based optimization algorithm does not guarantee minimization of the Fisher divergence, which might be why SiD's theoretically-driven losses do not work well in practice, as the authors of SiD have pointed out. After this theoretical argument, the authors of SiD empirically find that a linear combination of the two "bad" losses results in a strong loss function, which they call the "fused loss". Interestingly, we find that when taking SIM's distance function to be the squared L2 distance, SIM recovers the empirical "fused loss" of SiD with $\alpha=1.0$. **From this point of view, SIM, as well as our score-divergence gradient theorem, brings a theoretical tool for analyzing SiD's empirical fused loss from a gradient view.**
**2)** SIM is defined over a general family of score-based divergences with a flexible choice of distance function. By comparison, SiD only targets the Fisher divergence, which can be viewed as a special case of SIM. Moreover, generalizing SiD's theoretical loss (equation 20 in SiD) to support general distance functions is not straightforward: the proof of equation 19 of the SiD paper relies on expanding the squared L2 difference, and such an expansion may not hold for general distance functions such as the (unsquared) L2 distance and the Pseudo-Huber distance.
**3)** SIM empirically shows faster convergence than SiD. This makes SIM favored when training large-scale one-step text-to-image generator models. Please see **A2** for details.
**4)** SIM shows novel scalability to complicated text-to-image one-step distillation; SiD, by contrast, had not demonstrated such a possibility before the NeurIPS submission.
In short, SIM proves the equality of the loss function and the intractable divergence in the sense of the **parameter gradient**, making it a solid theoretical contribution. Besides, SIM scales to train strong one-step text-to-image generators without performance loss, making it a solid empirical contribution.
**Q2**. Will SIM be further improved with the SiD codebase?
**A2**. We appreciate your constructive suggestion. After reading SiD's official code, we find that it includes multiple tricks that differ from our original implementation. With some tests, we find that these tricks, such as masking out NaN samples, the weighting function, and the continuous-time Karras-$\rho$ noise sigma distribution, help stabilize training and improve performance.
Due to the limited time of the rebuttal period and limited compute, we conduct experiments with two settings to compare SIM and SiD behavior at different training stages: **(1) training from scratch**, and **(2) resuming from a nearly converged** generator; **Please check our global rebuttal and added one-page** for technical details.
The new experiment shows faster convergence of SIM than SiD ($\alpha=1.0$) using the same implementation techniques. However, training to the end would take a very long time, and we have to acknowledge that we probably cannot make a final conclusion that SIM is better than SiD without sufficient computation. Besides, we appreciate SiD's high-quality implementation and solid contributions.
**Q3**. The authors of SID recently released a new paper named LSG, I wish the authors would add LSG into related work.
**A3**. Thank you for your kind reminder. LSG became publicly available after the NeurIPS submission deadline. After a careful reading, we find it technically interesting. We think the long-and-short guidance technique proposed in LSG may further improve SIM's performance for text-to-image models. We will add LSG to our related work in the revision.
**We hope our response has resolved your concerns. If you still have any concerns, please let us know.**
---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' response and appreciate their efforts in addressing all my concerns. I believe this paper is now well-prepared for acceptance.
---
Reply to Comment 1.1.1:
Title: Thank you for your constructive feedback.
Comment: We are glad that we have addressed your concerns. We appreciate your valuable suggestions and will incorporate them in our revision. | Summary: The authors study the distillation of score-based models in one-step generators. They propose a new objective function for score distillation coined Score-Based Divergence. This divergence measures the mean distance between the pre-trained score-based model and a score-based model learned on the distribution induced by the one-step generator. This method is then applied on image generation problems, on standard academic benchmarks and on large latent text-to-image models.
Strengths: Overall, this is a strong paper with a sound and novel method that obtains solid experimental results.
* The authors study the distillation of score-based models, which is a very hot research topic since inference time is the major bottleneck of such models.
* As far as I know, the idea of score-based divergence is novel and original. Moreover, the method comes with state-of-the-art results in a very competitive area (distillation of score-based models to one-step generators).
* The general idea of score-based divergences is sound and could lead to new research works, from applied ones to theoretical ones.
Weaknesses: * Not a big weakness, but we could argue that the training is demanding from a computational viewpoint, since it involves three large neural networks: generator, score-based network for data distribution, score-based network for generator's distribution. However, this is not specific to this particular method.
* Minor: typo in the TL;DR. "The submission propose an general" -> "The submission proposes a general"
Technical Quality: 3
Clarity: 3
Questions for Authors: * Have you observed instabilities with the optimization procedure that alternates between online score and generator? Have you tried to explore other hyper-parameters, e.g. more steps or different learning rates for the online score network?
* In ProlificDreamer, the authors use LORA to fine-tune the online score-based model. Have you tried such approach to reduce the computational cost?
* Could your method be directly used for learning 3D models, such as ProlificDreamer? From what I understand, setting $\theta$ to an ensemble of particles and generating 2D images through NERFs, this would be practicable.
* Theoretically, your method is agnostic to the type of generator that is used, right? Thus, we could imagine training a model with low-dimensional latent space and different architecture, such as a StyleGAN. Have you explored this?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: .
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that you like our work. We appreciate your valuable suggestions, and we will incorporate them in our revision. In this paper, we introduce score implicit matching (SIM), a theoretically sound distillation method that secretly minimizes a general family of score-based divergences during distillation. Our experiments on CIFAR10 generation as well as on a one-step DiT-based text-to-image generator show the superior performance of SIM in industry-level applications. In the following paragraphs, we will address your concerns one by one.
**Q1**. Not a big weakness, but the training computation cost may be demanding.
**A1**. We acknowledge that the training cost is somewhat expensive because SIM involves an additional online diffusion model. However, in practice, we find that the **additional cost is acceptable** with standard PyTorch techniques. For example, in our text-to-image experiment, we use 4 Nvidia A100-80G GPUs to train the one-step generator. We use PyTorch DistributedDataParallel (DDP) for multi-GPU gradient syncing and the BF16 numerical format for training. With BF16 training, we can effectively train the model with a total batch size of 256 even without gradient accumulation. With gradient accumulation, we can enlarge the batch size to 1024 at some cost in speed.
**Q2**. Are there instabilities with the alternating optimization procedure? Would be better to explore other hyper-parameters.
**A2**. We appreciate your good intuition. We have observed that **if the hyper-parameters are not set properly, the training may diverge**. For instance, we find that when the weighting functions and the log-normal noise sigma distribution for training the online diffusion model are not properly set, the alternating training leads to poor results. However, compared with Diff-Instruct and SiD, we find that updating the one-step generator with SIM is quite robust to hyper-parameters: different loss scalings, different learning rates (within a reasonable interval), and several kinds of weighting functions do not affect the SIM generator updates much. We appreciate your constructive suggestion as a new direction to explore. We have tried updating the online diffusion model twice per generator update; however, this slows down training by approximately 1.5 times, and we did not observe significant performance improvement. Therefore, we still use one diffusion update per generator update, with both updates using the same learning rate. We will explore more of these settings in the revision.
**Q3**. Have you tried using Lora such as in ProlificDreamer [2] to reduce the computational cost?
**A3**. We appreciate your good intuition. We like the trick proposed in ProlificDreamer of using a LoRA online diffusion model, which can reduce the memory cost. However, as shown in **A1**, when we use BF16 training we are not bothered by memory issues, so we do not use a LoRA online diffusion model. Another reason is that a LoRA model may have weaker modeling ability than a full-parameter model, which could potentially harm the performance of SIM.
**Q4**. Can SIM be used to train NeRF models using 2D diffusion? Can SIM be applied to other generator models such as GANs?
**A4.** We highly appreciate your good intuition. As you can see, the SIM is a general method that can be used for a wide range of applications other than one-step diffusion distillation. In the rebuttal period, we conduct two more experiments: (1) text-to-3D generation using text-to-2D diffusion; and (2) improving the GAN generator using diffusion models; **Please check our Global rebuttal cell for details.** On both new applications, SIM has shown strong performances, demonstrating its broad applicability.
**We hope our response has resolved your concerns. If you still have any concerns, please let us know.**
---
Rebuttal Comment 1.1:
Title: Rebuttal Acknowledgement
Comment: I acknowledge the rebuttal. The authors have thoroughly replied to my questions. I will thus raise my score to 8, Strong Accept.
---
Reply to Comment 1.1.1:
Title: Thank you for your constructive feedback.
Comment: We are glad that we have addressed your concerns. We appreciate your valuable suggestions and will incorporate them in our revision. | Summary: This work proposed Score Implicit Matching (SIM) to distill diffusion models into a one-step generator. The core idea is to use the “score-gradient theorem” to transform the minimization of score-based divergences between generator and real score functions into an implicit and tractable minimization problem. The theory does not specify the underlying divergence or distance metric for score functions, and seems to generalize many distribution matching methods, such as SiD and DI. Experiments were conducted on CiFAR-10 and text-to-image generation to show the effectiveness and robustness of SIM, by distilling EDM and PixArt-$\alpha$, respectively.
Strengths: - Overall, the paper is well written and easy to read, although there is still a need to further polish the presentation (as I pointed in the Weaknesses section).
- This work provides a good theoretical insight into how to transform the intractable score-based divergence into a tractable implicit minimization problem, where both share the same gradient.
- It generalizes many previous works (SiD, DI) as special cases.
- Experiments on CIFAR-10 and text-to-image generation showed convincing results to demonstrate the effectiveness of SIM.
Weaknesses: - The reason why the Pseudo-Huber distance is better than other choices (squared L2 in SiD or KL in DMD) is not quite clear to me. Can the authors provide some empirical evidence to support the hypothesis of the normalization effect? For example, if we use the L2 distance (i.e., setting $c=0$), does the method also work well, since it also normalizes the difference between the generator score and the teacher score? Also, what $c$ value works best for the Pseudo-Huber distance?
- Can the authors show how SIM becomes a DMD objective if we set $d(\cdot)$ to a reverse KL as in DMD? It is not quite clear to me how it works.
- There are some other standard benchmarks for one-step distillation, including distilling EDM trained on ImageNet-64 and SD trained on text-image pairs. I wonder why SIM is not compared with strong baselines in these two settings?
- The presentation can be further improved. For example, the font size in Figure 2 is too small, and the legends in the right two figures are given without any explanation. Also, all the hyperlinks of references and citations are missing throughout the paper. Besides, there are a few typos: in Table 1, the FID values of SiD ($\alpha=1$) and SiD ($\alpha=1.2$) appear to be switched. In line 24, “data synthetic” might be corrected to “data synthesis”. In line 254, “one generation steps” might be corrected to “one-step generation”.
- This is not a major concern, but there is an obvious gap between the reproduced SiD results and their reported ones. The authors admitted that this is because the SiD code was not released, and more importantly, the SIM results could be further improved if its hyperparameters were tuned based on SiD's setting. So I wonder, since the SiD code has now been released (https://github.com/mingyuanzhou/SiD), whether it is a good time to look into the reproduction issue and decide whether it can improve SIM's performance.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see my questions in the Weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations and potential negative societal impact of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your useful suggestions. We will address your concerns one by one. Before that, we first give a summary of the main contributions of our work.
In this work, we introduce score implicit matching (SIM), a novel distillation algorithm that achieves **competitive performance on CIFAR10 generation** tasks and **lossless aesthetic performance** in the one-step distillation of the **DiT-based text-to-image diffusion model**. Beyond the strong empirical results, the theoretical foundation of SIM is also sound: we prove that the SIM loss has the same $\theta$ gradient as our proposed score-based divergence. Therefore, **using an SGD-type optimization algorithm to minimize the SIM loss is secretly equivalent to minimizing the intractable score-based divergence**. Besides, our proposed score-gradient theorem may bring new tools for future studies on generative models.
Next, we address your concerns one by one.
**Q1**. Clarify the reason for using Pseudo-Huber distance.
**A1**. We appreciate your keen intuition. Though the standard L2 distance seems a fair choice, the Pseudo-Huber distance is better. The Phuber loss (equation 3.8) has a denominator of the form $\sqrt{\||y\||^2_2 + c^2}$, where $y = s_g(x,t) - s_d(x,t)$ approaches zero as the generator converges. Therefore, if we set $c=0$ (the squared-L2 case), this denominator is unstable and can lead to numerical issues. This means that Phuber with a small $c$ is better than the standard L2 distance.
Another advantage of Phuber loss (equation 3.8) is that it has an adaptive self-normalization effect on the loss scale. This helps the training loss to be stable with a constant scale.
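As a toy numerical sketch of this self-normalization effect (our illustration with an assumed helper `phuber_weight` and made-up values, not the paper's implementation): the per-sample weight stays bounded near $1/c$ as the score gap vanishes, whereas the $c=0$ case diverges.

```python
import numpy as np

def phuber_weight(y, c):
    # Weight 1 / sqrt(norm(y)^2 + c^2) that the Phuber distance applies
    # to the score difference y = s_g(x,t) - s_d(x,t).
    return 1.0 / np.sqrt(np.sum(y * y, axis=-1) + c * c)

# The score difference shrinks toward zero as the generator converges.
y_early = np.array([[3.0, -4.0]])    # norm(y) = 5 early in training
y_late  = np.array([[3e-8, -4e-8]])  # norm(y) ~ 5e-8 near convergence

c = 1e-3  # a small positive constant bounds the denominator away from zero
w_early = phuber_weight(y_early, c)[0]   # ~0.2
w_late  = phuber_weight(y_late, c)[0]    # saturates near 1/c = 1000

# With c = 0 (the squared-L2 case) the weight blows up as y -> 0:
w_late_l2 = phuber_weight(y_late, 0.0)[0]  # ~2e7

assert w_late <= 1.0 / c
assert w_late_l2 > 1e6
```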
**Q2**. Can the authors show how SIM becomes a DMD objective if we set $d(\cdot)$ to be reversed KL?
**A2**. If we consider the EDM formulation of Algorithm 2 in the Appendix, the DMD loss can be viewed as a special case of SIM loss. Let $d_{\phi}(x_t,t)$ denotes the online EDM model, and $d_q(x_t,t)$ the teacher EDM model. In Algorithm 2, the SIM loss with Phuber distance writes:
$$L(\theta) = \frac{d\_{\phi}(x\_t,t) - d\_q(x\_t,t)}{\sqrt{\||d\_{\phi}(x\_t,t) - d\_q(x\_t,t)\||\_2^2 + c^2}} (d\_{\phi}(x\_t,t) - x\_0).$$ If we cut off the $\theta$ gradient of $x_t$, then $d_{\phi}(x_t,t)$ and $d_q(x_t,t)$ have no parameter dependence, so the SIM loss is equivalent to the DMD loss with a weighting function $\frac{1}{\sqrt{\||d_{\phi}(x_t,t) - d_q(x_t,t)\||_2^2 + c^2}}$. However, the theory behind SIM and DMD is essentially different: SIM aims to minimize a general family of score-based divergences, whereas DMD aims to minimize the KL divergence, which is notorious for mode-collapse behavior.
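For concreteness, here is a minimal numpy sketch of the displayed per-sample loss value (hypothetical helper name `sim_phuber_loss`, our own illustration; numpy carries no autograd, so this only evaluates the scalar, not the $\theta$ gradient):

```python
import numpy as np

def sim_phuber_loss(d_phi, d_q, x0, c=1e-3):
    # Per-sample value of the displayed loss: the Phuber-normalized gap
    # between online and teacher denoisers, dotted with (d_phi - x0).
    gap = d_phi - d_q
    weight = gap / np.sqrt(np.sum(gap**2, axis=-1, keepdims=True) + c**2)
    return np.sum(weight * (d_phi - x0), axis=-1)

# Tiny example: one 2-D sample.
d_phi = np.array([[1.0, 2.0]])  # online model's denoised estimate
d_q   = np.array([[1.0, 1.0]])  # teacher's denoised estimate
x0    = np.array([[0.0, 0.0]])  # generator sample

loss = sim_phuber_loss(d_phi, d_q, x0)
# gap = [0, 1], so the normalized weight is ~[0, 1] and loss ~ 2.0
assert np.allclose(loss, 2.0, atol=1e-3)
```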
There might be a reason why **score-based divergence is a better choice than KL**. Recall that the definition of KL divergence needs the likelihood ratio $\frac{p_g(x)}{p_d(x)}$, which will be ill-defined when $p_g$ and $p_d$ have misaligned density support. This can potentially lead to mode-collapse issues when using KL divergence as a target. However, the score-based divergence does not have such a "ratio", and therefore is safe in the case when $p_g$ and $p_d$ have misaligned support.
**Q3**. The motivation for distilling the PixArt-$\alpha$ diffusion model.
**A3**. In Section 4.1, we evaluate SIM on the CIFAR10 generation benchmark thoroughly and observe strong performance without using GAN losses. After that, we scale SIM to challenging text-to-image generation. We choose the PixArt-$\alpha$ diffusion model as the teacher for three reasons. **(1)** We appreciate the great efforts of previous works on distilling UNet-based t2i diffusion models; however, **distilling DiT-based diffusion models, such as PixArt-$\alpha$, lacks exploration**. **(2)** The PixArt-$\alpha$ model uses the T5-XXL text encoder, which is **able to understand long prompts** better than Stable Diffusion models with CLIP text encoders. We are motivated to explore high-quality one-step generator models with such long-prompt-following ability. **(3)** We would like to explore the limits of a one-step model that is **favored by human users** in terms of aesthetic scores. PixArt-$\alpha$ is a good choice, as it has been trained on high-quality aesthetic images.
**Q4**. Missing hyperlinks and some suggestions to improve the presentation.
**A4**. We apologize for the confusion. The hyperlinks are missing because we separated the main pages and the appendix. We will incorporate your suggestions to improve our presentation in the revision.
**Q5**. Not a major concern, but I wonder if SIM can be further improved with the SiD codebase.
**A5**. We appreciate your constructive suggestion. After reading SiD's official code, we find that it includes multiple tricks that differ from our original implementation. With some tests, we find that these tricks, such as masking out NaN samples, the weighting function, and the continuous-time Karras-$\rho$ noise sigma distribution, help stabilize training and improve performance.
Due to the limited time of the rebuttal period and limited compute, we conduct experiments with two settings to compare SIM and SiD behavior at different training stages: training from scratch, and resuming from a nearly converged generator; **Please check our global rebuttal and added one-page** for technical details.
Our new experiment shows faster convergence of SIM than SiD ($\alpha=1.0$) using the same implementation techniques. However, training to the end would take a very long time, and we have to acknowledge that we probably cannot make a final conclusion that SIM is better than SiD without sufficient computation. Besides, we appreciate SiD's high-quality implementation and solid contributions.
**We hope our response has resolved your concerns. If you still have any concerns, please let us know.**
---
Rebuttal Comment 1.1:
Title: Thank you
Comment: I thank the authors for providing detailed answers to my questions/concerns. My major concerns have been addressed so I increase my rating.
---
Reply to Comment 1.1.1:
Title: Thank you for response.
Comment: We appreciate your response and valuable suggestions. We sincerely hope that our responses have adequately addressed the concerns you raised in your review. **For any unresolved concerns or additional questions, please do not hesitate to let us know**. We would be happy to provide further clarification and address any remaining issues. | Summary: This paper proposes a new distribution matching loss between the one-step generator and the pre-trained diffusion model. The reverse KL divergence proposed in Diff-Instruct is generalized through the Score-divergence gradient Theorem.
Strengths: 1. The Score-divergence theorem is well adapted to generalize Diff-Inst.
2. The empirical results with generalized objective function show better with expanded hyperparameter space.
3. A method like CTM utilizes a GAN loss (with true data) to boost performance. Aside from CTM, this paper's performance seems good.
4. The distillation training cost seems cheap.
Weaknesses: 1. More analysis on $\alpha$ seems required.
2. The score network for the student model is still required. I think the existence of an auxiliary score network is expensive. For example, a recent paper in [1] does not utilize an auxiliary score network.
3. Add a recall metric to the CIFAR-10 experiments if you want to claim that Diff-Instruct suffers from mode collapse.
4. Since Diff-Instruct uses the reverse-KL divergence, its mode collapse is intuitive to me. Why does the objective function you propose not suffer from mode collapse? I know why forward KL divergence can cover the modes while reverse KL cannot. Please explain it in a similar sense. There may be a difference depending on the $\alpha$.
[1] Multistep Distillation of Diffusion Models via moment-matching
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Is the score-divergence theorem mentioned in the diffusion model community? It seems applicable to diffusion model training.
2. Will you release the code?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive feedback. We will address your concerns one by one. Before that, we first give a summary of the main contributions of our work.
In this work, we introduce score implicit matching (SIM), a novel diffusion distillation algorithm that achieves **competitive performance on CIFAR10 generation** tasks and **lossless aesthetic performance** in the one-step distillation of the **DiT-based text-to-image diffusion model**. Beyond the strong empirical results, the theoretical foundation of SIM is also sound: **we prove that the SIM loss shares the same $\theta$ gradient as our proposed score-based divergence**. Therefore, using an SGD-type optimization algorithm to minimize the SIM loss is secretly equivalent to minimizing the intractable score-based divergence. Besides, our proposed score-gradient theorem may bring **new theoretical tools for future studies on generative models**.
Next, we address your concerns one by one.
**Q1**. More analysis on $\alpha$.
**A1**. We are not sure what you mean by $\alpha$. In the SIM algorithm (Algorithm 1), we do not have an $\alpha$ hyperparameter. We guess that by $\alpha$ you are referring to the SiD algorithm, which uses an empirical $\alpha$ hyperparameter to fuse its loss functions. SIM is different from SiD in theory. As we have shown in Sections 3.3 and 3.4, when taking the distance function to be the squared L2 distance, SiD's empirical fused loss with $\alpha=1.0$ is a special case of SIM. Besides, SIM supports a flexible choice of distance functions to define and instantiate different score-based loss functions. This distinguishes SIM from previous methods such as SiD and Diff-Instruct.
**Q2**. The existence of an auxiliary score network is expensive. A recent paper in [1] does not utilize an auxiliary score network.
**A2.** We appreciate your keen intuition. We acknowledge that SIM needs an additional online diffusion model, which incurs more memory cost. However, in practice, we find that the **additional cost is acceptable** with standard PyTorch techniques. For example, in our text-to-image experiment, we use 4 Nvidia A100-80G GPUs to train the one-step generator. We use PyTorch DistributedDataParallel (DDP) for multi-GPU gradient syncing and the BF16 numerical format for training. We can effectively train the model with a total batch size of 256 even without gradient accumulation. With gradient accumulation, we can enlarge the batch size to 1024 at some cost in speed.
However, **we highly appreciate your suggestion to refer to [1] to explore the possibility of getting rid of the online diffusion model**. After carefully reading [1], we like the idea of using moment matching to drop the online diffusion model. We will add a discussion of SIM and [1] in the revision and leave the study of dropping the online diffusion model while maintaining strong one-step generation performance to future work.
**Q3**. Why does SIM not suffer from mode collapse?
**A3**. We appreciate your good question! SIM is essentially different from the Diff-Instruct algorithm in the divergence that the generator aims to minimize. If we use $p_d$ to represent data distribution and $p_g$ the generator distribution, the Diff-Instruct minimizes the integral of the **KL divergence** between the generator and the teacher diffusion model with a form $$\mathcal{D}\_{KL} (p\_g, p\_d) = \mathbb{E}\_{x\sim p\_g}\log \frac{p\_g(x)}{p\_d(x)},$$ while the SIM minimizes the score-based divergence with a form $$\mathcal{D}\_{SD}(p\_g,p\_d) = \mathbb{E}\_{x\sim \pi} d(\nabla\_x \log p\_g(x) - \nabla\_x \log p\_d(x)).$$ KL divergence is notorious for mode-collapse issues because the likelihood ratio $\frac{p_g(x)}{p_d(x)}$ will be ill-defined if $p_g$ and $p_d$ have misaligned density support. However, the score-based divergence does not have such a "ratio", and therefore is safe in the case when $p_g$ and $p_d$ have misaligned support. Besides, our SIM with Phuber distance (equation 3.8) has a self-normalization form, which helps stabilize the training loss to a constant scale which potentially addresses the mode-collapse issue.
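To make the self-normalization point concrete, here is a minimal NumPy sketch (assuming the standard Pseudo-Huber form $d(y)=\sqrt{\|y\|^2+c^2}-c$; this is an illustration, not our training code): the gradient of the Pseudo-Huber distance always has norm strictly below 1, so the per-sample update scale stays bounded even when the score difference becomes very large.

```python
import numpy as np

def pseudo_huber_grad(y, c=1.0):
    # Gradient of the Pseudo-Huber distance d(y) = sqrt(||y||^2 + c^2) - c.
    # Its norm is ||y|| / sqrt(||y||^2 + c^2) < 1, so the gradient is
    # self-normalized no matter how large the score difference y becomes.
    return y / np.sqrt(np.sum(y**2) + c**2)

g_small = pseudo_huber_grad(np.array([0.1, 0.0]))
g_huge = pseudo_huber_grad(np.array([1e6, 0.0]))
print(np.linalg.norm(g_small), np.linalg.norm(g_huge))  # both strictly below 1.0
```

This bounded gradient scale is what keeps the training loss at a roughly constant magnitude and helps stabilize optimization.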
**Q4**. Is the score-divergence theorem mentioned in the diffusion model community?
**A4**. As we introduced in Section 3.2, the Score-divergence gradient Theorem is proved by taking the parameter gradient to both sides of the so-called score-projection identity. The score-projection identity is well-known for proving the equivalence of denoising score matching and classical score matching in [2]. Recently, [3] also used the projection identity to prove value equality when deriving SiD loss.
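For the reader's convenience, here is a brief sketch of the identity in generic notation (a paraphrase; the exact notation in our paper and in [2] may differ). The marginal score is a posterior expectation of the conditional score,
$$\nabla\_{x\_t}\log q\_t(x\_t) = \mathbb{E}\_{x\_0\sim q(x\_0|x\_t)}\left[\nabla\_{x\_t}\log q\_t(x\_t|x\_0)\right],$$
so for any square-integrable vector field $u$,
$$\mathbb{E}\_{x\_0, x\_t}\left[u(x\_t)^\top\big(\nabla\_{x\_t}\log q\_t(x\_t|x\_0)-\nabla\_{x\_t}\log q\_t(x\_t)\big)\right]=0,$$
which is the score-projection identity.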
However, to the best of our knowledge, this is the first time the score-divergence gradient theorem has appeared in the community, making it an important theoretical contribution of our paper. In this paper, we propose and use the score-divergence gradient theorem to prove the gradient equality of the SIM loss and the intractable score-based divergence, rather than value equality. This distinguishes SIM from previous methods such as SiD, which only prove value equality, a property that is insufficient in theory.
**Q5**. Will you release the code?
**A5**. We will release the code if the paper gets published.
**Q6**. Add a recall metric to show that Diff-Instruct suffers from mode-collapse.
**A6**. Thank you for your suggestion. During the rebuttal period, we computed the precision and recall of the DI model. It has a recall of 0.52, which is quite low, indicating its mode-collapse issue.
[1] Multistep Distillation of Diffusion Models via Moment Matching
[2] A Connection Between Score Matching and Denoising Autoencoders
[3] Score identity Distillation: Exponentially Fast Distillation of Pretrained Diffusion Models for One-Step Generation
**We hope our answers have resolved your concerns, and if you still have any concerns, please do let us know.** | Rebuttal 1:
Rebuttal: We appreciate all reviewers for your valuable feedback. In this cell, we address some common concerns.
As **Reviewer G1sA** and **Reviewer Jq2C** requested, we ran two additional experiments to demonstrate the wide applicability of SIM: (1) text-to-3D generation using text-to-2D diffusion, and (2) improving a GAN generator using diffusion models. On both new applications, SIM shows strong performance, demonstrating its broad applicability.
**(1) Text-to-3D generation.** We follow the setting of ProlificDreamer [1], which minimizes the KL divergence for 3D generation. We use the open-sourced ThreeStudio PyTorch implementation of ProlificDreamer.
Result: **Figure 3** of our **added one-page** in the global response shows a qualitative comparison of SIM and ProlificDreamer (VSD). We find that using SIM for text-to-3D generation leads to decent results, visually comparable with ProlificDreamer. This shows that SIM has the potential to be used for text-to-3D generation. However, since our paper focuses on one-step image generation, and a thorough quantitative evaluation of 3D generation is computationally expensive, we only give qualitative visualization results during the rebuttal period. We leave further study and tuning of SIM for 3D generation to future work.
**(2) Improving the StyleGAN2 generator.** In this experiment, we use SIM to improve a pre-trained StyleGAN2 generator with pre-trained EDM diffusion models, following the same settings as Section 4.2 of the Diff-Instruct [2] paper. **Table 1** and **Table 2** in our **added one-page** in the global response show the quantitative results of the generator-improvement experiment. We surprisingly find that SIM is able to improve the pre-trained StyleGAN2 generator better than Diff-Instruct can. This shows that SIM has high potential for improving existing GAN generators, such as StyleGAN-T and GigaGAN. We leave the scaling-up work of improving text-to-image GAN models to future work.
[1] ProlificDreamer: High-Fidelity and Diverse Text-to-3D Generation with Variational Score Distillation
[2] Diff-Instruct: A Universal Approach for Transferring Knowledge From Pre-trained Diffusion Models
As **Reviewer WiWu** and **Reviewer wPU2** requested, and considering the limited time of the rebuttal period, we ran two new experiments to compare the training behavior of SIM and SiD on the **CIFAR10 unconditional generation** task at different training stages: (3) training from scratch, and (4) resuming from a nearly converged generator.
**(3) Comparing SIM and SiD from scratch.** We compare SIM and SiD ($\alpha=1.0$) using SiD's official code and hyperparameters to train a one-step generator from scratch. Both training runs use 8 Nvidia A100-80G GPUs with a total batch size of 128. We use the same default settings as SiD, with the only difference being the loss functions. We log the FID value along the training process as a performance metric. **Figure 1** in our **added one-page** shows the FID comparison of SIM and SiD when training from scratch.
**Result**: both methods can converge steadily. The SIM converges faster than SiD, with a lower FID value at the same number of training iterations.
**Findings**: We find that the weighting function SiD uses is important for stable training. Using this weighting function for SIM improves SIM's performance compared with our original implementation, which makes us appreciate the technical contribution of the SiD authors' high-quality implementation. We also find that training a generator to a very low FID is slow for both SIM and SiD. Given the limited time of the rebuttal period, this motivated our second experiment.
**(4) Comparing SIM and SiD by resuming from a nearly converged generator.** In this experiment, we compare the convergence speed of SIM and SiD when resuming from a nearly converged one-step generator. Because a full training trial is very slow (it may last up to several weeks with 16 GPUs), training until the end is impossible within the short rebuttal period. Therefore, we resume the generator from a checkpoint previously trained with our SiD implementation, which has an FID of 2.8. With this experiment, we aim to inspect the training behavior of SIM and SiD near convergence. We use 4 Nvidia H800 GPUs with a batch size of 128 for both SiD and SIM. All hyperparameters of SIM are set to SiD's default choices, except for a learning rate of 1e-6.
**Result**: **Figure 1** in our **added one-page** shows the FID comparison of SIM and SiD. Both methods converge slowly near convergence, but SIM converges faster than SiD, reaching a lower FID value with the same compute. With a couple of days of training, SIM reaches a minimum FID of 2.06 from 2.8, while SiD's minimum FID remains above 2.15.
Next, we will address each reviewer's concerns in individual rebuttal cells.
Pdf: /pdf/9fa66cc434edf708973d857dfe9d543bf2795972.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper ‘One-Step Diffusion Distillation through Score Implicit Matching’ introduces a novel framework Score Implicit Matching (SIM), to distill pre-trained diffusion models into single-step generator models. This approach achieves almost no performance loss compared to the teacher diffusion model while being data-free i.e. this approach does not require ground truth data.
**Key Insights**
- The paper proposes a flexible class of score-based divergences between the generator model (student) and the diffusion model (teacher).
- Though such divergences cannot be computed explicitly, the gradients of these divergences can be computed via the score-gradient theorem, allowing for efficient training.
- The Pseudo-Huber distance function in SIM results in stronger robustness to hyper-parameters such as learning rate and faster convergence.
- The paper provides a detailed algorithm for implementing SIM, which involves alternating phases of learning the marginal score function and updating the generator model.
**Empirical Performance**
- On CIFAR10, SIM achieves an FID of 2.17 for unconditional generation and 1.96 for class-conditional generation.
- For text-to-image generation, a single-step generator distilled using SIM from a leading diffusion-transformer-based model attains an aesthetic score of 6.42, outperforming other one-step generators like SDXL-TURBO, SDXL-LIGHTNING, and HYPER-SDXL.
**Comparative Analysis**
- SIM is compared to other methods like Diff-Instruct (DI) and Score Identity Distillation (SiD), demonstrating better performance in terms of robustness, convergence speed, and generative quality.
- The empirical results highlight SIM’s superiority, particularly in maintaining the performance of the original multi-step models in a one-step generation framework.
**Practical Applications**
- The data-free nature and robust convergence make SIM a highly efficient method for distilling diffusion models.
- The method shows strong potential for scaling to more complex tasks and larger neural networks.
**Limitations and Future Work**
- The applicability of SIM to other generative models, such as flow-matching models, is yet to be explored.
- While SIM is data-free, incorporating new data might further enhance performance, which is a potential area for future research.
Overall, the paper presents a significant advancement in the field of diffusion model distillation, offering a practical and efficient method for transforming pre-trained multi-step diffusion models into one-step generators without compromising performance.
Strengths: **Originality**
- The paper presents a novel approach - Score Implicit Matching (SIM) to distill pre-trained diffusion models into one-step generator models. This approach stands out by being data-free, which is a significant deviation from conventional distillation methods that often require extensive training data.
- The paper’s key technical insight, the score-gradient theorem, allows computation of gradients for score-based divergences, enabling efficient training. This is a novel contribution that enhances the feasibility of implicit minimization of these divergences.
- Pseudo-Huber Distance Function normalizes the distance vector y_t, which in turn stabilizes training loss and results in robustness to hyper-params and faster convergence.
**Quality**
- The paper provides compelling empirical evidence of SIM's effectiveness on CIFAR10 image generation. The approach also outperforms other one-step generators on aesthetic score on COCO-2017 validation set.
- The authors conduct thorough comparisons with existing methods, such as Diff-Instruct (DI) and Score Identity Distillation (SiD), showing that SIM not only outperforms these methods but also converges faster and is more robust to hyper-parameters.
**Significance**
- SIM makes a significant contribution to the practical deployment of diffusion models in real-world applications, particularly where computational efficiency is critical.
- The robustness and fast convergence of SIM suggest that it can scale to more complex tasks and larger neural networks, making it a valuable tool for future research and application in various domains, including image and video generation.
- The potential of SIM to generalize to other generative models, such as flow-matching models, opens up new avenues for research and application, further enhancing its significance in the field of generative modeling.
**Clarity**
- The paper is well-organized, with a logical flow from problem formulation to the introduction of the SIM method, followed by empirical evaluations and discussions of results.
- Key concepts, such as the score-gradient theorem and the Pseudo-Huber distance function, are explained in detail for reproducibility.
- The tables and figures that present empirical results and algorithmic steps help in better understanding the performance and implementation details of SIM.
The paper presents a substantial advancement in the field of diffusion model distillation, offering a robust and efficient method for converting multi-step diffusion models into one-step generators without compromising performance. The combination of theoretical innovation with strong empirical results underscores the potential of SIM for widespread application and further research in generative modeling.
Overall, "One-Step Diffusion Distillation through Score Implicit Matching" is a well-executed study that addresses a key challenge in diffusion models and provides a promising solution with broad applicability.
Weaknesses: - **Narrow Scope of Comparisons**: The empirical comparisons focus primarily on Diff-Instruct (DI) and Score Identity Distillation (SiD). While these are relevant baselines, doing a broader comparison with other state-of-the-art distillation and generative modeling techniques would provide a more comprehensive evaluation of SIM's relative performance and applicability.
- **Dataset Limitation**: The experiments are mainly conducted on the CIFAR-10, SAM-LLaVA-Caption10M and COCO-2017 validation dataset. Including results on more diverse datasets, such as higher resolution images, different image domains (e.g., medical imaging, satellite imagery), and more complex text-to-image datasets, would strengthen the claim of SIM's broad applicability.
- **Addressing Failure Modes**: The paper does not thoroughly discuss the potential failure modes or limitations of the SIM method. Identifying scenarios where SIM might underperform, and providing insights or hypotheses on why these failures might occur, would be valuable for understanding the boundaries of the method’s applicability and for guiding future improvements.
- **Exploration of Data Incorporation**: While the data-free nature of SIM is highlighted as a strength, the potential benefits of incorporating new data during distillation are mentioned but not explored. Providing preliminary experiments or a more detailed discussion on how incorporating data might enhance the quality of the distilled models would be a constructive addition.
The paper presents significant contributions to the field of diffusion models. Addressing the above weaknesses would further enhance its impact and applicability.
Technical Quality: 3
Clarity: 3
Questions for Authors: Some of these are already mentioned under weakness.
- Evaluating the performance of SIM on different image domains would broaden the applicability of this approach
- Discussing scenarios where SIM would not perform well would help in understanding the boundaries and limitations for this approach and guide future improvements.
Additionally, it would be helpful if the authors could provide more implementation details, such as hyper-parameter tuning strategies and potential pitfalls during the training process.
Misc
- In the appendix, it is mentioned that a human preference study was conducted and the authors collected 30 responses in total. It would be valuable to know the results from the human preference study.
- Grammar/Typos
- Line 130 - ‘... if we choose the sampling distribution to be the diffused ..’
- Line 220 appears to be grammatically incorrect - ‘It is on par with the CTM and the SiD’s official implementation has yet to be released’
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: The authors have acknowledged limitations of the proposed approach.
- The applicability of SIM has not been explored in the space of other generative models such as flow-matching models.
- Although the approach is data-free, the potential benefit of incorporating data is yet to be explored.
I do not see potential negative societal impacts of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad that you like our work. We appreciate your valuable suggestions and will incorporate them in the revision. In the following paragraphs, we address your concerns one by one.
**Q1**. Doing a broader comparison with other distillation and generative modeling techniques would provide a more comprehensive evaluation of SIM's relative performance and applicability.
**A1**. In this paper, we have compared SIM with Diff-Instruct and SiD, showing its soundness on both the theoretical and empirical sides. Another leading diffusion distillation method is the consistency trajectory model (CTM) [1], which uses self-consistency to distill the model. Though CTM has shown strong performance, its performance relies on additional LPIPS and GAN losses, which require ground-truth data and are also tricky to tune. Besides, the CTM model requires running multiple steps of reversed diffusion sampling, making it computationally less efficient than SIM, which does not run multi-step sampling with the teacher diffusion. We will compare with other leading distillation methods in the revision.
**Q2**. Including results on more diverse datasets, such as medical imaging, and satellite imagery would strengthen the claim of SIM's broad applicability.
**A2**. We highly appreciate your suggestion to apply SIM to broader applications. During the rebuttal period, we conducted two more experiments: (1) text-to-3D generation using text-to-2D diffusion, and (2) improving a GAN generator using diffusion models. **Please check our global rebuttal cell for details.** On both new applications, SIM shows strong performance, demonstrating its broad applicability.
**Q3**. Providing preliminary experiments or a more detailed discussion on how incorporating data might enhance the quality of the distilled models would be a constructive addition.
**A3**. We appreciate your keen intuition. One advantage of SIM is its image-data-free property. However, we agree that incorporating training data with SIM could further strengthen it in real-world applications. A possible way to incorporate data is to use an additional GAN discriminator and GAN losses together with the SIM loss, which can inject knowledge from new data into the generator model. Another direction worth exploring is extending the one-step generator to multi-step generative models using variants of SIM, potentially combining techniques from consistency-based models. However, both the GAN loss and the multi-step generalization are more like combinations of different helpful techniques in generative modeling. In this submission, our goal is to thoroughly study the theory and practical performance of SIM. We leave the study of incorporating data with SIM to future work.
**Q4**. Discussing scenarios where SIM would not perform well would help in understanding the boundaries and limitations of this approach and guide future improvements.
**A4**. We appreciate your constructive suggestion. In our text-to-image experiment, we find our SIM-DiT-600M sometimes fails to generate high-quality tiny human faces, hands, as well as legs. However, we find that the teacher PixelArt-$\alpha$ model also has such issues. We believe that a stronger teacher diffusion model will possibly address such failure cases. Please check **Figure 2** in our **added one-page** in global rebuttal for some failure cases.
**Q5**. The human preference study.
**A5**. We apologize for the confusion; we put the human preference comparison of SIM-DiT-600M and the PixelArt-$\alpha$ diffusion model in **Table 3** of our submission. The result shows that the teacher PixelArt-$\alpha$ model slightly outperforms the SIM one-step model, with a winning rate of 54.88\%. We will polish the presentation in the revision.
**Q6**. Implementation details.
**A6**. We will release the code if the paper gets published. In this response, we add more implementation details for the one-step text-to-image generator model. When distilling the generator, we first reformulate the PixelArt-$\alpha$ diffusion with the so-called "data-prediction" formulation proposed in the EDM paper [2]. We then initialize our generator using the "data-prediction" model with a fixed noise sigma of 2.5, following the same setting as Diff-Instruct and SiD. During distillation, we use a fixed classifier-free guidance scale of 4.5 for the teacher diffusion. For the auxiliary diffusion model, we do not use CFG. This setting is inspired by the DMD paper [3]. We will add more implementation details to the Appendix in the revision.
**Q7**. Some typos.
**A7**. We will address the typos and improve our writing in the revision.
[1] Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion
[2] Elucidating the Design Space of Diffusion-Based Generative Models
[3] One-step Diffusion with Distribution Matching Distillation
**We hope our response has resolved your concerns. If you still have any concerns, please let us know.**
---
Rebuttal Comment 1.1:
Title: Acknowledging the Rebuttal
Comment: Thank you for the response. I acknowledge the rebuttal. I still maintain my score as 8 -- Strong Accept. | null | null | null | null | null | null |
RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees | Accept (poster) | Summary: This paper introduces a novel image watermarking technique that injects signals into both the frequency and pixel domains of images. The watermark is specifically designed for the detection of AI-generated images and incorporates smoothing techniques to provide provable error guarantees against attacks under certain conditions. Additionally, the method's simplicity leads to faster watermarking speeds compared to existing techniques.
Strengths: - Unlike most watermarking techniques in the literature which train an encoder decoder model without theoretical guarantees, this paper provides certified error guarantees against a certain range of attacks. This is a novel and valuable addition to the watermarking literature, since reliability is of great importance in this domain.
- While having a fast inference time, the watermark achieves high robustness against some common attacks.
Weaknesses: - A comparison against more recent watermarking techniques is needed. In Table 4, the paper does not provide performance results (i.e., AUROC) against attacks compared to recent methods such as TreeRing [1], Trustmark [2], and RoSteLAS [3]. The included watermarking techniques (i.e., DwtDCT, RivaGAN) are known to be weaker against attacks compared to state-of-the-art watermarks.
- The paper lacks a comprehensive analysis of performance against image manipulations. While Table 4 illustrates performance on various attacks, it is unclear whether the hyper-parameters for these attacks have been set to their full strength. For instance, what hyper-parameters are being used for the VAE and Diff attacks? It is recommended that the authors include image quality metrics for the attacks (e.g., PSNR, SSIM) to demonstrate that their method maintains acceptable performance even under strong attacks that preserve image quality. Additionally, including a wider variety of diffusion attacks (e.g., pixel space and latent space) and potentially adversarial attacks (e.g., model substitution attacks), as discussed in the literature [4][5], would provide a more comprehensive evaluation.
[1] Yuxin Wen, John Kirchenbauer, Jonas Geiping, and Tom Goldstein. Tree-ring watermarks: Fingerprints for diffusion images that are invisible and robust, 2023.
[2] Tu Bui, Shruti Agarwal, and John Collomosse. Trustmark: Universal watermarking for arbitrary resolution images, 2023.
[3] Tu Bui, Shruti Agarwal, Ning Yu, and John Collomosse. Rosteals: Robust steganography using autoencoder latent space, 2023.
[4] Saberi, M., Sadasivan, V. S., Rezaei, K., Kumar, A., Chegini, A., Wang, W., and Feizi, S. Robustness of ai-image detectors: Fundamental limits and practical attacks, 2023.
[5] An, B., Ding, M., Rabbani, T., Agrawal, A., Xu, Y., Deng, C., Zhu, S., Mohamed, A., Wen, Y., Goldstein, T., et al.: Benchmarking the robustness of image watermarks, 2024.
Technical Quality: 3
Clarity: 2
Questions for Authors: - Based on Table 3, RAW achieves a comparably low FID and high CLIP score, suggesting it applies an imperceptible and low-budget watermark signal to the images in the current evaluations. However, a more detailed evaluation of RAW's robustness against provable diffusion attacks [1][2] that claim to break imperceptible watermarks would be valuable. I am curious to know the authors' opinion on how their method might perform against such attacks.
[1] Saberi, M., Sadasivan, V. S., Rezaei, K., Kumar, A., Chegini, A., Wang, W., and Feizi, S. Robustness of ai-image detectors: Fundamental limits and practical attacks, 2023.
[2] Xuandong Zhao, Kexun Zhang, Zihao Su, Saastha Vasan, Ilya Grishchenko, Christopher Kruegel, Giovanni Vigna, Yu-Xiang Wang, and Lei Li. Invisible image watermarks are provably removable using generative ai, 2023.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: - As noted in the limitations section, this technique is designed for use with a single watermark key (w) rather than multiple values, which is a disadvantage compared to existing methods. However, given the importance of the task of detecting AI-generated images, this focus on a single key value is justified and does not limit the value of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for devoting their valuable time to reviewing our paper and offering insightful suggestions for improving it.
> Q: The paper lacks a comprehensive analysis of performance against image manipulations. While Table 4 illustrates performance on various attacks, it is unclear whether the hyper-parameters for these attacks have been set to their full strength. For instance, what hyper-parameters are being used for the VAE and Diff attacks?
**R**: Thank you for your questions regarding the hyperparameters for the attacks. For all the attacks, we followed the public implementation in this GitHub repository (https://github.com/XuandongZhao/WatermarkAttacker) for watermark attacks without any modifications. We will include the hyperparameters for the attacks in the revision.
> Q: It is recommended that the authors include image quality metrics for the attacks (e.g., PSNR, SSIM) to demonstrate that their method maintains acceptable performance even under strong attacks that preserve image quality.
**R**: Thank you for your questions regarding the evaluation metrics.
We report the PSNR and SSIM of the watermarked images in the Table below. For PSNR and SSIM, our method outperforms the StegaStamp method but is slightly behind DwtDctSvd and RivaGAN. These results are consistent with our expectations. The StegaStamp method achieves more robust detection by sacrificing image quality through injecting a large amount of noise into the image, while DwtDctSvd and RivaGAN maintain image quality slightly better at the cost of reduced robustness against (adversarial) image perturbations.
| Method | FID (Lower is preferred) | PSNR (Higher is preferred) | SSIM (Higher is preferred) |
|--------------|---------------------------|----------------------------|----------------------------|
| RAW | 24.75 | 31.9 | 0.91 |
| StegaStamp | 42.31 | 27.8 | 0.88 |
| DwtDctSvd | 25.12 | 37.6 | 0.98 |
| RivaGAN | 25.03 | 35.2 | 0.95 |
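For clarity on the metric itself, PSNR here is the standard peak signal-to-noise ratio in dB, where higher means the watermarked image is closer to the original. A minimal sketch of the usual computation (an illustration, not our evaluation code) is:

```python
import numpy as np

def psnr(img, ref, max_val=255.0):
    # Peak signal-to-noise ratio in dB between a distorted image and its
    # reference: 10 * log10(max_val^2 / MSE). Higher is better.
    mse = np.mean((img.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10 * np.log10(max_val**2 / mse)

ref = np.zeros((4, 4))
noisy = ref + 1.0        # uniform error of one gray level
print(psnr(noisy, ref))  # about 48.13 dB
```

SSIM is computed analogously but over local luminance, contrast, and structure statistics rather than raw pixel error.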
> Q: Additionally, including a wider variety of diffusion attacks (e.g., pixel space and latent space) and potentially adversarial attacks (e.g., model substitution attacks), as discussed in the literature [4][5], would provide a more comprehensive evaluation.
**R**: Thank you for your suggestions regarding the evaluation of the proposed method. We will include more adversarial attacks, as you suggested, in the revision to provide a more comprehensive evaluation. | Summary: This paper proposes a Robust, Agile plug-and-play Watermarking (RAW) framework, which adds learnable watermarks directly on the original image and employs a classifier to detect the presence of the watermark. This design enhances both adaptability and computation efficiency, providing an model-agnostic for real-time applications. This paper show that RAW achieves provable guarantees on the false positive rate (FPR) for detection, even under adversarial attacks.
Strengths: 1. The proposed RAW provides provable guarantees on FPR for watermarking under signal processing and adversarial attacks.
2. RAW shows improved watermarking speed and robustness performance, while maintaining image quality.
Weaknesses: 1. In Equation (5), the watermark is also optimized, which indicates that this method can only add the same watermark. This limits the application.
2. The proposed method still embeds the watermark into the carrier image and is not specifically designed for AI-generated images.
3. In Table 2, some columns lack bolded data, and some bolded entries are not the best results.
4. The compared methods are outdated, failing to highlight the robustness and quality of this approach.
5. Since the carrier is involved in the method, PSNR should be evaluated.
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for devoting their valuable time to reviewing our paper and offering insightful suggestions for improving it.
> Q: The proposed method still embeds the watermark into the carrier image and is not specifically designed for AI-generated images.
**R**: Thank you for your questions regarding the watermark design.
The design of our proposed work aims to be usable by anyone, regardless of how the image is generated, whether by state-of-the-art diffusion models or drawn by an artist. Despite this general-purpose design, empirical evidence has verified the effectiveness of our method on AI-generated images. That said, we will explore how to tailor the current method to better fit AI-generated images in future work.
> Q: In Table 2, some columns lack bolded data, and some bolded entries are not the best results.
**R**: Thank you for your questions regarding the Table 2.
We will bold all the best results for each column in the revision. In Table 2, there are a total of three performance metrics and two metrics regarding the quality of the watermarked images. For encoding speed, our method outperforms other methods by significant margins. For the normal AUROC, all methods are close to each other, and the differences are not significant. For the adversarial AUROC, our method achieves the second-highest AUROC of 0.92 on MS-COCO, slightly behind the StegaStamp method. This is reasonable and consistent with our expectations, as StegaStamp significantly degrades image quality to achieve robustness against image perturbations, as indicated by its significantly higher FID and lower PSNR and SSIM values. We will make this clear in the revision.
Table: Comparison between StegaStamp and RAW (our method) on MS-COCO.
| Method | Adversarial AUROC | FID (Lower is preferred) | PSNR (Higher is preferred) | SSIM (Higher is preferred) |
|--------|-------------------|--------------------------|----------------------------|----------------------------|
| Our method | 0.92 | 24.75 | 31.9 | 0.91 |
| StegaStamp | 0.93 | 42.31 | 27.8 | 0.88 |
> Q: Since the carrier is involved in the method, PSNR should be evaluated.
**R:** Thank you for your suggestion regarding the image quality comparison metrics.
We report the PSNR and SSIM of the watermarked images in the Table below. For PSNR and SSIM, our method outperforms the StegaStamp method but is slightly behind DwtDctSvd and RivaGAN. These results are consistent with our expectations. The StegaStamp method achieves more robust detection by sacrificing image quality through injecting a large amount of noise into the image, while DwtDctSvd and RivaGAN maintain image quality slightly better at the cost of reduced robustness against (adversarial) image perturbations.
| Method | FID (Lower is preferred) | PSNR (Higher is preferred) | SSIM (Higher is preferred) |
|--------------|---------------------------|----------------------------|----------------------------|
| RAW | 24.75 | 31.9 | 0.91 |
| StegaStamp | 42.31 | 27.8 | 0.88 |
| DwtDctSvd | 25.12 | 37.6 | 0.98 |
| RivaGAN | 25.03 | 35.2 | 0.95 |
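As a side note on how PSNR figures like those above are obtained, here is a minimal sketch (not the authors' evaluation code; the 8-pixel sample values are made up purely for illustration):

```python
import math

def psnr(original, watermarked, max_val=255.0):
    """Peak signal-to-noise ratio between two equal-length pixel sequences.

    PSNR = 10 * log10(MAX^2 / MSE); a higher value means the watermark
    perturbs the carrier image less.
    """
    mse = sum((o - w) ** 2 for o, w in zip(original, watermarked)) / len(original)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / mse)

# Toy 8-pixel "images"; values are illustrative only.
orig = [52, 55, 61, 59, 79, 61, 76, 61]
marked = [53, 55, 60, 59, 78, 62, 76, 60]
print(round(psnr(orig, marked), 1))  # → 50.2
```

A per-image PSNR averaged over a test set gives the single number reported in the table.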
---
Rebuttal Comment 1.1:
Title: Final Comments
Comment: My concerns have not been well addressed, and some have even gone unanswered. I have decided to lower my score and lean towards borderline reject. | Summary: This paper proposes a post-processing watermarking strategy that embeds watermarks into images after generation. The strategy is designed to be computationally efficient and model-agnostic. It involves training a learnable watermark and embedding it into both the frequency and spatial domains of the original image. The embedded watermark is later verified by a classifier, which is trained jointly with the watermark.
Strengths: * Addresses an important problem.
* The problem formulation is clear and easy to understand.
* The paper is generally well-presented.
* Experimental results show improvements in adversarial robustness and speed.
Weaknesses: * The survey of existing works in the "Related Works" section is incomplete. Numerous works on watermark frameworks for diffusion models are not included.
* It would be helpful if the authors provided a figure depicting an overview of the proposed method.
* The authors should clarify why they did not use signSGD to optimize the verification model if it is better. What are the data-level optimization problems?
* The reasoning for optimizing the watermark only based on $L_0$ instead of $L_{raw}$ needs to be explained.
* The method is not plug-and-play; it requires training the watermark and the verifier for each target generative model.
* Experimental evaluation:
* For the SDXL model, no comparison with baselines is made.
* Metrics for image quality comparison are limited. The authors should also include metrics like SSIM.
Technical Quality: 3
Clarity: 3
Questions for Authors: see above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: see above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q: The survey of existing works in the "Related Works" section is incomplete. Numerous works on watermark frameworks for diffusion models are not included.
**R**: Thank you for your questions regarding the related works section. We will include more works on watermark frameworks for diffusion models in the revision.
> Q: It would be helpful if the authors provided a figure depicting an overview of the proposed method.
**R**: Thank you for your suggestion regarding the presentation of the proposed method. We have included a figure depicting an overview of the proposed method in the revision.
> Q: The authors should clarify why they did not use signSGD to optimize the verification model if it is better. What are the data-level optimization problems?
**R**: Thank you for your insightful question regarding the optimizer and the data-level optimization problems.
The data-level optimization problem refers to problems where the variables to be optimized are the data themselves instead of typical model parameters. For instance, the problem of finding an adversarial example is a data-level optimization problem ($x$ is an input to a machine learning model $f$, and $y$ is the correct label for $x$):
$\min_{x'} \| x - x' \|_\infty \quad \text{subject to } f(x') \neq y.$ Data-level optimization problems can be difficult to solve, e.g., the iterates may get stuck at suboptimal points [A]. A common approach is to use the signum of first-order gradients [B, C].
The reason for not using signSGD to optimize the verification model is that the verification model is a typical model parameter optimization problem, and signSGD may potentially lead to several optimization issues such as slow convergence speed. We will make this clear in the revision.
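The signed-gradient update for data-level problems can be sketched as follows (a toy illustration of the PGD-style iteration in [B], not the authors' watermark code; the objective and all constants are made up):

```python
def sign(v):
    """Signum of a scalar: -1, 0, or +1."""
    return (v > 0) - (v < 0)

def signed_ascent_linf(x0, grad_fn, eps=0.1, step=0.02, iters=10):
    """Iterate x <- x + step * sign(grad(x)), projecting each coordinate
    back into the L-infinity ball of radius eps around x0."""
    x = list(x0)
    for _ in range(iters):
        g = grad_fn(x)
        x = [xi + step * sign(gi) for xi, gi in zip(x, g)]
        # project back into [x0 - eps, x0 + eps] coordinate-wise
        x = [min(max(xi, oi - eps), oi + eps) for xi, oi in zip(x, x0)]
    return x

# Toy objective f(x) = x_1 + x_2: the gradient is all ones, so every
# coordinate is pushed to the +eps boundary of the ball.
x_adv = signed_ascent_linf([0.0, 0.0], lambda x: [1.0, 1.0])
```

Using only the gradient's sign makes the step size uniform across coordinates, which is why it helps data-level problems but offers no such benefit for ordinary parameter training.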
Refs:
[A] Wang et al., "Probabilistic margins for instance reweighting in adversarial training". In NeurIPS, 2021.
[B] Madry et al., "Towards deep learning models resistant to adversarial attacks". In ICLR, 2018.
[C] Wang et al., "Watermarking for Out-of-distribution Detection". In NeurIPS, 2022.
> Q: The method is not plug-and-play; it requires training the watermark and the verifier for each target generative model.
**R**: Thank you for your question regarding the plug-and-play nature of the proposed method. When referring to the method as plug-and-play, we mean that the watermarking method can be applied to any generative model without modification, regardless of its architecture. This is in sharp contrast to previous works that are designed for specific generative models, such as [8] for stable diffusion with the DDIM sampler only. In addition, we believe that additional training by individual users, e.g., artists, is actually beneficial since the watermark and verifier are tailored to them and possibly would bring better robustness. We will make this clear in the revision.
> Q: The reasoning for optimizing the watermark only based on $L_0$ instead of $L_\text{raw}$ needs to be explained.
**R**: Thank you for your question regarding the optimization procedure.
The main reason for optimizing the watermark based only on $L_0$ instead of $L_\text{raw}$ is that $L_\text{raw}$ involves non-differentiable terms with respect to the watermark, which makes gradient-based optimization infeasible. In detail, recall that $L_\text{raw}$ contains cross-entropy losses calculated on perturbed images, which can involve non-differentiable operations such as JPEG compression.
> Q: Metrics for image quality comparison are limited. The authors should also include metrics like SSIM.
**R:** Thank you for your suggestion regarding the image quality comparison metrics.
We report the PSNR and SSIM of the watermarked images in the Table below. For PSNR and SSIM, our method outperforms the StegaStamp method but is slightly behind DwtDctSvd and RivaGAN. These results are consistent with our expectations. The StegaStamp method achieves more robust detection by sacrificing image quality through injecting a large amount of noise into the image, while DwtDctSvd and RivaGAN maintain image quality slightly better at the cost of reduced robustness against (adversarial) image perturbations.
| Method | FID (Lower is preferred) | PSNR (Higher is preferred) | SSIM (Higher is preferred) |
|--------------|---------------------------|----------------------------|----------------------------|
| RAW | 24.75 | 31.9 | 0.91 |
| StegaStamp | 42.31 | 27.8 | 0.88 |
| DwtDctSvd | 25.12 | 37.6 | 0.98 |
| RivaGAN | 25.03 | 35.2 | 0.95 |
> Q: For the SDXL model, no comparison with baselines is made.
**R**: Thank you for your question regarding the baseline comparison for the SDXL model. We report the results in the following table. We observe that our proposed method outperforms RivaGAN and DwtDctSvd while being slightly behind StegaStamp. This is reasonable and consistent with our expectations, as explained previously.
Table: Comparison between RAW and baselines on images generated by SDXL.
| Method | Average normal AUROC | Average Adversarial AUROC |FID (Lower is preferred) |
|--------------|-------------------------------------|-----------------------------------------|-------------------------|
| RAW | 0.99 | 0.90 | 15.3 |
| DwtDctSvd | 0.99 | 0.76 | 15.2 |
| RivaGAN | 0.99 | 0.80 | 15.6 |
| StegaStamp | 0.99 | 0.92 | 23.4 |
---
Rebuttal Comment 1.1:
Title: Thanks for the response
Comment: The authors have addressed my concerns about the evaluation metric, SDXL model, and optimizer. Thanks! However, I still have concerns about the rest of the questions and consider them as weaknesses. Hence, I decided to keep my score. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Carrot and Stick: Eliciting Comparison Data and Beyond | Accept (poster) | Summary: This paper studies eliciting comparison data with truthfulness guarantees, specifically a strict Bayesian Nash equilibrium. The authors utilize strong stochastic transitivity to define a Bayesian strong stochastic transitivity model that generalizes several existing models. Under this model (i.e., problem setting), the authors propose a peer prediction mechanism based on bonus-penalty payments and show that it achieves symmetric strong truthfulness for comparison data. Moreover, the authors also identify the conditions (in theoretical results) and the corresponding mechanisms that achieve symmetric strong truthfulness for networked data and for a general setting. Empirical results on two real-world datasets show that the proposed mechanism provides a stronger incentive for truthfulness, in that being truthful leads to a higher payment.
Strengths: - The studied problem is important and relevant.
- The theoretical rigor is appreciated.
- The proposed approach seems sound, and the definition of Bayesian SST to generalize several existing models is interesting.
Weaknesses: - An important requirement or assumption is the admissible condition, which ensures that the assignment contains the necessary triplets. While the authors describe that one can create a superset of bounded size to ensure the admissible condition, it appears that in doing so one may need the agents to observe and report additional pairs of items (correct me if I am wrong). If so, in practice one may not always be able to ask the agents to make additional observations on required pairs of items (e.g., customers' reviews of products or services).
- The abstract and introduction motivate the importance of eliciting comparison data by citing several applications and use cases. However, the use cases covered by the empirical results seem less extensive than these.
Technical Quality: 3
Clarity: 2
Questions for Authors: In lines 132-133,
> ... where the randomness of $S(\cdot, \cdot)$ comes from both $\theta$ and $T_{\theta}$
The randomness of $\theta$ is due to $\theta \sim P_{\Theta}$. What is the randomness of $T_{\theta}$? How should one interpret this, and its implications?
What other families of strategies are useful? Can strategically mis-behaving agents be accounted for, for instance to exploit this system to minimize someone's expected payment or intentionally aiming to obfuscate identification of the true comparison or ranking? In other words, are these considered families of strategies sufficient to consider some robustness to mis-behaving agents (and which types)?
In lines 173-174,
> ... she would expect that others will ... prefer $a$ over $a''$
What is the intuition behind this? Why would $a$ be preferred over $a''$ when the agent itself only has information that $a$ is preferred over $a'$?
In line 190-191,
> Hence, theorem 3.1 implies that agents’ manipulations can only decrease the probability of transitivity among their reports.
What is the implication of "decreasing the probability of transitivity among their reports"?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The method requires a specific problem setting or model called the Bayesian strong stochastic transitivity. But it is shown to generalize several other existing models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable input!
***
**Question 1**: What is the randomness of $T_\theta$?
**Answer**: $T_\theta$ is a stochastic comparison function that outputs a random comparison given the parameter $\theta$. The randomness of $T_\theta$ captures noise in observing comparisons. For instance, in the Bradley-Terry-Luce model, $\theta$ encodes each item's quality; fixing $\theta$, the comparison outcome between two items $a$ and $a'$ is still noisy, with the level of randomness determined by $\theta_a-\theta_{a'}$.
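The Bradley-Terry-Luce special case mentioned above can be sketched in a few lines (an illustration only; the paper's model is the non-parametric Bayesian SST generalization, not this parametric form):

```python
import math
import random

def btl_compare(theta_a, theta_b, rng=random):
    """Bradley-Terry-Luce comparison: fixing the quality parameters,
    P[a beats b] = sigmoid(theta_a - theta_b).  Even with theta fixed,
    the outcome stays random; the noise level is governed by the gap."""
    p = 1.0 / (1.0 + math.exp(-(theta_a - theta_b)))
    return rng.random() < p

# A large quality gap makes the comparison nearly deterministic;
# a small gap makes it close to a fair coin flip.
```

Here the draw from `rng.random()` plays the role of the randomness of $T_\theta$, on top of the randomness of $\theta$ itself.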
***
**Question 2**: External incentives: minimizes other's payment of obfuscate identification of the true ranking.
**Answer**: Our setting focuses on agents who only care about their own payment and do not consider agents who have external incentives, for example, incentives to minimize someone else's expected payment or to obfuscate identification of the true comparison. Several previous works, however, offer potential solutions to handle such external incentives. For instance, as [49], we may incorporate robust statistics (or different privacy techniques) to mitigate the impact of a small group of agents attempting to attack one agent's payment or learning outcomes.
***
**Question 3**: Are these considered families of strategies sufficient to consider some robustness to misbehaving agent?
**Answer**: In our problem setting, since we work with binary reports, the only two pure strategies of the agent are truth-telling and flipping. Therefore, any mixed strategy of the agent can be represented by a linear combination of truth-telling and an uninformed strategy (and flipping). Consequently, our comparison of truth-telling and uninformed agents already indicates the higher payment of truthful agents over any other individual strategic behavior. Still, we add experiments of more complex group strategic behaviors to test the robustness of our mechanism. Please see the answer to question 6 in the rebuttal for Reviewer c3bN. Thank you!
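The bonus-penalty payment underlying the mechanism can be simulated on a toy signal model to see why truth-telling beats flipping (a hedged sketch: the correlation probabilities below are made-up illustrative numbers, not the paper's Bayesian SST structure):

```python
import random

def bonus_penalty(report_i, report_j, report_k):
    """Agent i's payment: +1 for agreeing with the bonus peer j,
    -1 for agreeing with the penalty peer k."""
    return int(report_i == report_j) - int(report_i == report_k)

random.seed(0)
n = 20000
truthful = flipped = 0.0
for _ in range(n):
    s_i = random.random() < 0.5
    # Toy correlation structure: j's report tends to agree with i's
    # signal, k's tends to disagree (the 0.8 probabilities are invented).
    s_j = s_i if random.random() < 0.8 else not s_i
    s_k = (not s_i) if random.random() < 0.8 else s_i
    truthful += bonus_penalty(s_i, s_j, s_k)
    flipped += bonus_penalty(not s_i, s_j, s_k)
# Truth-telling earns about +0.6 per report in expectation; flipping
# earns about -0.6, so any mixture of the two pays less than truth-telling.
```

The mechanism's real contribution is choosing the peers $j$ and $k$ (the uniformly dominant tuple) so that this agreement asymmetry provably holds under Bayesian SST.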
***
**Question 4**: Line 173-174. Why would $a$ be preferred over $a''$ when the agent itself only has information that $a$ is preferred over $a'$?
**Answer**: When an agent observes that $a$ is preferred over $a'$, denoting this as $T(a,a')=1$, the agent considers two conditional probabilities for any given third item $a''$: $P[T(a'',a')=1|T(a,a')=1]$ and $P[T(a'',a)=1|T(a,a')=1]$. The first is the conditional probability that the third item $a''$ is preferred over $a'$, and the second is the conditional probability that the third item $a''$ is preferred over $a$. Since the agent himself thinks $a$ is better than $a'$, it is more likely that the third item $a''$ is better than $a'$ than better than $a$; that is, the first conditional probability is higher than the second. We'll provide more explanation on this point in the paper.
***
**Question 5**: Line 190-191. What is the implication of decreasing the probability of transitivity among their reports.
**Answer**: Given three items $a,a',a''$ and three comparison outcomes $x = 1[a\prec a']$, $y = 1[a'\prec a'']$, and $z = 1[a''\prec a]$, these comparison outcomes satisfy transitivity if they form one of the $3!$ valid rankings. As agents' noisy comparisons are random, some comparison outcomes satisfy transitivity and some do not. The probability of transitivity is the probability that noisy comparison outcomes satisfy transitivity.
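This can be made concrete with a small enumeration (a sketch; the indicator variables follow the notation in the answer above):

```python
from itertools import product

def transitive(x, y, z):
    """x = 1[a < a'], y = 1[a' < a''], z = 1[a'' < a].  Because the third
    comparison is oriented backwards, the three outcomes are consistent
    with some ranking unless they cycle, which happens exactly when all
    three indicators agree."""
    return not (x == y == z)

# 6 of the 2^3 = 8 joint outcomes correspond to the 3! = 6 valid rankings.
valid = sum(transitive(x, y, z) for x, y, z in product([0, 1], repeat=3))
print(valid)  # → 6
```

Under truthful noisy reports the transitive outcomes are more likely than the two cycles; manipulation can only shrink that gap.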
***
**Question 6**: The method requires a specific problem setting or model called the Bayesian SST.
**Answer**: We emphasize that Bayesian SST model is general. It is not a restriction for the comparison data application. Instead, it is a non-parametric generalization of several most commonly used parametric ranking models.
***
**Question 7**: Admissible assignments.
**Answer**: We do not require a single agent to compare additional pairs of items; instead, we only need that some other agents are assigned to those pairs. The admissible condition requires some control over the assignment $\mathcal{E}$. This is reasonable for the purpose of rank learning because these algorithms also need certain controls on $\mathcal{E}$. Otherwise, it would be impossible to rank an item if it is never compared to any other item. Moreover, several works on rank learning (e.g., [51] and Braverman and Mossel) require $\mathcal{E}$ to include all possible pairwise comparisons, which automatically satisfies our admissible condition. Several platforms (including Amazon) actively encourage users to submit reviews on specific items, which can be considered a method to control $\mathcal{E}$.
***
**Question 8**: Empirical results are less extensive.
**Answer**: In the attached PDF, we run our mechanism on a new dataset: the HuggingFace H4 Stack Exchange Preference Dataset, which is a dataset used to align LLMs with human preferences. The experiment on this new dataset aligns with our introduction and abstract of improving the data quality of human preference training for LLMs. The dataset contains questions and their corresponding answers, each with voting data. In our experiment, we treat each vote as the report of an agent. For example, suppose a question has three answers, $a_1, a_2, a_3$, with vote numbers $v_1, v_2, v_3$. A vote (downvote) for $a_1$ means that the agent reports $a_2 \prec (\succ) a_1$ and $a_3 \prec (\succ) a_1$. Hence, there are $|v_1| + |v_2| + |v_3|$ agents in this example. We treat the original agents in the dataset as truth-telling agents. As in the experiment on the SUSHI dataset, we compare the payments of truth-telling agents with uninformed agents and unilateral strategies (see the first two figures). The ECDF of payments for truth-telling agents clearly dominates.
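The ECDF-dominance claim can be checked mechanically; the sketch below (with made-up payment samples, not the dataset's actual numbers) tests first-order stochastic dominance via empirical CDFs:

```python
def ecdf(sample):
    """Empirical CDF: F(t) = fraction of the sample that is <= t."""
    xs, n = sorted(sample), len(sample)
    return lambda t: sum(1 for x in xs if x <= t) / n

# Illustrative payment samples only.
truthful = [2, 1, 2, 0, 1, 2]
uninformed = [0, -1, 1, 0, -2, 1]
F_t, F_u = ecdf(truthful), ecdf(uninformed)

# Truthful payments dominate if their ECDF lies below (or on) the
# uninformed ECDF at every observed payment level.
grid = sorted(set(truthful + uninformed))
dominates = all(F_t(t) <= F_u(t) for t in grid)
print(dominates)  # → True
```

Checking only at the pooled sample points suffices because both step functions are constant between them.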
***
**Reference**:
Braverman, Mark, and Elchanan Mossel. "Noisy sorting without resampling." Proceedings of the
nineteenth annual ACM-SIAM symposium on Discrete algorithms (SODA), 2008.
---
Rebuttal Comment 1.1:
Title: Post rebuttal
Comment: I thank the authors for their prepared response. Most of my questions are clarified and I am happy to maintain my current recommendation of weak accept.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for acknowledging this! | Summary: This paper considers the problem of eliciting pairwise comparisons truthfully from strategic agents in the absence of ground truth. It considers the popular framework of peer prediction which is a class of mechanisms where reports from "peer" agents are used as a proxy for ground truth in order to design a payment scheme. The problem with using existing peer prediction mechanisms is that they mainly work for settings where tasks are drawn from the same distribution. However, this does not apply to the case of pairwise comparisons, as the two items being compared can be very different.
This paper designs a new peer prediction mechanism that is based on the Dasgupta-Ghosh mechanism where the agents also suffer a penalty in addition to rewards so as to avoid spurious "agree to agree" equilibria. The main novelty of the paper comes from the connection it makes to the well-known strong stochastic transitivity (SST) condition for pairwise comparison data. Under the assumption that pairwise comparisons satisfy SST condition, the mechanism cleverly utilizes this condition to ensure that the reward term exceeds the penalty term for truthful agents which is one of the main requirements for truthfulness. The paper also considers other elicitation tasks beyond pairwise comparisons, such as eliciting networked data, and give a general framework for extending.
Strengths: In my opinion, the paper makes a good contribution to the literature on peer prediction. While the proposed mechanism is mainly based on the Dasgupta-Ghosh mechanism, it is a principled way of extending the Dasgupta-Ghosh mechanism to heterogenous tasks and utilizing task specific properties (such as SST) for incentive design. I really like that the paper connects the peer prediction literature to the ranking literature in a meaningful manner.
Weaknesses: I am only concerned about the relevance of this paper to NeurIPS, as this is mainly a mechanism design paper. But given the usefulness of pairwise comparisons in training LLMs, there is definitely a need for truthful elicitation of comparison data.
The results on networked elicitation seem a bit disconnected from the results on elicitation of comparison data.
Technical Quality: 4
Clarity: 4
Questions for Authors: Perhaps the framework also works for weak stochastic transitivity?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: I did not find adequate discussion about limitations. Perhaps a section on limitations of the mechanism and peer prediction in-general might be useful.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We greatly appreciate your valuable input! Below, we will address your question in this paper.
***
**Question**: Weak stochastic transitivity.
**Answer**: This is an interesting question. We conjecture that weak stochastic transitivity alone is not sufficient. Our proof requires that $\Pr[T_\theta(a'', a') = 1\mid T_\theta(a, a') = 1]$ is larger than $\Pr[T_\theta(a'', a) = 1\mid T_\theta(a, a') = 1]$, but weak stochastic transitivity only determines whether each term is greater than $1/2$, not its magnitude. For instance, we can make $\Pr[T_\theta(a'', a') = 1\mid \theta]$ always close to $1/2$ and push $\Pr[T_\theta(a'', a) = 1\mid \theta]$ to almost one whenever $\Pr[T_\theta(a'', a) = 1\mid \theta]\ge 1/2$. Thus, the conditional probability $\Pr[T_\theta(a'', a) = 1\mid T_\theta(a, a') = 1]$ can be larger than $\Pr[T_\theta(a'', a') = 1\mid T_\theta(a, a') = 1]$, which breaks our proof. That being said, our mechanism may still be symmetrically strongly truthful under Braverman and Mossel's model, which is weakly but not strongly stochastically transitive.
***
**Reference**:
Braverman, Mark, and Elchanan Mossel. "Noisy sorting without resampling." Proceedings of the nineteenth annual ACM-SIAM symposium on Discrete algorithms (SODA), 2008.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! I have no further questions and would like to keep my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging this! | Summary: * This paper proposes a peer prediction mechanism for elicitation of comparison data.
* In the model, there is a collection of items $A$, and a set of agents $N$ which privately observe noisy comparisons between items. Comparisons are characterized by a stochastic comparison function $T_\\theta$, parameterized by $\\theta \\in \\Theta$ and drawn from a common prior $P_\\Theta$.
* A peer prediction mechanism assigns a pair of items for each agent $i$ to compare. Each agent then observes a noisy comparison signal $s_i$ and strategically reports a possibly different value $\\hat{s}_i$ to maximize their ex-ante payment $\\mathbb{E} [M_i (\\hat{\\mathbf{s}})]$. The goal is to design a mechanism $M$ which elicits truthful reporting.
* The authors present Mechanism 1 for elicitation of comparison data, and prove that it is symmetrically strongly truthful (Theorem 3.1). The mechanism is based on a bonus-penalty payment function, which for agent $i$ awards agreement with some agent $j$, and penalizes agreement with some agent $k$.
* Section 5.1 presents a generalization of the mechanism to networked data, proving that it is symmetrically strongly truthful for signals sampled from the graph Ising model (Theorem 5.1). Section 5.2 further generalizes by showing a mechanism which is symmetrically strongly truthful for data with uniform dominance.
* Finally, empirical evaluation is performed on two datasets (Sushi preferences and Last.fm). Results seem to show stochastic dominance of payout in truthful reporting vs. random reporting.
Strengths: * Presentation is clear and concise. Intuition is given for formal proofs.
* The concept of uniform dominance is interesting and seems to be generalizable.
* Theoretical results seem to be quite general, and applicable to many pairwise choice models.
Weaknesses: * Empirical evaluation compares truthful reporting against benchmarks which seem relatively “simple” - uninformed random reporting, as opposed to more sophisticated forms of strategic behavior.
* Empirical evaluation procedure is not sufficiently clear. Method of constructing the graphs in the empirical section is not presented sufficiently formally. Graph legends and captions are uninformative.
* Implications of assumptions are not discussed in detail. Examples for such assumptions - assuming symmetric deviation, the ability to penalize users, assuming that items are a priori similar but ex-post distinct. See below.
Technical Quality: 3
Clarity: 2
Questions for Authors: * What is the relation between Mechanism 3 and Mechanisms 1,2? Is it possible to think about Theorem 3.1 and Theorem 5.1 as corollaries of Theorem 5.2? If not, how do they fundamentally differ?
* “We assume items are a priori similar but ex-post distinct” L106 - What are the implications of this assumption? When does it apply, and does it restrict generality?
* For the given mechanism, is it possible to have a better BNE which is not symmetric? (e.g, a beneficial equilibrium where only some agents report truthfully?)
* What would be the implications of agents having limited liability? (e.g if the agents can choose not to participate in the game when they may get negative reward)
* Possible typo in L318: “ The figure shows that only about 50% of the agents using an uninformed random strategy receive positive payments, while over 75% of the users in the original dataset receive positive payments” - Percentages seem to add to more than 100%.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Results seem to rely on relatively strong assumptions, such as symmetric deviation, and the ability to penalize users. Despite that, such limitations don’t seem to be discussed in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your valuable input.
***
**Question 1**: What are the relations between Mechanism 3 and Mechanisms 1 and 2? Is it possible to think about Theorem 3.1 and Theorem 5.1 as corollaries of Theorem 5.2?
**Answer**: Mechanism 3 offers a general scheme for designing peer prediction mechanisms, which requires finding uniformly dominant tuples. Identifying tuples that are uniformly dominant in different settings is a critical and nontrivial task. Mechanism 1 and Mechanism 2 identify uniformly dominant tuples for the pairwise comparison setting under the Bayesian SST model and the networked data setting under the Ising model, respectively. Theorem 3.1 and Theorem 5.1 can be viewed as instantiations of Theorem 5.2, with identifying and proving uniformly dominant tuples in the respective settings as a main contribution.
***
**Question 2**: What are the implications of a priori similar but ex-post distinct?
**Answer**: The a priori similar assumption implies that agents do not favor any item before observing their noisy comparison. Our mechanism allows agents to know the assignment $\mathcal{E}$ in advance. The a priori similar assumption means that, before observing the noisy comparison, agents do not have a strict ordering of any pair of items even with knowledge of the assignment $\mathcal{E}$. This assumption can be relaxed if we further randomize the assignment $\mathcal{E}$ by a uniform random permutation.
The ex-post distinct assumption requires that the realized state $\theta$ is distinct for each item. Under the Bradley-Terry model (a special case of our Bayesian SST), this assumption means that no two candidates have the same realized scalar quality. The Mallows model (another special case of ours) always satisfies this assumption.
***
**Question 3**: Limited liability? (e.g., if the agent can choose not to participate in the game when they may get a negative reward)
**Answer**: We do not require our mechanism to penalize users (i.e., give them a negative payment) because any positive affine transformation of the payment function preserves the equilibrium. For example, we can add a constant of 1 to the payment function in Equation (2). This ensures that the payment is either 2 or 0.
***
**Question 4**: Possible typo: ...Percentages seem to add to more than 100%.
**Answer**: Our experiment shows the histogram of agents' payments under three different settings: (1) Truth-telling, where every agent uses the truthful strategy; we treat the original data as agents' true preferences. (2) Uninformed, where every agent plays the same uninformed random strategy. (3) Unilateral deviation, where a single agent plays a non-truthful strategy while all other agents report truthfully.
The numbers are for different settings and do not sum to 100%. We will rewrite the sentence as follows: "The figure shows that in the uninformed random strategy setting only about 50% of the agents receive positive payments, while in the original dataset (truthful strategy setting) over 75% of the users receive positive payments".
***
**Question 5**: Symmetric deviation.
**Answer**: Our symmetric strong truthfulness does not require symmetric deviation. The truth-telling strategy profile is a Bayesian Nash equilibrium where any individual's unilateral deviation results in a worse payment for that agent. Additionally, we show that our mechanism ensures the truth-telling strategy profile is better (gives a higher payoff) than any other symmetric strategy profile. Considering symmetric strategy profiles is reasonable because they do not require complicated coordination among all agents.
***
**Question 6**: More sophisticated forms of strategic behavior.
**Answer**: In our problem setting, since we work with binary reports, the only two pure strategies of the agent are truth-telling and flipping. Therefore, any mixed strategy of the agent can be represented by a linear combination of truth-telling and an uninformed strategy (and flipping). Consequently, our comparison of truth-telling and uninformed agents already indicates the higher payment of truthful agents over any other individual strategic behavior. Still, we add experiments of more complex group strategic behaviors to test the robustness of our mechanism.
In the attached PDF, we ran the experiment on all datasets with a new group strategic behavior to further stress test our mechanism. For each dataset, we randomly divided the truth-telling agents originally in the dataset into two groups of equal size. In the first group, the agents remained truthful, but in the second group, the agents were replaced by uninformed agents. We then mixed the two groups of agents and applied our mechanism to the mixed group. Finally, we compared the payments of agents in the two groups (see the last 3 figures for the results). The results show that the payments of truth-telling agents still dominate the payments of uninformed agents.
Also, we run our experiments on a new dataset: HuggingFace H4 Stack Exchange Preference Dataset, a dataset used to align LLMs with human preferences. The dataset contains questions and their corresponding answers, each with voting data. In our experiment, we treat each vote as the report of an agent. We compare the payments of truth-telling agents with the new group behavior (see the third figure). The ECDF of payments for truth-telling agents clearly dominates. Please find the detailed setting in our global rebuttal.
***
**Question 7**: Is it possible to have a better BNE which is not symmetric?
**Answer**: From the mechanism designer’s perspective, the truth-telling equilibrium is the best equilibrium for the purpose of information elicitation. We focus on equilibria with symmetric strategy profiles. Our results do not rule out the possibility that there is
some asymmetric equilibrium leading to higher agent payoff. However, such an asymmetric equilibrium
may require complicated coordination and hence is difficult to reach.
---
Rebuttal Comment 1.1:
Comment: We would greatly appreciate your response to our rebuttal, which, in our view, effectively addresses your main concerns. If there are lingering questions, we would gladly engage in a discussion. | null | null | Rebuttal 1:
Rebuttal: Please find our additional experiment results attached. Thanks!
In the first three figures, we run our experiments, including the new group strategic behavior, on a new dataset: the HuggingFace H4 Stack Exchange Preference Dataset, which is a dataset used to align LLMs with human preferences. The dataset contains questions and their corresponding answers, each with voting data. For example, suppose a question has three answers, $a_1, a_2, a_3$, with vote numbers $v_1, v_2, v_3$. A vote (downvote) for $a_1$ means that the agent reports $a_2 \prec (\succ) a_1$ and $a_3 \prec (\succ) a_1$. Hence, there are $|v_1| + |v_2| + |v_3|$ agents in this example. We treat the original agents in the dataset as truth-telling agents. In our experiment, we treat each vote as the report of an agent. As in the experiment on the SUSHI dataset, we compare the payments of truth-telling agents with uninformed agents, unilateral strategies, as well as the new group behavior (which is introduced in the rebuttal for Reviewer c3bN). The ECDF of payments for truth-telling agents clearly dominates. Due to limited computing resources, we only selected the first 100 questions in the dataset with at least three answers with nonzero votes.
The last three figures are the experiment of the new group behavior on our original datasets.
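As an aside, the ECDF-dominance comparison used throughout these experiments can be sketched in a few lines. This is a minimal illustration on synthetic payment data, not the actual experiment; the function name `dominates` and the numbers are ours:

```python
import numpy as np

def dominates(payments_a, payments_b):
    """Check that the ECDF of payments_a lies at or below that of payments_b
    on a shared grid, i.e., payments_a first-order stochastically dominates."""
    grid = np.union1d(payments_a, payments_b)
    cdf_a = np.searchsorted(np.sort(payments_a), grid, side="right") / len(payments_a)
    cdf_b = np.searchsorted(np.sort(payments_b), grid, side="right") / len(payments_b)
    return bool(np.all(cdf_a <= cdf_b))

# Synthetic payments: truth-telling agents earn more than uninformed agents.
truthful = np.array([0.8, 0.9, 1.0, 1.1, 1.2])
uninformed = np.array([0.3, 0.4, 0.5, 0.6, 0.7])
print(dominates(truthful, uninformed))  # True
```

In the figures, the same comparison is shown visually: the truthful agents' ECDF curve sits to the right of (weakly below) the uninformed agents' curve everywhere.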
***
**Reference**:
Lambert, Nathan, Lewis Tunstall, Nazneen Rajani, and Tristan Thrush. "HuggingFace H4 Stack Exchange Preference Dataset." Hugging Face, 2023.
Pdf: /pdf/56c03222d2d7d8450ea487c4e49f6b59dda33820.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Recovering Complete Actions for Cross-dataset Skeleton Action Recognition | Accept (poster) | Summary: This paper presents an innovative recover-and-resample augmentation framework to tackle the domain generalization challenge in skeleton-based action recognition. Utilizing the concept of a complete action prior, this method reconstructs entire action sequences from partial observations and resamples them to generate robust augmentations for unseen domains. The approach's effectiveness is confirmed through comprehensive experiments on three skeleton action datasets, showing considerable improvements over existing domain generalization methods.
Strengths: 1. Domain generalization of skeleton-based human action recognition is an interesting and underexplored research direction. The authors propose a good test bed for this task, which can benefit the community.
2. The authors propose a new augmentation method which is verified to be effective on various DG settings.
3. Comprehensive ablation studies are delivered by the authors to show the efficacy of the proposed approach.
Weaknesses: 1. Domain generalization is mostly studied on RGB images, since background changes and camera positions cause obvious, large distribution shifts. For skeleton data, which is mostly recorded as 3D coordinates without background information, the domain gap is already reduced. The authors should justify the need for domain generalization in skeleton-based human action recognition in more detail in the introduction section, since the utilized datasets, e.g., NTU120, involve cross-view and cross-subject evaluation, which can also be regarded as domain generalization.
2. In Section 4.3, how did the authors implement these approaches on the skeleton-based human action recognition methods? More details are expected to be enriched.
3. What is the number of parameters for the proposed approach? The authors are encouraged to make comparison between the proposed approach and the baselines regarding the number of parameters.
4. The authors should provide more clarification regarding why they chose those GCN backbones, e.g., HCN and CTRGCN. Maybe can be described as a small subsection in Sec. 4.
5. TSNE visualization is interesting on the test domain to see the learnt embeddings from different approaches.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Can the authors elaborate more on why domain generalization is critical in the context of skeleton-based human action recognition, despite the inherent reduction in domain gaps provided by 3D coordinates?
2. Why did the authors choose HCN and CTRGCN as the GCN backbones for their approach? Can this be detailed further, perhaps as a subsection in Section 4?
3. Could the authors provide a more detailed explanation of how the approaches in Section 4.3 were implemented for skeleton-based human action recognition methods, including any specific techniques or modifications used?
4. What is the total number of parameters in the proposed approach, and how does it compare to the baseline methods?
5. Can the authors provide t-SNE visualizations of the learned embeddings from different approaches on the test domain to illustrate the effectiveness of the proposed method compared to others?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: yes it is in appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer UadT:
Thank you for your effort in reviewing our paper and for your kind suggestions. We really appreciate your positive comments on our paper.
[Q1] Can the authors elaborate more on why domain generalization is critical in the context of skeleton-based human action recognition, despite the inherent reduction in domain gaps provided by 3D coordinates?
[A1] Thanks for the suggestion. In the revision we will rewrite the second paragraph of the introduction to elaborate on the importance of studying domain generalization in skeleton-based human action recognition, as follows:
Skeleton-based action representation removes background changes and camera positions, making it more robust than RGB representation. Even so, generalizability under this representation is still largely affected by the inherent spatiotemporal differences in the 3D coordinates of the same action across domains. Cross-subject and cross-view settings are essentially cross-domain settings, yet they can be well addressed by designing more powerful backbones and applying geometric transformations, achieving high accuracy on the test set. However, we find that in the cross-dataset setting, where source and target data come from different datasets, performance degrades substantially (by around 20% or more; compare Tables 1 and 15) and cannot be remedied by the above approaches. This indicates drastic domain gaps in the inherent features of human actions across datasets, posing great challenges for real-life use and calling for research on domain generalization techniques for skeleton-based representations.
Investigating action samples across multiple datasets, our observation is that a notable source of domain gap comes from the temporal mismatch of an action across different datasets (Fig. 1(a)), which is usually caused by different definitions or cropping criteria of human motions…
[Q2] Why did the authors choose HCN and CTRGCN as the GCN backbones for their approach? Can this be detailed further, perhaps as a subsection in Section 4?
[A2] Thanks for the suggestion. We will add a new subsection to Section 4 briefly describing the backbones used in the experiments: AGCN, HCN, CTR-GCN, and ST-GCN. (1) We use Adaptive-GCN with a reduced number of blocks for our main experiments for its good balance between efficiency and performance. (2) HCN is a convolutional network used by [41]; we use it for a fair comparison in their setting, as shown in Table 3. (3) ST-GCN and CTR-GCN are representative GCN backbones. CTR-GCN has many more parameters and a multi-level feature design, while ST-GCN is a simple GCN without elaborate design. We use them to show that backbone design alone hardly improves generalizability. (4) We will move the results on generalizability for different backbones to the main paper.
[Q3] Could the authors provide a more detailed explanation of how the approaches in Section 4.3 were implemented for skeleton-based human action recognition methods, including any specific techniques or modifications used?
[A3] Thanks for the suggestion. We will add more details to Appendix A3: other baseline methods.
[Q4] What is the total number of parameters in the proposed approach, and how does it compare to the baseline methods?
[A4] (1) During model training, the newly added parameters are only two matrices, i.e., boundary poses of shape (N_bkg, J, 3) and linear transforms of shape (N_tr, T, T), where N_bkg=10, J=25, N_tr=20, T=64. Since AGCN itself has a large number of parameters, our learned parameters for action completion add very little to the total (around 90,000), which is less than two FC layers (assuming each layer is nn.Linear(256, 256)). (2) Compared to other methods: general domain generalization methods (CCSA, ADA) and handcrafted augmentation methods such as uniform sampling, mixup, CropPad, and CropResize do not introduce new parameters. Self-supervised learning methods often add a new branch alongside the original network for learning auxiliary tasks, thereby introducing new parameters (usually several FC layers). So in general, ours and those self-supervised methods have a similar magnitude of new parameters.
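The parameter count above can be verified with simple arithmetic. The shapes are those stated in the answer; the FC-layer baseline assumes nn.Linear(256, 256) with bias:

```python
# Parameter count of the action-completion module, using the stated shapes:
# boundary poses of shape (N_bkg, J, 3) and linear transforms of shape (N_tr, T, T).
N_bkg, J = 10, 25
N_tr, T = 20, 64

boundary_pose_params = N_bkg * J * 3    # 750
transform_params = N_tr * T * T         # 81920
total = boundary_pose_params + transform_params
print(total)  # 82670, i.e., on the order of the ~90,000 quoted above

# Two fully connected 256x256 layers (weights + biases) for comparison.
two_fc_layers = 2 * (256 * 256 + 256)   # 131584
print(total < two_fc_layers)  # True
```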
[Q5] Can the authors provide t-SNE visualizations of the learned embeddings from different approaches on the test domain to illustrate the effectiveness of the proposed method compared to others?
[A5] Thanks for the suggestion. We will add a t-SNE plot in the revision; t-SNE is a good visualization tool for inspecting learned features.
---
Rebuttal Comment 1.1:
Title: Response to the author
Comment: Thank you for the detailed response. I will keep my score as 6 according to the contribution of this work. | Summary: The work proposes a novel recover-and-resample augmentation framework for domain generalization with application to skeleton-based action recognition. The authors aim to tackle a specific issue when moving from one dataset to another, i.e. the temporal misalignment of actions of the same class. In the experimental analysis, they provide comparisons with existing approaches and ablation studies.
Strengths: - The paper considers an important task, related to enriching the size, quality and variability of a dataset to finally improve the generalization abilities of trained models
- The methodology is modular, with intermediate outputs that can give valuable insights on the interpretation and the possibility of replacing the different tools with alternative choices
- The experimental analysis is very extensive. The protocol is well described and largely allows for reproducibility
Weaknesses: - The introduction fails to guide the reader in understanding the motivations, methodology, and challenges. The motivations do not follow a clear storyline, and the description of the methodology is not fully comprehensible. The language also needs improvement (the syntax of some sentences should be fixed)
- Also the SoA presentation fails to fully convince the reader of the value of the contributions of the proposed approach. The paper is related to different tasks and a more guided tour of the SoA would help to appreciate the contributions. Also, Domain generalization and Data Augmentation are two very important topics for the work, but the SoA discussion is very limited
- The methodology lacks technical details and appropriate, clear justifications for the different operations.
- It is not clear if the results are fully reproducible. To my understanding, the experimental protocol is not used in other papers and there are no details on the implementations that have been adopted, so it is very difficult to judge the results (although the authors say the code will be available upon publication, indications on the adopted implementations would be useful)
Technical Quality: 1
Clarity: 1
Questions for Authors: I would ask for clarifications on the following points
1.Introduction
- What do you mean by "It is actually a form that humans perform generally complete actions within large datasets"? If this is the case, what's the importance of alignment?
- "Although this prior can be detected by statistics in a general sense, in terms of individual samples, some exhibit strong action completeness while some are segments of their complete actions".
- On Fig. 1(b): the behaviour of NTU and PKU seems different from the one of ETRI (more variability in the initial poses for the latter), while in your comments ETRI is treated on par with the other two.
- "By studying the relationships between their raw and trimmed pairs, we can learn a temporal patterns inherited in human actions."
- You named part of the method "Complete action prior" but the term seems never used in the next sections
- In addition to these questions, a more general comment is that the introduction fails to convey the message, which should be a clear and convincing description of the methodology. The picture does not fully help the understanding
2.Related Work
- Some sentences are not well justified, for instance, "These approaches partially improve the generalizability across datasets but are still bounded by the specific augmentation design." or "...but they do not make full use of the skeleton representations". An intuition of the meaning of statements of this type and in what sense the proposed approach is better would be beneficial
3.Method
- "On the other hand, the statistical finding that the initial poses of skeletal action sequences have low feature diversity (shown in Fig. 1 (b)) also validates our assumption: the boundary poses are to some extent constrained and similar to each other." This does not seem true for ETRI
- If the actions in the training dataset all start from the rest position (so they are all aligned), how does the proposed method help deal with misalignment when changing the dataset?
- With the extrapolation, is the length T remaining the same?
- "Note that the above nonlinear transform is still unable to capture global and structural patterns inherited in human actions". Why?
- Why is it useful to reorganize the frames?
- What are the differences between the proposed work and [17]?
- Eq. 4: in my understanding k_i should be an index between 1 and T, but from the formula it does not seem so
- How do you generate the similarity matrix?
- Why should you cluster the W? The need for clustering is not clear
- From the loss function, it seems that each training sample is paired with an "augmented" sample, which is in contrast with the use of a parameter m_aug
4. Experiments
- A quantification (or at least an idea) of the level of misalignment of the different datasets would be useful for the interpretation of the results
- Reporting the results with P as training would still be useful as a sanity check
- The transfer sub-settings used in the experiments are different from [41] (which, according to the authors, is the only one reporting results for cross-dataset settings. Why this choice?
- What implementations have you used for obtaining the results in the tables?
- It seems that Kinetics, mentioned among the adopted datasets, never appears in the experiments
- For the NTU datasets, have you considered the cross-view or the cross-subject problem?
- ERM corresponds to what version of the method?
- When reporting the per-class results, what about the actions that are not mentioned? It would be useful to have a more general idea
- The experiment briefly described in "Generalizability for different backbones" is actually important and should go in the main paper (discussed more in-depth)
Minor
- I would not mention the use of NNs for the motion infiller in the methodology description
- The description of the experiment with P51, N51, N12, and K12 should be moved above, with the other details on the transfer settings
[After reading the other reviews and the rebuttal from the authors, I'm happy to increase my score as most of my main concerns have been clarified]
Confidence: 3
Soundness: 1
Presentation: 1
Contribution: 2
Limitations: Limitations and potential societal impacts are discussed in the Appendix
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Vnjk:
Thanks for reviewing our paper. Due to space limit, we answer main questions regarding the soundness of the paper. Feel free to raise further questions.
Introduction
[Q1.1] What do you mean by "human performs generally complete actions within large datasets" and "in terms of individual samples, some exhibit strong action completeness while some are segments of their complete actions"?
[A1.1] The two statements are not contradictory: within a large dataset, some action categories/samples are nearly complete, while others are partial segments of complete actions. Meanwhile, the overall statistics show a non-uniform diversity curve (meaning generally complete actions, as opposed to uniform diversity). The insight is that the boundary poses and transforms mined from those complete categories/samples can be used to perform action completion on the incomplete ones, so we call them transferable knowledge (lines 47-48).
[Q1.2] Clarify "the above nonlinear transform is still unable to capture global and structural patterns inherited in human actions"
[A1.2] Sorry, "capture" should be "restore". Take phone calling as an example. Normally, a complete phone-calling action consists of raising the arm, keeping the hand close to the head, and putting down the arm. During recovery, for an NTU sample (Fig. 1(a)-NTU), we need to apply mirroring so that the action becomes symmetric in time and thus complete. For an ETRI sample (Fig. 1(a)-ETRI), we first have to extrapolate the beginning pose to a rest pose, and then apply mirroring so that the action also ends with a rest pose. From the above we see that extrapolation alone is not enough; we still need global transforms to restore important properties of common complete actions, e.g., symmetry. "Nonlinear transform" refers to extrapolation here.
[Q1.3] "By studying the relationships between their raw and trimmed pairs, we can learn temporal patterns inherited in human actions." Why reorganizing the frames?
[A1.3] We use a segment of an action (i.e., the trimmed sample) to reconstruct the original action (i.e., the raw sample) with a linear transform (see Fig. 2, u and v). It is an approximation, but in this way we can extract global patterns of human actions. For example, by sampling the first half of a symmetric action (e.g., Fig. 1(a)-PKU, phone calling) and trying to reconstruct the full sequence, we can obtain a transform that functions like a mirroring operation. Note that the mirroring operation is a linear transform characterized by a matrix of shape (T, T), and applying this matrix essentially re-organizes existing frames since it does not introduce new frames. The learned linear transforms are not restricted to the mirroring operation shown in this example; they can also be shifting, scaling, mirroring, etc. (see Fig. 3).
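As a toy illustration of this point (our own sketch, not the paper's implementation), a mirroring operation on a length-T sequence can indeed be written as a (T, T) matrix that only re-orders and repeats existing frames:

```python
import numpy as np

T = 6  # sequence length in frames
W = np.zeros((T, T))
half = T // 2
for t in range(half):
    W[t, t] = 1.0          # keep the first half unchanged
    W[T - 1 - t, t] = 1.0  # replay it in reverse as the second half

# Toy sequence: one scalar per frame for clarity; real frames would be
# (J, 3) joint coordinates, transformed the same way along the time axis.
seq = np.arange(T, dtype=float)  # [0, 1, 2, 3, 4, 5]
mirrored = W @ seq
print(mirrored)  # [0. 1. 2. 2. 1. 0.]
```

The resulting sequence is symmetric in time, as a complete action like phone calling would be, and no new frames were synthesized.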
[Q1.4] If in the training dataset the actions are all starting from the rest position (so they are all aligned), … ?
[A1.4] We have both recovering and resampling. During resampling, we randomly sample a segment from the recovered complete sequence, so the starting point of samples in the augmented training data is random and not necessarily the rest pose.
[Q1.5] On Fig. 1(b): the behaviour of ETRI.
[A1.5] Please refer to Appendix A.5 (lines 629-632). NTU and PKU are captured in a lab, while ETRI is captured at home with more noise and pose diversity. Yet clustering rest poses from the beginning frames is still the best solution because they have the least diversity.
3.Method
[Q3.1] With extrapolation, is the length T remaining the same?
[A3.1] Yes. See line 146-147. We squeeze the extrapolated sequence.
[Q3.2] Differences between the proposed work and [17]?
[A3.2] The goal of [17] is to learn alignment of two sequences. We adopt a similar way to find the alignment matrix between trimmed and full sequences. However, their task and our task are essentially different. We use those alignment matrices to augment the existing training data.
[Q3.3] Eq. 4: k_i should be an index between 1 and T, but from the formula it does not seem so
[A3.3] In Eq. 4, s_ij is the weight for each j and j is in the range [1,T], so the value of k_i is in [1,T]. We have round(k_i) in line 179.
[Q3.4] How to obtain the similarity matrix?
[A3.4] See line 175-177 and Eq. (3).
[Q3.5] Why clustering the W?
[A3.5] (1) The original number of transforms W is too large; note that n_W = n_training_samples x segments_per_sample. With clustering, we avoid inefficient sampling of W. (2) Important transform patterns (e.g., mirroring) can stand out during clustering, as some of them account for only a small percentage of the whole pool of W. We will add these explanations.
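A rough sketch of what clustering the pool of transforms might look like (our own minimal k-means on synthetic data; the actual method's details may differ): noisy copies of two prototype transforms, an identity-like "keep" and an anti-diagonal "mirror", collapse into two representative cluster centers that augmentation can then sample from.

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means with deterministic init (first k points as centers)."""
    centers = X[:k].copy()
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(0)
T = 8
# Pool of learned (T, T) transforms, flattened to vectors: noisy copies of
# two prototypes, "keep" (identity) and "mirror" (anti-diagonal).
keep = np.eye(T).ravel()
mirror = np.fliplr(np.eye(T)).ravel()
pool = np.stack([p + 0.01 * rng.standard_normal(T * T)
                 for _ in range(30) for p in (keep, mirror)])

centers = kmeans(pool, k=2)
# Each center, reshaped back to (T, T), is a representative transform.
reps = centers.reshape(2, T, T)
print(reps.shape)  # (2, 8, 8)
```

Here clustering reduces 60 candidate transforms to 2 representatives, and the rarer-but-important patterns survive as their own centers rather than being drowned out by sampling.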
[Q3.6] About the loss function.
[A3.6] While it would certainly be acceptable to use the loss exactly as in Eq. 5, in practice, for efficiency, we randomly take batch_size x m_aug raw samples and batch_size x (1-m_aug) augmented samples so that the total number of samples in a batch is still batch_size.
4. Experiments
[Q4.1] Reporting the results with P as training.
[A4.1] The result with P as training is provided in Table 3 in P51->N51 setting.
[Q4.2] The transfer sub-settings are different from [41]. Why this choice?
[A4.2] In Table 3 we follow exactly the same setting as [41] for fair comparison. However, [41] only considers mutual shared actions between two datasets. Our new multiple-dataset cross-dataset setting allows us to study and evaluate generalizability in a more fundamental way, so we mainly report results on our new setting (Table 1 and 2).
[Q4.3] Kinetic never appears in the experiments
[A4.3] Kinetics (K12) is used in Table 3.
[Q4.4] How about the cross-view for NTU?
[A4.4] We will discuss it briefly (see Reviewer UadT [Q1,A1]).
[Q4.5] ERM?
[A4.5] ERM refers to training a classifier using backbone network with standard cross-entropy loss. It serves as a basis for all the methods.
---
Rebuttal Comment 1.1:
Comment: I truly thank the authors for the care they put into their rebuttal. My main concerns have been resolved (some of my observations were actually due to misunderstandings on my side), and I'm happy to raise my score. In any case, I suggest the authors revise the introduction to make the story clearer (see my original comments).
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer Vnjk:
Thanks for the positive response and raising the score. We really appreciate your kind suggestions. In the revision for the introduction, we will add (1) clear insight/motivation for recovering complete actions as explained in the rebuttal, (2) a brief pipeline description for the full method, and (3) a clear justification for the need of clustering in each module as explained in the rebuttal. We will also address the issues for other parts of the paper as suggested. | Summary: This paper proposes a novel recover-and-resample augmentation framework to address the skeleton action generalization problem across different datasets. The framework utilizes a complete action prior to recover full action sequences from partial observations, employing boundary pose-conditioned extrapolation and smooth linear transforms. The proposed method demonstrates superior performance in cross-dataset settings compared to existing domain generalization approaches.
Strengths: . Innovative Framework: The recover-and-resample augmentation framework is a novel approach to tackling the skeleton action generalization problem by focusing on recovering complete actions.
. Comprehensive Evaluation: The method is thoroughly validated across multiple datasets, showing significant improvement over baseline methods.
. Efficient Learning: The use of clustering for learning boundary poses and linear transforms makes the framework efficient and scalable.
. Powerful generalizability across datasets.
. Utilizes natural properties of skeleton-based data for generalizability.
. Varied method components: clustering concept and linear transformation algorithms
Weaknesses: . Limited Scope: The evaluation is primarily focused on indoor datasets, which may limit the generalizability of the findings to other types of datasets or real-world applications.
. Complexity in Implementation: The two-step stochastic action completion and the need for clustering might pose implementation challenges for practitioners.
- Dependency on Clustering Algorithms: The approach heavily relies on clustering algorithms for learning boundary poses and linear transforms. The effectiveness of the method may be influenced by the choice of clustering algorithm and the parameters used, which might require extensive tuning and could impact the reproducibility and scalability of the approach.
- Request more ablation experiments on diverse sampling and clustering methods.
- Limited Exploration of Resampling Techniques: While the paper proposes a robust framework for recovering and resampling action sequences, the resampling techniques themselves are relatively simple (random) and not extensively explored. More advanced or varied resampling strategies could potentially further enhance the augmentation process and improve generalizability.
Technical Quality: 2
Clarity: 3
Questions for Authors: No further questions.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Tackled in the draft.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer V13L:
Thanks for your effort for reviewing our paper and giving kind suggestions. We really appreciate your positive comments on our paper.
[Q1] Dependency on Clustering Algorithms: The approach heavily relies on clustering algorithms for learning boundary poses and linear transforms. The effectiveness of the method may be influenced by the choice of clustering algorithm and the parameters used, which might require extensive tuning and could impact the reproducibility and scalability of the approach.
[A1] (1) We use k-means, one of the simplest clustering algorithms, and we conducted experiments on the important parameters of the clustering process. (2) The insight behind this is that we need to mine sufficient and important patterns for linear temporal transforms, e.g., scaling, shifting, reflection. So the number of linear transform clusters is important (Table 5) and at least should not be set too small. (3) The number of clustered boundary poses is generally not very sensitive (Table 6).
[Q2] Limited Exploration of Resampling Techniques: While the paper proposes a robust framework for recovering and resampling action sequences, the resampling techniques themselves are relatively simple (random) and not extensively explored. More advanced or varied resampling strategies could potentially further enhance the augmentation process and improve generalizability.
[A2] (1) Yes, we indeed use a very simple resampling method, namely randomly cropping a segment and then resampling it to the original sequence length. Despite its simplicity, it generally handles well the case where many actions are only segments of their full sequences. (2) The whole recover-and-resample process already introduces some level of redundancy, since both the recovering and resampling stages are stochastic. (3) In the limitations part we discussed the potential of more advanced resampling strategies for addressing the ambiguity issue, but this seems to be a common challenge for all augmentation-based methods. (4) Currently the resampling is done uniformly on randomly selected segments; we will try non-uniform sampling on random segments. We would really appreciate it if the reviewer could offer more insight on what kind of resampling method might improve the results.
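For concreteness, the simple crop-then-resample scheme described in (1) can be sketched as follows. This is our own minimal illustration (scalar frames, linear interpolation, a `min_ratio` crop bound of our choosing), not the paper's code:

```python
import numpy as np

def crop_and_resample(seq, rng, min_ratio=0.5):
    """Randomly crop a temporal segment and linearly resample it back to the
    original length T. Frames are scalars here for simplicity; real frames
    would be (J, 3) joint-coordinate arrays interpolated per coordinate."""
    T = len(seq)
    length = int(rng.integers(int(min_ratio * T), T + 1))  # segment length
    start = int(rng.integers(0, T - length + 1))           # segment start
    segment = seq[start:start + length]
    # Uniformly resample the segment to T frames.
    src = np.linspace(0, length - 1, T)
    return np.interp(src, np.arange(length), segment)

rng = np.random.default_rng(0)
seq = np.arange(10, dtype=float)
aug = crop_and_resample(seq, rng)
print(aug.shape)  # (10,)
```

Since both the crop position and its length are random, repeated applications yield varied augmented views of the same recovered sequence, which is where the stochastic redundancy mentioned in (2) comes from.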
[Q3] Request more ablation experiments on diverse sampling and clustering methods.
[A3] Thanks for the suggestion. In the revision we plan to add DBSCAN as another clustering algorithm to check the reproducibility, and add non-uniform resampling of random segments as another resampling method to see whether it can bring improvement. | Summary: In this paper, the authors address the issue of generalizing skeleton-based action recognition across different domains. They propose a novel recover-and-resample augmentation framework based on the concept of complete action prior. The approach is validated on different cross-dataset settings and demonstrates a significant improvement in cross-dataset accuracy compared to existing methods.
Strengths: 1. Introduces a new recover-and-resample framework that effectively addresses temporal mismatch in skeleton action recognition.
2. Utilizes the concept of action completeness within large datasets, employing boundary poses and linear transforms to capture global action patterns.
3. The experimental results on different cross-dataset settings outperform the previous methods.
Weaknesses: Although the method is intriguing and the performance is impressive, I am content with the current experimental setting. In real-world applications, dealing with cross-dataset issues makes it challenging to ensure that both datasets share the same action categories. Currently, the shared classes are manually selected. In my opinion, it would be better to adopt the new setting proposed by [1], where the source and target datasets have a category gap.
[1] Collaborating Domain-shared and Target-specific Feature Clustering for Cross-domain 3D Action Recognition. ECCV 2022
Technical Quality: 2
Clarity: 3
Questions for Authors: N.A.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N.A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer UxEw:
Thank you for your effort in reviewing our paper and for your kind suggestions.
[Q1] I am content with the current experimental setting
[A1] Do you mean you are not content with the current experimental setting? Since the generalizability of skeleton-based action recognition is still underexplored in the community, we focus on the task of domain generalization, which is the most formal and standard setting, where source and target datasets share the same action categories. Our reason for adopting this setting is that we can put every effort into investigating the domain gaps in skeleton-based action recognition in a more fundamental way. Following this goal, we explored the inherent nature of action completeness to improve generalizability.
In the new setting proposed by [1], where the source and target datasets have a category gap, useful information can be mined from a large amount of unlabeled data. However, its final prediction cannot give the actual action label in a straightforward way, especially for unseen categories. Note that [1] needs to solve a label assignment problem to obtain the accuracy metric; it is more of a representation learning or clustering problem. As a result, such a setting is also limited in its practical use.
If considering another new setting where the source and target datasets have a category gap and the model has to predict the exact action labels for unseen target actions, it would become a zero-shot setting for unseen categories and more information about the category itself needs to be considered.
So, basically, our standard setting and the new setting in [1] address the generalization problem from different aspects, and it is hard to say one setting is overwhelmingly better than the other. Hence, in our opinion, adopting the standard domain-generalization setting should not be regarded as a core weakness of the paper.
[1] Collaborating Domain-shared and Target-specific Feature Clustering for Cross-domain 3D Action Recognition. ECCV 2022
---
Rebuttal Comment 1.1:
Title: Rebuttal Response
Comment: Based on my understanding, in real-world applications, there's no certainty that newly collected video or skeleton sequences will correspond to classes that the model was originally trained on. Therefore, I contest the claim that "As a result, such a setting is also limited in its practical use." This introduces a significant challenge in real-world contexts, necessitating further research and development to tackle it effectively.
Given that the main title of this paper is "Cross-Dataset Skeleton Action Recognition," as opposed to domain generalization in skeleton action recognition, my primary concern is with the accuracy of the cross-dataset setting definition. In my view, we should not align labels or categories before proceeding with cross-dataset experiments.
In conclusion, I will maintain my rating at '4: Borderline reject'. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Classifier-guided Gradient Modulation for Enhanced Multimodal Learning | Accept (poster) | Summary: This paper proposes a balanced multimodal learning method. Compared to existing methods that only consider the gradient size, it also considers the direction of the gradient.
Strengths: The experiment includes multiple data sets and multiple tasks.
Weaknesses: There is less visualization analysis of the experiment.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. It is interesting that the author calculates the improvement of each modality (such as the change in accuracy) instead of the current performance. However, it is hoped that more convincing theoretical proofs can be added.
2. It is recommended to visualize changes in indicators before and after modulation (such as utilization rate, etc.), including only adjusting the gradient size, gradient direction, and both. It is best to also add the performance change process for each modality to avoid "using the indicators proposed by yourself to measure your performance."
3. Can it be implemented using only one classifier, instead of one classifier for each modality?
4. Please add a brief introduction to the experimental comparison method, such as AGM, PMR, etc.
5. The method in this paper is similar to the OGM method, but the original OGM also includes a part that uses Gaussian noise to improve generalization. Is this part of the method used in this paper? Can you add relevant discussions?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have adequately addressed the limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable time and comments.
**(W1):Theoretical analysis.**
We provide an analysis of why the combination of improvement can balance the training. In Sec 3.2, we know the dominant modality will be updated faster than others, which makes the gradient $\partial\Omega/ \partial\phi$ much larger. Therefore, according to Eq.(3,4), $\Delta\theta^{\phi_i}$ will be larger than other modalities, indicating a larger step towards the optimal, which in turn influences $\Delta\epsilon$ (i.e. $\partial\Omega/ \partial\phi(\uparrow)\rightarrow\Delta\theta^{\phi_i}(\uparrow)\rightarrow\Delta\epsilon(\uparrow)$ ). Then, for modality $i$, change in balance term will be $\frac{\sum_{k=1,k\ne i}^M\Delta\epsilon_k}{\sum_{k=1}^M\Delta\epsilon_k}=\frac{\sum_{k=1}^M\Delta\epsilon_k-\Delta\epsilon_i}{\sum_{k=1}^M\Delta\epsilon_k}=1-\frac{\Delta\epsilon_i}{\sum_{k=1}^M\Delta\epsilon_k}$. If $i$ is the dominant modality ($\Delta\epsilon_i(\uparrow)$ increases), then $\frac{\Delta\epsilon_i(\uparrow)}{\sum_{k=1}^M\Delta\epsilon_k(-)}$ will increase, which proves that the balance term $1-\frac{\Delta\epsilon_i}{\sum_{k=1}^M\Delta\epsilon_k}$ will decrease. According to CGGM update rule ($\theta^{\phi_i}=\theta^{\phi_i}-\rho (1-\frac{\Delta\epsilon_i}{\sum_{k=1}^M\Delta\epsilon_k})\nabla g_i$), $\Delta \theta^{\phi_i}$ will decrease, slowing down the optimization of the dominant modality. For other modalities, $1-\frac{\Delta\epsilon_i}{\sum_{k=1}^M\Delta\epsilon_k}$ will increase, thus accelerating their optimization. During training, the balancing term keeps changing according to the optimization, thus making all modalities sufficiently optimized.
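To make the update rule above concrete, here is a minimal, hypothetical Python sketch of the balancing term $1-\frac{\Delta\epsilon_i}{\sum_{k=1}^M\Delta\epsilon_k}$ (the function name and plain-list representation are our own illustration, not the authors' code):

```python
def balancing_terms(improvements):
    """Given per-modality improvements d[i] = Δε_i, return the CGGM
    balancing coefficients 1 - Δε_i / Σ_k Δε_k.

    The modality that improved the most (the dominant one) receives the
    smallest coefficient, slowing its update; weaker modalities receive
    larger coefficients, accelerating theirs.
    """
    total = sum(improvements)
    return [1.0 - d / total for d in improvements]

# Example: with improvements [0.6, 0.3, 0.1], the first (dominant)
# modality gets the smallest coefficient.
coeffs = balancing_terms([0.6, 0.3, 0.1])
```

Each coefficient would then scale the corresponding gradient in the update $\theta^{\phi_i}\leftarrow\theta^{\phi_i}-\rho\,(1-\frac{\Delta\epsilon_i}{\sum_k\Delta\epsilon_k})\nabla g_i$.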
**(W2):Visualization.**
We have uploaded figures in a PDF in the general response, with the following visualizations: change in losses under four scenarios, change in the balancing term, and change in accuracy, gradient magnitude, and gradient direction.
- Compared with Fig.2 in the paper (around 0.7 after one epoch), the accuracy of the text modality does not increase as fast with CGGM (around 0.55 after one epoch), indicating that CGGM imposes constraints on it during optimization (see Fig.3 in the PDF).
- In Fig.2 in the paper, the dominant modality always has the largest gradient, while in Fig.1 in the PDF, the gradient magnitude of the text modality decreases at first, indicating that CGGM slows down its optimization.
- In Fig.2 in the paper, $\texttt{cos}(g_a, g_{mul})< 0$ during training, indicating an opposite optimization direction between the unimodal and multimodal branches, thus hindering the optimization process. In Fig.1 in the PDF, $\texttt{cos}(g_{i}, g)> 0$ for all modalities, indicating that all modalities share the same direction as the multimodal branch.
- In Fig.2 in the PDF, the loss of the dominant modality drops much more slowly than in Fig.2a, and the losses of all modalities in Fig.2(b,c,d) are smaller than those in (a), indicating the effectiveness of CGGM.
- In Fig.3 in the PDF, when the value is above the red line, the modality is promoted; when below, it is suppressed. In the first few iterations, the dominant modality is suppressed, ensuring that other modalities are fully optimized. During optimization, the balancing terms of the three modalities move up and down, ensuring each modality is sufficiently optimized.
**(W3):Using only one classifier instead of one classifier for each modality**
It is difficult for only one classifier to catch the unimodal gradients and features accurately. During optimization, the classifier takes all modalities as input and absorbs multimodal information, thus failing to reflect unimodal utilization rates effectively. The comparison on IEMOCAP is shown below.
| | 3 classifiers | 1 classifier |
| :--: | :-----------: | :----------: |
| Acc | 75.4 | 72.1 |
Besides, the additional classifiers do not require much memory, since they do not pass gradients to the modality encoders during backpropagation.
| | No classifier | 3 classifiers |
| :--------: | :-----------: | :-----------: |
| Memory(MB) | 4438 | 4446 |
**(W4):Brief introduction to comparison methods.**
Thanks for your suggestion. G-Blending computes an optimal blending of modalities based on their overfitting behaviors to balance the learning. Greedy proposes a conditional learning speed to capture the relative learning speed of each modality. OGM balances multimodal learning by monitoring the discrepancy in modalities' contributions to the learning objective with gradient enhancement. AGM proposes a metric built on mono-concepts to represent the competition state of a modality. PMR introduces prototypes for each class, accelerating the slow-learning modality by enhancing its clustering toward prototypes and reducing the inhibition from the dominant modality with prototypical regularization.
**(W5):Discussion on OGM and CGGM.**
There are crucial differences between OGM and CGGM:
- OGM is based on cross-entropy loss and cannot be applied to other tasks such as regression.
- OGM calculates the discrepancy ratio between two modalities. However, when there are $M$ modalities, it needs to calculate a total of $M$ ratios for each modality. OGM does not consider how to use $M^2$ ratios to balance the training. In contrast, CGGM first calculates the term individually and then combines them to represent utilization rate, indicating its universality.
- OGM overlooks the influence of gradient direction.
OGM also includes a component that adds Gaussian noise to improve generalization. We do not use it because that technique relies on the generalization behavior of SGD, whose gradient noise follows a Gaussian distribution when the batch size $m$ is large enough. However, in many tasks other optimizers may be chosen and the batch size may be small, so adding Gaussian noise may hinder optimization. For example, we use AdamW on Food101, and when we add Gaussian noise to the gradient, the accuracy drops from 92.9 to 92.5.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns?
---
Rebuttal 2:
Comment: Dear Reviewer kP2D,
Thank you for your valuable time and comments on our manuscript. The rebuttal period is set to end soon, and we are looking forward to your feedback. During the rebuttal stage, we dedicated significant time and effort to address the concerns you raised in your initial review.
Please let us know if our response addresses your concerns and if you have any other questions or require any further information from us. We are happy to provide additional details or clarification as needed. We appreciate your time and consideration, and look forward to hearing back from you.
Best regards
---
Rebuttal 3:
Comment: Dear Reviewer kP2D,
Thank you again for your valuable time and comments. The rebuttal period is set to end today, and we are looking forward to your feedback. During the rebuttal stage, we dedicated significant time and effort to address the concerns you raised in your initial review. Please feel free to reach out if you have any other questions. We are happy to provide additional details or clarification as needed. | Summary: This paper proposed a balanced multi-modal learning method with Classifier-Guided Gradient Modulation (CGGM), considering both the magnitude and directions of the gradients, with no limitations on the type of tasks, optimizers, the number of modalities.
Strengths: 1. Balanced multi-modal learning considering both the magnitude and directions of the gradients is a reasonable idea.
2. The proposed method is easy to follow.
Weaknesses: 1. Balanced multi-modal learning that considers the directions of the gradients is not novel. A previous work [1] has already analyzed the issue of modality dominance caused by gradient conflicts. The difference from and comparison with this method should be considered in detail. Besides, the approach to controlling gradient magnitude is similar to the ideas of OGM[2] and PMR[3].
2. This framework still does not explore the imbalance issue of multi-modal learning in more flexible task formats, such as the potential imbalance in tasks like AVQA and multi-modal generation. Expanding task formats to regression and segmentation tasks is only a minor improvement. Existing work can also be extended to these tasks with minor adjustments.
[1] Wang, H., Luo, S., Hu, G. and Zhang, J., 2024, March. Gradient-Guided Modality Decoupling for Missing-Modality Robustness. In *Proceedings of the AAAI Conference on Artificial Intelligence* (Vol. 38, No. 14, pp. 15483-15491).
[2] Peng, X., Wei, Y., Deng, A., Wang, D. and Hu, D., 2022. Balanced multimodal learning via on-the-fly gradient modulation. In *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition* (pp. 8238-8247).
[3] Fan, Y., Xu, W., Wang, H., Wang, J. and Guo, S., 2023. Pmr: Prototypical modal rebalance for multimodal learning. In *Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition* (pp. 20029-20038).
Technical Quality: 2
Clarity: 3
Questions for Authors: The weaknesses above should be carefully considered.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable time and constructive comments.
**(W1): Difference and comparison with previous methods.**
The differences between CGGM and [1] are:
- [1] uses a fixed loss term for direction throughout training, while CGGM employs a dynamic loss term to balance the training. The fixed loss term in [1] may struggle during training because the gradient directions of different modalities keep changing. For example, PMR also employs a regularization loss term, but that loss is only added in the first several epochs and needs to be deleted manually, according to the optimization process, to avoid performance damage. In contrast, CGGM employs an adaptive loss function with the balancing term, dynamically adjusting the loss according to the utilization rate of each modality during training.
- [1] only considers the direction and overlooks the impact of magnitude while CGGM considers both.
- [1] aims to address missing modality issues with direction loss while CGGM aims to address the imbalanced multimodal learning.
We compare the direction loss in [1] with the loss in CGGM and present the results on IEMOCAP below.
| | baseline | Loss in [1] | Dynamic loss in CGGM |
| :------: | :------: | :---------: | :------------------: |
| Accuracy | 70.7 | 71.6 | 73.3 |
| F1 score | 69.5 | 70.9 | 72.8 |
From the results, we can observe that our dynamic loss can adjust the directions of gradients better than the fixed loss term in [1].
Additionally, there are several important differences between CGGM and PMR:
- PMR needs to calculate a prototype for each class. This indicates that PMR can only be applied to pure classification tasks. In contrast, CGGM can be applied to various tasks.
- The whole PMR and its formulations are based on cross-entropy loss. In contrast, CGGM has no limitations for this problem.
- PMR discusses the gradient direction, but it does not balance the multimodal learning from the perspective of direction explicitly. It proposes two loss terms for modal acceleration and regularization. PMR adds this regularization term in the first few epochs and needs to delete it manually to avoid performance damage. In contrast, CGGM can modulate magnitude and direction dynamically with the training process.
Besides, there are also several important differences between OGM and CGGM:
- OGM is based on cross-entropy loss and cannot be applied to other tasks such as regression.
- The balance term in OGM is designed for two modality situations. It calculates the discrepancy ratio between two modalities. However, when there are $M$ modalities, it needs to calculate a total of $M$ ratios for each modality. OGM does not consider how to use $M^2$ ratios to balance the training. In contrast, CGGM first calculates the term individually and then combines the term from different modalities to represent the utilization rate, indicating its universality.
- OGM overlooks the influence of gradient direction.
[1] 2024, March. Gradient-Guided Modality Decoupling for Missing-Modality Robustness. In *Proceedings of the AAAI Conference on Artificial Intelligence*
**(W2): More flexible tasks.**
To show the universality and superiority of CGGM, we conduct experiments on more flexible tasks: multimodal retrieval and video question answering. Specifically, we use the MSRVTT and MSRVTT-QA datasets. For the multimodal retrieval task, we use the common multimodal retrieval transformer MMT pre-trained on HowTo100M as the backbone and work with pre-extracted features; the extracted MSRVTT features cover seven modalities (motion, audio, scene, OCR, face, speech, and appearance). For video QA, we use the pre-trained VALOR-B as the backbone. Both MMT and VALOR-B consist of modality encoders and a multimodal fusion module or decoder for downstream tasks, and we initialize the unimodal classifier with the multimodal fusion module of the model. The results are shown below.
| | | Text$\rightarrow$Video | | | Video$\rightarrow$Text | |
| :--: | :-------------: | :--------------------: | :---------------: | :-------------: | :--------------------: | :---------------: |
| | R@5($\uparrow$) | R@10($\uparrow$) | MnR($\downarrow$) | R@5($\uparrow$) | R@10($\uparrow$) | MnR($\downarrow$) |
| None | 54.1 | 67.3 | 26.9 | 54.8 | 66.9 | 23.9 |
| CGGM | 59.2 | 70.2 | 21.2 | 59.6 | 69.8 | 20.1 |
| | None | CGGM |
| --------- | ---- | ---- |
| MSRVTT-QA | 46.7 | 47.2 |
From the results, we can observe that even in flexible tasks, CGGM can also improve the performance of the baseline model. Due to our idea of difference of evaluation metrics, CGGM can be easily applied to these flexible tasks, indicating its universality and effectiveness.
Thank you for your valuable feedback. We will incorporate these details in our paper.
---
Rebuttal Comment 1.1:
Comment: Thanks authors for the explanations and supplementary experiments. Considering the opinion of other reviewers and the contribution of this work, I decide to raise the rating to Borderline accept. I think the innovation in this work is a bit limited and it is an incremental work.
---
Rebuttal 2:
Comment: Thank you for your considered review and feedback. We appreciate you raising the score and your thoughtful assessment of our work. We believe the insights and findings of our paper can make a meaningful impact and extend previous research in a way that advances the state of the art and universality. Please feel free to reach out if you have any other suggestions or questions. | Summary: This paper focuses on the notorious modal imbalance problem in multi-modal learning. To alleviate the modality imbalance, the proposed method modulates gradient magnitude and the directions of the gradient simultaneously. Experiments on various multi-modal datasets demonstrate the efficiency.
Strengths: - This paper explores an interesting problem. In the current joint learning paradigm, the dominant modality overpowers the learning process and the resulting gradient prohibits further exploration of the features in the weak modality.
- This paper is well-written and easy to follow.
Weaknesses: - **The motivation is not stated clearly.** From Eq(3) and Eq(4), we can observe that the gradient magnitude can affect the update of a specific modality. However, the explanation of how the gradient direction between the specific modality and their fusion influences the modality update is unconvincing.
- **The experiments are not convincing.** The authors ignore recent state-of-the-art methods, such as UMT/UME [1], QMF [2], and ReconBoost [3]. It is recommended that the authors compare these methods. Additionally, the authors should plot the gradient direction, accuracy curve, and gradient profile after using their method and compare these to Fig. 2 to better highlight the effectiveness of their approach.
- **The related work section lacks discussion on recent research.** UMT [1] distills well-trained uni-modal features to assist multi-modal learning. QMF [2] provides a quality-aware multimodal fusion framework to mitigate the influence of low-quality multimodal data. ReconBoost [3] finds that the major issue arises from the current joint learning paradigm. They propose an alternating learning paradigm to fully harness the power of multi-modal learning.
- Some notations are confusing. Please see the questions below.
For now, I recommend a borderline for this paper, leaning to reject. If the concerns in weakness can be addressed in the rebuttal phase, I am willing to raise my concern and accept this paper.
[1] On Uni-Modal Feature Learning in Supervised Multi-Modal Learning. ICML 2023.
[2] Provable Dynamic Fusion for Low-Quality Multimodal Data. ICML2023
[3] ReconBoost: Boosting Can Achieve Modality Reconcilement. ICML 2024.
Technical Quality: 2
Clarity: 2
Questions for Authors: In section 3.2, $\mathcal{L}$ denotes both the overall empirical loss and the loss of individual samples. It is recommended to denote the loss of individual samples as $\ell$.
Equations (3) and (4) are incorrect. The chain rule of differentiation for a scalar with respect to multiple vectors should be applied, rather than the chain rule for a scalar with respect to another scalar.
$$
\frac{\partial z}{\partial x} = \left(\frac{\partial y}{\partial x}\right)^{T}\cdot \frac{\partial z}{\partial y}
$$
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable time and constructive comments.
**(W1): The explanation of how the gradient direction between the specific modality and their fusion influences the modality update.**
In Eq.(3) and Eq.(4), the parameter $\theta$ and the terms $\partial\mathcal{F}$, $\partial\Omega$, $\partial\phi$ are all vectors, not scalars. Therefore, these terms already include the direction information of the gradient, and just like the gradient magnitude, the gradient direction can also affect the update of modalities through Eq.(3) and Eq.(4).
Additionally, we can use the cosine similarity $\cos(g_i,g_j)$ to represent the angle between two gradient vectors. As shown in Fig.2(c), $\cos(g_a,g_{mul})<0$, indicating they optimize in opposite directions, thus hindering the gradient update for the multimodal branch. When we apply CGGM (Fig.1(c) in the PDF), $\cos(g_i,g_{mul})>0$ for all modalities, indicating they optimize in the same direction.
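As a small illustration (our own sketch, not the authors' implementation), the direction check described above reduces to a cosine similarity between flattened gradient vectors:

```python
import math

def cos_sim(g1, g2):
    """Cosine similarity between two (flattened) gradient vectors.

    A negative value means the two gradients point in conflicting
    directions, as observed for cos(g_a, g_mul) < 0 in Fig.2(c);
    a positive value means they agree.
    """
    dot = sum(a * b for a, b in zip(g1, g2))
    norm1 = math.sqrt(sum(a * a for a in g1))
    norm2 = math.sqrt(sum(b * b for b in g2))
    return dot / (norm1 * norm2)
```

In an actual training loop, each modality's per-parameter gradients would first be flattened into a single vector before applying this check.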
**(W2): Experiments.**
Thanks for the reminder; we will include these additional baselines in our paper. Since MOSI is a regression task whose label is a score and whose loss is mean absolute error, methods such as QMF and ReconBoost that require class distributions cannot be applied directly. Additionally, we use a combination of Dice loss and entropy loss on the BraTS dataset in our paper. Therefore, we modified the baselines slightly to fit these tasks.
| | UPMC-Food 101 | MOSI | IEMOCAP | BraTS |
| :--------: | :-----------: | :--: | :-----: | :---: |
| None | 90.3 | 81.2 | 70.7 | 69.2 |
| UMT | 91.8 | 81.8 | 70.8 | 69.5 |
| UME | 90.7 | 80.8 | 71.5 | 70.3 |
| QMF | 92.9 | - | 72.1 | 71.6 |
| ReconBoost | 92.5 | - | 73.1 | 71.8 |
| CGGM | 92.9 | 82.8 | 75.4 | 73.9 |
Meanwhile, we visualize the balancing process and the performance comparison in the PDF file which can be downloaded in the general response. Comparing the Fig.2 in the paper and Fig.1 in the additional PDF, we have several observations:
- Performance: Compared with Fig.2 in the paper (around 0.7 after one epoch), the accuracy of text modality does not increase very fast with CGGM (around 0.55 after one epoch), which indicates that CGGM imposes constraints to the dominant modality during the optimization process (see Fig.3 in PDF). Besides, the accuracies of all the modalities and the fusion improves, indicating the effectiveness of CGGM.
- Gradient magnitude: In Fig.2 in the paper, the dominant modality always has the largest gradient while in Fig.1 in the PDF, the gradient magnitude of the text modality decreases at first, indicating that CGGM slows down its optimization and accelerates other modalities' optimization, helping each modality learn sufficiently, thus improving the multimodal performance.
- Gradient direction: In Fig.2 in the paper, $\texttt{cos}(g_a, g_{mul})< 0$ during the training process, indicating an opposite optimization direction between the unimodal and multimodal, thus hindering the optimization process. In Fig.1 in the PDF, we observe that $\texttt{cos}(g_{i}, g)> 0$ for all modalities, indicating that all modalities have the same direction with the multimodal fusion.
**(W3): Recent work.**
We apologize for missing some recent work. We will add UMT/UME, QMF and ReconBoost methods in the related work section and incorporate the experimental comparisons with our methods.
**(W4): Notations.**
We appreciate your feedback on the notation. We will use $\ell$ and update the equations for a better presentation: $\left(\frac{\partial\mathcal{F}}{\partial\Omega}\right)^{\top}\frac{\partial \ell(\hat{y}^n,y^n)}{\partial \mathcal{F}}$
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns?
---
Rebuttal Comment 1.2:
Comment: Thanks for your efforts during the rebuttal phase. Most of my concerns have been addressed. I will raise my score to accept it.
---
Reply to Comment 1.2.1:
Comment: Thank you for your valuable time and feedback. We are glad to hear that your initial concerns have been addressed through our rebuttal. We will incorporate these additional results and clarifications into our manuscript. We believe these modifications will make our manuscript more solid and convincing.
---
Rebuttal 2:
Comment: Dear Reviewer evvZ,
Thank you for your valuable time and comments on our manuscript. The rebuttal period is set to end soon, and we are looking forward to your feedback. During the rebuttal stage, we dedicated significant time and effort to address the concerns you raised in your initial review.
Please let us know if our response addresses your concerns and if you have any other questions or require any further information from us. We are happy to provide additional details or clarification as needed. We appreciate your time and consideration, and look forward to hearing back from you.
Best regards | Summary: This paper proposes CGGM, a novel strategy to balance the multimodal training process. Compared with existing methods, it can deal with the unbalanced multimodal learning problem with different optimizers, takes, and more than two modalities.
Strengths: The motivation is sufficient and the experiments on different tasks and datasets prove that the proposed method solves the problem well.
CGGM stands out by considering both the magnitude and direction of gradients for balancing multimodal learning. This combined approach effectively addresses the modality competition problem and ensures that all modalities contribute equally to the model’s performance.
Weaknesses: 1. The reviewer is curious about the computational complexity of the additional classifier or decoder. Is there any experimental result?
2. As presented in Line 149, the classifier $f_i$ consists of 1-2 multi-head self-attention (MSA) layers and a fully connected layer for classification and regression tasks. Does this apply to all models or all classification tasks? Why is it set up like this? Why not simply use the same classifier structure as the multimodal head?
3. What's the light decoder used for segmentation tasks?
4. The introduction of unimodal classification may limit the learning of multimodal tasks, such as gradient conflicts. How to deal with this problem?
5. PMR has also discussed the problem of gradient direction and introduced unimodal loss to assist multimodal learning. What's the difference between CGGM and PMR?
6. The reviewer is concerned about the accuracy of using the difference between two consecutive $\epsilon$ values to denote the modality-specific improvement at each iteration. In my experience, the loss of the dominant modality will quickly drop to the magnitude of 1e-2 to 1e-3, while the magnitude of the weak modality stays around 1e-1. At that point, the loss change of the weak modality will be larger and, by the authors' definition, it will be regarded as the dominant modality.
7. What's the performance of the proposed method on the CREMA-D and AVE datasets? They are also widely used in previous studies.
Technical Quality: 2
Clarity: 3
Questions for Authors: see weakness
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: No
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable time and constructive comments.
**(W1): Computational complexity of the additional classifiers experimental results.**
The additional classifiers will need more computational resources during training. However, during inference, the classifiers will be discarded. Therefore, they have no impact during the inference stage. We report the memory cost (MB) of the additional classifiers in the table below.
| Setting | Food101 | MOSI | IEMOCAP | BraTS |
| :-----------------: | :-----: | :--: | :-----: | :---: |
| With classifiers | 8846 | 3902 | 4446 | 18072 |
| Without classifiers | 8838 | 3894 | 4438 | 18048 |
From the table, we can observe that the additional computational increase is low. There are two main reasons: (1) the classifiers or decoders are light with only a few parameters; (2) the classifiers only use the gradients to update themselves and do not pass the gradients to the modality encoders during backpropagation. Therefore, there is no need to store the gradient for each parameter, thus reducing memory cost.
**(W2): About the additional classifiers setting.**
Initially, we set the classifier the same as the multimodal head. However, according to the experiments, we observe that there is little performance gain compared with the classifiers with only a few layers. The results are shown below.
| Classifier | $f_1$ | $f_2$ | $f_3$ | Multimodal |
| :---------------: | :---: | :---: | :---: | :--------: |
| multimodal head | 55.1 | 56.2 | 67.5 | 75.6 |
| CGGM (1-2 layers) | 54.9 | 56.8 | 67.4 | 75.4 |
In the table, $f_1,f_2,f_3$ denote the accuracies of the three modality classifiers. As we can see, more layers do not bring much improvement to the model, while adding computational cost and parameters. From another perspective, the function of the classifiers in our paper is to reflect the utilization rates of the different modalities, not to maximize their own accuracies. Therefore, as long as the classifiers can reflect each modality's utilization rate, we can design them to be as light as possible. Additionally, as discussed in W1, the additional classifiers do not bring much computational cost.
**(W3): Light decoder in segmentation task.**
In PyTorch, we implement the decoder with two Conv layers, concatenation of the low-level features generated by the encoders, and interpolation (upsampling) functions.
**(W4): Unimodal branch may limit the learning of multimodal tasks. How to deal with this?**
The unimodal classifiers have no influence on the multimodal tasks because the unimodal classifiers only use the independent loss function to update themselves and do not pass the gradients to the multimodal branch. The reason is that the classifier is designed to reflect the utilization of each modality. Besides, this design can reduce the computational resources because there is no need to store the gradient for each parameter.
**(W5): What's the difference between CGGM and PMR?**
There are several important differences between CGGM and PMR:
- PMR needs to calculate a prototype for each class. This indicates that PMR can only be applied to pure classification tasks. In contrast, CGGM can be applied to various tasks.
- The whole PMR and its formulations are based on cross-entropy loss. In contrast, CGGM has no limitations for this problem.
- PMR discusses the gradient direction, but it does not balance the multimodal learning from the perspective of direction explicitly. It proposes two loss terms for modal acceleration and regularization. PMR adds this regularization term in the first several epochs and needs to delete it manually to avoid performance damage. In contrast, CGGM can modulate magnitude and direction dynamically with the training process.
**(W6): Concerns about two consecutive $\epsilon$**
Yes. In our experiments, if no constraints are added, the loss of the dominant modality drops quickly in the initial iterations (see Fig. 2(a) in the additional PDF in the general response). Our method prevents this rapid drop. From iteration 0, CGGM calculates $\epsilon$, which serves as the constraint on the dominant modality: if the dominant modality improves much faster than the weak modalities, CGGM modulates its gradients to slow down its updates and accelerate those of the weak modalities. Therefore, once CGGM is applied, the loss of the dominant modality no longer drops quickly (see Fig. 2(b) in the additional PDF; the text loss drops much more slowly than in (a)). Besides, CGGM calculates the balancing term dynamically during training according to the optimization situation. Hence, although text is the dominant modality for a task overall, the "dominant modality" keeps changing during training according to the dynamic balancing term in CGGM: at one point text may be dominant, while at another audio may become dominant (see Fig. 3 in the PDF). We can also reach this conclusion by comparing the accuracy changes in Fig. 1 of the PDF and Fig. 2 of the paper.
**(W7): Performance on CREMA-D and AVE**
We conducted these experiments to evaluate CGGM more broadly. We use ResNet encoders as backbones, SGD as the optimizer for both datasets, and an MLP layer for the additional classifiers. The results are shown below.
| | CREMA-D | AVE |
| :--: | :-----: | :--: |
| None | 61.5 | 64.8 |
| CGGM | 79.2 | 73.6 |
Thanks for your suggestion. We will incorporate these details in our paper.
---
Rebuttal Comment 1.1:
Comment: Dear reviewer, a reminder to take a look at the author's rebuttal and other reviews. Did the rebuttal address your concerns?
---
Rebuttal 2:
Comment: Dear Reviewer S6e7,
Thank you for your valuable time and comments on our manuscript. The rebuttal period is set to end soon, and we are looking forward to your feedback. During the rebuttal stage, we dedicated significant time and effort to address the concerns you raised in your initial review.
Please let us know if our response addresses your concerns and if you have any other questions or require any further information from us. We are happy to provide additional details or clarification as needed. We appreciate your time and consideration, and look forward to hearing back from you.
Best regards
---
Rebuttal 3:
Comment: Dear Reviewer S6e7,
Thank you again for your valuable time and comments. The rebuttal period is set to end today, and we are looking forward to your feedback. During the rebuttal stage, we dedicated significant time and effort to address the concerns you raised in your initial review. Please feel free to reach out if you have any other questions. We are happy to provide additional details or clarification as needed. | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their great effort and constructive comments on our manuscript. During the rebuttal period, we have focused on the reviewers' beneficial suggestions, doing our best to add several experiments and revise our manuscript.
According to the reviewers' suggestions, we have included more recent baselines and more flexible tasks, which demonstrates the effectiveness and universality of CGGM. Besides, we have included several visualizations for a deeper analysis of CGGM. These visualizations include:
- Visualization of the changes in losses under four different scenarios.
- Visualization of the changes in the balancing term.
- Visualization of the changes in performance, gradient size, and direction.
These visualizations can be downloaded in this response. We believe these additions will make our experiments and methods more comprehensive and insightful. Additionally, we have made further clarifications about the setting of the classifiers, the additional computational resources required by the classifiers, the differences between CGGM and other methods, and the theoretical analysis of CGGM. These modifications aim to make our manuscript more solid and convincing.
Pdf: /pdf/4fed20635934e112ab0a2a0cdf338f2d7e54d55a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Generalization Bounds via Conditional $f$-Information | Accept (poster) | Summary: This paper introduces novel generalization bounds using the conditional f-information framework. It first derives conditional f-information-based generalization bounds for bounded loss, then examines mutual information generalization bounds as a specific example of their general theorem. This analysis helps to highlight the potential looseness of previous CMI bounds. Additionally, the paper presents several other f-information-based generalization bounds and extends the framework to handle unbounded loss cases. Finally, the tightness of the obtained bounds is demonstrated through empirical comparisons with other existing bounds.
Strengths: - **Clear presentation**: The structure of this paper is clear and easy to follow, with definitions and theorems clearly presented.
- **Empirical validation**: Although it is a theoretical work, the paper includes empirical studies to demonstrate the utility of the proposed generalization bounds.
Weaknesses: As the authors mention, the results pertain to the expected generalization error and therefore lack the high-probability generalization guarantee, which is more critical.
Technical Quality: 3
Clarity: 3
Questions for Authors: - For the definition of the expected generalization error (line 101 on page 3), should the correct definition be $\\mathbb{E}\_S[\\mathbb{E}\_W[L_{\\mu}(W)-L\_S(W)]]$ instead of the current formula in the paper? If I understand correctly, $\\mathbb{E}\_W$ refers to taking the expectation with respect to $P\_{W|S}$, which depends on $S$.
- In the “Other Fast-Rate Bound Cases” section, could the authors provide some examples where $\\mathbb{E}[\\Delta L^2\_i] \lesssim \text{Var}(L\_i^{+})$ holds?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for the valuable feedback. Our responses follow.
>- For the definition of the expected generalization error (line 101 on page 3), should the correct definition be $\mathbb{E}\_S[\mathbb{E}\_W[L_\mu(W)-L_S(W)]]$ instead of the current formula in the paper? If I understand correctly, $\mathbb{E}\_W$ refers to taking the expectation with respect to $P\_{W|S}$, which depends on $S$.
**Response.** In the definition provided in Line 101, $\mathcal{E}\_\mu(\mathcal{A})\triangleq \mathbb{E}\_W{L\_\mu(W)}-\mathbb{E}\_{W,S}{L\_S(W)}$, the first $\mathbb{E}\_W$ is taken with respect to the marginal distribution $P\_W$. Although $W$ indeed depends on $S$, the population risk $L\_\mu(W)$, as a function of the random variable $W$, only depends on $W$. Therefore, knowing the marginal distribution of $W$ is sufficient to compute $\mathbb{E}\_W{L_\mu(W)}$, namely $\mathbb{E}\_{S,W}{L\_\mu(W)}=\mathbb{E}\_W{L\_\mu(W)}$.
>- In the “Other Fast-Rate Bound Cases” section, could the authors provide some examples where $\mathbb{E}[\Delta L^2_i]\lesssim \mathrm{Var}(L_i^+)$ holds?
**Response:** We note that the inequality $\mathbb{E}[\Delta L^2_i]\lesssim \mathrm{Var}(L_i^+)$ holds in the CMI setting in general, rather than being specific to particular examples. This can be demonstrated as follows: In the CMI setting, due to the symmetric nature of the supersample construction, the random variables $L_i^+$ and $L_i^-$ have identical marginal distributions, thus having the same mean $e=\mathbb{E}[L^-_i]=\mathbb{E}[L_i^+]$. Consequently,
$$
\mathbb{E}[(L^-_i-L_i^+)^2]=\mathbb{E}[(L^-_i-e+e-L_i^+)^2]\leq 2\mathbb{E}[(L^-_i-e)^2]+2\mathbb{E}[(L^+_i-e)^2]=4\mathbb{E}[(L^+_i-\mathbb{E}[L_i^+])^2]=4 \mathrm{Var}(L_i^+).
$$
Therefore, $\mathbb{E}[\Delta L^2_i]\lesssim \mathrm{Var}(L_i^+)$ is indeed a general result.
We will include this justification in the revision to clarify this point.
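As a sanity check (ours, with an arbitrary illustrative loss distribution), the inequality can be verified by Monte Carlo for any exchangeable pair $(L_i^+, L_i^-)$ with identical marginals:

```python
import random

random.seed(0)

# Build an exchangeable pair with identical marginals, mimicking the
# symmetric supersample construction: draw two i.i.d. losses and swap
# them with probability 1/2.
n = 100_000
pairs = []
for _ in range(n):
    a, b = random.random() ** 2, random.random() ** 2
    pairs.append((a, b) if random.random() < 0.5 else (b, a))

mean_sq_diff = sum((lm - lp) ** 2 for lp, lm in pairs) / n
mean_lp = sum(lp for lp, _ in pairs) / n
var_lp = sum((lp - mean_lp) ** 2 for lp, _ in pairs) / n

# E[(L^- - L^+)^2] <= 4 Var(L^+), as derived above.
assert mean_sq_diff <= 4 * var_lp
```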
---
Rebuttal Comment 1.1:
Comment: Thank you for your response to my comments. I have read your rebuttal and will keep my score.
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: Thank you for your prompt reply and for maintaining the positive evaluation of our paper. | Summary: This paper presents a general framework for generalization bounds using a careful application of the Donsker-Vardaran representation of the conditional $f$-information, and show that a suitable quadratic Taylor expansion of the bounds, for various $f$-divergences, improves over the state of the art.
Strengths: This paper has a number of strengths:
(1) The technique appears generic and "fundamental": it works regardless of the $f$-divergence, and unifies the perspectives on quite a few of them. Moreover, it seems to improve over prior art, consistently, and in many instantiations. Finally, Lemma 3.1 does seem useful in its own right. From what I understand, the use of the variational representation also seems novel.
(2) The bound admits extension to unbounded losses, even those which do not admit MGFs
(3) The bounds satisfy an oracle property, in that they adapt to problem-dependent quantities.
(4) For a paper focused on theoretical contribution, the experiments are rather compelling: across toy domains and CIFAR 10, the Hellinger-based oracle confidence interval consistently improves upon prior art.
(5) The authors go to great lengths to explain and differentiate their contributions from prior work.
Weaknesses: I am not an expert in the state-of-the-art guarantees for generalization, so please take these concerns with a grain of salt. However, my only concern would be that these guarantees do not feel "field changing" in the sense that they appear to be an extension of and elaboration upon an accepted framework: construct a super-sample, evaluate its ($f$-)information, and evaluate the ensuing consequences. The idea of "generalize Shannon information to $f$-information" feels like a natural extension, and therefore it's hard for me to find that some of these results are surprising. Moreover, the extension to unbounded losses appears to be based on standard truncation techniques.
That is to say, from an outsider's perspective, this appears to be a good, solid work with nice fundamental insights. But it doesn't feel groundbreaking enough for me to, say, advocate for an award.
There are also a few typos here and there, and the preliminaries read a bit dense at times. This is probably standard in the community, and does not trouble me that much.
Perhaps one suggestion: regarding the presentation, the authors could do well to offer a table in the appendix where they compare the bounds to prior art in a more structured and systematic way. The comparison in the paper often assumes familiarity with prior work, and it would be more effective to re-present the formal statements upon which the authors are improving in an Appendix. Then the authors can more clearly delineate the key advantages of their bounds over past art.
Technical Quality: 4
Clarity: 3
Questions for Authors: Had any other works in the literature considered bounds based on $f$-divergences before? Did the original bounds, based on Shannon information, using the variational characterization in any meaningful way? It would be quite helpful for me to understand this better to fully gauge novelty.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do an adequate job of explaining the limitations of their work: lack of high probability guarantees, failure to account for the tradeoff in source [52], and perhaps the general limitations of purely information-theoretic approaches to generalization.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your valuable feedback and the positive comments on our paper. Our responses follow.
>- I am not an expert in the state-of-the-art guarantees for generalization, so please take these concerns with a grain of salt. However, my only concern would be that these guarantees do not feel "field changing" in the sense that they appear to be an extension of and elaboration upon an accepted framework: construct a super-sample, evaluate its ($f$-)information, and evaluate the ensuing consequences. The idea of "generalize Shannon information to $f$-information" feels like a natural extension, and therefore it's hard for me to find that some of these results are surprising. Moreover, the extension to unbounded losses appears to be based on standard truncation techniques.
>- That is to say, from an outsider's perspective, this appears to be a good, solid work with nice fundamental insights. But it doesn't feel groundbreaking enough for me to, say, advocate for an award.
**Response.** Thank you for considering our paper to be a good and solid work with nice fundamental insights. While we acknowledge that our work builds on existing SOTA information-theoretic generalization bound techniques, such as the CMI framework, our novel change of measure inequalities in Lemma 3.1 and Lemma 4.1 enable the derivation of tighter generalization bounds in a simpler manner. We believe these inequalities can be applied in even broader contexts. Additionally, we sincerely appreciate the reviewer’s evaluation of our paper from an award-level perspective.
>- There are also a few typos here and there, and the preliminaries read a bit dense at times. This is probably standard in the community, and does not trouble me that much.
>- Perhaps one suggestion: regarding the presentation, the authors could do well to offer a table in the appendix where they compare the bounds to prior art in a more structured and systematic way. The comparison in the paper often assumes familiarity with prior work, and it would be more effective to re-present the formal statements upon which the authors are improving in an Appendix. Then the authors can more clearly delineate the key advantages of their bounds over past art.
**Response.** Thank you for your valuable suggestions. We will include a table to compare with previous works and put the prior theorem statements in the appendix to improve the readability of our work. In addition, we have fixed some typos and will continue to improve the presentation of our paper in the next revision.
>- Had any other works in the literature considered bounds based on $f$-divergences before? Did the original bounds, based on Shannon information, using the variational characterization in any meaningful way? It would be quite helpful for me to understand this better to fully gauge novelty.
**Response.** Yes, as noted in Lines 43-48, there are previous works that explore using alternative dependence measures, including various $f$-divergences, in place of KL divergence. However, our contribution is distinct in its generic proof approach for $f$-divergence-based bounds under the supersample setting, as detailed in Lemmas 3.1 and 4.1. This method represents a significant departure from previous approaches and provides insight into the tightness of the obtained bound for different $f$-divergences, illustrated by the gap between $\phi^{*-1}(x)$ and $x - ax^2$ in Figure 1. In addition, earlier works typically focus on hypothesis-based $f$-divergence quantities, which are challenging to evaluate empirically.
Regarding the original Shannon information-based bounds, they are indeed based on the Donsker and Varadhan variational formula of KL divergence. These Shannon information-based generalization bounds are obtained by using concentration results (e.g., Hoeffding's Lemma) to upper bound the cumulant generating function in the variational formula, which can be challenging to generalize to other $f$-information. Such challenges are avoided in our Lemmas 3.1 and 4.1, as we discussed in Line 131-136.
---
Rebuttal Comment 1.1:
Title: Thank you for the response.
Comment: Thank you for the response. I remain positive about the work, and in discussions may recommend a spotlight presentation (if appropriate).
---
Reply to Comment 1.1.1:
Title: Thanks
Comment: We sincerely appreciate your continued positive evaluation of our paper and are grateful for your recommendation for a spotlight presentation. | Summary: Understanding generalization using information-theoritic measures of dependence between input and output of a learning algorithm is an important area of study. The main focus of this line of work is using the KL divergence as a measure of dependence and providing generalization bounds. In this work, the authors propose a more general approach that replaces the KL divergence with arbitrary f-divergences. The main contribution of this work is presenting an approach to obtain such a generalisation bounds.
Strengths: The main strength of this work is showing that the KL divergence is not necessary for obtaining information-theoretic generalization bounds. Also, compared to other work showing this fact, the proof strategy in this paper is much simpler.
Weaknesses: I think the weaknesses of the work are the following:
1- The point that one can replace KL with an arbitrary f-divergence was shown in the following paper:
Lugosi G, Neu G. Online-to-PAC conversions: Generalization bounds via regret analysis. arXiv preprint arXiv:2305.19674. 2023 May 31.
I could not find a clear comparison of the approach proposed in this work and the paper by Lugosi et al.
2- More general question is that by replacing KL with f-divergence what sort of new insights can we obtain? I am trying to understand the real advantage of replacing KL with f-divergences.
3- It is difficult for me to parse Theorem 4.1. How can one find a good truncation value?
Technical Quality: 3
Clarity: 2
Questions for Authors: 1- Write down the definition of convex conjugate?
2- One main question here is that what is the gain of using f-divergence compared to the usual mutual information?
3- Typo in Lemma 3.1. It should be b_1 and b_2.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your constructive comments on our paper. Our responses follow.
>- 1- The point that one can replace KL with an arbitrary f-divergence was shown in the following paper:
Lugosi G, Neu G. Online-to-PAC conversions: Generalization bounds via regret analysis. arXiv preprint arXiv:2305.19674. 2023 May 31.
I could not find a clear comparison of the approach proposed in this work and the paper by Lugosi et al.
**Response.** Our work was indeed inspired by Lugosi et al. (2023) and other recent studies exploring alternative measures of dependency for generalization analysis. While we share a common theme with these works, our proof strategy differs significantly. Notably, our framework is much simpler as it does not rely on existing regret guarantees for online learning algorithms, unlike Lugosi et al. (2023).
Additionally, these works are not directly comparable due to different assumptions (e.g., Lugosi et al. (2023) require the second moment of the loss to be bounded, which we do not). However, both frameworks are deeply connected with convex analysis, and we believe each has its own advantages in specific contexts. We plan to further compare and potentially unify these frameworks in future research.
>- 2- More general question is that by replacing KL with f-divergence what sort of new insights can we obtain? I am trying to understand the real advantage of replacing KL with f-divergences.
>- 2- One main question here is that what is the gain of using f-divergence compared to the usual mutual information?
**Response.** Thanks to our generic recipe, we have shown that most generalization bounds based on KL divergence or other $f$-divergence can be derived by lower-bounding $\phi^{\*-1}(x)\geq x-ax^2$. The gap between $\phi^{\*-1}(x)$ and $x-ax^2$ reflects the tightness of the bound. To elaborate, define the gap function $g(x,a;\phi)=\phi^{\*-1}(x)- (x-ax^2)$. Therefore, to derive a tighter bound (i.e., a smaller $g(x,a;\phi)$), we consider the optimization problem $\min\_{x,a,\phi}g(x,a;\phi)$. Notably, there is no indication that choosing $\phi(x)=x\log(x)$, corresponding to KL divergence, is optimal.
In the paper, we use Figure 1 to visualize this. Clearly, using alternative $f$-divergences such as JS-divergence and squared Hellinger divergence can potentially lead to tighter bounds compared to KL divergence (i.e., the blue line in Figure 1(a)), as they have smaller $g(x,a;\phi)$. This has also been corroborated by our experiments.
While KL divergence and mutual information have advantageous properties like the chain rule, which enable various interesting studies and are not generally applicable to other $f$-divergences, there is no compelling reason to exclusively use KL divergence if tighter bounds can be achieved with other $f$-divergences.
>- 3- It is difficult to parse Theorem 4.1. for me. How one can find a good truncation value?
**Response.** We have discussed common cases for selecting truncation values in Lines 295-321. For the bounded case, the truncation value is often chosen as the boundedness value, though it might not always be optimal. If the random variable is likely to stay within a bounded range with high probability, selecting the corresponding boundedness value as the truncation value can be appropriate since $\zeta_2$ will be small in Theorem 4.1. If there is no additional information about the tail behavior of the random variable (e.g., loss difference), we invoke $\zeta_1\leq\sqrt{2}C$, apply the Markov inequality to $\zeta_2$ and then optimize the upper bound with respect to the truncation value $C$. This approach leads to Corollary 4.1.
>- 1- Write down the definition of convex conjugate?
**Response.** Thank you for pointing this out. We have added the definition of the convex conjugate to our manuscript.
>- 3- Typo in Lemma 3.1. It should be b_1 and b_2.
**Response.** We appreciate you catching these typos. We have fixed them.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. Is there any clean example which shows that the generalization bound developed in your paper is tighter than Lugosi et al. (2023)?
---
Rebuttal 2:
Title: Thanks for the prompt reply.
Comment: The most notable example where our bound is tighter than that of Lugosi et al. (2023) is in the realizable setting, where we provide a strictly non-vacuous bound. Specifically, when $\mathcal{A}$ is an interpolating algorithm (i.e., the training loss is always minimized to zero) and the loss function is bounded in $[0,1]$, as discussed in Lines 197-201, our Lemma 3.1, combined with the lower bound $\log(1+x) \geq x\log{2}$ for $x \in [0,1]$, allows us to derive the bound $\mathcal{E}\_\mu(\mathcal{A}) \leq \sum_{i=1}^n \frac{I(\Delta L_i;U_i)}{n\log{2}}$. This is a strictly non-vacuous bound because $I(\Delta L_i;U_i) \leq \log(2)$, ensuring the overall bound is $\leq 1$, which is the upper bound of the loss function. In contrast, none of the bounds in Lugosi et al. (2023) has this property when the loss is bounded in $[0,1]$; their worst-case bound (e.g., for a deterministic algorithm) exceeds $1$. Additionally, our framework allows similar bounds for other $f$-divergences, e.g., for squared Hellinger distance, using $\frac{x}{1+x} \geq \frac{x}{2}$ for $x \in [0,1]$, a corresponding bound can be obtained.
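The two elementary lower bounds used in this argument can be spot-checked numerically; the grid check below is our own sanity check, not part of the paper:

```python
import math

# Verify on a grid over [0, 1]:
#   log(1 + x) >= x * log(2)   (KL / mutual-information case)
#   x / (1 + x) >= x / 2       (squared Hellinger case)
for k in range(1001):
    x = k / 1000
    assert math.log(1 + x) >= x * math.log(2) - 1e-12
    assert x / (1 + x) >= x / 2 - 1e-12
```

Both inequalities hold with equality at the endpoints x = 0 and x = 1: log(1 + x) is concave and x log 2 is its chord on [0, 1], and 1 + x <= 2 on that interval.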
Moving beyond the realizable setting, consider the case of the KL or mutual information (MI). The only explicit expected generalization bound provided in Lugosi et al. (2023) is their Corollary 21, which recovers the square-root bound of $\mathcal{O}(\sqrt{I(W;S)/n})$. This bound is clearly weaker than our fast-rate bound in Corollaries 3.1-3.2, due to the omission of vanishing terms in our oracle bound in Theorem 3.1. In fact, a more refined MI bound is presented in the earlier version of Lugosi et al. (2023), namely Corollary 4 in Lugosi et al. (2022) [R1]. This bound takes the form $\sqrt{\frac{4\mathbb{E}\_Z\|\|\ell(\cdot,Z)-\mathbb{E}\_Z[\ell(\cdot,Z)]\|\|^2_\infty I(W;S)}{n}}$, which can indeed be derived from Lugosi et al. (2023) due to the generality of their framework. Recall that our Theorem 3.1 gives the bound $\frac{1}{n}\sum_{i=1}^n \sqrt{(2\mathbb{E}[\Delta L^2_i] + 2|\mathbb{E}[G_i]|)I(\Delta L_i;U_i)}$. Notably, because we apply individual and loss-difference techniques, our averaged MI term is always tighter than that of Lugosi et al. (2022, 2023), as $\frac{1}{n}\sum_{i=1}^n \sqrt{I(\Delta L_i;U_i)} \leq \sqrt{\frac{I(W;S)}{n}}$ generally holds. To fairly compare our framework with theirs, we ignore the difference between MI terms and only focus on the novel components of each bound, specifically $\mathbb{E}\_Z\|\|\ell(\cdot,Z)-\mathbb{E}\_Z[\ell(\cdot,Z)]\|\|^2_\infty$ in their work and $\mathbb{E}[\Delta L^2_i]+|\\mathbb{E}[G_i]|$ in ours.
Let’s consider the following simple example:
**Example 1.** Let $\mathcal{W} = [-1,1]$, and let the input space be $\mathcal{Z} = \\{1, -1\\}$. Assume $\mu = \text{Unif}(\mathcal{Z})$, i.e. $Z$ is a Rademacher variable. Consider a convex and 1-Lipschitz loss function $\ell(w,z) = -w \cdot z$.
Under the ERM algorithm, $W = \mathcal{A}(S) = \frac{1}{n}\sum_{i=1}^n Z_i$. Notice that for any $w \in \mathcal{W}$, $\mathbb{E}\_Z[\ell(w,Z)] = \frac{1}{2}(-w \cdot (1-1)) = 0$, hence $\mathbb{E}\_Z\|\|\ell(\cdot,Z) - \mathbb{E}\_Z[\ell(\cdot,Z)]\|\|^2_\infty = \mathbb{E}\_Z\|\|\ell(\cdot,Z)\|\|^2_\infty = 1$. In contrast, since $\Delta L_i \in [-1,1]$ in this case, $\mathbb{E}[\Delta L^2_i] \leq \mathbb{E}[|\Delta L_i|]$ and $|\mathbb{E}[G_i]| \leq \mathbb{E}[|\Delta L_i|]$, we have $\mathbb{E}[\Delta L^2_i] + |\mathbb{E}[G_i]| \leq 2\mathbb{E}[|\Delta L_i|]$. Moreover, $\mathbb{E}[|\Delta L_i|] = \mathbb{E}[|W \cdot (Z_i^+ - Z_i^-)|] \leq \frac{2}{n}\mathbb{E}[|\sum_{i=1}^n Z_i|] \leq \frac{2}{\sqrt{n}}$, where the last step is by the Khintchine-Kahane inequality [R2, Theorem D.9].
Thus, in this example, $\mathbb{E}\_Z\|\|\ell(\cdot,Z) - \mathbb{E}\_Z[\ell(\cdot,Z)]\|\|^2_\infty = 1$ as in Lugosi et al. (2022, 2023), while our bound $\mathbb{E}[\Delta L^2_i] + |\mathbb{E}[G_i]| \leq 2\mathbb{E}[|\Delta L_i|] \leq \mathcal{O}(\frac{1}{\sqrt{n}})$ shows a tighter convergence rate. Consequently, even without using the individual trick and loss-difference technique, our bound (Theorem 3.1) is still tighter than Lugosi et al. (2022,2023) in terms of convergence rate. We will include this example in our revision.
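Example 1 also lends itself to a quick Monte Carlo check (our own illustration): since $|\Delta L_i| = |W (Z_i^+ - Z_i^-)| \leq 2|W|$ with $W$ the mean of $n$ Rademacher variables, the bound $\mathbb{E}[|\Delta L_i|] \leq 2/\sqrt{n}$ is easy to observe empirically.

```python
import random, math

random.seed(0)

def delta_l_bound(n):
    # ERM on Rademacher data: W = (1/n) * sum(Z_i), loss l(w, z) = -w * z,
    # so |Delta L_i| <= 2 |W|.
    z = [random.choice([-1, 1]) for _ in range(n)]
    return 2 * abs(sum(z) / n)

for n in [100, 400, 1600]:
    est = sum(delta_l_bound(n) for _ in range(2000)) / 2000
    # Khintchine-Kahane-style bound: E|Delta L_i| <= 2 / sqrt(n).
    assert est <= 2 / math.sqrt(n)
```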
Finally, to clarify the comparison, it’s worth noting that the unpublished version of Lugosi et al. (2023) represents an initial exploration of their "online-to-PAC" generalization framework. As such, the results in their current version may primarily aim to recover previous findings, and their framework likely has further potential. We will continue to monitor the development of their work.
[R1] G. Lugosi and G. Neu. Generalization Bounds via Convex Analysis. COLT 2022.
[R2] M. Mohri, A. Rostamizadeh, and A. Talwalkar. Foundations of machine learning. MIT press, 2018.
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. I think the authors should update the manuscript and provide a better comparison with the results of Lugosi et al. I will increase my score to 6.
---
Reply to Comment 2.1.1:
Title: Thanks
Comment: Thank you very much for both your valuable suggestion and the increased score. We will carefully revise our manuscript based on your suggestions. | Summary: This paper extends conditional mutual information bounds to other $f$-divergences. A list of bounds involving various $f$-information terms are established. The results are derived by evaluating a previously established variational formula for f-divergences at a specific function. Analysis tailored to various choices of $f$ is then conducted to extract expected generalization bounds.
Strengths: The paper is clear in the assumptions made to derive its results. The buildup to each theorem is well detailed making it easy for the reader to follow.
Weaknesses: The main weakness of this work lies in the motivation for, and rationale behind, the results. The derived extensions have the quantity to be bounded appearing within the bound itself. This leads to strange tautological statements, as in line 165, where it is said that if each term in the expected generalization error decays fast with $n$ then so does the expected generalization error. The $\chi^2$, Hellinger, and Jensen-Shannon results have increasingly complicated terms that also involve the very quantity being bounded. It is unclear to me why such bounds would be of interest, especially since any instantiation requires falling back to a previously established result.
For example, the sub-Gaussian assumption seems necessary to have any hope of controlling the quantities appearing in Theorem 4.1, and when the assumption is made, the result yields a bound that is worse than existing bounds. The heavy-tail Corollary 4.2 is even more impenetrable: the very same quantities we wish to bound appear on the right-hand side within $L_p$ norms. The authors should provide justification as to why these true inequalities are not circular and of limited interest.
Minor note: There is some prior work establishing expected generalization bounds using different $f$-divergences [1].
[1] Esposito, Amedeo Roberto, and Michael Gastpar. "From generalisation error to transportation-cost inequalities and back." 2022 IEEE International Symposium on Information Theory (ISIT). IEEE, 2022.
Technical Quality: 4
Clarity: 3
Questions for Authors: How exactly are the experimental plots made? The theorems involve expectations and mutual information terms; are the authors estimating those? Can the authors add more information on what is plotted?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you sincerely for your valuable feedback on our paper. Our responses follow.
>- The main weakness of this work is in the motivation ...
**Response.** As noted in Lines 159-160, when a bound contains $\mathbb{E}[G_i]$, we refer to it as an "oracle" bound, such as Theorem 3.1. Obtaining the oracle bounds first provides new insights into information-theoretic generalization bounds and inspires novel bounds (e.g., Corollaries 3.1-3.2). When the oracle bound falls back to a previous result, our aim is to show either that the previous bound is potentially loose or that it can be recovered by a simpler approach based on our framework.
We now restate our motivation for introducing the $f$-information-based generalization framework. Our framework provides a new, generic method for deriving generalization bounds based on $f$-divergence in the supersample setting, differing from related works by not relying on existing concentration inequalities (e.g., Hoeffding's lemma) or existing regret guarantees for online learning algorithms. The advantages of our framework are twofold: 1) By first obtaining an oracle bound, we show that previous bounds are potentially loose as they ignore some vanishing terms; 2) By carefully handling these vanishing terms, we can either recover previous fast-rate bounds or derive new fast-rate bounds.
To elaborate, consider the KL divergence as an example. Our Theorem 3.1 presents the bound: $\mathcal{E}\_\mu(\mathcal{A})\leq\mathcal{O}(\sqrt{(\mathbb{E}[\Delta L_i^2]+\mathbb{E}[G_i])I(\Delta L_i;U_i)})$. In contrast, existing square-root MI bounds under the same boundedness assumption are $\mathcal{E}_\mu(\mathcal{A})\leq\mathcal{O}(\sqrt{I(\Delta L_i;U_i)})\leq\mathcal{O}(\sqrt{I(W;U_i|\widetilde{Z})})\leq\mathcal{O}(\sqrt{I(W;Z_i)})$. These previous bounds can be very loose as they ignore the vanishing term $\sqrt{\mathbb{E}[\Delta L_i^2]+\mathbb{E}[G_i]}$. Additionally, our framework allows us to recover previous fast-rate bounds such as $\mathcal{O}(I(\Delta L_i;U_i))$ under realizable settings. Furthermore, using the oracle bound, we derive two new fast-rate bounds in Corollaries 3.1-3.2, which do not contain $\mathbb{E}[G_i]$, to mitigate the looseness in previous square-root bounds.
For other $f$-divergence bounds, we mainly state their "oracle" versions because obtaining similar bounds as in Corollaries 3.1 and 3.2 follows the same procedure. This is mentioned in lines 257-258 (and see Corollary B.1 in Appendix B.6 for the squared Hellinger case).
>- For example, the sub-gaussian assumption ...
**Response.** Regarding the sub-gaussian case, we note that our Theorem 4.1 is not worse than existing bounds. The bound provided in Line 306 uses a rough choice of $C=\sigma$ and a pessimistic upper bound $\zeta_1\leq\sqrt{2}C$. While this bound shows a simple combination of two divergences, it does not rely on the optimal choice of $C$ (which we believe should also vanish as $n$ increases), and $\zeta_1$ is simply replaced by a non-vanishing constant in this case.
In fact, if we want to compare our bound with existing bounds, we can set $C=0$ and $q=1$ in the sub-gaussian case. This makes the first term in Theorem 4.1 zero, and the second term, using $||\Delta L\_i||\_{\beta}\lesssim \beta\sigma$ for sub-gaussian R.V., becomes $\mathcal{O}(\beta\sigma\sqrt[\uproot{5} \alpha]{I\_{\phi_\alpha}(\Delta L_i;U_i)})$. Compared to existing MI bounds, e.g., $\mathcal{O}(\sigma\sqrt{ I(\Delta L_i;U_i)})$, we know from Pinsker's inequality and $\mathrm{KL} \leq \chi^2$ that $\mathcal{O}(I_{\phi_1}(\Delta L_i;U_i))\leq \mathcal{O}(\sqrt{ I(\Delta L_i;U_i)})\leq\mathcal{O}(\sqrt{I_{\phi_2}(\Delta L_i;U_i)})$, suggesting that some $\alpha \in (1,2)$ could outperform the MI bound.
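As a small numerical sanity check of the divergence chain invoked above (our own sketch, not part of the paper), one can verify $2\,\mathrm{TV}^2 \leq \mathrm{KL} \leq \chi^2$ for random discrete distributions:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two random discrete distributions p, q on the same support.
p = rng.random(6); p /= p.sum()
q = rng.random(6); q /= q.sum()

kl = np.sum(p * np.log(p / q))   # KL divergence (in nats)
chi2 = np.sum((p - q) ** 2 / q)  # chi-square divergence
tv = 0.5 * np.abs(p - q).sum()   # total variation distance

# Pinsker's inequality (2*TV^2 <= KL) and KL <= chi^2, the two steps used above.
print(2 * tv**2 <= kl <= chi2)  # True
```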
Moreover, even if we set $C = \sigma$ and $q = 1$ as in Line 306, the bound $\mathcal{O}(\sigma\sqrt{ I(\Delta L_i;U_i)}+\frac{\alpha}{\alpha-1}\sigma\sqrt[\uproot{5} \alpha]{I_{\phi_\alpha}(\Delta L_i;U_i)})$ is not necessarily worse than existing bounds in terms of convergence rate. The bound in Line 306 is worse only if $\frac{\alpha}{\alpha-1}\sqrt[\uproot{5} \alpha]{I_{\phi_\alpha}(\Delta L_i;U_i)}$ dominates, namely, if $\frac{\alpha}{\alpha-1}\sqrt[\uproot{5} \alpha]{I_{\phi_\alpha}(\Delta L_i;U_i)}\geq \sqrt{ I(\Delta L_i;U_i)}$. We will add these additional discussions in the revision.
Regarding the heavy-tailed result, there is a typo: $G_i$ in Line 317 should be $\Delta L_i$. We sincerely apologize for the confusion and have fixed it. Furthermore, the validity of our Corollary 4.1 is consistent with existing heavy-tailed generalization bounds, which typically assume the higher $L_p$ norm of the loss is finite, implying the corresponding norm of the loss difference $\Delta L_i$ is finite. Therefore, as long as those bounds are valid, our Corollary 4.1 is meaningful.
>- Minor note: ...
**Response.** Thank you for pointing out this missing reference. We have included it in our revision.
>- How exactly are the experimental plots made ...
**Response.** The experimental plots are generated following protocols from previous works on CMI variants [21, 22, 23], and yes, we estimate expectations and MI terms by conducting multiple runs of experiments in the supersample settings. Specifically, as detailed in Appendix D, we draw $k_1$ samples of $\widetilde{Z}$ and $k_2$ samples of $U$ for each given $\tilde{z}$ (e.g., $k_1=5$ and $k_2=30$ for CNN on MNIST). Given that the loss function is 0-1 loss, estimating the mutual information between the discrete random variable $\Delta L_i$ (which can be $0$, $1$, or $-1$) and the binary random variable $U_i$ (which can be $0$ or $1$) is easy. For each plot, we show the mean and standard deviation (represented by the shaded areas) of the estimated bound values and generalization error values. A more detailed description of the experimental protocol is provided in [21, Appendix B], and we will include additional information in the next revision.
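A minimal version of the plug-in estimate described above might look as follows (our own illustrative sketch; the variable names are hypothetical and not taken from the paper's code):

```python
from collections import Counter
import math

def plugin_mi(xs, ys):
    """Plug-in mutual information estimate (in nats) between two discrete sequences."""
    n = len(xs)
    pxy, px, py = Counter(zip(xs, ys)), Counter(xs), Counter(ys)
    mi = 0.0
    for (x, y), c in pxy.items():
        # p(x,y) * log( p(x,y) / (p(x) p(y)) ), written with raw counts
        mi += (c / n) * math.log(c * n / (px[x] * py[y]))
    return mi

# Delta losses take values in {-1, 0, 1}; the supersample mask U_i is binary.
dl = [0, 1, -1, 0, 1, 0, 0, -1]
u  = [0, 1,  0, 0, 1, 1, 0, 0]
print(plugin_mi(dl, u))
```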
---
Rebuttal 2:
Title: Response to authors
Comment: Thank you for your detailed response.
*Oracle bounds* : I respectfully disagree with the authors on the significance of 'oracle' bounds. The other bounds in the literature are looser exactly because they do not want to include the very terms that need to be bounded in the RHS of the inequality. The authors' argument on their benefit unfortunately remains quite unclear to me.
Moreover, the corollaries you mention include expectations of $\Delta L_i^2$ on the right hand side, these are exactly equal to $G_i^2$. This is precisely a generalization gap. Unless I am mistaken, these corollaries are still 'oracle' bounds.
More concerning to me is the following point: the gap $\Delta L_i$ is assumed subgaussian for fixed $w$. The difficulty of bounding the expectations of $\Delta L_i$ lies in the fact that $w$ is data dependent, and therefore it is difficult to control its variance. The authors' claim that you could bound them with a constant is very much unclear to me. Why would it be immediate that $\Delta L_i$ is subgaussian, even with learnt weights $w$?
*The need for assumptions to make the bounds readable* : I thank the authors for providing details on how one should go about choosing all the different parameters in order to instantiate their bound on a subgaussian loss. I believe this further strengthens the point that the RHS of their results are simply not quantities that are accessible. The motivation for having a generalization bound where the RHS involves $L_p$ norm of the gap $\Delta L_i$ itself is not clear to me.
If the authors could explain why $\Delta L_i^2 = G_i^2$ is a meaningful quantity to have on the RHS, and especially if they could explain why they state that the gap will remain subgaussian even with learnt weights, I would increase my score. Currently, the circularity of the bounds makes me inclined to maintain my score.
---
Rebuttal Comment 2.1:
Title: Thanks for the reply
Comment: We sincerely appreciate the reviewer's engagement in the discussion.
>- Oracle bounds : I respectfully disagree with the authors on the significance of 'oracle' bounds. The other bounds in the literature are looser exactly because they do not want to include the very terms that need to be bounded in the RHS of the inequality. The authors' argument on their benefit unfortunately remains quite unclear to me.
>- Moreover, the corollaries you mention include expectations of $\Delta L_i^2$ on the right hand side, these are exactly equal to $G_i^2$. This is precisely a generalization gap. Unless I am mistaken, these corollaries are still 'oracle' bounds.
**Response.** We believe that stating $\mathbb{E}[G_i^2]$ is precisely a generalization gap may not be accurate. To our understanding, the generalization gap is $|\mathcal{E}\_\mu(\mathcal{A})|\leq \frac{1}{n}\sum_{i=1}^n|\mathbb{E}[G_i]|=\frac{1}{n}\sum_{i=1}^n\sqrt{\mathbb{E}^2[G_i]}\leq \frac{1}{n}\sum_{i=1}^n\sqrt{\mathbb{E}[G^2_i]}$. Hence, even for a symmetric algorithm $\mathcal{A}$, where $|\mathcal{E}\_\mu(\mathcal{A})|= |\mathbb{E}[G_i]|$, the term $\sqrt{\mathbb{E}[G^2_i]}$ is not exactly a generalization gap.
In our paper, we refer to a bound as an 'oracle' bound if it involves $\mathbb{E}[G_i]$. However, if the terms in the bound can be computed solely using $\Delta L_i$, e.g., $\mathbb{E}[\Delta L_i^2]$, we do not consider it an oracle bound, as the mutual information (MI) term in our bound already involves the random variable $\Delta L_i$. In essence, if the mutual information term requires access to the distribution of $\Delta L_i$, then prohibiting $\Delta L_i^2$ from appearing in the RHS of the bound would imply that the MI term itself may also be disallowed.
Furthermore, if one really prefers that $\Delta L_i$ does not appear in the RHS aside from the MI term, we note that in Line 212, we do mention that $\mathbb{E}[\Delta L^2_i]\lesssim \mathrm{Var}[L_i^+]$. This implies that all $\mathbb{E}[\Delta L^2_i]$ terms in these bounds can be replaced by $4\mathrm{Var}[L_i^+]$, which only requires access to a single column of losses in the supersample. This remains a novel bound.
Regarding the looseness of other bounds in the literature, we still believe that simply dropping some vanishing terms (i.e., upper-bounding by a constant) does not align with the goal of achieving tight generalization bounds. Instead, explicitly presenting these terms and then carefully handling them is crucial for both understanding why they are loose and improving upon those previous results.
>- More concerning to me is the following point: the gap $\Delta L_i$ is assumed subgaussian for fixed $w$. The difficulty of bounding the expectations of $\Delta L_i$ lies in the fact that $w$ is data dependent, and therefore it is difficult to control its variance. The authors' claim that you could bound them with a constant is very much unclear to me. Why would it be immediate that $\Delta L_i$ is subgaussian, even with learnt weights $w$?
**Response.** We appreciate the reviewer raising this valid point, and we acknowledge that our initial argument was indeed unclear. We believe $\Delta L_i$ is subgaussian even with learned weights $w$ for the following reasons: $L_i^-$ or $L_i^+$ can either be training loss (i.e., $W$ depends on $Z$) or testing loss (i.e., $W$ is independent of $Z$). If $L_i^-$ is a testing loss, it is subgaussian, and if $L_i^-$ is a training loss, we rely on the common understanding that the training loss should be finite for any meaningful algorithm (i.e., bounded by some constant), making it subgaussian as well. Therefore, the overall loss difference, being the sum of two subgaussian random variables, remains subgaussian. That said, we acknowledge that the additional condition—that the training loss should not go to infinity—should be explicitly stated. We will revise Lines 303-309 to directly consider $\Delta L_i$ as $\sigma$-subgaussian (rather than for any fixed $w$) and incorporate the related discussions in our previous response.
>- The motivation for having a generalization bound where the RHS involves $L_p$ norm of the gap $\Delta L_i$ itself is not clear to me.
**Response.** We maintain the opinion that if the mutual information term is allowed to be defined based on $\Delta L_i$, then involving the $L_p$ norm of the gap $\Delta L_i$ should also be valid. Moreover, it is still possible to replace this with the $L_p$ norm of the (centered) single-column loss due to the symmetric property of the supersample construction.
Please do let us know if the reviewer has any remaining concerns about the motivation behind our work. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding Representation of Deep Equilibrium Models from Neural Collapse Perspective | Accept (poster) | Summary: The paper analyzes the features of a deep equilibrium model (DEQ) using neural collapse metrics under class-balanced and imbalanced settings. In particular, it is shown that under imbalanced conditions, the class means of features in DEQ are relatively closer to a simplex ETF orientation than in the case of an explicit neural network model (such as ResNet).
Strengths: The paper takes an interesting theoretical view of analyzing the features of deep equilibrium models using the neural collapse metrics. In particular, the paper extends the unconstrained feature model to cases with implicit layers and analyzes the settings under which neural collapse is ideal (with respect to data balancedness).
Weaknesses: 1. The paper suffers from major presentation issues. In particular, a lot of notation/formulation errors lead to unclear results. For instance:
- In Eq (1) and the text below, the function $\phi()$ is assigned to a matrix without mentioning the class-wise arrangement of the features in matrix $H$. Similarly, in the definition of NC1, the class means vector has a different notation when compared with the one presented in Appendix A (line 482).
- The NC2 formulation in Theorem 3.1 is wrong.
- $\tau(i)$ in Theorem 3.1 is not defined.
- Mismatch in the scale of NC1 values in Figure 3.
2. The author's claim about DEQ reaching lower loss when compared to explicit layers is not validated in the experiments. Especially, the (train) cross-entropy loss is not illustrated. Furthermore, based on the results in Figure 3, it is unclear when the ResNet/DEQ reaches the terminal phase of training (i.e 100% training accuracy).
3. The paper states that DEQ models can be memory efficient, but do not provide any numerical results on the computational benefits of employing them. Also, the experimental results rely on training with a single random seed which is an incomplete measure of good/bad performance.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. The experimental setup with ResNet and the DEQ is unclear. Which part of the ResNet is being used as the image encoder? As of now, it seems like only the last ResNet block is reformulated as the DEQ, and all the previous layers are used as the image encoder. Is this observation correct? If that is the case, then what is the computational overhead of the DEQ layer?
2. Based on this setup of replacing only one ResNet block with the DEQ, how does this approach compare to simply fixing the last classifier layer of ResNet as a simplex ETF to address the class-imbalance issue?
3. What happens if we replace more than one block of the ResNet with a DEQ layer? How sensitive is the DEQ layer to learning rate and batch size?
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The paper suffers from major presentation issues. In particular, a lot of notation/formulation errors lead to unclear results.
A1: Thanks for pointing out these typos and errors! We have made the following corrections in the revised version:
* In Line 74, we have adjusted the domain as $\phi(x):R^{in\times N}\rightarrow R^{D\times N}$.
* We changed $\frac{E_WE_H}{N-1}$ on Line 549 to $-\frac{E_WE_H}{K-1}$, and we also adjusted the conclusion in NC2 from $k$ to $K$.
* In Line 169, we replace $i \in \tau(i)$ with $i\in \tau(k)$, where $\tau(k)$ denotes the set of samples belonging to the $k$-th class.
* We have adjusted the y-axis of Figure 3(a) for NC1 and changed it to a log scale for NC1.
> Q2: The author's claim about DEQ reaching lower loss when compared to explicit layers is not validated in the experiments. Especially, the (train) cross-entropy loss is not illustrated.
A2: Thanks for pointing this out! Since the specific value of the loss has little practical significance in classification, we primarily use accuracy to compare the performance of explicit and implicit layers. In the revised version, we will include the loss values as well.
> Q3: Based on the results in Figure 3, it is unclear when the ResNet/DEQ reaches the terminal phase of training.
A3: As discussed in Section 5.1, this early-stopping setting was intentional. This is because DEQ models exhibit instability, which becomes particularly pronounced as training progresses, with some samples struggling to converge to a fixed point and requiring more iterations. A previous paper raised this issue (Section 3.1 in [1]) and addressed it using early stopping. Here, we follow this standard technique in DEQ training.
> Q4: The paper states that DEQ models can be memory efficient, but do not provide any numerical results on the computational benefits of employing them.
A4: Memory efficiency is a fundamental property of DEQ. Extensive experiments were conducted on this aspect when DEQ was first proposed. According to Tables 1-3 in [2], for example, Transformers require 4.8 GB and 6.8 GB of memory, while their corresponding DEQ versions only require 1.1 GB. This is because the core of DEQ lies in parameter weight sharing, which saves a large number of parameters. Since this is not the focus of our paper, we did not include experiments on it. We primarily explain the performance of DEQ and compare it with explicit neural networks on classification problems.
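The weight-sharing argument above can be made concrete with a back-of-the-envelope parameter count (a hypothetical sketch with made-up layer sizes, not figures from [2]):

```python
D, depth = 512, 4  # hidden width and number of stacked layers (illustrative values)

# Explicit network: one D x D weight matrix per layer.
explicit_params = depth * D * D
# DEQ: a single weight-tied D x D layer applied repeatedly until a fixed point.
deq_params = D * D

print(explicit_params // deq_params)  # 4: DEQ stores depth-times fewer weights
```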
> Q5: The experimental results rely on training with a single random seed which is an incomplete measure of good/bad performance.
A5: We would like to argue that this is not true. We did not use just one seed; instead, we used different random seeds and computed the average and standard deviation of accuracy, which are shown in Tables 1-4.
> Q6: The experimental setup with ResNet and the DEQ is unclear. Which part of the ResNet is being used as the image encoder? As of now, it seems like only the last ResNet block is reformulated as the DEQ, and all the previous layers are used as the image encoder. Is this observation correct? If that is the case, then what is the computational overhead of the DEQ layer?
A6: We are afraid that this is not true. In our experiment, we use the layers from the last stage of ResNet, specifically the last two residual blocks, which comprise four neural network layers collectively structured as a DEQ. This reduces the number of parameters during computation. We apologize for any confusion in the previous statement and have clarified this in our revised paper.
> Q7: Based on this setup of replacing only one ResNet block with the DEQ, how does this approach compare to simply fixing the last classifier layer of ResNet as a simplex ETF to address the class-imbalance issue?
A7: Comparing DEQ to a simplex ETF on imbalanced-learning performance is not appropriate. DEQ is a broader network representation and is not specifically designed to address imbalanced learning. It can incorporate any network structure as long as there are parameters to be learned. Thus, the last layer with a fixed simplex ETF can also be converted into the DEQ formulation.
> Q8: What happens if we replace more than one block of the ResNet with a DEQ layer?
A8: Currently in our paper, we have already combined multiple consecutive residual blocks into DEQ. A distinctive feature of DEQ is its capability to convert arbitrary neural networks into the form of an implicit network regardless of length.
> Q9: How sensitive is the DEQ layer to learning rate and batch size?
A9: In our experiments, DEQ is not sensitive to batch size, but it is relatively more affected by the learning rate. A larger learning rate causes fluctuations in DEQ training to occur earlier. Due to the limited theoretical research on DEQ, these issues have not been thoroughly studied. Currently, [3,4] have explored different learning-rate settings and compared their outcomes, and [5] has proven that, with a sufficiently small learning rate, the loss function converges at a linear rate. Though we believe these questions are beyond the scope of our paper, we will add this analysis to the related work section of our revised version.
ref:
[1] Stabilizing Equilibrium Models by Jacobian Regularization, icml 2021
[2] Deep Equilibrium Models, nips 2019
[3] Deep equilibrium networks are sensitive to initialization statistics, icml 2022
[4] Positive Concave Deep Equilibrium Models, icml 2024
[5] Global Convergence Rate of Deep Equilibrium Models with General Activations, arxiv: 2302.05797
---
Rebuttal Comment 1.1:
Comment: Thank you for clarifying some of the concerns.
1. Line 259 states that "In this context, accuracy is assessed by averaging the results from the last 10 epochs and computing their standard deviation". How does this relate to the random seed-based std dev results?
2. The author's response to Q3 regarding the terminal phase of training(TPT) in Figure 3 is a bit concerning. Since achieving TPT is the main underlying assumption for NC analysis, my major concern is that since the ResNet itself is not trained to achieve 100% training accuracy (TPT), the claims regarding better DEQ performance/ similar NC might not hold. The early stopping is only necessary to train the DEQ to achieve the best results, but isn't training data interpolation the primary requirement for NC?
3. The authors also mention that training a DEQ leads to training instabilities so they skip some samples (line 255). Can you provide some more context on how many samples are discarded in this fashion? Is it something to worry about?
---
Reply to Comment 1.1.1:
Comment: Thanks very much for your comments.
A1. We are sorry for the confusion. Actually, we ran each experiment five times and then calculated the standard deviation. Each time, we averaged the results from the last 10 epochs and took this mean value as the result of that run. We did not choose a specific random seed; during each training session, the results were based on PyTorch's default random seed settings. We will add this clarification in our final version.
A2. We use 100 epochs as the early-stopping criterion and generally achieve around 95% training accuracy in these experiments. Since our work is primarily theoretical, our experiments aim to validate our theory and observe the trend of NC. DEQ at roughly 95% training accuracy still has clear advantages in terms of NC values in imbalanced cases compared to the explicit counterpart at 100% TPT. Therefore, we believe this result can validate DEQ's advantage as claimed in our theory.
A3. In our experimental results, using Cifar-100 as an example, less than 1% of the samples are discarded. Given the limited amount of affected data, we believe it is not quite necessary to worry about that. | Summary: This paper studies the neural collapse (NC) phenomenon in Deep Equilibrium model (DEQ), a competitive implicit neural network model to standard explicit model. The authors compare the theoretical property of DEQ and NN on the layer-peeled model, a simplified model that include the last two layers only. It was shown that under a balanced sample setting, NC happens both in DEQ and standard NN, under an imbalanced sample setting, DEQ exhibits better robustness property in terms of the NC metrics. Experimental results are presented to justify the theoretical analysis on CIFAR10 and CIFAR100, it was shown that DEQ have better performance than standard NN across in various settings.
Strengths: The finding of this paper is novel and interesting; it provides an insightful perspective for understanding the advantage of DEQ over the standard NN. The authors provide solid theoretical analysis and experimental results to support the main finding. Overall, the paper is well-organized and easy to follow.
Weaknesses: My major concern of this paper is its contribution and significance. Since the discovery of minority collapse, there has been tons of literature on how to mitigate the minority collapse. For example, Proposition 1 in [1] proves that simply reweighting the samples can mitigate the minority collapse. Therefore, my opinion is that neural collapse analysis can provide limited insight about the advantage of DEQ over standard NN, since the improvement is relatively marginal compared with simple techniques. I would suggest the authors compare the DEQ and standard NN with some standard techniques such as reweighting, and confirm that the advantage indeed have practical influence.
[1] Exploring Deep Neural Networks via Layer-Peeled Model: Minority Collapse in Imbalanced Training
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The experimental results are limited to ResNet-18 only, I would encourage the authors to perform more extensive evaluation on various architectures to support the finding.
2. The proof needs to be better organized. It will be helpful if the authors can provide an outline of proof in the appendix, restate the main result and break it into several propositions, and add some high level intuition of why DEQ outperforms standard NN.
3. Neural Collapse solutions do not have full rank in general, would that bring issues when writing down the inverse of gradient in equation (3)?
4. In your theoretical analysis, it was assumed that the DEQ is linear with a specific expansion form. The authors should highlight this with a hold out assumption and discuss when it can be satisfied. In particular, what is $W_{\text{DEQ}^i}$ in line 162? Are they being optimized in the programming? In the current form it seems to reduce to simply a linear function $Wx$.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have properly addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: My major concern of this paper is its contribution and significance. Since the discovery of minority collapse, there has been tons of literature on how to mitigate the minority collapse. Therefore, my opinion is that neural collapse analysis can provide limited insight about the advantage of DEQ over standard NN. I would suggest the authors compare the DEQ and standard NN with some standard techniques such as reweighting, and confirm that the advantage indeed have practical influence.
A1: Thanks for your comment! We would like to clarify that our goal was not to use DEQ to mitigate minority-class collapse but rather to leverage NC analysis to explain the performance of DEQ. We employed reweighted CE to test both the explicit NN and DEQ, observing that both accuracies improved. Importantly, their difference remained similar. We show the case of Cifar-10 with $K_A=3$ and $R=10$; the results are as follows:
||Reweighted CE|CE|
|:-:|:-:|:-:|
|Explicit NN|79.01$\pm$0.36|72.57$\pm$0.25|
|DEQ|80.17$\pm$0.95|73.84$\pm$0.72|
Additionally, our work primarily focuses on theoretical aspects since current DEQ research heavily relies on empirical evidence for its effectiveness and performance and lacks theoretical grounding.
Furthermore, we would like to argue that NC brings more than limited insight to DEQ. A significant similarity between DEQ and NC research is their model-agnostic nature. Using NC tools, we expand on the internal computational processes of both explicit and implicit structures, providing an explanation of why DEQ might outperform the explicit NN under certain conditions; i.e., we analyze and compare the differences between $W_{DEQ}$ and $W_{EX}$ in their application. Our findings demonstrate that the extracted features of DEQ align more closely with the vertices of a simplex ETF and exhibit better alignment with classifier weights under specific conditions.
We quantify the benefits of DEQ’s multiple forward iterations, offering a valuable theoretical supplement to the existing research. Thus, we believe our paper contributes to the community by addressing a theoretical gap and providing a valuable addition to the existing research.
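For concreteness, the reweighted-CE baseline referred to above can be sketched as follows (our own NumPy illustration with inverse-frequency class weights; this is not the authors' actual implementation):

```python
import numpy as np

def reweighted_ce(logits, labels, class_weights):
    """Cross-entropy where each sample is weighted by its class weight."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    w = class_weights[labels]
    return -(w * log_probs[np.arange(len(labels)), labels]).sum() / w.sum()

# Toy imbalanced setting: class 1 is rare, so it receives a larger weight.
counts = np.array([90, 10])
weights = counts.sum() / (len(counts) * counts)  # inverse-frequency weights
logits = np.array([[2.0, 0.5], [0.1, 1.2]])
labels = np.array([0, 1])
print(reweighted_ce(logits, labels, weights))
```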
>Q2: The experimental results are limited to ResNet-18 only, I would encourage the authors to perform more extensive evaluation on various architectures to support the finding.
A2: Thanks for your advice! We conducted a new experiment using MobileNet as the network backbone. The results are as follows:
$K_A=3$, Cifar-10
||R|10|50|100|
|:-:|:-:|:-:|:-:|:-:|
|Explicit NN|overall|69.47|49.16|34.98|
||major|94.13|95.56|93.70|
||minor|58.90|29.28|9.81|
|DEQ|overall|71.12|49.88|35.59|
||major|95.24|96.00|94.01|
||minor|60.78|30.11|10.55|
Due to character limits, we omitted the $\pm$ and variance terms and selected a specific set of experiments to showcase. Additionally, we incorporated more results into our revised paper. Our conclusion is that the differences between the explicit NN and DEQ consistently persist across different backbones.
>Q3: The proof needs to be better organized. It will be helpful if the authors can provide an outline of proof in the appendix, restate the main result and break it into several propositions, and add some high level intuition of why DEQ outperforms standard NN.
A3: Our proof sketch is divided into two cases: in the balanced scenario, we utilize inequalities to iteratively establish the conditions under which each equation holds, thereby deriving the properties of NC. In the imbalanced case, we analyze the conditions under which the majority and minority classes achieve their minima, and integrate these insights to compare the performance of the explicit NN and DEQ within the NC framework.
We have reorganized our proof and made the following modifications:
- We add a table of contents for our appendix.
- In the proof of balanced learning, we have rewritten Case 1 (explicit NN) in Line 528 and Case 2 (DEQ) in Line 552, emphasizing the three properties of NC for comparison. We explain that DEQ performs slightly better than explicit NN because of its lower bound on the loss function (theorem B.3).
- In the proof of imbalanced learning, we have added more detailed explanations about why DEQ performs better under mild conditions, which is mainly because the forward fixed-point iteration increases the number of learning iterations for samples in the minority class.
> Q4: Neural Collapse solutions do not have full rank in general, would that bring issues when writing down the inverse of gradient in equation (3)?
A4: We do not think so. Equation (3) describes the forward-solving process of DEQ, where $x$ is the input to the DEQ layer and $z^\star$ is the output of this layer obtained through fixed-point iteration. The rank in (3) differs from that in NC. Besides, the matrix $B^{-1}$ is not directly obtained through inversion in practice, but rather approximated using BFGS.
>Q5: In your theoretical analysis, it was assumed that the DEQ is linear with a specific expansion form. The authors should highlight this with a hold out assumption and discuss when it can be satisfied.
A5: Thanks for your suggestion! We would like to clarify that the assumption of Linear DEQ, as used and discussed in [1], is a common approach in DEQ studies. This assumption posits that a linear function is used during the fixed-point iteration to compute the equilibrium point $z^\star$. We will add this assumption more clearly in the final version.
>Q6: In particular, what is $W^i_{DEQ}x$ in line 162? Are they being optimized in the programming? In the current form it seems to reduce to simply a linear function $Wx$.
A6: $W_{DEQ}$ represents the parameter being optimized, and $W_{DEQ}^i$ refers to its $i$-th power. Its presence arises from the repeated iterations required during the forward-solving process, so the model cannot be reduced to a simple linear function $Wx$.
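To illustrate why the powers $W_{DEQ}^i$ appear, consider the linear DEQ iteration $z \leftarrow Wz + x$: its fixed point is the geometric series $z^\star=\sum_{i\geq 0} W^i x=(I-W)^{-1}x$ when the spectral radius of $W$ is below one. A small sketch (our own, with hypothetical sizes):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4
W = rng.standard_normal((D, D))
W *= 0.5 / max(abs(np.linalg.eigvals(W)))  # rescale so the spectral radius is 0.5
x = rng.standard_normal(D)

# Forward solve: fixed-point iteration z <- W z + x.
z = np.zeros(D)
for _ in range(200):
    z = W @ z + x

# Closed form of the equilibrium: z* = (I - W)^{-1} x = sum_i W^i x.
z_star = np.linalg.solve(np.eye(D) - W, x)
print(np.allclose(z, z_star))  # True
```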
ref:
[1] Deep equilibrium networks are sensitive to initialization statistics, icml 2022
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response and efforts on additional experiments, my concerns have been properly addressed. I would like to raise my score to weak accept and encourage the authors to improve the presentation of theoretical results in future versions.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thanks so much for your insightful review and for raising your score. We are happy that our responses addressed your questions. We will take your suggestions into our revision. | Summary: This paper investigates the representation of Deep Equilibrium Models (DEQ), highlighting their memory efficiency and competitive performance. Using NC, it shows that DEQ exhibits NC under balanced conditions and maintains advantages in imbalanced settings. Theoretical findings are validated through experiments, demonstrating DEQ's superior handling of imbalanced datasets.
Strengths: 1 The paper provides a theoretical analysis of the representation of Deep Equilibrium Models under both balanced and imbalanced conditions using the Neural Collapse phenomenon. This analysis demonstrates DEQ's advantages over explicit neural networks under some mild conditions.
2 Validation of the theoretical insights through experimental results on datasets like Cifar-10 and Cifar-100 enhances the credibility of the findings on the superior performance of DEQ, especially in handling imbalanced datasets.
3 The introduction of the Neural Collapse phenomenon offers a novel perspective on deep representations in implicit neural networks.
Weaknesses: 1 The analysis is limited to simple imbalanced scenarios and DEQ models, restricting the generalizability of the findings to more complex real-world situations.
2 While the paper provides valuable theoretical insights and experimental validation in section 5, there may be a need for more in-depth quantitative analysis like statistical analysis to further support the claims and conclusions drawn.
3 The lack of broader experiments for $K_A\neq 3$ in Table 3 may hinder a comprehensive comparison of DEQ with Explicit NN.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The analysis is limited to simple imbalanced scenarios and DEQ models, restricting the generalizability of the findings to more complex real-world situations.
A1: Thanks for your comment! Currently, our work primarily focuses on theoretical aspects, and discussing real-world issues will be a part of our future work. However, our research inherently addresses the model's generalization capability.
Through our theoretical proof, we have established that DEQ's NC2 and NC3 demonstrate superior properties compared to explicit neural networks under mild conditions. Specifically, we discovered that features extracted by DEQ align more closely with the vertices of a simplex ETF and conform better to the classifier weights under specific conditions. As a result, the feature separability of DEQ is enhanced compared to explicit layers under some mild conditions, which can potentially lead to stronger generalization performance in downstream tasks.
> Q2: While the paper provides valuable theoretical insights and experimental validation in section 5, there may be a need for more in-depth quantitative analysis like statistical analysis to further support the claims and conclusions drawn.
A2: Thanks for your advice. We have plotted the values of NC1 - NC3 in Figure 3 to Figure 5 and their statistical representations are included in Appendix Section A. The NC cases shown in Figure 3(a) and Figure 3(b) precisely support the conclusions in section 3 and section 4 respectively. Additionally, we have also calculated the mean and standard deviation of training accuracy, which are presented in the Table 2 - Table 4.
> Q3: The lack of broader experiment for $K_A\neq 3$ in Table 3 may hinder a comprehensive comparison of DEQ with Explicit NN.
A3: We did conduct the experiments with $K_A$ values of 5 and 7, and the results were presented in the appendix (page 26).
---
Rebuttal Comment 1.1:
Comment: I appreciate the time and effort taken by the authors on the response to my review, and for addressing each of my concerns in turn. This is a well presented, thorough and novel piece of work. Moreover, this paper explores the connections and differences between implicit and explicit NNs. Two recent papers [p1, p2] bear relevance to the subject of this paper, and should be cited. Overall, I believe this work will be valuable for guiding future theory and practice in DEQs and other implicit networks. As such, I increase my rating to accept.
[p1] X. Xie, et al. "Optimization induced equilibrium networks: An explicit optimization perspective for understanding equilibrium models." T-PAMI.
[p2] Z. Ling, et al. "Deep Equilibrium Models are Almost Equivalent to Not-so-deep Explicit Models for High-dimensional Gaussian Mixtures." ICML 2024.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your review and for appreciating our work! We will cite the two papers you mentioned in our final version. | Summary: The author analyzes DEQ from the perspective of Neural Collapse to explain why DEQ is effective. Neural collapse means that in the final phase of training (when training error is close to zero), the features and classifier vectors converge to a simplex Equiangular Tight Frame. Neural collapse usually occurs on balanced datasets, since convergence for some classes on an imbalanced dataset is difficult, a phenomenon called minority collapse.
To analyze neural collapse in DEQ, the author starts from a layer-peeled model, which focuses on last-layer features, and proves that Neural Collapse also occurs in DEQ, with a smaller lower bound on the loss function than for explicit neural networks. On imbalanced datasets, DEQ also shows a smaller lower bound on the loss function. One reason is that DEQ can be seen as an infinite-layer network, which has better representation capacity; its repeated iterations can also mitigate the problems of imbalanced datasets. However, minority collapse occurs in DEQ as well. The author provides experiments to support these claims.
Strengths: 1: Analyzing the advantage of DEQ from the perspective of Neural Collapse is interesting. This paper may open up new perspectives for analyzing DEQ.
2: The author provides analysis on both balanced and imbalanced datasets.
3: The author provides extensive experiments to support their idea.
Weaknesses: 1: The smaller lower bound of the loss function may cause overfitting, which may affect performance in the test phase. Adding a discussion of this point would be helpful.
2: Adding some experiments on datasets other than Cifar could make the conclusions more persuasive.
Technical Quality: 3
Clarity: 3
Questions for Authors: See weakness.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: The smaller lower bound of loss function may cause overfitting, which may affect the performance in the test phase. Add discussions on this part can be helpful.
A1: Thanks for your valuable suggestion! We agree with you that a smaller lower bound of the loss function can lead to overfitting.
We would like to clarify that we did employ several methods to avoid overfitting. For example, we implemented an early stopping mechanism (Sec. 5.1) to prevent instability in DEQ and halt the training process before the model overfits the training data.
Additionally, the solving process of DEQ can be configured with specific thresholds to terminate at appropriate points, and a manual iteration limit can also be set.
Furthermore, in our experiments, we also employed the Jacobian regularizer proposed in [1], which penalizes overly complex models and discourages the model from fitting the training data too closely.
We have added the detailed discussion into our revised paper.
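The stopping criteria above (a residual threshold plus a manual iteration limit) can be sketched as a toy fixed-point solver; the tolerance, limit, and contractive layer below are illustrative choices, not the configuration used in the paper:

```python
import numpy as np

def solve_fixed_point(f, z0, tol=1e-6, max_iter=100):
    """Iterate z <- f(z) until the relative residual falls below tol,
    or until the manual iteration limit is hit, whichever comes first."""
    z = z0
    for it in range(max_iter):
        z_next = f(z)
        residual = np.linalg.norm(z_next - z) / (np.linalg.norm(z_next) + 1e-12)
        z = z_next
        if residual < tol:
            break
    return z, it + 1

# Toy contractive "layer": z = tanh(W z + x), with W scaled to be a contraction.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((8, 8))
x = rng.standard_normal(8)
z_star, n_iter = solve_fixed_point(lambda z: np.tanh(W @ z + x), np.zeros(8))
print(n_iter <= 100)  # True
```

Both the threshold `tol` and `max_iter` bound the work done in the forward solve, which is one practical lever against the instability discussed above.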
> Q2: Maybe add some experiments on dataset other than Cifar can make the conclusion more persuasive.
A2: Thanks for your comment. Following the settings in previous DEQ papers, we have also included results on additional datasets, such as SVHN and MNIST; the results are shown here:
SVHN, $K_A=3$:
| | R | 10 | 50 | 100 |
|:---:|:-------:|:-----:|:-----:|:-----:|
| Explicit NN | overall | 83.82 | 60.66 | 54.84 |
| | major | 96.01 | 96.11 | 92.30 |
| | minor | 78.60 | 45.46 | 38.78 |
| DEQ | overall | 85.14 | 64.00 | 56.52 |
| | major | 96.99 | 98.01 | 93.12 |
| | minor | 80.06 | 49.42 | 40.83 |
SVHN, $K_A=5$:
| | R | 10 | 50 | 100 |
|:---:|:-------:|:-----:|:-----:|:-----:|
| Explicit NN | overall | 81.74 | 66.80 | 52.85 |
| | major | 91.33 | 88.86 | 84.41 |
| | minor | 72.16 | 44.73 | 21.30 |
| DEQ | overall | 83.75 | 68.24 | 53.30 |
| | major | 93.52 | 90.90 | 86.12 |
| | minor | 73.98 | 45.58 | 20.48 |
SVHN, $K_A=7$:
| | R | 10 | 50 | 100 |
|:---:|:-------:|:-----:|:-----:|:-----:|
| Explicit NN | overall | 82.36 | 71.41 | 63.97 |
| | major | 89.09 | 89.90 | 87.39 |
| | minor | 66.64 | 28.28 | 9.33 |
| DEQ | overall | 82.63 | 72.17 | 65.21 |
| | major | 88.86 | 90.15 | 88.51 |
| | minor | 68.10 | 30.21 | 10.85 |
The trend also resembles that on the Cifar datasets. On MNIST, the results of DEQ and Explicit NN are quite similar due to the dataset's simplicity, so we do not present them here.
ref:
[1] Stabilizing Equilibrium Models by Jacobian Regularization, ICML 2021 | Rebuttal 1:
Rebuttal: We thank the reviewers for their careful reading of our paper and help with improving our manuscript. We sincerely appreciate that you find our work:
- adds to the understanding of the behavior of DEQ (C4pT, a3Sj),
- conducts extensive and solid experiments (C4pT, a3Sj, Zj9t),
- addresses a novel, interesting and important topic (a3Sj, 6ZyB),
- provides rigorous proof and derives interesting conclusions (a3Sj, Zj9t, 6ZyB),
- is well-organized and easy to follow (Zj9t).
We would like to express our gratitude to all the reviewers for their valuable suggestions for improving our paper.
Several reviewers pointed out that adding more experimental results could make the conclusions more convincing. Therefore, we conducted additional experiments on **different datasets** other than Cifar-10 and Cifar-100, and also tested **alternative backbones** apart from ResNet. Interestingly, we obtained similar conclusions: expressing the network architecture in the form of DEQ provides a slight improvement in addressing imbalanced datasets.
Since there is currently a lack of related work analyzing the performance of DEQ, we would like to emphasize that our paper primarily focuses on analyzing the **theoretical explanation** for why DEQ outperforms explicit NNs in classification tasks under mild conditions. Furthermore, we are also **the first to consider the performance of DEQ on imbalanced datasets**.
Additionally, we would like to highlight that incorporating NC into DEQ is motivated by NC's capability to analyze any form of neural network. The key feature of DEQ is likewise its ability to represent any network structure as an implicit model and solve it using fixed-point iteration; the two are **similar** in this respect. Therefore, we discussed the performance of the parameters $W_{DEQ}$ and $W_{EX}$ for the two different forms of networks using the NC tool. We observed that under certain mild conditions, DEQ tends to produce features that are closer to the simplex ETF. Moreover, there is better alignment between weights and features in DEQ.
In response to the issues raised by reviewers, we have implemented the following adjustments:
- Included the newly added experiments.
- Improved the textual explanations, especially added the analysis of the similarities between DEQ and NC.
- Conducted a thorough review of the proof process and added more detailed discussions.
- Corrected typos and addressed language expression errors. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Boosting the Potential of Large Language Models with an Intelligent Information Assistant | Accept (poster) | Summary: The paper introduces AssistRAG, a framework that integrates an intelligent information assistant within LLMs. AssistRAG employs a two-phase training approach involving Curriculum Assistant Learning and Reinforced Preference Optimization, focusing on memory management and knowledge management. Experiments on three QA datasets demonstrate the effectiveness.
Strengths: 1. Experiments show the proposed method is effective.
Weaknesses: 1. Questionable innovation: the key ideas of the proposed method (memory and knowledge management) have become standard practice in agent design.
2. The proposed method involves model training and serving, which could be costly, and also make it harder to adapt to different foundation LLMs.
3. A.2 shows that 58% of the errors stem from insufficient knowledge retrieval, but no further details are provided. Given such a scenario, the retrieval strategy may need to be improved.
4. Using GPT-4 to generate fine-tuned data potentially limits the proposed method.
5. Notations are unclear, e.g., line 97, what is d_i? Eq. 2 is not well explained.
6. Table 1 shows that AssistRAG is combined with LLaMA2, ChatGLM, and ChatGPT, but the manuscript only mentioned using ChatGLM3 for training and ChatGPT in inference, which is inconsistent.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Please include more details about the (ChatGPT) model versions employed in this paper.
2. The articles were segmented into 100-token passages - this is a relatively short length for an LLM, and A.2 also shows that insufficient knowledge retrieval caused more than half of the errors; why not consider increasing the chunk size?
3. Experiment data and questions are focused on Wikipedia, how about other domains?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: 1. Given the current development of the Agent framework, it is necessary to compare the proposed method with the Agent solution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer,
Thank you for your valuable feedback. We have responded to each of the weaknesses (W), questions (Q), and limitations (L) you raised. We hope the following responses clarify the contributions of our work and address your concerns.
***
**R to W1:**
While memory and knowledge management are common elements in agent design, our approach introduces significant innovations that set it apart:
1. AssistRAG is the first to integrate a complete agent solution within RAG scenarios, encompassing tool usage, action execution, memory building, and plan specification. This sets a foundational approach for future intelligent assistants for humans.
2. By decoupling the RAG task into an Assistant LLM and a Main LLM, AssistRAG enhances the assistant's adaptability to RAG scenarios without compromising the main LLM's inherent capabilities through a two-phase training process.
3. AssistRAG adapts efficiently to new LLMs without the need for retraining from scratch, achieving significant improvements with preference alignment.
The following table highlights the distinctions between AssistRAG and other representative works:
|Model|Tool|Action|Memory|Plan|Training for RAG|No Impact on Main LLM|Adaptation to new LLMs|
|-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
|ReAct|√|√|×|√|×|√|No training|
|Self-RAG|√|√|×|√|√|×|Training from scratch|
|Selfmem|√|×|√|×|√|×|Training from scratch|
|AssistRAG|√|√|√|√|√|√|Preference alignment|
***
**R to W2:**
We understand your concerns regarding the cost and adaptability of the proposed method. In fact, our Assistant-based RAG framework is specifically designed to improve adaptability to different foundation LLMs.
1. Decoupling the RAG task into an Assistant LLM and a Main LLM is intended to separate the training of the Assistant LLM from the Main LLM. The Assistant Learning phase remains unchanged regardless of the Main LLM. This allows a well-trained Assistant LLM to be adapted to various Main LLMs without the need for retraining from scratch.
2. The DPO training phase is designed to enhance compatibility between the Assistant and Main LLMs. Even without this phase, the Assistant LLM can still effectively support the Main LLM, as demonstrated in Table 2 (No DPO Training).
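For context on the DPO phase mentioned above: DPO's standard objective is $-\log\sigma\big(\beta\,[(\log\pi(y_w)-\log\pi_{ref}(y_w)) - (\log\pi(y_l)-\log\pi_{ref}(y_l))]\big)$. The sketch below shows that objective with made-up log-probabilities; it is the generic DPO loss, not code or numbers from the paper:

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """Standard DPO objective: -log sigmoid(beta * policy-vs-reference log-ratio margin)."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-beta * margin)))

# Toy numbers: the policy prefers the chosen response more than the reference does...
loss_good = dpo_loss(-10.0, -14.0, -12.0, -13.0)
# ...versus a policy that prefers the rejected response.
loss_bad = dpo_loss(-14.0, -10.0, -12.0, -13.0)
print(loss_good < loss_bad)  # True: a larger preference margin gives a lower loss
```

Minimizing this loss pushes the Assistant's outputs toward those the Main LLM answers best from, which is the compatibility effect described in point 2.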
***
**R to W3:**
There may have been a misunderstanding regarding our presentation. We hope the following points clarify this:
1. The 58% refers to the proportion of errors caused by insufficient knowledge retrieval among all incorrect answers, not all questions.
2. We employ the highly advanced and widely used LLM-Embedder model for retrieval, which was proposed in 2023 and has been downloaded over 56k times on Huggingface.
3. Our focus is on complex multi-hop QA tasks, which require retrieving at least two correct documents to ensure knowledge completeness. This increases the difficulty for the retriever.
***
**R to W4:**
We would like to clarify the following points to address your concerns regarding the use of GPT-4:
1. Using GPT-4 to generate fine-tuned data has been widely adopted in various scenarios, such as WizardLM, WizardCoder, WizardMath.
2. Our proposed method does not rely exclusively on GPT-4. Recently released powerful open-source models, such as Llama-3.1 and DeepSeek-V2, can serve as alternatives.
3. We will open-source our training data and models. Community researchers will be able to adapt our method to their own LLMs without needing to use GPT-4 for generating training data.
***
**R to W5:**
We apologize for the unclear notation. We will revise the manuscript to provide clearer definitions and explanations for the notations.
***
**R to W6:**
We acknowledge the inconsistency you pointed out between Table 1 and section 4.3 (Inference Settings). To clarify, the base model used for training the Assistant LLM is ChatGLM3. In order to verify that the Assistant LLM can adapt to various main LLMs, we conducted inference using LLaMA2, ChatGLM, and ChatGPT, which also involved providing preference data. We will improve our wording in future versions to prevent any misunderstandings.
***
**R to Q1:**
We appreciate your request for additional details regarding the model versions used. The specific ChatGPT model employed in this paper is gpt-35-turbo-16k, provided by Azure, with a release date of 2023-05-15.
***
**R to Q2:**
There are several reasons for choosing 100-token wikipedia passages as the retrieval corpus:
1. Most baselines used this setting in their original papers, allowing for easier comparison and alignment with these baselines.
2. Wikipedia's official passage-level retrieval corpus segments articles into 100-token passages.
3. We do not input just one passage into the LLM but combine multiple retrieved passages. This length ensures that the combined input does not exceed the length limits of certain LLMs.
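The passage-combination step in point 3 can be sketched as a greedy packer that respects a token budget; the whitespace token count and the 250-token budget below are simplifying assumptions for illustration, not the paper's exact procedure:

```python
def pack_passages(passages, max_tokens=250):
    """Greedily concatenate retrieved passages, in rank order,
    stopping before the (approximate) token budget would be exceeded."""
    packed, used = [], 0
    for p in passages:
        n = len(p.split())  # crude whitespace token count
        if used + n > max_tokens:
            break
        packed.append(p)
        used += n
    return "\n\n".join(packed)

# Three ~100-token toy passages: only the first two fit a 250-token budget.
docs = [("alpha " * 100).strip(), ("beta " * 100).strip(), ("gamma " * 100).strip()]
context = pack_passages(docs, max_tokens=250)
print("gamma" in context)  # False: the third 100-token passage does not fit
```

With 100-token chunks, several passages fit comfortably under typical LLM context limits, which is the trade-off the reply describes.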
***
**R to Q3:**
This is indeed a valuable suggestion. We have expanded our study to include the ALCE-ELI5 dataset, which originates from the Reddit forum. This dataset is built upon the Sphere corpus, encompassing 899 million passages. The results are as follows:
|Main LLM|Method|Fluency|Correctness|
|-|-|-|-|
|LLaMA-2-chat|CloseBook|50.3|20.3|
|LLaMA-2-chat|Naive RAG|66.8|27.1|
|LLaMA-2-chat|AssistRAG|65.9|33.5|
|ChatGPT|LLMLingua|67.2|41.4|
|ChatGPT|CloseBook|53.3|37.2|
|ChatGPT|Naive RAG|67.4|40.0|
|ChatGPT|AssistRAG|**67.8**|**45.4**|
***
**R to L1:**
We have conducted additional experiments comparing our proposed method with two Agent frameworks capable of performing RAG tasks: Toolformer and Reflexion. These experiments were conducted on the 2wiki dataset, and we will supplement results on other datasets in a later version.
| |EM|F1|Recall|
|-|-|-|-|
|Toolformer|27.2|38.6|43.2|
|Reflexion|31.8|41.7|44.2|
|AssistRAG|**39.6**|**45.6**|**45.7**|
***
We hope these clarifications address your concerns and provide a better understanding of our work. If you have any further concerns, we would be delighted to continue the discussion with you.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thank you for your response and the additional information provided. After carefully considering the paper and the rebuttal content, I have decided to slightly increase my evaluation, based on a comprehensive assessment of the work's contribution to the field.
I would like to further clarify my concern regarding the insufficient knowledge retrieval. The current experimental results indicate that insufficient knowledge retrieval accounts for 58% of the errors, suggesting potential issues at the retrieval stage. This problem typically does not stem from the retrieval algorithm, package, or vector database, but rather from the data processing workflow. This raises questions about the use of 100-token passages, as current best practices typically involve chunk sizes in the range of 500-1000 tokens to capture more semantic content.
Thank you for your efforts, and good luck.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thorough evaluation and for considering the content of our rebuttal. We sincerely appreciate your willingness to reevaluate our work and the constructive feedback you’ve provided.
***
Regarding your concern about insufficient knowledge retrieval, this is indeed an interesting issue. We conducted experiments by increasing the chunk size to 512 tokens while keeping the number of retrieved documents constant. The results show the following changes in error proportions:
| Error Type | Original Proportion | New Proportion |
|------------|---------------------|----------------|
| Insufficient Knowledge Retrieval | 58% | 48% |
| Knowledge Extraction Errors | 12% | 20% |
| Answer Reasoning Mistakes | 20% | 22% |
| Other | 10% | 10% |
Increasing the chunk size did reduce the proportion of insufficient knowledge retrieval errors but also led to a higher likelihood of knowledge extraction errors due to the introduction of more irrelevant context. This trade-off is worth further exploration. Our opinion is that for models with strong knowledge extraction capabilities, increasing the chunk size or the number of retrieved documents can be an effective strategy.
***
Once again, thank you for your invaluable insights and your support of our work. Your recognition is our greatest encouragement. | Summary: To address the limitation of LLMs generating factually incorrect information, the authors have introduced AssistRAG, an intelligent information assistant with LLMs, building upon existing retrieval-augmented generation (RAG) strategies. The system operates in two main categories: memory management and knowledge management. AssistRAG employs a two-phase training approach: Curriculum Assistant Learning and Reinforced Preference Optimization. During inference, AssistRAG follows a three-step process: information retrieval and integration, decision-making, and answer generation with memory updating.
Strengths: 1. Novel integration of curriculum assistant learning and reinforced preference optimization, distinguishing it from traditional approaches.
2. AssistRAG demonstrates consistent outperformance across multiple datasets and base models, highlighting its potential for broad applications in improving LLM capabilities.
Weaknesses: 1. The paper might benefit from addressing potential scalability issues and providing more extensive comparisons with a broader range of state-of-the-art models.
2. Including real-world application scenarios and discussing potential limitations or challenges in deploying AssistRAG in practical settings would also enhance the paper's impact and applicability.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does AssistRAG perform in terms of computational efficiency and scalability when applied to very large datasets or in real-time applications? Can the authors provide any benchmarks or comparisons?
2. The ablation studies are informative, but could the authors include additional analysis on the sensitivity of AssistRAG to different hyperparameters or training configurations?
3. Incorporate user studies or qualitative evaluations to assess the practical usability and effectiveness of the assistant in aiding human users.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: 1. Llama 2 Chat (7B parameters) and ChatGLM3 (6B parameters) have substantially fewer parameters than GPT-3.5, which is reported to have a much higher parameter count. Moreover, GPT4-turbo is used for annotating the dataset (as mentioned in Appendix section A.1.3), however, the model isn’t used, or mentioned, in the experiments. This disparity in model size should be taken into account when comparing the performance of these models, particularly in relation to GPT-3.5.
2. The analysis done on token usage (section 5.2) is only conducted on the 2WikiMultiHopQA dataset, however other experiments in the study are done on the HotpotQA, 2WikiMultiHopQA, and Bamboogle datasets. The paper could benefit from a consistent dataset used for all evaluations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable feedback and recognition of our paper. We hope that the following responses will address your concerns:
***
**R to W1:**
To further demonstrate the effectiveness of our model, we have included additional experiments with two agent solutions on the 2wiki dataset. The results are as follows:
| | EM | F1 | Recall |
| ---------- | -------- | -------- | -------- |
| Toolformer | 27.2 | 38.6 | 43.2 |
| Reflexion | 31.8 | 41.7 | 44.2 |
| AssistRAG | **39.6** | **45.6** | **45.7** |
These results show that AssistRAG outperforms both Toolformer and Reflexion, demonstrating its superior performance in RAG tasks.
***
**R to W2:**
Thank you for your insightful suggestion. In Appendix A.4, we have discussed some potential limitations and challenges in deploying AssistRAG in practical settings. We will provide a more detailed discussion in future versions to further enhance the paper's impact and applicability following your advice.
***
**R to Q1:**
Computational efficiency in real-time applications is indeed crucial. To address the concerns, we first analyze the composition of inference time when encountering a new question:
1. The time taken by the Assistant LLM to perform actions, including Question Decomposition and Knowledge Extraction.
2. The time taken by the Assistant LLM to call retrievers, including memory retrieval and knowledge retrieval.
3. The time taken by the Main LLM to generate the answer.
Among these, steps 1 and 3 are independent of dataset size. Dataset size only affects step 2. In step 2, to achieve low-latency retrieval, we have utilized the FAISS library to index the documents and employed an IVF index structure for acceleration. This allows for millisecond-level retrieval speed even with datasets in the millions of documents range. Compared to the seconds-level time taken by steps 1 and 3, the retrieval time in step 2 is negligible. Therefore, very large datasets (below the billion-level) would not significantly impact computational efficiency.
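As a concrete illustration of the IVF idea in step 2 (a coarse quantizer routes each search to a few inverted lists rather than the whole corpus), here is a pure-numpy sketch; a real deployment would use FAISS's `IndexIVFFlat` itself, and all sizes below are toy values, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
docs = rng.standard_normal((1000, 32)).astype(np.float32)

# "Train" a coarse quantizer: a handful of centroids partition the corpus
# into inverted lists, mirroring what FAISS's IVF index does.
n_lists = 8
centroids = docs[rng.choice(len(docs), n_lists, replace=False)]
assignments = np.argmin(
    ((docs[:, None, :] - centroids[None, :, :]) ** 2).sum(-1), axis=1)
inverted_lists = [np.where(assignments == c)[0] for c in range(n_lists)]

def ivf_search(query, k=5, nprobe=2):
    """Scan only the nprobe inverted lists whose centroids are closest."""
    probe = np.argsort(((centroids - query) ** 2).sum(-1))[:nprobe]
    cand = np.concatenate([inverted_lists[c] for c in probe])
    dists = ((docs[cand] - query) ** 2).sum(-1)
    return cand[np.argsort(dists)[:k]]

q = rng.standard_normal(32).astype(np.float32)
# With nprobe == n_lists the search is exhaustive and matches brute force.
exact = np.argsort(((docs - q) ** 2).sum(-1))[:5]
print(np.array_equal(ivf_search(q, nprobe=n_lists), exact))  # True
```

Because each query scans only a fraction of the corpus, retrieval stays fast even as the document count grows, which is why step 2 is negligible next to the LLM generation time.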
***
**R to Q2:**
Thank you for your insightful suggestion. We have conducted additional experiments to analyze the sensitivity of AssistRAG (ChatGLM 6B) to different hyperparameters, specifically the learning rate during the Assistant Learning phase and the number of documents retrieved (K) during the knowledge retrieval phase. The results are as follows:
| Top K | K=1 | K=2 | K=3 | K=4 | K=5 |
| --------- | ---- | ---- | ---- | ---- | ---- |
| F1 | 20.4 | 36.6 | 39.8 | 42.4 | 43.2 |

| Learning Rate | lr=2e-4 | lr=1e-4 | lr=5e-5 | lr=2e-5 | lr=1e-5 |
| ------------- | ------- | ------- | ------- | ------- | ------- |
| F1 | 40.6 | 41.4 | 42.0 | 43.2 | 42.6 |
We hope these additional analyses provide a clearer understanding of how AssistRAG performs under different hyperparameter settings and training configurations.
***
**R to Q3:**
This is indeed a valuable suggestion. Following your recommendations, we conducted a user study involving three participants. Each participant was asked to write 20 factual questions. Subsequently, answers were generated using ChatGPT in three configurations: CloseBook, Naive RAG, and AssistRAG. The participants then evaluated their satisfaction with the answers provided. The results are as follows:
| | User A | User B | User C |
| --------- | -------- | -------- | -------- |
| CloseBook | 0.25 | 0.20 | 0.30 |
| Naive RAG | 0.45 | 0.55 | 0.45 |
| AssistRAG | **0.70** | **0.75** | **0.65** |
These results demonstrate that users were more satisfied with the answers generated by AssistRAG compared to the other configurations. We believe this user study highlights the practical usability and effectiveness of AssistRAG in aiding human users.
***
**R to L1:**
We apologize for the confusion. Our intention in selecting models with varying parameter counts was to demonstrate that AssistRAG significantly outperforms CloseBook and Naive RAG inference methods, regardless of the model size. We appreciate your suggestion and agree that presenting the results by grouping models of similar sizes will make the comparisons clearer. We will revise Table 1 to reflect this improvement, ensuring that models with similar parameter counts are compared directly.
***
**R to L2:**
Thank you for your suggestion. In fact, we have conducted token usage analysis across all three datasets. Since these datasets share the same retrieval corpus, the lengths of the retrieved passages are similar, resulting in no significant differences in token usage across different datasets. Therefore, we chose to present the representative 2Wiki results. For consistency, we will include token usage results for all three datasets in future versions of the paper.
***
We greatly appreciate the time and effort you have invested in reviewing our manuscript. Your insightful comments have been invaluable in helping us improve the quality and clarity of our paper. Thank you once again for your constructive feedback. | Summary: This paper proposes AssistRAG, an architecture for augmenting LLMs with a separate, trainable agent that helps with information retrieval and memory/knowledge management. The authors motivate such an architecture (as opposed to, say, fine-tuning the main LLM for RAG) and describe how to build and train it. The conduct experiments with 3 information heavy benchmarks and show consistent performance gains compared to a number of recent baselines.
Strengths: * The motivation behind AssistRAG is strong -- directly fine-tuning the main LLM for retrieval augmented generation is costly and can/does negatively impact its other abilities. Thus, freezing the main LLM and instead training a smaller auxiliary LLM to decide what to retrieve (tailored to the needs of the main LLM) and how to manage retrieved information is a highly desirable approach.
* The paper is very clearly written. The structure is easy to follow and balanced. The motivation, as noted above, is well articulated. The baselines and method are clearly described and the results well-summarized.
* Ablations show the value of individual components, and additional experiments confirm the benefits of the proposed approach even as the amount of training data is varied.
Weaknesses: I don't see any notable weaknesses in this paper. It's possible that I am not aware of the latest in this line of work and my assessment of novelty may not be accurate. I will defer to the other reviewers on the novelty aspect.
Some of the datasets used in this paper are rather old -- they are from the era of the train-test paradigm with 10s of thousands of training instances (e.g., HotpotQA was published in 2018!). I realize they are being used here in the modern context of prompted LLMs. Nevertheless, I wonder if there are newer datasets that might be more timely.
Technical Quality: 4
Clarity: 4
Questions for Authors: Please see the weakness section.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your recognition of our paper. We sincerely appreciate all the feedback from the reviewers and will make revisions to enhance the paper's impact and applicability based on your valuable suggestions.
Regarding the datasets, we have compiled the publication years of all commonly used QA datasets, as shown in the table below:
| Year | Datasets |
| ---- | ------------------------------------------------------- |
| 2023 | Bamboogle |
| 2022 | PopQA, ASQA, Musique |
| 2020 | 2WikiMultiHopQA |
| 2019 | SIQA, NQ, CommenseQA, BoolQ, PIQA, Fermi, ELI5, AmbigQA |
| 2018 | HotpotQA, NarrativeQA, WikiQA |
| 2017 | TriviaQA, MSMARCO-QA |
| 2016 | SQuAD |
As our paper focuses on complex QA tasks, we selected Multihop QA datasets for our study. These include Bamboogle (2023), 2WikiMultiHopQA (2020), and HotpotQA (2018). Among these, Bamboogle is the most recent Multihop QA dataset available.
Thank you once again for your constructive feedback. We greatly appreciate the time and effort you have invested in reviewing our manuscript.
---
Rebuttal Comment 1.1:
Title: Re: Rebuttal by Authors
Comment: Thank you for the response. Given your focus on Multihop QA datasets, `Musique` might have been a good one to try as it has some harder questions requiring 3 or 4 hops. Nevertheless, after looking at the other reviews and your responses, I remain in favor of accepting this paper. The only other suggestion I have is to include a clearer comparison with other similar work in the updated version of the paper. Thank you for the work!
---
Reply to Comment 1.1.1:
Comment: Thank you for your positive feedback and valuable suggestions. We will explore integrating the Musique dataset and expand our comparison part to provide a clearer analysis of similar work in the updated version. We appreciate your support and look forward to enhancing our paper based on your recommendations. | null | null | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Measuring Per-Unit Interpretability at Scale Without Humans | Accept (poster) | Summary: The paper proposes a novel method for measuring per-unit (e.g. per-neuron) interpretability of vision models, which is based on a DreamSim-based automation of the 2-AFC task. The method, which is called the Machine Interpretability Score (MIS), is found to be highly correlated with human measures of interpretability, as well as capable of making novel predictions about what units humans will find more and less interpretable. The authors then use MIS to compute average per-unit interpretability for 835 computer vision models, and highlight their findings (such as the negative correlation between accuracy and average per-unit interpretability, and the increased interpretability of deep layers) which would have been impossible to accomplish without the novel metric.
Strengths: The paper is a triumph of the genre: it is written in an incredibly clear fashion, presents limitations as they arise, and has an excellent appendix which directly addresses most questions that came up as I was reading the paper.
* Originality: the contribution solves a longstanding problem in per-unit interpretability in a novel and scalable way. It builds upon some of the best work in the field, and runs experiments that, as the paper mentions, would cost billions of dollars to complete using current approaches.
* Quality: the paper is of outstanding quality. It is exceptionally clear, and the appendix contains a host of highly useful sensitivity analyses. I especially appreciate Appendices C (justification of the choice of DreamSim) and I (application to sparse autoencoders).
* Clarity: It's very clear what the results are, and what the contribution is, and where the claims are supported. The authors do a good job of demonstrating that MIS explains existing data; then, that it makes novel predictions, which they support with a human experiment (!); and finally, have lots of detailed figures explaining their experiments which use MIS.
* Significance: human-free per-unit interpretability has long been sought in vision model mechanistic interpretability, and this provides a possible solution.
Weaknesses: No significant weaknesses.
Minor nitpicks:
* The way the query images are selected is only mentioned in the appendix; this might be worth quickly mentioning in the main paper (basically, the fact that they're also highly-activating images)
* In Figure 14, unless I'm misunderstanding something, there's a strange difference in y-axis scaling. I reckon these should be aligned.
* Line 195 has an extra full stop. (Or, if you were trying to evoke a sense of mystery, has one less than needed...)
* Figure 17 says "interpreability"; should this be "interpretability"?
Technical Quality: 4
Clarity: 4
Questions for Authors: * Here's something I don't quite understand: for a unit, why do its weakest-activating dataset samples seem to all be monosemantic? Wouldn't we expect them to be more random than that?
* It's unclear to me why one might expect "wider layers to be more interpretable" when comparing across models—sure, this seems true for a fixed model architecture and training set, but among the models being analyzed, wouldn't the models with larger layers be more likely to be larger models trained on more data?
* Is there a way to quantify how superposition affects MIS? Do you have any thoughts on what percentage of the decreasing MIS throughout training is explained by increased superposition in neurons?
* Do you have thoughts on training autoencoders to directly optimize for a combination of MIS and reconstruction loss?
Minor questions:
* In Figure 3, the red points simply represent the location of the models tested by Zimmermann and not the HIS determined by Zimmermann for those models, correct?
* What models form the pareto frontier in Figure 4A?
* In what sense is the word "define" used on line 122?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately address the limitations as they come up.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer gaJB, \
Thank you for your valuable feedback and for praising our paper as a *“triumph of the genre”* with *“outstanding quality”* and finding it *“solve[s] a longstanding problem”*. Please let us know whether our responses below addressed all of your questions or whether there are further questions we can answer so that you feel even more confident in our work.
**Q:** “The way the query images are selected is only mentioned in the appendix”\
**A:** Thank you for your suggestion. We will make use of the increased page limit in the camera-ready version of the paper to move this information to the main text.
**Q:** “Why [are] weakest-activating samples [not] more random?”\
**A:** We agree with you that it is not obvious why the least activating samples are also clustered and all contain the same visual feature. However, one can think of these features as “counterparts”/“anti-features” to the features displayed by the maximally activating samples: a unit only has high activation if the feature, but not the anti-feature, is present, which potentially allows the network to learn more specific feature detectors.
**Q:** “Is there a way to quantify how superposition affects MIS?”\
**A:** That’s a great question and suggestion! While we can’t think of a precise quantification right now, we can offer a thought experiment. Polysemantic units respond (strongly) to multiple concepts. If those different features elicit similar activation ranges, the query and reference images used to compute the MIS differ, resulting in harder tasks and lower MIS. We think that quantifying through human studies how well a drop in MIS is correlated with units being polysemantic will be an interesting follow-up for our work. Thank you for the suggestion!
**Q:** “why [might] one [...] expect "wider layers to be more interpretable"?”\
**A:** Thanks for raising this question! In this plot, we compare the relation of a layer's relative width and its interpretability, i.e., we ask whether wider layers of a network are more interpretable than narrower ones of the same network. We chose this relative comparison exactly to circumvent your concern. We think our observation might be explained by the superposition hypothesis: Narrow layers do not have sufficient capacity to represent different concepts individually through single units and instead have to leverage polysemantic units. We will expand on this connection in the camera-ready version of our paper.
**Q:** “thoughts on training autoencoders to directly optimize for MIS and reconstruction loss?”\
**A:** We think that using our proposed MIS to increase the interpretability of networks is very exciting — both for auto-encoders used to make large networks interpretable and for directly making networks more interpretable. While the current computation of the MIS can be used for non-gradient-based optimization (e.g., hyperparameter grid search), optimizing it scales inefficiently with parameter count. A challenge for future work will be to find a differentiable approximation of the MIS that can be optimized using gradient descent, circumventing the efficiency issue. We believe such an approximation can be defined when training with sufficiently large batch sizes, and hope to explore this further in follow-up work.
**Q:** “Do the red points in Fig. 3 represent the location of models tested by Zimmermann et al.?”\
**A:** Yes, that is correct. We will update the caption to ensure future readers do not mistake them for the results of Zimmermann et al.
**Q:** “What models form the pareto frontier in Figure 4A?”\
**A:** Thank you for this suggestion! We determined the Pareto frontier of Fig. 4A using the paretoset python package and will include the following table in the camera-ready version of our paper:
| Model | Acc [%] | MIS |
|:-------------------------------------------------|----------:|------:|
| googlenet | 69.15 | 0.908 |
| timm:resnet34.a3_in1k | 72.97 | 0.904 |
| timm:resnet50_gn.a1h_in1k | 81.22 | 0.901 |
| timm:ecaresnet101d_pruned.miil_in1k | 82.00 | 0.895 |
| timm:eva02_small_patch14_336.mim_in22k_ft_in1k | 85.72 | 0.890 |
| timm:vit_base_patch8_224.augreg_in21k_ft_in1k | 85.8 | 0.871 |
| timm:caformer_b36.sail_in1k_384 | 86.41 | 0.870 |
| timm:caformer_s36.sail_in22k_ft_in1k_384 | 86.86 | 0.870 |
| timm:caformer_b36.sail_in22k_ft_in1k_384 | 88.06 | 0.864 |
| timm:beitv2_large_patch16_224.in1k_ft_in22k_in1k | 88.39 | 0.839 |
We found it interesting that although purely convolutional networks with high accuracy exist, all points on the Pareto frontier with high accuracy belong to transformer architectures.
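For readers who want to reproduce such a frontier, a minimal dominance check suffices. Below is a sketch in plain Python rather than the paretoset package mentioned above; the short model list is an illustrative subset of the table, and `hypothetical_model_a` is an invented placeholder added only to show a dominated point.

```python
# Sketch: recompute a Pareto frontier over (accuracy, MIS) pairs, where both
# axes are "higher is better". A point is on the frontier if no other point
# is at least as good on both axes and strictly better on one.

def pareto_front(points):
    """Return indices of non-dominated (accuracy, MIS) points."""
    idx = []
    for i, (a, m) in enumerate(points):
        dominated = any(
            (a2 >= a and m2 >= m) and (a2 > a or m2 > m)
            for j, (a2, m2) in enumerate(points) if j != i
        )
        if not dominated:
            idx.append(i)
    return idx

models = [
    ("googlenet", 69.15, 0.908),
    ("timm:resnet34.a3_in1k", 72.97, 0.904),
    ("hypothetical_model_a", 70.00, 0.880),  # invented; dominated by resnet34
    ("timm:beitv2_large_patch16_224.in1k_ft_in22k_in1k", 88.39, 0.839),
]
front = pareto_front([(acc, mis) for _, acc, mis in models])
print([models[i][0] for i in front])  # the hypothetical model drops out
```

On this toy subset, every real model survives and only the invented dominated point is filtered, matching the structure of the full table above.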
**Q:** “In what sense is the word "define" used in L122”\
**A:** We use the word define here in the same sense as one defines a statistical/machine learning model. Would you prefer the word “model” instead of “define” here?
**Q:** “Figure 14 [...] difference in y-axis scaling”\
**A:** Thanks for the pointer, we will update this figure in the final version of the paper to use an aligned y-axis.
**Q:** Typos in L195 and caption of Fig. 17\
**A:** We corrected these typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed responses! This is a great paper. I guess my main hesitancy to increasing the score further is that while this paper is self-contained and attacks an important problem, it's not obvious to me that it would have "excellent impact on at least one area, or high-to-excellent impact on multiple areas"—this just seems like a really high bar. Essentially, I would be happy to see this paper accepted, but I am unsure as to what the actual impact will be, due to things like an unclear optimization target and complexity. | Summary: The paper presents a method to automate a per-unit (e.g. individual neuron, channel in a conv layer) interpretability score for vision models that was previously computed via an expensive human study [50]. They demonstrate that their automated scores are highly correlated with the human measures, and then apply their method on 835 vision models, obtaining a ranking of models by 'interpretability' that is consistent with the much smaller subset considered in [50]. Additional analyses using their method involve inspecting 'interpretability' vs. depth, vs. layer width, vs. layer type, and throughout training. The paper motivates their work by arguing that automating the interpretability score will enable optimizing for it, towards more 'interpretable' models.
Strengths: - The paper achieves the goal it sets out to achieve, namely that of automating the human interpretability score of [50].
- The authors conduct extensive experiments, evaluating a tremendous number of models (835!).
- The authors offer two forms of validating their main claim: one via correlating their score with the human interpretability scores from a prior study [50], and another via a second (new) human study conducted after developing their method.
- A number of follow up experiments are considered. Some interesting behaviors observed, like a big jump in MIS (machine interpretability score) for some of the batchnorms during the first epoch of training, and the drop in MIS over the final deepest layers (fig 6a).
- I am a big fan of the abstract problem: measuring interpretability is a hard problem, and an automatic method could potentially be very useful.
Weaknesses: While the MIS seems to do well at modeling the HIS, I am struggling to see the merits of such a score, as I find the underlying task too easy and as such, limited in its downstream potential for applications. As I understand it, the psychophysics task used to proxy interpretability simply asks users (given a 'unit' that maps each image to a scalar activation) to match a least activating image to a set of other least activating images and a highest activating image to a set of other highest activating images. This can be done so long as there is any kind of distinctiveness between the highest and least activating images. It also does not account for the well-known superposition phenomenon, in which a single unit may encode multiple concepts, thus hindering its interpretability / ability to steer model behavior; in such a case, one could still pass the psychophysics test by recognizing any one of the multiple concepts encoded by a unit, as the least activating images would not contain those concepts.
^to summarize, I don't think the underlying test that MIS can proxy gives us any valuable signal, as it does not tell us if a unit is aligned with a single concept. I could be wrong, but I find it vital for the authors to demonstrate an application of their interpretability score (e.g. combatting spurious correlations, steering model behavior, uncovering biases, etc).
The claim 'if we can measure it, we can optimize for it' is unsubstantiated and, in my opinion, misleading. I don't see a straightforward way nor reason to efficiently optimize for MIS.
The range of values that the MIS outputs empirically is quite tight, making it hard to place much significance on the observed differences.
I find the methods section to be needlessly mathy; I think it obscures the underlying method, which is not too complicated (sort images along a unit, select your query+example sets, use dream sim to get similarity between each example and the queries, average, pick).
The paper relies very heavily on [50]. It would be nice to see how this method could be incorporated with other popular ideas in current interpretability literature.
Technical Quality: 2
Clarity: 2
Questions for Authors: How would you envision optimizing for the proposed MIS score? What advantages would that yield?
Why do more accurate models have lower MIS? This makes the idea of optimizing for MIS a harder sell.
Is there any evidence that nodes with higher MIS are easier to name? I'd expect there to be a positive correlation, but I'm not sure how strong it would be. Again, I'm trying to think of way in which MIS can be useful, e.g. in collaboration with other interpretability techniques.
What was the correlation between the per-unit HIS and MIS scores for the new human study on the two models?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The primary limitation is that the underlying score this method proxies may not sufficiently align with notions of interpretability that can ultimately be useful. I do not see how the method or the findings of this paper can be operationalized towards more research going forward. I would encourage the authors to devise and convincingly present a way in which this method can ultimately be utilized towards more trustworthy, transparent, or reliable models.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer j42j, \
Thank you for your detailed feedback. Please let us know whether our responses below addressed all of your questions or whether there are further questions we can answer so that you can confidently increase your score.
**Q:** “find the underlying task too easy”, “does not account for polysemanticity”\
**A:** You are right that the underlying psychophysical task checks for a rather rudimentary level of understanding. Note, however, that understanding units at this level is necessary for any deeper understanding. This is important because experimental results show that human understanding of units at this level is rather brittle: Zimmermann et al. (2023) showed for two models that when the psychophysical task is performed with slightly less extreme query images, human performance drops strongly (see Fig. 6/7 of their paper). We can reproduce this finding with our MIS for many more models (see Fig. 1 of the general response), showing that this task is not trivial to solve. Further, the task partially accounts for polysemanticity, as the most extremely activating images of polysemantic units might correspond not to a single concept but to various concepts, making the task even harder to solve. In conclusion, this means there is still ample room for improvement in future models.
**Q:** “application of interpretability score”, “(advantages of) optimizing for MIS”\
**A:** Thank you for raising this question. We see three types of practical applications for our MIS that go beyond analyzing and understanding neural networks: (1) Optimizing neural networks directly to become more interpretable via gradient descent. (2) Performing model selection based on interpretability. (3) Tuning hyperparameters of other interpretability tools to make them explain networks better. While having a differentiable version of the MIS is required for (1) and would surely benefit (2) & (3), note that a non-differentiable metric still provides valuable insights enabling the latter two directions (although potentially less efficiently). As an example, we performed experiments with sparse auto-encoders (L231ff & Sec. I), revisited and investigated inconclusive results of previous papers, and performed hyperparameter selections. We will use the increased page limit of the camera-ready version to highlight these results more in the main text.
**Q:** “range of [MIS] values is tight”\
**A:** On a per-unit level, we find that the MIS spans the entire theoretical value range (~0.5 to 1.0) (see Fig. 2B/C). When averaged per model, the effective range indeed becomes tighter. We argue, however, that this is not an issue of the metric but instead shows that models only trained for good downstream performance all learn similar representations (https://arxiv.org/abs/2405.07987) with mediocre interpretability. With an increased interest in interpretable models, we expect future models to achieve higher MIS values. By choosing less extremely activating samples as query images (see Fig. 1 of general response), we can also increase the task’s difficulty to have a meaningful signal also for more interpretable models.
**Q:** “[Incorporation] with other popular ideas in current interpretability literature?”\
**A:** A particularly popular topic at the moment is SAEs. As described above, our MIS can be used to make model selection and hyperparameter tuning of SAEs more efficient, enabling large-scale sweeps. Further, it is conceivable to use the MIS as a guiding signal when finding interpretable circuits in a network: by excluding particularly uninterpretable units from the search, one might reduce the computational cost of finding circuits. Finally, as the MIS also works with explanations other than dataset examples (see Appx. E), it can be used with future explanation methods, too (e.g., MACO [1]).
**Q:** “Why do more accurate models have lower MIS?”\
**A:** This is an important question. We hypothesize that this is related to the phenomenon of superposition: With limited capacity, one way for models to obtain higher downstream performance is to represent features in superposition/entangle them. However, at the same time, this makes units harder to interpret as they don’t correspond to individual features anymore, explaining the lower MIS. We politely disagree with your assessment that this result makes it difficult to sell “optimizing for MIS”. First, note that we see only a correlation, which does not mean that there has to be a tradeoff between performance and interpretability, as this would assume a causal relation between these two variables. Second, note that if accuracy and MIS were positively correlated, then there would be no need to optimize for MIS as one would get this for free. On the contrary, our results show clearly that one should not hope to automatically get very interpretable models by only optimizing for high accuracy. Instead, we need to optimize for both accuracy and interpretability/MIS.
**Q:** “[are] nodes with higher MIS are easier to name?”\
**A:** Interesting question! To verify the correctness of our MIS, we used the human psychophysical data of [50]. This data also contains scores indicating how confident humans were when making their choices. Regarding your question, one might say that if humans find it easier to name units, they will have high confidence in solving the 2AFC task. Interestingly, we find a high linear/rank correlation of 80% between this confidence score and our MIS. Thus, we conclude that nodes with higher MIS tend to be easier to name.
**Q:** “correlation between the per-unit HIS and MIS scores for the new human study?”\
**A:** Based on your suggestion, we computed the correlation between MIS and HIS for the new human experimental data. Specifically, we compute the correlation for all units used for creating Fig. 2C and again find a high correlation ($\rho_p = 0.85, \rho_s = 0.81$).
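The kind of per-unit correlation reported here can be sketched in a few lines. The score lists below are synthetic placeholders, not the study's data, and ties are ignored in the rank computation for simplicity.

```python
# Sketch of an MIS-vs-HIS correlation check: Pearson on raw scores,
# Spearman computed as Pearson on ranks (ties not handled).
import math

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def ranks(x):
    order = sorted(range(len(x)), key=lambda i: x[i])
    r = [0.0] * len(x)
    for rank, i in enumerate(order):
        r[i] = float(rank)
    return r

def spearman(x, y):
    return pearson(ranks(x), ranks(y))

mis = [0.62, 0.71, 0.80, 0.88, 0.93]   # synthetic per-unit MIS
his = [0.58, 0.69, 0.75, 0.90, 0.95]   # synthetic per-unit HIS
print(round(pearson(mis, his), 3), round(spearman(mis, his), 3))
```

In practice, a library implementation (e.g., one that handles tied ranks) would be preferable; this sketch only shows what the two reported coefficients measure.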
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for their detailed responses and all their effort during the rebuttal period. tldr: Some concerns remain (and are shared by another reviewer, so they will need convincing too), but in general I think the paper takes a gradient step in the 'right' direction, so I won't be the reason why it does not get published. I'll increase my score to 5.
Some comments:
* on the underlying 2AFC task: I still think the task is too easy, but I guess it is better than nothing. I do find it important though to mention other works that try to automatically name neurons (a much harder task), as an automated interpretability metric (i.e. how well their automatically generated name for the neuron matches with the highly activating images) is a by-product of their work. Since those works do not focus (nor, I believe, flesh out) on that contribution, the novelty will fall to this submission.
* Looking at MIS for the least interpretable nodes is an interesting way to make the metric more insightful, though I'm not sure we need every node in the network to be interpretable.
* I would still clarify that it is not obvious how MIS can be directly optimized for (i.e. via gradient descent / during model training). The suggestions of using MIS for model selection or filtering out nodes when finding circuits is not rigorously substantiated, but still interesting -- we can let the next paper figure out better uses for MIS ;)
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your positive feedback and for appreciating our efforts in providing the new explanations/results. We are very pleased that you have increased your score to 5 and will be sure to integrate your comments into the discussion section of the camera-ready version of our paper! | Summary: The authors introduce a computational metric for interpretability. Their proposed metric is a computational version of an evaluation metric introduced in previous literature which measure human perceived interpretability. Crucially, they find that this metric correlates well, and since it does not require human studies, it can scale well, and therefore potentially enabling more insights in model interpretability.
Strengths: - **Significance**: in time and cost terms, human studies (which are necessary for assessing interpretability) are the main bottleneck. This work is therefore significant as it proposes a way to bypass this.
- **Results**: results seem overall interesting. I have some doubts here and there. see more below.
- **Clarity**: paper is in general easy to follow
Weaknesses: - **Experiments/Applicability**: results seem compelling for vision tasks in a domain that somehow falls in human common knowledge. I am not sure what would be the results in different domains and more complex/specialized vision tasks. More discussion below.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Perhaps a bit of a philosophical question, but how do the authors envision their metric in a computer-human interaction (CHI) context. As said before I am totally sold in terms of time/money saving for interpretability evaluation by avoiding human user experiments. However, I am strongly of the opinion that humans should still be somehow involved in terms of interpretability. After all, they are the ones affected/taking the final decisions. So can the authors expand/comment more on how a highly scalable method can interact with human decision making, which is by nature non-scalable? Let's say that you identified a model which is on average more interpretable than many others (Section 4.2.1), at this point how would a person interested in interpreting the model proceed? Would you show the units with the highest MIS? I think my general question would be: can MIS be used to help an user to fully comprehend their model, or would MIS be useful only for some pre-filtering?
2. As said above, results seem compelling in common-knowledge vision tasks. Do you expect HIS and MIS to have the same degree of correlation in, say, text classification tasks?
3. If we stay in vision task, what about out-of-distribution (OOD) samples? I somehow would expect HIS and MIS to stop correlating for such samples? If so, this would diminish the contribution of this work.
4. (L. 209) do you have a reference for the claim "googlenet [..] is widely claimed to be more interpretable"?
5. Do you have an intuition of why would interpretability decrease after the first training epoch? I would rather imagine the, e.g., conv filters to adapt and resemble more the data
6. Would be interesting to see a "distributional study" (similar to figure 4b) also for the width layer comparison. Why are wider layers more interpretable? is the MIS a "real average behavior" of units in the wider layers, or it's more because the wider the layer the higher the probability of having by change a unit that appears more interpretable?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Most limitations discussed in the conclusion
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer P5jG,\
Thank you for your valuable feedback and for praising our paper as a *“significant work”* with *“interesting results”*. Please let us know whether our responses below addressed all of your questions or whether there are further questions we can answer so that you can confidently increase your score.
**Q:** *“how do the authors envision their metric in a computer-human interaction (CHI) context?“*\
**A:** That’s a great question! We completely agree with you that, at its core, interpretability is a human-centered field, and, thus, humans need to be involved eventually. Therefore, we ensured that the proposed MIS explains human interpretability annotations well. We see numerous opportunities for the MIS to interact with/support humans: (1) The MIS might be used to increase a model’s interpretability to simplify the human’s job, e.g., through model selection, hyperparameter tuning or directly optimizing for it (where the latter would benefit from a differentiable approximation of the MIS). (2) The MIS can save time when interpreting models: Instead of attempting to understand incomprehensible units, we can start with particularly easy ones (based on their MIS) and allocate our time accordingly. (3) Such an MIS ordering can also be helpful when finding neural circuits in the network as excluding very uninterpretable units from the search reduces the combinatorial complexity and, thus, computational cost.
**Q:** *“Do you expect HIS and MIS to have the same degree of correlation in, say, text classification tasks?”*\
**A:** This is a very interesting question! Assuming access to a sufficiently human-like perceptual similarity function, we expect the MIS to generalize to various data modalities. Given the tremendous progress in language modeling/embedding, we are optimistic the MIS will work on text data, too. Testing this hypothesis requires extensive human psychophysical experiments. We will include this exciting experiment in the outlook paragraph of our paper and hope it will inspire future work.
**Q:** *“what about out-of-distribution (OOD) samples?”*\
**A:** Thanks for asking this question. Our experimental results indicate that OOD samples cause no problems. In Appx. E, we demonstrate that the MIS is still highly correlated with the HIS when using synthetic feature visualizations instead of dataset examples as reference images. These images pose a substantial distribution shift as they look very different from natural images. Therefore, we conclude that the MIS works correctly also for OOD samples.
**Q:** *“reference for the claim "googlenet [..] is widely claimed to be more interpretable"?”*\
**A:** This statement refers to the fact that many interpretability papers focus on this network, producing more and more insights into how it operates. We see now that our statement was imprecise and will revise it accordingly.
**Q:** *“why would interpretability decrease after the first training epoch?”*\
**A:** We have no definite answer to this question yet but hypothesize the following: this could be a sign of learning dynamics and the order in which features are learned. After initialization, the network can improve the fastest by learning very simple feature detectors (e.g., colors, simple geometric shapes), as those are weakly correlated with certain classes (e.g., blue colors increase the chance of seeing a fish). Those features are easy for humans to understand. Throughout training, these feature detectors are replaced with more complex ones that are harder to decode. As suggested by reviewer gaJB, in later training epochs, the network might also tend to a state of stronger superposition to increase classification performance at the cost of decreased interpretability. In Fig. 3 of the general response, we show visual explanations of units with a strong MIS drop between the second and last training epoch. We will use the increased page limit of the camera-ready version to discuss this hypothesis in the main text and include these visualizations in the appendix.
**Q:** *“Why are wider layers more interpretable?”, “distributional study for the width”*\
**A:** Our data shows a moderate increase in per-layer interpretability (i.e., per-unit scores averaged per layer) with increasing layer width. We see the same trend when, instead of the per-layer average, we plot the 5th or 95th percentile per layer (see Fig. 2 of the general response). This suggests that the overall MIS distribution moves to higher values with increasing layer width and that the effect is not dominated by a few outliers.
Overall, we hope to have answered all of your questions. If you are satisfied with our responses, we would appreciate it if you would increase your score. | Summary: The paper suggests a new automatic measure to asses how interpretable individual units inside vision models are (called MIS). The per-unit metric assesses the similarity of two query images (one should maximize unit activation and one minimize it) to two groups of representative exemplars (top-activating and least-activating images for that unit) using LPIPS. Repeating this for several query image pairs measures how well top/least activating images are consistent with themselves. If MIS score of a unit is high, that means all visual exemplars represent very similar visual concepts (ie unit is monosemantic) and therefore the unit is highly interpretable. If MIS score approaches chance (0.5) the unit is likely less interpretable as visual concepts of visual explanations are broad (ie polysemantic). The metric is shown to be well correlated with human assessment of the same task. Authors show several uses of the metric in several tasks such as assessing the interpretability of a wide range of models, units of different depths and widths along deep nets, training dynamics, and correlation of model interpretability and performance.
Strengths: * Automating scientific processes is important.
* The metric is built from reliable tools and the well-known 2AFC test format
* The authors made efforts to exemplify the use of MIS in several large-scale analyses
Weaknesses: * Although the paper attempts to provide analysis of large-scale experiments, I do not find any of the conclusions of these very exciting and do not see their contribution to future studies: \
a. The paper shows a study for a large number of networks, but the analysis is only mildly insightful; the MIS is very similar across all the networks. The per-unit analysis is also not very surprising, showing that shallow layers are not as interpretable as deeper layers. There is an interesting phenomenon at the very beginning and end of the network, but the authors do not attempt to explain it.\
b. The anticorrelation between the interpretability of units and classification performance was already shown in [50]. The larger-scale experiment of the paper includes many types of networks trained for different tasks; I wish the authors would explore the correlation with network performance for a broader set of tasks, not only classification.\
c. The training dynamics analysis is interesting, but again the interesting phenomenon of the MIS being highest after only one epoch is not explained, making the results harder to consume.\
d. The only part of the analysis that felt more exciting was the SAE experiment; however, unfortunately, the authors chose to analyze layers with relatively high MIS scores. In that setting, the SAE does not seem to improve the interpretability of units. Of course, the interesting experiment would be testing a layer with low MIS to see if the SAE improves its interpretability.
* The method relies heavily on existing tools like DreamSim and well-known tests (e.g., the 2AFC setting from [50]). Therefore, there is no actual technical contribution in forming the metric itself. The correlation of MIS with the human scores is not surprising: the authors use DreamSim in the exact setting it was trained for, with high correlation with human judgment being the training objective.
* Visual explanations are only one way to measure interpretability; the method does not suggest tackling any more advanced form of explanation, for example those of [1], [2], [3].
* An important aspect the MIS is missing is how legible the visual explanations are to end-users (which is, in the end, what we really care about when we want to measure "interpretability"). Is it true to assume that low variability in exemplars always implies a visual explanation that is well-legible to users?
* The method does not consider the full distribution of neuron activations but rather the two extrema.
In general, I feel like the paper is an immediate extension of [50] (i.e., as if it were another subsection in it) and not a paper in its own right.
[1] "Natural language descriptions of deep visual features" iclr 2022\
[2] "CLIP-Dissect: Automatic Description of Neuron Representations in Deep Vision Networks" iclr 2023\
[3] "a Multimodal Automated Interpretability Agent" icml 2024
Technical Quality: 3
Clarity: 3
Questions for Authors: * Please explain how the scalability of the measure enables revealing phenomena that were previously unknown (e.g. in [50])
* Can further analysis of unexpected phenomena as described above be done?
- Why is the SAE experiment performed on highly interpretable layers? Why not perform it on lower interpretable layers?
- What are the insights for other vision models (not necessarily trained for classification)?
- Can MIS be applied to other types of explanations (e.g. textual descriptions like MILAN, CLIP-Dissect, and MAIA)?
- Can MIS be extended to testing the full distribution of unit activations, not only extrema?
- Please explain the key differences and what is shared with the automated evaluation protocol of MAIA [3].
- Is it true to assume that low variability in top activating exemplars also makes the visual explanation well-legible to users?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: authors discuss the limitations of the method in the paper
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 8Jb5,\
Thank you for reviewing our paper. Please let us know whether our responses below addressed all of your questions or whether there are further questions we can answer so that you feel confident in increasing your score.
**Q:** *“I do not find any of the conclusions very exciting”*\
**A:** We see our contribution as part of a wider community effort to develop better techniques for understanding the inner workings of neural networks. As such, the MIS is better understood as improving our “microscope” for analyzing networks, rather than providing any specific conclusion. However, as you noted, our improved “microscope” already surfaces a range of interesting phenomena that have yet to be explained. It’s such unexplained phenomena that fuel and guide scientific progress, highlighting how relevant MIS is to the study of networks.
The improvement MIS provides is quite a step-change: compared to the manual approach of prior work (e.g. [50]), we can now test several orders of magnitude more units & models. This is why we could detect the anticorrelation between a model’s accuracy and interpretability, a relation that [50] failed to substantiate due to a lack of statistical power. It’s these kinds of analysis and phenomena that require automated interpretability metrics and which have been sought after for some time (e.g. see Rev. gaJB or [6]). Now, for the first time, such large-scale interpretability analysis is possible with our MIS.
**Q:** Relation to [50], *“technical contribution”*\
**A:** Before our paper, there was no work that allowed large-scale quantification of per-unit interpretability in vision models. With respect to [50], please note:
(1) The manual approach of [50] (and other prior works) does not scale at all. Our method is the first to scale their type of evaluation up and we succeed by finding a simple but clever way to automate it. We hope you can agree that, ultimately, a new method should be mainly evaluated by its potential impact. This impact is visible, e.g. by the fact that our study finds a clear anticorrelation whereas [50] could only produce inconclusive results due to multiple orders of magnitude fewer measurements.
(2) We’d also like to emphasize that our technical contribution is far from obvious. It’s not clear why a global perceptual metric (DreamSim) could fit human responses in our 2AFC task. In particular, one might expect (e.g. [7]) that humans solve the task by searching for common local patterns in the reference images and then comparing the most common ones to those of the query images. Hence, the fact that a global perceptual metric fits human responses so well was quite surprising to us.
**Q:** *“Differences to MAIA”*\
**A:** Thanks for asking this important question. Approaches such as MAIA and our MIS tackle completely different problems: MAIA is an automated explanation method, i.e. it tries to find explanations of what a unit does. But it does not allow quantifying *how interpretable* the unit is. On the contrary, our MIS is an *interpretability metric* that tells how interpretable a unit is given some explanations. Such an interpretability metric enables practitioners to increase the level of interpretability of a network, e.g., by model selection or hyperparameter tuning of the model or of an interpretability method.
**Q:** Relation b/w *“diversity in exemplars”* and legibility\
**A:** Our MIS leverages an established 2AFC task used in multiple prior works [6, 49, 50]. We will integrate more information from these works on why this task measures how legible explanations are (i.e. how well they explain a unit). Their reasoning can be summarized as follows: The task requires humans to reason about positive (and negative) explanations, to identify the positive (or negative feature) shared by the explanations, and to recognize this feature in the correct query image. Please note that these features rarely correspond to completely unambiguous, clear-cut semantic classes, but identifying the common feature can be challenging. If the explanations are so diverse that humans fail to identify a shared feature, it shows that they cannot understand what the unit is firing for.
**Q:** *“[Extension] to full distribution”*\
**A:** The MIS can easily be extended to test more than just the extrema of the activation distribution: instead of choosing the most extremely activating samples as query images, we can use less strongly activating ones and sample from other parts of the activation distribution. Please see Fig. 1 of the general response for a version of the paper’s Fig. 3 where we chose query images from the 98th/2nd percentile. As the human understanding (measured by HIS/MIS) even of the extrema is still limited and performance breaks down considerably when moving away from them, we suggest using the MIS with images near the distribution’s tails (e.g., the 98th or 99th percentile) to get a strong signal. Once models (be it base models or SAEs) or explanations improve, it can be insightful to test larger parts of the distribution, too.
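To make the percentile-based query selection concrete, here is a minimal NumPy sketch. All data, the embedding dimensionality, and the cosine `similarity` function are illustrative stand-ins (the real MIS uses DreamSim features and the exact 2AFC protocol of the paper); only the idea of sampling queries at a chosen percentile instead of the extrema is taken from the answer above:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative stand-ins: per-image unit activations and image embeddings.
activations = rng.normal(size=1000)
features = rng.normal(size=(1000, 64))

def similarity(a, b):
    # Cosine similarity as a toy stand-in for a perceptual metric like DreamSim.
    return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def pick_query(acts, percentile):
    # Pick the image whose activation is closest to the given percentile,
    # instead of the absolute extremum.
    target = np.percentile(acts, percentile)
    return int(np.argmin(np.abs(acts - target)))

def two_afc_trial(acts, feats, hi_pct=98.0, lo_pct=2.0, n_ref=9):
    # Reference exemplars: the most strongly activating images for the unit.
    order = np.argsort(acts)
    pos_ref = feats[order[-n_ref:]].mean(axis=0)
    q_pos = feats[pick_query(acts, hi_pct)]
    q_neg = feats[pick_query(acts, lo_pct)]
    # The trial counts as solved if the high-percentile query is more similar
    # to the positive exemplars than the low-percentile query is.
    return similarity(q_pos, pos_ref) > similarity(q_neg, pos_ref)
```

Moving `hi_pct`/`lo_pct` toward 50 samples ever less extreme parts of the distribution, which is exactly where the task gets harder.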
**Q:** *“[What about] textual descriptions”*\
**A:** We are optimistic that the MIS can be generalized to textual descriptions too. With the recent progress in modeling language-vision models, it is conceivable to replace DreamSim with a language-vision encoder such that the MIS is based on the similarity between textual descriptions and query images. To keep our paper focused, and due to the lack of human interpretability annotations for such textual explanations, we decided to leave exploring this extension to future work. We will add this discussion to the final version of our paper.
**Q:** *“SAE experiment performed on interpretable layers”*\
**A:** We chose a layer representing the median interpretability of GoogLeNet to neither make the experiment too hard nor too easy. We will re-run our experiments on a less interpretable model/layer and include it in the final version of our paper.
---
Rebuttal 2:
Comment: Thank you for your reply.
- If the conclusions in the paper are not the main contribution but rather the metric itself, please describe future potential use cases for it.
- Regarding the usage of DreamSim: because the aggregation over all image pairs is done by taking an average, I believe there would not be much difference between the scheme you mentioned and the current implementation of MIS (DreamSim has been shown to represent a global space for perceptual similarity that goes beyond pairwise comparisons). Nevertheless, if the procedure you describe is indeed how humans perform the task, why not construct MIS accordingly?
- Technically, I understand how MIS can easily be expanded to other "activation level" exemplars, but is it meaningful at these levels? The figure added is of the 98th percentile, which is still very high. What about lower percentiles?
- It would be great to see SAE results during the discussion period if possible; thank you for the effort on this.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer 8Jb5,\
Thank you for your response!
- We see numerous potential practical applications for our MIS that go beyond analyzing and understanding networks: (1) **Directly optimizing networks for interpretability** using gradient descent to improve the interpretability score; (2) **Model selection** by choosing models based on their interpretability scores; (3) **Hyperparameter tuning for interpretability tools** by optimizing their performance in explaining networks; (4) **Prioritizing interpretability efforts** by identifying easily and difficult-to-understand units to focus research; (5) **Reducing computational complexity in neural circuit identification** by excluding highly uninterpretable units from the search space. We demonstrate some of these applications by revisiting inconclusive results from previous work (Sec. 4.2.1) and performing hyperparameter selection (e.g., Sec. I for SAEs). We will further highlight these results in the main text using the extended page limit for the camera-ready version.
- So far, the exact strategy humans employ in the 2AFC task is unknown. Based on our own subjective experience (and earlier work), we initially hypothesized a focus on local pattern recognition. Nevertheless, we decided to test how aligned the decisions of a machine based on a global similarity metric like DreamSim are with those of humans. To our surprise, the strong correlation between MIS and human decisions suggests a different strategy may be dominant. While a small performance gap between humans and the MIS may exist (currently unanalyzable due to the noise ceiling in the human data), future work could explore incorporating a "local pattern search" strategy into the MIS.
- Both MIS and the evaluation protocol used for MAIA assess explanation informativeness. However, their implementations differ due to the distinct goals/outputs of MAIA (textual description) and our MIS (interpretability score): MAIA generates textual descriptions and LLMs and text-to-image models are used to evaluate activation differences in generated images based on these descriptions. In contrast, our MIS, grounded in established human psychophysical setups [6, 49, 50], utilizes natural images from a large database, eliciting high/low activations, and simulates human identification of these differences. This approach allows for a direct assessment of interpretability based on human perception.
- While the 98th percentile might still seem high, please note that our results (Fig. 1 of the general response) indicate that this task is nevertheless very hard for current models. This shows there is ample room for models to increase their interpretability. Evaluating lower percentiles can be insightful in future work: as model interpretability (whether of base models or SAEs) or explanation quality improves, exploring a broader range of the distribution will become increasingly valuable.
- Thank you for appreciating our effort! We are currently training SAEs for less interpretable layers but can already share some preliminary results with you for layer2_2_conv2 of a ResNet50 (MIS=85.83%). The table below now shows that using SAEs is beneficial compared to using the original layer. Moreover, this demonstrates how the MIS can be used for hyperparameter tuning (i.e., choosing the optimal sparsity weight). We will continue with more experiments and integrate them into the camera-ready version of our paper!
| Sparsity Weight| L0 Count | MIS [%] | MIS Improvement to Original Layer [%] |
|-----------:|-------------:|------:|----------------:|
| 0.01125 | 233 | 89.17 | 3.34 |
| 0.02500 | 138 | 90.79 | 4.96 |
| 0.03750 | 99 | 91.60 | 5.77 |
| 0.05000 | 75 | 91.47 | 5.64 |
| 0.06250 | 60 | 91.87 | 6.04 |
| 0.07500 | 49 | 91.84 | 6.01 |
| 0.08750 | 41 | 92.18 | 6.35 |
| 0.10000 | 35 | 91.78 | 5.95 |
We hope to have clarified our message, answered your questions, and addressed your concerns. We sincerely hope this information offers a clearer understanding of our work, allowing you to reassess our work's value and increase your score. | Rebuttal 1:
Rebuttal: Dear reviewers,\
Thank you for your valuable feedback. We are delighted that you praise our paper as a *“triumph of the genre”* with *”outstanding quality”* (Rev. gaJB) and finding it’s results *”overall interesting”* (Rev. P5jG) and *“potentially very useful”* (Rev. j42j) for an *”important”* topic (Rev. 8Jb5).
Based on your feedback and questions, we implemented the following changes in our paper for the rebuttal:
- We explained and demonstrated how the MIS can be computed not just for the extrema of the activation distribution but for the rest of the distribution, too (Rev. 8Jb5 & j42j) (Fig. 1 in attached PDF)
- We conducted a “distributional study” (Rev. P5jG) on the relation between layer width and interpretability (Fig. 2 in attached PDF)
- We computed the correlation between MIS and HIS on the new data collected for Fig. 2C. (Rev. j42j)
- We determined the Pareto frontier of models in terms of their accuracy-interpretability tradeoff (Rev. gaJB)
- We explored reasons for why the interpretability of a ResNet50 decreases during training (Rev. P5jG) by visualizing units with a particularly strong drop in MIS (Fig. 3 in attached PDF)
Please let us know whether our responses below addressed all of your questions or whether there are further questions we can answer so that you feel more confident in our work and can increase your score.
Pdf: /pdf/d7f95dd2000b1d92bc8337366bd76c9eda3a3860.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SparseLLM: Towards Global Pruning of Pre-trained Language Models | Accept (poster) | Summary: This paper tries to improve pruning techniques for LLMs to enhance computational and memory efficiency. The proposed SparseLLM circumvents the scalability problem of global pruning and the suboptimal performance of local pruning. It breaks down global pruning into smaller, decoupled subproblems.
Strengths: 1. Pruning LLMs remains an interesting problem, given that the size of LLMs keeps growing and serving these models on less powerful devices is worth researching.
2. The idea of decomposing global pruning into smaller subproblems and conceptualizing the LLM as several modular functions is practical.
Weaknesses: 1. It appears the pruning procedure differs from model to model. If the model architecture changes, the pruning needs to be adjusted. Therefore, I have doubts about the generality of the proposed solution.
2. The costs of pruning are not specified in the evaluation. For example, in Table 2, it seems that for most of the tasks, SparseLLM has comparable performance to SparseGPT. In this case, what is the advantage of SparseLLM?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Regarding global pruning, why is it necessary to fit the models within one GPU? Why not apply tensor parallelism?
2. What is the comparison of training loss convergence between SparseLLM and other baseline methods?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors address the limitations and there is no potential negative societal impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer K3H3,
Thank you for finding our method interesting and practically useful. Please refer to our response below for details:
> *"It appears the pruning procedure differs from model to model. If the model architecture changes, the pruning needs to be adjusted. Therefore, I have doubts about the generality of the proposed solution."*
**A1.**
The pruning procedure, such as the closed-form solutions for each subproblem in our *SparseLLM*, can differ from model to model, but the generality of our method is rooted in the mathematical formulation of our Eq. 3 and is guaranteed theoretically. More specifically, as long as the neural network architecture forms a directed acyclic graph (DAG), which is generally the case for LLMs, Eq. 3 of our *SparseLLM* can handle it.
That being said, we have discussed this weakness in the limitation section of our manuscript. We are happy to include more discussion there and to explore extending *SparseLLM* to more diverse model architectures in future work.
> *"The costs of pruning are not specified in the evaluation. For example, in Table 2, it seems for most of the tasks, SparseLLM has a comparable performance to SparseGPT. In this case, what is the advantage of SparseLLM?"*
**A2.**
*SparseLLM* consistently achieves competitive results, significantly decreasing perplexity by up to 80% compared to SparseGPT and Wanda. Notable improvements include OPT-2.7b at 90% sparsity: *SparseLLM* achieves a perplexity of 759.11 versus SparseGPT's 2285.01 for PTB, and 527.70 versus 1776.08 for C4, representing over **60%** improvements in both cases. For OPT-125m at 80% sparsity for C4, *SparseLLM* achieves a perplexity of 654.54 versus SparseGPT's 1050.83, representing over **40%** improvement.
*SparseLLM* is a generic framework of which both local pruning and global pruning are special cases. By flexibly switching between these extremes, the computational complexity of *SparseLLM* can be kept the same as that of local pruning.
> *"Regarding the global pruning, what is it necessary to fit the models within one GPU? Why not apply tensor parallel?"*
**A3.**
In this work, we consider pruning methods for LLMs under resource-constrained environments, where it is likely that only one GPU is available. Representative examples include academic labs and hospitals, edge devices, and mobile computing, which are prevalent in reality.
By utilizing more computational resources (e.g., bigger and more powerful GPUs, or distributed training including tensor parallelism, pipeline parallelism, etc.), one can to some extent achieve the vanilla or extreme global pruning of LLMs, but that goes beyond the main focus of this work. However, we agree that achieving extreme global pruning from a high-performance computing perspective is a challenging but interesting future direction building on our manuscript.
> *"What is the comparison of training loss convergence between SparseLLM and other baseline methods?"*
**A4.**
Since baseline methods such as SparseGPT and Wanda both consider one-shot pruning, there is no convergence curve for those methods. However, it is still possible to compare the training loss of *SparseLLM* over SparseGPT and Wanda. For example, under the setting of pruning layer 3 of OPT-125M at 80% sparsity, the training loss of our *SparseLLM* is **0.3** after 2 epochs and is **0.15** when convergence is achieved. On the contrary, the training loss of SparseGPT is **0.8**, after the one-shot pruning. Hence, *SparseLLM* can further reduce the training loss of one-shot pruning methods such as SparseGPT and Wanda via an iterative algorithm.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I appreciate the authors' effort!
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer K3H3,
You are welcome :)
We greatly appreciate your constructive comments, as well as the time and patience you have dedicated to this review.
Best regards,
Authors | Summary: In this paper, the author proposes a method to globally prune large language models (LLMs) without consuming significant memory. By using auxiliary variables, the LLM can be pruned separately while maintaining dependencies. The evaluation demonstrates that the proposed method outperforms previous approaches in terms of perplexity and accuracy across various sparsity settings.
Strengths: 1. The proposed method is promising and innovative.
2. The paper is well-written and easy to follow.
3. The experiments are comprehensive, with detailed settings provided.
Weaknesses: 1. The use cases appear tricky regarding model size. For smaller models (<7B), they can fit into GPU memory (A100), allowing global pruning. For larger models (>70B), achieving 90% sparsity does not outperform the smaller versions (7B).
2. The work only considers unstructured pruning. Unstructured sparsity may not accurately reflect the actual model size and inference time.
3. There is no discussion on memory consumption across various model sizes and pruning methods, which may undermine the claim that global pruning is infeasible for LLMs due to memory constraints.
4. Although it outperforms previous methods, the performance at 90% sparsity drops significantly, reducing its practical usefulness.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What model sizes benefit more from the proposed pruning method?
2. Can this method be extended to structured pruning, which may be more useful?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please refer to the weaknesses section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 35o1,
We are grateful for your recognition of the novelty of our method. Please find our detailed response below:
> *"The use cases appear tricky regarding model size. For smaller models (<7B), they can fit into GPU memory (A100), allowing global pruning. For larger models (>70B), achieving 90% sparsity does not outperform the smaller versions (7B)."*
**A1.**
In this work, our major goal is to improve local pruning methods for LLMs under **resource-constrained** environments, where high-end GPUs such as A100 are typically unavailable. Representative examples include academic labs and hospitals, edge devices, and mobile computing, which are very prevalent in reality.
Globally pruning LLMs with sizes greater than or equal to 7B under resource constraints is typically infeasible and might require distributed training. The major contribution of *SparseLLM* is the systematic exploration of global and local (layer-wise) pruning and everything in between using a theoretically sound technique. This allows us to decompose global pruning into smaller and decoupled subproblems that can be seamlessly combined with distributed and resource-efficient training.
Moreover, *SparseLLM* can provide larger pruned models that outperform smaller dense models. For instance, we tested *SparseLLM* with 40% sparsity on Llama-2-13B and achieved a perplexity of **5.09** on WT2 and **7.01** on C4, which are **lower** than dense Llama-2-7B (5.47 on WT2 and 7.26 on C4). This shows that our approach is getting close to Pareto optimality and just needs a little push.
> *"The work only considers unstructured pruning. Unstructured sparsity may not accurately reflect the actual model size and inference time."*
**A2.**
In this work, we considered not only unstructured pruning but also $N$:$M$ sparsity, or **semi-structured** pruning. The $N$:$M$ sparsity pruning requires that every $M$ consecutive parameters have at least $N$ zero elements. This can leverage NVIDIA’s sparse tensor cores to accelerate matrix multiplication in practice [1] *Asit Mishra, et al. "Accelerating sparse deep neural networks." arXiv preprint arXiv:2104.08378, 2021*. As shown in Table 1 (of the original manuscript), *SparseLLM* achieves competitive performance with 3:4 sparsity pruning in most cases, indicating its potential for actual GPU acceleration.
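To illustrate the N:M convention used in this answer (every $M$ consecutive parameters contain at least $N$ zeros, so 3:4 keeps one nonzero per group of four), here is a small magnitude-based masking sketch. The magnitude criterion is for illustration only; the actual pruning criteria in SparseLLM and the baselines are different, and only the sparsity *pattern* is the point:

```python
import numpy as np

def nm_mask(weights, n=3, m=4):
    """Zero out the n smallest-magnitude entries in every group of m
    consecutive weights, leaving at most m - n nonzeros per group.
    Assumes the weight count is divisible by m."""
    w = np.asarray(weights, dtype=float).reshape(-1, m).copy()
    # Column indices of the n smallest |w| within each group of m.
    smallest = np.argsort(np.abs(w), axis=1)[:, :n]
    np.put_along_axis(w, smallest, 0.0, axis=1)
    return w.reshape(np.shape(weights))

# With 3:4 sparsity, each group of four keeps only its largest-magnitude entry:
pruned = nm_mask([1.0, -2.0, 3.0, -4.0, 5.0, 6.0, -7.0, 8.0], n=3, m=4)
```

Because the pattern is regular at the hardware level, such semi-structured sparsity is what NVIDIA's sparse tensor cores can accelerate in practice [1].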
> *"There is no discussion on memory consumption across various model sizes and pruning methods, which may undermine the claim that global pruning is infeasible for LLMs due to memory constraints."*
**A3.**
We provide the memory cost analysis of global pruning as below:
For zero-order pruning methods, such as magnitude pruning, the memory cost is relatively low because there is no need for forward or backward propagation. The weights are pruned based solely on their absolute values, resulting in a memory complexity of $O(N)$, where $N$ is the size of the LLM model.
For first-order methods that use derivatives to estimate the score of each parameter for pruning, an end-to-end forward and backward propagation is required to estimate the pruning mask. Additionally, adjusting the unpruned weights also necessitates another end-to-end propagation. The memory complexity for such methods includes storing the parameters, optimizer states, activations, gradients, and pruning mask, leading to a total memory cost (in GB) of $5$-$10\times$ model size, which is already prohibitive for most LLMs on a normal GPU.
For second-order methods that use the Hessian, which are the most effective and commonly-used pruning methods, the memory cost is significantly higher due to the need to store and compute second-order derivatives. This results in a memory complexity of $O(N^2)$.
In conclusion, global pruning with first or second-order methods is extremely memory expensive, making them impractical for large-scale LLMs. Our proposed method, *SparseLLM*, is very useful as it decomposes the global pruning problem into manageable subproblems, significantly reducing the memory requirements and making it feasible for resource-constrained environments.
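The analysis above can be made concrete with a back-of-envelope calculation. The fp32 assumption and the 7x multiplier for first-order methods (a midpoint of the 5-10x range stated above) are illustrative choices, not measurements:

```python
def pruning_memory_gb(n_params, method, bytes_per_param=4):
    """Rough memory estimates for global pruning; fp32 weights assumed."""
    weights_gb = n_params * bytes_per_param / 1e9
    if method == "zero_order":
        # Magnitude pruning: only the weights themselves, O(N).
        return weights_gb
    if method == "first_order":
        # Weights + gradients + optimizer states + activations + mask:
        # roughly 5-10x the model size; 7x as an illustrative midpoint.
        return 7 * weights_gb
    if method == "second_order":
        # A dense Hessian has N^2 entries, hence O(N^2) memory.
        return n_params ** 2 * bytes_per_param / 1e9
    raise ValueError(f"unknown method: {method}")

# Even the smallest model in our experiments (OPT-125m) already needs
# several GB for first-order global pruning:
print(pruning_memory_gb(125e6, "zero_order"))   # → 0.5
print(pruning_memory_gb(125e6, "first_order"))  # → 3.5
```

For second-order methods the quadratic term dominates immediately, which is why decomposing the global problem into subproblems, as *SparseLLM* does, is what makes the approach feasible on a single GPU.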
> *"Although it outperforms previous methods, the performance at 90\% sparsity drops significantly, reducing its practical usefulness."*
**A4.**
The 90% sparsity level is just one of the sparsity levels we have shown in our experiments. Despite the significant challenge, the performance improvement of our method over the baselines at 90% sparsity is meaningful, demonstrating that *SparseLLM* achieves better perplexity than SparseGPT and Wanda even at extremely high sparsity levels, which is a non-trivial achievement.
We believe the significant performance improvement of *SparseLLM* remains useful, as it can provide a better initialization for the "prune and re-train" approach. Re-training methods such as [2] *Zhang, Yuxin, et al. "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs." ICLR 2024* could complement our *SparseLLM*, potentially close the gap and lead to more practically useful models.
> *"What model sizes benefit more from the proposed pruning method?"*
**A5.**
We empirically showed that *SparseLLM* achieves competitive performance over the comparison methods across various model sizes, from 125 million up to 66 billion parameters. The performance gap between *SparseLLM* and other methods at a given sparsity will in general decrease as the model size increases; however, this is because larger models are more over-parameterized and easier to prune. This monotonically decreasing pattern holds for all pruning methods, including SparseGPT and Wanda.
> *"Can this method be extended to structured pruning, which may be more useful?"*
**A6.**
Although in this work we mainly focus on unstructured and $N$:$M$ sparsity pruning, we believe extending our *SparseLLM* further to structured pruning is potentially feasible and interesting to explore. The high-level idea of *SparseLLM* is generic and not restricted to specific model pruning algorithms. | Summary: This paper introduces SparseLLM, a novel pruning technique targeted at the FFN layers in LLMs. By treating global and local (layer-wise) pruning as special cases in the proposed formulation, SparseLLM can circumvent the limitations of both extremes. The proposed method introduces auxiliary variables and soft constraints within LLM feedforward layers, which helps decompose pruning into subproblems that can be solved analytically. SparseLLM is evaluated on modern LLMs such as OPT and LLaMa2, pruning their weights in both unstructured and N:M patterns; the performance is compared with other state-of-the-art LLM pruning techniques such as SparseGPT and Wanda.
Strengths: * The authors address an important and relevant problem: how to induce relatively high unstructured and semi-structured sparsity (>70%) into LLMs both cheaply and effectively (in terms of accuracy w.r.t. dense model).
* The paper is very well-written and the core ideas have been presented clearly in an easy-to-understand manner.
* The proposed formulation permits the systematic exploration of global and local (layer-wise) sparsity and everything in between. This is quite powerful.
* The benchmarks are fairly strong, using modern medium-scale language models and comparisons to SoTA approaches like SparseGPT/Wanda; however, performance is a bit lacking (see comments below).
Weaknesses: * The proposed approach appears to obtain accuracy figures on par with SparseGPT at 70% unstructured sparsity. In higher sparsity regimes, SparseLLM seems to outperform SparseGPT and Wanda; however, most of the perplexity values reported for these regimes by all approaches (including SparseLLM) are extremely high; I'd argue that the obtained zero-shot models are pretty much unusable in the real world. I'm not sure I understand why saving pruning+retraining time is important in this regime - wouldn't additional training to close this gap make more sense? If so, SparseLLM could potentially be a good initialization for such an approach.
* Why are there no experiments performed for 2:4 sparsity, especially since results for 3:4 are reported? 2:4 is particularly relevant since hardware acceleration for this sparsity pattern is possible with today's GPUs. Does SparseLLM outperform SparseGPT/Wanda in this case?
Technical Quality: 2
Clarity: 4
Questions for Authors: * To clarify, does 3:4 mean that 3 out of 4 consecutive elements are zero? N:M sparsity is traditionally defined differently: at most N out of M consecutive elements are non-zero.
* Can SparseLLM handle alternative activation functions such as GeLU?
* Can SparseLLM handle networks with residual connections between layers?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: Limitations are adequately discussed in Section A.7 (Appendix).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer snMo,
We sincerely appreciate that you found our paper and method interesting with solid results. Please refer to our response below for details:
> *"The proposed approach appears to obtain accuracy figures on par with SparseGPT at 70% unstructured sparsity. In higher sparsity regimes, SparseLLM seems to outperform SparseGPT and Wanda; however, most of the perplexity values reported for these regimes by all approaches (including SparseLLM) are extremely high; I'd argue that the obtained zero-shot models are pretty much unusable in the real world. I'm not sure I understand why saving 'pruning and retraining' time is important in this regime - wouldn't additional training to close this gap make more sense? If so, SparseLLM could potentially be a good initialization for such an approach."*
**A1.**
*SparseLLM* achieves an accuracy improvement over SparseGPT by an average of 1-3% on zero-shot tasks. For example, at 70% sparsity with LLaMA-2 7b on the WinoGrande dataset, *SparseLLM* outperforms SparseGPT by 2.3 percentage points (61.39 vs. 59.04), and at 70% sparsity with LLaMA-2 13b on the RTE dataset, *SparseLLM* shows a 3.25 percentage point improvement (61.73 vs. 58.48).
In higher sparsity regimes, for example, 80% sparsity, the perplexity of *SparseLLM* on the WT2 and C4 datasets remains practically useful. For instance, *SparseLLM* achieves perplexity values of 15.61 and 16.61 on WT2 and C4 for OPT-30b at 80% sparsity. Additionally, *SparseLLM* achieves perplexity values of 16.45 and 17.70 on WT2 and C4 for OPT-66b at 80% sparsity. The perplexity of the model pruned by *SparseLLM* is close to that of some smaller dense models and needs only a small additional push.
*SparseLLM* could provide a better initialization for the "prune and re-train" approach, potentially enhancing performance and closing the gap caused by pruning. Re-training methods such as [1] *Zhang, Yuxin, et al. "Dynamic Sparse No Training: Training-Free Fine-tuning for Sparse LLMs." ICLR 2024* could complement our *SparseLLM*. We will explore such approaches and discuss this topic in the limitations and future work section of our paper.
> *"Why are there no experiments performed for 2:4 sparsity, especially since results for 3:4 are reported? 2:4 is particularly relevant since hardware acceleration for this sparsity pattern is possible with today's GPUs. Does SparseLLM outperform SparseGPT/Wanda in this case?"*
**A2.**
We have added the results for 2:4 sparsity in Table 5 in our one-page PDF, where *SparseLLM* can consistently beat the comparison methods, which demonstrates the potential of *SparseLLM* to achieve practical GPU acceleration.
> *"To clarify, does 3:4 mean that 3 out of 4 consecutive elements are zero? N:M sparsity is traditionally defined differently: at most N out of M consecutive elements are non-zero."*
**A3.**
In our paper, $N$:$M$ means that every group of $M$ consecutive values contains at least $N$ zeros, following the definition from [2] *Mishra, Asit, et al. "Accelerating sparse deep neural networks." arXiv preprint arXiv:2104.08378 (2021).*
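To make this convention concrete, here is a minimal magnitude-based sketch of $N$:$M$ pruning (an illustration of the definition only, not SparseLLM's actual pruning rule):

```python
import numpy as np

def prune_n_m(weights, n, m):
    """Zero the n smallest-magnitude entries in every group of m
    consecutive weights, so each group contains at least n zeros
    (the N:M convention of Mishra et al., 2021)."""
    w = weights.reshape(-1, m).astype(float)
    idx = np.argsort(np.abs(w), axis=1)[:, :n]  # n smallest per group
    np.put_along_axis(w, idx, 0.0, axis=1)
    return w.reshape(weights.shape)

w = np.array([0.9, -0.1, 0.4, 0.05, -0.7, 0.2, 0.3, -0.8])
pruned = prune_n_m(w, n=3, m=4)  # 3:4 sparsity: 3 zeros per group of 4
```

Each group of 4 consecutive values keeps only its single largest-magnitude entry.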
> *"Can SparseLLM handle alternative activation functions such as GeLU?"*
**A4.**
Yes, *SparseLLM* can handle alternative activation functions such as GeLU in a similar way to how it handles SiLU. Both activation functions are element-wise operators, meaning each output position is calculated independently. By following the approach used for the SiLU activation function in LLaMA (Lines 233-238), leveraging a pre-computed look-up table, one can obtain analytical solutions for each subproblem.
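As a toy illustration of the look-up-table strategy for element-wise activations (the quadratic objective, the coefficients, and the tanh-based GeLU approximation below are our own illustrative assumptions, not SparseLLM's exact subproblem):

```python
import numpy as np

def gelu(x):
    # tanh approximation of GeLU
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# pre-computed look-up table over candidate pre-activation values
grid = np.linspace(-8.0, 8.0, 100_001)
gelu_grid = gelu(grid)

def solve_subproblem(a, b, alpha=1.0, beta=1.0):
    """Approximately minimize alpha*(z - a)**2 + beta*(gelu(z) - b)**2
    over z by scanning the pre-computed table; because the activation is
    element-wise, each coordinate can be solved independently this way."""
    objective = alpha * (grid - a) ** 2 + beta * (gelu_grid - b) ** 2
    return grid[np.argmin(objective)]

z = solve_subproblem(a=1.0, b=gelu(1.0))  # both terms vanish at z = 1.0
```

Any element-wise activation (SiLU, GeLU, ReLU) only changes the pre-computed `gelu_grid` table; the per-coordinate scan is otherwise identical.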
> *"Can SparseLLM handle networks with residual connections between layers?"*
**A5.**
Yes, *SparseLLM* can handle architectures with residual connections. An intuitive way to explain this is by referring to Figure 2 in our original manuscript. Specifically, in the sub-figure of *SparseLLM* on the LLaMA layer, residual connections can be regarded as a special case of the bottom half of the FFN module in the LLaMA layer. In this case, the "up proj" layer is replaced by a trivial identity mapping, and the dot-product aggregation (black round dot) is replaced by summation. Since the "up proj" layer is replaced by a non-parametric function, the auxiliary variable $z_{\ell}$ can be discarded.
---
Rebuttal 2:
Title: Request to review the rebuttal [Author-Reviewer discussion phase ending soon]
Comment: Dear Reviewer snMo,
We would like to sincerely thank you again for your valuable feedback and insights, which have greatly improved our paper. We promise to reflect all your comments in the final manuscript thoroughly. As we are towards the end of the author-reviewer discussion period, we kindly request you to please go through our rebuttal, and we would be immensely grateful if you could reconsider your recommendation.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the response. I appreciate the new 2:4 results (among others) given the limited rebuttal timeframe. I will raise my score.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer snMo,
Thank you very much for acknowledging our responses. We are pleased to see that you have raised your score.
We greatly appreciate your constructive comments, as well as the time and patience you have dedicated to this review.
Best regards,
Authors | Summary: This paper presents SparseLLM, a framework to prune large language models by decomposing the global pruning objective into multiple sub-problems, each of which can be solved with low resources and which, taken together, solve the global pruning objective. The method reformulates LLMs as a chain of modular functions and uses auxiliary variables to enable problem decomposition. Empirically, SparseLLM shows consistent improvements over local pruning methods, especially in high-sparsity regimes.
Strengths: 1. The paper introduces a novel method to address the limitations of both global and local pruning for large language models.
2. The proposed approach is well-grounded in theory with clear mathematical foundations.
Weaknesses: 1. The experiments are a bit weak in model choice. Older models like OPT and Llama-2 are chosen, when many new and better-performing models have come out, like Mistral, Gemma, and even Llama 3 (which came out in April).
2. Experiments based on lower sparsity levels are also missing, I would like to see the comparison and time computations at 10/20/50% sparsity as well.
3. The dense baseline (0% sparsity) is missing from all tables. It's important to gauge the effectiveness of the proposed method.
4. For small models, it seems that SparseLLM performs on par with SparseGPT, with the added computational complexity.
5. Ablations on the effectiveness of the hyperparameters $\alpha$ and $\beta$ are missing.
Technical Quality: 3
Clarity: 3
Questions for Authors: I've asked most of my questions in the weaknesses section above.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations section is a bit vague and only discusses the requirement for certain structural properties of the network to be satisfied in order to demonstrate the effectiveness of SparseLLM. I would like to add that this does not affect my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer FMFD,
We sincerely appreciate that you found our paper and method interesting with solid results. Please refer to our response below for details:
> *"The experiments are a bit weak in model choice. Older models like OPT and Llama-2 are chosen, when many new and better-performing models have come out like Mistral, Gemma, and even Llama 3 (which came out in April)."*
**A1.**
We have performed additional experiments on the LLaMA-3 8B model and presented the perplexity results in Table 1 of our one-page PDF. Our results demonstrate that *SparseLLM* achieves competitive performance on the latest LLaMA-3 model, confirming its applicability to state-of-the-art models.
> *"Experiments based on lower sparsity levels are also missing, I would like to see the comparison and time computations at 10/20/50% sparsity as well."*
**A2.**
We provide the perplexity and computation time results for 10/20/50% sparsity on OPT-1.3b and OPT-2.7b in Table 2 and Table 3 of our one-page PDF. We see similar perplexity results for all four methods at 10% and 20% sparsity, as naive magnitude pruning can achieve pretty good perplexity in this regime. However, we start to see improvements in perplexity from *SparseLLM* at 50% sparsity, and the improvements become more significant at even higher sparsity levels, as shown in the original manuscript.
> *"The dense baseline (0% sparsity) is missing from all tables. It's important to gauge the effectiveness of the proposed method."*
**A3.**
The performance of the dense baselines was provided in our original manuscript. In Table 1 (of the original manuscript), it is placed in parentheses after each LLM's name. In Table 2 (of the original manuscript), it is marked as "Dense" below the row of dataset names. We will modify our paper to improve its readability in the future.
> *"For small models, it seems that SparseLLM performs on par with SparseGPT, with the added computational complexity."*
**A4.**
*SparseLLM* consistently achieves competitive results for small models (e.g., OPT-125m, OPT-1.3b, and OPT-2.7b), significantly decreasing perplexity by up to 80% compared to SparseGPT and Wanda. Notable improvements include OPT-2.7b at 90% sparsity: *SparseLLM* achieves a perplexity of 759.11 versus SparseGPT's 2285.01 for PTB, and 527.70 versus 1776.08 for C4, representing over **60%** improvements in both cases. For OPT-125m at 80% sparsity for C4, *SparseLLM* achieves a perplexity of 654.54 versus SparseGPT's 1050.83, representing over **40%** improvement.
*SparseLLM* is a generic framework, with mathematical proof that both local pruning and global pruning are special cases. By flexibly switching between these extremes, the computational complexity of *SparseLLM* can be the same as local pruning.
> *"Ablations on the effectiveness of the hyperparameters $\alpha$ and $\beta$ are missing."*
**A5.**
We present the ablation studies for the hyperparameters $\alpha$ and $\beta$ in Table 4 of our one-page PDF. Specifically, we consider the model OPT-1.3b with 70% sparsity on the WikiText2 dataset. We vary the values of $\alpha$ and $\beta$ over the set {$0.01, 0.1, 1, 5, 10, 100$} and compare the resulting perplexity. In the table, we use "-" to indicate instances of numerical instability. The best combination of $\alpha$ and $\beta$ we found is $(0.1, 0.1)$, while the perplexity is in general insensitive to the choice of $\alpha$ and $\beta$. This is a good sign: it means our method is robust to the choice of hyperparameters.
Rebuttal: Dear Reviewers,
We sincerely thank all your professional and constructive comments, especially given the time and workload for this year's NeurIPS. We have provided detailed responses to each individual comment and hope we have addressed all your concerns. Below is a brief summary of the new experimental results added during the rebuttal period:
- Results for LLaMA-3 8B
- Low sparsity regime results: 10%, 20%, and 50% sparsity for all pruning methods
- Ablation study on the hyperparameters $\alpha$ and $\beta$
- 2:4 sparsity results
**All newly added results have been compiled into our one-page PDF, which we invite you to review for further details.**
Sincerely,
The Authors
Pdf: /pdf/19fa13db6809ce38e6cd3dfba6bb85a7c32e4668.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Implicit Regularization Paths of Weighted Neural Representations | Accept (poster) | Summary: Neural networks are a powerful tool for extracting features from data, but these features can be very high-dimensional. This high dimensionality can be a bottleneck for training machine learning models, requiring substantial computational power and memory. This manuscript investigates implicit regularization through subsampling and draws a connection to weighted regression problems. The authors provide theoretical results for the claimed connection between implicit regularization via subsampling and weighted regression, along with supporting numerical evidence.
Strengths: The manuscript is well-written and provides an interesting insight into the connection between subsampling approaches and explicit regularization.
Weaknesses: Some prior works have drawn connections between subsampling and iterative least squares approaches; see Slagel et al., Sampled Tikhonov Regularization for Large Linear Inverse Problems.
Technical Quality: 3
Clarity: 3
Questions for Authors: see weaknesses.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for their time and valuable feedback and acknowledging the strengths of our paper!
Thanks also for a nice reference pointer!
Below, we expand more on this reference and connections to our work.
**Weakness** and **Question**
- **[W1] and [Q1] Related work on iterative least squares.**
We thank the reviewer for pointing out this great relevant reference about the relationship between subsampling and iterative least squares approaches. We will definitely add the missing reference to the related work in our revision.
Compared to the work by [SCC+19], there are several differences (and some improvements) that our work offers:
- *Problem setup.* [SCC+19] show the equivalence between the iterates along the (randomly) subsampled iterative methods and full Tikhonov regularized estimator (also known as generalized ridge regression estimator). On the other hand, we show equivalence between the subsampled (and ensemble) weighted ridge estimators and ridge(less) estimators on full data (at a different regularization). It is definitely related but these are somewhat complementary directions.
- *Regime of interest.* [SCC+19] implicitly only analyze an underparameterized regime (where the sample size $n$ is required to be larger than the feature dimension $p$). This is because the data matrix is required to have full column rank, which will only happen in the underparameterized regime. Moreover, they only consider the target ridge estimator with $\lambda > 0$. In comparison, our paper considers both underparameterized and overparameterized regimes (allowing for both $n > p$ and $n < p$). Furthermore, we also allow the target ridge with $\lambda = 0$ (also known as ridgeless) and potentially negative $\lambda$ as well.
- *Sampling strategies.* [SCC+19] only consider random subsampling matrices, which are akin to orthogonal matrices (in expectation). In comparison, our work allows for general weighting matrices that we assume to be asymptotically free from the data matrices.
- *Data model.* [SCC+19] assume a well-specified linear model. Our work does not assume any specific model for the response and only require finite average response energy (in the limit).
- *Types of equivalences.* [SCC+19] show the equivalence of the estimators in expectation (over the randomness in subsampling). On the other hand, we show asymptotic equivalences for both the estimators and quadratic risks that hold almost surely (over the randomness in weight and data matrices). In addition, we also establish entire (data-dependent) paths of equivalences for the weighted ensembled ridge estimators.
**Reference**
- [SCC+19] Slagel J. T., Chung, J., Chung M., Kozak D., and Tenorio L. Sampled Tikhonov regularization for large linear inverse problems. Inverse Problems, 2019.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The authors have addressed some of my concerns and I have slightly increased my rating.
---
Reply to Comment 1.1.1:
Comment: Thanks, we are happy that our response addressed your concerns! Thanks again for the reference pointer! We will definitely mention this and the comparison in our revision. | Summary: This paper studies the weighted linear regression problem where the feature matrix is left-multiplied by a random matrix $\mathbf{W}$ denoting the sample weighting. Under the assumption of asymptotic freeness between this weighting matrix and the feature matrix, the paper shows that the weighted ridge regression estimator is equivalent to a simple ridge regression estimator in the limit of infinite samples, by establishing a connection between the regularization strengths of the weighted and the simple ridge regression. Based on this regularization path, the paper gives illustrative examples of concrete weighting and feature matrices. Moreover, the paper also shows an equivalence between the risks of ensemble weighted and simple ridge regression under the same regularization path. Based on this result, the paper derives an optimal sub-sample size for ensemble training. The theoretical results of the paper are accompanied by experimental verification.
Strengths: 1. The paper considers a general scheme where $\mathbf{W}$ can be any random weighting matrix, which can generalize beyond the sub-sampling scenario. The assumption on the asymptotic freeness between the weighting matrix and the feature matrix is quite relaxed.
2. The paper derives an exact relationship between the regularization of the weighted ridge regression and that of the simple ridge regression in the limit of $n\rightarrow\infty$. This relationship is verified by the experiments presented in the paper.
3. The paper also derives the equivalence between risks of the ensemble weighted ridge regression and the simple ridge regression. The equivalence demonstrates a bias-variance trade-off and also validates the benefit of using a larger ensemble size.
4. Although the results are based on sophisticated mathematical notions, the paper uses intuitive explanations to make them comprehensible to a broader audience.
Weaknesses: 1. The results derived in the paper have limited applicability. In particular, although it is interesting to know the equivalence between the weighted ridge regression and the simple ridge regression, the regularization path does not tell much about how we should choose a weighting matrix, even assuming the optimal ridge regularization is known. Moreover, the optimal sub-sample size in Proposition 7 involves computing $\mu^*$, which cannot be done easily. Although the paper discusses a way to measure $\mu^*$, it is still computationally heavy, and even in the case where $\mu^*$ is known, one might not be able to measure the degrees of freedom easily.
2. The paper considers an asymptotic regime where the sample size goes to infinity. The results in the paper will have better implications if a dependency on $n$ is given.
3. The paper's result is derived under the ridge regression setting. Although one can always choose $\Phi$ to be the output of some pre-trained neural network, the results barely connect to any property of neural network training.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Are there any examples in real-world applications where the weighting matrix is not a diagonal matrix?
2. Based on the theoretical results in the paper, what would be the benefit of using a sub-sampling matrix with non-binary diagonal entries?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has a good discussion of its limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for taking the time to review our paper and providing valuable suggestions!
Thank you also for the nice questions!
We appreciate all the feedback and have addressed the weaknesses and questions below.
**Weaknesses**
- **[W1] Practical applicability.** Both Theorems 1 and 2 provide "data-dependent" paths through path (2) and path (4), respectively. By fixing target regularization $\mu$, one can compute the degrees of freedom of the full ridge estimator using training data (see line 149). This allows us to numerically solve for the path since the degrees of freedom of the weighted ridge estimator can also be computed using the same formula.
Statisticians have two hyperparameter choices: the type of weighting and the regularization level. We offer a method to tune these in Section 4.2, specifically for subsampling-based weighting, which can be generalized to other parameterized weighting schemes. The oracle optimal subsample size in Proposition 7 may include unknown parameters like $\mu^*$, but for practical purposes, knowing these parameters is unnecessary.
The equivalence implies one can fix a small regularization level and adjust the subsample size. The method in Section 4.2 leverages this, providing a data-driven way to tune the optimal subsample size. Although it requires fitting multiple models, these can be computed independently and distributed, reducing computational time. In Section 4.3, we apply this approach to real-world datasets (e.g., Figure 4) to predict explicit regularization matching the implicit regularization from subsampling.
When $\mu^*$ is known, the situation simplifies. We can compute the degrees of freedom of the full ridge estimator at $\mu^*$ and use the data-dependent path to find various weights and regularization levels that yield the optimal estimator.
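As a rough numerical sketch of how such a data-dependent path can be solved in practice (with made-up dimensions, a stand-in Gaussian feature matrix, and a degrees-of-freedom normalization that may differ from the paper's convention), one can match degrees of freedom between the full and subsampled ridge estimators by bisection:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 400, 100, 200            # full sample size, features, subsample size
Phi = rng.standard_normal((n, p))  # stand-in feature matrix

def dof(features, lam):
    """Effective degrees of freedom tr[G (G + lam I)^{-1}] with
    G = F^T F / m; a sketch of the quantity matched along the path."""
    m = features.shape[0]
    G = features.T @ features / m
    return np.trace(G @ np.linalg.inv(G + lam * np.eye(G.shape[0])))

mu = 0.5                           # target regularization on the full data
target = dof(Phi, mu)

# bisect in lam so the subsampled ridge matches the full ridge's dof
sub = Phi[rng.choice(n, size=k, replace=False)]
lo, hi = 1e-8, 1e4                 # dof(lo) > target > dof(hi)
for _ in range(200):
    mid = np.sqrt(lo * hi)         # geometric bisection
    lo, hi = (mid, hi) if dof(sub, mid) > target else (lo, mid)
lam = np.sqrt(lo * hi)             # matched regularization for the subsample
```

Here `lam` comes out below `mu`, reflecting that the subsampling itself supplies part of the regularization, so less explicit ridge penalty is needed on the smaller dataset.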
- **[W2] Asymptotic analysis.** We examine proportional asymptotic regimes where the sample size $n$, feature dimension $p$, and subsample size $k$ all tend to infinity, but their ratios $p/n$ and $p/k$ converge to constants. In this analysis, both data and weighting matrices are indexed by $n$, a common approach in random matrix theory. This asymptotic method simplifies proofs and highlights essential problem characteristics under minimal assumptions.
Extending our results to finite samples is possible but requires additional assumptions, as precise error bounds depend on the specifics of feature and response distributions. For subsampling weights, techniques from [KY17], [L22], [CM22], and [HX23] may yield non-asymptotic versions of some statements in our paper. However, for general weights satisfying the asymptotic freeness assumption, obtaining non-asymptotic results is challenging. This remains an active area of research in free probability theory.
- **[W3] Relation to neural network training.** Our results are agnostic to the specifics of how the neural network was trained, which proves advantageous. Other theoretical analyses link the generalization error of trained models to network properties and data distribution parameters (e.g., [JGH18], [AP20], [BMR21], [MM22]). For example, in random features regression, similar to a two-layer network with random first-layer weights, the generalization error depends on the distributional properties of the weights and the activation function. Although these analyses offer insights into how network properties affect generalization error, the risk expressions become complicated with additional layers.
Our work is different in style from these works. In particular, we do not analyze the risk of either the full model or the subsampled models in isolation. Instead, we relate these two sets of models, allowing us to maintain weak assumptions about the features. Note that it is possible to combine our equivalence results with the aforementioned line of work and this gives a way to port the insights from one model to other equivalent models.
**Questions**
Sincere thanks for the great questions!
- **[Q1] Non-diagonal weighting matrix.** Observation sketching involves taking random linear combinations of the rows of the data matrix, resulting in a non-diagonal weighting matrix. This technique can be useful for privacy, scrambling identifiable information, or mitigating the effects of non-i.i.d. data in time series or spatial data. We will discuss the non-diagonal weighting matrix in our revision.
- **[Q2] Non-binary diagonal weighting matrix.** Even with subsampling, a non-binary diagonal weighting matrix is possible. For instance, sampling with replacement or a specific distribution results in non-binary diagonal weighting matrices, as illustrated in Figures 6 and 7 of the supplement. Other examples include inverse-variance weighting to address heterogeneous variations when responses have different variances for different units.
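As a small sketch of how sampling with replacement yields a non-binary diagonal weighting (the sqrt-count weight convention here is illustrative; the paper's weighting may be normalized differently):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 8, 8

# subsample WITH replacement: c[i] counts how often unit i is drawn,
# and unit i receives weight sqrt(c[i]) on the diagonal of W
c = np.bincount(rng.integers(0, n, size=k), minlength=n)
W = np.diag(np.sqrt(c.astype(float)))   # non-binary diagonal weighting

# the weighted Gram matrix equals the with-replacement resample Gram:
# (W Phi)^T (W Phi) = sum_i c[i] * phi_i phi_i^T
Phi = rng.standard_normal((n, 3))
gram_weighted = (W @ Phi).T @ (W @ Phi)
gram_resample = sum(c[i] * np.outer(Phi[i], Phi[i]) for i in range(n))
```

Units drawn more than once get diagonal entries larger than one, so the weighting matrix is diagonal but not binary.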
**References**
- [KY17] Antti Knowles and Jun Yin. Anisotropic local laws for random matrices. Probability Theory and Related Fields, 169:257–352, 2017
- [JGH18] Jacot, A., Gabriel, F., and Hongler, C. Neural tangent kernel: Convergence and generalization in neural networks. In NeurIPS, 2018.
- [AP20] Adlam, B. and Pennington, J. The neural tangent kernel in high dimensions: Triple descent and a multi-scale theory of generalization. In ICML, 2020.
- [BMR21] Bartlett, P., Montanari, A., and Rakhlin, A. Deep learning: A statistical viewpoint. Acta Numerica, 2021.
- [L22] Cosme Louart. Sharp bounds for the concentration of the resolvent in convex concentration settings. arXiv:2201.00284, 2022
- [CM22] Chen Cheng and Andrea Montanari. Dimension free ridge regression. arXiv:2210.08571, 2022.
- [MM22] Mei, S. and Montanari, A. The generalization error of random features regression: Precise asymptotics and the double descent curve. In Communications on Pure and Applied Mathematics, 2022.
- [HX23] Han, Qiyang, and Xiaocong Xu. The distribution of Ridgeless least squares interpolators. arXiv preprint arXiv:2307.02044, 2023.
---
Rebuttal Comment 1.1:
Title: Response to the Author's Rebuttal
Comment: Thank you for your response. The authors response to [W1] and [Q1] further convinced me about the applicability of the results. Moreover, the author also discussed how to extend the result to finite $n$ in the response to [W2]. Therefore, I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Great, thanks! We are happy that you found our response helpful and convincing! Thanks again for all the feedback, which we will definitely use in our revision. | Summary: This paper investigates the implicit regularization effects of weighted pretrained features. It establishes a path of equivalence between different weighting matrices and ridge regularization with matching effective degrees of freedom. The study extends results to structured features and ensembles, providing theoretical validation for random and kernel features, and proposes an efficient cross-validation method to finetune subsampled pretrained representations in practice.
Strengths: - Table 1 provides a clear overview of the previous works, along with the contributions and organization of this paper.
- The theoretical results are comprehensive and strong in terms of relaxed assumptions on the features and weights.
- The theoretical results bring insights into designing a practical cross-validation algorithm, whose effectiveness is supported by experiments.
Weaknesses: - While Table 1 and the summary of results provide a relatively clear view of the theoretical contributions, the abstract and the motivation part of the introduction are somewhat confusing. For example, the "path of equivalence" is repeatedly mentioned in the abstract and introduction as one of the main contributions, but without a clear explanation/intuition of what it means until later.
- The problem setup and notations are not clearly stated before being used. For example, in terms of setup, in Assumption A, it's unclear what the convergence refers to in "$W^\top W$ and $\Phi \Phi^\top/n$ converge almost surely to bounded operators". In terms of notations, $\|\cdot\|_{tr}$ and $\overline{tr}(\cdot)$ are used without clear definitions for reference.
- Overall, the layout of the results is a little bit hard to follow, partially due to the lack of clear problem setup and notations mentioned before, and partially due to presuming prior knowledge of specific technical tools like free probability theory.
Technical Quality: 2
Clarity: 1
Questions for Authors: - In Definition A, what's the precise definition of $\overline{tr}(\cdot)$? What does it mean by saying "$W^\top W$ and $\Phi \Phi^\top/n$ converge almost surely to bounded operators"? If it refers to the convergence as $n \to \infty$, then does it mean the analysis is conducted in the asymptotic regime?
- In line 196, if (5) implies $\lambda < \mu$, isn't the regularization level $\lambda$ of the subsampled estimator lower (instead of higher) than that of the full estimator $\mu$?
- It's somehow counter-intuitive that the equivalence path results depend only on the subsample fraction $k/n$ under the "general" data assumption. Common analyses in the asymptotic regime reduce data properties to the subsample ratio $k/n$ usually based on strong assumptions on the data distribution like Gaussian, while data-agnostic subsampling on adversarial data should bring inevitable compromise compared to full data (e.g., with a direction that is only learnable through a single data point). It seems that the "infinitesimally freeness" in Assumption A implicitly circumvents this issue, but it's unclear in the current discussion. For example:
- Whether Assumption A enforces a "nice" data distribution, a "good" subsampling strategy, or both?
- If Assumption A implies a "good" subsampling strategy, can such subsampling matrix $W$ be efficiently constructed in practice?
Confidence: 3
Soundness: 2
Presentation: 1
Contribution: 3
Limitations: Limitations are well-discussed in the Conclusion. Some further limitations are mentioned in Weaknesses and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the encouragement and comments! While we appreciate all the feedback, we believe that the main concerns raised relate to clarity of exposition and can be addressed easily. Below, we address all the weaknesses and questions on a point-by-point basis.
We respectfully request that the reviewer reconsider the scores based on the technical contributions of the paper, which we believe are significant.
**Weaknesses**
- **[W1] Explanation/intuition of main contributions.**
A path of equivalence refers to a data-dependent set of weighted ridge estimators $(\mathbf{W}, \lambda)$ that connect to the unweighted ridge estimator $(\mathbf{I}, \mu)$; the path is defined in terms of "matching" effective degrees of freedom of the component estimators in the set (see lines 161-163). This path allows us to relate the performance and properties of models fitted on full datasets to those fitted on subsampled datasets.
In the Abstract (lines 7-9), we refer to it as "a path of equivalence connecting different weighting matrices and ridge regularization with matching effective degrees of freedom." This is to keep the abstract within the recommended 4-6 lines. In the introduction (lines 39-41), we explain mathematically what a path means. Based on your feedback, we will add a line in the Abstract to even more clearly state what a path of equivalence means.
- **[W2] Problem setup.**
In terms of problem setup, we explain the meaning of ``converge almost surely to bounded operators'' below. In free probability theory and random matrix theory, matrices as linear transformations are viewed as operators. A bounded operator is a linear transformation on a vector space where the magnitude of the output vector is bounded by a constant multiple of the magnitude of the input vector. In matrix terms, this means that the eigenvalues of the matrix are bounded. For us, this means that the maximum eigenvalues of the matrices $\mathbf{W}^\top \mathbf{W}$ and $\mathbf{\Phi} \mathbf{\Phi}^\top / n$ are almost surely bounded in the limit as $n \to \infty$. This is a standard assumption in free probability theory and keeps the estimators and their risks bounded in the limit.
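To make the definition above concrete, here is a small numerical sketch (our own illustration, not from the paper) of the bounded-operator condition: a matrix $M$ is bounded with constant $C$ iff $\|Mx\| \le C\|x\|$ for all $x$, and the smallest such $C$ is the largest singular value of $M$.

```python
import numpy as np

# A matrix M is a bounded operator with constant C iff ||M x|| <= C ||x||
# for every x; the smallest such C is the largest singular value of M.
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 80))
C = np.linalg.svd(M, compute_uv=False).max()  # operator norm of M

for _ in range(100):
    x = rng.standard_normal(80)
    assert np.linalg.norm(M @ x) <= C * np.linalg.norm(x) + 1e-9

print("||Mx|| <= C||x|| holds with C = largest singular value")
```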
- **[W2] Notational clarity.**
All the important notation in the paper is well-defined and exhaustive. We have an entire subsection of the Appendix on this (lines 552-580). Not all of this notation is in the main paper because of the 9-page limit, but note that the meaning of $\overline{tr}(\cdot)$ is explained in the main paper (lines 130-131). The notation of $tr(\cdot)$, $\overline{tr}(\cdot)$, and $\|\cdot\|_{tr}$ is defined on lines 572-573 and 577. This notation is standard in the fields of random matrix theory and free probability theory.
While we understand that not all readers will be familiar with these fields, it is not feasible for us to include every notation and definition in the main paper due to page limits, and hence, we added this in the Appendix. Based on your feedback, we will include a paragraph on notation in the main paper in our revision.
- **[W3] Layout and accessibility of results.**
The Summary of Results (line 55-77) outlines the paper's contents: Section 2 covers preliminaries, Section 3 discusses implicit regularization paths and examples of weight and feature matrices, and Section 4 addresses prediction risk asymptotics, risk estimation, optimal tuning, and validation on real-world datasets. Section 5 concludes with limitations and outlook. We will add a separate organization section in the revised version.
To aid readers unfamiliar with random matrix and free probability theories, Appendix A provides a self-contained background, including:
- Appendix A.1: Basics of free probability theory.
- Appendix A.2: Useful transforms and their relationships.
- Appendix A.3: Asymptotic ridge resolvents.
Advanced tools are necessary for such general results, and we try to make them accessible with Appendix A. We welcome additional suggestions from reviewers but we consider the use of tools from other fields in machine learning as a strength rather than a weakness.
**Questions**
- **[Q1] Definitions.** The definition of $\overline{tr}$ is given in the notation paragraph (line 577) of the Appendix. See also our response to [W2]. The analysis in the current paper is indeed conducted in the asymptotic regime, in which the sequence of random matrices (with possibly growing dimensions) is indexed by $n$ (see lines 151-156).
- **[Q2] Comparing regularization strength of subsample and full estimators.**
We thank the reviewer for pointing out the typo in wording. Indeed, the regularization level $\lambda$ of the subsampled estimator is lower than that of the full estimator $\mu$. We will correct this in the revised version.
- **[Q3] Equivalence path results.** Thanks, this is a great question!
Common analyses in the asymptotic regime often rely on strong assumptions like Gaussian distribution, while our approach is more general. A key result is that equivalence paths are characterized by degrees of freedom.
- Assumption A does not require a "nice" *marginal* data and weighting distribution. But it relies on the "infinitesimal freeness" of the (data, weight) pair. Intuitively, for subsampling, it means that the pair needs to be "sufficiently" independent (in the limit). In particular, adversarial subsampling may not be permitted as such.
- Yes. *Independent* random subsampling is easy to construct. In Section 3.2, we construct various examples that satisfy Assumption A. In general, it is also empirically satisfied on real data (as in Figure 3 and Appendix 3.2).
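As a toy illustration of such a data-agnostic construction (our own sketch; diagonal i.i.d. Bernoulli($k/n$) sampling is one standard choice, not necessarily the paper's exact example), an independent random subsampling matrix $\mathbf{W}$ can be built without looking at the data:

```python
import numpy as np

# Independent random subsampling: W is a diagonal 0/1 matrix whose entries
# are drawn without reference to the data, e.g. i.i.d. Bernoulli(k/n).
rng = np.random.default_rng(1)
n, k = 1000, 300

mask = rng.binomial(1, k / n, size=n)  # which rows survive
W = np.diag(mask.astype(float))        # data-agnostic weighting matrix

X = rng.standard_normal((n, 10))       # data generated independently of W
Xs = W @ X                             # weighted (subsampled) design

assert np.count_nonzero(Xs.any(axis=1)) == mask.sum()
print(f"kept {mask.sum()} of {n} rows (expected about {k})")
```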
Given the significant technical contributions and the ease of addressing the reviewer's concerns, we kindly request the reviewer to reconsider the score. We will improve the presentation of the paper based on the feedback. We believe the technical contributions and practical relevance of our work merit a more favorable evaluation.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for the detailed response. Assuming a better presentation, I will increase my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thanks, we appreciate it! And yes, we will definitely improve on the presentation as indicated in our response above. | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for taking the time to review our paper. We appreciate the constructive comments and valuable feedback.
All three reviewers have acknowledged several key strengths of our paper.
- Reviewer **cp5Y** liked the clear overview of previous works provided in Table 1, the comprehensive and strong theoretical results with relaxed assumptions, and the practical insights from our theory that lead to an effective cross-validation algorithm.
- Reviewer **nM8u** highlighted the generality of our results, the relaxed assumptions on asymptotic freeness that our results require, the exact relationship established between the implicit regularization effects, and the comprehensible presentation despite sophisticated mathematical theory.
- Reviewer **HnNV** appreciated the well-written manuscript, the interesting insights into the connection between subsampling approaches and explicit regularization, and the theoretical (for a variety of standard feature models) and numerical (on a variety of datasets) evidence supporting our investigations.
The reviewers have also pointed out potential weaknesses and places for improvements in the paper.
- Reviewer **cp5Y** mentioned some confusion regarding the notation, terminology, and layout of the paper.
- Reviewer **nM8u** noted the focus on the asymptotic regime.
- Reviewer **HnNV** suggested additional related work on subsampling and iterative least squares approaches.
We have carefully addressed each reviewer's questions and points of weakness in a point-by-point manner below. To improve readability and for space reasons, we have divided our responses into different posts. We apologize for the length of our response, but we provide thorough and comprehensive answers to all the reviewer's questions and concerns.
In particular, we have tried to clear some misunderstandings of Reviewer **cp5Y** with regard to notation, terminology, and organization. We briefly describe this below for reviewers' convenience.
- **Notation**: The reviewer may have missed this but we actually have a dedicated notation section in the Appendix (lines 552-580) that summarizes all the key notations used in the paper. Some of the important notation (such as average trace) is also mentioned in the main paper where it is used (lines 130-131). We understand that the paper is notation-heavy, and the notation may seem foreign for readers not familiar with random matrix theory and free probability theory, but the notation we use is standard in these fields.
- **Terminology**: We understand that some of the terminology in the paper may be specific to the literature in this area. But note that we define the term "paths of equivalence" in the Introduction (lines 38-41) and intuitively explain this in the Abstract (lines 7-9). We will try to explain this in even more detail in our revision.
- **Organization**: With regards to the layout, we have a section-wise summary of results in the paper in the Summary of Results paragraph (lines 55-77). We will add an explicit organization section in our revision (with the additional page provided in the final version).
We have kindly asked Reviewer **cp5Y** to reconsider the score given after clearing these misunderstandings.
We believe that our response comprehensively addresses the concerns raised by the reviewers. We are open to any further feedback and look forward to subsequent comments from the reviewers. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Private Stochastic Convex Optimization with Heavy Tails: Near-Optimality from Simple Reductions | Accept (poster) | Summary: The paper investigates the problem of differentially private stochastic convex optimization (SCO) under the heavy-tailed setting and achieves a nearly optimal rate of $G_2 \cdot \frac{1}{\sqrt{n}}+G_k\left(\frac{\sqrt{d}}{n \varepsilon}\right)^{1-\frac{1}{k}}$. Specifically, it first provides results using Clipped-DP-SGD in the differentially private empirical risk minimization (DP-ERM) framework and then utilizes generalization techniques to offer similar results in the population case. Finally, it explores the heavy-tailed DPSCO.
Strengths: 1. The paper provides a nearly optimal bound for DPSCO in the heavy-tailed setting.
Weaknesses: 1. Section 3 is not very clear. The connection between Sections 3.1 and 3.2 is not well explained.
2. The presentation needs improvement. One of the most important parts, Population-level Localization, is placed on the last page and is only briefly introduced and discussed. For example, how to choose parameters like $\lambda$ in Algorithm 2 and what $\Delta 4^i$ in Equation 8 represents should be explained.
Technical Quality: 3
Clarity: 1
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 1
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your reviewing efforts and feedback.
Section 3: We apologize for the lack of clarity in Section 3. In Section 3.1, we propose Algorithm 1, which achieves good performance on an empirical loss assuming the dataset satisfies a property (bounded $b_{\mathcal{D}}$, see (6)). In Section 3.2, we show we can use Algorithm 1 to get a solution with respect to the population function if the dataset is drawn from a heavy-tailed distribution. We included a description of this relationship at the start of Section 3 (Lines 226-235), but will include more connective tissue between Sections 3.1 and 3.2.
Presentation: Due to page limitations, we did not add many details about the population-level localization in the main paper (some details were deferred to the appendix). We will revise our final version and add more details and intuitions to clarify it. Currently, a short description is given in Lines 85-92, and full proof of its guarantees is given in Proposition 2 in the main body. We provided additional intuition of why we developed this technical tool and its comparison to prior work in our response to Reviewer KReZ, and will incorporate this response into our revision. For your other questions, $\lambda$ is a hyperparameter that we optimize in Line 295 to minimize the expression on the previous Line 294, and $\Delta$ is a scaling of the objective error defined in Line 301.
We hope this discussion was clarifying, addressed some of your concerns, and elevated the merits of our paper in your view.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the clarification. I look forward to seeing more discussion in the revised paper. I've increased my score. | Summary: This paper addresses differentially private stochastic convex optimization (DP-SCO) with heavy-tailed gradients, where previous assumptions of uniform Lipschitz constants are relaxed to bounded k-th moments. The authors introduce a new reduction-based framework that adapts strategies from the uniform Lipschitz setting, enabling optimal rates up to logarithmic factors under (ε,δ)-approximate differential privacy. They propose several algorithms, including an optimal algorithm for known Lipschitz constants, a near-linear time algorithm for smooth functions, and an optimal linear-time algorithm for smooth generalized linear models. A novel population-level localization framework is also presented, overcoming technical barriers and providing robust bounds on excess population loss without stringent gradient moment assumptions. This work advances the theoretical and practical understanding of DP-SCO with heavy-tailed gradients, outperforming previous approaches in handling real-world data challenges.
Strengths: 1. The paper introduces a novel reduction-based framework for handling heavy-tailed gradients in differentially private stochastic convex optimization (DP-SCO). This innovative approach enables the achievement of near-optimal rates and overcomes limitations of previous methods, marking a significant advancement in the field.
2. The proposed method bypasses the need for bounding $\mathbb{E}b_{\mathcal{D}}^2$
Weaknesses: 1. Referring to Corollary 2 in Line 4 of Algorithm 3 reduces readability. To enhance clarity, it would be beneficial to specify the algorithm $\mathcal{A}$ (from Corollary 2) clearly, at least within the proof. While leaving the optimization oracle as a black box might help generalize the framework, it makes the algorithm harder to follow if the instantiations are not highlighted.
2. The motivation of the population-level localization could be further clarified.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. To solve population-level localization, could one use the same (or slightly adjusted) method for the localization of the empirical minimizer?
2. How does the sample split parameter $J$ affect the utility of Algorithm 3?
3. Should $G_2^2$ and $G_k^k$ on line 7 be $G_2$ and $G_k$?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading; we address your questions here.
Re: clarity, we agree with your suggestion, and will add a description of the algorithm and the specific guarantees we are using about it to help the reader. Thanks for pointing this out.
Re: population-level localization, we will add an additional description after Line 85. The motivation for population-level localization is that we wish to aggregate empirical solutions to multiple datasets, some of which satisfy assumptions of Corollary 1 (i.e., have small $b_{\mathcal{D}}$), and some of which do not. However, each dataset has a different empirical minimizer, so it is unclear which to argue about convergence to. We instead aggregate solutions close to a population-level minimizer, shared across datasets. Previous localization frameworks could not handle the setting we consider, as described in Section 1.2, prompting our new development. In particular, previous localization frameworks used non-strongly convex losses, which have known difficulties in generalizing to population-level objectives [SSSS09].
Q1: Our population-level localization is inspired by and closely related to frameworks used to minimize an empirical loss. Both rely on stochastic access to the input, but a key difference is that we cannot verify our empirical dataset actually satisfies a key assumption used to get the optimal rate (bounded $b_{\mathcal{D}}$), so the empirical solution could be meaningless. This is why we target population-level objectives, which can aggregate multiple datasets.
Q2: Suppose that we can get $x_{i,j}$ such that $\|x_{i,j}-x_i\|\le R(i,J)$ with probability at least 0.55. Then $J$ cannot be too small, so that we can use a Chernoff bound to show at least half of the $x_{i,j}, j\in [J]$ are close to $x_i$. On the other hand, the larger $J$ is, the smaller the privacy budget we can use to generate each $x_{i,j}$, since we need to compose over $J$ instances, which hurts our final utility bound. Hence $J$ cannot be too small or too large.
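The boosting step described above (each instance succeeds with probability only 0.55, then a majority is aggregated) can be illustrated with a small sketch; the aggregation rule and all constants here are a generic illustration, not the paper's exact procedure. We deterministically plant slightly more than half good candidates to mimic the Chernoff-bound event:

```python
import numpy as np

# Each of J candidates lands within R of the target only ~55% of the time,
# but once a majority are good, any candidate within 2R of at least half
# the others must be within 3R of the target.
rng = np.random.default_rng(2)
target, R, J = 5.0, 1.0, 41
n_good = 23  # a bit more than J/2, mimicking per-instance success prob 0.55

cand = np.concatenate([
    target + rng.uniform(-R, R, n_good),        # good: within R of target
    target + rng.uniform(10, 20, J - n_good)])  # bad: far from target

def aggregate(cand, R):
    # return a candidate that is within 2R of at least half of the others;
    # if a majority are good, any such candidate is within 3R of the target
    for c in cand:
        if np.sum(np.abs(cand - c) <= 2 * R) >= len(cand) / 2:
            return c
    return cand[0]

est = aggregate(cand, R)
assert abs(est - target) <= 3 * R
print(f"aggregated estimate {est:.2f} lies within 3R of the target {target}")
```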
Q3: We believe Line 7 is correct as stated. As assumed in Line 67, $G_j^j$ is the $j^{\text{th}}$ moment.
We appreciate you found our approach innovative, and that our result was a significant advancement. We hope our response was clarifying, and that it elevates the merits of our paper in your view.
---
Rebuttal 2:
Comment: Thank you to the reviewers for the clarifications. I will stay moderately positive on this paper, and keep my score. | Summary: .The paper studies DP-SGD under the assumption that the gradients have heavy-tailed phenomenon. This has been recently motivated and studied a lot by recent works. The authors claim to achieve optimal rate for this problem.
Strengths: They obtain the first optimal rates (up to logarithmic factors) in the heavy-tailed setting, achieving error that depends on $\frac{G_2}{\sqrt{n}} + G_k\left(\sqrt{d}/(n\varepsilon)\right)^{1-1/k}$ under an approximate-DP guarantee.
They additionally study this problem under well-studied assumptions: Lipschitz constant assumption, smooth convex functions, and smooth generalized linear model.
I haven't checked the proof, but the paper is definitely an accept considering it solves a problem in this domain.
Weaknesses: N/A
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your reviewing efforts. We appreciate your positive feedback. Please feel free to let us know if you have any questions or suggestions. | Summary: This paper gives three main results.
The first is a nearly optimal (losing a few logarithmic factors) excess loss rate for differentially private stochastic convex optimization (DP-SCO) when the gradient norms have $k$ bounded moments. In particular, they achieve the optimal excess loss under $\rho$-concentrated differential privacy (CDP) of
$\frac{G_2 D}{\sqrt{n}} + G_kD \cdot \left(\frac{\sqrt{d}}{n\sqrt{\rho}} \right)^{1-1/k}$
where $G_i$ denotes the uniform upper bound on the $i$th moment of the gradients and $D$ is the diameter of the domain. This improves over the prior state of the art [LR23] by a factor of $G_kD \cdot \left(\frac{\sqrt{d}}{n\sqrt{\rho}} \right)^{1/k}$. This improvement is increasingly pronounced as we decrease the number of gradient moments that we assume are bounded, which is probably the important regime in this work.
As I understand, the main technical innovation is a "population-level localization framework", through which the authors are able to control the population excess risk without having to argue about the variance of the bias term arising from gradient clipping.
The second main result is an algorithm that achieves the optimal excess risk when each sample function arrives with a known Lipschitz constant. This yields algorithms for privately learning a generalized linear model with the optimal excess risk. This follows from a clean reduction to the case where we know that our losses are uniformly Lipschitz.
The third main result is an improved query complexity for optimizing smooth functions. The algorithm follows from an application of the sparse vector technique.
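For readers less familiar with it, the sparse vector technique mentioned above can be sketched in its generic AboveThreshold form; this is the textbook primitive with standard noise calibration for sensitivity-1 queries, not the paper's specific instantiation, and the queries and epsilon below are made up:

```python
import numpy as np

rng = np.random.default_rng(5)

def above_threshold(queries, threshold, eps):
    # noisy threshold; each sensitivity-1 query gets fresh Laplace noise
    t_hat = threshold + rng.laplace(scale=2 / eps)
    for i, q in enumerate(queries):
        if q + rng.laplace(scale=4 / eps) >= t_hat:
            return i  # privately report the first above-threshold query
    return None

idx = above_threshold([0.1, 0.2, 0.3, 50.0, 0.1], threshold=10.0, eps=10.0)
print("first query flagged above threshold:", idx)  # almost surely 3 here
```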
Strengths: The algorithms seem very natural, the analysis looks clean, and the results yield quantitative improvements over prior work. Therefore, I think this work is an important contribution to private convex optimization.
The ideas are cleanly explained and at least on a surface level make sense to a complete outsider to the field (such as myself).
Weaknesses: Can't really think of anything significant. A natural criticism I anticipate is that the improvement over [LR23] is pretty minor when $k$ is large, but given that the authors get the optimal result and [LR23] doesn't, this doesn't seem like a real issue.
Technical Quality: 4
Clarity: 4
Questions for Authors: N/A
Confidence: 2
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your reviewing efforts, and we appreciate your positive feedback. Please feel free to let us know if you have any questions or suggestions. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper studies the problem of differentially private stochastic convex optimization with heavy-tailed gradients. This paper points out that in typical optimization research, the assumption of uniformly G-Lipschitz, while convenient for bounding sensitivity, does not always hold. Based on this weakness, the authors studied k-heavy-tailed DP-SCO. The author obtains near-optimal algorithms that lie in the clipped DP-SGD subroutine and ensure the private minimization of a regularized ERM problem under k-heavy-tailed DP-SCO conditions. This method yields points that closely approximate the minimizer of population loss, verified through Markov’s inequality.
Updates post response:
- I find the lack of experiments a bit underwhelming, and the authors' response didn't really convince me why this should not be part of this paper. Data privacy is a highly practical question that should be of strong interest to practitioners. The lack of practical implications is thus concerning.
- The paper does not have a conclusion section, which is very strange for a NeurIPS paper. I have been reviewing NeurIPS submissions for many years; it is really awkward for a paper to not have a conclusions section. Not having a conclusions section also means that the authors did not carefully think about the "Limitations" of their work, which is a requirement as per NeurIPS submission guidelines.
- Lastly, I wonder what are some of the broader impacts/implications of this work. It would be helpful if the authors could add some discussions about this. Again, I feel like this would require a proper "conclusions" section added to the paper.
- While reading through the paper I noticed many notational inconsistencies and concepts that have not been clearly explained (please see concrete examples under the list of questions below). I suggest that the authors carefully proofread their paper and improve how they explain their work. This will help broaden the impact of this research.
Disclaimer: Please be aware that due to the rather short reviewing time, I have not been able to check all the proof details in the appendix.
Strengths: - This paper provides a comprehensive analysis of prior work, enabling readers to quickly understand the limitations of previous research on DP-SCO and why the authors believe this work is necessary.
- This paper conducts solid theoretical research and has a profound understanding of the k-heavy-tailed DP-SCO problem. The entire paper is filled with rigorous derivations, providing a theoretical foundation for future research.
Weaknesses: - Experiments could be added to validate the proposed algorithm for readers to better understand the key takeaways.
- The paper's organization could be improved, and the contribution can be better highlighted.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Why is the k-heavy-tailed assumption more common than the uniformly G-Lipschitz assumption? Intuitively, the uniform is indeed a weaker assumption, but I would like more direct evidence to prove the necessity of using the k-heavy-tailed assumption.
- What are the comparisons between the bounds under the k-heavy-tailed assumption with those of prior work? Please comment on this
- Is it possible to provide the convergence analysis of the proposed algorithm?
- From equation (1) to equation (2), what's the difference between $G$ and $G_2$?
- In line 45, can you give a specific example of what you mean by "heavy-tailed gradients"? When might readers expect this condition to hold in practice (e.g., in the training of a neural network)?
- While citing a paper, can you please properly cite the author's name(s) using \cite{}, as opposed to things like [LR23]?
- In the introduction, can you please clarify exactly under what notion of DP your results will hold? Note: I'm seeing multiple versions of DP definitions in section 2, page 5, thus prompting this question.
- Is Theorem 1 (at the bottom of page 9) supposed the main result in your paper?
- What are some of the broader impacts/implications of your work? Please comment on this.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Limitations are discussed on page 32.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable feedback. We answer your questions below.
Experiments: The focus of our work is theoretical in nature: developing algorithms that address known gaps in the literature on DP-SCO, a problem of interest in both theory and practice. We believe this goal has significant merit in its own right, as it provides insights that lay the groundwork for future research, both theoretical and experimental. We absolutely agree with the reviewer on the complementary importance of experimental evaluation of theoretical derivations. We chose to restrict our focus, but think that transferring our insights to practice is an exciting and promising direction for future research.
Organization: We take the organization of our paper and presentation of our contributions seriously as concerns, and appreciate the feedback. We would like to kindly request that the reviewer provide some more specific feedback on our organization and presentation. Re: our contributions, all of our key results are described in Section 1.1 (with corresponding formal statements as Theorems 1-4), and compared to prior work in Section 1.2.
Q1: Indeed, the k-heavy-tailed assumption applies to more situations than the uniform Lipschitz assumption, as a distribution can be $k$-heavy-tailed but not $\infty$-heavy-tailed. However, there is a tradeoff in the gains achievable for finite $k$. Our work is the first to obtain an algorithm that gives the best possible tradeoff up to logarithmic factors.
In practice, differentially private neural networks are trained with clipped gradients (see [ACG+16]), due to outliers which create undesirable tail behavior. For more problems in practice where the $k$-heavy-tailed assumption is a more appropriate model than the uniform Lipschitz assumption, we refer the reviewer to the reference [WXDX20], which discusses how biomedicinal and financial research often use heavy-tailed modeling. More generally, power laws are heavy-tailed, and used to model many phenomena in the sciences. We will add additional discussion of this motivation; thank you for raising this point.
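The clipping mechanism referenced above can be illustrated with a minimal sketch of per-sample gradient clipping plus Gaussian noise, in the spirit of [ACG+16]; the helper name and all constants are our own illustration, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(3)

def clipped_noisy_mean(grads, clip_norm, noise_mult):
    # clip each per-sample gradient to norm <= clip_norm, sum, add noise
    clipped = []
    for g in grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / norm))
    noise = noise_mult * clip_norm * rng.standard_normal(grads[0].shape)
    return (np.sum(clipped, axis=0) + noise) / len(grads)

# heavy-tailed per-sample gradients: most small, a few huge outliers
grads = [rng.standard_normal(5) for _ in range(98)]
grads += [1e3 * rng.standard_normal(5) for _ in range(2)]

g_bar = clipped_noisy_mean(grads, clip_norm=1.0, noise_mult=0.5)
assert np.linalg.norm(g_bar) < 10  # outliers no longer dominate the mean
print("clipped, noised mean gradient:", np.round(g_bar, 3))
```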
Regarding the relative strength of these assumptions, for $k’ > k$, the $k’$-heavy-tailed assumption is stronger than the $k$-heavy-tailed assumption (with appropriate parameters), as a variable can have a finite $k$-th moment but not a finite $k'$-th moment. In this sense, as uniform Lipschitzness is an $\infty$-heavy-tailed assumption, it is the strongest.
Q2: We compared our bounds to existing work in Lines 57-61 of the submission, as well as in more depth in Section 1.3. The heavy-tailed DP-SCO problem is relatively new, and few theoretical guarantees are provided by the literature. The two most relevant works are [KLL22] and [LR23]: the former requires stringent conditions (such as uniformly smooth loss functions) to guarantee optimality, whereas the latter obtains sub-optimal rates that can be polynomially worse than ours. We close the gap left by these prior works, achieving the first near-optimal rate, as well as generalizations in many settings (smooth and/or GLM).
We were not sure if the reviewer meant how our Assumption 1 compared to previous definitions of the problem. Our problem statement is the same as in [LR23]. The problem definition is phrased slightly differently in [WXDX20, KLL22], using a (in our opinion, less natural) coordinatewise bound; we chose the definition consistent with more of the literature, e.g. previous works on private mean estimation [BD14], and uniform Lipschitz DP-SCO.
Q3: We believe all of our algorithms have complete analyses and proofs. For example, Theorem 1 is proven in Section 3 and Appendix A, with a fully specified algorithm description. We are happy to address your concerns, but we would like to again kindly request that the reviewer be specific about which missing convergence analyses they refer to.
We appreciate that you found our research solid and our descriptions comprehensive, providing a good foundation for the problem. We hope our response was clarifying, and that it elevates our contributions from your viewpoint. Thank you for your feedback again. | null | null | null | null | null | null |
BiDM: Pushing the Limit of Quantization for Diffusion Models | Accept (poster) | Summary: This paper proposes BiDM, which focuses on quantizing both weights and activations of diffusion models (DMs). Specifically, the authors introduce:
- Cross-timestep feature connection to enhance the accuracy of noise prediction in binarized DMs.
- Space-patched distillation, a novel variant of distillation loss that emphasizes spatial locality.
Strengths: - Binarization of diffusion models (DMs) represents a promising research area aimed at accelerating sampling processes, and this work appears to be the first to fully binarize DMs to W1A1 to the best of my knowledge.
- The introduction of cross-timestep information connection is innovative in enhancing the performance of binarized DMs.
- The experimental results on LSUN-Bedrooms, LSUN-Churches, and FFHQ effectively demonstrate the effectiveness of BiDM and XNOR techniques.
Weaknesses: - The experimental results lack sufficient conviction. Tables 1 and 2 compare BiDM primarily with works focused on quantizing discriminative models, which may not be the most appropriate choice of baseline. It would be more informative to include comparisons with quantization methods designed specifically for diffusion models, such as EfficientDM [cite], given its open-sourced nature and compatibility under W1A1 settings. This comparison is crucial for a comprehensive evaluation.
- Figure 4 may not effectively demonstrate the advantages of paying attention to local information. To better illustrate the effectiveness of Space-Patched Distillation (SPD), the authors should compare experimental results with those obtained using vanilla distillation loss (Mean Squared Error between outputs of a full precision model and a binarized model).
- The proposed binarization method introduces additional convolution computations between two matrices in floating-point, which may not be efficient for hardware deployment. Moreover, the dynamic calculation of matrix $A$ during inference introduces memory access overhead that could be significant in practical deployment scenarios.
- During inference, matrix multiplications (MMs) in the attention mechanism typically consume considerable time. It appears that this work retains these MMs in floating-point arithmetic, which could impact efficiency.
**minor issues**
- Line 93: The citation for BinaryDM is incorrectly linked.
- In Figure 4, the caption states "Ours denotes the self-attention," but in the figure, it's labeled as "SPD."
- The term "self-attention" in Section 3.3 is confusing; perhaps using an alternative term would be clearer.
Technical Quality: 2
Clarity: 3
Questions for Authors: - Are the Q, K, V in the attention modules binarized during inference?
- During inference, is $A$ in Equation 9 calculated from the inputs?
- In Line 224, the authors claim that "conventional $l_2$ loss makes it difficult for binary DMs to achieve direct matching." Could you provide evidence or reference existing works that have verified this claim?
- Figure 3 is somewhat confusing. Why is $L^{t-1}$ a linear scaling up of $B^{t-1}$? Is this relationship depicted by Equation 12?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: This paper includes a discussion of limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review of our work. Here are our responses to your concerns:
> Q1: The experimental ...
Thank you for your suggestion. We have added experiments and discussions on EfficientDM. You can check the Global Rebuttal (1) for more details.
> Q2: Figure.4 ...
Thank you for your suggestion. This exploration is already included in the manuscript: in Table 7, $L_2$ denotes the results using MSE loss as the distillation loss. $L_{SPD}$, with an FID of 22.74, significantly outperforms MSE loss (FID 26.07), demonstrating the suitability of SPD for DMs.
> Q3: The proposed ...
It should be noted that during inference, compared to the classic binarization work XNOR-Net, BiDM requires only a very small number of additional floating-point additions for the connections across timesteps; the majority of calculations, such as convolutions, are identical. The abstract of the original XNOR-Net paper already reports a 58× speedup in convolution operations.
We understand your concerns about efficiency, but performing a floating-point convolution with a depth of 1 for scaling factors requires only a very small amount of computation, and the overhead for averaging matrix $A$ is also minimal. Statistical results show that this portion accounts for only 0.08% of the total OPs in binarized DM. Therefore, BiDM remains efficient during inference.
> Q4: During inference ...
BiDM fully binarizes components such as convolutions and QKV, and the remaining components have a very minimal impact on computation. Specifically, the computational cost of this part accounts for only 0.38% of the total OPs in the binarized DM.
In particular, we fully binarized linear/conv1d layers with weights and MMs involving the input x in AttentionBlock, while MMs involving intermediate results (such as proj_out) were performed using floating-point operations. This decision was based on our observation that the latter does not involve weight storage, involves very little computation, and has a relatively noticeable impact on the final results.
Thus, we chose not to fully binarize the mentioned MMs, and this trade-off is necessary to achieve a more accurate fully binarized DM.
> Q5: minor issues:
>
> Q5-1: Line 93 ...
Sorry for the incorrect link; it should have pointed to reference [54] in the manuscript.
> Q5-2: In Figure 4 ...
Here, we provide a clearer explanation of the meaning of Fig.4:
- The first row labeled `FP32` shows the visualization of the full-precision model output $\mathcal{F}^{fp}$.
- The second row labeled `Diff` represents the visualization of the difference between the full-precision model output and the binarized model output, $\mathcal{F}^{fp} - \mathcal{F}^{bi}$.
- The third row labeled `SPD` shows the visualization of the difference between the L2-normalized local attention maps of the full-precision and binarized models, as computed in the SPD method: $ \frac{ A^{fp}}{ ||A^{fp}||_2 } - \frac{A^{bi}}{ ||A^{bi}||_2 } $.
From Figure 4, it can be observed that our SPD method allows the model to pay more attention to local information in each patch.
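To make the quantity above concrete, here is a minimal pure-Python sketch of the patch-level self-similarity map $A = F F^{T}$ and the L2-normalized difference used in the visualization and loss. The helper names are hypothetical and the patch is a toy single-channel example; the actual SPD operates on multi-channel feature patches of the DM:

```python
def self_similarity(F):
    # A = F @ F^T: patch-level self-similarity ("local self-attention" in SPD)
    return [[sum(a * b for a, b in zip(ri, rj)) for rj in F] for ri in F]

def l2_norm(A):
    # Frobenius (element-wise L2) norm of the similarity map
    return sum(v * v for row in A for v in row) ** 0.5

def spd_patch_term(F_fp, F_bi):
    # squared distance between the L2-normalized similarity maps of the
    # full-precision and binarized feature patches
    A_fp, A_bi = self_similarity(F_fp), self_similarity(F_bi)
    n_fp, n_bi = l2_norm(A_fp), l2_norm(A_bi)
    return sum((a / n_fp - b / n_bi) ** 2
               for ra, rb in zip(A_fp, A_bi)
               for a, b in zip(ra, rb))

F_fp = [[1.0, 2.0], [3.0, 4.0]]          # toy 2-row feature patch
loss_same = spd_patch_term(F_fp, F_fp)   # identical patches -> 0.0
```

Because both maps are normalized, the term is invariant to a uniform scaling of the patch, so it compares the structure of the local attention rather than its magnitude.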
> Q5-3: The term ...
Thank you for your feedback. Using "attention-guided" might be a more appropriate term. Our implementation of SPD is inspired by self-attention, involving matrix multiplication with its transpose locally to measure local self-attention. However, as you mentioned, self-attention is a widely accepted technical term, and using it to describe our SPD could lead to conceptual confusion.
We will update the explanations for these three issues in the revised version of the manuscript. Thank you very much for your detailed review of our work.
> Q6: Are the Q ...
Sure, as we mentioned in our response to `Q4` above, QKV components have all been binarized.
> Q7: During inference ...
Yes, the calculations are dynamic. However, as explained in `Q3`, this part of the computation is identical to that in XNOR-Net and involves very minimal additional calculation, resulting in a very limited extra burden on the inference process.
> Q8: In Line 224 ...
As a general binarization method, Real-to-Binary [1] suggests that using an attention-map-based loss when distilling a binary model from a full-precision model achieves better results. Similarly, BinaryDM, the work most closely related to BiDM, directly points out that the L2 loss makes it difficult to align and optimize binary features with full-precision features. These studies indicate that the generic L2 loss is inadequate for the optimization needs of binary scenarios.
Accordingly, Table 7 of our manuscript shows that SPD indeed outperforms the MSE-based $L_2$ loss.
We will include a discussion on this section in the revised version of the manuscript to make the motivation behind SPD more clear.
[1] Training binary neural networks with real-to-binary convolutions.
> Q9: Figure 3 ...
>
> (1) Why is Lt−1 ...
>
> (2) Is this relationship ...
Sorry for the confusion. Here are our explanations for the two issues:
- (1) In reality, although the trainable $k$ results in changes to local elements, this does not imply that the entire vector is linearly scaled. Thus, ${L}^{t-1}$ is not a linear scaling up of ${B}^{t-1}$.
- (2) The relationship between these two is unrelated to Equation 12. ${L}^{t-1}$ and ${B}^{t-1}$ should be considered as differences in the output features of the model obtained after training under non-trainable/trainable settings of $k$ in Eq.10. This means that the learnable $k$ allows for more flexible and free feature representation.
You could refer to the Global Rebuttal (2) for a further explanation.
---
Rebuttal 2:
Comment: Dear Reviewer AtKg,
Thank you for your thorough review of our work, BiDM, during the review stage. We have carefully considered your concerns during the rebuttal stage and made revisions to the relevant sections of the manuscript.
We look forward to your review of our response and are happy to answer any further questions.
Best regards,
Authors
---
Rebuttal 3:
Title: Thanks for the responses.
Comment: Thanks for the authors' response. My concerns regarding **Q1, Q2, Q4, Q5, Q6, Q8** have been satisfactorily addressed. Nonetheless, I remain uncertain about the deployment efficiency, particularly given that practical inference speeds often diverge from theoretical operations per second (OPs). The dynamic *min-max* operations, despite their minimal theoretical OPs, may introduce substantial latency in practical deployment, especially with smaller model sizes where matrix multiplications are less significant. I am keen to know if there are effective tools available to verify this.
Regarding **Q3**, could you elaborate on the computations in Eq.(9)? Specifically, how is the convolution between $A$ and $ k\alpha $ executed efficiently?
---
Rebuttal Comment 3.1:
Title: Re: Further response (2/2)
Comment: > Q11: Regarding **Q3**, could you elaborate on the computations in Eq.(9)? Specifically, how is the convolution between A and kα executed efficiently?
Yes, in our response to `Q10`, we provided a detailed breakdown of the computation sequence in Eq.(9), including the pre-computed $k'$ and $\alpha$, and the calculations (1)~(6) that need to be performed during inference.
For the convolution between $ A $ and $ k\alpha $, the following calculations are involved:
- (a) Pre-computation before inference, which does not need to be repeated during inference. This includes:
- The divisor $ c $ (the channel dimension) involved in calculating the mean of $ A^{1,h,w} $ from $ I^{c,h,w} $ in a single convolution
- $ \alpha^{n,1,1,1} $ obtained from $ W^{n,c,h,w} $
- $ k'^{1,1,3,3} = \frac{k^{1,1,3,3}}{c} $
- (b) As mentioned in `Q10` (4):
- Perform the convolution between full-precision $ A^{1,32,32} $ and $ k'^{1,1,3,3} $ to obtain $ O_1^{1,32,32} $
- (c) $ \alpha $ needs to be used in the operation in `Q10` (6): $ O_2^{448,32,32} \odot \alpha^{448,1,1,1} $
The efficiency of our operators (as in XNOR-Net) is reflected in the following:
- The operations in (a) can be pre-computed before inference, which means:
- Step (3) in `Q10` does not involve high-cost multiplication or division operations.
- Step (6) in `Q10` does not require computing $ \alpha $ during inference.
- The convolution performed in part (b) has a channel count of only 1.
- Step (5) and (6) in `Q10` are point-wise multiplications rather than matrix multiplications, resulting in a lower computational burden.
In terms of actual inference time, our operator is only about 0.19× slower than a fully binarized convolution in a Baseline without any full-precision scaling factors, even though we use full-precision inference components, consistent with XNOR-Net, to achieve viewable generative performance.
---
Rebuttal 4:
Title: Re: Further response (1/2)
Comment: Thank you for your prompt response. We are glad to clarify the inference efficiency of BiDM.
> Q10: Thank you for the authors' response. My concerns regarding **Q1, Q2, Q4, Q5, Q6, Q8** have been satisfactorily addressed. Nonetheless, I remain uncertain about the deployment efficiency, particularly given that practical inference speeds often diverge from theoretical operations per second (OPs). The dynamic *min-max* operations, despite their minimal theoretical OPs, may introduce substantial latency in practical deployment, especially with smaller model sizes where matrix multiplications are less significant. I am keen to know if there are effective tools available to verify this.
In BiDM, there are no dynamic *min-max* operations; instead, our design involves only basic arithmetic operations such as convolution, matrix multiplication, addition and summation, and element-wise multiplication. Additionally, as stated in Eq.(9) of our manuscript and in our response to `Q3`, the operators in BiDM behave the same as those in XNOR-Net [1] during inference, corresponding to Eq.(11) in the original XNOR-Net paper.
Following your suggestion, we expand upon the inference process described in `Eq.(9)` and provide a detailed explanation and testing. Since the divisor involved in calculating the mean of $ A^{1,h,w} $ from $ I^{c,h,w} $ (i.e., the channel dimension $ c $) can be integrated into $ k^{1,1,3,3} $ in advance, we precompute $ k'^{1,1,3,3} = \frac{k^{1,1,3,3}}{c} $. Additionally, $ \alpha^{n,1,1,1} $ derived from $ W^{n,c,h,w} $ can also be computed ahead of inference. Therefore, the actual operations involved during inference are as follows:
[FP] Original full-precision convolution:
- (0) Perform convolution between the full-precision $I_f^{c=448,h=32,w=32}$ and the full-precision weights $W_f^{n=448,c=448,h=3,w=3}$ to obtain the full-precision output $O_f^{448,32,32}$.
[XNOR-Net/BiDM] The inference process for XNOR-Net/BiDM involves the following 6 steps:
- Sign operation:
- (1) Compute $I_b^{448,32,32} = \text{sign}(I_f^{448,32,32})$.
- Binary operation:
- (2) Perform convolution between the binary $I_b^{448,32,32}$ and the binary $W_b^{448,448,3,3}$ to obtain the full-precision output $O_f^{448,32,32}$.
- Full-precision operations:
- (3) Sum the full-precision $I_f^{448,32,32}$ across channels to obtain $A^{1,32,32}$.
- (4) Perform convolution between full-precision $A^{1,32,32}$ and $k'^{1,1,3,3}$ to obtain $O_1^{1,32,32}$.
- (5) Pointwise multiply $O_f^{448,32,32}$ by $O_1^{1,32,32}$ to obtain the full-precision output $O_2^{448,32,32}$.
- (6) Pointwise multiply $O_2^{448,32,32}$ by $\alpha^{448,1,1,1}$ to obtain the final full-precision output $O^{448,32,32}$.
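As an illustration only, the six steps above can be sketched for a single-channel toy case in pure Python (with $c=1$, the channel-wise statistic of the input reduces to the input itself; following XNOR-Net we take the mean of $|I|$ for $A$; all names and values below are made up for demonstration, not the actual 448-channel layer):

```python
def conv2d(x, w):
    # valid-mode single-channel 2-D cross-correlation on nested lists
    kh, kw = len(w), len(w[0])
    return [[sum(w[i][j] * x[r + i][c + j]
                 for i in range(kh) for j in range(kw))
             for c in range(len(x[0]) - kw + 1)]
            for r in range(len(x) - kh + 1)]

def sign(x):
    return [[1.0 if v >= 0 else -1.0 for v in row] for row in x]

# toy single-channel 4x4 input and 2x2 kernel
I = [[0.5, -1.2, 0.3, 0.8],
     [1.1, -0.4, -0.9, 0.2],
     [0.7, 0.6, -1.5, -0.3],
     [-0.2, 0.9, 0.4, -0.7]]
W = [[0.3, -0.5],
     [-0.2, 0.4]]

alpha = sum(abs(v) for row in W for v in row) / 4   # pre-computed mean |W|
k = [[0.25, 0.25], [0.25, 0.25]]                    # k': averaging kernel (learnable in BiDM)

A = [[abs(v) for v in row] for row in I]            # step (3): channel mean of |I| (c = 1)
O_b = conv2d(sign(I), sign(W))                      # steps (1)+(2): binary convolution
K = conv2d(A, k)                                    # step (4): spatial scaling map
O = [[ob * kk * alpha for ob, kk in zip(rb, rk)]    # steps (5)+(6): pointwise rescaling
     for rb, rk in zip(O_b, K)]
```

Only steps (3) and (4) run on full-precision data, and both operate on a single-channel map, which is why their cost stays small relative to the binary convolution.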
We utilized the general deployment library Larq[2] on a Qualcomm Snapdragon 855 Plus to test the actual runtime efficiency of the aforementioned single convolution. The runtime results for a single inference are summarized in the table below. Due to limitations of the deployment library and hardware, Baseline achieved a 9.97x speedup, while XNOR-Net/BiDM achieved an 8.07x speedup. Besides, the improvement in generation performance brought by BiDM is even more significant, and we believe that it could achieve better acceleration results in a more optimized environment.
| | (0) | (1)+(2) | (3) | (4) | (5) | (6) | Runtime($\mu s$/convolution) | FID$\downarrow$ |
| :--------------: | :------: | :-----: | :----: | :----: | :--: | :--: | ---------------------------: | --------------: |
| FP | 176371.0 | | | | | | 176371.0 | 2.99 |
| Baseline(DoReFa) | | 17695.2 | | | | 4.3 | 17699.5 | 188.30 |
| XNOR-Net / BiDM | | 17695.2 | 2948.8 | 1133.3 | 83.2 | 4.3 | 21864.8 | 22.74 |
[1] Rastegari et al. Xnor-net: Imagenet classification using binary convolutional neural networks. ECCV 2016
[2] "LarQ". https://github.com/larq/larq | Summary: The manuscript proposes a method for fully binarizing both the weights and activations of diffusion models, named BiDM. Structurally, it introduces an improved XNOR method for scaling factors of activations and high-level feature connections across time steps, based on observations of existing temporal phenomena. For optimization, the manuscript presents a patch-based attention mechanism distillation method, approached from the perspective of spatial features. Quantitative analysis and generated examples demonstrate that BiDM, as the first work to fully binarize diffusion models, surpasses existing binarization methods, showcasing the potential for deploying diffusion models in resource-constrained scenarios.
Strengths: 1. This is the first work to fully binarize diffusion models, preventing them from collapsing in extreme scenarios and demonstrating acceptable generated samples. This is significant for exploring the compression potential of generative models.
2. The improvement to XNOR is not only applicable to diffusion models (DMs) but also has positive implications for the whole binarization field. Most papers and deployment frameworks, when replicating and implementing XNOR, have consistently followed the approach described in Equations 7 or 8 of this manuscript, rather than the method described in Equation 9. The authors have meticulously examined and adaptively improved the original XNOR approach within the DM field, prompting reconsideration in both DM compression and general binarization fields.
3. The cross-time step connection appears novel and well-suited to the inference structure of DMs. Unlike the cache design in DeepCache, which aims at inference efficiency, this manuscript uses connections to enhance information due to the inherently efficient nature of binarized models. These connections can be placed at multiple nodes within the model, with learnable coefficient factors providing greater adaptability.
4. Section 3.2 first designs TBS from the perspective of information enhancement and then provides further explanation from the perspective of error reduction in Figure 3, making the overall method design appear cohesive and intuitively clear.
5. The distillation strategy design is straightforward and effective, approaching patch division based on the inherent requirements of the generation task and the natural locality of the convolution module, and achieving excellent results.
Weaknesses: 1. The description of the cross-time step connection during the training phase is somewhat vague. During the inference phase, the impact of this connection is iteratively attenuated. For example, when α is 0.3, the influence of step T on step T-2 is 0.3×0.3=0.09, rather than 0. However, upon reviewing the source code in the supplementary materials, I found that the authors only consider steps T-1 and T-2 during training. I would like to know if this affects the final accuracy.
2. The scenario assumed in Figure 3 is overly idealized. When only L^t changes, for instance, when L^t is at the lower right of L^(t-1), the weighted average might result in T^(t-1) being further from F^(t-1). How is this situation explained?
3. Although BiDM shows significant improvements over other binarization methods, there remains a substantial performance gap compared to full-precision models. This could still hinder its practical application.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The manuscript only addresses diffusion models using a U-Net backbone primarily based on convolution. It appears to be inapplicable to diffusion models like DiT, which are primarily based on transformers and not related to U-Net.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your high recognition of our work and the valuable suggestions you provided. Our response is as follows:
> Q1: The description of the cross-time step connection during the training phase is somewhat vague. During the inference phase, the impact of this connection is iteratively attenuated. For example, when α is 0.3, the influence of step T on step T-2 is 0.3×0.3=0.09, rather than 0. However, upon reviewing the source code in the supplementary materials, I found that the authors only consider steps T-1 and T-2 during training. I would like to know if this affects the final accuracy.
From the optimization principles of DDPM[1], this does not affect the accuracy of the results. Essentially, our training process is no different from that of DDPM, as both use efficient training to optimize random terms of the usual variational bound on negative log likelihood with stochastic gradient descent. Due to the cross-time step connection in TBS, which requires considering the values from the previous inference step, we included T-2, T-1, and T in our considerations. This approach still adheres to the DDPM training methodology.
[1] Ho J, Jain A, Abbeel P. Denoising diffusion probabilistic models. NeurIPS, 2020.
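As a side illustration of the attenuation you describe, a minimal one-dimensional sketch (the hypothetical helper `tbs_forward` below is not the actual Eq.12, which operates on intermediate feature maps) shows how step T's influence decays geometrically during inference, reaching $\alpha^2 = 0.09$ at step T-2:

```python
def tbs_forward(features, alpha=0.3):
    # Blend each timestep's feature with the previous (already blended) one:
    # a toy 1-D version of the cross-timestep connection.
    out, prev = [], None
    for f in features:                                  # t = T, T-1, T-2, ...
        cur = f if prev is None else [x + alpha * p for x, p in zip(f, prev)]
        out.append(cur)
        prev = cur
    return out

# a unit impulse at step T attenuates geometrically: 1 -> 0.3 -> 0.09
trace = tbs_forward([[1.0], [0.0], [0.0]], alpha=0.3)
```

During training, truncating the unrolled chain at T-2 discards only these already-attenuated higher-order terms, which is consistent with the DDPM-style stochastic training described above.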
> Q2: The scenario assumed in Figure 3 is overly idealized. When only L^t changes, for instance, when L^t is at the lower right of L^(t-1), the weighted average might result in T^(t-1) being further from F^(t-1). How is this situation explained?
You can refer to the Global Rebuttal (2) for the explanation. Additionally, in practical applications, the learnability of $\alpha$ ensures effective collaboration between $L^{t-1}$ and $L^{t}$. Moreover, the ablation study results in Table 6 also demonstrate the effectiveness of connecting $L^{t-1}$ and $L^{t}$ across time steps.
> Q3: Although BiDM shows significant improvements over other binarization methods, there remains a substantial performance gap compared to full-precision models. This could still hinder its practical application.
BiDM is the first fully binarized DM method capable of generating viewable images, significantly surpassing advanced binarization methods like ReSTE in quantitative metrics (on LSUN-Bedrooms, BiDM achieves an FID of 22.74, notably better than ReSTE's FID of 59.44). This demonstrates the feasibility of fully binarized DMs, marking a step towards practical applications with great potential.
Besides, as you highlighted in the Strengths section, the improvements to techniques like XNOR will have a positive impact on the entire binarization field. We will further explore this area in future work to achieve broader practical applications.
> Q4: The manuscript only addresses diffusion models using a U-Net backbone primarily based on convolution. It appears to be inapplicable to diffusion models like DiT, which are primarily based on transformers and not related to U-Net.
The connections across timesteps in TBS and SPD should be directly applicable to DiT, with SPD likely being even more compatible with DiT's inherently space-patched input. While the convolutional design in TBS isn't suitable for DiT's linear-based architecture, we have observed that finer quantization granularity (such as per-group quantization) is gradually being proposed for architectures like transformers. Using a similar approach of dynamically calculating statistics for scaling factors followed by convolution or linear transformation could also be applicable. We plan to explore these aspects further in our future work.
---
Rebuttal 2:
Comment: Dear Reviewer cNoX,
Thank you for your thorough review of our work, BiDM, during the review stage. We have carefully considered your concerns during the rebuttal stage and made revisions to the relevant sections of the manuscript.
We look forward to your review of our response and are happy to answer any further questions.
Best regards,
Authors
---
Rebuttal Comment 2.1:
Comment: Thank you for the detailed rebuttal and the additional results provided. I appreciate the effort in addressing the issues raised. Due to the novelty and potential broad applications, I am willing to vote for acceptance for this paper. | Summary: This paper aims to fully binarize weights and activations of diffusion models (DMs) to achieve storage saving and inference acceleration. To this end, the paper proposes timestep-friendly binary structure (TBS), which employs learnable activation binarizers and cross-timestep feature connections to capture the correlation of the activation features over the timesteps. In addition, the paper introduces space patched distillation (SPD) to match the spatial locality of binary features with the full-precision ones during training. The method is tested on some common image generation benchmarks such as LSUN and FFHQ 256x256, CIFAR-10.
Strengths: - The paper aims to tackle a challenging problem which is to fully binarize both weights and activations of diffusion models.
- The paper is well-written.
- The experimental results are promising.
- The proposed methods of using timestep-friendly binary structure and space patched distillation are well-motivated.
Weaknesses: - The main weakness of the paper is that the proposed method, BiDM, increases the training time of DMs compared to the original process. This issue should be stated more precisely in the paper. The authors should compare the convergence rates of the proposed method and its competitors in terms of wall-clock time and identify which components contribute to the increased training time by breaking down the complexity of all main components of the proposed model.
- The chosen baselines are quite weak as all of them were originally designed for discriminative tasks such as image classification. The authors should consider stronger baselines dedicated to generative tasks such as [1] and the baselines therein.
Ref:
[1] Xia et al. Basic Binary Convolution Unit for Binarized Image Restoration Network. ICLR 2023
Technical Quality: 2
Clarity: 2
Questions for Authors: In Table 3, it is not clear why adding SPD alone makes the results worse, whereas combining it with TBS leads to improved performance. Do the authors have any explanation for this phenomenon? This point should be addressed to clarify the underlying reasons for the observed behavior and to provide a deeper understanding of the interaction between SPD and TBS in the proposed method.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for reviewing our manuscript and providing valuable suggestions. Here are our responses to some of the concerns you raised:
> Q1: The main weakness of the paper is that the proposed method, BiDM, increases the training time of DMs compared to the original process. This issue should be stated more precisely in the paper. The authors should compare the convergence rates of the proposed method and its competitors in terms of wall-clock time and identify which components contribute to the increased training time by breaking down the complexity of all main components of the proposed model.
Thank you for your suggestions. BiDM consists of two techniques: TBS and SPD. The time efficiency analysis during training is as follows:
- TBS includes the learnable convolution of scaling factors (Eq.10) and the cross-time step connection (Eq.12):
- The increase in training time due to the convolution of trainable scaling factors is minimal, as the depth of the convolution for scaling factors is only 1, and the size of the trainable convolution kernel is only $3\times3$.
- The cross-time step connection is the primary factor for the increase in training time. Since it requires training $\alpha$, we introduce this structure during training, so each training sample requires not only noise estimation for $T^{t-1}$ but also for $T^{t}$, directly doubling the sampling steps.
- SPD may lead to a slight increase in training time (an additional 0.18 times), but since we only apply supervision to the larger upsampling/middle/downsampling blocks, the increase is limited.
We have supplemented the actual training time on an NVIDIA A100 40GB GPU, and the results in Global Rebuttal (3) align well with the theoretical analysis above. Due to practical software and hardware factors, the actual per-iteration training time of BiDM did not fully double compared to the baseline.
Following your suggestion, we compared the training loss under the same training iterations or training time. BiDM achieved significantly better generative results than baseline methods under the same training cost, demonstrating that it not only has a higher upper limit of generative capability, but is also relatively efficient when considering generative performance. You can refer to Global Rebuttal (3) for a more detailed explanation.
We also tested the FID after uniformly training for 0.5 days, and the results in Global Rebuttal (3) show:
- BiDM has the best convergence, even in a short training time.
- No.3 significantly outperforms No.5 because connections across timesteps greatly increase training time, making No.3 converge faster in the early training stages.
- No.5 slightly outperforms No.7 because $\mathcal{L}_{SPD}$ causes a slight increase in training time.
We emphasize that the biggest challenge in fully binarizing DMs lies in the drop in accuracy. Although BiDM requires a longer training time for the same number of iterations, it significantly enhances the quality of generated images, whereas no other method has been able to produce effective images.
> Q2: The chosen baselines are quite weak as all of them were originally designed for discriminative tasks such as image classification. The authors should consider stronger baselines dedicated to generative tasks such as [1] and the baselines therein.
Thank you for your suggestion. We have supplemented the results and analysis for BBCU, and we have also included experimental results for other quantization methods suited for generative models, such as EfficientDM. You can refer to the Global Rebuttal (1) for more comprehensive information.
We will include the above discussion in the revised version of the manuscript. These discussions will further clarify the motivation and necessity behind each component of BiDM.
> Q3: In Table 3, it is not clear why adding SPD alone makes the results worse, whereas combining it with TBS leads to improved performance. Do the authors have any explanation for this phenomenon? This point should be addressed to clarify the underlying reasons for the observed behavior and to provide a deeper understanding of the interaction between SPD and TBS in the proposed method.
Sorry for the confusion, but there might be a misunderstanding. In fact, SPD alone improves the results compared to Vanilla. Specifically, the ablation results in Table 3 can be more clearly restated in the table below. In all metrics, the binarized DM with only SPD outperforms the Vanilla method. For example, FID decreases from 106.62 to 40.62, demonstrating its superiority in optimizing binarized DMs.
When SPD and TBS are used together, SPD brings even better generative performance on top of TBS, further reducing the FID from 35.23 to 22.74.
| Method | TBS | SPD | FID$\downarrow$ | sFID$\downarrow$ | Prec.$\uparrow$ | Recall$\uparrow$ |
| ------- | :------: | :------: | --------------: | ---------------: | --------------: | ---------------: |
| Vanilla | | | 106.62 | 56.81 | 6.82 | 5.22 |
| +TBS | $\surd $ | | 35.23 | 25.13 | 26.38 | 14.32 |
| +SPD | | $\surd $ | 40.62 | 31.61 | 23.87 | 11.18 |
| BiDM | $\surd $ | $\surd $ | 22.74 | 17.91 | 33.54 | 19.90 |
---
Rebuttal Comment 1.1:
Title: Official comment from Reviewer 4HDt
Comment: Thanks the author(s) for the rebuttal.
As most of my concerns have been addressed, I will increase score by 1 point.
---
Rebuttal 2:
Comment: Dear Reviewer 4HDt,
Thank you for your thorough review of our work, BiDM, during the review stage. We have carefully considered your concerns during the rebuttal stage and made revisions to the relevant sections of the manuscript.
We look forward to your review of our response and are happy to answer any further questions.
Best regards,
Authors | null | null | Rebuttal 1:
Rebuttal: ## Global Rebuttal
We appreciate all reviewers for their careful reviews and the constructive feedback provided on our work, BiDM. Here is a summary of the main contributions of BiDM:
We propose BiDM, the first method to achieve an accurate fully binarized diffusion model, aiming for extreme compression and inference acceleration. Based on two observations — activations at different timesteps and the characteristics of image generation tasks — we introduce the Timestep-friendly Binary Structure (TBS) and Space Patched Distillation (SPD) from temporal and spatial perspectives, respectively. In TBS, the learnable tiny convolution adapts to the highly dynamic activation range of DMs from the basic binary operator level. It is tightly integrated with cross-timestep connections that leverage the similarity of activation features between adjacent timesteps, forming the structure of BiDM. SPD takes advantage of the spatial locality in both the DM model structure and the image generation tasks it performs, alleviating optimization difficulties brought by general training methods or naive L2 distillation loss and achieving better generative results.
As the first fully binarized DM, BiDM achieves the best generative results, reducing FID by 62% on LSUN-Bedrooms, and is currently the only method capable of generating visually acceptable images. At the same time, BiDM ensures considerable efficiency during inference: compared to the classic binarization method XNOR-Net, it adds only a minimal number of addition operations for cross-timestep connections, while achieving 28.0× storage and 52.7× OPs savings.
We also noticed that the reviewers raised some common questions. We have summarized and responded to them collectively as follows:
(1) Reviewers suggested adding quantization methods more suited to generative models, such as BBCU and EfficientDM, for comparison. We have supplemented this on LSUN-Bedrooms and conducted the following analysis:
- BBCU:
- We have supplemented our work with BBCU, a binarization method designed for generative tasks rather than discriminative models. We implemented the residual connections in BBCU and used RPReLU. However, since DMs do not have BN layers, we did not incorporate the BN design from BBCU in our adaptation.
- Experimental results indicate that even as a binarization strategy for generative models, BBCU faces significant breakdowns when applied to DMs.
- EfficientDM:
- As a work targeting QAT for DM, EfficientDM is indeed a suitable comparison, especially since it designs TALSQ to address the variation in activation range. We adapted DoReFa as the basic operator for its use under W1A1.
- The results in the table below show that EfficientDM struggles to adapt to the extreme W1A1 scenario. This may be because its quantizer has difficulty adapting to binarized DMs, and because updating weights via QALoRA may yield suboptimal results compared to full-parameter QAT.
As we mentioned in the TBS section of our manuscript, most existing binarization methods struggle to handle the wide activation range and flexible expression of DMs, further highlighting the necessity of TBS. Their optimization strategies may also not be tailored for the image generation tasks performed by DM, which means they only achieve conventional but suboptimal optimization.
| Method | #Bits | FID$\downarrow$ | sFID$\downarrow$ |
| :---------- | ----- | --------------: | ---------------: |
| BBCU | 1/1 | 236.07 | 89.66 |
| EfficientDM | 1/1 | 194.45 | 113.24 |
| BiDM | 1/1 | 22.74 | 17.91 |
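For context on the W1A1 adaptation mentioned above, the 1-bit DoReFa forward quantizers used as a basic operator can be sketched as follows. This is a generic illustration of the standard DoReFa formulas (weights binarized to sign(w) scaled by the mean absolute value; activations clipped to [0, 1] and rounded), not the authors' exact implementation; the example tensors are hypothetical.

```python
import numpy as np

def dorefa_binarize_weights(w):
    # 1-bit DoReFa weights: sign(w) scaled by the mean absolute value,
    # so the binarized tensor keeps the original magnitude on average.
    return np.mean(np.abs(w)) * np.sign(w)

def dorefa_quantize_activations(a, bits=1):
    # DoReFa activations: clip to [0, 1], then round to 2**bits - 1 levels.
    # With bits=1 (the W1A1 case), each activation becomes 0 or 1.
    n = 2 ** bits - 1
    return np.round(np.clip(a, 0.0, 1.0) * n) / n

w = np.array([0.4, -0.2, 0.1, -0.5])
a = np.array([-0.3, 0.2, 0.7, 1.4])
print(dorefa_binarize_weights(w))      # scaled signs of the weights
print(dorefa_quantize_activations(a))  # activations snapped to {0, 1}
```

Such hard 0/1 quantizers leave little room to track the highly dynamic activation ranges of DMs, which is consistent with the breakdowns observed in the table above.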
(2) Some reviewers expressed confusion regarding the details of Fig.3. Here is a clearer clarification for Fig.3:
- Since the feature space is very high-dimensional, we could only illustrate it using schematic diagrams. The functions of the two components in TBS are exaggerated for a clearer explanation. Thus, Fig.3 is merely a visual representation of the collaborative function of the two components in TBS. For a detailed explanation, please refer to the end of section 3.2 in our manuscript. In simple terms, the learnable tiny convolution $k$ in TBS allows scaling factors to adapt more flexibly to the representation of DM, while connections across timesteps enable the binarized DM to use the previous step’s output information for appropriate information compensation. Together, these elements enhance the precision of the binarized DM.
- We have adjusted Fig.3 to make it more general. You can view the updated illustration in the attached PDF.
(3) We have also included in the PDF attachment the convergence of training loss over iterations/time for different methods. The results show that BiDM not only achieves the best generative performance with sufficient training time, as stated in the original manuscript, but also exhibits the best convergence even under the same number of iterations/time.
We will include the above discussions in the revised version of the manuscript. These discussions will further clarify the motivation and necessity behind each component of BiDM.
For specific questions raised by each reviewer, please refer to our corresponding responses.
Pdf: /pdf/33bd8ba363c0e1f1cacb1e7be37e7b71e27c3f35.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Stepwise Weighted Spike Coding for Deep Spiking Neural Networks | Reject | Summary: This work belongs to ANN2SNN and proposes a novel coding scheme and neuron model to enhance the efficiency and accuracy of Spiking Neural Networks (SNNs) while reducing energy consumption. The Stepwise Weighted Spike (SWS) coding scheme improves information encoding by stepwise weighting input signals and introducing negative pulses, reducing the number of coding spikes needed. The Ternary Self-Amplifying (TSA) neuron model further enhances accuracy by progressively weighting the input through residual membrane potential adjustments and incorporating negative residuals and thresholds. Introducing silent periods allows the neuron to receive more input information before firing, significantly improving accuracy with minimal latency. Experimental results on datasets like MNIST, CIFAR10, and ImageNet demonstrate that the SWS coding scheme achieves better performance with fewer coding and computing steps, performing well even in very deep SNNs and achieving accuracy comparable to Artificial Neural Networks (ANNs) with the same structure.
Strengths: Originality: This work introduces the Stepwise Weighted Spike (SWS) coding scheme, which is a novel approach in the field of Spiking Neural Networks (SNNs). The proposed method compresses spikes by weighting their significance in each step of neural computation, which enhances the performance and reduces the energy consumption of SNNs. Ternary pulses are a relatively new method in SNN, so the improvement of the ternary SNN encoding method has a relatively high degree of originality.
Quality: The paper provides a comprehensive set of experiments to validate the proposed SWS coding scheme. These experiments demonstrate that the SWS coding scheme significantly reduces operations and latency compared to existing neural coding schemes. The paper outlines the parameters used during training and provides justifications for the chosen experimental settings.
Clarity: The introduction of the paper effectively motivates the work by discussing the limitations of current SNN coding schemes and proposing SWS as a solution. The methodology is clearly presented, with detailed descriptions of the new coding scheme and the Ternary Self-Amplifying (TSA) neuron model. Important symbols and their meanings are well-explained, contributing to the overall clarity of the paper.
Significance: The paper makes a significant contribution by proposing the SWS coding scheme, which enhances the efficiency and performance of SNNs. This new method addresses critical issues such as high latency and energy consumption in existing coding schemes, making it a valuable addition to the field. By improving the encoding of information in spikes, the SWS scheme has the potential to advance the development of more efficient and lower-power computing systems, thereby providing new options for the choice of coding schemes in SNNs.
Weaknesses: In the ImageNet experiments in Table 2, SWS and other comparative ANN-SNN methods used different baselines, which is why the '$SNN\ Acc$' results are much higher than those of the comparative methods. However, the ‘$\Delta ACC$’ does not seem to show a significant difference (except for Hybrid training and Spiking ResNet). Using the same network architecture and pre-trained weights would be more credible.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1) For the experiments on ImageNet in Table 2, it seems that the ANN baseline of SWS has a higher accuracy than that for the comparative methods. Why weren't other methods tested on the same architecture and pre-trained weights?
2) Figure 2 describes the 'silent period' proposed in this paper, during which neuron potentials accumulate and are not allowed to fire spikes. After experiencing a silent period of $T_s$, the corresponding $\theta^l$ and $V_{th}^l$ are amplified by a factor of $\beta^{T_s}$. Firstly, should the $V_{th}^l$ in Figures 2(b) and 2(c) be $V_{th}^l\beta^{T_s}$? Otherwise, it does not correspond to $\theta^l\beta^{T_s}$. Secondly, how many time steps do the 'burst' spikes last after the silent period? Why, in the middle graph of Figure 2(b), are there larger spikes at the second and fourth time steps and a smaller spike at the third time step? In the middle graph of Figure 2(c), why is there a smaller spike at the third time step and a larger spike at the fourth time step? Is the amplification of $V_{th}^l$ and $\theta^l$ by $\beta^{T_s}$ maintained for several time steps or continuously after the silent period?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors have not explicitly addressed the limitations or potential negative societal impacts of their work. To improve the transparency and completeness of their research, the authors could consider the following constructive suggestions:
1) Limitations:
Create a dedicated "Limitations" section in the paper to discuss any constraints, assumptions, or potential weaknesses of the proposed SWS coding scheme.
Reflect on the robustness of the results to violations of assumptions, such as noiseless settings, model specifications, or dataset dependencies.
Discuss the scope of the claims made in the paper, including the generalizability of the approach across different datasets and scenarios.
Address factors that may influence the performance of the SWS coding scheme, such as computational efficiency and scalability with varying dataset sizes.
Consider possible limitations related to privacy and fairness concerns in the implementation of the SWS coding scheme.
2) Negative Societal Impact:
Explicitly acknowledge the potential negative societal impacts of the SWS coding scheme, such as privacy risks, fairness considerations, or unintended consequences.
Discuss how the technology could be misused or lead to harmful outcomes, even if not intended by the authors.
Consider mitigation strategies to address any identified negative societal impacts, such as controlled release of models, monitoring mechanisms, or additional safeguards.
Emphasize the importance of ethical considerations and responsible deployment of the SWS coding scheme in real-world applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive and thoughtful comments. We are encouraged that you have recognized the novelty of our encoding scheme, the completeness of our experiments, the clarity of our presentation and the significance of our research. We would like to address your concerns and answer your questions in the following.
> 1. For the experiments on ImageNet in Table 2, it seems that the ANN baseline of SWS has a higher accuracy than that for the comparative methods. Why weren't other methods tested on the same architecture and pre-trained weights?
Thank you for pointing this out. Let us first explain the reason and then supplement the experiments. In conversion-based papers, almost every work uses different pretrained ANN weights. Considering the time cost of reproducing from scratch, we directly used the ANNs in the corresponding works as the accuracy baseline and added the conversion loss ($\Delta\text{Acc}$) as a standard. Since we used more recent pretrained ANN weights (most of the pretrained weights in this work are sourced from Torchvision or Hugging Face), our baseline might be higher.
In the conversion-based ImageNet experiments in *Tab. 2*, [13] and [26] perform far worse than SWS-SNN in terms of both conversion loss and time steps; [20] and [11] fall far behind SWS-SNN in terms of time steps, and their conversion losses are also unsatisfactory. Compared to these four works, the performance advantage in our work is very clear. For [2] and [12], they actually require modifying the original ANN architecture and training from scratch. Specifically, they replace the ReLU in the original ANN with a QCFS function favorable for SNN conversion (the modified ANN is referred to as QCFS-ANN). Therefore, their ANN weights are necessarily different from the weights used in our work. In our opinion, this is unfair to SWS because QCFS-ANN is naturally more conducive to low time step conversion.
For [7], we found that it is open-source (the open-source works include Hybrid Training [26], QCFS [2], Fast-SNN [14], and COS [12]), uses pretrained weights from Torchvision, and does not require structural changes. Therefore, we supplemented the experimental results using the same pretrained weights as [7], as shown in the table below.
|ref |architecture |time step |$T_{s}$ |SNN acc |$\Delta$acc |
|:---: | :---: |:---: |:---: |:---: |:---: |
|[7] |VGG-16 |$7$ |$-$ |$72.95$% |$-0.41$% |
|ours |VGG-16 |$8$ |$2$ |$73.28$% |$-0.08$% |
We also want to highlight the difference in model structure. We are the only ones who conducted experiments on ResNeXt101, a network with 101 layers, which is a challenge for any encoding method. The results show that SWS encoding achieved a conversion loss of only $0.42$% with just $8$ coding steps, demonstrating the effectiveness of our approach.
> 2. Figure 2 describes the 'silent period' proposed in this paper, during which neuron potentials accumulate and are not allowed to fire spikes. After experiencing a silent period of $T_{s}$, the corresponding $\theta^{l}$ and $V_{th}^{l}$ are amplified by a factor of $\beta^{T_{s}}$.Firstly, should the $V_{th}^{l}$ in Figures 2(b) and 2(c) be $V_{th}^{l}\beta^{T_{s}}$? Otherwise, it does not correspond to $\theta^{l}\beta^{T_{s}}$. Secondly, how many time steps do the 'burst' spikes last after the silent period? Why, in the middle graph of Figure 2(b), are there larger spikes at the second and fourth time steps and a smaller spike at the third time step? In the middle graph of Figure 2(c), why is there a smaller spike at the third time step and a larger spike at the fourth time step? Is the amplification of $V_{th}^{l}$ and $\theta^{l}$ by $\beta^{T_{s}}$ maintained for several time steps or continuously after the silent period?
For the first question: While the firing threshold does increase by a factor of $\beta^{T_{s}}$, we still use $V_{th}^{l}$ to denote it because this aligns with the definition of the symbol. The inclusion of a silent period increases the reset magnitude of the membrane potential (i.e., $\theta^{l}\beta^{T_{s}}$) after firing a spike but does not change the amplitude of the spike (i.e., $\theta^{l}$) itself. Therefore, we use $V_{th}^{l}$, $\theta^{l}$, and $\theta^{l}\beta^{T_{s}}$ as symbols in this figure.
For the second question: We apologize for the mistakes in *Fig. 2(b)* and *Fig.2(c)*. Your understanding is correct. The amplitude of all the blue dotted lines in the figure should be the same and equal to the amplitude indicated by the orange dashed line. We will correct this error in the full version of the paper. Thank you for pointing this out. | Summary: This paper proposes a novel Stepwise Weighted Spike (SWS) coding scheme designed to improve the efficiency of Spiking Neural Networks (SNNs) by compressing spikes and weighting their significance in each step of neural computation. This method addresses the issues of high delays and energy consumption associated with existing SNN coding schemes, as well as the complexity of neuron models and training techniques. The authors also introduce a Ternary Self-Amplifying (TSA) neuron model, incorporating a silent period to support SWS-based computing. This model is designed to minimize the residual error resulting from the stepwise weighting process. The experimental results provided in the manuscript demonstrate that the proposed SWS coding scheme significantly outperforms existing neural coding schemes, particularly in very deep SNNs. Key improvements include reduced operations and latency, enhanced overall performance, and lower energy consumption.
Strengths: 1. Innovative Approach: Introducing the SWS coding scheme and TSA neuron model is innovative.
2. Performance Improvement: This paper provides experiments showing that the proposed methods outperform existing coding schemes regarding both performance and energy efficiency.
Weaknesses: 1. Clarity and Detail: Some sections of this paper could benefit from more detailed explanations, particularly in the description of the SWS coding scheme and TSA neuron model. This would help in understanding the underlying mechanisms and their advantages.
2. Comparative Analysis: While the experimental results are promising, there is no proof from the experimental results that the encoding method proposed is more advantageous.
3. There are some grammatical errors in the paper, such as in the second paragraph of Section 3.3: "The neurons only integrates input and performs stepwise weighting". It is also recommended that a uniform term be used for "spike" and "pulse".
4. Symbol design problem, "t" in Eq. (3) becomes "n" in Eq. (5).
5. There are many long paragraphs and sentences in the paper, making it difficult for readers to accurately understand the meaning of the paper.
6. The description of the problem in the third paragraph of Section 1 and the end of Section 2 is not clear, making it difficult for readers to understand the problem that the article really wants to solve.
7. The description of the encoding method in Eq. (7) is difficult to understand. According to Eq. (7), the encoded value $A_j$ should have no time step. However, in the experimental part, the method of this paper has 8 time steps.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. The paper proposes a new encoding method. Is this encoding method designed for the TSA neuron model, or is it an encoding method in more general scenarios? If it is a targeted approach, would it be more appropriate as a sub-module of the proposed method?
2. Three datasets are used in the experimental section. Why aren't the comparison results on MNIST shown in Table 2?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The paper mentioned that due to the setting of the neuron's silent period, the delay increases. It can be seen from the experiments that the overall latency of the method is lower, which can be regarded as solving this limitation. At the same time, this article does not have potential negative social impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive and valuable feedback. We are encouraged that you found our approach innovative and the performance satisfactory. We would like to address your concerns and your questions in the following.
> 1. Clarity and Detail:
Thank you for your constructive feedback. We apologize for the unclear descriptions in these sections. In the full version, we will provide a more detailed explanation of the residual error issue. We will also rewrite some of the long sentences for clarity.
> 2. Comparative Analysis: While the experimental results are promising, there is no proof from the experimental results that the encoding method proposed is more advantageous.
In our experiments, we have already compared accuracy, latency, and the number of operations with other encoding schemes. For instance, in *Tab. 2*, all SNNs using conversion strategies other than ours are based on rate encoding. It can be seen that most of them require very long time steps, and the conversion loss is not satisfactory (QCFS, COS, and Fast-SNN have shorter time steps because they have modified the original ANN to facilitate rate-based conversion). TTFS coding will also be added to the table in the full version (see our response to reviewer FCv3’s *question 4* for details).
We compare the latency of SWS encoding with that of rate encoding and TSC encoding in *Sec. 4.2*. We did not find data on latency (measured by time steps) in the papers based on TTFS encoding, so it is not included in this part. In *Sec. 4.3*, we compare the operation numbers of SWS with other encoding schemes, including TTFS and rate encoding. These experimental results all show that SWS encoding has advantages.
In the full version, we will be more explicit about the encoding scheme used in each compared work.
> 3. There are some grammatical errors in the paper. Such as the second paragraph of Section 3.3, "The neurons only integrates input and performs stepwise weighting". It is recommended that a uniform representation be used for "spike" and "pulse".
Thank you for pointing out these issues. We will check every detail carefully.
> 4. Symbol design problem, "t" in Eq. (3) becomes "n" in Eq. (5).
This is actually not a notation error. In *Eq. 5*, $n$ refers to a specific time point, whereas $t$ in *Eq. 3* is a variable.
> 5. There are many long paragraphs and sentences in the paper, making it difficult for readers to accurately understand the meaning of the paper.
Thank you for pointing out this issue. We will break down long sentences and appropriately shorten or split lengthy paragraphs.
> 6. The description of the problem in the third paragraph of Section 1 and the end of Section 2 is not clear, making it difficult for readers to understand the problem that the article really wants to solve.
In this paper, we tried to address the residual error problem caused by stepwise weighting. We will provide a clearer explanation of this issue and include additional mathematical definition to aid understanding (see our response to reviewer FCv3’s *question 2* for details).
> 7. The description of the encoding method in Eq. (7) is difficult to understand. According to Eq. (7), the encoded value $A_{j}$ should have no time step. However, in the experimental part, the method of this paper has 8 time steps.
For static image classification tasks, the encoding process is given by *Eq. 9* (not *Eq. 7*), where $p_{j}$ denotes the input pixel value and $z_{j}^{0}(t)$ denotes the input to the TSA neuron at each time step $t$. *Eq. 7* is used to specify the range of values that can be losslessly encoded by SWS encoding.
> 8. The paper proposes a new encoding method. Is this encoding method designed for the TSA neuron model, or is it an encoding method in more general scenarios? If it is a targeted approach, would it be more appropriate as a sub-module of the proposed method?
Yes, the SWS coding method is a targeted approach. According to *Eq. 5*, the SWS encoding scheme allows preceding spikes to encode more information. However, stepwise weighting can lead to amplification of the residual membrane potential. The key to implementing SWS encoding is addressing this residual error. In this paper, we limit the value of the residual membrane potential by lowering the threshold and incorporating a silent period, resulting in the TSA neuron model. Therefore, our encoding scheme indeed requires a specific neuron model to be effective, and our neurons are not interchangeable with those in other coding schemes.
Thanks for this insightful comment. It would be more appropriate as a sub-module of the proposed method. We will put more emphasis on the neurons themselves and reorganize the content in the full version.
> 9. Three datasets are used in the experimental section. Why aren't the comparison results on MNIST shown in Table 2?
Thank you for pointing this out. We found that recent work seems to seldom compare accuracy on MNIST. Due to space constraints, we presented the experimental results on CIFAR10 and ImageNet, as we believe they are of higher priority. In our work, we use LeNet-5 to compare the number of operations on MNIST, and the comparison results can be found in *Fig. 4*. We have supplemented the experiment as requested, and the accuracy performance on MNIST is shown in the table below. We will include this in *Tab. 2* in the full version.
|ref |architecture |time step |$T_{s}$ |SNN acc |$\Delta$acc |
|:---: | :---: |:---: |:---: |:---: |:---: |
|[6] |LeNet-5 |$500$ |$-$ |$99.12$% |$-0.02$% |
|[29] |LeNet-5 |$44$ |$-$ |$98.93$% |$-0.03$% |
|[27] |LeNet-5 |$-$ |$-$ |$98.53$% |$-0.43$% |
|ours |LeNet-5 |$4$ |$0$ |$99.21$% |$+0.13$% |
|ours |LeNet-5 |$8$ |$0$ |$99.33$% |$+0.25$% | | Summary: The paper proposes a new coding scheme called Stepwise Weighted Spike (SWS) coding scheme for spiking neural networks to enhance the efficiency and reduce the number of operations and thus energy consumption. The SWS coding scheme tackles challenges associated with temporal and rate coding, such as heightened latency and energy usage. It achieves this by compressing spikes and assigning them varying weights at each computational step. Additionally, the paper introduces the Ternary Self-Amplifying (TSA) neuron model, which incorporates a silent phase to mitigate residual errors arising from the weighting procedure.
Strengths: The SWS coding scheme enhances information capacity and reduces the number of spikes, leading to lower energy consumption and higher accuracy as compared to other coding schemes. The effectiveness of this approach is demonstrated using different datasets.
Weaknesses: 1. Which model of a spiking neuron is being employed in equation 3 (line 120)? What is the reset mechanism here after the neuron fires? Are the weights allowed to have negative values? The description of the model is unclear.
2. The notion of residual error intuitively makes sense but it is confusing. Please define the residual error mathematically (line 139) for better understanding.
3. Why ANN-(sws)SNN conversion is opted instead of directly training the SWS based SNN?
4. There are some recent works [1,2,3] with TTFS encoding which claims better results in regard to energy-efficiency and low-latency. First, these works need to be cited in the related work section. In my opinion, a detailed comparative analysis with other models and encoding schemes (for instance with [1,2,3]) needs to be carried out.
[1] Göltz, J., Kriener, L., Baumbach, A. et al. Fast and energy-efficient neuromorphic deep learning with first-spike times. Nat Mach Intell 3, 823–835 (2021).
[2] I. M. Comsa, K. Potempa, L. Versari, T. Fischbacher, A. Gesmundo and J. Alakuijala, "Temporal Coding in Spiking Neural Networks with Alpha Synaptic Function," ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 2020, pp. 8529-8533, doi: 10.1109/ICASSP40776.2020.9053856.
[3] Stanojević, Ana et al. “An Exact Mapping From ReLU Networks to Spiking Neural Networks.” Neural networks : the official journal of the International Neural Network Society 168 (2022): 74-88.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Please address all the weaknesses above.
2. This is an empirical study. Can you provide theoretical proof that the number of time steps or operations required to interpolate a given set of data points (i.e., fitting the training data) is significantly less compared to other methods, such as the Time to First Spike, when using Equation 10 as a metric?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: There is no potential negative societal impact and and one limitation related to the inclusion of silent period is noted in the main text.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your constructive and thoughtful comments. We are encouraged that you found our proposed coding scheme effective. We would like to address your concerns and answer your questions in the following.
> 1. Which model of a spiking neuron is being employed in equation 3 (line 120)? What is the reset mechanism here after the neuron fires? Are the weights allowed to have negative values? The description of the model is unclear.
For the first question: In *Eq. 3*, we aim to represent the process of stepwise amplification of the membrane potential. $z_{j}^{l}(t)$ is given by *Eq. 4*, and $S_{j}^{l}(t)$ is given by *Eq. 1* and *Eq. 2*. The spiking neuron in this equation can be simply understood as a variant of the basic IF neuron model, in which we use $\beta$ to represent the coefficient of the stepwise weighting.
For the second question: This neuron adopts a soft reset mechanism; that is, after firing, the membrane potential is reduced by an amount equal to the firing threshold.
For the third question: Yes, the weight can take a negative value.
We apologize for being unclear in these parts of our paper, and we will be more explicit on all of these in the full version.
> 2. The notion of residual error intuitively makes sense but it is confusing. Please define the residual error mathematically (line 139) for better understanding.
Thank you for your valuable suggestion. Let us first explain the residual error in more detail: After the reset mechanism, there is always some residual membrane potential in the neuron (unless the membrane potential is exactly at the threshold before firing). This value can be used to measure the quality of the encoded information: We assume the time step for neural computation is $T$. In the best-case scenario, the residual membrane potential $u^{l}_{j}(T)$ is $0$. This means the input has been perfectly encoded and transmitted to the next layer, with no residual error. However, due to stepwise weighting, the residual membrane potential can easily accumulate over time (see *Fig. 1(b)*). The phenomenon of residual error refers to a situation where the residual membrane potential is significantly large (greater than the threshold) after neural computation finishes.
The residual membrane potential $u_{j}^{l}(T)$ can be expressed by the following equation: $$u_{j}^{l}(T) = \sum_{\tau=1}^{T}\beta^{T-\tau}z_{j}^{l}(\tau)-\sum_{\tau=1}^{T}\beta^{T-\tau}S_{j}^{l}(\tau)$$ where $z_{j}^{l}(t)$ denotes the integrated input (see *Eq. 4*) and $S_{j}^{l}(t)$ denotes the output spike train (see *Eq. 1*). When $u_{j}^{l}(T)$ exceeds the firing threshold $V_{th}^{l}$, we refer to this as a residual error.
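The accumulation described by this equation can be illustrated with a toy simulation. This is a simplified sketch, not the paper's actual TSA implementation: it assumes a soft-reset IF neuron whose spike amplitude equals the threshold, ignores negative spikes and the silent period, and uses hypothetical values for $\beta$, the inputs, and the threshold.

```python
def residual_potential(z, beta, v_th):
    """Soft-reset IF neuron with stepwise weighting coefficient beta.

    At each step the membrane potential is scaled by beta, the input is
    integrated, and the threshold is subtracted on firing (soft reset).
    Returns the residual membrane potential u(T) after the last step.
    """
    u = 0.0
    for z_t in z:
        u = beta * u + z_t   # stepwise weighting + input integration
        if u >= v_th:
            u -= v_th        # soft reset: subtract the firing threshold
    return u

# Hypothetical inputs; with beta > 1 the unspiked remainder is amplified
# at every step, so u(T) easily ends above v_th -- the residual error.
u_T = residual_potential(z=[0.6, 0.9, 0.4, 0.7], beta=2.0, v_th=1.0)
print(u_T)
```

Running this with $\beta = 2$ leaves a residual potential well above the threshold, matching the phenomenon shown in *Fig. 1(b)*.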
> 3. Why ANN-(sws)SNN conversion is opted instead of directly training the SWS based SNN?
Our experiments are based on conversion rather than direct training because the contribution of this paper lies in the encoding scheme. If we were to use direct training, any performance advantages might be attributed to new training algorithms. In our future work, we will incorporate training to demonstrate the benefits that it can bring.
> 4. First, these works ([1,2,3]) need to be cited in the related work section. In my opinion, a detailed comparative analysis with other models and encoding schemes (for instance with [1,2,3]) needs to be carried out.
Thank you for the references, which we will include in the full version of our work. In our experiments, we have already compared accuracy, latency, and the number of operations with other encoding schemes. For instance, in *Tab. 2*, all SNNs using conversion strategies (except ours) are based on rate coding. It can be seen that most of them require very long time steps, and the conversion loss is not satisfactory. QCFS, COS, and Fast-SNN have shorter time steps because they modified the original ANN to compensate for the shortcomings of rate encoding. QCFS and COS both replace ReLU in the ANN with the QCFS function to facilitate conversion, and Fast-SNN quantizes the ANN to only 2 bits.
We will include *Ref. 3* you provided in *Tab. 2*, which achieves lossless conversion based on TTFS coding. However, *Ref. 3* does not explicitly provide the time steps used for their lossless conversion. Based on *Fig. 5* of that paper, we estimate that achieving lossless conversion for VGG-16 on CIFAR-10 requires approximately 150 time steps, far more than the SWS coding scheme requires. Nonetheless, the high efficiency of TTFS in terms of the number of operations is undeniable, as we have compared in *Sec 4.3*.
Additionally, in *Sec 4.2*, we compare the latency of SWS coding with rate coding and TSC coding. We did not find data on latency (measured in time steps) in papers based on TTFS encoding, so it is not included in this section.
In the full version, we will be more explicit about the encoding scheme used in each compared work.
> 5. Can you provide theoretical proof that the number of time steps or operations required to interpolate a given set of data points (i.e., fitting the training data) is significantly less compared to other methods, such as the Time to First Spike, when using Equation 10 as a metric?
We believe that theoretically proving that SWS requires fewer operations per frame is not feasible. In *Eq. 10*, $n^{l}(\tau)$ represents the number of spikes fired by neurons at a specific time $\tau$, which is too complex to quantify. Therefore, in our experiments, we obtained the OPF number through statistical methods.
However, it is feasible to make certain estimations based on *Eq. 10*. In *Sec. 3.4*, we analyzed the time steps required to encode the same range under SWS encoding and rate encoding, which are $T_{c}$ and $2^{T_{c}}$ respectively. Assuming that the neuron has a $50$% chance of firing a spike at each time step, then according to *Eq. 10*, the number of operations required for SWS encoding shows an exponential reduction compared to rate encoding.
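This estimation can be made concrete with a toy calculation. It is illustrative only: the 50% firing probability and the step counts are the assumptions stated above, not measured values.

```python
def expected_spikes(steps, fire_prob=0.5):
    # Expected spike count (a proxy for operations per neuron) when each
    # of `steps` time steps fires independently with probability fire_prob.
    return steps * fire_prob

# To encode the same value range, SWS coding needs T_c time steps while
# rate coding needs 2**T_c, so the expected operation count of rate
# coding grows exponentially relative to SWS.
for t_c in (4, 8):
    sws = expected_spikes(t_c)
    rate = expected_spikes(2 ** t_c)
    print(f"T_c={t_c}: SWS ~{sws:g} spikes vs rate ~{rate:g} spikes")
```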
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' responses, which have addressed most of my concerns. The proposal of a new encoding scheme is interesting; however, given that this is primarily an empirical study, further comparison with TTFS-based encoding schemes—both ANN-SNN conversion and direct training—is essential. Since TTFS encoding also emphasizes energy efficiency, low latency, and the number of operations based on optimal parameter training, a more thorough evaluation is necessary to truly assess the significance of the proposed approach. Based on this, I prefer to keep my rating unchanged. | Summary: The authors introduce a novel encoding method called Stepwise Weighted Spike (SWS) and a corresponding new neuron model named Ternary Self-Amplifying (TSA) for classification tasks utilizing the ANN2SNN training method. The proposed SWS encoding method assigns weights to the importance of spikes at each time step. The TSA neuron, which employs the SWS encoding method, features a lower threshold and includes a silent period.
Strengths: 1. Comprehensive method analysis: the authors conduct a thorough analysis of the Stepwise Weighted Spike (SWS) process, proposing a lower threshold and a silent period method to address residual error issues.
2. Superior Performance: the proposed method demonstrates superior performance in the field of ANN2SNN classification tasks.
Weaknesses: 1. Effectiveness of SWS: Various encoding methods, such as rate encoding and Time-to-First-Spike (TTFS) encoding, can be applied to different neurons and models. However, as illustrated in Figure 5, the SWS encoding method alone is ineffective without incorporating a lower threshold and a silent period. It only functions effectively when a neuron employs SWS encoding along with these additional components. Therefore, the paper should emphasize the neuron model rather than the encoding method, as it is not a universally applicable approach.
2. Lack of Experiments: The ablation study shows that the introduction of a silent period is the primary contributor to the improved performance. This raises doubts about the effectiveness of the SWS encoding method itself. Can the authors provide performance metrics for rate encoding combined with a lower threshold and silent period (if applicable) to ensure a fair comparison?
Technical Quality: 3
Clarity: 3
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable and constructive feedback. We are delighted that you found our analysis of the method comprehensive and the experimental results satisfactory. We would like to address your concerns and answer your questions in the following.
> 1. Effectiveness of SWS: Various encoding methods, such as rate encoding and Time-to-First-Spike (TTFS) encoding, can be applied to different neurons and models. However, as illustrated in Figure 5, the SWS encoding method alone is ineffective without incorporating a lower threshold and a silent period. It only functions effectively when a neuron employs SWS encoding along with these additional components. Therefore, the paper should emphasize the neuron model rather than the encoding method, as it is not a universally applicable approach.
In fact, many works proposing new encoding schemes require the cooperation of corresponding neuron models, as seen in [1-3] below. There are many types of neurons available, and not all of them are suitable for existing encoding methods. Since our coding scheme is novel and the meaning of each spike has changed, we need to design new neurons.
According to *Eq. 5*, the SWS encoding scheme allows preceding spikes to encode more information, demonstrating the effectiveness of this coding scheme to some extent. However, stepwise weighting can lead to an amplification of the residual membrane potential. The key to implementing SWS encoding is addressing the residual error. In this paper, we limit the value of the residual membrane potential by lowering the threshold and incorporating a silent period, resulting in the TSA neuron model. Therefore, our encoding scheme indeed requires specific neuron models to be effective, and our neurons are not interchangeable with those in other coding methods.
Thanks for this insightful comment. We should put more emphasis on the neurons themselves. We will reorganize the content in the full version.
[1] Rueckauer, B. and Liu, S.-C., "Conversion of analog to spiking neural networks using sparse temporal coding," IEEE ISCAS 2018, pp. 1-5, doi: 10.1109/ISCAS.2018.8351295.
[2] Han, B. and Roy, K., "Deep spiking neural network: Energy efficiency through time based coding," ECCV 2020, pp. 388-404.
[3] Park, S. et al., "Fast and efficient information transmission with burst spikes in deep spiking neural networks," DAC 2019.
> 2. Lack of Experiments: The ablation study shows that the introduction of a silent period is the primary contributor to the improved performance. This raises doubts about the effectiveness of the SWS encoding method itself. Can the authors provide performance metrics for rate encoding combined with a lower threshold and silent period (if applicable) to ensure a fair comparison?
Thanks for your valuable suggestion. Incorporating a silent period is a method to lower the residual membrane potential, which may also be effective for the basic IF model in rate coding. We have supplemented the experiment as you requested. The experiment was conducted on CIFAR10 using VGG-16, and the results are shown in the following table. $T_{s}$ is set to $32$ to ensure that it cannot be ignored compared to a large $T_{c}$. It can be seen that the performance improvement brought by the silent period is not significant. This is because there is no stepwise weighting procedure in rate coding, and therefore, the residual membrane potential is not large. It is worth noting that in some cases (e.g., $T_{c}=128$), lowering the threshold or adding a silent period even leads to a decrease in performance.
|Method|$V_{th}^{l}$|$T_{s}$|$T_{c}=64$|$T_{c}=128$|$T_{c}=256$|$T_{c}=512$|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|rate|$\theta^{l}$ |$0$ |$93.21$%|$95.33$%|$95.70$%|$95.80$%|
|rate|$\frac{1}{2}\theta^{l}$|$0$ |$90.64$%|$94.89$%|$95.66$%|$95.84$%|
|rate|$\theta^{l}$ |$32$ |$95.11$%|$95.17$%|$95.44$%|$95.70$%|
|rate|$\frac{1}{2}\theta^{l}$|$32$ |$94.98$%|$94.64$%|$95.19$%|$95.66$%|
We provide the accuracy of the original ANN and SWS-SNN ($T_{c}=8, T_{s}=1$) for comparison below. It can be seen that SWS-SNN brings improved accuracy and significant latency advantages (latency computed using *Eq. 6*). Note that the accuracy here differs from that in the paper because MaxPool was replaced with AvgPool in rate coding, necessitating the retraining of the ANN weights.
|ANN|SWS-SNN ($T_{c}=8, T_{s}=1$) |
|:---:|:---:|
|$95.91\%$|$95.90\%$|
---
Rebuttal 2:
Comment: Thank you for addressing my concerns; I will keep the rating. | null | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The authors introduce a new spike coding scheme, which allows them to directly convert quantized ANNs to their coding scheme. They demonstrate the effectiveness of their conversion on several pre-trained ANNs with minimal loss in performance at the cost of an increase in latency.
Strengths: - strong experimental results
- coding scheme appears to be novel
Weaknesses: - limited connection to spiking neurons, a more straightforward motivation would be a temporal encoding of quantized ANN
Technical Quality: 3
Clarity: 2
Questions for Authors: - could you compare your approach more explicitly to ref. 30?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: - method only applicable to conversion from pre-trained ANN
- no demonstration of training of a model using this coding scheme.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your valuable comments. We are encouraged that you found our proposed coding scheme to have strong experimental performance. We would like to address your concerns and answer your questions in the following.
> 1. Could you compare your approach more explicitly to *ref. 30*?
We first discuss the similarities between the two papers. In *Ref. 30*, the authors proposed using the weighted sum of spikes to approximate the activation value in an ANN, aiming to reduce the number of spikes after ANN-SNN conversion. In our paper, *Eq. 5* indicates that stepwise amplification of the membrane potential results in spikes with different weights at different time steps. This highlights a similarity between the two approaches.
When the generation time of a spike determines its weight, it becomes challenging for the neurons to fire both quickly and accurately. The problem arises due to the uncertainty of the input distribution in the temporal domain; neurons may receive a large amount of input after the moment for generating a high-weight pulse has passed. In *Ref. 30*, the authors addressed this issue by having neurons first receive all inputs and then generate spikes of different weights according to various thresholds. In our approach, we identified that the fundamental cause of this problem is the residual error resulting from membrane potential amplification. Consequently, our primary objective is to regulate the residual membrane potential. Initially, we attempted to lower the firing threshold and introduce negative pulses. However, this proved insufficient under certain extreme input conditions, leading us to implement a short silent period.
From the results of the two methods, *Ref. 30* is equivalent to setting a silent period equal to the length of the coding time steps. Assuming the number of coding steps is $T_c$, *Ref. 30* requires $2T_c$ time steps to process an image. In contrast, SWS-SNN can achieve good results in $T_c+1$ time steps.
From the perspective of motivation, the authors of *Ref. 30* directly aimed to reduce the number of spikes after ANN-SNN conversion. In contrast, we were inspired by the phenomenon of temporal information concentration and conceived the idea of amplifying the membrane potential at each time step. This led us to focus more on the behavior of spiking neurons, making it easier to identify residual errors and address the aforementioned issue more efficiently. In our opinion, our approach is more closely related to the spiking neurons.
> 2. Method only applicable to conversion from pre-trained ANN & No demonstration of training of a model using this coding scheme.
Thanks for your insightful comments. Our experiments are based on conversion rather than direct training because the contribution of this paper lies in the encoding scheme. If we were to use direct training, any performance advantages might be attributed to new training algorithms. In our future work, we will incorporate training to demonstrate the benefits that it can bring. | null | null | null | null | null | null |
Universal Physics Transformers: A Framework For Efficiently Scaling Neural Operators | Accept (poster) | Summary: The authors presented Universal Physics Transformers (UPTs), a framework for efficient learning and scaling neural operators. UPTs offer flexibility in handling various data types, whether grid-based or particle-based. UPTs compress data into a low-dimensional latent space and perform dynamics propagation within this space, resulting in fast simulations. Additionally, the latent representation allows evaluation at any point in space-time. UPTs demonstrated strong performance when tested on various fluid dynamics problems.
Strengths: - The authors developed a model based on linear attention that handles both mesh-based and particle-based data effectively.
- It leverages a low-dimensional latent space for efficient temporal dynamics propagation, enabling rapid evaluations.
- They conducted very interesting Navier-Stokes experiments.
- They demonstrated that UPT serves as a robust and efficient baseline.
- In scalability experiments, UPT showed excellent performance, scaling efficiently with input size and outperforming other baselines across all tested scales.
- They examined the model's convergence concerning the number of I/O points, obtaining good outcomes.
Weaknesses: - The framework is labeled as universal but is only tested on fluid dynamics, lacking experiments on other PDEs.
- Although the model scales well with input size, its scalability with model size appears poor based on the conducted experiment (Figure 5). While it outperforms other baselines, its scaling rate is significantly lower than that of models like GINO. Additional experiments are necessary to address this issue (if the UPT is meant to be used in large-scale applications).
- The scalability of UPT concerning training size remains unclear, which is an important property.
- The stability of the latent rollout technique for long rollouts is uncertain, raising concerns about its general reliability
Technical Quality: 3
Clarity: 3
Questions for Authors: - What do you think causes the performance to plateau with increasing model size as observed in Figure 5? Is it because the error is already sufficiently low, preventing further enhancement?
- Why do you believe that latent rollouts do not require stabilization?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors explained well the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your review and respond to the raised concerns below.
**Experiments with other PDEs**
UPTs are neural operators, a class of models known to be applicable across various PDE types. Due to their nonlinear nature, the NS equations are notoriously more challenging to solve than parabolic or hyperbolic PDEs, such as the heat or wave equations. Therefore, we hypothesize that UPTs should also generalize well to other PDEs.
Additionally, as requested by reviewer JKEE, we added comparisons of UPT with other methods on small-scale regular grid datasets. In total, UPT outperforms competitors on 5 diverse datasets that span over different dataset sizes (900 to 1M frames), resolutions (2K to 60K inputs), spatial dimensions (2D, 3D), simulation types (steady state, transient), different boundary conditions and specifications (Eulerian, Lagrangian).
**Scalability with model size**
We see now that Figure 5 could suggest that UPT does not scale well with parameter count. However, the scaling of GINO only looks good because GINO underperforms in contrast to UPT (GINO-68M is worse than UPT-8M). As you correctly identified, for well-trained (UPT) models it gets increasingly difficult to improve the loss further. Ideally, one would show this by training larger GINO models; however, this is *prohibitively* expensive (68M-parameter models already take 450 A100 hours per model). We therefore go in the other direction and train even smaller UPT models that achieve a similar loss to the GINO models and compare scaling there. In Figure 2 of the supplemental pdf, we compare UPT 1M/2M/8M against GINO 8M/17M/68M. UPT shows similar scaling on that loss scale.
We see how this could be easily misinterpreted from Figure 5 and adjust the paper accordingly to remove this misunderstanding.
We strongly hypothesize that the effect of larger UPT models would become apparent in even more challenging settings or on even larger datasets. However, challenging large-scale datasets are hard to come by, which is why we created one ourselves. Creating even larger and more complex ones is beyond the scope of our work as it exceeds our current resource budget, but it is definitely an interesting direction for future research/applications.
**Scalability with dataset size**
We added experiments to show the scalability and data efficiency of UPTs by training UPT-8M on subsets of the data used for the transient flow experiments (we train on 2K and 4K out of the 8K train simulations). The results in Figure 3 of the supplementary rebuttal pdf show that UPT scales well with data and is data efficient, achieving comparable results to GINO-8M with 4x less data.
We also show that UPTs can handle various dataset sizes. ShapeNet-Car is a small-scale dataset consisting of 889 car shapes with 3.6K mesh points each. TGV3D is a bit bigger with 8K particles per simulation and 200 simulations of length 61 (12K frames). The dataset for our transient flow experiments contains around 50K mesh cells per simulation with 10K simulations of length 100 (1M frames). UPT shows strong performance across all considered dataset sizes.
Additionally, UPT consists of mostly transformer blocks, which have demonstrated outstanding scalability in other domains such as language modeling [1] and computer vision [2].
[1] Kaplan et al., "Scaling laws for neural language models", arXiv 2020, https://arxiv.org/abs/2001.08361
[2] Zhai et al., "Scaling vision transformers", CVPR 2022, https://arxiv.org/abs/2106.04560
**Stability of latent rollout**
We found UPT to be fairly stable without any special techniques to stabilize the rollout (e.g. [3]). Such methods could further improve UPT's performance, but they are not specific to UPT and typically require additional computation during training. Therefore, we leave exploration of this combination to future work.
Additionally, the latent rollout opens new avenues to potentially apply existing stabilization tricks with less compute. For example, one could do the forward propagation for the stabilization technique from [4] in the latent space, which would greatly reduce its training cost. However, as such techniques can be delicate to train (e.g. due to requiring a precise trade-off between training the next-step prediction vs. the n-step prediction), we leave this direction to future work.
[3] Lippe et al., "Pde-refiner: Achieving accurate long rollouts with neural pde solvers", NeurIPS 2023, https://arxiv.org/abs/2308.05732
[4] Brandstetter et al., "Message passing neural pde solvers", ICLR 2022, https://arxiv.org/abs/2209.15616
---
Rebuttal Comment 1.1:
Comment: Thank you for your answers. I increased the rate to 7. Good luck! | Summary: This paper introduces Universal Physics Transformers (UPTs) to provide a unified learning paradigm for grid- or particle-based structures, enabling scalability across meshes and particles. UPTs mainly follow Encode-Process-Decode paradigm and allow queries at any space-time point through perceiver-like cross attention. To separate the responsibilities of individual components, the authors introduce inverse encoding and decoding losses in addition to the next-step prediction loss. Extensive experiments are conducted in diverse applications, including mesh-based fluid simulations and Lagrangian dynamics.
Strengths: UPTs provide a unified framework that can handle both grid- and particle-based structures, enhancing flexibility across different simulation types. UPTs reduce computational overhead by selecting and aggregating features on supernodes within mesh- and particle-based structures. The paper is well-written with a clear structure.
Weaknesses: - Regardless of the structure of unstructured mesh/structured mesh/point clouds, when processing with UPTs, each node is actually modeled as a token, and a transformer architecture is built on this basis. However, there have been several works [1, 2, 3] that utilize transformers or attention mechanisms to model PDE problems. The novelty of using UPTs is somewhat limited in this way.
- The paper misses comparisons to existing work that employs transformer and attention mechanisms for similar purposes. While [1, 2, 3] have designed methods to reduce the computation overhead of transformer or attention mechanisms, the UPTs approach uses a simple random sampling to aggregate supernode features to reduce computation overhead.
- While the latent rollout approach significantly accelerates inference speed, the experimental results do not show a clear improvement over the autoregressive unrolling via the physics domain. This raises concerns about the effectiveness and practicality of the latent rollout method.
[1] GNOT: A General Neural Operator Transformer for Operator Learning
[2] Transolver: A Fast Transformer Solver for PDEs on General Geometries
[3] Transformer for Partial Differential Equations' Operator Learning
Technical Quality: 4
Clarity: 3
Questions for Authors: - Given that the latent rollout approach does not provide a significant improvement in experimental results compared to autoregressive unrolling, can you elaborate on the potential reasons behind this? Specifically, do you believe that the lack of clear separation between the encoder, approximator, and decoder in this training framework contributes to this issue? Additionally, how effective do you find the inverse encoding and decoding processes? Considering the substantial training overhead and marginal performance gains, what are the key motivations for choosing the latent rollout method over other approaches? Are there any foreseeable optimizations or modifications that could enhance its efficacy?
- How dose DiT modulation influences UPTs' performance? Additionally, which features have you found to be most effective as conditions for DiT modulation?
- How do you specifically determine the number of supernodes in each region of mesh-structured data? After sampling nodes on the mesh, the original edge connections are lost. What impact does using a radius graph to replace the original mesh connectivity have on the results? Can you provide details on how this transformation affects the model's performance and accuracy?
- How is the radius for the radius graph determined when aggregating information at the selected supernodes via a message passing layer? Additionally, do the circles of different supernodes overlap?
- For each optimization step, are the supernodes fixed or do they change dynamically?
- It is mentioned that the edge limit is imposed on nodes in a radius graph to prevent memory fluctuations. Is the edge limit satisfied if the radius is smaller than the set value, or if the limit is exceeded, are edges randomly removed to comply with the constraint?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and helpful comments which were very useful to improve the paper. We address your points individually.
**Comparison to other transformer methods**
The fundamental building principles of UPT are (i) an encoding that is designed to handle irregular grids of various sizes, (ii) a compressed and unified latent space representation that allows for efficient forward propagation in the latent space, and (iii) a neural field-type decoder. We consider all of these points as fundamental for scaling neural operators.
GNOT processes each mesh point as a token and therefore scales quadratically with the number of mesh points.
Furthermore, its output can only be evaluated at the positions of the input.
Transolver's concept of "Physics-aware Tokens" is somewhat similar to UPT's supernodes.
However, only the attention substitute operates on a reduced space, whereas the FFN part of the transformer operates on the uncompressed space. Therefore, it is a type of linear-complexity transformer, which quickly becomes infeasible for larger input sizes (see Figure 2).
Similarly, OFormer also operates on the uncompressed input.
In contrast, UPT heavily compresses the input, leading to sub-linear complexity w.r.t. input size in all transformer layers.
Additionally, we added comparisons to transformer baselines on regular grid datasets (see general response) where UPT outperforms e.g. OFormer by quite a big margin.
**Benefits of the latent rollout**
While the latent rollout does not provide a significant performance improvement, it is almost an order of magnitude faster.
The UPT framework allows to trade-off training compute vs inference compute. If inference time is crucial for a given application, one can train UPT with the inverse encoding and decoding objectives, requiring more training compute but greatly speeding up inference. If inference time is not important, one can simply train UPT without the reconstruction objectives to reduce training costs.
Additionally, the latent rollout enables applicability to Lagrangian simulations. As UPT models the underlying field instead of tracking individual particle positions it does not have access to particle locations at inference time. Therefore, autoregressive rollouts are impossible since the encoder requires particle positions as input. Using the latent rollout, it is sufficient to know the initial particle positions as dynamics can be propagated without any spatial positions. After propagating the latent space forward in time, one can simply query the latent space at arbitrary positions to evaluate the underlying field at given positions. We showcase this in Figure 7 where the latent space is queried with regular grid coordinates (white arrows).
We discuss a potential improvement for the latent rollout to make it more efficient in Appendix A. While we show in the paper that a latent rollout can be enabled via a simple end-to-end training, we think that it can be improved, e.g. via a two stage procedure of first training encoder/decoder in an autoencoder setting, followed by freezing encoder/decoder and training the approximator on the fixed pre-trained latent space akin to [1]. Such an approach would not require inverse encoding/decoding objectives as the separation of components is enforced through the multi-stage training.
[1] Rombach et al., "High-resolution image synthesis with latent diffusion models", CVPR 2022, https://arxiv.org/abs/2112.10752
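The rollout variants discussed above differ mainly in where the time loop runs; a minimal sketch of the latent rollout, with all component and parameter names illustrative rather than the authors' actual interfaces, might look like:

```python
# Hedged sketch of a latent rollout: encode the initial state once,
# propagate dynamics purely in the latent space, and decode by querying
# arbitrary positions at each step. `encoder`, `approximator`, and
# `decoder` stand in for the UPT components; names are assumptions.
def latent_rollout(encoder, approximator, decoder, initial_state, query_points, n_steps):
    z = encoder(initial_state)  # compress inputs into the latent space (done once)
    outputs = []
    for _ in range(n_steps):
        z = approximator(z)  # one latent time step; no spatial positions needed
        outputs.append(decoder(z, query_points))  # evaluate the field at arbitrary points
    return outputs
```

This also makes the Lagrangian case above concrete: after the single encoding, no particle positions are needed to step forward in time, only to query the decoder.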
**DiT modulation**
Conditioning on external features is crucial to encode this external information into the model. We condition on the current timestep and, in the transient flow experiments, also on the inflow velocity.
Note that this is not specific to UPT, and we also apply conditioning to the compared models.
**Supernodes and radius graph creation**
We realize that our description in the paper is a bit misleading. Supernodes are, in the case of Eulerian data, sampled according to the underlying mesh and, in the case of Lagrangian data, sampled according to the particle density. A random sampling procedure which follows the mesh or particle density, respectively, allows us to put domain knowledge into the architecture. (We change "randomly sampled" to "sampled according to the underlying mesh / underlying particle density".)
Consequently, complex regions are accurately captured, as these regions will be assigned more supernodes than regions with a low-resolution mesh or few particles.
The sampling of supernodes is done anew for each optimization step, thus having a regularization effect.
The radius graph encodes a fixed region around each supernode. While one could use the original edge connections for Eulerian data, Lagrangian data does not have edges. Additionally, we employ randomly dropping input nodes as a form of data augmentation which makes using the original edge connections more complicated from an implementation standpoint. In contrast, a radius graph is agnostic to randomly dropping input nodes.
We choose the radius depending on the dataset. For each dataset, we first analyze the average degree of the radius graph with different radius values. We then choose the radius such that the degree is around 16, i.e. on average each supernode represents 16 inputs. We found our model to be fairly robust to the radius choice. Also, the circles of different supernodes can overlap. Therefore, the encoding of dense regions can be distributed among multiple supernodes.
We impose the edge limit of the radius graph by randomly dropping connections to preserve the average distance from supernode to input nodes.
Additionally, the radius graph facilitates Lagrangian settings where there is no predefined connectivity between particles.
We extended discussion in the corresponding sections and added a guide on how to choose these hyperparameters to the paper.
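The degree-based radius heuristic described above (choose the radius so each supernode covers roughly 16 inputs on average) could be sketched as follows; this is a brute-force illustration with hypothetical function names, not the authors' implementation:

```python
import numpy as np

def avg_degree(points: np.ndarray, supernodes: np.ndarray, radius: float) -> float:
    # Mean number of input points within `radius` of each supernode
    # (brute-force pairwise distances; fine for a small sketch).
    d = np.linalg.norm(points[None, :, :] - supernodes[:, None, :], axis=-1)
    return float((d <= radius).sum(axis=1).mean())

def choose_radius(points, supernodes, candidates, target: float = 16.0) -> float:
    # Pick the candidate radius whose average supernode degree is closest
    # to the target degree (~16 in the rebuttal's description).
    return min(candidates, key=lambda r: abs(avg_degree(points, supernodes, r) - target))
```

Overlapping circles fall out naturally here: a dense input point simply lies within the radius of several supernodes at once.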
---
Rebuttal Comment 1.1:
Title: Reply to rebuttal
Comment: Thank you for the clarifications and the additional experiments in the general response, which I believe strengthen the paper. I have raised my score. However, it would be better if the supernode pooling/radius graph construction could be included either in the main text or the supplementary for reference.
Strengths: 1. Paper is clearly written, and appendices help with self-completeness.
2. Proposed method is shown to be better than existing popular architectures for physics-based modeling.
3. Code is provided for reproducibility.
Weaknesses: 1. A bit more analysis and information could be included to improve completeness. See Questions.
2. Only test MSE has been used as an accuracy metric. Physics applications typically care about conservation of mass and performance near boundaries. However, the lack of further metrics is somewhat justified by the speedup and memory gains shown.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Authors should be commended on the discussion on memory complexity in Line 105 to 119. However, this could be summarized as a table for cleaner presentation. Can the authors also include time complexity information for a more transparent comparison of the trade-offs associated with the different architectures? (I know this has been somewhat benchmarked in Table 1, but it could be useful to have some more theoretical information)
2. What kind of position encoding is used for the UPT? This is unclear in the paper. Does the type of position encoding matter for Lagrangian vs Eulerian.
3. For a more comprehensive display of results, could the authors include accuracy metrics (e.g., mean error) and memory benefits to Table 1?
4. This has been briefly discussed in Appendix A. But for further clarification, is the present architecture approach limited to physics-related applications? Or are there opportunities for extending the present approach towards other autoregressive ML applications, such as video or language modeling?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Limitations are well-discussed in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your comments which helped to improve the paper. We addressed all your comments and followed all your suggestions.
**Memory and time complexity of different architectures**
We provide theoretical complexities in Appendix C.6 in Table 3. Note that when scaling only the input size towards infinity to calculate an asymptotic runtime/memory complexity, the runtime complexity is mostly the same as the memory complexity, as storing intermediate activations dominates the other factors. For example, transformers scale quadratically with the number of inputs M. For large M, a transformer needs to calculate the MxM attention matrix and also store it in memory. Therefore, it has $O(M^2)$ runtime and memory complexity.
For simplicity, we only consider the theoretical case and do not take optimizations such as hardware-aware implementations (e.g. [1]) into account that can trade-off additional computations for a reduced memory footprint.
As the theoretical complexities introduce many variables due to different architectures processing the input in vastly different ways, we believe that Figure 2 presents the practical complexities in a cleaner way and provide the theoretical complexities later in the appendix.
[1] Dao et al., "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness", NeurIPS 2022, https://arxiv.org/abs/2205.14135
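The asymptotic contrast above can be made concrete with a toy operation-count sketch; the latent size L and the split into one cross-attention plus latent self-attention are illustrative assumptions about a UPT-style design, not an exact cost model of any compared architecture:

```python
def full_attention_entries(M: int) -> int:
    # Full self-attention over M tokens: the M x M attention matrix
    # must be computed and stored -> quadratic in the input size.
    return M * M

def latent_attention_entries(M: int, L: int = 256) -> int:
    # Compressed design: one cross-attention from M inputs into L latent
    # tokens (M * L entries), then self-attention among the L latents
    # only (L * L entries) -> linear in M with a small fixed latent cost.
    return M * L + L * L
```

With M = 10,000 inputs and L = 256 latents, the full attention matrix has 10^8 entries while the compressed variant needs about 2.6 million, and only the M * L term grows as the input gets larger.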
**Positional encoding type**
We employ the same positional embedding approach used in the original transformer paper [2], applying it separately to each dimension, as is also common in vision transformers [3]. Namely, we employ a combination of sine and cosine functions with different frequencies to encode each position. The revised version of the paper offers a clearer explanation of this positional embedding.
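A minimal sketch of the standard sine/cosine embedding described above (the function name and per-dimension concatenation scheme are illustrative, following the formulation in [2]):

```python
import math

def sincos_embedding(position: float, dim: int) -> list[float]:
    """Sinusoidal embedding of one scalar coordinate, as in [2]:
    interleaved sin/cos at geometrically spaced frequencies. For
    multi-dimensional inputs this is applied separately per spatial
    dimension and the results are concatenated."""
    emb = []
    for i in range(0, dim, 2):
        freq = 1.0 / (10000.0 ** (i / dim))
        emb.append(math.sin(position * freq))
        emb.append(math.cos(position * freq))
    return emb[:dim]
```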
We also experimented with different types of positional embeddings, such as the Fourier feature mapping from [4], which uses frequencies randomly sampled from a Gaussian distribution. On the TGV2D task of our Lagrangian experiments, this yields a slight improvement for longer rollouts (see Figure 1 of the supplemental rebuttal pdf), which we hypothesize is because this method does not overly emphasize features aligned with the axes. However, as the improvements from other positional embeddings were minor, we stuck to the transformer positional embedding for simplicity.
[2] Vaswani et al., "Attention is all you need", NeurIPS 2017, https://arxiv.org/abs/1706.03762
[3] Dosovitskiy et al., "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale", ICLR 2021, https://arxiv.org/abs/2010.11929
[4] Tancik et al., "Fourier features let networks learn high frequency functions in low dimensional domains", NeurIPS 2020, https://arxiv.org/abs/2006.10739
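For reference, the Fourier feature mapping from [4] can be sketched as follows (our illustrative implementation, not the authors' code; names and defaults are assumptions):

```python
import math
import random

def fourier_features(coords, num_frequencies, sigma=1.0, seed=0):
    """Random Fourier feature mapping (Tancik et al. [4]):
    gamma(v) = [sin(2*pi*Bv), cos(2*pi*Bv)] with B_ij ~ N(0, sigma^2).
    Because B is an isotropic Gaussian, no axis-aligned direction is
    privileged, unlike per-dimension sinusoidal embeddings."""
    rng = random.Random(seed)
    dim = len(coords)
    B = [[rng.gauss(0.0, sigma) for _ in range(dim)]
         for _ in range(num_frequencies)]
    proj = [2.0 * math.pi * sum(b * v for b, v in zip(row, coords)) for row in B]
    return [math.sin(p) for p in proj] + [math.cos(p) for p in proj]
```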
**Additional accuracy metrics and memory benefits of Table 1**
We included memory benefits in Table 3 of Appendix C.6. However, we find it difficult to assign a representative accuracy metric to each architecture, as most methods are specialized for a certain type of task. For example, the accuracy of a CNN on a task with irregular data will naturally be worse than the accuracy of a GNN on that problem. UPT shows strong performance across various input domains and input scales, but as other methods are not as flexible, we find it difficult to quantify their performance with a single metric.
**Application in other domains**
Our method could also be applied to other fields, such as video modeling. In that context, it would be possible to train on a dataset with varying resolutions. Since the decoder's output is a point-wise neural field, it could generate videos at arbitrarily high resolutions.
However, in the context of language modeling, this approach is challenging to apply because language data is finite-dimensional.
---
Rebuttal Comment 1.1:
Comment: Questions have been addressed rigorously. Well done. Changing score from 6 -> 7. | Summary: The authors present a new framework for unifying PDE surrogate modeling across domains. The proposed encoder, approximator, and decoder structure can accommodate PDEs of different discretizations and simulation types. Solutions are approximated in latent space, which aids in scalability and reducing computational cost of high-dimensional PDE data.
Strengths: - The method for encoding/decoding to a common latent space is exciting and seems to provide a good balance between compute/accuracy.
- The experiments are done with challenging fluids problems, which provides a good evaluation of model capabilities.
- The authors make a good effort to run a variety of experiments and provide good justification for when certain experiments cannot be performed. Furthermore, it is evident that the authors spent a lot of time thoroughly evaluating different model choices/hyperparameters and their effects.
- The generalization study and latent space evaluation are convincing.
- The authors consider Lagrangian fluid simulation, which is not commonly done in PDE surrogate modeling, and being able to accommodate this underexplored domain is good.
- The authors provide a good background of the fluid data used and the clarity of the paper is good.
Weaknesses: - The Unet, FNO, and GINO benchmarks are good, however, they are not specifically tailored for efficient computation. There have been prior works that address latent space modeling or efficient PDE modeling of 3D data, namely Latent Spectral Models [1], Latent Evolution of PDEs (LE-PDE) [2], and FactFormer [3]. I would like to see some reasoning or experiments as to why your model is better.
- The proposed architecture seems similar to OFormer [4]. It follows a similar encoding/propagating/decoding scheme as well as uses the transformer architecture. There are good improvements to adjust to higher dimensional and diverse data (e.g., Perceiver, supernodes, decoder design), but the broad architectural claims are shared with this previously proposed model (e.g., using a latent space, training an encoder/decoder, relying on transformer models for scalability). There are also other transformer-based works that present scalable architectures (DPOT [5], MPP [6]). I understand that these models do not include mechanisms for high-dimensional or irregular data, but I am curious to see how your model compares to these transformer-based models on a regular domain.
- There is no justification given for why your model is a neural operator. I know that not every paper that proposes a neural operator justifies the operator approximation theorem, and there are good discretization invariance results, but I would like to see some reasoning for how your model satisfies operator approximation.
Overall, I think the paper is a good contribution and is a valuable step in extending neural operators to large, complex PDE problems. There is a lack of comparable benchmarks; however, this does not significantly detract from the experimental rigor and results.
1. Haixu Wu, Tengge Hu, Huakun Luo, Jianmin Wang, Mingsheng Long, Solving High-Dimensional PDEs with Latent Spectral Models. https://arxiv.org/abs/2301.12664
2. Tailin Wu, Takashi Maruyama, Jure Leskovec, Learning to Accelerate Partial Differential Equations via Latent Global Evolution, https://arxiv.org/abs/2206.07681
3. Zijie Li, Dule Shu, Amir Barati Farimani, Scalable Transformer for PDE Surrogate Modeling, https://arxiv.org/abs/2305.17560
4. Zijie Li, Kazem Meidani, Amir Barati Farimani, Transformer for Partial Differential Equations' Operator Learning, https://arxiv.org/abs/2205.13671
5. Zhongkai Hao, Chang Su, Songming Liu, Julius Berner, Chengyang Ying, Hang Su, Anima Anandkumar, Jian Song, Jun Zhu, DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training, https://arxiv.org/abs/2403.03542
6. Michael McCabe, Bruno Régaldo-Saint Blancard, Liam Holden Parker, Ruben Ohana, Miles Cranmer, Alberto Bietti, Michael Eickenberg, Siavash Golkar, Geraud Krawezik, Francois Lanusse, Mariel Pettee, Tiberiu Tesileanu, Kyunghyun Cho, Shirley Ho, Multiple Physics Pretraining for Physical Surrogate Models, https://arxiv.org/abs/2310.02994
Technical Quality: 3
Clarity: 4
Questions for Authors: - I’m curious about the general behavior of stability of error accumulation of your model during auto-regressive rollout. Are there specific training details that you use to mitigate this? Does using a latent space help mitigate this effect?
- What are your thoughts on using a latent space vs. directly working in physics space when it comes to the accuracy of the solutions? Latent unrolling is obviously faster during inference, but even in the physical domain many neural surrogates are still orders of magnitude faster than numerical solvers with the added benefit of being simpler and potentially more accurate than working in a latent space.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: - For common benchmark problems on a regular grid, I don’t see why this architecture would be preferred over a baseline or fine-tuning a pretrained model. Most physical problems use a mesh or irregular grid, but certain problems such as weather or isotropic turbulence come to mind.
- Fixing the latent space is presented as a method to standardize different physics to a common space. However, more complex physics problems (3D turbulence vs. 2D laminar flows) may conceivably require larger latent spaces and the model would need to be retrained for different latent sizes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and helpful comments which were very useful to improve the paper. We address your points individually.
**Why UPT is a better scalable latent space model**
Latent Spectral Models use geo-FNO (which can be considered a predecessor of GINO) to handle irregular grids. Therefore, it is somewhat similar to GINO in that it has to map its input onto a fixed regular grid representation. UPT does not map to a regular grid, which makes it more flexible/expressive.
LE-PDE only considers small-scale regular grid data, and its design requires train and test data to have the *exact* same resolution (as mentioned in Appendix C of their paper). Therefore, its application to irregular grid data is impractical, as much of the data has a different number of particles/mesh cells at test time. UPT can handle arbitrary input resolutions during training and generalizes well to higher input resolutions (as shown in Figure 5).
FactFormer does not compress its input into a smaller latent space and therefore does not scale to large input sizes. Their method proposes an efficient attention mechanism, but its compute requirement would be similar to that of linear transformers and therefore becomes infeasible on large systems (as shown in Figure 2).
**Comparison to transformer based models on regular grid**
We compare UPT on two regular grid benchmarks as shown in the general response and in the supplemental pdf Tables 1 to 3. UPT outperforms all compared methods (such as OFormer or DPOT) -- often by quite a margin -- without being specifically designed for regular grid datasets.
**Justification why UPTs are neural operators**
Thank you for bringing up this important point. To avoid overloading the notation, we will provide a brief sketch of how universality can be established for our architecture. For transformer-based neural operators, universal approximation has been recently demonstrated in [1], Section 5. The arguments in this work are heavily based on [2], which establishes that nonlinearity and nonlocality are crucial for universality. By demonstrating that the attention mechanism can, under appropriate weight choices, function as an averaging operator, the results from [2] are directly applicable. For detailed proof, refer to Theorem 22 in [1].
Our algorithm fits within this framework as well: we employ nonlinear, attention-based encoders and decoders (as allowed by the results of [2], Section 2) and utilize attention layers in the latent space.
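As an illustrative sketch in our own notation (not the authors' proof): with the query/key weights set to zero, the softmax weights become uniform and a single attention layer reduces to a nonlocal mean of the value-projected inputs, i.e., the averaging operator required by [2]:

```latex
% Softmax attention over M tokens; setting W_Q = W_K = 0 makes the
% softmax weights uniform, leaving a nonlocal mean of the values.
\mathrm{Attn}(u)(x_i)
  = \sum_{j=1}^{M}
    \frac{\exp\!\big(q_i^{\top} k_j / \sqrt{d}\big)}
         {\sum_{l=1}^{M} \exp\!\big(q_i^{\top} k_l / \sqrt{d}\big)}\,
    W_V\, u(x_j)
  \;\xrightarrow{\;W_Q = W_K = 0\;}\;
  \frac{1}{M} \sum_{j=1}^{M} W_V\, u(x_j).
```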
Please let us know if you require further explanations. We will provide detailed information in an updated version of the manuscript.
[1] Calvello et al., "Continuum Attention for Neural Operators", arXiv 2024, https://arxiv.org/abs/2406.06486
[2] Lanthaler et al., "The nonlocal neural operator: Universal approximation", arXiv 2023, https://arxiv.org/abs/2304.13221
**Rollout stability**
Our training procedure doesn't use any special methods to stabilize the rollout because this typically comes at increased training costs. However, one could easily apply such techniques to UPT.
As we found our models to produce fairly stable rollouts even without additional techniques, we leave this direction for future work.
**Thoughts on latent space vs physics space**
We see the latent rollout as a promising way to speedup inference, particularly for large-scale systems where encoding and decoding takes up most of the runtime.
UPT offers the flexibility to invest resources into training it as a latent rollout model (by using the inverse encoding/decoding objectives) or to save resources during training (by omitting the reconstruction objectives) at the cost of increased inference time. We consider this a valuable tool to have for neural operators.
Additionally, the latent rollout enables applicability to Lagrangian simulations. As UPT models the underlying field instead of tracking individual particle positions it does not have access to particle locations at inference time. Therefore, autoregressive rollouts are impossible since the encoder requires particle positions as input. Using the latent rollout, it is sufficient to encode the initial particle positions into the latent space, which can then be propagated without the knowledge of any spatial positions. After propagating the latent space forward in time, one can simply query the latent space at arbitrary positions to evaluate the underlying field at given positions. We showcase this in Figure 7 where the latent space is queried with regular grid coordinates (white arrows).
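The latent-rollout procedure described above can be sketched as follows (hypothetical interfaces: `encode`, `propagate`, and `query` stand in for the trained UPT encoder, latent approximator, and neural-field decoder):

```python
def latent_rollout(encode, propagate, query, positions_t0, query_points, num_steps):
    """Encode the initial particle positions once, step the latent state
    forward in time without access to any spatial positions, then evaluate
    the underlying field at arbitrary query points (e.g. a regular grid)."""
    z = encode(positions_t0)  # particle positions -> fixed-size latent state
    fields = []
    for _ in range(num_steps):
        z = propagate(z)                                    # purely latent time step
        fields.append([query(z, p) for p in query_points])  # field at queried points
    return fields
```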
**Limitations of non-regular gridded data**
We are interested in complex physics simulations which are often simulated using e.g., finite element meshes. Several phenomena are modeled by particle-based simulations such as smoothed particle hydrodynamics, material point methods, or discrete element methods. Many phenomena even require a coupling of different aforementioned simulation types. This is very much driven by daily life engineering applications, and as such was a main motivation for developing UPT.
**Limitations on fixed latent space**
In the current architecture of UPT, we use a fixed-size latent space, as it has proven an efficient way to compress the input and enables scaling to large-scale systems while remaining compute efficient.
However, if an application requires a variable sized latent space, one could also remove the perceiver pooling layer in the encoder. With this change the number of supernodes is equal to the number of latent tokens and complex problems could be tackled by a larger supernode count. While we currently do not consider a setting where this is necessary, we show that the performance of UPT steadily increases with the number of supernodes during training (Figure 9 in Appendix C.4.3).
Additionally, UPT's performance improves when more supernodes are used during evaluation than were utilized during training (Figure 8 in Appendix C.4.2).
---
Rebuttal Comment 1.1:
Comment: Thanks for your detailed reply, most of my concerns have been addressed and I've raised the score. Best wishes! | Rebuttal 1:
Rebuttal: We thank all reviewers for their positive feedback and for their constructive comments and suggestions.
We are pleased to see that the reviewers highlighted the clarity of our paper and appreciated our detailed and thorough experiments. Several reviewers recognized the efficiency of our approach of encoding and decoding into a latent space, as well as UPT's ability to handle both Lagrangian and Eulerian data.
To address the questions of the reviewers, we included the following new experiments. Please find visualizations thereof in the supplemental rebuttal pdf.
**Experiments on a regular grid Navier-Stokes equations dataset**
We run comparisons against different Transformer baselines on regular gridded Navier-Stokes equations data (data, baseline results and evaluation protocol taken from [1]). UPT outperforms all compared methods, some of which are specifically designed for regularly gridded data.
As baseline transformers often train small models, we first compare on a small scale, where UPT significantly outperforms other models.
| Model | # Params | Rel. L2 Error |
|---|---|---|
| FNO | 0.5M | 9.12 \% |
| FFNO | 1.3M | 8.39 \% |
| GK-T | 1.6M | 9.52 \% |
| GNOT | 1.8M | 17.20 \% |
| OFormer | 1.9M | 13.50 \% |
| UPT-T | 1.8M | **5.08** \% |
We also compare on larger scales, where UPT again outperforms competitors, even if they train much larger models or pre-train (PT) on more data followed by fine-tuning (FT) on the Navier Stokes dataset.
| Model | # Params | Rel. L2 Error |
|---|---|---|
| DPOT-Ti | 7M | 12.50 \% |
| DPOT-S | 30M | 9.91 \% |
| DPOT-L (PT) | 500M | 7.98 \% |
| DPOT-L (FT) | 500M | 2.78 \% |
| DPOT-H (PT) | 1.03B | 3.79 \% |
| CViT-S | 13M | 3.75 \% |
| CViT-B | 30M | 3.18 \% |
| UPT-S | 13M | 3.12 \% |
| UPT-B | 30M | **2.69** \% |
[1] Hao et al., "DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training", ICML 2024, https://arxiv.org/abs/2403.03542
**Experiments on a regular grid Shallow Water equations dataset**
We run comparisons against UNet, FNO, and Dilated ResNet variants on regular gridded Shallow Water equations data (data, baseline results and evaluation protocol taken from [2]). Similarly to the Navier-Stokes experiments, UPT can outperform methods which are specifically designed for regularly gridded data.
| Model | # Params | Rel. L2 Error |
|---|---|---|
| DilResNet | 4.2M | 13.20 \% |
| U-Net | 148M | 5.68 \% |
| FNO | 268M | 3.97 \% |
| CViT-S | 13M | 4.47 \% |
| UPT-S | 13M | **3.96** \% |
[2] Wang et al., "Bridging Operator Learning and Neural Fields: A Unifying Perspective", arXiv 2024, https://arxiv.org/abs/2405.13998
**Impact of different positional encodings**
We added ablation results with different positional encodings.
Using the Fourier feature mapping from [3] resulted in slightly better performance in the TGV2D experiments. We visualize this in Figure 1 of the supplemental rebuttal pdf.
[3] Tancik et al., "Fourier features let networks learn high frequency functions in low dimensional domains", NeurIPS 2020, https://arxiv.org/abs/2006.10739
**UPT scaling clarifications**
We added experiments to clarify the scaling of UPTs in the transient flow experiments by training small UPT models that perform similarly to larger GINO models. We visualize the results in Figure 2 of the supplemental rebuttal pdf, which shows that UPT scales well when increasing parameter counts. This experiment should clarify that UPTs do scale well with parameter count, but as the test loss of UPT (in Figure 5 of the paper) is significantly lower than that of GINO, it is much harder to improve further.
**Scaling with dataset size**
We investigate scaling behavior of UPT w.r.t. dataset size by training a UPT-8M model on smaller subsets in the setting of our transient flow experiments. Figure 3 in the supplementary rebuttal pdf visualizes the scaling curves. UPT scales well with data and achieves comparable results to GINO-8M with 4x less data.
**Further additions to the paper**
We followed the reviewers' comments and suggestions, e.g., by including pseudocode of a UPT forward pass, detailed paragraphs on positional encoding and conditioning mechanisms, implementation details of the supernode pooling/radius graph, and additional related works.
In addition, we discussed the raised points in the Discussion section and added clarifications, contextualization, and references.
Pdf: /pdf/bd5dd5f3292cc1fc57c523d99c058e506922d2f5.pdf | NeurIPS_2024_submissions_huggingface | 2024 | Summary: In this paper, a framework for efficiently scaling neural operators is introduced under the name of Universal Physics Transformers (UPTs). It is a novel paradigm that scales neural operators across diverse spatio-temporal problems without relying on specific grid or particle-based structures. Leveraging transformer architectures, UPTs propagate dynamics within a compressed latent space, enabling efficient and flexible simulations.
Strengths: 1/ Originality:
The paper presents an original contribution to the field of neural operators by introducing Universal Physics Transformers (UPTs). This novel paradigm removes the traditional reliance on grid or particle-based latent structures, enabling greater flexibility and scalability across various simulation types. The innovative use of transformer architectures to propagate dynamics within a compressed latent space is a quite original combination of existing ideas applied to a new domain.
2/ Quality:
The quality of the overall contribution is good. The authors address a critical challenge in scaling neural operators for complex simulations and provide a robust framework with practical applications. The UPT framework is well-formulated, and the encoding and decoding schemes are designed to ensure efficient simulation rollouts. The experiments are comprehensive, covering multiple types of simulations, and the results demonstrate the superiority of UPTs in terms of performance and scalability.
Moreover, the paper's methodology is sound, and the technical claims are well-supported by evidence from the experiments.
3/ Clarity:
The paper is well-written and clearly presented. The authors provide a thorough background.
4/ Significance:
The significance of the paper lies in its potential impact on the broader scientific and engineering communities. By providing a unified and scalable framework for neural operators, UPTs can be applied to a wide range of spatio-temporal problems, including those in fluid dynamics. The ability to efficiently handle large-scale simulations can lead to significant advancements in these fields, offering valuable insights and solutions to complex physical phenomena.
Moreover, the experimental design is robust, with appropriate datasets and baselines used for comparison. The methodology is clearly described, allowing for reproducibility of the results. The use of inverse encoding and decoding techniques to enable latent rollouts is particularly noteworthy, as it demonstrates a deep understanding of the underlying principles and challenges in neural operator learning.
5/ Latent space and scalability:
The Universal Physics Transformers (UPTs) framework is efficient since it yields a compact yet expressive latent-space representation, capturing the essential dynamics of physical systems while significantly reducing memory usage and computational overhead. This compactness, combined with efficient encoding and decoding schemes, ensures that the transformation to and from the latent space is both efficient and accurate. By unifying the encoding of various grids and particles, UPTs simplify the modeling process and improve generalization across different spatio-temporal problems.
The empirical validation provided in the paper, through steady-state and transient flow simulations, demonstrates that UPTs achieve lower mean-squared error (MSE) and faster computation times compared to other models, highlighting the effectiveness of the latent space representation.
In terms of scalability, UPTs are designed to handle large-scale simulations effectively. The fixed-size latent space, regardless of input size, allows UPTs to manage large inputs without a proportional increase in computational cost. This scalability is evident in the framework's ability to maintain performance and efficiency with large meshes and numerous particles. The efficient latent space rollouts further enhance scalability by enabling quick updates and predictions, making UPTs suitable for time-dependent simulations. Additionally, the transformer architecture provides a robust foundation for scalability, ensuring that UPTs can handle complex spatio-temporal simulations efficiently.
6/ Lagrangian dynamic modeling:
The Universal Physics Transformers (UPTs) framework presents several notable strengths when viewed from the perspective of Lagrangian dynamic modeling. One of the primary advantages is the framework's ability to model particle-based simulations effectively without relying on traditional particle structures. In Lagrangian simulations, particles move with the local deformation of the continuum, and modeling these dynamics accurately requires handling a large number of particles and their interactions.
UPTs use a unified latent space to encode particle information, enabling the framework to handle varying numbers of particles flexibly. This flexibility is particularly beneficial for Lagrangian methods, such as Smoothed Particle Hydrodynamics (SPH), where particle counts can vary significantly based on the simulation's complexity. By compressing this information into a fixed-size latent space, UPTs manage to reduce computational overhead while maintaining the ability to capture intricate particle interactions.
The efficient encoding and decoding processes in UPTs ensure that particle dynamics are propagated accurately and swiftly within the latent space. This efficiency is crucial for large-scale Lagrangian simulations, where the computational cost can be prohibitive with traditional methods. The ability to perform latent rollouts enables UPTs to predict future states quickly, making the framework suitable for real-time or near-real-time applications in Lagrangian dynamic modeling.
The paper provides strong empirical evidence of UPTs' effectiveness in Lagrangian dynamic modeling through experiments on datasets such as the Taylor-Green vortex in three dimensions (TGV3D). The results demonstrate that UPTs can effectively learn the underlying field dynamics and predict particle velocities with lower error compared to traditional Graph Neural Networks (GNNs) and other baselines. This empirical validation underscores the practical applicability of UPTs in complex Lagrangian simulations.
UPTs exhibit strong generalization capabilities, which are critical for Lagrangian dynamic modeling. The ability to query the latent representation at any point in space-time allows UPTs to adapt to different particle distributions and simulation conditions without extensive retraining. This robustness ensures that UPTs can handle a wide range of Lagrangian problems, from small-scale particle systems to large-scale simulations involving a representative number of particles.
The use of transformers to manage the latent space representation in UPTs is a significant technical innovation. Transformers are known for their efficiency in handling large datasets and sequences, and their application in UPTs leverages this strength to manage the complexities of Lagrangian dynamics. This choice of architecture allows UPTs to model particle interactions accurately while maintaining computational efficiency.
Weaknesses: 1/ Clarity and detail in methodology:
The methodology, particularly the detailed implementation of the encoding and decoding schemes, could be more clearly articulated. Some parts of the algorithms are difficult to follow. Providing additional diagrams, detailed explanations, or pseudo-code would help clarify these complex processes. Ensuring that each step of the process is well-explained and easily understandable would make the paper more accessible and replicable.
2/ Scalability with extremely large datasets:
While the paper demonstrates UPTs' scalability, there is limited discussion on how the framework performs with extremely large datasets or in distributed computing environments. Given the increasing size of datasets in fields like high-resolution climate modeling and genomics, a discussion or preliminary results on the scalability of UPTs in such contexts would be valuable. This could include potential challenges, proposed solutions, and the impact on computational resources.
3/ Insufficient analysis of generalization capabilities:
While UPTs are shown to generalize across different simulation types, the paper lacks a deep analysis of this capability. Neural operator networks are often praised for their ability to generalize across different boundary conditions and initial states. A more thorough examination of UPTs' generalization performance, especially in unseen or out-of-distribution scenarios, would strengthen the claims. Detailed experiments and discussions on how UPTs perform under varied conditions would be valuable.
4/ Potential overfitting concerns:
Given the complexity and high capacity of transformer-based models, there is a risk of overfitting, especially on smaller datasets. The paper briefly mentions overfitting issues in some experiments but does not provide a detailed strategy for mitigating this. More discussion on regularization techniques, data augmentation strategies, or how UPTs handle overfitting would be beneficial. Insights into how the model can be generalized better and made more robust against overfitting are crucial for practical applications.
5/ Analysis from particle and grid based methods:
The Universal Physics Transformers (UPTs) framework offers a novel approach, but there are specific areas where it could be improved, particularly when compared to traditional particle and grid-based methods.
A/ Lack of detailed comparison with established methods:
The paper does not provide a thorough comparison with well-established particle-based methods (such as Smoothed Particle Hydrodynamics, SPH) and grid-based methods (such as Finite Volume Methods, FVM). Including detailed performance metrics, such as accuracy, computational cost, and scalability, would help highlight UPTs' advantages and areas needing improvement. Specifically, comparative studies on benchmark problems typically addressed by SPH (except Figure 12) and FVM would provide more context.
B/ Handling of boundary conditions:
Particle and grid-based methods have well-established techniques for handling complex boundary conditions, which are often crucial in physical simulations. The paper does not sufficiently address how UPTs manage complex boundary conditions compared to these traditional methods. A more detailed discussion or experiments showcasing UPTs' effectiveness in dealing with various boundary conditions would strengthen the paper's claims.
C/ Adaptation to high-resolution grids and large particle systems:
While UPTs are shown to be scalable, the paper lacks detailed insights into their performance on extremely high-resolution grids or very large particle systems. Traditional methods often excel in these areas due to their specialized structures and optimizations. Providing more evidence on how UPTs handle such scenarios, including any potential bottlenecks and solutions, would be beneficial.
D/ Numerical stability and accuracy:
Grid-based methods, such as FVM, are known for their numerical stability and accuracy, especially in simulating fluid dynamics. The paper does not provide an in-depth analysis of UPTs' numerical stability and accuracy compared to these methods. Detailed experiments and discussions on how UPTs ensure stability and accuracy over long simulation times would enhance the paper’s credibility.
E/ Computational efficiency in complex geometries:
Particle and grid-based methods have specific strategies to efficiently handle complex geometries, such as adaptive meshing in grid-based methods or kernel adjustments in particle methods. The paper does not clearly explain how UPTs manage complex geometries and whether they can maintain computational efficiency in such scenarios. More detailed experiments or case studies involving complex geometrical domains would be informative.
F/ Interoperability with existing simulation tools:
Traditional methods are often integrated into comprehensive simulation tools (e.g., OpenFOAM for grid-based methods or LAMMPS for particle-based methods). The paper does not discuss how UPTs can be integrated or used alongside these existing tools, which is crucial for practical adoption. Providing insights into interoperability and how UPTs can complement or enhance traditional methods within established simulation frameworks would be useful.
G/ Handling of multiscale phenomena:
Particle and grid-based methods have developed sophisticated techniques to handle multiscale phenomena, such as adaptive mesh refinement (AMR) in grid-based methods. The paper lacks a discussion on how UPTs address multiscale phenomena, which are common in many physical simulations. Including experiments or theoretical discussions on UPTs' capabilities in multiscale modeling would be beneficial.
6/ Model conditioning standpoint:
Model conditioning is crucial for ensuring that neural networks adapt accurately to varying inputs and simulation conditions. The Universal Physics Transformers (UPTs) framework has room for improvement in this aspect. Here are specific areas where the paper could enhance its discussion and implementation of model conditioning.
A/ Insufficient detail on conditioning mechanisms:
The paper briefly mentions the use of feature modulation (e.g., DiT modulation) for conditioning UPTs to various inputs, such as the current timestep and boundary conditions. However, it lacks a detailed explanation of how these conditioning mechanisms are implemented and their impact on model performance. Providing more comprehensive descriptions and theoretical justifications for the chosen conditioning methods would help readers understand their effectiveness and potential limitations.
B/ Limited analysis of conditioning performance:
While the paper demonstrates that UPTs can handle different flow regimes and domains, it does not provide a thorough analysis of how well the conditioning mechanisms work across a broader range of scenarios. Including detailed experiments that specifically evaluate the performance of UPTs under different conditioning inputs, such as varying boundary conditions, initial states, and external forces, would strengthen the paper's claims.
C/ Scalability of conditioning methods:
The scalability of the conditioning mechanisms is not thoroughly discussed. As simulations grow in complexity, the effectiveness of conditioning methods can degrade if not properly scaled. The paper should address how the conditioning mechanisms scale with increasing input dimensions and simulation complexities, providing insights into any potential bottlenecks and how they are mitigated.
D/ Generalization to unseen conditions:
One of the strengths of neural operator networks is their ability to generalize to unseen conditions. The paper does not sufficiently explore how well UPTs generalize to entirely new boundary conditions or physical scenarios that were not present in the training data. Including experiments that test the generalization capabilities of UPTs to unseen conditions would provide valuable insights into the robustness of the conditioning methods.
E/ Comparison with other conditioning approaches:
The paper does not compare its conditioning mechanisms with other advanced conditioning techniques used in neural operator networks or related fields. A comparative analysis of different conditioning approaches, such as those used in Fourier Neural Operators and DeepONets, would highlight the strengths and weaknesses of the methods employed in UPTs.
F/ Impact of conditioning on training stability:
Conditioning mechanisms can significantly impact the training stability of neural networks. The paper does not discuss how the chosen conditioning methods affect the stability of UPT training, especially in the presence of complex and noisy data. Analyzing and addressing potential stability issues arising from conditioning would be beneficial.
G/ Real-world application scenarios:
While the paper presents conditioning in the context of fluid dynamics simulations, it does not discuss its applicability to other real-world scenarios that require complex conditioning. Exploring and providing evidence of UPTs' effectiveness in diverse applications, such as climate modeling or structural analysis, where conditioning to various external factors is crucial, would enhance the practical relevance of the framework.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1/ The paper briefly mentions feature modulation for model conditioning but lacks detailed discussion on handling complex boundary conditions. Providing specific examples or additional experiments that illustrate UPTs' effectiveness in managing complex boundary conditions would strengthen the paper’s claims. Addressing this can clarify the practical applicability of UPTs in real-world scenarios where boundary conditions play a crucial role.
How does the UPT framework handle complex boundary conditions compared to traditional particle and grid-based methods?
2/ While UPTs are presented as efficient and scalable, the paper does not delve into the specifics of their numerical stability and accuracy compared to traditional methods. Discussing strategies to maintain stability and accuracy over extended simulations and providing relevant experimental evidence would address potential concerns about the reliability of UPTs in long-term applications.
What measures are in place to ensure the numerical stability and accuracy of UPTs over long simulation periods?
3/ The paper demonstrates scalability and generalization within certain limits but does not extensively explore these aspects in more demanding scenarios. Providing additional experimental results or theoretical insights on UPTs’ performance in high-resolution and large-scale environments, as well as their ability to generalize to entirely new conditions, would significantly bolster the paper’s contribution and practical relevance.
What are the scalability and generalization capabilities of UPTs when applied to extremely high-resolution grids, large particle systems, and unseen physical scenarios?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and suggestions, which helped us improve our paper considerably. We addressed all your comments and followed all your suggestions. Please let us expand a bit on each of them.
**Clarity and detail in methodology**
In order to make the paper more accessible, we -- as suggested -- added pseudocode of a UPT forward pass with adaptive sampling. Together with the sketches in Figures 3a and 3b and the encoder/approximator/decoder description in Section 3, this should make the paper understandable with respect to all necessary details.
**Scalability to extremely large datasets / high-resolution grids**
We agree that scalability to extremely large datasets is a desirable property. In the paper, we have done our best to test this. Our models are trained using distributed training on up to 32 A100 GPUs. We have self-simulated a dedicated dataset that is larger (> 50000 mesh points) and more complex (different flow regimes, different numbers of differently placed obstacles) than standard grid-based datasets. As requested by several reviewers, we have added experiments on the Shallow Water dataset and the Navier-Stokes dataset to the paper, which demonstrate the diverse applicability of UPT. However, even larger simulations than our transient flow simulations are beyond the scope of this paper and the available compute budget. Nevertheless, we have tested the GPU usage of several model classes (Figure 2), which demonstrates that our framework is able to process meshes / particle clouds with up to 4 million nodes on a single GPU.
**Insufficient analysis of generalization capabilities**
We agree that generalization capabilities are one of the most important aspects to test for neural operators. Therefore, in the transient flow experiment -- our largest experiment -- we test generalization across the number of input / output points (discretization convergence), across different flow regimes (input velocity), and across different scenarios (differently placed obstacles).
**Potential overfitting concerns**
We observe that the variable selection of the supernodes and the variable selection of input/output nodes provides strong regularization and data augmentation. The general training pipeline of UPT is therefore very robust against overfitting. This is further supported by the additional Navier-Stokes and Shallow Water experiments.
**Lack of detailed comparison with established numerical methods**
We are not claiming to be better than numerical methods. We compare with neural methods and especially focus on scaling. We follow the general procedures of the community and report MSE, correlation-time measures, and runtime comparisons. The latter two give an idea of the potential of UPT when compared with numerical methods. Similar to, e.g., weather modeling (see Aurora [2], Pangu [3]), we follow the belief that the benefits of neural operators will become more pronounced at scale.
**Handling of boundary conditions**
The ShapeNet car experiment allows for testing neural operators on complicated geometries. UPT performs favorably, even when using a strongly reduced latent space representation. Additionally, in the transient flow example we have varying numbers of differently placed obstacles which can also be seen as complicated in-domain boundaries. In fact, UPT is specifically designed for large data on non-regular domains including different boundary conditions.
**Model conditioning**
We have added a more detailed paragraph on the model conditioning choices in this paper. In our paper we follow closely the detailed studies described in [1].
**Real world applications**
Real world applications are beyond the scope of this work, but a very strong motivation for developing our framework. We note that current weather modeling such as Aurora [2] or Pangu [3] follow a similar design principle, but for regular gridded data, and without discretization convergence properties.
**Question on complex boundary conditions**
Transformers offer a flexible way to encode various boundary conditions.
Scalar boundary conditions, such as the inflow velocity or the current timestep, can be encoded via feature modulation. We use DiT modulation as it has shown strong performance in transformers.
Additionally, transformers offer a flexible mechanism to encode additional information by encoding the information into tokens and concatenating them to the supernodes or latent tokens. We make use of this flexible encoding in the ShapeNet-Car experiments where we additionally encode the signed distance function evaluated on a 3D grid via a CNN and feed the resulting tokens to the approximator.
**Question on numerical stability and accuracy**
Traditional methods require extremely small timesteps in order to preserve numerical stability when resolving the physics.
As neural operators only approximate the solution of the traditional solver, they can operate on much larger timesteps, as their powerful modeling capabilities enable them to learn dynamics even from a coarse time resolution.
Furthermore, UPT rollouts are fairly stable without any specific measures to ensure rollout stability. Sophisticated techniques to improve rollout stability (e.g., [4]) could easily be applied to UPTs to enhance it further. However, as these techniques typically impose a runtime overhead, we leave their exploration to future work.
[1] Gupta et al., "Towards Multi-spatiotemporal-scale Generalized PDE Modeling", arXiv 2022, https://arxiv.org/abs/2209.15616
[2] Bodnar et al., "Aurora: A foundation model of the atmosphere", arXiv 2024, https://arxiv.org/abs/2405.13063
[3] Bi et al., "Accurate medium-range global weather forecasting with 3D neural networks", Nature 2023, https://www.nature.com/articles/s41586-023-06185-3
[4] Brandstetter et al., "Message Passing Neural PDE Solvers", ICLR 2022, https://arxiv.org/abs/2202.03376
---
Rebuttal Comment 1.1:
Title: Summary of the revision and follow-up
Comment: Thank you for addressing the feedback provided in my initial review. Here is a summary of the revisions and responses you have made, along with some additional feedback and observations.
**Scalability to extremely large datasets / high-resolution grids**
I commend your efforts to demonstrate scalability within the constraints of your computational resources. The additional experiments on the Shallow Water and Navier-Stokes datasets effectively illustrate your framework's ability to handle large datasets. Your discussion on using distributed training across multiple GPUs highlights the potential of your approach in handling complex simulations.
**Generalization capabilities**
The expanded tests on generalization, including across different flow regimes and discretizations, strengthen the evidence for your framework's robustness. This comprehensive analysis is a significant enhancement and effectively addresses previous concerns about the adaptability of your model.
**Handling of boundary conditions**
The examples provided on handling complex boundary conditions through experiments like the ShapeNet car and transient flow simulations highlight the flexibility and adaptability of UPTs in various scenarios. This aspect of your work is well-explained and a notable strength of your framework.
---
Rebuttal Comment 1.2:
Title: Suggestions
Comment: 1. While real-world applications are beyond the current scope, discussing potential use cases or hypothetical scenarios could highlight the practical impact of UPTs.
2. Consider exploring how UPTs might be integrated with traditional particle or grid-based methods in practice. Discussing potential hybrid approaches could broaden the applicability of your work.
3. Discussing specific measures or techniques to enhance long-term stability and robustness in simulations, particularly with noisy data, could strengthen the paper's claims.
4. As hardware capabilities evolve, exploring how UPTs can leverage advancements in GPU and TPU technologies could provide insights into the future potential of your framework.
---
Rebuttal Comment 1.3:
Title: Follow-up questions
Comment: 1. In your response, you mentioned enhancing the model conditioning with techniques such as feature modulation, but I could not find the additional details in the paper. Could you elaborate on how the DiT modulation and any other conditioning mechanisms are implemented in the UPT framework, and how these techniques specifically improve model adaptability to varying boundary conditions and simulation inputs?
2. You highlighted the use of transformers to manage complex boundary conditions, particularly in experiments like the ShapeNet car. Could you provide more details on how the transformers are configured to encode these boundary conditions effectively? Are there specific architectural adjustments or tokenization strategies employed to ensure accurate representation and processing of boundary conditions?
3. In the context of your framework's numerical stability, you mentioned that UPT rollouts are stable without specific measures. Could you clarify how the UPT framework maintains stability and accuracy over extended simulations, especially when compared to traditional numerical solvers that require small timesteps for stability? Are there specific aspects of the transformer architecture that contribute to this stability?
4. While you provided insights into UPT's scalability with large datasets and distributed training, can you discuss any potential limitations or bottlenecks encountered when scaling the model further? How does the framework handle challenges such as increased computational demands or memory usage as the complexity of the simulations and datasets grows?
---
Rebuttal Comment 1.4:
Title: Missing pseudo-code and extra details on model conditioning
Comment: I am unable to find the pseudo-code and the additional details on model conditioning that are mentioned in your response.
---
Reply to Comment 1.4.1:
Title: Pseudo-code for training UPT
Comment: Due to space constraints in the rebuttal, we were not able to include it there (the rebuttal also did not allow updating the submitted paper). Please find below pseudocode for UPT in the setting of our transient flow experiments (2D positions with 3 features per node).
```
# input_embed: linear projection from the number of input features to the hidden dimension
# pos_embed: positional embedding (with sine and cosine waves of different frequencies) as common in transformers
# message_mlp: shallow MLP to create messages
# encoder_transformer: stack of transformer blocks of the encoder
# latent_queries: `n_latent_tokens` learnable query vectors
# encoder_perceiver: cross attention block
# approximator_transformer: stack of transformer blocks of the approximator
# query_mlp: shallow MLP in the decoder to encode query positions
# decoder_perceiver: cross attention block

def encoder(input_features, input_pos, radius, n_supernodes):
    """
    encode arbitrary point clouds into a fixed latent space
    inputs:
        input_features `Tensor(n_input_nodes, 3)`: features of input nodes
        input_pos `Tensor(n_input_nodes, 2)`: positions of input nodes
        radius `float`: radius for creating the radius graph
        n_supernodes `integer`: number of supernodes
    outputs:
        latent `Tensor(n_latent_tokens, hidden_dim)`: encoded latent space
    """
    # create radius graph (using all input nodes);
    # edges are uni-directional and are passed from nodes_from to nodes_to
    nodes_from, nodes_to = radius_graph(input_pos, radius)
    # select supernodes from the input nodes
    n_input_nodes = len(input_features)
    supernode_idx = randperm(n_input_nodes)[:n_supernodes]
    # filter out edges that do not involve supernodes
    is_supernode_edge = isin(nodes_to, supernode_idx)
    nodes_from = nodes_from[is_supernode_edge]
    nodes_to = nodes_to[is_supernode_edge]
    # encode input features and positions
    encoded_nodes = input_embed(input_features) + pos_embed(input_pos)
    # create messages
    messages = message_mlp(encoded_nodes[nodes_from])
    # accumulate messages per supernode by averaging
    supernodes = accumulate_messages(messages, nodes_to, reduce="mean")
    # process supernodes with some transformer blocks
    supernodes = encoder_transformer(supernodes)
    # perceiver pooling from supernodes to latent tokens
    latent = encoder_perceiver(query=latent_queries, key=supernodes, value=supernodes)
    return latent

def approximator(latent):
    """
    propagate the latent space forward in time
    inputs:
        latent `Tensor(n_latent_tokens, hidden_dim)`: encoded latent space at timestep t
    outputs:
        latent `Tensor(n_latent_tokens, hidden_dim)`: encoded latent space at timestep t + 1
    """
    return approximator_transformer(latent)

def decoder(latent, query_pos):
    """
    decode the latent space pointwise at arbitrary positions
    inputs:
        latent `Tensor(n_latent_tokens, hidden_dim)`: encoded latent space
        query_pos `Tensor(n_outputs, 2)`: positions for querying the latent space
    outputs:
        decoded `Tensor(n_outputs, 3)`: evaluation of the latent space at the query positions
    """
    # encode query positions
    query_pos_embed = query_mlp(pos_embed(query_pos))
    # query the latent space via cross attention
    decoded = decoder_perceiver(query=query_pos_embed, key=latent, value=latent)
    return decoded

def train_step(input_features, input_pos, n_supernodes, radius, query_pos, target_features):
    """
    perform one training step (next-step prediction plus inverse losses)
    inputs:
        input_features `Tensor(n_input_nodes, 3)`: features of input nodes at timestep t
        input_pos `Tensor(n_input_nodes, 2)`: positions of input nodes at timestep t
        n_supernodes `integer`: number of supernodes
        radius `float`: radius for creating the radius graph
        query_pos `Tensor(n_outputs, 2)`: positions for querying the latent space
        target_features `Tensor(n_outputs, 3)`: features at the query positions at timestep t + 1
    outputs:
        loss `Tensor`: scalar loss value
    """
    # next-step prediction
    latent_t = encoder(input_features, input_pos, radius, n_supernodes)
    latent_tplus1 = approximator(latent_t)
    decoded_tplus1 = decoder(latent_tplus1, query_pos)
    next_step_loss = mse(decoded_tplus1, target_features)
    # inverse decoder (decode latent into inputs)
    decoded_t = decoder(latent_t, input_pos)
    inverse_decoding_loss = mse(decoded_t, input_features)
    # inverse encoder (encode predictions at t + 1 into latent of t + 1)
    inverse_encoded = encoder(decoded_tplus1, query_pos, radius, n_supernodes)
    inverse_encoding_loss = mse(inverse_encoded, latent_tplus1)
    return next_step_loss + inverse_decoding_loss + inverse_encoding_loss
```
---
Rebuttal 2:
Comment: We updated the paper with additional details concerning modulation which would be included in the camera-ready version, but the rebuttal unfortunately did not allow updating the paper.
**1. Details on modulation**
We use DiT modulation as introduced in the original paper [1] and condition on boundary conditions (timestep, inflow velocity) in the following way:
```
# embed: embedding layer (with sine and cosine waves of different frequencies), as is also common for encoding positions in transformers
# condition_mlp: shallow MLP to combine boundary conditions

def create_condition(timestep, velocity):
    timestep_embed = embed(timestep)
    velocity_embed = embed(velocity)
    # concatenate the individual embeddings (renamed to avoid shadowing `embed`)
    combined = concat([timestep_embed, velocity_embed])
    condition = condition_mlp(combined)
    return condition
```
From this condition, DiT modulates its features by (i) shifting the features after layer normalization (ii) scaling the features after layer normalization or (iii) gating the features before adding the residual.
This is done in the same way as DiT does it. We provide a link to the original implementation [2] but are also happy to provide pseudocode if needed.
This form of conditioning allows us to encode the inflow velocity and the current timestep into the model. Therefore, the model can better adapt to different inflow velocities, as this information is explicitly encoded into the model. Without encoding such boundary conditions, the model would need to derive them from the mesh points or particles alone, which is sometimes impossible.
Note that we also apply modulation to UNets in the form of FiLM [3] (similar to DiT it scales and shifts intermediate features) and for FNO/GINO we apply the "spatial-spectral parameter conditioning" from [4] (see Appendix B.2.6) which applies modulation in the fourier space of FNO.
[1] Peebles et al., "Scalable Diffusion Models with Transformers", ICCV 2023, https://arxiv.org/abs/2212.09748
[2] https://github.com/facebookresearch/DiT/blob/main/models.py#L101
[3] Perez et al., "FiLM: Visual Reasoning with a General Conditioning Layer" AAAI 2018, https://arxiv.org/abs/1709.07871
[4] Gupta et al., "Towards Multi-spatiotemporal-scale Generalized PDE Modeling", arXiv 2022, https://arxiv.org/abs/2209.15616
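The shift/scale/gate pattern described above can be illustrated with a minimal NumPy sketch. This is not the authors' implementation (see the linked DiT code [2] for that); `modulated_block`, `block_fn`, and the per-channel `shift`/`scale`/`gate` stand-ins (which in DiT come from an MLP on the condition embedding) are hypothetical names for illustration only.

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    # normalize each token's features to zero mean and unit variance
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def modulated_block(x, block_fn, shift, scale, gate):
    # (i) shift and (ii) scale the features after layer normalization,
    # (iii) gate the block output before adding the residual
    h = layer_norm(x) * (1.0 + scale) + shift
    return x + gate * block_fn(h)

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))        # 4 latent tokens, hidden_dim = 8
shift, scale, gate = rng.normal(size=(3, 8))  # would come from the condition MLP
out = modulated_block(tokens, lambda h: 0.5 * h, shift, scale, gate)
```

Note that with `gate` at zero the block reduces to the identity, which is the property DiT's adaLN-Zero initialization exploits.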
**2. Boundary conditions in ShapeNet-Car**
In the ShapeNet-Car experiments, we showcase the flexible encoding of UPT by additionally encoding the signed distance function (SDF) of the geometry.
Given the SDF evaluated on a 3D grid, we encode it with a shallow CNN with 3D convolutions. For example, we use a 64^3 grid of SDF values and encode it into an 8^3 latent representation with said CNN.
This grid is then flattened into a 1D sequence of 8^3 = 512 tokens. These 512 tokens are then concatenated to the output of the encoder (the 1024 latent tokens from encoding the car mesh), resulting in 1536 tokens as input to the approximator.
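The token bookkeeping described here can be sketched as follows. Stride-8 average pooling stands in for the shallow 3D CNN and a random linear lift stands in for its channel projection; `hidden_dim = 16` is an arbitrary illustrative choice, not the paper's value.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden_dim = 16                       # hypothetical hidden dimension
sdf = rng.normal(size=(64, 64, 64))   # SDF evaluated on a 64^3 grid

# stand-in for the shallow 3D CNN: stride-8 average pooling down to 8^3
pooled = sdf.reshape(8, 8, 8, 8, 8, 8).mean(axis=(1, 3, 5))   # (8, 8, 8)
# flatten the 8^3 grid into 512 tokens and lift each cell to hidden_dim
sdf_tokens = pooled.reshape(-1, 1) @ rng.normal(size=(1, hidden_dim))

# concatenate with the 1024 latent tokens from the mesh encoder
latent_tokens = rng.normal(size=(1024, hidden_dim))
approximator_input = np.concatenate([latent_tokens, sdf_tokens], axis=0)
print(approximator_input.shape)  # (1536, 16)
```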
**3. Numerical stability of UPT**
Traditional solvers resolve the physics in very small timesteps as this is required to preserve the physical properties of the simulation. For example, if two particles are about to collide but the time resolution is not small enough to accurately detect when the two particles collide (and therefore change direction/velocity), the two particles will overlap with each other in the next simulation timestep. If this overlap is large, the resulting force acting on these particles will be extremely high, which in turn results in numerical instability and a collapse of the simulation.
In contrast, UPTs do not learn to explicitly model the interactions between two particles, but rather an abstract representation thereof. Therefore, UPT does not need to resolve the dynamics at a fine-grained time resolution, as it can learn that "if two particles are about to collide, they are about to change direction". This makes it much more resilient against numerical instability, as it focuses on abstract concepts instead of low-level interactions.
Other neural operators like GNNs or graph neural operators (GNOs) also resolve particle-particle interactions and therefore require smaller timesteps than UPT. We showcase this in Figure 6 of our paper, where the UPT modeling paradigm (via an abstract compressed latent space) can resolve much larger timesteps without problems. We show this in our experiments on TGV3D where UPT is trained on 10x coarser timesteps than GNS or SEGNN.
**4. Scaling properties of UPTs**
As UPTs consist mainly of transformer layers, they do not have any issues with scaling. The same techniques that are used to train large language models (LLMs) can be applied to UPTs as well. As an example, Llama3-405B [1] was scaled to 16K H100 GPUs. While we do not envision training UPT at such a massive scale in the foreseeable future, the techniques used there can be readily applied to UPTs.
[1] Dubey et al., "The Llama 3 Herd of Models", arXiv 2024, https://arxiv.org/abs/2407.21783
Title: Reply to follow-up questions | null | null | null | null | null | null |
Intruding with Words: Towards Understanding Graph Injection Attacks at the Text Level | Accept (poster) | Summary: The paper studies Graph Injection Attacks on text-attributed graphs. The study presents three attack designs: Vanilla Text-level GIA (VTGIA), Inversion-based Text-level GIA (ITGIA), and Word-frequency-based Text-level GIA (WTGIA). The key contributions include demonstrating the effectiveness of text level perturbation on Graph Injection Attack and the performance of using LLMs as a defender. WTGIA shows significant attack potential while maintaining understandability. It is a timely study on the vulnerabilities of text attributes in graphs.
Strengths: 1. Good investigation on the role of text manipulation in graph injection attacks. The text attributes in graphs are getting increasingly important. Demonstrating their vulnerabilities to GIA and the role of LLM can play as a defender are important to applications using text-attributed graphs.
2. The proposed WTGIA method achieves good attack performance with understandable text perturbation.
Weaknesses: 1. The embedding-text-embedding process is not very clear to me. Additional explanation will be helpful. It is unclear how different embedding models may affect the performance of this process.
2. The term interpretability in this paper has different semantics from the commonly used one in explainable AI. It is confusing and creates unnecessary obstacles in reading. It would be better to replace it with understandability.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. What is the computational complexity of WTGIA? How long does it take to generate an attack on average in your experiment environment?
2. What is the impact of different text embedding models on the attack performance?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Limitations on transferability is discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback on our paper. We have addressed your concerns and provided clarifications below.
---
**W1:** *More about the Embedding-Text-Embedding Process.*
**Why Embedding-to-Text:** Traditional GIAs included only the text-to-embedding process, with the pipeline as follows: 1) Preprocess the original graph text to embeddings, denoted as A.
2) The attacker designs an attack on this embedding by injecting abnormal embeddings, denoted as B.
3) The defender evaluates the same embeddings (A and B).
There are several unreasonable aspects in this process: embedding B may not correspond to reasonable text (ITGIA); in real scenarios, attackers cannot directly inject embeddings into the defender's system, making step 3 overly idealistic for the attacker; and the defender can choose a different preprocessing in step 1 than the attacker.
These issues arise because the attacker did not consider the embedding-to-text process.
Our paper focuses on supplementing this step, requiring the attacker to generate text at the text level, injecting the original text.
**Why Text-to-Embedding:** The embedding-text-embedding process is ultimately to align with previous GNN evaluations, as the GNN ultimately uses embeddings as input. So we finally transform the generated text into embeddings for evaluation. An exception is the LLM-as-predictor in Section 5, which only includes the embedding-to-text process. In the future, we believe that studying graph attacks and defenses involving embedding-to-text will be a fascinating direction.
We will provide a clearer illustration of the Embedding-Text-Embedding process in the revised versions.
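As a toy illustration of the pipeline above (all helper names are hypothetical, and a bag-of-words count serves as a stand-in for the embedding model): under text-level GIA, the attacker produces injected *text*, and only the defender's own preprocessing turns it into embeddings.

```python
from collections import Counter

VOCAB = ["graph", "attack", "node", "secure", "model"]

def embed_text(text):
    # toy bag-of-words embedding over a fixed vocabulary;
    # stand-in for whatever text-to-embedding step the defender runs
    counts = Counter(text.lower().split())
    return [counts[w] for w in VOCAB]

# step 1 (text-to-embedding): the defender embeds the original graph text -> "A"
A = [embed_text(t) for t in ["graph model", "secure node model"]]

# text-level GIA (embedding-to-text): the attacker must inject text, which the
# defender afterwards embeds with its own pipeline -> "B"
B = embed_text("attack attack graph")
print(B)  # [1, 2, 0, 0, 0]
```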
---
**W2:** *Use of the term Interpretability*
Thank you very much for your clarification, which helps avoid potential misunderstandings we might have caused.
We will consider clarifying the distinction between this term and interpretability in the field of explainability when it is first mentioned, emphasizing that it is more about understandability to humans.
---
**Q1:** *Running Time*
We have included the running time and discussion about complexity in the **author rebuttal**.
In summary, scalability is not a major issue in current text-level GIAs.
---
**Q2:** *Impact of Embedding Models*
We added a new embedding model, SentenceBert, and related experiments in the **author rebuttal**.
Overall, the impact of text-embedding includes:
1) **Impact on the Original GNN:** According to [1], different embeddings can cause performance differences in GNNs on clean datasets. For example, on the Reddit dataset, GTR can have more than a 5-point performance advantage over the original dataset, which is significant in some cases.
2) **Impact on Attacker Design:** Designing a better attack for continuous embeddings can be more challenging. WTGIA shows some transferability across both, but the results are still unsatisfactory. Although GTR and SBERT are both PLM-based embeddings, the adversarial text by ITGIA performs even worse than WTGIA when transferred to SBERT. We believe that exploring this aspect remains a challenge.
3) **Impact on Defender Design:** LayerNorm is effective in defending against continuous features because it can capture structural anomalies caused by attacks on continuous features, but it performs mediocrely with BoW methods. While EGNNGuard can be easily bypassed by HAO, it remains competitive when defending against WTGIA. Therefore, different defense methods may be suitable for different types of embeddings.
To sum up, there are still many areas worth exploring in this regard, and we believe considering more combinations of graph and text methods will be an interesting research direction in the future.
---
Thank you once again for your valuable feedback and suggestions. We believe that incorporating these insights will significantly enhance the quality and impact of our work. We welcome any further questions or discussions.
---
**Reference**
[1] Chen, et al. Exploring the potential of large language models (LLMs) in learning on graphs.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response and additional experimental results. These largely address my concerns.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation of our work.
We will carefully consider your suggestions to further improve our paper. | Summary: This paper investigates an interesting topic. The authors first discuss the practicality of Graph Injection Attacks (GIA), arguing that previous approaches, which merely inject harmful embeddings, are unrealistic. It introduces three text-level GIA methods: ITGIA, VTGIA, and WTGIA, and conducts a thorough examination of their attacking performance and interpretability. The findings reveal that WTGIA excels in both attacking effectiveness and interpretability. Additionally, the paper explores the defensive capabilities of Large Language Models (LLMs) and discovers that they can effectively evade the impact of injected nodes. This underscores the importance of enhancing the performance of text-level attackers.
Strengths: 1. This paper first addresses the impracticality of embedding-based GIAs and explores text-level GIAs, introducing three novel text-level GIA methods. Among these, WTGIA effectively balances attack performance and interpretability.
2. The authors have conducted comprehensive experiments that thoroughly address the research questions posed in the study.
3. The theorem discusses the conditions under which destructive texts can be identified, providing a clear theoretical foundation for the work.
4. This paper offers valuable insights into the new challenges associated with text-level GIAs, identifying emerging directions in this field.
Weaknesses: 1. The authors should provide references for the cited papers on TDGIA, ATDGIA, MetaGIA, and AGIA.
2. The authors should explain why the attack performance initially decreases and then increases as the sparsity budget increases, as shown in Figure 5.
3. The authors should clarify how the LLM optimizes the graph structure after generating the text for the injected nodes.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. For VTGIA, how does the LLM optimize the graph structure after generating the text for the injected nodes?
2. The attacking method of WTGIA is interesting. Could this idea inspire other areas of GNN?
3. In Section 4, could you clarify the embedding-text-embedding process? Specifically, in Section 3, under Analysis of Performance and Interpretability, it seems that after inversion, text is generated. What embedding results after this inversion?
4. Why do you introduce FGSM to generate binary embeddings? Is it not possible to directly transform the embeddings into binary embeddings?
5. In Figure 5, why does the attack performance initially decrease and then increase as the sparsity budget increases?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Some parts of Section 4.1 are hard to understand, I admit I didn't check the proof related to the theorem in the Appendix. I think the authors could add figures or improve their presentations to better illustrate WTGIA.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your insightful comments and suggestions. Please find our detailed responses below:
---
**W1:** *Citations of TDGIA, ATDGIA, MetaGIA, and AGIA.*
We have cited TDGIA in Section 2.
The other methods (ATDGIA, MetaGIA, and AGIA) are introduced in [1].
In our experiments, we followed [1]'s implementation using a sequential optimization pipeline, which is detailed in the Appendix.
We will ensure that the citations and explanations are clarified in future versions of the paper.
---
**W2, Q5:** *Why attack performance decreases and then increases as the sparsity budget increases.*
This phenomenon is explained in lines 287-294 of the main text.
LLMs prioritize maintaining coherence and fluency in their outputs.
When the number of required words increases within a fixed text length, the Use Rate of these words decreases, leading to reduced accuracy in the embedding-text-embedding process.
This results in a decline in attack performance when the sparsity budget exceeds a certain threshold.
Appendix J provides a more detailed illustration, quantifying coherence using Perplexity.
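To make the Use Rate notion above concrete, here is a minimal illustrative sketch (not the paper's implementation; the function name and whitespace tokenization are our own simplifying assumptions) that computes the fraction of required words actually appearing in a generated text:

```python
def use_rate(required_words, generated_text):
    """Fraction of required words that appear in the generated text."""
    tokens = set(generated_text.lower().split())
    hits = sum(1 for w in required_words if w.lower() in tokens)
    return hits / len(required_words)

text = "graph neural networks learn node embeddings from graph structure"
print(use_rate(["graph", "node", "attack"], text))  # 2 of 3 required words used
```

In this toy setting, increasing the number of required words while keeping the text length fixed would tend to lower this ratio, matching the trend described above.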
---
**W3, Q1:** *How graph structure is optimized.*
The LLM is not involved in the optimization of graph structures.
Appendix I (Algorithm 4) provides pseudocode demonstrating that we rely on embedding-level techniques for selecting graph structures.
After text generation, we employ structure optimization techniques from [1] to generate the graph structure.
We maintain a fixed feature set once generated and do not alter the pipeline.
Using LLMs for graph structure optimization is an intriguing direction for future research, but current studies are limited, and scalability issues need to be addressed.
---
**Q2:** *Transferring WTGIA to more fields.*
The WTGIA concept transforms text generation tasks into word-to-text tasks, enabling LLMs to effectively complete generation tasks while maintaining downstream classification performance.
Given the importance of classification in graph representation learning, we believe this approach simplifies the generation tasks for related fields.
---
**Q3:** *Embedding-Text-Embedding process.*
Our paper focuses on the embedding-to-text process, introducing three text-level GIAs. The text-to-embedding process is necessary because, except for the LLM-as-predictor (Section 5), GNNs require converting injected text into embeddings as input. We utilize embedding methods like BoW and GTR to facilitate GNN testing on downstream tasks. During this process, the ITGIA method experienced significant accuracy loss. This loss occurs because, even with the same embedding method, the inverted text’s embedding differs significantly from the original. Section 3 discusses the reasons in detail, emphasizing the challenges of the ill-defined interpretable region.
---
**Q4:** *Use of FGSM*
FGSM is a classic optimization method well suited to binary states (0 and 1).
It perturbs features by flipping them, which preserves their physical meaning: a 0→1 flip corresponds to starting to use a word, and a 1→0 flip to no longer using it.
As a result, it facilitates the subsequent embedding-to-text process in WTGIA.
Converting continuous embeddings directly to binary is challenging and relies on text as a bridge.
Moreover, the accuracy of the embedding-to-text process is crucial, which poses a challenge for continuous features.
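As a hedged illustration of the flipping idea described above (a toy sketch with an invented linear surrogate model and random data, not WTGIA's actual pipeline), an FGSM-style attack on binary features reduces to choosing the most harmful bit flips under a sparsity budget:

```python
import numpy as np

# Toy surrogate: score = w . x; the attacker wants to *lower* the score.
# x is a binary bag-of-words vector, so gradient-sign updates become bit flips.
rng = np.random.default_rng(1)
w = rng.normal(size=20)                    # illustrative surrogate weights
x = (rng.random(20) < 0.3).astype(float)   # binary word-usage vector

grad = w        # d(score)/dx for the linear surrogate
budget = 3      # sparsity budget: at most 3 flips

# A 0->1 flip lowers the score if the gradient is negative;
# a 1->0 flip lowers it if the gradient is positive.
gain = np.where(x == 0, -grad, grad)       # score decrease from flipping bit i
order = np.argsort(-gain)                  # most harmful flips first
flips = [i for i in order if gain[i] > 0][:budget]

x_adv = x.copy()
x_adv[flips] = 1 - x_adv[flips]            # flipping keeps features in {0, 1}
print(w @ x - w @ x_adv)                   # positive: the score was reduced
```

Note how, unlike a continuous perturbation, every intermediate state remains a valid word-usage vector, which is what makes the later words-to-text step well defined.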
---
Thank you again for your valuable feedback.
---
**Reference**
[1] Chen, et al. Understanding and improving graph injection attack by promoting unnoticeability.
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thanks for the detailed response. I am pleased to note that you have addressed all the concerns I raised. I appreciate your efforts in clarifying these points and I have raised the scores.
---
Reply to Comment 1.1.1:
Comment: Thank you for your appreciation of our work. We will carefully consider your suggestions to further improve our paper. | Summary: The paper studies GIAs, particularly focusing on text-attributed graphs (TAGs). It introduces a method for GIAs by injecting textual content directly into the graph, as opposed to the traditional method of embedding-level attacks. The authors propose three new attack designs: Vanilla Text-level GIA (VTGIA), Inversion-based Text-level GIA (ITGIA), and Word-frequency-based Text-level GIA (WTGIA). They demonstrate that text interpretability is necessary for the effectiveness of these attacks. They also show that defenses can be enhanced using customized text embedding methods or large language model (LLM)–based predictors.
Strengths: I found this to be a very original paper as it shifts from embedding-level GIAs to text-level GIAs. Three new attacks are created and are tested theoretically and empirically. Most importantly, the GIAs created ensure that the attacked nodes have semantically useful information, making the attack more realistic, using LLMs.
Weaknesses: The paper does not explore the robustness of the proposed GIAs against more sophisticated defense mechanisms beyond customized text embedding methods and LLM-based predictors.
Although the authors mention that LLM-based defenses enhance protection against text-level GIAs, the experiments primarily focus on a few specific methods and datasets. These are limited in scope.
The trade-offs between performance, unnoticeability, and interpretability, as discussed in Sections 4.1 and 4.2, could be examined more thoroughly. While the paper shows that increasing the sparsity budget enhances attack performance but reduces interpretability, further experiments across a wider variety of datasets could validate these findings more convincingly, especially with additional baselines. Including more traditional GIA methods or other recent advancements in graph attack techniques would also be useful.
The discussion on the practical implementation of these attacks in real-world applications could be expanded. The paper mainly provides experimental insights but lacks a detailed exploration of the real-world scenarios you stake your paper around. Practical challenges, such as the scalability of the proposed methods to larger and more complex graphs, the computational costs of implementing these attacks, and potential detection mechanisms in real-world systems, are not thoroughly addressed. Providing more actionable insights and guidelines for practitioners could significantly enhance the practical relevance of the research.
Lastly, while the authors discuss the potential for enhancing defenses using customized text embedding methods or LLM-based predictors, they do not explore the possibility of integrating multiple defense strategies. Combining different defense mechanisms could potentially create more robust protection schemes against text-level GIAs.
Typos: line 296 and line 211
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Can the authors provide more details on how the sparsity budget impacts the interpretability and effectiveness of the text-level GIAs across different datasets?
2. How would this work apply to dynamic graphs where nodes and edges evolve over time?
3. Are there any challenges in scaling the proposed GIAs to larger graphs or more complex datasets?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback on our paper. We appreciate your insights and have addressed your concerns below.
---
**W1 and W5:** *Exploration of Robustness Against Defense Mechanisms*
In the **author rebuttal**, we included classic methods like GAT and GraphSAGE, as well as the EGNNGuard and Layernorm (LN) methods, which have been proven highly effective in [1].
We indeed found interesting results, such as LN’s effectiveness on text-level GIAs not being as strong as previously reported for embedding-level attacks.
This is because embedding-level GIAs often exhibit abnormal norms, which LN exploits for defense.
However, text-level GIAs derive embeddings from real text, avoiding structural anomalies and bypassing LN’s defenses. This also means that some traditional defense methods that were particularly effective may actually be limited to the embedding level.
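The norm-anomaly intuition above can be sketched as follows. This is a simplified illustration (the known clean/injected split and the 3-sigma threshold are our own assumptions, not the LayerNorm defense itself), showing why embedding-level injections with abnormal norms are easy to flag while embeddings derived from real text would not stand out:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 100 "real-text" embeddings with typical norms,
# plus 5 embedding-level injections with abnormally inflated norms.
real = rng.normal(size=(100, 16))
injected = 5.0 * rng.normal(size=(5, 16))   # abnormally large norm
emb = np.vstack([real, injected])

norms = np.linalg.norm(emb, axis=1)
mu, sigma = norms[:100].mean(), norms[:100].std()  # clean-node statistics

# Flag nodes whose norm deviates strongly from the clean distribution.
flagged = np.where(np.abs(norms - mu) > 3 * sigma)[0]
print(flagged)  # the injected rows (indices >= 100) stand out
```

A text-level injection whose embedding comes from genuine text would have a norm inside the clean distribution and so would pass this kind of check, consistent with the observation above.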
The EGNNGuard method is generally effective and performs well overall. As we mentioned in the main paper, attackers can add tricks like HAO to bypass EGNNGuard, but this often involves trade-offs.
Regarding using LLM-based methods for defense, we believe the current LLM for Graph methods still face the following challenges: 1) There is still a lack of dedicated methods combining LLM and Graph for defense. 2) The LLM-as-Predictors [2] method still faces scalability issues, and directly using it on large graphs requires significant resources. 3) LLM-as-Enhancer [2] methods still rely on GNN as the backbone, making them not much different from using defensive GNNs in terms of defense.
Considering these challenges, we used predictors as an initial exploration. We believe that further exploring the combination of LLM and GNN to enhance defense capabilities will be an interesting direction.
---
**W2, W3, Q1:** *More Datasets, More GIAs, More about the trade-off*
In terms of datasets, we selected the most common GIA test datasets in the original paper.
In the **author rebuttal**, we added larger datasets, such as ogbn-arxiv, and the social network dataset Reddit.
We demonstrated that WTGIA can be applied to larger graphs (over 160,000 nodes) and different domains.
Considering that the complexity of VTGIA and ITGIA is lower than that of WTGIA (see running time), we believe this sufficiently demonstrates the versatility of the text-level GIA framework.
In the future, we believe it is interesting to explore the differences in text-level GIA across various domains.
For GIAs, we selected representative and competitive GIA models for evaluation.
For instance, TDGIA is a top-ranked model on the GRB benchmark [3], while the SeqGIA family [1] effectively addresses unnoticeability with desirable efficiency.
For embeddings, we included PGD, FGSM, and feature-smooth (TDGIA/ATDGIA), three representative learnable embeddings. Therefore, we believe the current GIAs are sufficient to support our conclusions.
We are open to incorporating new GIAs as needed in the future.
Regarding the trade-off, we have more discussion in Appendix J.
For example, we found that larger models tend to reduce the Use Rate to ensure the fluency of the output, selecting words that more easily form a coherent text. This also manifests as lower perplexity, indicating more fluent text.
---
**W4, Q3:** *Practical issues in Real-world Applications*
We acknowledge that the paper could benefit from a more detailed exploration of practical implementation challenges.
We believe one of them is similar to that of graph foundation models, namely the issue of uniformity.
Different graphs have varying text length limitations and text themes, and the LLMs’ understanding of them differs.
In practice, it will be challenging to propose a unified text-level attack algorithm that does not require adjustments for each domain.
Regarding scalability and complexity, please refer to the **author rebuttal**.
In summary, scalability is not a major issue in current text-level GIAs, but it may need to be considered in future designs.
---
**W6:** *Typos*
Thank you for pointing out the typos. We will correct these in the revised version.
---
**Q2:** *Application to Dynamic Graphs*
We believe that attack and defense on dynamic graphs is a very interesting direction.
When generating attacks on dynamic graphs, the historical interaction information of nodes should also be considered as a basis for generating injected nodes.
At the same time, in dynamic graph scenarios, not all injected nodes are harmful.
A more accurate characterization of this scenario will help us better align with real-world applications.
---
Thank you once again for your valuable feedback and suggestions. We believe that incorporating these insights will significantly enhance our work’s quality and impact. We welcome any further questions or discussions.
---
**Reference**
[1] Chen, et al. Understanding and improving graph injection attack by promoting unnoticeability.
[2] Chen, et al. Exploring the potential of large language models (LLMs) in learning on graphs.
[3] Zheng, et al. Graph robustness benchmark: Benchmarking the adversarial robustness of graph machine learning.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. Your feedback has been invaluable to us. We have provided detailed responses to the concerns and questions you raised during the rebuttal period. Your insights are crucial to enhancing the quality of our work.
With the discussion period ending within one day, we hope we have thoroughly addressed all your concerns. Should you have any further questions or suggestions, we would be delighted to continue our discussion.
Thank you for your support and guidance.
Best regards,
Submission15853 authors | Summary: This paper explores the vulnerability of GNNs to Graph injection Attacks (GIAs), which involve injecting malicious nodes into a graph. This paper explores GIAs at the text level, presenting three types of GIA designs that inject textual content into the graphs. The significance of text interpretability in attack effectiveness and the defense against text-level GIAs are also explored.
Strengths: 1. The idea is straightforward and easy to follow.
2. The study of text-level graph injection attacks is interesting and practical.
Weaknesses: 1. In lines 126–131, the paper claims that traditional methods focus mainly on the embedding level. However, some existing works (e.g., [1]) focus on the raw feature level. The authors fail to discuss why these works cannot be applied to this task.
2. Limited technical contribution. The paper proposes three kinds of text-level GIA; however, all of them simply combine existing GIA techniques with language models, which makes the contributions somewhat incremental.
3. I think the proposed text-level GIA is similar to the traditional GIA, especially when the nodes' texts are encoded into embeddings by LMs. Therefore, I think some existing defense methods against GIA can be also applied to this task. The authors need to discuss that.
4. The experiments only focus on small-scale datasets. More large-scale datasets need to be included.
5. Only vanilla GCN is used in the experiments, which may introduce biases into the conclusions. More advanced GNNs need to be included.
[1] Understanding and Improving Graph Injection Attack by Promoting Unnoticeability. ICLR 2022.
Technical Quality: 2
Clarity: 2
Questions for Authors: Refer to weaknesses.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: A section to discuss the limitations is missing.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful feedback on our paper. We have addressed your concerns and provided clarifications below.
---
**W1:** *Why previous works cannot be applied to this task.*
We would like to clarify that by “Raw Feature,” we are referring to raw text, which represents the unembedded original features of TAGs.
Previous GIAs, including the work referenced in [1], require the use of text embeddings during the generation process and are limited to injecting embeddings rather than raw text.
Consequently, these methods cannot be directly applied to our task.
To apply the method from [1] to this task, the embedding-text step must be completed, which is achieved by ITGIA in our paper.
It advances embedding-level GIAs by transforming the embeddings generated in [1] back into raw text.
However, we discovered several practical issues with the generated content, such as poor interpretability, loss of accuracy, and transferability challenges.
These issues were not addressed in previous embedding-level studies of GIAs and represent a significant challenge that we are the first to identify.
---
**W2:** *Limited technical contribution.*
Firstly, the three proposed text-level GIAs require the introduction of previously unconsidered designs.
While ITGIA builds on traditional GIAs, we additionally introduce normalization to align with the structure of normal embedding and discover the importance of interpretable regions.
In VTGIA and WTGIA, we first introduce LLMs to Graph Adversarial Attacks, especially GIAs, ensuring interpretability during poisoning content generation.
In VTGIA, we explore three different strategies for direct generation.
In WTGIA, we transform FGSM into a sparsity-constrained version suited for raw text scenarios, unlike traditional FGSM, which samples features based on average sparsity.
Additionally, WTGIA constructs the “Words to Text” task and uses masking to enhance attack performance.
Second, our research pioneers the exploration of text-level GIAs, aiming to fill the gaps left by previous studies.
We identify incomplete evaluations in earlier GIAs and propose three new lines for development, highlighting the inherent challenges in this domain.
As an initial attempt, it is essential to explore techniques that appear to be straightforward.
Our work focuses on understanding the landscape of text-level GIAs and identifying critical issues that need future attention, which sets the stage for more sophisticated advancements in subsequent research.
Thus, the perceived technical limitations should not detract from the significance of our work.
---
**W3:** *Limited Defense GNNs.*
In the original paper, we discussed the EGNNGuard, which demonstrated significant defensive capabilities as described in [1].
When incorporating HAO or increasing sparsity, attackers can achieve effective improvements at the embedding level.
In the **author rebuttal**, we have added the performance of text-level GIAs against other defense models, including the Layernorm trick (LN) mentioned in [1] and common methods like GAT and GraphSAGE.
Interestingly, LN’s effectiveness on text-level GIAs is not as strong as previously reported for embedding-level attacks.
This is because embedding-level GIAs often exhibit abnormal norms, which LN exploits for defense. However, text-level GIAs derive embeddings from real text, avoiding structural anomalies and bypassing LN’s defenses.
The above results indicate that re-evaluating defenses against GIAs at the text level is a promising research direction.
We acknowledge that adding more defense models could make the evaluations more comprehensive.
However, considering that we have supplemented with classical backbones and methods proven effective in [1], as well as discussed new defense mechanisms at the text level, we believe this sufficiently supports the conclusions presented in our paper.
---
**W4:** *Large-scale Datasets.*
In the **author rebuttal**, we include results for the Reddit and ogbn-arxiv datasets, the former containing over 30,000 nodes and the latter containing over 160,000 nodes.
Note that current text-level GIAs do not increase complexity concerning the graph structure.
The text generation process operates in $O(N_{inj})$, proportional to the number of injected nodes, which is significantly smaller than the total number of nodes in the original graphs.
Consequently, WTGIA can complete text generation even for ogbn-arxiv within 8 hours without parallelization optimization, demonstrating high efficiency.
---
**Limitations**
In Section 5, we discussed the new challenges faced by text-level GIAs.
Regarding more limitations of the paper, we believe that the current framework can be extended to better integrate graph structure and text generation to improve performance.
We will address more about limitations in future versions.
---
**More about Experimental Completeness**
Due to the complexity of presenting results for **all combinations of datasets, attacks, defenses, and hyperparameters** (which would require more than 100 tables for presentation by estimation), we did not include all results in the original paper.
We controlled the number of datasets and defense models selected to better present our findings.
As **we did not emphasize the advantages of text-level GIA in terms of technical performance and efficiency**, we believe it is reasonable to choose the most commonly used datasets and victim models of traditional GIAs for evaluation.
In the **author rebuttal**, we also included the running time, more embedding models, and a dataset from a new domain, Reddit.
We will consider your suggestions and incorporate newly added experiments.
If you have any further questions, we welcome more discussion.
---
**Reference**
[1] Chen, et al. Understanding and improving graph injection attack by promoting unnoticeability. ICLR 2022.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We sincerely appreciate the time and effort you have dedicated to reviewing our paper. Your feedback has been invaluable to us. We have provided detailed responses to the concerns and questions you raised during the rebuttal period. Your insights are crucial to enhancing the quality of our work.
With the discussion period ending within one day, we hope we have thoroughly addressed all your concerns. Should you have any further questions or suggestions, we would be delighted to continue our discussion.
Thank you for your support and guidance.
Best regards,
Submission15853 authors | Rebuttal 1:
Rebuttal: **Summary of Rebuttal**
Thanks to the reviewers for their diligent efforts and thorough evaluation.
We are glad to receive such positive feedback.
Notably, all reviewers acknowledged that conducting text-GIA is novel and meaningful.
Reviewers nfHh and Da4q recognized our method's technical contributions, while Reviewer ZG7Z also appreciated our method's contribution to ensuring the semantic integrity of the injection text.
The reviewers' main concerns focused on the comprehensiveness of our evaluation.
In response, during the rebuttal period, we provided additional analyses on defense methods, efficiency, results from more embedding methods, and experiments on additional datasets.
Specifically, in the author rebuttal and the uploaded PDF file, we have added the following content:
---
**Experiments Against More Defense Models**
In **Tables 1 to 6 in the pdf**, we included classic methods like GAT and GraphSAGE, as well as the EGNNGuard and Layernorm (LN) methods, which have been proven highly effective in [1].
We discovered interesting new results, such as LN’s effectiveness on text-level GIAs not being as strong as previously reported for embedding-level attacks, especially for WTGIA.
This is because embedding-level GIAs often exhibit abnormal norms, which LN exploits for defense.
However, text-level GIAs derive embeddings from real text, avoiding structural anomalies and bypassing LN’s defenses. This suggests that some traditional defense methods that were particularly effective may be limited to the embedding level.
The EGNNGuard method is generally effective and performs well overall.
As mentioned in the main paper, attackers can use techniques like HAO to bypass EGNNGuard, but this often involves trade-offs.
While including more baselines could offer a more comprehensive assessment, it entails a substantial workload in the domain of Graph Adversarial Attacks.
Based on our estimates, presenting results for **all combinations of datasets, attacks (both embedding-level and text-level), defenses, and hyperparameters would require over 100 tables**.
Therefore, we have chosen to focus on the most representative examples to provide a clearer and more concise analysis.
---
**Running Time and Complexity**
In **Tables 7 and 8 in the pdf**, we included the running time of the largest dataset, PubMed, from the original paper.
We also included results on ogbn-arxiv, which contains more than 160,000 nodes, to verify the scalability of our method.
In terms of graph structure, the text-level GIA proposed in the paper does not introduce additional complexity regarding graph structure.
Based on the results in [1], even the most complicated variant, MetaGIA, has a time complexity bounded by $O(|V_T| N_{inj}^2 \log (|V_T| N_{inj}))$ and a space complexity bounded by $O(|V_T| N_{inj})$.
Considering that the number of injected nodes and edges is usually small compared to the original graph, current Text-GIAs are intrinsically scalable.
For example, on ogbn-arxiv, all experiments can be completed effectively as long as the most complex variant, MetaGIA, is not used as the backbone.
In terms of text generation, our methods are linear to the number of injected nodes and can be executed efficiently.
For all methods that rely on an LLM, we can further improve efficiency using techniques such as LLM inference acceleration, parallel generation, and reducing the number of correction rounds.
Therefore, scalability is not our bottleneck at present.
---
**More Embeddings**
In **Tables 9 and 10 in the pdf**, we included SentenceBert all-MiniLM-L6-v2 [2] as a new embedding backbone.
We find that WTGIA shows some transferability between different embeddings, but the results are still unsatisfactory. Although GTR and SBERT are both PLM-based embeddings, the adversarial text by ITGIA performs even worse than WTGIA when transferred to SBERT.
We believe the transferability of text-level GIAs remains a significant challenge.
---
**More Datasets**
In **Tables 11 and 12 of the PDF**, we included the ogbn-arxiv dataset [3], containing over 160,000 nodes, and a social network dataset, Reddit, with more than 30,000 nodes [4], to evaluate the scalability and generalization of text-level GIAs.
We injected 1,500 nodes into ogbn-arxiv, following the budget specified in [1], and proportionally injected 500 nodes into Reddit.
Due to space constraints, we use WTGIA with average sparsity budgets, as it has the longest runtime and best performance.
On ogbn-arxiv, WTGIA maintains use rates above 90% and shows good attack performance when transferred to GTR.
On Reddit, though effective on BoW embeddings, the transferability issue is more severe, indicating domain variability in graph attacks.
With more datasets from various domains becoming available in the future, we look forward to exploring the differences in graph attacks and defenses across different domains.
---
**Update Plan**
We addressed other weaknesses and questions in each separate rebuttal.
We will incorporate the above content in the updated version of our paper.
In addition, we will improve explanations of interpretability and the embedding-text-embedding process, correct typos, and add necessary citations based on the reviewers’ suggestions.
We sincerely thank the reviewers for their valuable feedback and constructive comments, which have significantly contributed to improving our work.
If there are any further questions or if you would like additional clarification, we welcome further discussion and engagement.
---
**Reference**
[1] Chen, et al. Understanding and improving graph injection attack by promoting unnoticeability.
[2] Reimers, Nils, and Iryna Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks.
[3] Hu, et al. Open graph benchmark: Datasets for machine learning on graphs.
[4] Huang et al. Can GNN be a good adapter for LLMs?
Pdf: /pdf/383af4aae344211f31ff8535b65eb9dff1a76486.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A theoretical design of concept sets: improving the predictability of concept bottleneck models | Accept (poster) | Summary: This paper provides a theoretical analysis of properties of concept sets in CLIP-based CBMs. Specifically, the authors focus on the effect of concept sets on the empirical performance of CBMs in the low-resource regime and under distribution shifts. Towards this, the authors identify two characteristics of concept sets that help improve CBM performance: the size of CBMs and the degree of misspecification. The authors validate their findings on two datasets: CIFAR-10 (for evaluating image classifier performance) and MetaShift (for evaluating distribution shift).
Strengths: __Significance__: A lot of recent work has provided empirical efficacy of using CLIP-based interpretable models for few shot classification [1-4]. In practice, the concepts that achieve a high CLIP activation are often nonsensical. A theoretical understanding of the underlying mechanisms behind CLIP based CBMs are extremely desirable and relevant to the NeurIPS community.
`[1]`: https://arxiv.org/abs/2211.11158
`[2]`: https://arxiv.org/abs/2308.03685
`[3]`: https://arxiv.org/abs/2404.09941
`[4]`: https://arxiv.org/abs/2210.07183
Weaknesses: _(This work is outside my area of expertise; I have slight empirical familiarity with CBMs, but I cannot comment on the mathematical correctness of the proofs. Most of my comments are in the questions section)._
__Lacking experimental evaluation__: We have much better datasets than CIFAR-10 to measure the efficacy of an image classifier. I recommend the authors look at LaBO's `[1]` experimental evaluation section for a more comprehensive set of experiments.
Overall, I'm recommending a __Weak Accept__. Theoretical research into the properties of CLIP-based CBMs is deeply desired and relevant to the NeurIPS community. I haven't given a higher score because it's hard to form informed opinions based on the current evaluation setup.
Technical Quality: 3
Clarity: 2
Questions for Authors: Questions while reading the paper:
- L23: I think CBMs can be motivated with a better example. Here is another example: A “Tesla model X” did not exist when CLIP was trained, yet we can make a reasonably accurate classifier with a concept bottleneck: {SUV, T logo, gull wing doors, (lack of) exhaust pipe}.
- L99: What is P here?
- L106: (Definition 1) What is a non-trivial instantiation of a set that is rejected by this definition (ie: a set which is not a concept set)?
- L154: I understand that the theoretical justification necessitates linearity, but empirical studies suggest that the final CLIP activation is based on a hierarchical composition of neuron activations `[5]`. Can such relationships be captured by the linear model?
- L248: I agree that removing redundant concepts is important. However, in practice, redundant concepts seem to help improve performance (`[4]` identifies almost 3000 concepts for some datasets).
- L267: I recommend the authors try out more datasets than just CIFAR-10 `[6]`. I think following the experimental section of LaBO `[1]`, one of the papers cited, would be a good idea here. I recommend testing domain generalization on CINIC-10 `[7]` as well.
- L275: The appendix mentions using ViT B-32 while this section mentions using ViT L-14. The provided code shows references to both ViT B-32 (for CLIP embedding generation) and ViT L-14 (for querying DINO features; which I'm assuming isn't part of this paper). A clarification of which model was used in which context would be beneficial!
`[5]`: https://distill.pub/2021/multimodal-neurons/
`[6]`: https://arxiv.org/pdf/1806.00451
`[7]`: https://arxiv.org/abs/1810.03505
-----
Increasing my score to __Accept__ after discussions with authors.
Confidence: 1
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed limitations in the Appendix. I appreciate the inclusion of a broader impacts section!
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your in-depth review and for highlighting this theoretical work's high relevance to the NeurIPS community.
## New experiments
The purpose of the experiments in our paper is to illustrate the theoretical insights, and we believe the current experiments serve that purpose well. However, to validate our insights further, and as per your comment, we have run experiments on additional datasets as suggested, including CINIC-10 for domain generalization. We refer to the 'Global Response' for the new results.
## Questions
We emphasize our gratitude for the careful review shown by the questions posed, notably since each led to valuable improvements. Next, we indicate the resulting improvements to the paper.
### Clarity
* We address the ambiguity regarding the meaning of $P$ in L99 with the following modification: "[...] $\\mathcal{D}=\\{(x_i, y_i)\\sim_{iid} \\mathbb{P}(\\mathcal{X}, \\mathcal{Y})\\}$ is a dataset drawn from the data-generating process $\\mathbb{P}(\\mathcal{X},\\mathcal{Y})$ [...]".
* Thanks for flagging the model typo. We use ViT B-32; we have updated it in the text in L275-276.
### Linearity
Indeed, CLIP embeddings of images $CLIP:\\mathcal{X}\\rightarrow \\bar{\\mathcal{X}}$ are non-linear and result from a hierarchical composition of neurons. Our analysis does not assume linearity of such a mapping, but rather linearity of the mapping from the CLIP embedding $\\bar{\\mathcal{X}}$ to the concept representation, which is consistent with the CLIP objective [2] as stated in L150 ("where it assumed that concepts can be related to directions"), as well as linearity
of the output layers, a standard practice for CBMs [1] and when using CLIP embeddings [2].
We understand our usage of $\\mathcal{X}$ to denote inputs from this embedding space might confuse readers, so we will denote the CLIP embedding space as $\\bar{\\mathcal{X}}$ throughout the paper and modify L149 to "To that end, we consider the input space $\\bar{\\mathcal{X}}$ as the joint embedding space of inputs and concepts from a foundational model [...]".
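As a minimal numerical sketch of this separation of roles (all dimensions are illustrative, and the variable names are ours, not the paper's): the backbone mapping into $\bar{\mathcal{X}}$ stays non-linear and frozen, while both the concept scoring and the output layer are plain linear maps.

```python
import numpy as np

rng = np.random.default_rng(0)
d, m, n_classes, n = 512, 16, 10, 4

# CLIP(x): non-linear, frozen backbone -- here just stand-in embeddings.
img_embed = rng.standard_normal((n, d))

# Concept directions in the joint embedding space (e.g., text embeddings).
concept_dirs = rng.standard_normal((m, d))

# Linearity assumption: embedding -> concept activation is a dot product,
concept_acts = img_embed @ concept_dirs.T        # shape (n, m)

# followed by a linear output layer, standard practice for CBMs.
W_out = rng.standard_normal((m, n_classes))
logits = concept_acts @ W_out                    # shape (n, n_classes)
```

Nothing here constrains the backbone itself to be linear; only the two maps after the embedding are.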
### Redundant concepts
We agree that a more nuanced discussion would benefit L248.
As per Theorem 1, having a concept that does not decrease the misspecification $\\varepsilon$, which can happen either if it is spanned by the other concepts or orthogonal to the target function, will be detrimental to the risk since it increases the first term of the risk (see L177-178). This is what we mean by a *redundant concept*.
However, there will be many cases where the given concept will not be entirely redundant (i.e., only slightly affect $\\varepsilon$), and in these cases, the choice of keeping or discarding it will depend on the number of samples we have. Both our theoretical and empirical insights (see L281-293) conclude that in lower sample regimes, we should be more aggressive at discarding slightly *redundant concepts*, and as $n$ grows, we will benefit more from preserving them, as you point out.
We appreciate that you flagged the importance of providing a more comprehensive discussion of when to "remove *redundant concepts*" in L248 and connecting it to the theoretical insights. We have correspondingly included the above discussion as a footnote.
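A hedged illustration of the spanned case on synthetic data (our own notation, not the paper's): a concept direction that is a linear combination of the existing set adds a column to the concept representation that is itself a linear combination of existing columns, so the best achievable residual — a crude stand-in for $\varepsilon$ here — does not decrease.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 10
X = rng.standard_normal((n, d))              # stand-in embeddings
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

C = rng.standard_normal((3, d))              # three concept directions
c_spanned = 0.5 * C[0] + 0.5 * C[1]          # redundant: spanned by C

def residual_risk(concepts):
    """In-sample risk of the best linear probe on concept activations
    (a rough proxy for the misspecification term)."""
    Z = X @ concepts.T
    w, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return float(np.mean((Z @ w - y) ** 2))

eps_without = residual_risk(C)
eps_with = residual_risk(np.vstack([C, c_spanned]))
# The spanned concept leaves the achievable risk unchanged while
# enlarging the representation the probe must fit from finite samples.
```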
### Examples
#### Motivation for CBMs
Your provided example nicely illustrates how CBMs can exploit the composition of concepts to perform tasks beyond CLIP's training distribution. We have correspondingly added a version of it in line 24. We believe it best to keep the presentation of CBMs in the first paragraph agnostic to the identification method (CLIP) for generality, so we slightly modified it to the following phrasing without discussing CLIP: "Indeed, despite never having seen a Tesla Model X, someone could reasonably classify them provided they can identify an SUV, a T logo, gull-wing doors, and the absence of exhaust pipes."
#### Non-Concepts
Let us start with a specific example and then move into a more formal characterization.
* We humans do not develop some concepts for fields outside our areas of interest (our interest is informative of our underlying task $\\mathcal{T}$) despite experiencing occurrences of such concepts. Consider, for example, filmography concepts such as "Jump cut," where two sequential shots of the same subject are taken from camera positions that vary only slightly (or a "Dutch tilt"). According to Def. 1, for people not interested in filmography, these characteristic functions would not be included in the concept set because they add negligible expressiveness to their current concept set, i.e., $N_0\\rightarrow \\infty$ without significantly contributing to the model-aware inductive bias, so $N_1 < N_0$, failing the definition.
* More abstractly, a characteristic function $c$ that does not provide additional valuable information on top of the existing set $\\mathcal{C}$ for $\\mathcal{T}$ will have low expressiveness (i.e., large $N_0$) without increasing $N_1$. This can occur when targets are independent of $c$ conditioned on $\\mathcal{C}$: e.g., concepts irrelevant to the task (e.g., tinted glass when predicting a Model X) or completely redundant with $\\mathcal{C}$. It can also happen when the output model cannot capture the correlation, e.g., $c$ might help cluster the input distribution into two concentrical spheres, but a linear model would not benefit from this. Further still, even if a characteristic function $c$ provides information that can be captured, if the inductive bias of the method disprefers the optimal hypothesis given this representation, once again, $N_0$ will be large.
We believe this discussion can be valuable to some readers, so we incorporate it in a new appendix and reference it in L109.
## References
[1] P. W. Koh et al., "Concept Bottleneck Models".
[2] A. Radford et al., "Learning Transferable Visual Models From Natural Language Supervision".
---
Rebuttal 2:
Comment: Thank you for the response! I think the additional studies address my main concern regarding evaluation setup. I'm increasing my score to recommend Acceptance.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
Once more, thank you very much for your thoughtful and constructive review. We are pleased your main concerns were addressed and grateful for your increased acceptance score. | Summary: This paper presents theoretical contributions related to CBM, which delves into the impact that the choice of concept set has on CBM performance. It identifies advantageous conditions for CBMs, offering an orthogonal and meaningful perspective compared to most other works on CBMs.
Strengths: - Overall, the theoretical findings are insightful. The authors demonstrate the situations in which the performance of CBMs can be enhanced by well-choosing the concept sets.
- The framework of the theories is clear. The assumptions are appropriate and not overly restrictive. The definitions are comprehensive and primarily clarify the expressiveness and model-aware inductive-bias, which are two key properties affecting the performance.
- The empirical results show that the CBMs surpass the foundational embedding models. These results also provide a unique perspective compared to existing works.
Weaknesses: - The multi-step approach introduced in Sec.5.2 seems difficult and introduces uncertainty from LLMs, which to some extent limits its practicality and generality.
- It would be better if more baselines and datasets were provided in the experiment part. Additionally, due to the costly training procedure, this paper only incorporates the concept representations from foundational multimodal models. This is reasonable but complicates comparisons with other CBMs.
Technical Quality: 3
Clarity: 2
Questions for Authors: None.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for your thoughtful evaluation and appreciate your recognition of our theory's insightfulness and clarity, as well as our results' unique perspective.
### The use of LLMs for concept generation
The core objective of our method and experiments is to illustrate and validate our theoretical insights, which are agnostic to how concepts are sampled. In fact, we note that the first mention of an LLM as a generator happens in L229 in section 5.2. Consequently, using an LLM for sampling concepts in our experiments does not affect the practicality or generality of our theoretical insights. We employ LLMs to reduce the required human expertise and flexibly supply task-specific concepts, decreasing the barrier to adopting CBMs for new tasks.
As mentioned, our theory encompasses concept sets that are not dependent on LLMs. Options include human-generated concepts, concept graphs like ConceptNet, or leveraging correlations within text corpora like Wikipedia. Even hybrid approaches could be considered, with the goal of enhancing LLM generation by conditioning their context on external knowledge sources like ConceptNet or Wikipedia, as mentioned in L229-232.
Orthogonally, we also note that the method in section 5.2 can be regarded as a 'rejection-sampling loop' that partially palliates the practical limitations of relying on an LLM. This loop controls the quality of LLM generation through steps 2 and 3 by rejecting concepts that do not align with our theoretical criteria, thereby encouraging only high-quality concepts to be incorporated into the concept set.
To convey more effectively that this choice does not limit the practicality and generality of our theory and insights, we will include a version of the first two paragraphs above following L232 and a version of the third preceding L254.
### New experiments
The purpose of the experiments in our paper is to illustrate the theoretical insights, and we believe the current experiments serve that purpose well. However, to further validate our insights and in agreement with your comment, we have run experiments on additional datasets and a new baseline. We refer to the 'Global Response' for the new results.
---
Once again, we express our gratitude for your comments. We hope that the new experiments and the discussion we incorporate about how our theory is agnostic of the generation method have addressed all your comments, and we look forward to your continued engagement with our research.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Most of my concerns have been addressed, and I'm glad to see the authors have supplemented some empirical results to better show the effectiveness. By the way, I have also tried to run Label-free CBM with the ViT-B/32 backbone on CIFAR10 and the result is about 86.5%, a bit higher than that in Figure 2 in the rebuttal PDF (a typo here, Figure 2 should be Figure 2b I think, and the left one should be Figure 2a not 2c). Could this discrepancy be due to differences in training size?
I have read the comments and responses from the other reviewers and currently have no further questions. However, I found the responses difficult to read due to the extensive use of LaTeX formatting. It would be helpful to use markdown formatting in the rebuttal period for improved readability.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your continued engagement and the thoughtful follow-up. We are pleased that the additional experimental results addressed most of your concerns.
Regarding the performance discrepancy you observed with Label-free CBM using the ViT-B/32 backbone on CIFAR-10, this is indeed primarily due to differences in training size. As discussed in our global response, we also made adjustments to the concept set in the fifth code block of the `GPT_conceptset_processor.ipynb` file from the available Label-Free CBM repository, specifically modifying the concept set sampler $c_i \\sim L(c | C_T)$ to ensure a fair comparison between Label-free CBM and our method. Nonetheless, this modification led to a slight improvement in the performance of Label-free CBM on CIFAR-10, where we achieved an accuracy of $86.79\\%$, which aligns closely with your results.
To clarify and avoid any confusion, we will add a final data point to Figure 2 in the attached PDF, showing the performance on the full CIFAR-10 dataset. Here, our method achieves an accuracy of $92.23\\%$, thus maintaining an advantage over the baseline comparable to previous data points.
You are correct about the typo in the figure labels; the left figure in the second row should be labeled "Figure 1c." We have corrected this in the revised document.
Regarding your feedback on using LaTeX formatting, we apologize for any readability issues you encountered. On our end, the equations render correctly, so we are not entirely sure what specific formatting challenges you experienced. However, we fully appreciate the importance of clarity and are more than willing to reformat the content using markdown or any other format you prefer. If you could provide more details or specify your preferred format, we would happily make the necessary adjustments to ensure the content is as accessible as possible.
We hope that these clarifications address all your remaining concerns. We also hope that the improvements and additional results presented across the rebuttal will prompt you to consider raising your evaluation of our work.
Thank you again for your valuable feedback and time. | Summary: This paper addresses an important research question in CBM — understanding the properties of concept sets and their connection to the performance of CBMs.
A theoretical framework for concept sets is proposed, focusing on two desiderata for concepts: expressiveness and model-aware inductive bias.
Their theoretical analysis covers under what conditions the concept representation-based predictions outperform raw feature representation-based predictions.
Empirical evaluation with two datasets confirms their theoretical insights that well-chosen concept sets improve sample efficiency and robustness under distribution shifts.
Strengths: Studying the properties of concept sets and how to effectively find a good concept set satisfying the properties, ultimately improving the utility of CBMs, is a very interesting and important research question.
The two proposed properties of concept expressiveness and model-aware inductive-bias are intuitive and well-defined.
Especially, their observation on connecting them to improved data efficiency is interesting.
The proposed theoretical framework naturally reveals the regimes on which it is undesirable to have a concept bottleneck and leads to a principled solution based on the theoretical insights.
Weaknesses: Notations are very loosely defined:
* Notations in introduction are used without being defined; for instance, $f, g, \theta, \mathcal{H}$.
* in line 104, the definition of function f as a concatenation of a mapping $\hat{g}_m$ and a set $[c_1, \dots, c_m]$ is awkward.
* what is $d$ in Theorem 1?
* in line 154, please clarify that $x$ is not a raw input (e.g., image), but a feature representation output from a backbone foundation model.
Limited experiments:
* Evaluated only with two datasets.
* No performance comparison with other CBM methods, given the same number of concepts and training set size.
* No demonstration or analysis of the actual generated concept sets
Please increase the fonts in the figures for visibility.
Technical Quality: 3
Clarity: 2
Questions for Authors: * in lines 135-136, `This flexibility means we can replace the underlying ... while keeping the output model unchanged.` Could you please elaborate on this? I reckon the output model should also be changed when the coupled concept identification model is changed.
* in lines 137-139, `The concept set serves as a stable communication interface ... potentially harmonizing datasets and modalities`. Please extend these statements and explain the broader potential impact of this perspective using concept representations.
* how do you determine the thresholds in concept generation? (i.e., $\Omega_m$, $R_m$).
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Limitations are described in Appendix C.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We are glad you find the research question important and the insights interesting.
## Notation
We agree with all the suggestions and have incorporated them. Specifically:
* Changes in the introduction:
* We remove $\\theta$.
* L22: we introduce $\\hat{g}$ with "[...] a CBM first identifies key concepts such as the presence of a semaphore or a pedestrian crossing the road ($C: \\mathcal{X}\\rightarrow\\mathcal{C}$), and learns $\\hat{g}:\\mathcal{C}\\rightarrow\\ \\mathcal{Y}$ to act based on them $(\\hat{g}\\circ C)(x)$".
* L33: we define $\\mathcal{H},\\mathcal{H_C}$: "[...] the degree of misspecification $\\varepsilon$ as key drivers of the CBM performance, where $\\varepsilon=|| \\text{arg min}\_{f\\in\\mathcal{H}} \\ell(f) - \\text{arg min}\_{g\\in\\mathcal{H_C}} \\ell(g\\circ C)||$, with $\\mathcal{H}\\subseteq \\{f:\\mathcal{X}\\rightarrow\\mathcal{C}\\}, \\mathcal{H_C}\\subseteq \\{g:\\mathcal{C}\\rightarrow\\mathcal{Y}\\}$ the hypotheses spaces."
* L48: the additions "concept-based models ($\\mathcal{R}(\\hat{g}\\circ C)$, $\\hat{g}\\in \\text{arg min}\_{g\\in\\mathcal{H_C}} \\ell(g\\circ C)$)" and "baseline counterpart ($\\mathcal{R}(\\hat{f}), \\hat{f}\\in\\text{arg min}\_{f\\in\\mathcal{H}} \\ell(f)$)" further clarify $\\hat{g}$ and $\\hat{f}$.
* L104: we thank you for pointing out this detail since there was a technical condition missing: either (1) if concept sets are unordered, then $\\mathcal{H_C}$ must be a complete set of maps under permutation symmetry, and the learning method invariant under feature permutations, which holds for the output methods considered in the paper; or (2) $\\mathcal{C}$ is a concept sequence, not a set. In the first case, the composition is well-defined since it is equivalent across concept reorderings; in the second, $\\cup$ should be substituted by $\\times$. We make this explicit in L101 by adding, "Consider the output hypothesis spaces $\\mathcal{H_C}\\subseteq\\{g:\\mathcal{C}\\rightarrow \\mathcal{Y}\\}$ complete under permutation symmetry and the learning algorithm invariant under feature permutations. We refer to Appendix B for a technical discussion of this assumption.". We correspondingly added Appendix B with an extension of the above discussion, including the full alternative definition and what methods satisfy assumption (1).
* We update Thm 1's statement to "Under the above setting and assuming that $\\mathbb{P}$ is an isotropic distribution on the $d-$dimensional input" to introduce $d$ unambiguously.
* To avoid confusion, we will denote the CLIP embedding space as $\\bar{\\mathcal{X}}$, and we modify L149 to "[...] input space $\\bar{\\mathcal{X}}$ as the joint embedding space of inputs and concepts from a foundational model [...]".
* We increased the figures' fonts. This is shown in the new figures; previous ones have been modified correspondingly.
## New experiments
The experiments in our paper aim to illustrate the theoretical insights, and we believe the current experiments serve that purpose well. However, in agreement with your comment, we have run experiments on additional datasets and a new baseline to further validate our insights. We refer to the "Global Response" for details. We also included throughout appendices C.4-6 filtered concept sets from our experiments.
## Concepts as a stable interface
> This flexibility means we can replace the underlying concept identification model with a more advanced version in the future while keeping the output model unchanged. The concept set serves as a stable communication interface with fixed dimensionality and functionality. Additionally, concepts can act as common representations across different feature sets, potentially harmonizing datasets and modalities.
We welcome the opportunity to expand on these observations, which cover exciting properties that could lead to impactful future works.
Suppose we aim to learn $f^*:\\mathcal{X}\\rightarrow \\mathcal{Y}$, and let us consider a CBM counterpart, with the concept representation $\\mathfrak{C}^*:\\mathcal{X}\\rightarrow \\mathcal{R}\_{\\mathcal{C}}$, followed by a mapping $f\_\\mathcal{C}^*:\\mathcal{R}\_{\\mathcal{C}}\\rightarrow \\mathcal{Y}$ from concepts to targets. By keeping the concept set $\\mathcal{C}$ fixed (and thus the representation $\\mathcal{R}\_{\\mathcal{C}}$), we can replace an estimator of the concept mapping $\\hat{\\mathfrak{C}}$ by another $\\tilde{\\mathfrak{C}}$ while the probe $\\hat{f}\_\\mathcal{C}$ remains effective, only affected by a distribution shift. E.g., if we train the probe on a CLIP-based concept representation and migrate to a new CLIP2, our probe will remain applicable to the updated and more precise embedding. However, a probe cannot be swapped from the CLIP raw representations to the ones from CLIP2 since they will have different metrics or even dimensionality. That is, the concept representation $\\mathcal{R}\_{\\mathcal{C}}$ is a stable interface with fixed dimensionality and meaning.
Furthermore, suppose we have datasets with different features or modalities $\\mathcal{X}_1, \\dots,\\mathcal{X}_k$ (e.g., images from different sensors or angles). In that case, we can use a common concept set $\\mathcal{R}\_{\\mathcal{C}}$ which has fixed semantic meaning, and project all inputs there $\\mathfrak{C}^*_i:\\mathcal{X}_i\\rightarrow \\mathcal{R}\_{\\mathcal{C}}$, thereby harmonizing them.
Some readers might be interested in delving deeper into these ideas, so we include a version of the above text in the appendix and refer to it in L139.
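A toy sketch of this "stable interface" property (dimensions are hypothetical, and `concept_map` is our stand-in for a frozen model's concept scorer): swapping the backbone changes the raw embedding dimensionality, so a probe on raw embeddings breaks, while the fixed $m$-dimensional concept representation keeps the probe applicable.

```python
import numpy as np

rng = np.random.default_rng(0)
d_old, d_new, m, n = 32, 64, 8, 300   # old/new backbone dims, m concepts

def concept_map(dim):
    """Stand-in for a frozen model's concept scorer: projects raw
    embeddings onto m fixed concept directions."""
    dirs = rng.standard_normal((m, dim))
    return lambda X: X @ dirs.T

C_old = concept_map(d_old)
old_embed = rng.standard_normal((n, d_old))

# Train a linear probe on the m-dimensional concept representation R_C.
Z = C_old(old_embed)
probe, *_ = np.linalg.lstsq(Z, Z @ rng.standard_normal(m), rcond=None)

# A new backbone with a different embedding size still emits an
# m-dimensional concept representation, so the same probe applies
# (only subject to a distribution shift). A probe trained directly on
# the d_old-dimensional raw embeddings could not be reused like this.
C_new = concept_map(d_new)
new_embed = rng.standard_normal((n, d_new))
preds = C_new(new_embed) @ probe      # shapes line up: (n, m) @ (m,)
```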
## Thresholds
We keep a concept if it improves the estimated risk, i.e., $R_m=0$, or $\\hat{\\mathcal{R}}(\\hat{f}\_{C_i}) - \\hat{\\mathcal{R}}(\\hat{f}\_{C_{i-1}})>0$, and we set the entropy threshold proportionally to the number of buckets $n_b$ used to estimate $\\Omega$ to ensure not all samples fall into the same bucket (i.e. $\\Omega_m=\\frac{1}{n_b}$). We will specify this as a footnote in L253.
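A sketch of how this criterion could be implemented (function names and the plain in-sample risk estimate are our illustrative choices, not necessarily the authors' exact procedure): a candidate is rejected if its bucketed activation entropy falls at or below $\Omega_m = 1/n_b$, and kept only if it strictly improves the estimated risk.

```python
import numpy as np

def bucket_entropy(acts, n_buckets=10):
    """Normalized bucket entropy of a concept's activations (Omega estimate)."""
    hist, _ = np.histogram(acts, bins=n_buckets)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log(p)).sum() / np.log(n_buckets))

def select_concepts(candidates, X, y, risk_fn, n_buckets=10):
    """Greedy rejection loop over candidate concept directions."""
    kept, best_risk = [], np.inf
    for c in candidates:
        acts = X @ c
        if bucket_entropy(acts, n_buckets) <= 1.0 / n_buckets:
            continue  # near-constant activations: fails the entropy threshold
        trial = kept + [c]
        r = risk_fn(np.stack([X @ t for t in trial], axis=1), y)
        if r < best_risk:  # keep only if the estimated risk strictly improves
            kept, best_risk = trial, r
    return kept
```

In practice `risk_fn` would be a held-out or cross-validated risk estimate rather than the training loss.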
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
I would appreciate if you could comment on the author's rebuttal, in light of the upcoming deadline.
Thank you,
Your AC | null | null | Rebuttal 1:
Rebuttal: We sincerely thank all reviewers for their thoughtful and insightful feedback. We appreciate their recognition of the relevance and importance of our research on understanding the properties of concept sets and their impact on Concept Bottleneck Models (CBMs). We value their appreciation of the provided theoretical insights, and we are committed to addressing their comments and questions to improve our work based on their feedback.
While we provide individual responses to address specific questions, we would like to share globally the **new results** we obtained during the rebuttal process in response to the reviewers' comments. The primary goal of the experiments in our paper is to illustrate our theoretical insights, and we believe the current experiments effectively fulfill this purpose. However, based on the reviewers' suggestions for a more comprehensive set of experiments encompassing additional datasets and baselines, we have included a new out-of-distribution **(OOD) generalization experiment on CINIC-10** [1] and three new experiments on **sample efficiency on CUB-200-2011** [4], **Food 101** [3], and the **Describable Textures Dataset** [2]. Furthermore, we run an experiment on CIFAR-10 **comparing our method with Label-free Concept Bottleneck Models** [5], a meaningful baseline from the literature. The results, including four plots and a table, are provided in the supplementary PDF. We discuss the results and experimental setup in detail in a new appendix. These additions are referenced in the main text at L274, where we discuss the datasets used, as well as at L293 and L326, to point the reader to the additional experiments.
Next, we include an adapted version of the appendix to discuss the new experiments.
## New Data-efficiency Experiments
Responding to the reviewers' call for more comprehensive evaluations, we conducted three new experiments focusing on the data efficiency of CBMs. These experiments compare the performance of a linear output layer on top of the CLIP representation against our concept-bottleneck counterpart across various sample sizes, averaged over five seeds. The datasets used include CUB-200-2011, Food 101, and the Describable Textures Dataset.
**The results reaffirm the insights from theorem 1**. In the small sample regime, the first term of the risk dominates and predicts increased sample efficiency for the CBM compared to the baseline.
However, as the sample size increases, we observe the gap in performance decreases and even reverses in the large sample regime, which is explained by the fact that
> [...] the $\\varepsilon$ coefficient grows with the number of examples $n$
That is, the information loss resulting from the bottleneck, encapsulated in the $\\varepsilon$, becomes dominant and determines the asymptotic gap between the CBM and the baseline.
## New OOD Experiment on CINIC-10
To further validate the robustness of CBMs to distribution shifts, we conducted an OOD generalization experiment on CINIC-10. This experiment used CIFAR-10 images for training, and ImageNet subclasses from the CINIC-10 test set, conforming to a shifted distribution setting.
**Results show that CBMs exhibit superior OOD generalization capabilities**, aligning with our previous results on the 12 datasets from the MetaShift collection.
## New Baseline: Comparison with Label-Free Concept Bottleneck Models
We reiterate that the main contributions from our work lie in the theoretical insights, and our method and experiments are tailored to support these insights. Nonetheless, as per reviewer comments, we have run an additional experiment on CIFAR-10, comparing the performance of our method with label-free CBM [5] as a baseline.
For a fairer comparison of the concept set selection mechanisms, we provide both methods with the same sampling mechanism $c_i \sim L(c | C_T)$ by fixing the language model $L$ and the context we feed it $C_T$. We also use VIT-B/32 for both methods. We then use CLIP and a linear probe, as described in [5], to obtain the baseline. We average the results of training the output probes over five seeds for each training size.
The results show that **our CBM consistently outperforms the baseline**.
---
We hope the additional experiments address the reviewers' concerns; we believe these enhancements strengthen our submission and look forward to further feedback.
## References
[1] L. N. Darlow, E. J. Crowley, A. Antoniou, and A. J. Storkey, "CINIC-10 is not ImageNet or CIFAR-10," Oct. 02, 2018, arXiv: arXiv:1810.03505. doi: 10.48550/arXiv.1810.03505.
[2] M. Cimpoi, S. Maji, I. Kokkinos, S. Mohamed, and A. Vedaldi, "Describing Textures in the Wild," in 2014 IEEE Conference on Computer Vision and Pattern Recognition, Jun. 2014, pp. 3606–3613. doi: 10.1109/CVPR.2014.461.
[3] L. Bossard, M. Guillaumin, and L. Van Gool, "Food-101 – Mining Discriminative Components with Random Forests," in Computer Vision – ECCV 2014, D. Fleet, T. Pajdla, B. Schiele, and T. Tuytelaars, Eds., Cham: Springer International Publishing, 2014, pp. 446–461. doi: 10.1007/978-3-319-10599-4_29.
[4] Wah, C., Branson, S., Welinder, P., Perona, P., and Belongie, S., "The Caltech-UCSD Birds-200-2011 Dataset," California Institute of Technology, CNS-TR-2011-001, 2011.
[5] T. Oikarinen, S. Das, L. M. Nguyen, and T.-W. Weng, "Label-free Concept Bottleneck Models," presented at the The Eleventh International Conference on Learning Representations, Sep. 2022.
Pdf: /pdf/70c837aa6313a6041230ef1cb58ade7c2e29c4eb.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Identify Then Recommend: Towards Unsupervised Group Recommendation | Accept (poster) | Summary: This paper addresses the group recommendation task and innovatively proposes an unsupervised approach to automatically infer user-group distributions and make suitable recommendations.
For the group identification stage, a heuristic-based merge-and-split method is developed to facilitate this inference. For group recommendation, two self-supervised pre-training objectives are introduced: group-level pseudo group recommendation and user-group alignment through pull-and-repulsion mechanisms.
Extensive experiments on both open-sourced benchmarks and industrial scenarios demonstrate the effectiveness of the proposed approach.
Strengths: 1. This paper introduces a novel task setting. The proposal of unsupervised group recommendation effectively addresses real-world scenarios where user groups dynamically change, freeing the model from the pre-definition of group numbers.
2. Technical details are well-founded and sound.
3. The paper is well-written and easy to follow.
Weaknesses: 1. Presentation error in Line 147: the matrix should have the dimension of $k \times m$.
2. Methodology part requires further clarification:
* Although the proposed GIM is intuitive, why not directly adopt existing community detection algorithms applied to the user network? This approach is applicable when user links are available or can be adaptively constructed based on user-item interactions (e.g., adding a link between two users who consumed the same items). Additionally, what are the **complexity and convergence** properties of this algorithm?
* The authors seem to omit crucial technical details. **How does the ITR model make inferences during testing**? When the test data consists of an annotated group and candidate items, how is the embedding of this group generated? What strategy is adopted for recommendation given the computed group embedding and pre-trained item embedding? This process is also unclear when applying ConsRec to the scenario without annotations.
3. Experiments part requires further explanation:
* In the ablation study, the meaning and implementation of the "base" variant are unclear and need further explanation. I am also confused about how to extend the ITR model to the scenario with group annotations.
* **The effectiveness of GIM should be demonstrated through experiments**. How do the identified groups compare with ground-truth user-group relations? Either qualitative or quantitative evaluations would be acceptable.
* Details of the A/B Testing experiments should be provided, such as the meaning of the evaluation metrics, the data scale, the performance of different models, etc.
* (A minor point as I understand time is limited for rebuttal) One research line in group recommendation is the occasional group recommendation, where testing groups are newly formed and do not appear in the training set [1,2,3,4]. I highly recommend the authors conduct supplementary experiments on these datasets (such as Yelp, Weeplaces, etc.) to provide a more comprehensive evaluation, as these datasets align well with the motivation of this study.
----
[1] Sankar et al. GroupIM: A Mutual Information Maximization Framework for Neural Group Recommendation. In SIGIR, 2020.
[2] Chen et al. Thinking Inside The Box: Learning Hypercube Representations for Group Recommendation. In SIGIR, 2022.
[3] Zhang et al. GBERT: Pre-training User Representations for Ephemeral Group Recommendation. In CIKM, 2022.
[4] Li et al. Self-Supervised Group Graph Collaborative Filtering for Group Recommendation. In WSDM, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to weaknesses
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer x7i3**
Thanks for your valuable and constructive reviews. We appreciate your insights and suggestions, as they will undoubtedly contribute to improving the quality of our paper. In response to your concerns, we provide answers to the questions as follows in order.
### **Dimensions of Matrix**
Thanks for your careful check! We have corrected the dimension of the matrix $\mathbf{Q} \in \mathbb{R}^{k \times m}$ in the revised paper.
### **Existing Community Detection**
Thanks for your question. Clustering/community detection methods that require a pre-defined number of cluster centers, such as k-means and spectral clustering/community detection, cannot be applied in this scenario. Methods that do not require a pre-defined number of clusters, such as DBSCAN, also struggle in our scenario because: 1) the data distribution of our application changes dynamically, so it is hard to determine density-related parameters such as the radius and the threshold, whereas our proposed group identification module automatically estimates the density and discovers the groups via the heuristic merge-and-split strategy, and can thus be easily applied in a real-time, large-scale unsupervised recommendation system; 2) they cannot be integrated with our proposed pre-text tasks, including the pseudo group recommendation pre-text task and the pull-and-repulsion pre-text task. We have added these claims in the revised paper.
### **Experimental Details**
Thanks for your suggestion. For the A/B testing experiments, the evaluation metrics consist of two parts: the click metrics and the trade metrics. The click metrics comprise four metrics: uv (user view), uvctr (user view click-through rate), pv (page view), and pvctr (page view click-through rate). The trade metrics comprise two metrics: uv (user view) and pv (page view). Regarding the data scale, the application serves about 130 million page views and 50 million user views per day. The baseline model is an MMoE recommendation model; we compared this baseline with and without our strategy to demonstrate the effectiveness of the proposed method. The details can be found in the application background in Section 6.4.1.
### **Detailed Introduction of the Application**
We apply the proposed ITR method in the livestreaming recommendation scenario. In a livestreaming room, the users naturally form several user groups based on their interests. For instance, in a sports goods livestreaming room, the users interested in basketball form one user group, while the users interested in badminton form another. These user groups are important information for describing the profile of the livestreaming room. For example, when the room begins to sell basketball-related goods, the basketball user group will be activated and be willing to click, comment, or buy merchandise; we should therefore guide more users with basketball interests into this room. Following this principle, we aim to group the users in the livestreaming room and utilize this group information to assist the recommendation. However, the number of user groups is always unknown and dynamic. Moreover, different livestreaming rooms have different numbers of groups, e.g., a gold livestreaming room has fewer groups than a groceries livestreaming room since user interests in the former are more concentrated. In addition, group annotation information, such as user-group distributions and group-item interactions, is missing in this scenario or requires extensive human annotation costs. To solve these problems, we present the ITR model in this paper. The proposed GIM identifies the user groups in the livestreaming rooms, and the proposed SGRM assists and promotes the recommendation. Furthermore, upon acquiring the identified group embeddings, we can also utilize them to enhance the traffic mechanism within real-time livestreaming recommendations. In practical terms, we first designate the primary user groups within a livestreaming room as the room's profile. We then compute matching scores between these primary user groups and potential users, and leverage the scores to dynamically boost the ranking scores of relevant users, thereby improving the recommendation performance and the interactive environment in the livestreaming room.
### **Occasional Group Recommendation**
Thanks for your suggestion. Although occasional group recommendation is similar to our setting, the two are essentially different: our experimental setting is purely unsupervised, and we are not merely solving the occasional group recommendation problem. We are nevertheless glad to survey and discuss [1-4] in the revised paper, since the motivation of occasional group recommendation is partly similar to ours.
[1] Sankar et al. GroupIM: A Mutual Information Maximization Framework for Neural Group Recommendation. In SIGIR, 2020.
[2] Chen et al. Thinking Inside The Box: Learning Hypercube Representations for Group Recommendation. In SIGIR, 2022.
[3] Zhang et al. GBERT: Pre-training User Representations for Ephemeral Group Recommendation. In CIKM, 2022.
[4] Li et al. Self-Supervised Group Graph Collaborative Filtering for Group Recommendation. In WSDM, 2023.
---
Rebuttal 2:
Comment: ## **Response to Reviewer x7i3 [2/3]**
### **Technical Details During Inference**
Thanks for your question. For our proposed ITR, the group embeddings are generated as shown in Eq. (6) and are optimized by minimizing Eq. (8) and Eq. (11). Concretely, the adaptive density estimation and the heuristic merge-and-split method discover the group assignments, and each group embedding is formed as the average of the assigned user embeddings. During the training stage, the group embeddings are further optimized with the pseudo group recommendation pre-text task and the pull-and-repulsion pre-text task. During the testing stage, the group embeddings have been learned and the user assignments are fixed. Therefore, we can calculate the recommendation score between groups and items (for the group recommendation task) and between users and items (for the user recommendation task). For calculating recommendation scores, we follow ConsRec and adopt the dot product or the neural mapping method. For applying ConsRec to the scenario without annotations, we simply remove the interaction labels between groups and items, i.e., we remove the group loss in Eq. (5) of the original ConsRec paper.
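As a concrete illustration of the dot-product scoring described above, here is a minimal sketch; the embedding shapes and values are hypothetical placeholders, not taken from the paper or its code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical learned embeddings: 3 groups, 5 items, dimension 8.
group_emb = rng.normal(size=(3, 8))   # group embeddings, fixed after training
item_emb = rng.normal(size=(5, 8))    # pre-trained item embeddings

# Recommendation scores via dot product: one row of item scores per group.
scores = group_emb @ item_emb.T       # shape (3, 5)

# Top-k recommendation for each group (k = 2 here).
top2 = np.argsort(-scores, axis=1)[:, :2]
print(top2.shape)  # prints (3, 2)
```

The same pattern applies to user recommendation by replacing `group_emb` with the user embeddings.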
### **Explanation of “base” & Utilization of ITR**
Thanks for your question. The baseline model is the conventional MMoE-based recommendation model, and we adopt our proposed method at the re-rank stage, i.e., we modify the predicted score of the recommendation model by considering the user groups. Building on the application background introduced above, the details are as follows. Different livestreaming rooms have different numbers of user groups; for example, a gold livestreaming room has fewer groups than a groceries livestreaming room since user interests in the former are more concentrated. Therefore, we need an unsupervised group recommendation method to discover the user groups in different livestreaming rooms. Here, we adopt our ITR method to discover the user groups and produce the group embeddings. Upon acquiring the identified group embeddings, we utilize them to enhance the traffic mechanism within real-time livestreaming recommendations: we first designate the primary user groups within a livestreaming room as the room's profile, then compute matching scores between these primary user groups and potential users, and leverage the scores to dynamically boost the ranking scores of relevant users, thereby improving the recommendation performance and the interactive environment in the livestreaming room. Note that our proposed ITR is not used with group annotations: if group annotations were easy to obtain, there would be no need to develop an unsupervised-learning-based group recommendation method.
### **Experiments for Occasional Group Recommendation**
Thanks for your suggestions. In our paper, the experimental setting is purely unsupervised, whereas the occasional group recommendation methods still rely on group annotations. Although the two settings are essentially different, we are glad to survey, discuss, and test these methods in the revised paper. In fact, GroupIM [1] and CubeRec [2] are already surveyed, discussed, and tested in our original paper; please see the related work and comparison experiment sections. For GBERT [3] and SGGCF [4], thanks for the suggestion; we briefly introduce and compare them here. GBERT [3] addresses the data sparsity and cold-start problems via pre-training and fine-tuning techniques on BERT. SGGCF [4] addresses the data sparsity and high-order interaction problems via a heterogeneous graph and self-supervised learning. The code of GBERT [3] is not available online, so it is hard to reproduce its results. For SGGCF, we have started the experiments; due to the limited rebuttal time they are not finished yet, and we will post the results during the discussion period once they are.
[1] Sankar et al. GroupIM: A Mutual Information Maximization Framework for Neural Group Recommendation. In SIGIR, 2020.
[2] Chen et al. Thinking Inside The Box: Learning Hypercube Representations for Group Recommendation. In SIGIR, 2022.
[3] Zhang et al. GBERT: Pre-training User Representations for Ephemeral Group Recommendation. In CIKM, 2022.
[4] Li et al. Self-Supervised Group Graph Collaborative Filtering for Group Recommendation. In WSDM, 2023.
**to be continued...**
---
Rebuttal 3:
Comment: ## **Response to Reviewer x7i3 [3/3]**
### **Effectiveness of GIM**
Thanks for your suggestion. The group identification module (GIM) discovers the user groups based on the embeddings in the purely unsupervised setting, where it is an essential module, and its effectiveness can be verified indirectly by the performance improvement on downstream tasks. To directly demonstrate the effectiveness of GIM, we follow your suggestion and verify the quality of the discovered user groups. Concretely, on the open benchmarks, we adopt the unsupervised clustering metric Silhouette Coefficient (SC) to evaluate the quality of the discovered user groups. SC is an important indicator of clustering quality: it combines the cohesion and separation of each data point, and its value ranges from -1 to 1, with larger values indicating better clustering. Averaged over the four datasets used in our paper, the discovered user groups achieve 0.879 SC, demonstrating the promising clustering performance of our proposed GIM. On the industrial data, since labels are not available and the data size is large, we conducted case studies instead. Specifically, we select some popular livestream rooms, e.g., a gold-selling room, a movie-ticket-selling room, and a face-tissue-selling room. We then check the user group distribution, find the major groups, and use tags of user interests to label them, which lets us determine the interests of the major groups in each studied room. The results are as follows. For the gold-selling room, the interests of the major groups include jewelry & gold and financial management; for the movie-ticket-selling room, movies and entertainment; and for the face-tissue-selling room, food and hygiene products. These case studies demonstrate that our proposed GIM effectively groups similar users together and separates different groups. The captured information about the major groups can also be used for refined recommendation.
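For reference, the Silhouette Coefficient described above can be computed in a few lines of NumPy. The sketch below uses synthetic two-dimensional "user embeddings" with assumed labels, not the paper's data, so it does not reproduce the reported 0.879 figure:

```python
import numpy as np

def silhouette(X, labels):
    """Mean Silhouette Coefficient over all points (range [-1, 1])."""
    # Pairwise Euclidean distances between all points.
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    uniq = np.unique(labels)
    s = np.empty(len(X))
    for i in range(len(X)):
        same = (labels == labels[i])
        same[i] = False
        # a: mean distance to points in the same cluster (cohesion).
        a = D[i, same].mean() if same.any() else 0.0
        # b: smallest mean distance to any other cluster (separation).
        b = min(D[i, labels == c].mean() for c in uniq if c != labels[i])
        s[i] = (b - a) / max(a, b)
    return s.mean()

# Two well-separated synthetic "user groups": SC should be close to 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (20, 2)), rng.normal(5, 0.1, (20, 2))])
labels = np.array([0] * 20 + [1] * 20)
print(round(silhouette(X, labels), 3))
```

`sklearn.metrics.silhouette_score` implements the same definition and is the usual choice in practice.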
### **Complexity Analyses & Convergence**
Thanks for your suggestion. Following it, we analyze the complexity of the proposed ITR method. Let $n$ denote the number of users and $k'$ the number of groups, so the average group size is $n/k'$. In the adaptive density estimation, calculating the radius proposal for one group takes $\mathcal{O}(1)$ time and $\mathcal{O}({k'}^{2})$ space, and for all groups $\mathcal{O}(k')$ time and $\mathcal{O}({k'}^{2})$ space. Density estimation for all groups takes $\mathcal{O}(k' \times n/k') = \mathcal{O}(n)$ time and $\mathcal{O}(n k')$ space. In the heuristic merge-and-split strategy, the explore step takes $\mathcal{O}(k')$ time and $\mathcal{O}(1)$ space, and the exploit step takes $\mathcal{O}(k' \times n/k') = \mathcal{O}(n)$ time and $\mathcal{O}(n k')$ space. The pseudo group recommendation pre-text task takes $\mathcal{O}(n k')$ time and $\mathcal{O}(n k')$ space, the pull-and-repulsion pre-text task takes $\mathcal{O}(n k' + {k'}^{2})$ time and $\mathcal{O}(n k')$ space, and the BPR loss takes $\mathcal{O}(n)$ time and $\mathcal{O}(n)$ space. Summing these terms, both the overall time complexity and the overall space complexity of ITR are $\mathcal{O}(n k' + {k'}^{2})$. Since $k'$ is much smaller than $n$ in practice, the complexity is linear in the number of users, so our method does not incur large memory or time costs.
For convergence, we checked the loss and performance of our method on all datasets and found that they converge well, i.e., the loss decreases and gradually converges to a low value, while the performance increases and gradually converges to a high value.
---
Rebuttal 4:
Comment: Dear Reviewer x7i3,
Thank you very much for your precious time and valuable comments. We hope our responses have addressed your concerns. Please let us know if you have any further questions. We are happy to discuss them further. Thank you.
In addition, we have attached the revised paper, if necessary, you can check it: https://anonymous.4open.science/r/NeurIPS-7269-ITR-revised-7D83/NeurIPS-7269-ITR-revised.pdf.
Best regards,
Authors
---
Rebuttal 5:
Title: Follow Up for Reviewer x7i3
Comment: Dear Reviewer x7i3,
We highly appreciate your valuable and insightful reviews. We hope the above response has addressed your concerns. If you have any other suggestions or questions, feel free to discuss them. We are very willing to discuss them with you in this period. If your concerns have been addressed, would you please consider raising the score? It is very important for us and this research. Thanks again for your professional comments and valuable time!
Best wishes,
Authors
---
Rebuttal Comment 5.1:
Comment: Thank you to the authors for your reply.
I'd like to mention some actions that may not align with the rebuttal guidelines:
* Lengthy responses across **three** 6,000-character responses
* Inclusion of outside links
* Explicit request for reviewers to "raise the score"
It's important to follow the guidelines by providing concise, high-quality responses that effectively address concerns.
---
Regarding your response, some issues remain:
* **Inference Mechanism**: It seems you identify groups and use pooling methods for group recommendations. If so, the innovation appears to be in the group identification process, which might be replaced by traditional clustering or community detection methods. You mentioned these cannot be directly adopted, but they might be ***adapted***, such as using a pre-defined group number. Could you elaborate on the strengths of your group identification module and provide any empirical results?
* I’m still unsure how to extend the framework to a scenario *w. group annotation scenario*
* **Extra Datasets**: Yelp and Weeplaces are large-scale datasets. Conducting experiments on these could further demonstrate consistent high performance. Textual explanations cannot replace empirical findings.
I kindly recommend addressing these questions in a single reply within 6,000 characters.
---
Reply to Comment 5.1.1:
Comment: Dear Reviewer x7i3,
Thanks for your reminder. We respond to your remaining concerns in this box. For the inference, see the details in "Detailed Introduction of the Application" and "Technical Details During Inference". For the strengths, see "Existing Community Detection". For the effectiveness of the proposed pre-text tasks, refer to Table 3 in the paper. For the weaknesses of the existing methods, refer to Figure 1 in the paper. Using a pre-defined group number is not applicable in our scenario since the number of groups changes dynamically; see "Annotation Costs and Dynamic Change" in the response to Reviewer GTSo. Could you please clarify the meaning of "w."? If "w." means "with", we have already answered this question; please see "Explanation of 'base' & Utilization of ITR". For experimental evidence on large-scale data, please refer to the results of the application in Section 6.4.2. For the experimental results on Yelp and Weeplaces, we have started running the experiments and will post the results once they are ready. Thanks again for your response. If you have any further concerns, please do not hesitate to tell us and start the discussion.
Best regards,
Authors of Submission 7269 | Summary: The research topic of this paper is group recommendation, i.e., recommend items to groups of users. The authors point out two issues of existing models including the fixed number of groups and the supervised training schema. To solve these problems, they propose a novel unsupervised group recommendation model named ITR (Identify Then Recommend). Experiments demonstrate the effectiveness on user recommendation and group recommendation. ITR is deployed on industrial recommender.
Strengths: - The writing and presentation are excellent. The authors clearly introduce the background, research problem, core ideas, and the proposal.
- The proposed method is novel and the design of heuristic merge-and-split is interesting.
- The improvements of ITR are significant in both user recommendation and group recommendation.
Weaknesses: - The demonstration figures are missing. Additionally, the method is relatively complex. The authors should outline the core ideas and primary designs before delving into the methodology section.
- Hyper-parameter experiments are missing. Does the proposed ITR utilize any hyper-parameters during the density estimation or the merge-and-split stage?
- What are the advantages of the proposed ITR compared to other clustering methods that do not require specifying the number of clusters, such as DBSCAN?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. How scalable is the proposed ITR? Can it be easily applied in industrial recommender systems with millions of users?
2. Is the number of clusters dynamic, or is it determined at the final iteration? In my view, the estimated number of clusters for open datasets may be fixed after training, but is the cluster number dynamic for industry-specific data?
3. How does the proposed method maintain balance between different groups? Are there cases where some groups contain many users while others have very few? I believe this imbalance could significantly impact the recommendation performance.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer dQ8T**
Thanks for your valuable and constructive reviews. We appreciate your insights and suggestions, as they will undoubtedly contribute to improving the quality of our paper. In response to your concerns, we provide answers to the questions as follows in order.
### **Demonstration Figure & Core Ideas & Primary Designs**
Thanks for your suggestion. We will add the demonstration figures and present the core ideas and primary designs before delving into the method part.
### **Advantages of ITR**
Thanks for your question. Clustering methods that require a pre-defined number of cluster centers, such as k-means and spectral clustering, cannot be applied in this scenario. Methods that do not require a pre-defined number of clusters, such as DBSCAN, also struggle in our scenario because: 1) the data distribution of our application changes dynamically, so it is hard to determine density-related parameters such as the radius and the threshold, whereas our proposed group identification module automatically estimates the density and discovers the groups via the heuristic merge-and-split strategy, and can thus be easily applied in a real-time, large-scale unsupervised recommendation system; 2) they cannot be integrated with our proposed pre-text tasks, including the pseudo group recommendation pre-text task and the pull-and-repulsion pre-text task. We have added these claims in the revised paper.
### **Scalability**
Thanks for your question. In this paper, we aim to solve practical problems in a real-time, large-scale industrial recommendation system. Therefore, we first design our method and conduct quick experiments on the small open benchmarks, and then conduct extensive experiments on the real-time, large-scale data in the application (about 130 million page views and 50 million user views per day). We admit the scale of the open benchmarks is limited, but we think it is reasonable for quick trials, and our final aim is to deploy the method in real-world applications. The details regarding the application can be found in Section 6.4.
### **Hyper-parameter Experiments**
Thanks. We conduct the hyper-parameter experiments in Section 6.3. The hyper-parameters mainly include the trade-off parameters $a$ and $b$. No additional hyper-parameters are introduced during the density estimation and merge-and-split stages.
---
Rebuttal 2:
Comment: ## **Response to Reviewer dQ8T [2/2]**
### **Core Ideas & Primary Designs**
Thanks for your suggestion. Following it, we briefly introduce the core ideas and primary designs of our proposed method before the method part. Concretely, we aim to develop an unsupervised group recommendation method, since we find that the promising performance of existing state-of-the-art methods relies on group annotations; the experimental evidence can be found in Figure 1. However, in real-world scenarios, annotations of group-item interactions and user-group assignments are often unavailable, and labeling them in a real-time recommendation system is expensive or even impossible. To this end, we develop a purely unsupervised group recommendation method that automatically discovers the user groups and provides precise recommendations for them. The core ideas of our method are therefore twofold: group identification and group recommendation. For group identification, our initial solution was to adopt existing clustering or community detection methods that do not require the number of clusters, e.g., DBSCAN, DeepDPM, etc. However, these methods cannot automatically discover the user groups, since they need other hyper-parameters such as the radius and the density calculation. Therefore, we design an adaptive density estimation method that automatically estimates the density of the samples. For group discovery, we propose a heuristic merge-and-split strategy that merges similar users into a group and splits distinct user groups within a large group. Besides, the group embeddings are set as learnable neural parameters and are optimized during the learning process. For the group recommendation part, existing methods cannot handle the unsupervised setting, so we propose two pre-text tasks: the pseudo group recommendation pre-text task and the pull-and-repulsion pre-text task.
The pull-and-repulsion pre-text task optimizes the group embeddings by separating different groups and pulling samples toward their corresponding groups. The pseudo group recommendation pre-text task generates pseudo labels for the group recommendation task and guides the network to conduct group recommendation even without precise annotations. With these designs, our proposed ITR automatically discovers the user groups and then provides precise group recommendations for them, so it can be applied in real-time, large-scale recommendation systems. The details can be found in Section 6.4. We have added these insights in the method section of the revised paper.
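To make the pull-and-repulsion idea concrete, the following is a rough sketch of what such an objective might look like. This is an illustrative formulation with assumed names and an assumed hinge margin, not the paper's exact loss:

```python
import numpy as np

def pull_and_repulsion(user_emb, group_emb, assign, margin=1.0):
    """Illustrative pull-and-repulsion objective (not the paper's exact loss).

    Pull: users are attracted to their assigned group embedding.
    Repulsion: distinct group embeddings are pushed at least `margin` apart.
    """
    # Pull term: mean squared distance from each user to its assigned group.
    pull = np.mean(np.sum((user_emb - group_emb[assign]) ** 2, axis=1))

    # Repulsion term: hinge penalty on pairs of group embeddings
    # that are closer together than the margin.
    k = len(group_emb)
    rep = 0.0
    for i in range(k):
        for j in range(i + 1, k):
            d = np.linalg.norm(group_emb[i] - group_emb[j])
            rep += max(0.0, margin - d) ** 2
    return pull + rep

rng = np.random.default_rng(0)
user_emb = rng.normal(size=(10, 4))   # hypothetical user embeddings
group_emb = rng.normal(size=(3, 4))   # hypothetical group embeddings
assign = rng.integers(0, 3, size=10)  # hypothetical group assignments
loss = pull_and_repulsion(user_emb, group_emb, assign)
```

In a real model the two terms would be weighted and minimized jointly with the recommendation objective by gradient descent; the sketch only shows the pull/repulsion structure.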
### **Cluster Number**
Thanks for your question. In this paper, the user/group recommendation setting is purely unsupervised, and the cluster number is not pre-given on either the open datasets or the industrial data. During the training process, the cluster number changes dynamically on the open benchmarks as the status of the user embeddings changes; our proposed GIM automatically discovers and optimizes the group embeddings and the group number. After training, the number of clusters and the status of the group embeddings are stored and then used for testing group recommendation and user recommendation.
Besides, on the real-time large-scale data, we first train the recommendation model on past data. During this process, the number of clusters and the status of the group embeddings are optimized dynamically. After the training stage, they are stored in the database to serve the downstream tasks. Then, when the training data are updated, the recommendation model is continually trained and the number of clusters again changes dynamically.
### **Imbalance Problem**
Thanks for your question. The imbalance problem is indeed an essential factor in our scenario. We conducted case studies in our application, selecting some popular livestream rooms, e.g., a gold-selling room and a grocery-selling room. For the gold-selling room, the imbalance problem exists: most users in this room are interested in gold and form one large cluster, while the other users are interested in other items and form several small clusters. In contrast, for the grocery-selling room, the imbalance problem is not severe, since there are many small user groups and a large user group does not easily form. For the downstream task in our scenario, imbalanced groups are not necessarily a bad phenomenon, since we can use this property to find the major user group in a livestream room and provide more precise recommendations for it. We have added these claims and this discussion in the application section of the revised paper.
---
Rebuttal 3:
Comment: Dear Reviewer dQ8T,
Thank you very much for your precious time and valuable comments. We hope our responses have addressed your concerns. Please let us know if you have any further questions. We are happy to discuss them further. Thank you.
In addition, we have attached the revised paper, if necessary, you can check it: https://anonymous.4open.science/r/NeurIPS-7269-ITR-revised-7D83/NeurIPS-7269-ITR-revised.pdf.
Best regards,
Authors
---
Rebuttal 4:
Title: Follow Up for Reviewer dQ8T
Comment: Dear Reviewer dQ8T,
We highly appreciate your valuable and insightful reviews. We hope the above response has addressed your concerns. If you have any other suggestions or questions, feel free to discuss them. We are very willing to discuss them with you in this period. If your concerns have been addressed, would you please consider raising the score? It is very important for us and this research. Thanks again for your professional comments and valuable time!
Best wishes,
Authors
---
Rebuttal Comment 4.1:
Title: increase rating from 6 to 7
Comment: Thanks for addressing my concerns. Considering this, I have increased the rating from 6 to 7.
---
Reply to Comment 4.1.1:
Comment: Dear Reviewer dQ8T,
We sincerely thank you for your meticulous and thoughtful reviews of our submission, which significantly improve the quality and clarity of our work. Thank you once again for your professional assistance and valuable contribution.
Warm regards,
Authors of Submission 7269 | Summary: This paper tackles the group recommendation problem by proposing an unsupervised group recommendation framework named ITR (Identify Then Recommend). Specifically, the paper first identifies the area and density of each region automatically, then combining with a heuristic strategy to identify groups. Then, performing group recommendation task by constructing the pseudo group-item labels to guide the self-supervised learning of the group recommendation model.
Strengths:
1. The paper identifies groups automatically before making recommendations.
2. This paper handles group discovery and group recommendation without group annotations
3. Extensive experiments were conducted with various baselines
4. It’s interesting to see the successful A/B test in Section 6.4 Application in the Appendix. In my opinion, the authors should bring this section to the main paper to further convince the readers.
Weaknesses:
1. One of the two issues raised in this paper is not correct. Previous related works such as PIT [1], COM [2] or MoSAN [3] can already work well with ad-hoc groups (i.e., they do not require pre-defined groups). Therefore, it's not correct to say that “the promising performance of these methods relies on a pre-defined and fixed number of user groups.” (L38-39)
2. As a result, I believe the authors should compare with those methods above in the experiments
3. Moreover, [1], [2], [3] provided other group recommendation datasets with a larger group size. If I recall correctly, Mafengwo and CAMRa2011 have very limited number of members in a group. Also, statistics of the datasets should be given.
4. L298-300 “From these results, … remove the group annotations”, the papers I mentioned also already had similar observations.
5. In Section 4.3 (L318-320), it’s unclear to me about the explanation of HR@10 for CAMRa2011 dataset. So, GroupIM is unable to effectively capture the order of user preferences, but still beat our proposed method? Does it mean that we should have another metric to evaluate the ‘order of user preferences’ to make sure our method is better than GroupIM? Please correct me here.
[1] Exploring personal impact for group recommendation. CIKM 2012.
[2] COM: a generative model for group recommendation. KDD 2014
[3] Interact and Decide: Medley of Sub-Attention Networks for Effective Group Recommendation. SIGIR 2019.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to my questions above. I may have more questions during the discussion phase.
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes. According to the authors, "ITR still relies on the user-item interaction for the user recommendation and group recommendation", but they aim to address that in the future to solve the data sparsity problem.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer hAA1**
Thanks for your valuable and constructive reviews. We appreciate your insights and suggestions, as they will undoubtedly contribute to improving the quality of our paper. In response to your concerns, we provide answers to the questions as follows in order.
### **Previous Related Methods**
Thanks for your concern. Actually, we have surveyed and discussed two of these methods, i.e., COM [2] and MoSAN [3], in our paper. Please note that, in our paper, we claimed that two critical problems limit the applicability of the recent state-of-the-art methods. First, the promising performance of these methods relies on a pre-defined and fixed number of user groups. Therefore, they cannot handle group recommendation without being given the number of user groups, and unfortunately, the number of groups is usually unknown and dynamic in real-time industrial data. Second, the supervised training schema of existing GR methods requires extensive human annotations for the user-group distribution and group-item interactions, easily leading to significant costs. The previous methods either rely on a pre-defined cluster number or on user-group / group-item annotations. Note that the setting of our experiments is purely unsupervised, and we are not merely solving the occasional group recommendation problem. For MoSAN [3], in Section 3.1 of its original paper, we find that it still needs the ad-hoc groups for training; therefore, it is not a purely unsupervised group recommendation method. As for COM [2], it also uses the group assignment information. Besides, for PIT [1], thanks for your suggestion; we missed this paper in the survey process since it is relatively old. It solves the group recommendation problem via personal-impact topic modeling, and following your suggestion, we will add it to the related work part in the revised paper. For the additional comparison experiments, we think it is hard to reproduce the results since all these methods are closed-source. Besides, they are not recent papers and may not achieve very promising performance, especially in the purely unsupervised learning setting. We have added these claims and explanations in the revised paper.
[1] Exploring personal impact for group recommendation. CIKM 2012.
[2] COM: a generative model for group recommendation. KDD 2014
[3] Interact and Decide: Medley of Sub-Attention Networks for Effective Group Recommendation. SIGIR 2019.
### **Large Group Size**
Thanks for your question. In this paper, we aim to solve practical problems in a real-time large-scale industrial recommendation system. Therefore, we first design our method and conduct quick experiments on the toy open benchmarks. Then, we conduct extensive experiments on real-time large-scale data in the application (with about 130 million page views / 50 million user views per day). We admit the scale of the open benchmarks is limited, but we think it is reasonable for quick trials, and our final aim is to deploy the method in real-world applications. The details regarding the application can be found in Section 6.4.
### **Similar Observations**
Thanks. We believe the similar observations are reasonable, since the phenomenon appears in both the occasional group recommendation setting and the purely unsupervised group recommendation setting. But the two settings are essentially different, and unsupervised group recommendation is more challenging than occasional group recommendation.
### **HR@10 for CAMRa2011 Dataset**
Thanks for your question. For the performance of our proposed method on the CAMRa2011 dataset, we consider it a corner case, since our proposed method beats all the baselines on the other metrics, and we want to give a reasonable explanation here. From the results, we can observe that our method beats GroupIM on HR@5 but not on HR@10. HR@5 is a stricter metric than HR@10 since it requires the model to rank correctly within the top 5 items. Therefore, we suspect that the ranking ability of GroupIM may not be strong and robust, since it achieves very promising performance when ranking within the top 10 items but cannot beat our method when ranking within the top 5 items. We have added this explanation in the revised paper.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer hAA1,
We highly appreciate your valuable and insightful reviews. We hope the above response has addressed your concerns. If you have any other suggestions or questions, feel free to discuss them. We are very willing to discuss them with you in this period. If your concerns have been addressed, would you please consider raising the score? It is very important for us and this research. Thanks again for your professional comments and valuable time!
We sincerely appreciate your constructive reviews and questions. We provide detailed responses regarding Previous Related Methods, Large Group Size, Similar Observations, HR@10 for CAMRa2011 Dataset as above. We hope our responses can effectively address your concerns. If they don't, let's have further discussion now.
Besides, if you have any additional suggestions or questions (as you mentioned, you may have more questions during the discussion phase), please do not hesitate to bring them up. We are more than willing to engage in further discussion to improve the quality of this research.
If you feel that your concerns have been satisfactorily resolved, we kindly ask you to consider revising your score. Your rating is crucial for us and our research. Thank you once again for your professional comments and the time you have invested!
Best wishes,
Authors of Submission 7269
---
Rebuttal 2:
Comment: Dear Reviewer hAA1,
Thank you very much for your precious time and valuable comments. We hope our responses have addressed your concerns. Please let us know if you have any further questions. We are happy to discuss them further. Thank you.
In addition, we have attached the revised paper, if necessary, you can check it: https://anonymous.4open.science/r/NeurIPS-7269-ITR-revised-7D83/NeurIPS-7269-ITR-revised.pdf.
Best regards,
Authors
---
Rebuttal 3:
Title: Follow Up for Reviewer hAA1
Comment: Dear Reviewer hAA1,
We highly appreciate your valuable and insightful reviews. We hope the above response has addressed your concerns. If you have any other suggestions or questions, feel free to discuss them. We are very willing to discuss them with you in this period. If your concerns have been addressed, would you please consider raising the score? It is very important for us and this research. Thanks again for your professional comments and valuable time!
Best wishes,
Authors
---
Rebuttal 4:
Title: Address some concerns
Comment: Thanks for your response. I slightly increased the score due to the efforts of addressing all the comments (from other reviewers as well), but it cannot pass my acceptance threshold due to two reasons:
1. 'Mafengwo and CAMRa2011 have very limited number of members in a group.' And I do not see using only those datasets could really convince the effect of this model. I don't see the authors address this comment as well.
2. As far as I understand, ad-hoc groups [1, 2, 3] are groups that form just for one-off events; and it's still comparable with the proposed method.
Thank you.
---
Rebuttal Comment 4.1:
Comment: Dear Reviewer hAA1,
Thanks for your responses and for improving the score. We are glad that our responses have resolved some of your concerns, such as Previous Related Methods, HR@10 for CAMRa2011 Dataset, and Similar Observations.
1. Regarding your question about datasets, we have already addressed this in the Large Group Size section. Our goal is to perform quick experiments on well-known open benchmarks before applying the method to real-time, large-scale data, which is significantly larger than the existing benchmarks. Besides, for the experimental results on Yelp and Weeplaces, we have started running the experiments on them. Once they are ready, we will post the experimental results.
2. Regarding ad-hoc groups or occasional group recommendation, we agree that comparing with these methods is valid. As discussed in the original paper, we have surveyed, discussed, and tested several ad-hoc group methods, including GroupIM [1] and CubeRec [2]. These are more recent and relevant compared to the papers you pointed out, having been published in 2020 and 2022, respectively.
[1] Sankar et al. GroupIM: A Mutual Information Maximization Framework for Neural Group Recommendation. In SIGIR, 2020.
[2] Chen et al. Thinking Inside The Box: Learning Hypercube Representations for Group Recommendation. In SIGIR, 2022.
If you still have any concerns, please let us know. We are very willing to discuss them further.
Best regards,
Authors of Submission 7269
---
Rebuttal 5:
Title: Dataset concern remains
Comment: Thanks for the response.
I did see the Large Group Size section. But I still disagree with the goal is to perform quick experiments on well-known open benchmarks before applying the method to real-time as:
1. [1, 2, 3] and other papers (like CubeRec you pointed out) did test on other group recommendation datasets with various group sizes and statistics reported. I do not think using Mafengwo and CAMRa2011 is good enough because the group sizes are really small. CAMRa2011 even has an average group size of around 2. My opinion is we need more datasets with different group sizes to verify the robustness of the group recommendation model.
2. I do not see many details on the industrial datasets: detailed setup, deployment, pipeline components, etc. What I see is only the Appendix 6.4.2 A/B testing results and "the application contains 130 million page views and 50 million user views per day" in some of the responses. I did not find many details in the original paper (correct me if I'm wrong). Therefore, I cannot check the 'internal' part, and I can only mark the paper based on the 'external' part (i.e., Mafengwo and CAMRa2011).
I still want to keep my score at this point. | Summary: This study points out two issues in group recommendation in the context of industrial applications. First, the group labels can be dynamic and may require constant retraining. Second, the annotation cost for supervised learning is high. To address these two issues, the study proposes an unsupervised group recommendation framework, ITR, which identifies the user groups in an unsupervised manner and performs two self-supervised learning tasks. Experimental results show clear improvements.
Strengths: (1) The study has conducted extensive comparisons against many baselines and shows superior performance.
(2) The writing is mostly easy to follow, although there are some missing details.
(3) The study also conducted an A/B test to show the possibility of real-world application.
Weaknesses: (1) The claimed contribution on the introduction of unsupervised learning does not align closely with the experimental results. More specifically, I think the current study did not show that the improvement indeed came from the unsupervised learning. To show that, the authors can add an additional variant of "known group annotation + GIM + PAR + PGR".
(2) The main issues that the author pointed out in the current group recommendation problem are that the group assignment can be dynamic and the annotation cost can be high. But the current experimental setting does not seem to support the evaluation for such scenarios. Does the group assignment really change a lot? Is it really expensive to do the annotation? Can we just perform several runs of clustering methods (the ones need a pre-defined number of clusters) to decide the number of clusters? How much more would it cost compared to the current framework?
Technical Quality: 3
Clarity: 3
Questions for Authors: (1) Can the authors provide more details about the binary personalized ranking loss? What are considered the positive and negative pairs? What is the strategy of finding these pairs?
(2) What is unique about the current clustering method? In the current manuscript, it seems that replacing it with any other clustering methods that do not require a pre-defined number of clusters still works for the whole framework.
(3) What are the evaluation protocols for group and user recommendation?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: (1) This study should include a comprehensive complexity analysis as it tries to address the annotation cost issues in the context of group recommendation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## **Response to Reviewer GTSo [1/3]**
Thanks for your valuable and constructive reviews. We appreciate your insights and suggestions, as they will undoubtedly contribute to improving the quality of our paper. In response to your concerns, we provide answers to the questions as follows in order.
### **Alignment between Contribution and Results**
Thanks for your suggestion. In our view, unsupervised learning is a more challenging setting than supervised learning (please correct us if you have a different opinion). Therefore, unsupervised user/group recommendation is more challenging than the supervised counterpart (see the experimental results in Figure 1). In this paper, we aim to point out that most of the existing deep group recommendation methods are supervised, but in real-world scenarios, the labels are often missing. Therefore, we need to perform user/group recommendation in an unsupervised manner. In addition, under the unsupervised learning setting, we develop a novel group recommendation method via an identify-then-recommend schema. Concretely, the designed group identification module aims to discover the user groups from the user embeddings, and the proposed pseudo group recommendation pre-text task and the pull-and-repulsion pre-text task aim to produce high-quality group embeddings and conduct precise recommendations, respectively. Besides, thanks for your suggestion on the experiment. However, the variant of "known group annotation + GIM + PAR + PGR" may not be reasonable, since if we already know the group annotation, we do not need to conduct group identification. Therefore, if the group annotation is missing (the setting of this paper), we must use the GIM and then conduct the ablation studies on the PAR and PGR. The GIM is a must-use module, and its effectiveness can be verified through the performance of the downstream tasks. The experimental results can be found in Table 3 and Table 4. We have added these claims in the revised paper.
### **Annotation Costs and Dynamic Change**
Thanks for your question and suggestion. This paper aims to deploy the proposed method in a real-time large-scale industrial recommendation system. On the open benchmarks, we admit that the annotations of users and groups already exist, and in the experiments we remove them for the unsupervised experimental setting. We also admit that annotating these toy datasets may not be very expensive. However, the group assignments must change dynamically during training, especially at the early stage. Note, moreover, that on real-time large-scale data, the annotation costs a lot and the distribution shifts dynamically. For example, in our scenario, the application receives 130 million page views and 50 million user views per day. The group assignments and annotations change daily since it is a real-time recommendation system. In addition, this is a newly launched application, so user activities shift drastically, e.g., from new users to old users. We believe this leads to large annotation costs and distribution shifts, and we aim to develop a purely unsupervised group identification method for user/group recommendation. In this scenario, performing several runs of clustering methods (the ones that need a pre-defined number of clusters) is not applicable, since the search space becomes very large, especially because we do not know the cluster number for the daily data. The current methods cannot deal with the data without a given number of clusters. Our proposed method needs only one pass to determine the cluster number, which fulfills the requirement of the daily data. The complexity of the multiple trials would be T times that of our proposed method, where T denotes the number of trials. The details regarding the application can be found in Section 6.4. We have added these explanations in the revised paper.
### **Details about BPR Loss**
Thanks for your question. BPR loss is a commonly used loss function in recommendation. In our proposed method, we follow ConsRec [1] for the BPR loss, and $\mathcal{L}\_{\text{U2I}}$ is the same as $\mathcal{L}\_{\text{user}}$ in ConsRec. It is formulated as $\mathcal{L}\_{\text{user}} = -\sum\_{u\_s \in \mathcal{U}}{\frac{1}{|\mathcal{D}\_{u\_s}|}} \sum\_{(j,j')\in\mathcal{D}\_{u\_s}}{\ln\sigma(\hat{r}\_{sj}-\hat{r}\_{sj'})}$, where $\mathcal{D}\_{u\_s}$ is the user-item training set sampled for user $u\_s$, and $(j,j')$ denotes that user $u\_s$ prefers the observed item $i\_j$ over the unobserved item $i\_{j'}$. For the sampling of positive and negative sample pairs, we also follow ConsRec, i.e., randomly sampling from the missing data as negative instances to pair with each positive instance. For the number of negative samples, ConsRec conducts experiments and analyses in Figure 8 of its original paper. For fairness, we keep the original setting of ConsRec. We have added these details in the revised paper.
[1] Wu X, Xiong Y, Zhang Y, et al. Consrec: Learning consensus behind interactions for group recommendation[C]//Proceedings of the ACM Web Conference 2023. 2023: 240-250.
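For readers unfamiliar with the BPR objective above, here is a minimal sketch of the per-user pairwise term (the function name is hypothetical and this is not the authors' or ConsRec's implementation):

```python
import math

def bpr_loss(scores_pos, scores_neg):
    """Binary personalized ranking loss over (observed, unobserved) item
    pairs: each pair contributes -ln(sigmoid(r_pos - r_neg)), so the
    observed item's predicted score is pushed above the unobserved one's.
    Returns the mean over the pairs of a single user."""
    assert len(scores_pos) == len(scores_neg)
    total = 0.0
    for r_pos, r_neg in zip(scores_pos, scores_neg):
        sigmoid = 1.0 / (1.0 + math.exp(-(r_pos - r_neg)))
        total += -math.log(sigmoid)
    return total / len(scores_pos)
```

With a large positive margin the loss approaches 0; with equal scores it equals ln 2, matching the rebuttal's formulation up to the per-user averaging.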
**to be continued...**
---
Rebuttal 2:
Comment: ## **Response to Reviewer GTSo [2/3]**
### **Unique about Current Clustering Method**
Thanks for your question. Clustering methods that need a pre-defined number of cluster centers, such as k-means and spectral clustering, cannot be applied in this scenario. Other clustering methods that do not require a pre-defined number of cluster centers, such as DBSCAN, are also hard to apply to our scenario, since 1) the data distribution of our application changes dynamically, so it is hard to determine the density-related parameters, such as the radius and the threshold, whereas our proposed group identification module can automatically estimate the density and discover the groups via the heuristic merge-and-split strategy, and can thus be easily applied in a real-time large-scale unsupervised recommendation system; and 2) they cannot be integrated into our framework without our proposed pre-text tasks, including the pseudo group recommendation pre-text task and the pull-and-repulsion pre-text task. We have added these claims in the revised paper.
### **Evaluation Protocols**
Thanks for your question. For the evaluation, we follow ConsRec and adopt four metrics to evaluate the ranking capability of the methods: Hit Rate@{5, 10} and Normalized Discounted Cumulative Gain@{5, 10}.
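As a rough illustration of these protocols (hypothetical function names; not the evaluation code used in the paper), HR@K and single-target NDCG@K can be computed as:

```python
import math

def hit_rate_at_k(ranked_items, target, k):
    """HR@K: 1 if the held-out target item appears in the top-K ranked list."""
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    """NDCG@K with a single relevant item: 1/log2(rank+1) if the target is
    ranked in the top K, else 0. With one relevant item the ideal DCG is 1,
    so no extra normalization term is needed."""
    if target in ranked_items[:k]:
        rank = ranked_items.index(target) + 1  # 1-based rank
        return 1.0 / math.log2(rank + 1)
    return 0.0
```

HR@K only checks membership in the top K, while NDCG@K additionally rewards placing the target higher in the list, which is why the two metrics can disagree (as in the HR@5 vs. HR@10 discussion above).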
**to be continued...**
---
Rebuttal 3:
Comment: ## **Response to Reviewer GTSo [3/3]**
### **Complexity Analyses**
Thanks for your suggestion. Following your suggestion, we conduct complexity analyses of our proposed ITR method. First, we denote the number of users, the number of groups, and the average group size as $n$, $k'$, and $n/k'$, respectively. In the adaptive density estimation, calculating the radius proposal for one group takes $\mathcal{O}(1)$ time and $\mathcal{O}({k'}^{2})$ space, and for all groups $\mathcal{O}(k')$ time and $\mathcal{O}({k'}^{2})$ space. The density estimation for all groups then takes $\mathcal{O}(k' \times n/k')$ time and $\mathcal{O}(n \times k')$ space. Subsequently, in the heuristic merge-and-split strategy, the explore step takes $\mathcal{O}(k')$ time and $\mathcal{O}(1)$ space, and the exploit step takes $\mathcal{O}(k' \times n/k')$ time and $\mathcal{O}(nk')$ space. Besides, the proposed pseudo group recommendation pre-text task takes $\mathcal{O}(n \times k')$ time and $\mathcal{O}(n \times k')$ space, the proposed pull-and-repulsion pre-text task takes $\mathcal{O}(nk'+{k'}^{2})$ time and $\mathcal{O}(nk')$ space, and the BPR loss takes $\mathcal{O}(n)$ time and $\mathcal{O}(n)$ space. Overall, the time complexity of our proposed ITR method is $\mathcal{O}(k'+ k' \times n/k'+ k'+ k' \times n/k'+ n \times k'+ nk'+{k'}^{2}+n) \rightarrow \mathcal{O}(nk'+{k'}^{2})$ and the space complexity is $\mathcal{O}({k'}^{2}+ n \times k'+1+ nk'+ n \times k'+ nk'+n)\rightarrow \mathcal{O}(nk'+{k'}^{2})$. Therefore, our proposed method does not bring large memory or time costs, since its complexity is linear in the number of users.
---
Rebuttal 4:
Comment: Dear Reviewer GTSo,
Thank you very much for your precious time and valuable comments. We hope our responses have addressed your concerns. Please let us know if you have any further questions. We are happy to discuss them further. Thank you.
In addition, we have attached the revised paper, if necessary, you can check it: https://anonymous.4open.science/r/NeurIPS-7269-ITR-revised-7D83/NeurIPS-7269-ITR-revised.pdf.
Best regards,
Authors
---
Rebuttal 5:
Title: Follow Up for Reviewer GTSo
Comment: Dear Reviewer GTSo,
We highly appreciate your valuable and insightful reviews. We hope the above response has addressed your concerns. If you have any other suggestions or questions, feel free to discuss them. We are very willing to discuss them with you in this period. If your concerns have been addressed, would you please consider raising the score? It is very important for us and this research. Thanks again for your professional comments and valuable time!
Best wishes,
Authors
---
Rebuttal 6:
Title: Comment
Comment: Thank you for preparing the rebuttal, which has mostly addressed my concerns. I have decided to raise my rating.
---
Rebuttal Comment 6.1:
Comment: Dear Reviewer GTSo,
Thanks for your professional reviews and valuable suggestions. They improve the quality of our paper significantly. We are glad that our responses address your concerns well and that you are willing to raise the score. If you have any further questions, we are very willing to discuss them with you.
Warm regards,
Authors of Submission 7269 | Rebuttal 1:
Rebuttal: We extend our sincere gratitude to the SAC, AC, and PCs for their dedicated efforts and constructive feedback. Your comments have been invaluable in enhancing the quality of our manuscript. We have meticulously addressed each of your questions and hope our responses satisfactorily address your concerns. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Binocular-Guided 3D Gaussian Splatting with View Consistency for Sparse View Synthesis | Accept (poster) | Summary: This paper proposes a 3D Gaussian Splatting method to render novel views with sparse inputs. It first uses a pre-trained keypoint matching network to generate dense point initializations and proposes a consistency loss between warped binocular stereo images. Meanwhile, it introduces an opacity penalty strategy to further learn accurate Gaussian points. The proposed method achieves state-of-the-art performance on the LLFF, DTU and Blender datasets.
Strengths: 1. The dense points generated from the pre-trained keypoint matching network provide a much better initialization for 3DGS with sparse inputs.
2. Using the generated binocular views to add a consistency loss introduces more constraints in the sparse scenario.
3. The reported results seem to be much better than those of existing methods.
Weaknesses: 1. While the reported results show that the opacity penalty strategy brings the largest improvement, its motivation is not clear.
- Why can adding a coefficient to the opacity guide the Gaussian points to be closer to the scene surface (L52, "guiding the remaining Gaussians to be closer to the scene surface"), especially under the setting of $\lambda=0.995$ (which differs only slightly from 1.0)? And are there any ablation results with different value configurations of $\lambda$? Because the coefficient only acts on the opacity, the claim that it guides the positions of the Gaussians closer to the scene surface is not convincing enough to me.
- The statement that "the Gaussian close to the surface has a larger gradient and the Gaussian far from the surface has a smaller gradient" (Line176-179) seems unconvincing. An accurate Gaussian near the surface can render accurate colors with a relatively small loss and should have small gradients. On the contrary, a wrong Gaussian may have a larger gradient, as in the "wall" situation in the overfitting scenario.
- That such a small modification can bring nearly a 2 dB improvement is surprising, and the explanation and content of this part are not adequate. Adding more analysis might help make it more convincing.
2. The proposed binocular stereo consistency is actually a method to regularize the model using unseen virtual views.
- Using virtual unseen views is a well-demonstrated existing strategy that has been adopted by many methods like [1,2,3], and the difference of this method lies in the sampling space of the virtual views (the sampling space in this paper only has horizontal translation and no rotation).
- Intuitively, using a more diverse sampling space like [1,2,3] may be more powerful, because it can use more unseen views from different angles to regularize the model. So what are the advantages of sampling only in the horizontal direction, and are there any experimental comparisons?
- The disparity-based or depth-based warping will introduce occlusions, which have a great impact on performance. How this is mitigated is not clarified in the paper.
- There might be an inaccuracy in the method. It seems that you want to use backward warping to warp the right image into the left camera coordinate frame (Line148-160). Thus the depth in Eq. (3) needs to be pixel-aligned with the left image, but you just use the depth of the right image. This confuses me a lot.
3. It seems that the dense initialization using the keypoint matching network plays an important role in the overall performance.
- How sensitive is the method to different matching networks?
- How does the method perform without the dense or sparse initialization and with only random initialization, as in DNGaussian? In the sparse scenario, popular SfM methods are hard to make work and often fail to generate accurate points.
4. There are also some confusions in the experiment.
- The results reported for FSGS differ significantly from those in its original paper, and there seems to be no explanation for this in the paper.
- Some visualization results (e.g., Fig. 3, Fig. 4 and Fig. 9) are not labeled with the corresponding methods, which makes comparison difficult.
- Since the runtime of 3DGS-based methods is relatively fast, reporting error bars or standard deviations over multiple experiments could help enhance the credibility of the paper.
[1] SPARF: Neural Radiance Fields from Sparse and Noisy Poses, CVPR2023.
[2] GeCoNeRF: Few-Shot Neural Radiance Fields via Geometric Consistency, ICML2023.
[3] RegNeRF: Regularizing Neural Radiance Fields for View Synthesis from Sparse Inputs, CVPR2022.
Technical Quality: 2
Clarity: 3
Questions for Authors: Refer to weaknesses for details. And due to these doubts, I tend to give the borderline and hope to see the author's response.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Have declared in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer gh5j's invaluable feedback and the time invested in evaluating our work. We respond to each question below.
**_Q1: The opacity penalty guides the remaining Gaussians closer to the scene surface._**
The opacity penalty prunes the Gaussians that are far from the scene surface as much as possible, while preserving the Gaussians close to the scene surface. Our description may be ambiguous here, and we will revise it in the revision.
We performed ablation studies on the LLFF dataset with the value of $\lambda$ ranging from 0.96 to 1.0; the results are shown in **Table G**. We find that the best performance is achieved when $\lambda$ is 0.995.
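If the penalty is applied as a simple multiplicative decay of each Gaussian's opacity at every training iteration (an assumption for illustration; the paper's exact formulation may differ, and the function name is hypothetical), a toy sketch is:

```python
def apply_opacity_penalty(opacities, lam=0.995):
    """Multiply every Gaussian's opacity by lambda each iteration:
    Gaussians whose opacity is not reinforced by the photometric
    gradients decay toward zero and eventually fall below the pruning
    threshold, while well-supported Gaussians are pushed back up by
    the rendering loss."""
    return [lam * o for o in opacities]

# Without reinforcing gradients, opacity shrinks geometrically: after
# 1000 iterations, 0.995**1000 is roughly 0.0067, far below a typical
# pruning threshold, even though 0.995 differs only slightly from 1.0.
```

This illustrates why a coefficient so close to 1.0 can still have a large cumulative effect over many iterations.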
**_Q2: "the Gaussian close to surface has larger gradient and the Gaussian far from the surface has smaller gradient"._**
Our description may not have been clear enough: the "gradient" refers to the opacity. For instance, when initialized with a random point cloud, during the optimization process Gaussians that are far from the scene surface have their opacity reduced and are thereby pruned, while Gaussians closer to the scene surface see an increase in opacity and are thus preserved. However, not all Gaussians close to the surface have precise positions; hence, we employ the opacity regularization to further eliminate Gaussians deviating from the surface, thereby enhancing the quality of novel view images.
**_Q3: Explanation and content of Opacity Penalty Strategy is not adequate, adding more analysis._**
We provide more detailed ablation studies on the LLFF, DTU, and Blender datasets, as shown in **Table B** and **Table C** in the attached PDF, and the results show that the Opacity Penalty Strategy is indeed very effective, despite its simplicity. We will correct the places that are not clear enough in the revision.
**_Q4: Advantages of sampling only in the horizontal direction._**
Although some papers regard unseen views or adjacent training views as source views and warp them to the reference view, similar to our image-warping method based on binocular vision, these approaches do not consider the impact of the distance and angle between the source view and the reference view on the effectiveness of supervision. **Figure A** in the attached PDF shows the warped images and error maps when using different source views. It can be observed that when the source view is rotated relative to the reference view or is far away from the reference view, the warped image suffers from significant distortions due to depth errors and occlusions. This results in a large error compared with GT image, which hinders the convergence of the image-warping loss. In contrast, slight camera translations that cause minor changes in view without rotation and negligible impact from occlusions, allow the errors between the warped image and the GT image to primarily stem from depth errors, facilitating better optimization of depth.
**Table A** in the attached PDF shows the quantitative comparison using different source images on the LLFF and the DTU dataset. The binocular vision-based view consistency method achieves the best performance.
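As a rough illustration of this binocular warping (a sketch under an assumed rectified pinhole setup; `f` and `baseline` are placeholder values, not the paper's), the left view can be reconstructed by sampling the right image at a horizontal offset given by the disparity computed from the left view's depth:

```python
import numpy as np

def warp_right_to_left(right, depth_left, f=1.0, baseline=0.1):
    """Nearest-neighbour horizontal warp: disparity = f * baseline / depth."""
    h, w = right.shape
    xs = np.tile(np.arange(w, dtype=float), (h, 1))
    disparity = f * baseline / depth_left
    src_x = np.clip(np.round(xs - disparity).astype(int), 0, w - 1)
    rows = np.tile(np.arange(h)[:, None], (1, w))
    return right[rows, src_x]

# With depth chosen so that disparity is exactly one pixel,
# every column samples its left neighbour in the right image.
right = np.arange(12, dtype=float).reshape(3, 4)
warped = warp_right_to_left(right, np.full((3, 4), 0.1))
```

Comparing such a warped image against the ground-truth left image yields the view consistency loss; with only a small horizontal translation, the residual error is dominated by depth error rather than occlusion.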
**_Q5: Using more diverse sampling space._**
We perform ablation studies for the horizontal sampling range $d_{max}$ on the LLFF and DTU datasets, as shown in **Table H** of the attached PDF; the results show that a larger sampling range is not necessarily better. There is almost no performance increase on either the LLFF or the DTU dataset when $d_{max}$ is greater than 0.4.
Meanwhile, **Table A** shows that camera views with rotation lead to performance degradation.
**_Q6: The impact of occlusion on performance._**
Since we only move the camera slightly, there is no serious occlusion between the reference view and the source view, so we believe that the impact of occlusion can be ignored.
**_Q7: An inaccuracy about the equation of image warping._**
Thank you for pointing out the inaccuracy in the equation, warping the right image to the left view requires the depth of left image instead of the depth of right image. We will correct the equation in our revision.
**_Q8: The impact of initialization on overall performance._**
We conduct experiments on the LLFF and DTU datasets with several different initializations, including random initialization, sparse initialization, and initial point clouds constructed with the LoFTR [1] matching network. The results are shown in **Table K** of the attached PDF.
PDCNet+ obtains more matching points than LoFTR, so using the initial point cloud from PDCNet+ results in better performance. For the LLFF dataset, the input views have large overlap, so SfM methods can generate point clouds of better quality than LoFTR; therefore, using sparse point clouds generated by SfM as initialization yields better performance than LoFTR. However, for the DTU dataset, where the overlap among input views is small, SfM-generated point clouds are too sparse, and SfM initialization performs even worse than random initialization.
**_Q9: The results reported by FSGS differ significantly from those in the paper._**
We cannot reproduce the results reported in the FSGS paper. The results listed in our paper are reproduced using their official code for a fair comparison. We will explain this in our revision. Note that even using the results reported in the FSGS paper, our results remain significantly better.
**_Q10: Some visualization results are not labeled with the corresponding methods._**
For reasons we have not identified, the method names labeled in the visualization results cannot be displayed in some PDF readers; please use Chrome to view these figures. We will address this issue in the revision.
**_Q11: Report the error bar or standard deviations._**
We run our method 10 times for each scene in the LLFF and DTU datasets. Error bars are shown in **Figure D**.
[1] LoFTR: Detector-free local feature matching with transformers. CVPR 2021.
---
Rebuttal Comment 1.1:
Comment: Thanks for authors' response.
1. I'm glad to see the ablation studies of the opacity penalty strategy; they make the effectiveness of this module more convincing. But the motivation is still not clear to me. To my knowledge, this strategy adds a factor (less than 1.0) to weight down the original opacity and push part of the Gaussians below the pruning threshold. How about directly adjusting the threshold of the pruning operation instead?
2. The explanation of the binocular stereo consistency does not fully convince me. The authors explain that the advantage of sampling the virtual view only in the horizontal direction is the small rotation and reduced occlusion. I think this is just range control of the virtual-view sampling space, which can be achieved by any existing sampling method, and it does not show the advantage of binocular sampling; after all, I can translate the camera in any direction with no rotation or only small rotation.
3. The results in Tab. K show that the dense matching model plays a very important role in the overall performance, so the method is relatively sensitive to the matching model. The effectiveness of the proposed modules seems to rely heavily on the matching model, and random initialization performs poorly, especially on the DTU dataset. All these results should be analyzed and stated in the paper.
4. Methodological errors and additional explanations about experiments and comparisons should be corrected and stated in the paper.
Therefore, I still think my original rating is fair and this paper requires major revision on the method and experiment.
---
Reply to Comment 1.1.1:
Title: Responses to Reviewer gh5j
Comment: Ablation studies for the pruning threshold on LLFF dataset with 3 input views.
| **pruning threshold** | 0.005 (baseline) | 0.010 | 0.015 | 0.020 | 0.025 | 0.030 | 0.035 | 0.040 | 0.045 | 0.050 |
|-----------------|------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| **PSNR** | 20.48 | 20.30 | 20.23 | 20.05 | 19.93 | 19.93 | 20.09 | 19.96 | 20.08 | 19.91 |
| **SSIM** | 0.715 | 0.711 | 0.708 | 0.707 | 0.705 | 0.706 | 0.707 | 0.706 | 0.707 | 0.706 |
| **LPIPS** | 0.218 | 0.212 | 0.203 | 0.205 | 0.206 | 0.205 | 0.205 | 0.205 | 0.205 | 0.204 |
We perform ablation studies for the pruning threshold, using dense initialization and view consistency loss by default. The experimental results show that directly increasing the pruning threshold leads to a performance decline.
---
Rebuttal 2:
Title: Responses to Reviewer gh5j
Comment: Dear Reviewer gh5j,
Thanks for your feedback.
**_Q1: How about directly adjusting the threshold of the pruning operation._**
It is clear that the opacity penalty strategy is different from setting a fixed pruning threshold. The opacity penalty strategy is applied to all Gaussians and acts as a global constraint. In contrast, directly adjusting the pruning threshold only affects Gaussians with opacity lower than the threshold. For Gaussians with opacity higher than the threshold but not on the scene surface, a fixed pruning threshold is ineffective.
We are conducting experiments on the LLFF dataset with different pruning thresholds; the results will be provided later.
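A toy numeric contrast between the two mechanisms (illustrative numbers only, not the paper's settings): a Gaussian whose opacity stays at 0.5 is never removed by a fixed pruning threshold, whereas the multiplicative penalty, absent gradient support pushing the opacity back up, drives it below the threshold after finitely many steps:

```python
import math

lam, threshold = 0.995, 0.005
opacity = 0.5          # well above any reasonable fixed pruning threshold
steps = 0
while opacity >= threshold:
    opacity *= lam     # the penalty is applied to every Gaussian, every step
    steps += 1

# The loop count matches the closed form ceil(log(threshold / 0.5) / log(lam)).
closed_form = math.ceil(math.log(threshold / 0.5) / math.log(lam))
```

In training, only Gaussians that do not receive opacity-increasing gradients (i.e., those off the surface) follow this decaying trajectory.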
**_Q2: The range control of the sampling space can be achieved by any existing sampling method, and the camera can be translated in any direction._**
Yes, existing methods might be able to change the position and angle of the unseen view to achieve the same effect as ours, but they did not do so. To the best of our knowledge, our method is the first to use viewpoint translation based on Gaussian Splatting to address sparse view synthesis, and our experiments show that it is more effective than other sampling methods.
Indeed, the camera can be translated in any direction, but that still falls under translation without rotation, which we believe is essentially no different from horizontal movement.
**_Q3: The method is relatively sensitive to the matching model and relies heavily on it._**
Dense initialization is an indispensable part of our method. As we mentioned in Section 3.4 of our paper, our method needs a dense initialization to achieve a better geometric starting point, thereby improving the quality of 3DGS. Additionally, experiments show that the matching model we selected is effective across different datasets.
Although our method performs poorly when using random initialization, it still outperforms the baseline 3DGS, especially on the DTU dataset, which proves that the other two components of our method are effective. | Summary: This paper proposes a new method for novel view synthesis in a sparse input setting. The authors propose to exploit stereo consistency as a self-supervision signal, in contrast to the use of priors such as diffusion, which tends to produce less precise geometry. Specifically, the work proposes the use of binocular stereo consistency as a guiding signal, i.e. with a horizontal translation a disparity between two images can be calculated given intrinsics and depth, and this can be leveraged to warp the translated view to the input image and assess the consistency between images. Furthermore, the authors address the issue of filtering redundant Gaussians by introducing a decay scheme based on penalising the opacity, leading to pruning of Gaussians far from the surface of the object. The paper evaluates the proposed approach in a sparse novel-view scenario commonly used in the literature, using 3 common datasets. The authors present competitive results in their evaluation. An ablation of system elements/contributions is also included.
Strengths: - This paper touches upon an interesting and important topic for the community - novel view synthesis in a constrained scenario. This work is among the first ones to propose a solution based on 3D Gaussian Splatting.
- The method proposed by the authors draws some inspiration from monocular depth estimation and formulates a self-supervision loss component applicable in 3D reconstruction. I find the idea novel and interesting - I appreciate the application of a relatively simple concept (correlation between depth and disparity) to the task.
- I believe that the big strength of the method is the focus on precise geometry and the lack of dependence on priors from pre-trained models.
- The paper clearly outlines performance improvements with respect to the previous state of the art, per-scene optimisation methods.
- The work evaluates the performance of the proposed method on 3 datasets well-known in the community showing improvements in all of them. I believe it is a suitable selection used throughout the literature (following the same setup of 3,6,9 views). The ablation study is suitable for the work showing the performance impact of separate components (dense initialisation, disparity consistency loss, opacity penalty).
- I find it particularly nice that the binocular stereo consistency can be added to effectively any 3D reconstruction method (provided having depth as an output). While I don't expect it to be heavily affecting dense reconstruction, I believe it would be looked into and tried to incorporate by many researchers working in sparse reconstruction.
- The paper is written clearly, I didn't have any trouble following it.
- The method is described in a way that should be reproducible - I believe I wouldn't have significant trouble reimplementing it.
Weaknesses: - The main weakness I see is the lack of comparison to generalisable methods [1, 2, 3, 4, 5, 6, 7]. The DTU dataset, which is already used in the paper, is also commonly used to benchmark methods that perform a training step on a selection of scenes. While the comparison between generalisable and per-scene optimisation setups is favourable towards the former, I believe the proposed method would be competitive. This would potentially be an interesting comparison, both quantitatively (I would expect competitive scores) and qualitatively (varying artefacts between the two approaches), and would additionally increase the value of the benchmark.
- I believe that computational performance comparison is a missed opportunity. Proposing one of the first methods for sparse novel view synthesis would benefit greatly by emphasising the speed differences between the methods. It would be a great addition to show the training and inference speed with respect to NeRF-based methods (to show improvement in quality metrics and speed), and other GS-based methods (to show probably similar speed, but increased reconstruction quality).
- Both disparity consistency loss and opacity-based pruning rely on the choice of hyperparameters - namely camera shift $d_{max}$ and decay coefficient $\lambda$. It would be great if this choice was motivated and its influence on the performance analysed (including analysis of whether the same values should be applied across tested datasets).
- The authors show estimated depth maps for their method with and without the use of view consistency loss. It would be good to see the depth maps from other methods (e.g. DNGaussian) - it would be interesting to see a comparison with methods that use depth priors as supervision. It would also be nice to see the comparison to ground truth depth (where not available, an interesting comparison would be the reconstruction method in a dense setup scenario - as a proxy for ground truth)
- The authors mention in the checklist that results in the paper are reported as an average performance - it would be good to specify whether experiments were run multiple times, and investigate how the performance varies across the runs.
[1] Alex Yu, Vickie Ye, Matthew Tancik, Angjoo Kanazawa, *pixelNeRF: Neural Radiance Fields from One or Few Images*, IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2021
[2] Anpei Chen, Zexiang Xu, Fuqiang Zhao, Xiaoshuai Zhang, Fanbo Xiang, Jingyi Yu, Hao Su, *MVSNeRF: Fast Generalizable Radiance Field Reconstruction from Multi-View Stereo*, IEEE/CVF International Conference on Computer Vision, 2021
[3] Mohammed Suhail, Carlos Esteves, Leonid Sigal, Ameesh Makadia, *Generalizable Patch-Based Neural Rendering*, European Conference on Computer Vision, 2022
[4] Haotong Lin, Sida Peng, Zhen Xu, Yunzhi Yan, Qing Shuai, Hujun Bao, Xiaowei Zhou, *Efficient Neural Radiance Fields for Interactive Free-viewpoint Video*, SIGGRAPH Asia, 2022
[5] Mukund Varma T, Peihao Wang, Xuxi Chen, Tianlong Chen, Subhashini Venugopalan, Zhangyang Wang, *Is Attention All That NeRF Needs?*, International Conference on Learning Representations, 2023
[6] Thomas Tanay, Matteo Maggioni, *Global Latent Neural Rendering*, IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
[7] Haofei Xu, Anpei Chen, Yuedong Chen, Christos Sakaridis, Yulun Zhang, Marc Pollefeys, Andreas Geiger, Fisher Yu, *MuRF: Multi-Baseline Radiance Fields*, IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024
Technical Quality: 3
Clarity: 4
Questions for Authors: - How was the camera shift $d_{max}$ and decay coefficient $\lambda$ chosen? Is the same value appropriate to all datasets? I would imagine it would be increased and decreased based on the distance of the object to the camera. Also, are the values of camera shift sampled uniformly?
- How would the application of disparity consistency loss to the existing methods look like? I would imagine it would be rather straightforward. Did the authors try that?
- Previous methods, as pointed out by the authors, use depth priors (using pre-trained models) as the supervision. Have the authors tried adding such supervision to their approach? I understand that the method is designed to be prior-free but it would be interesting to see an indication whether this would improve the performance.
- Zoom-in areas could be put side-by-side (possibly in the appendix, or another row in the figure with zoom-ins). It is a bit hard to compare small elements the authors want the reader to focus on when zooming the document - the focus areas (red boxes) are a bit far away and hard to notice small differences.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors provide a brief limitation section. I believe the issue with consistency constraints in textureless areas is a very good mention. Also, the experiment with masked training shows that the limitation is identified correctly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer UVce for the invaluable feedback and time invested in evaluating our work. We respond to each question below.
**_Q1: Comparison to generalizable methods._**
Thank you for listing some of the latest generalizable novel view synthesis methods. We run the state-of-the-art MuRF method on the LLFF dataset for comparison with our method under the same input conditions. For the DTU dataset, we directly use the evaluation results from the original paper. The results are shown in **Table D** and **Table E** in the attached PDF.
Our method outperforms the state-of-the-art MuRF method on the LLFF dataset, and also shows better performance when using 6 views and 9 views as input on the DTU dataset. MuRF performs better on the DTU dataset when using 3 views as input. However, generalizable methods require large-scale datasets and time-consuming training to learn a prior.
**Figure C** in the attached PDF shows the visual comparison between our method and MuRF on the LLFF dataset.
**_Q2: Computational performance comparison._**
Thank you for your advice. We provide a comparison on computational performance of our method and prior works. All the results are tested under one single RTX3090 GPU. The times are listed in **Table F** in the attached PDF.
**_Q3: Ablation studies for hyperparameters $\lambda$ and_ $d_{max}$.**
**Ablation study for $\lambda$.**
We perform ablation studies on the LLFF dataset with the value of $\lambda$ ranging from 0.96 to 1.0, and the results are shown in **Table G** in the attached PDF.
We find that the best performance is achieved when the value of $\lambda$ is 0.995. We set the $\lambda$ to 0.995 for all datasets.
**Ablation study for $d_{max}$.**
We perform ablation studies for $d_{max}$ on the LLFF and DTU datasets. The value of $d_{max}$ gradually increases from 0.1 to 0.8; the results are shown in **Table H** in the attached PDF.
We find that there is almost no performance increase on both the LLFF and the DTU dataset when $d_{max}$ is greater than 0.4, so we set the $d_{max}$ to 0.4 for all datasets.
The values of camera shift are sampled uniformly.
**_Q4: Compare depth maps with other methods and pseudo-GT depth._**
Figure 3 in our paper shows a visual comparison of rendered novel images and depth maps between our method and others, including DNGaussian. We apologize that, for reasons we have not identified, the method labels below the images cannot be displayed in some PDF readers; please use Chrome to open our PDF to see the visual comparisons. We will address this issue in the revision.
Additionally, the **Figure B** in the attached PDF presents comparisons of depth maps from several methods, including pseudo-GT depth. Pseudo-GT depth is obtained by training using all available views in baseline 3DGS.
**_Q5: Experimental results from multiple run times._**
We run our methods 10 times for each scene in LLFF and DTU dataset. Error bars are shown in **Figure D** of the attached PDF.
**_Q6: Apply the disparity consistency loss to the existing methods._**
We conduct experiments to train DNGaussian on LLFF dataset using disparity consistency loss. The results are shown in **Table I** of the attached PDF.
Although we could not reproduce the performance reported in the original paper, we find that the performance is improved by comparing the quantitative results before and after using the disparity consistency loss. This indicates that the disparity consistency loss is effective in other methods.
**_Q7: Apply depth priors as supervision._**
We experiment with L1 depth loss and DNGaussian Depth Regularization on the LLFF and DTU datasets, employing dense initialization and opacity penalization by default. Monocular depth is obtained by pre-trained DPT, same as DNGaussian. We calibrate monocular depth using sparse point clouds from SfM when using L1 depth loss. The results are shown in **Table J** of the attached PDF.
The L1 depth loss and DNGaussian Depth Regularization both result in performance degradation. We attribute this mainly to errors in the monocular depth, which persist even after calibration and lead to decreased performance.
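The calibration step mentioned above can be sketched as a least-squares fit of a scale and shift aligning monocular depth to sparse SfM depths at matched points (hypothetical function and data, shown only to illustrate the idea, not the authors' exact procedure):

```python
import numpy as np

def calibrate_depth(mono, sfm):
    """Fit sfm ~ a * mono + b at sparse matched points via least squares."""
    A = np.stack([mono, np.ones_like(mono)], axis=1)
    (a, b), *_ = np.linalg.lstsq(A, sfm, rcond=None)
    return a, b

mono_pts = np.array([1.0, 2.0, 3.0, 4.0])
sfm_pts = 2.0 * mono_pts + 0.5   # exact affine relation for this sanity check
a, b = calibrate_depth(mono_pts, sfm_pts)
```

The fitted `(a, b)` would then be applied to the full monocular depth map before it is used as supervision; residual non-affine errors in the monocular prediction are what such a calibration cannot remove.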
**_Q8: Put the zoom-in areas side-by-side._**
We really appreciate your suggestion, and we will make appropriate adjustments to the layout of zoom-in areas in the revision.
---
Rebuttal Comment 1.1:
Title: Thanks for detailed response
Comment: Dear Authors,
Thank you for the amount of information provided in response to my review. I have read your rebuttal.
I find the experiment with using your additional supervision with DNGaussian particularly convincing. I saw that performance is very competitive against MuRF (except 3 views DTU but that may be due to how DTU is evaluated, i.e. same views for all the scenes).
One last worry I have is whether the choice of $\lambda$ is empirically easy. The method seems to be rather sensitive to small changes, unless the value of $0.995$ is the best or close to the best for all datasets.
---
Rebuttal 2:
Title: Responses to Reviewer UVce
Comment: Dear Reviewer UVce,
Thanks for your feedback.
**_Q1: Whether the choice of $\lambda$ is empirically easy._**
We conduct ablation studies for $\lambda$ on the DTU and Blender datasets, the results are as follows:
Table G2: Ablation studies for $\lambda$ on DTU dataset with 3 input views.
| $\lambda$ | 0.960 | 0.970 | 0.975 | 0.980 | 0.985 | 0.990 | 0.995 | 0.998 | 1.0 |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| PSNR | 17.06 | 18.11 | 19.60 | 19.63 | 19.77 | 20.49 | 20.71 | 20.67 | 19.08 |
| SSIM | 0.785 | 0.813 | 0.846 | 0.848 | 0.853 | 0.861 | 0.862 | 0.862 | 0.832 |
| LPIPS | 0.223 | 0.196 | 0.158 | 0.152 | 0.139 | 0.121 | 0.111 | 0.109 | 0.131 |
Table G3: Ablation studies for $\lambda$ on Blender dataset with 8 input views.
| $\lambda$ | 0.960 | 0.970 | 0.975 | 0.980 | 0.985 | 0.990 | 0.995 | 0.998 | 1.0 |
|--------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| PSNR | 23.68 | 24.24 | 24.37 | 24.48 | 24.57 | 24.69 | 24.71 | 24.59 | 23.47 |
| SSIM | 0.851 | 0.864 | 0.869 | 0.871 | 0.873 | 0.872 | 0.872 | 0.870 | 0.861 |
| LPIPS | 0.143 | 0.125 | 0.118 | 0.114 | 0.107 | 0.103 | 0.101 | 0.101 | 0.112 |
We can see that the effect of $\lambda$ on performance is generally consistent across different datasets. | Summary: This paper proposes 3D Gaussian splatting from sparse views aided by pre-trained key points matching initialization, binocular stereo constraints, and opacity regularization. Binocular stereo constraints utilize perspective projection to warp synthetic stereo images into the training images for self-supervision. Opacity regularization guides the densification/pruning process to drop non-active Gaussians.
Strengths: Good initialization of Gaussians and geometrically inspired regularization for 3DGS (stereoscopic constraints and opacity decay)
Weaknesses: 1. Adding an opacity regularization does not seem enough to consider it a contribution.
2. Overstatements. The authors claim their method is prior-free, however, they use a very strong prior for dense initialization.
3. Unclear results. How is your method without any of your contributions (Table 4) 1.5 dB better than the baseline 3DGS?
4. Missing ablation studies. What is the performance if only stereo loss or only density regularization is applied?
5. I am afraid the main performance gains come from the dense keypoint initialization.
6. Inconsistent performance improvements in Table 4. Dense keypoint initialization provides ~3dB improvements, and stereo consistency and opacity regularization provides ~4dB. How is the combination of both only providing ~5dB? I understand improvements cannot be linearly added, but the gap seems unreasonable.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors provided limitations. However, the occlusions due to stereoscopic synthesis were not addressed (even though they are small due to small camera baselines).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer sng2 for the invaluable feedback and time invested in evaluating our work. We respond to each question below.
**_Q1: Opacity regularization does not seem enough to consider it a contribution._**
Although opacity regularization is a very simple strategy, it contributes significantly to the quality of sparse view synthesis. This is similar to FreeNeRF [1], whose primary contribution is frequency regularization, which, despite its simplicity, greatly improves performance. We provide more comprehensive ablation studies on the LLFF, DTU, and Blender datasets in **Table B** and **Table C** in the attached PDF. It can be observed that there is a significant performance improvement when opacity regularization is applied. We believe this simple yet effective regularization term will greatly benefit other research on sparse view synthesis.
**_Q2: Prior free claim._**
Indeed, we did not describe 'Prior Free' clearly. What we intended to convey is that, different from methods such as the FSGS [2] and DNGaussian [3] that use priors as supervision, we do not require any prior as supervision, since we can mine supervision using the self-supervised method. We will correct this claim in the revision.
**_Q3: 1.5dB higher than baseline 3DGS without any component used in Table 4._**
Our method employs the opacity penalty strategy, which conflicts with the opacity reset operation in the baseline 3DGS; hence we do not use the opacity reset operation. The first row in Table 4 shows the performance without the opacity reset operation. We inadvertently omitted an explanation of this in the ablation study, which we will add in the revision.
**_Q4: Missing ablation studies._**
We provide more comprehensive ablation studies on LLFF, DTU, and Blender datasets, as shown in the **Table B** and **Table C** in the attached PDF.
**_Q5: The main performance gains come from the dense keypoint initialization._**
Although dense initialization brings significant performance gains, relying solely on it cannot achieve or approach optimal performance. The view consistency loss and opacity penalty strategy also contribute significantly to performance improvement, as evidenced by the ablation studies in **Table B** and **Table C** in the attached PDF.
**_Q6: Inconsistent performance improvements in Table 4._**
The three components we propose to improve the quality of novel view synthesis all serve to distribute the Gaussians more accurately on the scene surface, and the three components are complementary to each other. When only dense points are used as initialization, the quality of the Gaussians has significant room for improvement over the baseline 3DGS. Adding the view consistency loss and opacity penalty further refines the already enhanced Gaussians, leaving less room for improvement. Therefore, the performance gain is smaller when the three components are used together.
Additionally, this is also verified by other works based on 3DGS, such as the ablation study in DNGaussian [3], which shows that using only Depth Regularization leads to a performance improvement of 2.5dB, whereas adding depth Normalization results in only a 1dB performance improvement.
[1] Freenerf: Improving few-shot neural rendering with free frequency regularization. CVPR2023.
[2] FSGS: Real-time few-shot view synthesis using gaussian splatting. arXiv preprint arXiv:2312.00451 (2023).
[3] DNGaussian: Optimizing sparse-view 3d gaussian radiance fields with global-local depth normalization. CVPR2024.
---
Rebuttal Comment 1.1:
Title: Thanks for your reply
Comment: Thank you for addressing my comments. I am willing to raise the score, provided the clarifications and ablation studies are included in the final paper version.
---
Reply to Comment 1.1.1:
Title: Thanks to Reviewer sng2
Comment: Dear Reviewer sng2,
Many thanks for all the helpful comments and positive assessment. We really appreciate your expertise and the score upgrade.
Best,
Authors | Summary: This paper introduces a novel method for 3D Gaussian-based sparse view synthesis. Specifically, initialized from dense point clouds, the depth-warping loss and the opacity penalty strategy are introduced to obtain accurate 3D Gaussians. Extensive experiments on the Blender, LLFF and DTU dataset have demonstrated the effectiveness of the proposed method.
Strengths: 1. This paper is well-written; the idea of a depth-warping loss to improve view consistency is reasonable.
2. The proposed opacity penalty strategy is novel and does make sense. Through this simple operation, a better geometry can be obtained.
3. The experimental comparison is comprehensive.
Weaknesses: 1. Since depth-warping loss is common in many NeRF- or 3D Gaussian-based papers, I'm not sure whether this can be viewed as a contribution.
2. Since the dense point clouds are generated from pretrained models, I do not think this method can be claimed as prior-free; please check the definition of prior-free.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see weaknesses. Since the initialization is based on dense point clouds, I think it would be better to run more ablation studies on more datasets, such as DTU and Blender. However, in general, I think this is a relatively good work and I would recommend a borderline accept for this paper. I'm glad to improve my rating if the weaknesses can be addressed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Please see weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate the reviewer q3N3 for the invaluable feedback and time invested in evaluating our work. We respond to each question below.
**_Q1: Whether depth-warping loss can be viewed as a contribution_**
Although some papers regard unseen views or adjacent training views as source views and warp them to the reference view, similar to our image-warping method based on binocular vision, these approaches do not consider the impact of the distance and angle between the source view and the reference view on the effectiveness of supervision. **Figure A** in the attached PDF shows the warped images and error maps when using different source views. It can be observed that when the source view is rotated relative to the reference view or is far away from the reference view, the warped image suffers from significant distortions due to depth errors and occlusions. This results in a large error compared with GT image, which hinders the convergence of the image-warping loss. In contrast, slight camera translations that cause minor changes in view without rotation and negligible impact from occlusions, allow the errors between the warped image and the GT image to primarily stem from depth errors, facilitating better optimization of depth. This is what we would like to report. We will highlight this in our revision.
**Table A** in the attached PDF shows the quantitative comparison using different source images on the LLFF and the DTU dataset. The binocular vision-based view consistency method achieves the best performance.
**_Q2: Prior free claim_**
Indeed, we did not describe 'Prior Free' clearly. What we intended to convey is that, unlike methods such as FSGS and DNGaussian that use priors as supervision, we do not require any prior as supervision, since we can mine supervision using a self-supervised method. We will correct this claim in the revision.
**_Q3: Ablation study on DTU and Blender datasets_**
Thank you for your suggestion; we provide ablation studies on the DTU and Blender datasets with 3 input views. The results are shown in **Table B** and **Table C** in the attached PDF. We can see that our modules are effective on other datasets. We will update our ablation studies in our revision. | Rebuttal 1:
Rebuttal: We sincerely thank the reviewers for their invaluable feedback and the time they dedicated to evaluating our work. We are pleased that the reviewers appreciated the presentation and significance of the paper. We have addressed each reviewer's comments separately, providing detailed analyses and ablation studies to resolve all the raised questions. The visualization results and tables are included in the attached PDF. Thank you once again for your insightful feedback, and we look forward to continuing the discussion.
Pdf: /pdf/ad6ebfb7d0c168e58cb6093e02eacc2e63a2cd6d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Implicit Regularization of Sharpness-Aware Minimization for Scale-Invariant Problems | Accept (poster) | Summary: This paper analyzes the implicit regularization effect of Sharpness-Aware Minimization (SAM), focusing specifically on scale-invariant problems. While existing research emphasizes sharpness, this study introduces a new concept called Balancedness, demonstrating both theoretically and empirically that SAM promotes Balancedness. Additionally, the proposed Balancedness-Aware Regularization (BAR) is shown to significantly improve computational efficiency while achieving superior performance in fine-tuning language models using LoRA.
Strengths: Originality
- Introducing a new concept of Balancedness and reinterpreting the implicit regularization effect of SAM from a new perspective.
Quality
- Consistent theoretical analysis and empirical validation with clear results.
Clarity
- Clear presentation with a structure that is easy for readers to understand.
Significance
- Revealing a new aspect of SAM's adaptability to data anomalies and proposing a computationally efficient BAR.
Soundness
- The technical claims, experimental methods, and research techniques of this paper are robust, and the central claims are well-supported by ample evidence. Both the theoretical analysis and experimental results are consistent, and the conclusions are clear. Potential concerns, such as those regarding m-sharpness, are addressed.
Contribution
- The paper theoretically analyzes the mechanism by which SAM promotes balancedness and proves that the balancedness converges, i.e., $|B_t| \rightarrow 0$. It particularly highlights the strong impact of data anomalies (outliers). This explains why SAM outperforms SGD even in the presence of data anomalies. The implicit regularization of SAM is made explicit, and a new, computationally efficient variant of SAM called Balancedness-Aware Regularization (BAR) is proposed.
- Experiments using various models confirm that the proposed BAR exhibits superior performance in fine-tuning with LoRA compared to traditional SAM and SGD.
- While SAM has been noted as a promising optimization method to enhance the performance of large language models [1], its high computational cost and implementation complexity have limited its practical use. However, this study's regularization from the perspective of Balancedness has the potential to address these issues in conjunction with LoRA.
[1] https://arxiv.org/abs/2210.14199
Weaknesses: - While LoRA training is performed with AdamW in the original paper, demonstrating that SAM or BAR is more effective than these would make the paper more solid.
- Code such as `eval.py` was not included in the supplementary materials, and its release is expected.
Technical Quality: 4
Clarity: 4
Questions for Authors: - As cited in the Appendix, it is known that SAM promotes the acquisition of low-rank features [2]. However, it was pointed out that explicit regularization for acquiring low-rank features did not lead to improved generalization ability. I am curious about the relationship between Balancedness and low-rank features, and whether there is any connection to sharpness.
- Another question is how optimization methods commonly used in LLM training, such as AdamW, affect Balancedness.
[2] https://arxiv.org/abs/2305.16292
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors sufficiently discuss the limitations of this study, particularly regarding the application range and computational efficiency of BAR.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the time and efforts from the reviewer put into this review. We also want to thank the reviewer for recognizing the strength of our work. We will update our draft and code repo to further improve the quality of this work.
**W1.** *While LoRA training is performed with AdamW in the original paper, demonstrating that SAM or BAR is more effective than these would make the paper more solid.*
The LoRA baselines in our experiments are indeed trained with AdamW (or Adam for different experiments), and our implementation follows the official LoRA repo.
**W2.** *Code such as ```eval.py``` was not included in the supplementary materials, and its release is expected.*
The code for ```eval.py``` can be found in the official repo of LoRA; see [1]. We will update the ReadMe.md to provide improved instructions.
**Q1.** *As cited in the Appendix, it is known that SAM promotes the acquisition of low-rank features [2]. However, it was pointed out that explicit regularization for acquiring low-rank features did not lead to improved generalization ability. My curiosity is about the relationship between balancedness and low-rank features, and whether there is any connection to sharpness.*
The setting in [2] does not extend to our case. This is because feature learning happens in the principal space in [2], while LoRA learns features in the residual space. Recall that low-rankness is induced by parameterizing the last layer as $\mathbf{A} \mathbf{B}^\top$; see section 6 of [2]. In other words, the learned feature is $\mathbf{X}\mathbf{A} \mathbf{B}^\top$, where $\mathbf{X}$ is the data or the feature from the previous layer. However, in our LoRA case, the learned feature is $\mathbf{X}(\mathbf{W} + \mathbf{A} \mathbf{B}^\top)$, where $\mathbf{W}$ is the pretrained weight. In sum, low-rankness can play fundamentally different roles in [2] compared with our LoRA case.
Regarding the relation between balancedness and sharpness, there are indeed links, such as the one shown in Lemma 1. Regarding the relation between balancedness and low-rank features, this is beyond the scope of this work. As there are multiple approaches for inducing low-rankness (and [2] employs one of them), we believe that a more systematic study is needed to answer this question, and we leave it to future work.
**Q2.** *Another question is how optimization methods commonly used in LLM training, such as AdamW, affect Balancedness.*
It is not difficult to show analytically that a variant of Adam (without momentum) decreases balancedness slightly on a simple loss function $f(x,y)=0.5(xy - 1)^2$ when the learning rate $\eta \rightarrow 0$. However, the balancedness does not decrease further after approaching a near-optimal point. This can be seen from Fig. 1 in the additional PDF file (see general response). Note that we use $\eta = 5\times 10^{-4}$ with $10^4$ iterations. However, this analysis on the simple loss function may not extend to more general cases.
In practice, we perform the same experiments as Fig. 3 and plot the balancedness of all layers after training with Adam in Fig. 2 of the additional PDF. No specific trend in balancedness (2${\cal B}_{t,l}$) is observed: the balancedness increases on some layers compared to initialization, while it decreases on others. Note that the initialized balancedness is within (2.7 - 3.2) for all layers.
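To accompany this discussion, here is a toy numerical sketch (our own code, not the authors') on the same simple loss $f(x,y)=0.5(xy-1)^2$: plain GD barely changes balancedness, while vanilla SAM drives it toward zero. The step sizes and perturbation radius are arbitrary illustrative choices.

```python
import math

def grad(x, y):
    # gradient of f(x, y) = 0.5 * (x * y - 1)**2
    r = x * y - 1.0
    return r * y, r * x

def balancedness(x, y):
    return 0.5 * (x * x - y * y)

def run(use_sam, steps=20000, lr=0.01, rho=0.1):
    x, y = 2.0, 1.0  # unbalanced start: balancedness = 1.5
    for _ in range(steps):
        gx, gy = grad(x, y)
        if use_sam:
            norm = math.hypot(gx, gy)
            if norm > 1e-12:
                # ascend to the worst-case point within radius rho,
                # then use the gradient evaluated there (vanilla SAM)
                gx, gy = grad(x + rho * gx / norm, y + rho * gy / norm)
        x, y = x - lr * gx, y - lr * gy
    return balancedness(x, y)

b_gd, b_sam = run(False), run(True)
print(f"balancedness after GD : {b_gd:.3f}")   # stays close to 1.5
print(f"balancedness after SAM: {b_sam:.3f}")  # driven toward 0
```

On this toy problem a GD step changes balancedness only at order $\eta^2$, whereas the SAM perturbation introduces a first-order drift that shrinks $|{\cal B}_t|$.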
**References**
[1] LoRA official repo: https://github.com/microsoft/LoRA/blob/main/examples/NLG/eval/eval.py
[2] M. Andriushchenko, D. Bahri, H. Mobahi, N. Flammarion. Sharpness-aware minimization leads to low-rank features. NeurIPS 2023
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer t2ze
Comment: I thank the authors for their detailed response. I am satisfied with the answers provided, and I would like to keep my positive score.
---
Reply to Comment 1.1.1:
Title: Thank you for your comments
Comment: Thank you once again for acknowledging the strength of our work. We will revise our manuscript to incorporate your suggestions and include additional instructions in the ReadMe.md file. | Summary: This paper investigates the dynamics of SAM when the loss is of the form f(xy^T) or f(x^Ty). This formulation includes interesting scenarios such as LoRA. This paper shows that SAM will promote balancedness, which is the difference between the squared norms of two variables. Based on this new analysis, this paper proposes to regularize balancedness and show that this method leads to similar or sometime better result than SAM with significantly less compute.
Strengths: 1. This paper considers the interesting interplay between SAM and scale-invariance of the loss and discovers that SAM implicitly regularizes balancedness in this case.
2. The empirical verification of the theoretical prediction is thorough and convincing.
Weaknesses: 1. Arguments such as the claim that SGD leaves balancedness unchanged are only correct for an infinitesimal learning rate; claims like this should be made rigorous (for example at line 137).
2. In section 2.1, when the limitation of sharpness is discussed, the example that h(x,y) = xy is very confusing because there is no local minima for such loss.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Because SGD with a small learning rate will make balancedness almost unchanged, does the result imply that initializing LoRA to be completely balanced and using SGD should also improve performance?
2. Connecting to the first problem, as in Figure 3, the balancedness of the weight trained by SAM, while decreasing faster than SGD, still remains at a high level. Is this the case for BAR as well?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The limitation is clearly addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer for the nice questions. We will add these discussions to the draft.
**W1.** *Arguments including that SGD will make balancedness unchanged are only correct for infinitesimal learning rate and claims like this should be made rigorous (for example at line 137).*
Thank you for pointing this out. We will proofread a few more times and rephrase these sentences. In Figure 3, we have shown that even with a learning rate $0.1$, SGD only slightly changes the balancedness.
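As a one-line sanity check (our own reconstruction, writing $g = \nabla f(\mathbf{x}\mathbf{y}^\top)$ for the NOP loss and $\mathbf{x}^+ = \mathbf{x} - \eta\, g\,\mathbf{y}$, $\mathbf{y}^+ = \mathbf{y} - \eta\, g^\top\mathbf{x}$ for the GD updates), the first-order terms of a (S)GD step cancel in the balancedness, leaving only an $O(\eta^2)$ change:

```latex
\mathcal{B}^+ \;=\; \tfrac{1}{2}\big(\|\mathbf{x} - \eta\, g\,\mathbf{y}\|^2 - \|\mathbf{y} - \eta\, g^\top\mathbf{x}\|^2\big)
\;=\; \mathcal{B} \;\underbrace{-\,\eta\,\mathbf{x}^\top g\,\mathbf{y} \;+\; \eta\,\mathbf{x}^\top g\,\mathbf{y}}_{=\,0}
\;+\; \tfrac{\eta^2}{2}\big(\|g\,\mathbf{y}\|^2 - \|g^\top\mathbf{x}\|^2\big).
```

This is consistent with the observation that even a learning rate of $0.1$ changes balancedness only slightly.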
**W2.** *In section 2.1, when the limitation of sharpness is discussed, the example that $h(x,y) = xy$ is very confusing because there is no local minima for such loss.*
Thank you for pointing this out. We will rephrase this sentence. Our intention was that the function $h(x,y) = xy$ has the same Hessian (hence eigenvalues) for any $(x, y)$. Thereby, one cannot provide any implicit bias using the largest eigenvalue.
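To make this concrete (our own arithmetic):

```latex
h(x,y) = xy \quad\Longrightarrow\quad
\nabla^2 h(x,y) = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}
\;\;\text{for all } (x,y),
\qquad \lambda\big(\nabla^2 h\big) = \{+1,\, -1\}.
```

Since these eigenvalues are identical at every $(x, y)$, no sharpness measure built from Hessian eigenvalues can distinguish between points of this function.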
**Q1.** *Because SGD with a small learning rate will make balancedness almost unchanged, does the result imply that initializing LoRA to be completely balanced and using SGD should also improve performance?*
Unfortunately, it is difficult to initialize LoRA in a balanced fashion. Using the notation of equation (6), one has to set $\mathbf{Y} = 0$ to ensure that at initialization $\mathbf{X}\mathbf{Y}^\top = \mathbf{0}$. To achieve exact balancedness, the only choice is to choose $\mathbf{X}=0$ as well. This is not a good initialization because $\mathbf{X}=0$ and $\mathbf{Y}=0$ is a saddle point (as one can see from the gradients).
An alternative is to initialize $\mathbf{Y}=0$ and choose a small $\mathbf{X}$ in magnitude, i.e., initializing close to a saddle point. However, it is known that SGD escapes saddle points quite slowly [1].
This means that LoRA initialized in a balanced manner can slow down convergence, which in turn necessitates our balancedness-aware regularization (BAR).
**Q2.** *Connecting to the first problem, as in Figure 3, the balancedness of the weight trained by SAM, while decreasing faster than SGD, still remains at a high level. Is this the case for BAR as well?*
In Figure 3, the balancedness of SAM decreases more slowly because of the multiple layers of RoBERTa. As we have discussed in lines 202 - 206 and Theorem 5 in the Appendix, multiple layers slow down the implicit regularization of SAM, and the slowdown is roughly proportional to the square root of the number of (LoRA) layers. In the case of Figure 3, there are 24 transformer layers, and LoRA is applied to every query and value projection in each transformer layer. This gives a $\sqrt{24 \times 2} \approx 7$x slowdown.
The proposed BAR does not suffer from the slowdown with the number of layers, because the regularization can be applied individually to each LoRA module for the query and value layers. This gives better regularization of balancedness and explains why nBAR can improve over SAM in Table 3.
**References**
[1] C Fang, Z Lin, and T Zhang. Sharp analysis for nonconvex SGD escaping from saddle points. COLT 2019.
---
Rebuttal 2:
Comment: Thank the authors for the detailed response. Regarding the initialization of LoRA, it does not appear natural to me that the initialization must be function-preserving (see for example [1]) but this seems beyond the scope of this paper. I think Theorem 5 is very interesting as an explanation for the small decrease in balancedness. I will increase my score to 7.
[1] LoRA-GA: Low-Rank Adaptation with Gradient Approximation https://arxiv.org/pdf/2407.05000
---
Rebuttal 3:
Comment: Thank you for your careful consideration and for recognizing the strengths of our work.
The balancedness observed in the LoRA-GA paper might indeed contribute to their strong empirical performance. However, we also want to point out that balanced initialization can sometimes lead to saddle points, and therefore how to initialize seems to be quite problem-dependent. Consider the simple two-dimensional problem $(xy - 1)^2$, for example. A balanced initialization $(c, -c)$ for any constant $c$ drives GD to get stuck at the saddle point $(0, 0)$. This can be seen by writing down the gradients.
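A quick numerical check of this example (our own sketch, not code from the paper):

```python
def gd(x, y, lr=0.1, steps=5000):
    # gradient descent on f(x, y) = 0.5 * (x * y - 1)**2
    for _ in range(steps):
        r = x * y - 1.0
        x, y = x - lr * r * y, y - lr * r * x
    return 0.5 * (x * y - 1.0) ** 2

loss_balanced = gd(1.0, -1.0)  # balanced init (c, -c): collapses to the saddle (0, 0)
loss_generic = gd(1.0, 0.5)    # generic init: reaches the minimum manifold x * y = 1
print(loss_balanced, loss_generic)
```

From the balanced antipodal start, both coordinates decay symmetrically toward the saddle $(0,0)$, where the loss stays at $0.5$; the generic start reaches the minimum manifold $xy = 1$ with essentially zero loss.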
We sincerely thank the reviewer once again for the nice suggestions. We will update our manuscripts accordingly to further improve its quality. | Summary: This paper investigates the implicit regularization effects of sharpness-aware minimization (SAM) on scale-invariant optimization problems, introducing "balancedness" as a new metric for analysis. The authors provide theoretical results showing that SAM promotes balanced solutions for both non-overparameterized and overparameterized scale-invariant problems, and demonstrate that SAM's regularization effect is stronger on noisy data. Based on these insights, they propose balancedness-aware regularization (BAR), a computationally efficient variant of SAM. The paper evaluates BAR on language model fine-tuning tasks using LoRA, demonstrating that it can match or exceed SAM's performance while significantly reducing computational overhead.
Strengths: - The theoretical analysis provides new insights into SAM's implicit regularization effects by introducing "balancing".
- The proposed BAR method offers a more efficient alternative to SAM for scale-invariant problems by adding explicit regularization.
- Comprehensive experiments on language model fine-tuning demonstrate practical benefits.
Weaknesses: - The scope is limited to scale-invariant problems, and LoRA types optimization problem.
- Assumption 1 requires Lipschitz continuous gradient which may not always hold in practice (e.g., ReLU network)
Technical Quality: 3
Clarity: 3
Questions for Authors: - How to understand the lower bound in Theorem 3, which is not always greater than zero during the optimization.
- How to choose a good parameter $\rho$?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts devoted to this paper. Below please find our responses to weakness and questions.
**W1.** *The scope is limited to scale-invariant problems, and LoRA types optimization problem.*
We agree with the reviewer that the major application is to scale-invariant problems such as LoRA and its variants. However, we would like to emphasize that LoRA is already one of the most popular approaches for fine-tuning language models. LoRA is more economical and easier to serve in practice than full parameter tuning. LoRA-type approaches are actively developed and well received by the community; see e.g., HuggingFace's PEFT codebase [1]. Given this evidence, we believe that this scenario is important and our results are useful.
Moreover, we certainly hope to inspire follow-up work and even entirely new approaches that extend our results to more general settings. We are confident that this will take place in a step-by-step fashion, and we hope that our work helps initiate it.
**W2.** *Assumption 1 requires Lipschitz continuous gradient which may not always hold in practice (e.g., ReLU network).*
Lipschitz continuous gradient (or smoothness) is a standard assumption for the theoretical understanding of gradient-based methods; see e.g., SGD [2], SGD with momentum [3], AdaGrad [4], and Adam [5]. While smoothness does not necessarily hold in every practical scenario, the theoretical insights from the analysis still yield robust and meaningful guidance for practice, evidenced by the popularity of [2 - 5] for training neural networks.
Moreover, we would like to point out that in our LoRA scenario there is no ReLU activation on the trainable parameters (activations apply only to the frozen parameters). Thus, smoothness is somewhat more reasonable here than in the case of Adam on deep neural networks.
**Q1.** *How to understand the lower bound in Theorem 3, which is not always greater than zero during the optimization.*
When it is smaller than $0$, the inequality holds trivially. However, when $\rho$ is chosen small, the RHS is more likely to be positive, given that $\rho^2 \ll \rho$. We will update the lower bound to $\max\{0, \text{lower bound in Theorem 3}\}$.
**Q2.** *How to choose a good parameter $\rho$?*
Empirical experience recommends $0.05$ or $0.1$. In practice, a grid search is also helpful.
**References.**
[1] https://github.com/huggingface/peft
[2] S Ghadimi, and G Lan. Stochastic First- and Zeroth-order Methods for Nonconvex Stochastic Programming. SIAM J-OPT 2013.
[3] Y Liu, Y Gao, and W Yin. An Improved Analysis of Stochastic Gradient Descent with Momentum. NeurIPS 2020
[4] R Ward, X Wu, and L Bottou. AdaGrad Stepsizes: Sharp Convergence over Nonconvex Landscapes. JMLR 2020
[5] X Chen, S Liu, R Sun, and M Hong. On the Convergence of A Class of Adam-Type Algorithms for Non-Convex Optimization. ICLR 2019
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed rebuttal. As my main concerns are addressed, I will increase my score to 7.
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful consideration. We appreciate your feedback and are glad that we could address your concerns. | Summary: ### Summary:
This work introduces balancedness (instead of sharpness) and proposes Balancedness-Aware Regularization (BAR), used for scale-invariant problems (e.g., LoRA). Given an objective such as $f(xy^T)$ or $f(x^Ty)$ with parameters $x,y$ and a fixed function $f$, the balancedness is defined as $B_t = \frac{1}{2}(\|x_t\|^2 - \|y_t\|^2)$. They prove that SAM decreases balancedness while SGD preserves it (Theorems 1-3) and argue that this new quantity goes beyond local information and relies on the entire SAM trajectory. Indeed, they show when $B_t$ goes to zero and that it becomes quite small under SAM.
Moreover, they show that outliers strongly affect balancedness, which helps explain SAM behavior. Finally, they propose the BAR algorithm and conclude with several experiments.
Strengths: ### Pros:
- the studied problem is very important, and the result can be impactful
- the balancedness is completely new and well-motivated for SAM (previously some works studied related quantities but just for SGD)
- having several experiments
Weaknesses: ### Cons:
- the paper is not well-written, and it is difficult to follow it. The authors need to explain results, ideas, and notation via simple sentences and then deal with formulae/equations
- some ambiguities about Theorem 1
Technical Quality: 3
Clarity: 2
Questions for Authors: ### Questions/Comments:
This is an interesting paper. The authors nicely mention that balancedness changes under SAM and this could be one potential way to explain SAM behavior. Moreover, they propose balancedness minimization, which leads to even better algorithms. Unfortunately, the paper must be modified to reach the audience and become readable since the current version is hard to follow.
Some comments:
- line 37-39 -- this part is not well written and is confusing. What is the role of $d_1+d_2$? I suggest explaining it in a few sentences in the paper.
- Limitation of sharpness (Section 2.1): the max e.v. is not the only sharpness measure considered in the literature. Indeed, there is apparently a class of invariant sharpness measures for SAM; for example, see this:
- A Universal Class of Sharpness-Aware Minimization Algorithms, ICML 2024
- Do authors claim that Theorem 1 is the original result of this paper? I think this is known from previous works on the implicit biases of SGD, but it looks like the paper is claiming the results without references. See, for example, Equation 2.5 in this paper (already cited):
- Ji, Ziwei, and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. ICLR.
Please explain whether the theorem follows from the previous results or if there is a major difference between them.
- The BAR algorithm is barely discussed in the main body of the paper—the algorithm itself is provided in the appendix. Because it is one of the main contributions of the paper, authors should consider rearranging stuff in the paper to place it in the main body and discuss it more.
- Moreover, the ideas behind Algorithms 2 and 3 should be discussed in the main body because of their importance. What are the role of the hyperparameters in the algorithms?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and efforts devoted to this paper. Below please find our point-to-point responses. Please let us know if there are other questions or unclearness during discussion. We are always glad to improve the quality of our submission.
**W1.** *The paper is not well-written, and it is difficult to follow it.*
We hope a brief summary of this paper helps to clarify the overall logic.
* We show that SAM promotes balancedness on scale-invariant problems. Unlike many existing metrics, balancedness is a global metric for implicit regularization. Theoretical and empirical evidence is provided to support SAM's implicit regularization of balancedness.
* Since balancedness is computationally tractable, we mimic the dynamics of SAM (e.g., Theorem 2) and make balancedness explicit as a regularizer -- BAR. BAR can be applied on top of other optimizers such as SGD or Adam in the same way as weight decay. BAR overcomes the need for the second gradient computation in SAM, yet achieves numerical performance similar to SAM's.
We are also glad to clarify any ambiguities and revise the draft for better presentation -- please let us know during the discussion phase.
**W2.** *Some ambiguities about Theorem 1*
Since the confusion is detailed in questions, see our responses in Q3.
**Q1.** *line 37-39... What is the role of $d_1 + d_2$?*
Take the NOP problem $\min_{\mathbf{x}, \mathbf{y}}f(\mathbf{x}\mathbf{y}^\top)$ with $\mathbf{x} \in \mathbb{R}^{d_1}$ and $\mathbf{y} \in \mathbb{R}^{d_2}$ as an example. Here, the loss function is $f : \mathbb{R}^{d_1 \times d_2} \mapsto \mathbb{R}$; in other words, the dimension of $\mathrm{dom} f$ is $d_1 \times d_2$. The number of variables to be optimized is only $\dim(\mathbf{x}) + \dim(\mathbf{y}) = d_1 + d_2$, since we parametrize $\mathrm{dom} f$ by $\mathbf{x}\mathbf{y}^\top$. Since the number of variables is smaller than the dimension of $\mathrm{dom} f$, we are performing optimization with insufficient variables, hence the name -- non-overparametrization.
**Q2.** *Other sharpness ... There is apparently a class of invariant sharpness measures for SAM; for example, see this: - A Universal Class of Sharpness-Aware Minimization Algorithms, ICML 2024*
Thank you for pointing out this work. Unfortunately, it is impossible for us to cite this ICML paper because it appeared later on arXiv (June 6) than NeurIPS submission due date (May 22). However, we have cited and discussed the differences with an earlier workshop version of the same ICML paper; see lines 596 - 597 in Appendix. Per request, the main differences with the ICML paper are re-summarized below.
The ICML paper introduces generalized sharpness measures -- sharpness as any function of the eigenvalues of the Hessian. However, even the generalized sharpness cannot provide implicit regularization for the function $h(x,y)=xy$, simply because the Hessian is the same for all $(x, y)$. In addition, when the Hessian is negative definite, some of the generalized sharpness measures (e.g., the determinant of the Hessian) are not necessarily meaningful. Balancedness overcomes both of these problems.
In terms of practical algorithms inspired by theoretical derivations, our goal differs sharply from the ICML paper. Our balancedness targets the computational efficiency of SAM. In contrast, the ICML paper targets improving over SAM under the same or a slightly larger computational budget (when the number of samples $n$ is set greater than $1$ in their Algorithm 1).
**Q3.** *Do authors claim that Theorem 1 is the original result of this paper? I think this is known from previous works on the implicit biases of SGD, but it looks like the paper is claiming the results without references. See, for example, Equation 2.5 in this paper (already cited): - Ji, Ziwei, and Matus Telgarsky. Gradient descent aligns the layers of deep linear networks. ICLR.*
There is a potential misunderstanding. We do not claim credit for Theorem 1, as we have already cited several previous works in lines 131 - 134. Even the ICLR paper of [Ji and Telgarsky 2019] cites the same set of papers as we do. We slightly extend the results from these cited papers to the case with stochastic gradients. We will rephrase this paragraph to eliminate possible misunderstanding.
**Q4 \& Q5.** *The BAR algorithm is barely discussed in the main body of the paper—the algorithm itself is provided in the appendix ... Moreover, the ideas behind Algorithms 2 and 3 should be discussed in the main body because of their importance. What are the role of the hyperparameters in the algorithms?*
The design and the implementation of the balancedness-aware regularizer (BAR) are already discussed in lines 280 - 285 of the main text. Since BAR is a regularizer, it can be used in the same way as e.g., an $\ell_2$ regularizer or weight decay. As this is quite standard, we expect the use of BAR to be straightforward. Algs. 2 and 3 are simply detailed BAR implementations in the same format as weight decay. Given the space limitation, instead of duplicating this weight-decay-type implementation in the main text, we provide a short yet informative sentence in lines 284 - 285 that conveys the same message. However, we would be glad to expand our discussions if we had additional space.
The ideas behind Algs. 2 and 3 have already been discussed in lines 274 - 278 and 280 - 282, respectively. To summarize, since SAM promotes balancedness implicitly (e.g., in Theorem 2), we can make this dynamic explicit and use it to regularize balancedness.
The only hyperparameter for BAR is the coefficient on this regularizer. It plays the same role as the coefficient for e.g., an $\ell_2$ regularizer, that is, determining the tradeoff between regularization and loss-fitting.
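Since BAR is described as usable in the same manner as weight decay, here is a purely illustrative sketch of that idea (our own construction using a hypothetical quadratic penalty $0.5\,{\cal B}^2$ on balancedness; the actual BAR updates are given in the paper's Algs. 2 and 3):

```python
def balancedness(A, B):
    return 0.5 * (sum(a * a for a in A) - sum(b * b for b in B))

def penalty_step(A, B, lr=0.1, mu=1.0):
    # Gradient of the illustrative penalty 0.5 * Bal^2 is Bal * A w.r.t. A
    # and -Bal * B w.r.t. B, so the step acts like (anti-)weight decay:
    # when Bal > 0 the larger factor A is shrunk while B is grown.
    bal = balancedness(A, B)
    A = [a - lr * mu * bal * a for a in A]
    B = [b + lr * mu * bal * b for b in B]
    return A, B

A, B = [2.0, 0.0], [1.0, 0.0]  # unbalanced LoRA-like pair, Bal = 1.5
for _ in range(200):
    A, B = penalty_step(A, B)
print(balancedness(A, B))  # the penalty drives balancedness toward 0
```

The coefficient `mu` plays exactly the role described above: it trades off the strength of the balancedness regularization against loss fitting.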
---
Rebuttal Comment 1.1:
Comment: Dear authors, thank you for your detailed response. I have a few follow-up questions:
- (Q3) what is your plan on Theorem 1? Could you detail how you want to change its presentation?
- (Q4&5) you will have more space for the final version, What do you plan to do about the explanation of the algorithm?
---
Rebuttal 2:
Comment: We appreciate the reviewer's suggestions regarding the presentation and clarity. We hope that the revisions we have made in response to your comments will improve the clarity of our work. Here's our detailed plans for the revision.
**Q3.** Our objective is to rephrase lines 131-134 to clearly indicate that these results have been documented in previous studies, including those by e.g., Arora et al. (2018, 2019b), Ji and Telgarsky (2019), and Ahn et al. (2023). Additionally, we will explicitly reference these papers in Theorem 1.
An example of the revision could be:
> How does ${\cal B}_t$ evolve in different algorithms? To set a comparison benchmark for SAM, we first **borrow** results in (Arora et al 2018, 2019b; Ji and Telgarsky, 2019; Ahn et al 2023).
>
> Theorem 1. (**[Arora et al 2018, 2019b; Ji and Telgarsky, 2019; Ahn et al 2023].**) When applying SGD on the NOP (1a), the ...
**Q4 \& 5.** As suggested, our plan is to provide additional details and intuition to enhance understanding. We will ensure that our revision is presented in a clear and accessible manner. Our plans include, but are not limited to, the following:
- To move Algs. 2 and 3 into the main body. We hope that this can make the sentences like 'BAR can be implemented in the same manner as weight decay' more concrete.
- To include a dedicated paragraph, with a bold subtitle, to explain the ideas and intuitions behind BAR. We will expand lines 274 - 278 and 280 - 282 to illustrate how Theorems 2 and 3 can be adapted to derive BAR.
- To explain Fig. 2(b) in more depth to show that our BAR indeed mimics SAM's dynamics.
- To discuss the only hyperparameter of BAR and its role in balancing the trade-off between regularization and loss fitting.
---
Rebuttal Comment 2.1:
Comment: I appreciate the authors' detailed plan for the next revision of the paper. My comments and concerns have now been mostly addressed. For the accepted version of the paper, I ask the authors to ensure that they apply the promised changes (Q3, Q4, Q5 above). I decided to increase my score, provided the authors make the promised changes. Good luck!
---
Reply to Comment 2.1.1:
Comment: We will ensure that the revisions enhance presentation following what we have discussed above. Thank you for the suggestions to improve the quality of this work! | Rebuttal 1:
Rebuttal: We thank the ACs and reviewers for handling this submission. Your comments are appreciated, and the manuscript will also be updated accordingly. Our point-to-point responses can be found below, and a .pdf file is also attached as graphical illustration to questions from Reviewer t2ze.
Pdf: /pdf/0417543674475c75586aa38afc3b16ab793b583d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Score matching through the roof: linear, nonlinear, and latent variables causal discovery | Reject | Summary: This paper presents novel theoretical results for identifying causal effects in restricted ANMs, even in the presence of unobserved confounders.
Strengths: **The paper provides novel contributions to the field of score-based causal discovery by extending previous works to confounded restricted ANMs. Based on these contributions, I strongly support this paper's acceptance.**
- the problem definition is very useful to orient readers
- the theoretical results are novel
- the experiments compare against a sufficient number of baselines, and though the proposed method is not SOTA, it compares well and has better theoretical guarantees
Based on the authors' response, I am willing to increase my score.
Weaknesses: I have a few remarks on improving the flow of the paper; however, even the first four points are not considered major issues.
- Even though condition 1 is a well-known result in the causality literature, I suggest explaining why that admits linear models and including some description of **restricted ANMs** in the main text (at least for me, it is not evident, especially since the condition lacks intuition). To be clear, even this point does not diminish the main contribution, which I see regarding the results for confounders.
- As **inducing paths** are an important concept for the main contributions, please _include it in the main text_ if space permits (suggestion: you can reduce spacing in `itemize` by setting `\begin{itemize}[nolistsep]`)
- I could **not find the definition of an active path** (not even in Def. 5, where it is said to be defined); I presume it is a path that is not blocked, but it would be better to state this explicitly. Maybe it would even be better to use "a path that is not blocked" instead of introducing new terminology (this is the first time I encountered "active paths"; I could be wrong about this)
- The text is sometimes difficult to follow, due to heavy reliance on notation. I'd consider delegating the non-crucial parts to the appendix (potential candidates in 2.1) and using the remaining space to explain the main quantities better, especially the residuals (e.g., Eq. 12)
## Minor points
- please specify what you are calculating the expectation with respect to (using a bold E is also unconventional, though it's clear from the context it's an expectation)
- as the mathematical objects for d-/m-separation are distinguishable, you might consider dropping the superscript to simplify notation; also, I'd suggest adding whitespace after $\perp^d_\mathcal{G}$ and the like to make it easier for the reader to attribute the indices to $\perp$ and not to the node on its right
- I could not find the definition for $\dot{\cup}$
- in the explanation of Prop 3., the wording makes it a bit hard to discern that you also provide intuition for the second part; it would help if you refer to _Part (ii)_ explicitly
- _Score matching through the roof_ in the title does not have added value for me, I'd consider rephrasing it to convey the message that "we propose score-based causal discovery methods for confounded restricted ANMs"
Technical Quality: 4
Clarity: 3
Questions for Authors: Some methods in the literature [1-4] use the Jacobian of a learned neural network instead of relying on the score for causal discovery/identifiability claims. How do you see the relation of your contribution to those Jacobian-based methods?
- [1] Sébastien Lachapelle, Philippe Brouillard, Tristan Deleu, and Simon Lacoste-Julien. Gradient-based neural DAG learning. In 8th International Conference on Learning Representations, ICLR 2020
- [2] Lazar Atanackovic, Alexander Tong, Jason Hartford, Leo J. Lee, Bo Wang, and Yoshua Bengio. DynGFN: Bayesian Dynamic Causal Discovery using Generative Flow Networks, February 2023.
- [3] Patrik Reizinger, Yash Sharma, Matthias Bethge, Bernhard Scholkopf, Ferenc Huszar, and Wieland Brendel. Jacobian-based Causal Discovery with Nonlinear ICA. Transactions on Machine Learning Research, April 2023.
- [4] Shohei Shimizu, Patrik O Hoyer, Aapo Hyvärinen, Antti Kerminen, and Michael Jordan. A Linear Non-Gaussian Acyclic Model for Causal Discovery. Journal of Machine Learning Research, 7(10), 2006.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors provide an **honest comparison** of their method in the experiments, clearly stating its limitations compared to other methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the thorough review, the constructive comments and the stimulating question. We will go through these points in what follows.
## Weaknesses
We thank the reviewer for the constructive comments: we propose to implement all four of the main points suggested, as we agree they will be valuable improvements to the flow of the discussion. We add two specific remarks:
- we already provide a definition of *active path* in Definition 5 (L 485-486 “a path $\pi$ […] is **active** w.r.t. …”). For reference, the definition is taken from “Causal Reasoning with Ancestral Graphs”, 2008, Zhang; though we agree that using “a path that is not blocked” in the main text simplifies the life of the reader, as this is more commonly used.
- For the residual estimation (Eq. 12) we use kernel ridge regression, as proposed in the original NoGAM paper. Note that the target function in least squares regression is the expectation of the target given the covariates, i.e. $\mathbb{E}[Y|X]$ for generic variables X, Y, where Y is the target. This is consistent with our theoretical analysis, where the residuals of the score $\partial_{X_j} \log p(X)$ in equation 12 are defined as $\partial_{X_j} \log p(X) - \mathbb{E}[\partial_{X_j} \log p(X) | R_j]$, which are indeed the least squares regression residuals. We agree that the discussion about residual estimation is relevant, and we will add it to the main text of the paper.
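To make the residual-estimation step concrete, here is a minimal, self-contained sketch (ours, not the paper's implementation): it fits $\mathbb{E}[\,\text{score}_j \mid R_j\,]$ by a hand-rolled kernel ridge regression and returns the least squares residuals. The arrays `score_j` and `R_j` are synthetic stand-ins for score estimates and candidate regressors.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    # Pairwise RBF kernel between rows of A and B.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def krr_residuals(R, y, alpha=1e-2, gamma=1.0):
    """Residuals y - E[y | R], with E[. | R] fit by kernel ridge regression."""
    K = rbf_kernel(R, R, gamma)
    coef = np.linalg.solve(K + alpha * np.eye(len(y)), y)
    return y - K @ coef

rng = np.random.default_rng(0)
n = 300
R_j = rng.normal(size=(n, 1))                              # stand-in covariates
score_j = np.sin(R_j[:, 0]) + 0.1 * rng.normal(size=n)     # stand-in score values
res = krr_residuals(R_j, score_j)
print(float(np.mean(res ** 2)))  # small if the regression fits well
```

In the actual procedure, `score_j` would come from a score matching estimator rather than being simulated.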
Concerning the minor points raised, we also agree that they would make a valuable addition, and we will implement all of them except the last. For that one, a title that better conveys the meaning of the paper, we will consider something in the flavor of “Identifiability of latent variable additive noise models with score matching”.
We are eager to mention the reviewer’s contribution in improving the clarity of our work in the acknowledgments of our paper.
## Questions
This question is very interesting. We will first focus on [1], and then consider [3, 4], as these are the three papers we are familiar with (and where we hope to provide a compelling answer).
1. Connections to [1] (GraN-DAG): one intuition about score methods for causal discovery is that, if we could access the individual kernels composing the Markov factorization, we would have all the information needed to recover the causal relations (assuming identifiability, as in restricted ANMs). Intuitively, this is achieved as follows:
- Take the logarithm to transform the Markov factorization into a summation
- Take the partial derivatives of this log-likelihood: the partial derivatives simultaneously (i) isolate the kernel of leaf nodes (i.e., find the causal direction, as in our Propositions 2 and 3) and (ii) inform about the dependency of each kernel on the other variables (i.e., they can replace conditional independence tests, as in our Proposition 1).
GraN-DAG uses the Jacobian to do the equivalent of (ii), i.e., it replaces conditional independence testing with sparsity of the Jacobian to find some notion of connection strength (see Equation 15 of their paper). The reason why no logarithm appears is that, while in our case we go through score matching estimation to access the kernels of the Markov factorization, they model such kernels directly with a neural network.
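A tiny numerical illustration of point (ii), ours rather than the paper's: in a linear Gaussian SCM the score is linear, $s(x) = -\Theta x$ with $\Theta$ the precision matrix, so the Jacobian of the score is constant and its sparsity reflects the Markov structure.

```python
import numpy as np

# Toy SCM: X1 = N1, X2 = 2*X1 + N2, X3 = N3 (X3 independent of the rest),
# with unit-variance Gaussian noise. B[j, i] != 0 encodes X_i -> X_j.
B = np.array([[0.0, 0.0, 0.0],
              [2.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
I = np.eye(3)
M = np.linalg.inv(I - B)          # X = M @ N
cov = M @ M.T                     # covariance of X
theta = np.linalg.inv(cov)        # precision matrix; score Jacobian = -theta

print(np.round(theta, 6))
# theta[0, 1] != 0: X1 and X2 are adjacent in the Markov network;
# theta[0, 2] == 0: no dependence between X1 and X3.
```

Here the zero/nonzero pattern of `theta` plays the role of the conditional-independence information discussed above.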
This connection between their Equation 15 and Spantini et al., 2017 is not made explicit in their work, nor, to the best of our knowledge, in any existing paper, so **we believe that this analysis would make a valuable addition to our paper** as a discussion in the appendix. Moreover, one interesting conjecture we can make based on these considerations is that it may be possible to generalize GraN-DAG to perform neural causal discovery in confounded scenarios with the same guarantees that we provide. Please take this with a grain of salt.
2. Connection with [3, 4]: as [3] is the nonlinear version of ICA-LiNGAM proposed in [4], we believe that connections drawn for one work also apply to the other. In general, the two works deal with different types of Jacobian compared to ours or to the Jacobian found in [1]: [3, 4] deal with the Jacobian of the inverse transform (linear and nonlinear), whereas our work concerns the Jacobian of the score. Despite this, some points of contact may exist: Lemma A.1 of [3] proposes a condition to detect the presence of a directed path from $X_i$ to $X_j$ based on the condition $\frac{\partial f_j(X)}{\partial N_i} \neq 0$. This is not immediately related to our Jacobian of the score, where we can identify the Markov network by checking $\frac{\partial^2}{\partial X_i \partial X_j} \log p(X)$; though, a closer relation appears in the nonlinear Gaussian additive noise model case, where for a sink node we have $\frac{\partial^2}{\partial X_i \partial X_j} \log p(X) = -\frac{1}{\sigma_j^2}\frac{\partial f_j(X)}{\partial X_i}$ and edges are detected by the condition $-\frac{1}{\sigma_j^2}\frac{\partial f_j(X)}{\partial X_i} \neq 0$ (note the similarity with the condition of Lemma A.1 in [3]). As they consider the partial derivative with respect to the noise $N_i$, they can find directed paths, while as we consider partial derivatives with respect to $X_i$, we can find direct edges (this statement has subtleties: the one we note here is that we find a direct edge if and only if we know that $X_j$ is a sink; otherwise we simply find an edge in the Markov network). Please refer to Lemma 1 in [Montagna et al., 2023a] for the derivation of the latter expression and all relevant details. We can conjecture that, in SCMs more general than the nonlinear Gaussian ANM, both our method and theirs find a way to use partial derivatives of the mechanisms to probe the presence of edges in the graph.
This may also be something worth adding and expanding in our paper, for a profound view on the connection between our work and existing methodologies.
---
Rebuttal Comment 1.1:
Title: Score increased 7->8
Comment: Thank you for your detailed and thoughtful response. My concerns are addressed, I increase my score 7->8. | Summary: The authors propose AdaScore, a method for causal discovery that generalizes previous work based on score matching for SCMs with possibly latent nodes. They combine connections of the score to conditional independence as well as to additive noise SCMs and show that a NoGAM-type procedure works to recover the direction of non-confounded edges of the corresponding partial ancestral graph.
Strengths: - Adapting NoGAM [1] to the case allowing hidden variables is a practically meaningful contribution.
- Model assumptions are somewhat weakened compared to CAM-UV by allowing for general mechanisms within blocks of observed and latent parents.
Weaknesses: The novelty of the paper lies primarily in the application of NoGAM to orient very specific edges in a partial ancestral graph (PAG) (the ancestral graph that represents the Markov equivalence class, in analogue to a CPDAG). This falls significantly short of the main contributions as described by the authors on l33--55. Specifically:
- The authors state that they show how constraints on the Jacobian of the score can be used as conditional independence testing. However, the extent to which this is done is only by noticing the equivalence between conditional independence and the corresponding zero in the Jacobian term (previously noted in [1,2]), without any formal analysis of the proposed t-test (Appendix C) as a statistical test of conditional independence (which happens to be a notoriously difficult test).
- The authors state that their identification results for additive noise models generalize the previous results obtained by previous works. In l193, the authors state "we remove the nonlinearity assumption (of [3]) and make the weaker hypothesis of a restricted additive noise model", but 1) this is a stronger assumption than additive noise, not a weaker one, and 2) the authors in [3] also consider the same restricted additive noise model.
- The authors claim that AdaScore is able to handle a broad class of causal models (l54), but three out of four possible situations are direct applications of existing work. 1) Under no structural assumptions with or without latent confounders, AdaScore simply performs constraint-based causal discovery (FCI) using the conditional independence properties of the Jacobian of the score, a straightforward application of [1] also previously noticed in [2]. 2) Under an additive noise assumption, AdaScore is exactly equivalent to NoGAM. 3) Only under an additive noise assumption with hidden confounders, does AdaScore generalize NoGAM to orient unconfounded edges of the PAG returned by FCI, which may be very few of the discovered adjacencies.
### Other comments
- The experiments do not seem to suggest that AdaScore outperforms other methods in any meaningful way---in fact, in Figure 1 a) AdaScore is completely equivalent to NoGAM, and is thus redundant. In Figure 1 b), where AdaScore should distinguish itself, it does not appear to be consistently better than CAM-UV.
- Much of the paper (> 5 pages) is spent on directly describing previous works, NoGAM[3] and/or provide basic background on DAGs and MAGs.
[1] Spantini et al., "Inference via low-dimensional couplings." JMLR 2018.
[2] Montagna et al., "Scalable causal discovery with score matching." CLeaR 2023.
[3] Montagna et al., "Causal discovery with score matching on additive models with arbitrary noise." CLeaR 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is it correct to say that the unconfounded edges that are oriented by AdaScore are the purely _undirected_ edges of the PAG, and not the potentially bidirected ones?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: The authors do not adequately discuss the limitations of their method---the limitations section in the appendix focuses purely on the empirical study. The authors claim that AdaScore is adaptive in the sense of being "less reliant on prior assumptions which are often untestable", but this is only in the sense that it performs different algorithms depending on user specification, which hardly constitutes one single unifying adaptive algorithm.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer TL8k for their thorough review and the valuable comments therein. One important criticism raised by the reviewer is that the contribution of our work is limited relative to the existing literature: in this regard, we refer to the general response and to the first point of our response in the section below.
## Weaknesses
- *“The authors claim that AdaScore is able to handle a broad class of causal models (l54), but three out of four possible situations are direct applications of existing work. …“.* In reply to points 1) and 2): the fact that 3 out of 4 possible situations **are extensions of existing works** does not mean that the 4th one is not there. Our main contribution, and its relation to previous work, are largely discussed in the paper and in the rebuttal, particularly in the **first point of the general response**.
Point 3) of the review says that the only novel part “may only apply to few edges.” We politely but firmly disagree with this statement: the number of edges whose direction is theoretically identifiable by AdaScore but not identified by methods such as FCI depends on the problem at hand; there is no ground to overlook this because of the subjective belief that it does not happen frequently. The reviewer agrees that there is a novel contribution w.r.t. the existing literature (from the review: “*Adascore does generalize NoGAM to orient unconfounded edges of the PAG returned by FCI*”). The reviewer also acknowledges that this is not done by any other existing method (throughout the review, it is mentioned that our method, and thus our theory, is more general than CAM-UV, FCI, and NoGAM; we mention that it is more general than RCD and LiNGAM, also). To these points we add that our work is the first to provide identifiability theory for latent variable causal discovery with score matching, a recent and lively line of work (Montagna et al. (a, b, c), Amin et al., Zhu et al., Rolland et al.) discussed in the paper (L35), with connections to the field of causal representation learning (Varici et al., 2024). The only point raised which may undermine our contribution is that our theory and method are not relevant given that the identifiability of the causal direction “*may apply to few edges*”, which is a purely subjective consideration, as it depends on the problem at hand and cannot simply be disregarded. **Given these remarks, we kindly ask the reviewer to reconsider their score on the contribution, and the global score, accordingly.**
- *“The authors state that they show how constraints on the Jacobian of the score can be used as conditional independence testing […]”.* We do not state that the Jacobian of the score can be used as conditional independence testing or that we propose a statistical test. We say that we use the Jacobian of the score **as an alternative to conditional independence testing** (L9-10, L41-42): while constraint-based methods (e.g., PC) usually rely on conditional independence testing (e.g. Zhang et al., 2011), instead, we consider sparsity of the Jacobian of the score. These are two different statistical problems. Our task concerns the estimation of the Jacobian of the score (Rolland et al. for details on such estimation, Zhu et al. for its statistical efficiency, both references are in the paper), and using a t-test for finding entries of such Jacobian with zero means: the t-test is not part of our novel contributions, we will add a citation e.g. to Walpole et al., (1972) for relative details.
Later, the reviewer writes “[…] *the extent to which this is done is only by **noticing** the equivalence between conditional independence and the corresponding zero in the Jacobian term (noted in [1, 2])*”: **we do not merely note** the equivalence, **we prove it** as a corollary of Spantini et al. (see the 1st point of the general response). Concerning the relation with [2], referenced in our paper: they proposed a version of our Proposition 1 in the limited setting of nonlinear additive Gaussian noise models, while our result holds under a generic SCM.
- *“In L193, the authors state, "we remove the nonlinearity assumption (of [3]) and make the weaker hypothesis of a restricted additive noise model", but 1) this is a stronger assumption [...]”* [3] assume a *nonlinear restricted additive noise model,* while we assume a *restricted additive noise model.* Allowing for linear and nonlinear mechanisms is weaker than allowing for nonlinear mechanisms only.
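To make the testing step mentioned in our second reply above concrete, here is our own minimal sketch (not the paper's implementation) of a one-sample test, using a normal approximation to the t-test, for whether an entry of the score's Jacobian has zero mean; `absent` and `present` are synthetic stand-ins for per-sample estimates of such an entry.

```python
import numpy as np

def entry_vanishes(samples, crit=1.96):
    """Approximate one-sample t-test at the 5% level of H0: mean == 0.
    Returns True when H0 is not rejected, i.e. no evidence of an edge."""
    t = samples.mean() / (samples.std(ddof=1) / np.sqrt(len(samples)))
    return abs(t) < crit

rng = np.random.default_rng(1)
# Stand-ins for per-sample estimates of two Jacobian entries:
absent = rng.normal(0.0, 0.5, size=400)    # true mean 0 -> entry vanishes
present = rng.normal(1.0, 0.5, size=400)   # true mean 1 -> entry is nonzero
print(entry_vanishes(absent), entry_vanishes(present))
```

In practice the samples would come from a score-Jacobian estimator (e.g., the Stein-based estimation discussed in Rolland et al.), and the zero pattern of the tested entries would drive the skeleton recovery.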
## Limitations
We reiterate the adaptivity of AdaScore in the sense described in the paper: it can operate with theoretical guarantees on latent variable models, still identifying the causal direction when this is theoretically possible, which is not done by other methods (FCI, NoGAM, LiNGAM), or is done only under more restrictive assumptions (CAM-UV, RCD). We remark that this does not require any user specification. The user specification simply allows one to turn off the search for latent variables, if this is thought to be unnecessary. This is all described in the main paper (see L263-265, and L266, where “We […] describe the version of our algorithm whose output is a mixed graph”, i.e. AdaScore when the user is willing to account for latent variables). We could easily remove the possibility of interacting with AdaScore to provide prior knowledge, but this would be a poorer design choice (e.g., see [DoDiscover](https://www.pywhy.org/dodiscover/dev/tutorials/markovian/example-pc-algo.html#Define-the-context) for a discussion of this point from an existing library).
We __renew our thanks to the reviewer__ for the points raised, and are willing to incorporate suggestions to better reflect the nature of our contribution, e.g., renaming Proposition 1 to “Corollary 1”, which better reflects its link with Spantini et al. If our response satisfactorily addresses the points raised, we kindly ask the reviewer to reconsider their score for our paper.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their response. However, the rebuttal does not sufficiently address my main concerns with the paper, and I will opt to retain my score at this time.
## On the main contribution of the paper.
> the fact that 3 out of 4 possible situations are extension of existing works, does not mean that the 4th one is not there.
As mentioned in the original review, I acknowledge the novelty of using score matching to orient edges in a mixed graph. I think I am in agreement with the authors that this is the main contribution of the work. In view of this, the paper does not do a thorough enough investigation to warrant publication at this time. The specific example I mentioned pertains to the lack of an analysis of the proposed t-test as a conditional independence test.
> We do not state that the Jacobian of the score can be used as conditional independence testing or that we propose a statistical test. We say that we use the Jacobian of the score as an alternative to conditional independence testing (L9-10, L41-42): while constraint-based methods (e.g., PC) usually rely on conditional independence testing (e.g. Zhang et al., 2011), instead, we consider sparsity of the Jacobian of the score.
If using the Jacobian of the score as an alternative to a conditional independence test relieves it from being analyzed as a statistical test, then additional theory should be developed, e.g., adapting Thm. 3.2 from Zhu et al. on the theoretical properties of the discovered graph. This may not be a trivial addition, as Zhu et al. deals only with the non-linear Gaussian model.
> we don’t merely note the equivalence. We prove it as a corollary of Spantini et al., (see the 1st point of the general response).
As reviewer `mmww` also points out, this is a rather trivial application of the result when assuming faithfulness.
## On adaptiveness of the algorithm
> We remark the adaptivity of AdaScore in the sense described in the paper, meaning that it can perform with theoretical guarantees on latent variable models, still identifying the causal direction when this is theoretically possible, which is not done by other methods (FCI, NoGAM, LiNGAM), or it is done under more restrictive assumptions (CAM-UV, RCD).
This is closer to my understanding of the contribution of the paper, but not how I understood the authors interpretation of "adaptivity". For example, in l261: "The main strength of our approach is its adaptivity with respect to structural assumptions: based on the user’s belief about the plausibility of several modeling assumptions on the data".
> We remark that this does not require any user specification. The user specification simply allows to turn off the search of latent variables, if this is thought as unnecessary.
I do not understand how this is not user specification. For me, an algorithm that is adaptive in the sense of the paper would need a routine to detect the presence of (significant) latent confounders, then automatically apply the mixed graph version. Otherwise, the AdaScore algorithm reads more as a computer application that includes previous methods rather than a unifying algorithm.
### More minor points.
> Allowing for linear and nonlinear mechanisms is weaker than allowing for nonlinear mechanisms only.
My apologies, thank you for the clarification.
> Point 3) of the review says that the only novel part “may only apply to few edges.”
I acknowledge that the "few edges" comment was subjective. I recognize that this may have come off as dismissive and I apologize for that. This was however not a significant point for my score.
---
Reply to Comment 1.1.1:
Comment: We are surprised by the request for finite-sample guarantees on the score matching estimation, as this request was not in the original review. Overall, it is clear that we disagree with the reviewer about this paper. In any case, learning theory results are far beyond the scope of the paper: the majority of causal discovery algorithms in the literature do not come with finite-sample guarantees.
Concerning the use of the word adaptivity: we agree with the reviewer that we could choose better wording; we will adopt the phrasing used in the rebuttal to better express the benefits of our method. | Summary: The paper extends theoretical results about causal discovery through score matching to encompass both linear and non-linear SCMs and to lift the sufficiency assumption. The theoretical results relax the non-linearity assumption of Montagna et al 2023 by replacing it with the less restrictive restricted-ANM assumption of Peters et al 2009. As for latent confounder detection, a parallel with m-separation is drawn using results from Spantini et al 2018, establishing that the score will be non-zero in the presence of an active path. Following the theoretical results, an algorithm to estimate causal graphs from data is proposed and evaluated, generalizing the NoGAM algorithm of Montagna et al 2023, which only covers the non-linear case.
Strengths: The paper is clearly written and, if it wasn’t for some of the definitions relegated to the appendix, very easy to follow.
The theoretical results, particularly Propositions 2 and 3 are important extensions of the score-matching methodology for causal discovery, dealing with both the non-linearity and sufficiency assumptions of the method proposed in Montagna et al. 2023.
Weaknesses: The paper's motivation is essentially the weakening of current assumptions for causal discovery methods. However, the assumptions and benefits of the proposed methodology are not clearly specified. In line 74, the authors state that faithfulness is assumed (I believe; it is somewhat hidden in the background notions). If that is the case, the method adopts the same assumption as FCI, plus ANM. So the proposed method relaxes assumptions compared to CAM-UV, RCD and NoGAM, but adds onto FCI. Regarding the benefits, the alleged flexibility of the method to output DAGs, MECs, MAGs, PAGs, which should make it preferable to FCI, is merely touched upon in the contributions and the experiment section.
Proposition 1 is a rather trivial application of the more general lemma in Spantini et al. and it does not specify the required faithfulness assumption to obtain the result from Eq. 6 in the paper.
The experimental results show limited added value according to the one metric chosen (SHD) in a synthetic setting. They are not comprehensive enough, with no application to common (pseudo-)real benchmarks (e.g. from bnlearn). More experiments and more metrics are needed as, as it stands, the proposed method seems to add no real value compared to the baselines. Additionally, it is not clear from the experiments if it is really able to identify confounders. Breaking down precision and recall by mark would show this. FCI and a random baseline should be also added for reference.
Experiments are conducted on data with at most 9 variables, and the scalability of the method is not shown nor discussed.
The model used to estimate residuals is not discussed, nor the assumption that the chosen model fits the data adequately to correctly estimate residuals, and what is needed to assess this.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Line 28: FCI weakness is highlighted to be that it outputs PAGs, does the proposed method improve on this at parity of assumptions?
- Line 44: ANM is an assumption on the causal mechanism, is it not?
- Line 217: "the noise terms recentered by the latent causal effects" what does this mean?
- Line 221: "N_i are assumed to be non-Gaussian when f_i is linear" This reduces to RCD, right?
- Line 257: "the remaining arrowheads … are identified no better than in the equivalence class" what does this mean?
- Line 279: "using hypothesis testing to find vanishing MSE..." how is this set up?
- Line 280: how much added value does pruning have? E.g. CAM pruning can have huge impact https://arxiv.org/pdf/1310.1533
- Alg. 1: do you start from a disconnected graph? Where do the "remaining" bidirected edges come from, do you reduce them prior to this?
- Line 292: exogenous variables are selected at random, hence could be sink nodes with no consequence on the discovery process. Alternative and more meaningful testing strategies could be to select confounders (e.g. https://proceedings.neurips.cc/paper_files/paper/2021/file/144a3f71a03ab7c4f46f9656608efdb2-Paper.pdf)
- Line 305: "presents better performance" I am not sure about this, they seem pretty aligned considering quartiles. Statistical tests to ascertain this are missing.
- Line 315: some realistic scenarios are available through e.g. the bnlearn repository (https://www.bnlearn.com/bnrepository/). Any reason not to test on some of the data available there?
Minor points and typos
- Line 73: reference for d-separation is missing
- Line 79: reference for MEC is missing
- Line 130: have -> has
- Line 147: "directed global Markov property": not introduced. I guess it is just the global Markov condition introduced before?
- Footnote 1: provides -> provide
- Line 162: "unobserved active paths" are not defined in Definition 5, m-separation.
- Line 209: full stop missing
- Line 236: the PAG -> a PAG
- Line 267: "Appendix C.2" what should the reader expect to find in there?
- Line 296: "We fix the […] level" significance
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the time and effort in reading our paper. One important concern unaddressed by our general response is about the limits of our experimental evaluation: we point to the first bullet in our response below, and the experimental results in the PDF of the rebuttal.
## Weaknesses
- “*The experimental results show limited added value according to the one metric chosen (SHD) in a synthetic setting. They are not comprehensive enough …*” We highly regard these suggestions and will include real-data experiments in our paper. Our results on bnlearn datasets presented in the PDF of the rebuttal (Fig. 2) show that AdaScore always outperforms CAM-UV and RCD, outperforms NoGAM on 3 out of 4 datasets, and outperforms LiNGAM on 2 out of 4 datasets. Thus AdaScore is a practical alternative with better theoretical identifiability guarantees than any of the other methods we consider. Please note, though, that real and synthetic benchmarks for causal structure learning are documented as being far from satisfactory, see e.g. “Self-Compatibility: Evaluating Causal Discovery without Ground Truth”, 2024, Faller et al.; experimental results should be taken with a grain of salt.
- *“In line 74, the authors state that faithfulness is assumed (I believe, it is kind of hidden in the background notions).*” In L74 we write “*we assume that the reverse direction* $X_i \perp X_j | X_Z \implies X_i \perp^d_{\mathcal{G}} X_j | X_Z$ *holds*”: this is the definition of the faithfulness assumption (see our references therein), so the faithfulness assumption is not hidden in the background notions, but explicitly made for the first time in L74.
”*If that is the case, the method adopts the same assumption as FCI, plus ANM. …*”. We point to the second bullet of the general response for the answer on this point. Concerning the request of an experimental comparison with FCI, consider that the two methods have different identifiability guarantees and so different types of graphs as output. In this sense, FCI is not the ground for a fair comparison, as it loosens the assumptions at the price of weaker identifiability guarantees. We propose to compare the FCI F1 accuracy in skeleton recovery versus AdaScore F1 accuracy in skeleton recovery when inference is done according to Proposition 1 (at the base of the skeleton inference in AdaScore), which does not require assumptions on the SCM. Experimental results are shown in the PDF of the rebuttal (Fig. 3): we see that AdaScore consistently outperforms FCI in the task of recovery of the PAG skeleton.
- “*Proposition 1 is a rather trivial application of the more general lemma in Spantini et al. and it does not specify the required faithfulness assumption to obtain the result from Eq. 6*” For the first part (that Proposition 1 is a trivial application of Spantini et al.), we point to the first bullet in the general response of this rebuttal. For the second point, about Spantini et al. and the faithfulness assumptions, some clarifications are needed: Eq. 6 in the paper *is* the Lemma of Spantini et al. (and is not obtained from it, as written in the review) and does not require the faithfulness assumption. To avoid any confusion, **we propose to write** “*Note that this result does not make use of the faithfulness assumption*” right after Eq. 6. Proposition 1 is a corollary of Spantini et al. that makes use of the faithfulness assumption.
- *“[…] scalability of the method is not shown nor discussed.”* Scalability is discussed in Appendix C.3 and Proposition 5, also referenced in the main manuscript, L 284. We propose to add experiments with larger number of variables to the camera-ready version.
- *“The model used to estimate residuals is not discussed, nor the assumption that the chosen model fits the data adequately to correctly estimate residuals, and what is needed to assess this.”* We use kernel ridge regression, as proposed in the original NoGAM paper. Note that the target function in least squares regression is the conditional expectation of the target given the covariates, i.e. $\mathbb{E}[Y|X]$ for generic variables $X, Y$, where $Y$ is the target. This is consistent with our theoretical analysis, where the residuals of a variable $V_k$ are defined as $V_k - \mathbb{E}[V_k|V_{Z\setminus \{k\}}]$ (Equation 15), which are indeed the least squares regression residuals. We agree that this point is worth discussing, and will add it to the main text of our paper.
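To illustrate the point above, here is a minimal sketch (our own illustration with invented data and hyperparameters, not the paper's actual code) of estimating least-squares regression residuals with kernel ridge regression, as in a NoGAM-style procedure:

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-2, 2, size=(n, 2))              # covariates V_{Z \ {k}}
noise = rng.normal(0.0, 0.1, size=n)             # exogenous additive noise
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2 + noise # target V_k, additive noise model

# Kernel ridge regression estimates the least-squares target E[V_k | V_{Z \ {k}}]
# nonparametrically (hyperparameters here are illustrative, not from the paper).
model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=0.5).fit(X, y)

# Residuals V_k - E_hat[V_k | ...]: these approximate the exogenous noise term.
residuals = y - model.predict(X)
print(float(np.std(residuals)))
```

The key point is that the residuals of any consistent least-squares regressor estimate $V_k - \mathbb{E}[V_k|V_{Z\setminus \{k\}}]$; kernel ridge regression is one such choice.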
## Questions
- *“Line 28 …”* As causal directions can not be identified at parity of assumptions, we do not improve upon FCI identifiability guarantees in that setting. Better identifiability comes from additional assumptions, as discussed in the general response.
- *Line 44 …* Yes, thank you.
- *”Line 217: ...*" “Recentered” is intended in the sense that we have the random variable $f_i(V_{PA_i})$, the observed causal effects, which is shifted by the addition of $g_i(U^i)$ (see Equation 13). We will provide a clearer formulation of this point.
- *”Line 221: ..."* Our setting certainly subsumes that of RCD, but is not limited to it: we can have a mixture of linear and non-linear dependencies between $X_i$ and its observable parents, and also arbitrary relations to the unobserved parents.
- *”Line 257: ...”* Our method can provide no further orientations than the ones that already follow from the Markov equivalence class. We will state this more clearly in the paper.
- *”Line 279: ...”* See section C.2
- *”Line 280 ...* We also observed that pruning has a non-negligible impact. We will add the ablation study in the camera-ready version of the paper.
- *”Alg. 1: do you start from a disconnected graph? ...* As we describe more explicitly in Algorithm 2, we start by adding undirected edges between $X_i$ and $X_j$ if $X_i \not\perp X_j | X_V \setminus \{X_i, X_j\}$. We will make this clearer in the main text in the updated version.
- *“Line 305: ...* We will add statistical tests to the empirical evaluation in the camera ready version.
---
Rebuttal Comment 1.1:
Comment: Many thanks for the clarifications and the extra experiments provided. Responding in order:
- Thanks for providing the results on some of the bnlearn datasets. I agree on having to take them with a pinch of salt. Indeed I would create CI around the numbers you provided, by changing the seed of the BN DGP. By the looks of the plot, I believe that a lot of them could not be significantly different from each other.
- I apologise if my comment about the assumption being "kind of hidden" came across the wrong way. I was in no way accusing. I was just pointing out that, given the importance of assumptions in your claim about the added value of your proposed method, the organisation and wording of the paper do not make clear enough their importance, strictness or benefits. In my view, the way you explained in the second bullet of the general rebuttal is already much clearer and effective, at a high level, than the paper. As for the comparison with FCI, the skeleton accuracy is good. The way I would go about it is to transform the output mixed graphs from ADAscore to a PAG, then compare to FCI. Otherwise you only compare less than half of the work done by the algorithm.
- The point to stress is not that Eq 6 does not use faithfulness, it's that Proposition 1 does.
- Re scalability, thanks to the pointer to the appendix, I had indeed seen that. I still think that the discussion is not satisfactory. The main paper just reads: "This way, we get an algorithm that is polynomial in the best case (Appendix C.3)." and, to me, that is not a great way to discuss potential limitations. Why only 9 variables? I would show a comparison of elapsed/computing time to back up your claim that the proposed algorithm is practical and with superior guarantees.
- Line 28: great, now it is clear, thanks. Again, from a reader perspective, the way the motivation is presented raises expectations that the paper does not fulfill.
- Line 44: in my view, the paper reads like ANM is not an assumption worth noting. To go back to my previous point about "kind of hidden", assumptions should be better organised and elaborated upon.
- Line 279: Thanks for the pointer. I had not read that since you only reference it when talking about the pruning. I would separate the two discussions, title them more clearly and reference them accordingly in the main paper.
- Line 280: interesting, the reference I provided shows a large impact of pruning.
- Line 305 and extra experiments provided: I still think that the experiments show that the method is not really good at doing what it is supposed to do better than the other algorithms: for linear systems with confounders it is basically the same as random, and LiNGAM is much better (though, of course, it is more specialised). For the rest of the scenarios it is almost always on par with or worse than other baselines. Confounder detection is quite poor, as shown in Figure 4 in the additional results.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for dedicating time to our rebuttal.
---
**Q1** “*Indeed I would create CI around the numbers you provided.”* We agree with the reviewer relative to the need to produce confidence intervals: we commit to adding them in the camera-ready.
---
**Q2**
- *”the way you explained in the second bullet of the general rebuttal is already much clearer”* We will adopt this wording and add explicit assumption paragraphs.
- *“The way I would go about it is to transform the output mixed graphs from ADAscore to a PAG, then compare to FCI.”* We find the required experiment not as straightforward as it seems. The output of AdaScore a priori only makes statements about direct causal connections, not about d-separations (or m-separations). E.g., suppose our algorithm outputs $X \leftrightarrow Y \leftrightarrow Z$. This means we cannot orient the edges between $X$ and $Y$ and between $Y$ and $Z$, which could mean either that there is a hidden mediator between $X$ and $Y$ (and the edge could go either way, i.e. $X \rightarrow L \rightarrow Y$ or $X \leftarrow L \leftarrow Y$) or that there is a hidden confounder. In the former case, either $X \perp Z | Y$ or $X \not\perp Z | Y$ could hold. Therefore the PAG w.r.t. our output is not well defined. Thus, to test “the other half” of our work, a more sensible approach is via experiments as in Figure 1 of the paper.
---
**Q3.** “*Re scalability …”*
- The pointer to Appendix C.3 is indeed there to discuss the scalability in more detail, as the content of the appendix would not fit within the main paper's page limit.
- On scalability experiments
1. Fig 5 of the paper already shows AdaScore scalability with the number of samples.
2. We propose to add more experiments on AdaScore scalability with the number of nodes: we present preliminary results showing that AdaScore scales much better than CAM-UV, the method with the best theoretical guarantees prior to ours. The values in the table are mean ± std in seconds.
| | 3 nodes | 5 nodes | 7 nodes | 9 nodes |
| --- | --- | --- | --- | --- |
| adascore | 1.555 ± 0.274 | 12.582 ± 6.568 | 43.596 ± 29.867 | 133.950 ± 81.151 |
| camuv | 3.999 ± 2.340 | 17.703 ± 3.894 | 80.196 ± 31.040 | 198.977 ± 69.306 |
| nogam | 2.374 ± 0.129 | 6.127 ± 0.631 | 12.042 ± 0.608 | 20.407 ± 0.796 |
| rcd | 0.409 ± 0.330 | 2.507 ± 0.960 | 10.720 ± 3.774 | 22.798 ± 3.641 |
| lingam | 0.015 ± 0.0 | 0.051 ± 0.001 | 0.124 ± 0.001 | 0.251 ± 0.001 |
---
**Q4 (Line 28)** *“great, now it is clear, thanks. Again, from a reader's perspective, the way the motivation is presented raises expectations that the paper does not fulfill.”* The reviewer originally asked whether we could identify more than FCI at parity of assumptions: as we discussed, this is theoretically impossible (see Glymour et al.). Thus, stating “*The FCI algorithm [11] can only return an equivalence class from the data. Appealing to additional restrictions ensures the identifiability of some direct causal effects in the presence of latent variables*” as done in L28-30 does not raise unfulfilled expectations, as this is exactly what we do in our paper.
---
**Q5 (Line 44)** “*in my view, the paper reads like ANM is not an assumption worth noting.”*
In L44 we write “*we prove that the score function identifies the causal direction of ANMs, with minimal assumptions on the causal mechanisms*”; the *minimal* part refers to the additivity of the noise: this is a minimal requirement to achieve identifiability, in light of the fact that most existing works place linearity/nonlinearity assumptions on the mechanisms.
We will remove the word minimal, stating “*we prove that the score function identifies the causal direction under the assumption of additive noise models, without further requirements on the causal mechanisms.*” We thank the reviewer for helping us clarify this point.
---
**Q6 (Line 305)**
- *“for linear systems with confounders it is basically the same as random and lingam is much better”* Out of the 16 experimental settings of Fig. 1 of the PDF rebuttal, AdaScore is comparable to random only in one or two - large-scale linear. LiNGAM is comparable to random in at least 5 out of 16 (see nonlinear).
- *“it is almost always at par or worse than other baselines”* Considering median performance (please recall that we already committed to adding statistical tests in the main paper), we see CAM-UV better than AdaScore 4 out of 16 times, NoGAM 2 out of 16, RCD 4 out of 16, LiNGAM 4 out of 16.
The reviewer's claim that other methods are better than AdaScore is not supported by empirical evidence, and it does not account for the fact that their performance is achieved via stricter assumptions. Additionally, we remark on the following points that emerged in the discussion, agreed upon by both us and the reviewer:
1. Conclusions from experiments must be drawn with care
2. Our method has better theoretical guarantees, which are very relevant as they are valid beyond the scope of our experiments, i.e. in the real world.
---
Rebuttal 2:
Title: Tables of p-values for dense graphs
Comment: For completeness, we present the tables of p-values of all the MW tests on experiments relative to __dense graphs__. Please notice that adding the random and fully random baselines to the experiments changed the random state of our runs relative to Fig. 1 in the main paper. Results are qualitatively the same, but we observe a few mismatches due to the variance resulting from different random states.
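For reference, one-sided p-values of this kind can be computed with `scipy.stats.mannwhitneyu`; the sketch below uses synthetic SHD scores as a stand-in for the per-run results behind the tables (the values are invented for illustration):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
# Hypothetical SHD values over repeated runs (lower is better); these
# stand in for the actual per-run experimental scores.
shd_adascore = rng.poisson(4, size=20)
shd_baseline = rng.poisson(6, size=20)

# alternative="less": H1 is that AdaScore's SHD is stochastically smaller.
p_less = mannwhitneyu(shd_adascore, shd_baseline, alternative="less").pvalue
# alternative="greater": H1 is that AdaScore's SHD is stochastically larger.
p_greater = mannwhitneyu(shd_adascore, shd_baseline, alternative="greater").pvalue
print(p_less, p_greater)
```

A small `alternative="less"` p-value supports AdaScore being better than the baseline; a small `alternative="greater"` p-value supports the opposite, matching the two table groups below.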
# Alternative: less
__Linear fully observable model__
| | 3 | 5 | 7 | 9 |
|:-------------|------------:|------------:|------------:|----------:|
| camuv | 0.657895 | 0.544101 | 0.485256 | 0.782064 |
| nogam | 0.657895 | 0.342105 | 0.139931 | 0.455899 |
| rcd | 0.514744 | 0.782064 | 0.514744 | 0.8425 |
| lingam | 0.657895 | 0.803476 | 0.999756 | 1 |
| random | 0.000243564 | 6.49505e-05 | 0.000525017 | 0.315264 |
| fully_random | 1.08251e-05 | 1.08251e-05 | 5.41254e-06 | 0.0216285 |
__Nonlinear fully observable model__
| | 3 | 5 | 7 | 9 |
|:-------------|------------:|------------:|------------:|------------:|
| camuv | 0.139931 | 0.240625 | 0.0177315 | 0.098781 |
| nogam | 0.397968 | 0.369682 | 0.602032 | 0.630318 |
| rcd | 3.78878e-05 | 0.000243564 | 2.16502e-05 | 5.41254e-06 |
| lingam | 0.0177315 | 0.00927169 | 0.0315064 | 0.00446535 |
| random | 0.000102838 | 1.08251e-05 | 1.08251e-05 | 1.08251e-05 |
| fully_random | 6.49505e-05 | 5.41254e-06 | 5.41254e-06 | 5.41254e-06 |
__Linear latent variables model__
| | 3 | 5 | 7 | 9 |
|:-------------|----------:|------------:|---------:|----------:|
| camuv | 0.485256 | 0.823659 | 0.544101 | 0.982269 |
| nogam | 0.264424 | 0.782064 | 0.289371 | 0.1575 |
| rcd | 0.426714 | 0.630318 | 0.735576 | 0.985597 |
| lingam | 0.455899 | 0.139931 | 0.982269 | 0.998955 |
| random | 0.0715701 | 0.000525017 | 0.139931 | 0.0715701 |
| fully_random | 0.037628 | 0.000525017 | 0.176341 | 0.426714 |
__Nonlinear latent variables model__
| | 3 | 5 | 7 | 9 |
|:-------------|---------:|-----------:|-----------:|-----------:|
| camuv | 0.735576 | 0.426714 | 0.684736 | 0.426714 |
| nogam | 0.630318 | 0.217936 | 0.630318 | 0.455899 |
| rcd | 0.973787 | 0.0525612 | 0.759375 | 0.426714 |
| lingam | 0.289371 | 0.00574812 | 0.0216285 | 0.00927169 |
| random | 0.123725 | 0.0019431 | 0.0116153 | 0.00342073 |
| fully_random | 0.176341 | 0.00036264 | 0.00927169 | 0.00342073 |
---
# Alternative: greater
__Linear fully observable model__
| | 3 | 5 | 7 | 9 |
|:-------------|---------:|---------:|------------:|------------:|
| camuv | 0.369682 | 0.514744 | 0.544101 | 0.240625 |
| nogam | 0.369682 | 0.684736 | 0.876275 | 0.602032 |
| rcd | 0.514744 | 0.264424 | 0.544101 | 0.196524 |
| lingam | 0.369682 | 0.240625 | 0.000525017 | 5.41254e-06 |
| random | 0.999897 | 0.999978 | 0.999637 | 0.710629 |
| fully_random | 1 | 0.999995 | 1 | 0.982269 |
__Nonlinear fully observable model__
| | 3 | 5 | 7 | 9 |
|:-------------|---------:|---------:|---------:|---------:|
| camuv | 0.891219 | 0.782064 | 0.988385 | 0.917253 |
| nogam | 0.630318 | 0.657895 | 0.455899 | 0.426714 |
| rcd | 0.999989 | 0.999838 | 0.999995 | 1 |
| lingam | 0.988385 | 0.992655 | 0.973787 | 0.997402 |
| random | 0.999935 | 0.999995 | 1 | 0.999995 |
| fully_random | 0.999962 | 1 | 1 | 1 |
__Linear latent variables model__
| | 3 | 5 | 7 | 9 |
|:-------------|---------:|---------:|----------:|----------:|
| camuv | 0.544101 | 0.196524 | 0.514744 | 0.026213 |
| nogam | 0.759375 | 0.264424 | 0.759375 | 0.876275 |
| rcd | 0.602032 | 0.426714 | 0.315264 | 0.0177315 |
| lingam | 0.573286 | 0.891219 | 0.0216285 | 0.0019431 |
| random | 0.938497 | 0.999637 | 0.891219 | 0.938497 |
| fully_random | 0.968494 | 0.999756 | 0.860069 | 0.602032 |
__Nonlinear latent variables model__
| | 3 | 5 | 7 | 9 |
|:-------------|---------:|---------:|---------:|---------:|
| camuv | 0.315264 | 0.602032 | 0.342105 | 0.630318 |
| nogam | 0.426714 | 0.823659 | 0.426714 | 0.602032 |
| rcd | 0.037628 | 0.955395 | 0.289371 | 0.630318 |
| lingam | 0.759375 | 0.996579 | 0.985597 | 0.992655 |
| random | 0.904842 | 0.99856 | 0.992655 | 0.998057 |
| fully_random | 0.860069 | 0.999756 | 0.994252 | 0.997402 |
---
Rebuttal 3:
Comment: We thank the reviewer for the active participation in the discussion. We are surprised that the reviewer specifies only now that all experiments with fewer than ten variables and in sparse settings are not meaningful for analyzing the algorithm's performance, as this point was never raised in the review or in the authors-reviewers discussion, where the requests were limited to a better analysis of scalability (without even mentioning density as a discriminating point). Regarding the claim that these are the settings common in the literature, we point to the CAM-UV paper (the work closest to ours), which has experiments on at most nine variables and one semi-synthetic experiment on ten variables. Further, we notice that the reviewer's analysis of the quality of AdaScore compared to other methods is not based on statistically significant statements: e.g., CAM-UV is better than AdaScore with statistical significance in one experimental setting out of 32. Yet, the reviewer concludes that CAM-UV is better than AdaScore at inference on linear latent variable models, which encompass 8 out of 32 of our experimental settings. As the statistical tests were an explicit and well-motivated request from the reviewer, made to enable statistically significant conclusions, we will stick to the conclusions supported by the statistical tests.
In this spirit, we provide the summary table with the level of the test $0.05$, as asked by the reviewer:
| | Alternative: Less | Alternative: Greater |
| --- | --- | --- |
| LiNGAM | 13/32 | 6/32 |
| RCD | 12/32 | 2/32 |
| NoGAM | 0/32 | 0/32 |
| CAM-UV | 1/32 | 1/32 |
| Random | 26/32 | 0/32 |
| Fully Random | 29/32 | 0/32 |
These results contrast with the analysis of the superiority of certain methods made by the reviewer. Specifically, we respond to specific points made by the reviewer and not supported by the empirical evidence:
- “Linear fully observable on par with random”: we have the following list of p-values for *alternative: less* on linear fully observable models compared with random: 0.0002, 6.49e-05, 0.0005, 0.31, 5.41e-6, 5.41e-6, 5.41e-6, 5.41e-6. The only case in which the p-value is higher than 0.05 is 0.31. The *alternative: greater* p-values instead are never below the 0.05 threshold for random, and not even close: 1, 1, 1, 1, 0.99, 0.99, 0.99, 0.71. AdaScore is not on par with random.
- “linear latent: worse than camuv”: as already discussed, there is only one time in which *alternative: greater* in comparison with CAM-UV, in the linear latent setting, goes below 0.05. The claim is not supported by empirical evidence.
Further, the reviewer writes that the linear latent case is the focus of our work: we disagree, both the linear and nonlinear settings are equally important, as the point of the paper is indeed relaxing linearity and nonlinearity assumptions on the causal mechanisms, in the context of latent variable models. If we suggested that linear latent variable cases were the focus of our work, we kindly ask the reviewer to point to the specific parts of the paper.
Finally, concerning the claim that contributions need to be clarified, we again point to L48, the paragraph starting with “our main contributions” written in bold. Concerning the claim that assumptions need to be clarified, the only specific criticism expressed on this point was relative to the positioning of faithfulness assumption, that could be more upfront. We already addressed this and remark on our offer to make a separate paragraph. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for the time spent reading and understanding our paper, as well as for the insightful comments and questions. Our paper is **well received in terms of clarity** - with Presentation scores of 3 from all reviewers - **and soundness** - with scores 3, 3, 4 from R TL8k, R mmww, R WsKx respectively. A more **polarized view concerns the amount of contribution** of our work, with scores ranging from 1 to 4. R mmww and R WsKx find Proposition 3 an important contribution, whereas R TL8k recognizes the novelty in our result concerning the identifiability of directed edges in restricted ANMs with latent variables but argues that in practice “very few of the discovered adjacencies” happen to be identifiable by our rule. We will go into the details of this in the individual response. In the PDF attached to the rebuttal, we present experimental results with a random baseline (Fig. 1), experimental results on bnlearn data (Fig. 2), a comparison with FCI performance (Fig. 3), and experiments with the F1 metric (Fig. 4), as requested by R mmww.
In the general response, we address the __two concerns that are common to R TL8k and R WsKx__, which are the lack of novelty in Proposition 1 and the relation of our method relative to FCI.
- Proposition 1 is a corollary of an existing result from Spantini et al.: in our work, we extensively discuss the related result of Spantini et al., explicitly mentioning it as the main resource used to derive Proposition 1 (which we specify to be *Adapted from Spantini et al.*, see the Proposition 1 statement). To make this point clearer and tone down our claimed contribution in Proposition 1, we propose to call it Corollary 1, making its relation with Spantini et al. even more explicit. In addition, in our work we explicitly state that this is not our main contribution, and write that our main contribution (Proposition 3) builds on the existing results of Spantini et al. and Montagna et al., L46-47: “*On these results, we build the main contribution of our work*”. Specifically, in the subsequent paragraph (L48) we specify that our main contributions are that (i) we provide theoretical ground for score-based causal discovery of hidden variables and identifiable direct causal edges in latent variable models, and (ii) we translate these findings into an algorithm. In this sense, **we kindly ask the reviewer to consider what we explicitly state to be the main contributions** in the paper, which do not include Proposition 1.
- Concerning the relation to FCI, reviewers highlight that it requires fewer assumptions compared to our method: it is indeed the case that we make the same assumptions as FCI, plus ANM. The point elaborated in the paper is that the ANM assumption (in addition to those made, e.g., by FCI) allows us to prove the identifiability of the direction of certain causal edges in the PAG that are not identifiable under the FCI assumptions. The trade-off between identifiability and assumptions is well known, as it is intrinsic to the dichotomy between constraint-based and functional-based causal discovery, see e.g. “Review of Causal Discovery Methods Based on Graphical Models”, 2019, Glymour et al. Overall, our solution provides identifiability guarantees that are not given by any method in the literature based on restrictions of the SCM, which are the main ground of comparison for our work, as they belong to the same category of functional-based methods. Lastly, we answer the question from R TL8k asking whether “*Is it correct to say that the unconfounded edges that are oriented by AdaScore are the purely undirected edges of the PAG, and not the potentially bidirected ones?*”: in this context, to further distinguish our method from FCI, we highlight the following stronger identifiability guarantees of AdaScore compared to FCI:
- AdaScore can direct unconfounded edges that are undirected in FCI.
- AdaScore can direct unconfounded edges that are potentially bidirected in FCI (i.e. $\circ \hspace{-1.4mm}\rightarrow$). An easy example is the three variable collider $X \rightarrow Y \leftarrow Z$. FCI will output the graph $X \circ \hspace{-1.4mm}\rightarrow Y \leftarrow \hspace{-1.4mm} \circ Z$, i.e. potentially bidirected edges. If there are no hidden confounders, under restricted ANM assumptions our method is able to orient both edges. Since the semantics of our directed edges imply that there are no hidden confounders, we also know that these edges would have tails at $X$ and $Z$ in the PAG semantics. This answers R TL8k question directly.
- Directed edges in AdaScore encode direct causal effects, whereas directed edges in FCI only encode ancestral relationships.
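The collider example above can also be checked numerically: in $X \rightarrow Y \leftarrow Z$, $X$ and $Z$ are marginally independent, but conditioning on the collider $Y$ induces dependence. A minimal sketch with linear Gaussian mechanisms (our illustration only; the paper's setting is more general):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
x = rng.normal(size=n)
z = rng.normal(size=n)
y = x + z + 0.1 * rng.normal(size=n)  # collider structure X -> Y <- Z

# Marginally, X and Z are independent (near-zero correlation)...
marg = float(np.corrcoef(x, z)[0, 1])
# ...but conditioning on Y induces strong dependence (partial correlation).
r_xy = float(np.corrcoef(x, y)[0, 1])
r_zy = float(np.corrcoef(z, y)[0, 1])
partial = (marg - r_xy * r_zy) / np.sqrt((1 - r_xy**2) * (1 - r_zy**2))
print(marg, partial)
```

The marginal correlation is close to zero while the partial correlation given $Y$ is strongly negative, which is the independence pattern that lets constraint-based methods detect the collider in the first place.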
Note that at parity of assumptions, the empirical results presented in the PDF of the rebuttal show that AdaScore outperforms FCI in the inference of the PAG skeleton (Fig. 3 of the PDF).
**References of the rebuttal**
Scalable and flexible causal discovery with an efficient test for adjacency, 2024, Amin et al.
Scalable Causal Discovery with Score Matching, 2023a, Montagna et al.
Causal Discovery with Score Matching on Additive Models with Arbitrary Noise, 2023b, Montagna et al.
Assumption violations in causal discovery and the robustness of score matching, 2023c, Montagna et al.
Score matching enables causal discovery on additive noise models, 2022, Rolland et al.
Inference via low-dimensional couplings, 2017, Spantini et al.
Probability and statistics for engineering and scientists, 1972, Walpole et al.
Score-based Causal Representation Learning: Linear and General Transformations, 2024, Varici et al.
Kernel-based Conditional Independence Test and Application in Causal Discovery, 2011, Zhang et al.
Pdf: /pdf/e426f2ee22783014ba3cc911d371ff96560b1f8f.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On Theoretical Limits of Learning with Label Differential Privacy | Reject | Summary: This paper investigates the theoretical boundaries of learning with Label Differential Privacy (Label-DP) in both central and local models.
Label-DP is a weakening of standard differential privacy, where only the privacy of the "label" of each example is to be protected (an example is a pair (feature vector, label)).
The key contributions of the paper are to establish min-max optimal rates for excess error in the settings of:
* (multi-class) classification,
* regression with bounded labels,
* regression with unbounded labels (but under a bounded moment condition).
The min-max rates are over the class of data distributions that satisfy $\beta$-Holder smoothness, admits a lower bound on probability density that is bounded away from zero, assumes that there are no “sharp corners” in the input space, and a $\gamma$-margin assumption (in case of classification), or bounded label range or bounded label moments (in case of regression).
These min-max rates are then compared against the previously known min-max rates for learning under “full” local-DP (that protects both features and labels), as well as non-private learning.
The key takeaways are:
* Local-DP vs Non-Private:
* For classification and regression with bounded labels, the sample complexity under Local-DP increases by a factor of $1/\varepsilon^2$, but has the same rate in terms of desired excess error. This is unlike “Full Local-DP”, where the sample complexity is larger even in terms of the desired excess error.
* For regression with unbounded labels, the dependence of sample complexity on desired excess error is worse than the non-private setting.
* Central-DP vs Non-Private:
* The excess error is the sum of the non-private excess error and an additional term that decays faster in the number of samples, so the additional sample complexity due to privacy is negligible for very small excess error.
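As background for the local-model rates above, a canonical $\varepsilon$-label-LDP primitive is $k$-ary randomized response applied to the label alone, with the features released in the clear. The sketch below is illustrative only and is not necessarily the mechanism analyzed in the paper:

```python
import numpy as np

def k_rr(label, k, eps, rng):
    # k-ary randomized response: keep the true label with probability
    # e^eps / (e^eps + k - 1), otherwise output a uniformly random other
    # label. Output likelihoods differ by at most e^eps across true labels,
    # giving eps-label-LDP (only the label is randomized).
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return label
    others = [c for c in range(k) if c != label]
    return others[rng.integers(len(others))]

rng = np.random.default_rng(0)
k, eps = 5, 1.0
out = np.array([k_rr(2, k, eps, rng) for _ in range(10_000)])
frac_true = float(np.mean(out == 2))  # estimates e^eps / (e^eps + k - 1)
print(frac_true)
```

A learner can then debias statistics computed from the noisy labels, which is the standard route to the $1/\varepsilon^2$ sample-complexity blow-up in local models.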
Strengths: The paper provides a comprehensive study of the min-max rates for learning under label differential privacy, in both local and central models of DP, and for both classification and regression. This complements prior literature on min-max rates for learning (non-privately) and for learning under (full) differential privacy. The rates highlight the precise cost of _label_ differential privacy and the sample complexity benefits over full differential privacy.
Weaknesses: While there are many results in the paper, I think the proof techniques in both lower and upper bounds use mostly standard tools (This is not necessarily a weakness!).
The paper writing could be improved at several places though. Some comments are listed below under "Questions".
Technical Quality: 3
Clarity: 2
Questions for Authors: I don't have any significant questions.
Some suggestions about writing are listed below:
* Table 1: I understood that the last two rows of Table 1 are prior work, and the first two rows are the new contributions in this paper. But it would be good to include appropriate citations prominently, e.g. in the caption for Table 1.
* Line 122: $\mathbb{E}[l(g(X, Y))]$ should be $\mathbb{E}[l(g(X), Y)]$.
* Line 135 (Eq 8): I think $\eta^*(x) = \max_k \eta_k(x)$, but it is not formally defined anywhere?
* Line 137: Proof of Proposition 2 in Appendix A does not cover the regression setting. (It is quite standard, but might be good to include the proof for completeness.)
* Line 139: Is Assumption 1 only about the classification setting? If so, maybe it makes sense to move it to Section 4?
* Line 145: Assumption 1 (d): It is not explained until this point that $\mathcal{X} \subseteq \mathbb{R}^d$. I think this should be defined upfront if this is assumed throughout the paper.
* Line 169: If I understand correctly, the $\inf_{\hat{Y}}$ refers to $\hat{Y}$ that is a function of the outputs of the mechanism on all inputs $(M(x_i, y_i))_i$. This notation is implicit, but would be good to spell out.
* Line 202: Is there a reference for the bounds for “full” local DP, that protects both features and labels ?
* Line 244: How is Assumption 1 relevant for regression? What is $\eta_j$ ? Only points (c) and (d) make sense. Maybe (a) also makes sense for $\eta$ instead of $\eta_j$ ?
* Line 297: $Y_{Ti}$ notation is awkward to read. Might want to make it something like $Y^T_i$?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I do not see any potential negative societal impact of this work, as it is primarily theoretical.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your careful reading and valuable suggestions.
**Table 1**
Thanks for this comment, which is also raised by another reviewer kpNG. We will add citations directly in the table.
**Line 122**
Thanks. We will correct this typo.
**Line 135**
Thanks. $\eta^*(x)=\max_k \eta_k(x)$ is the maximum value of the regression functions $\eta_k(x)$ over the classes $k$.
**Line 137**
Thanks. Although the proof is straightforward, we will add the proof for completeness.
**Line 139**
Assumption 1(b) is for the classification setting only. (a), (c), (d) are also necessary for regression. We will revise the paper accordingly. Further discussions are provided in the reply of line 244.
**Line 145**
We will add the point that $\mathcal{X}\subset \mathbb{R}^d$.
**Line 169**
Yes, $\hat{Y}$ is a function of outputs of the mechanism on all inputs. This is not a typo, but we agree that it is better to emphasize it in the paper.
**Line 202**
This is the same as the comment about Table 1. Line 79-85 provide a brief overview of related works about full DP. The following are three main references about full DP.
[1] Density estimation: Duchi, John C., Michael I. Jordan, and Martin J. Wainwright. "Minimax optimal procedures for locally private estimation." Journal of the American Statistical Association 2018.
[2] Classification: Berrett, T., C. Butucea. Classification under local differential privacy. arXiv:1912.04629, 2019.
[3] Regression: Berrett, T. B., L. Györfi, H. Walk. Strongly universally consistent nonparametric regression and classification with privatised data. Electronic Journal of Statistics, 15:2430–2453, 2021.
The main technical tools are developed in [1]. [2] and [3] generalize the results to classification and regression problems.
**Line 244**
Here we continue the discussions about line 139. Yes, for assumption (a), here it is about $\eta$ instead of $\eta_j$: $|\eta(x)-\eta(x')|\leq L||x-x'||^\beta$.
(c) and (d) remain the same for the regression case.
We will revise the paper accordingly.
**Line 297**
Thanks for this suggestion. $Y_i^T$ appears to be better. We will change the notation in our revised paper.
We thank you again for your careful reading and fruitful suggestions. We will address these issues in our revised version.
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: I would like to thank the author(s) for their response. The proposed changes sound good to me, and I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much again for your feedback and for engaging in this discussion. If you have any remaining questions, please do let us know. | Summary: This work studies the minimax rates for classification and regression under (pure) label differential privacy in both the local and central models. They prove that rates of convergence for classification and regression with bounded label noise in the local label DP model are comparable to those for the non-private tasks, except for the expected $1/\varepsilon^2$ dependence. This represents an improvement over rates for standard DP in both settings, where there is a worse dependence on the dimension of the covariates. They also prove, however, in the case of regression with unbounded label noise, the convergence rate improvements over “full” DP aren’t as meaningful.
Strengths: This work makes notable progress in our theoretical understanding of the costs of label DP relative to non-private and full DP algorithms for the same learning task.
Weaknesses: The presentation could be improved in several places. Admittedly, this is written from a statistical perspective that is different from the one I am most familiar with, so some of the perceived presentation issues may just be a matter of convention, but the following changes might make this work more understandable to the general NeurIPS community:
Abstract:
The main challenge and the techniques to overcome them as stated in the abstract aren’t clear to me as a reader at this point. It’s not yet stated that the subject of interest is minimax rates, and so there’s no context for the statement “take infimum over all possible learners” and why that would present a challenge. Generally, I did not have a good idea of what the contribution of this work was from the abstract.
Introduction:
“the learning performances” -> “the learning performance”
“the label DP” -> “label DP”
In Table 1, attribution for the full DP rates in the local DP setting as well as the rates in the non-private setting should be given in the table. Also, I think there’s an issue with the parentheses in the local label DP rates for regression with bounded label noise.
Section 2:
In the “Minimax analysis for private data” paragraph, KNLRS11 is credited with finding the relation between label DP and stochastic queries. This is not accurate; that work characterizes local DP learning by the statistical query model.
Section 3:
“We hope that $R - R^*$ to be as small as possible” -> “we seek to minimize this risk” or something similar
“the Bayes optimal classifier and the corresponding Bayes risk is” -> “the Bayes optimal classifier and the corresponding Bayes risk are”
In Proposition 2, f(x) is used before it is defined.
Section 4:
I didn’t find the proof outline for Theorem 1 or Theorem 3 to be informative at all. It would be good to add more specifics if possible.
“Let the privacy mechanism M(x,y) outputs” -> “Let the privacy mechanism M(x,y) output”
Technical Quality: 3
Clarity: 2
Questions for Authors: Do you have a sense of how the rates would change for approximate label DP?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes, the authors address the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive review and detailed comments.
Firstly, thanks for finding these grammatical errors. We will correct them during revision.
**The main challenge and the techniques to overcome them as stated in the abstract aren’t clear to me as a reader at this point. It’s not yet stated that the subject of interest is minimax rates, and so there’s no context for the statement “take infimum over all possible learners” and why that would present a challenge. Generally, I did not have a good idea of what the contribution of this work was from the abstract.**
"Take infimum over all possible learners" refers to the fact that the standard packing method can only be used for statistical problems whose output dimension is fixed. However, to derive the minimax optimal risk of classification and regression, we need to consider all possible learners with arbitrary dimensionality. We will improve the abstract accordingly.
**In Table 1, attribution for the full DP rates in the local DP setting as well as the rates in the non-private setting should be given in the table. Also, I think there’s an issue with the parentheses in the local label DP rates for regression with bounded label noise.**
Thanks for this comment. Other reviewers also mentioned adding references to the table. Moreover, the second right parenthesis needs to be moved left.
**Full DP.** references are:
[1] Density estimation: Duchi, John C., Michael I. Jordan, and Martin J. Wainwright. "Minimax optimal procedures for locally private estimation." Journal of the American Statistical Association 2018.
[2] Classification: Berrett, T., C. Butucea. Classification under local differential privacy. arXiv:1912.04629, 2019.
[3] Regression: Berrett, T. B., L. Györfi, H. Walk. Strongly universally consistent nonparametric regression and classification with privatised data. Electronic Journal of Statistics, 15:2430–2453, 2021.
**Non-private.** There are many related references; here we list only a few standard ones.
[4] K. Chaudhuri et al. "Rates of convergence for nearest neighbor classification." NeurIPS 2014.
[5] J. Audibert et al. "Fast learning rates for plug-in classifiers." Annals of Statistics, 2007.
[6] P. Zhao et al. "Minimax rate optimal adaptive nearest neighbor classification and regression." IEEE Transactions on Information Theory, 2021.
**In the “Minimax analysis for private data” paragraph, KNLRS11 is attributed with finding the relation between label DP and stochastic queries. This is not accurate, this work characterizes local DP learning by the statistical query model.**
Thanks. It is a typo here. We actually mean "finding the relation between local DP and statistical queries." "label" should be "local", "stochastic" should be "statistical".
**“We hope that to be as small as possible” -> “we seek to minimize this risk” or something similar**
Thanks. "we seek to minimize the risk" is indeed better and we will change it during revision.
**In Proposition 2, f(x) is used before it is defined.**
Thanks. $f(x)$ is currently defined in Assumption 1(c); we will restate its definition within Proposition 2.
**I didn’t find the proof outline for Theorem 1 or Theorem 3 to be informative at all. It would be good to add more specifics if possible.**
Thanks. This point is also raised by reviewer #26J5; we refer to the answers to questions 1 and 4 there.
Theorem 1 is relatively simple. It uses existing techniques for standard non-private minimax analysis, as well as standard analysis for local DP (see question 1 in the reply of #26J5). Compared with existing works, we need to bound the divergences between two joint distributions with both public and private parts.
For Theorem 3, the first and second halves of the paragraph (lines 214-219) are not connected. We will only expand the second half in the revised paper.
Similar to classical minimax theory, we derive the lower bound by dividing the support into $G$ bins. For each bin, we construct a binary hypothesis. The lower bound of excess risk can then be converted to the lower bound of the error of hypothesis testing. Towards this goal, we develop a new tool (Lemma 1) to derive such a lower bound.
**Do you have a sense of how the rates would change for approximate label DP?**
Thanks for raising this good question. For approximate $(\epsilon, \delta)$-DP, we usually consider the case with very small $\delta$. Our intuition is that the rates will not be significantly different from those under pure DP. However, there currently seem to be no effective approaches for deriving minimax lower bounds under approximate local DP, so we cannot make rigorous statements at this point.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for addressing my concerns and committing to improving the presentation of the paper. My positive score was motivated by the technical contribution and the assumption that presentation issues would be addressed during the review process and so I will keep my score.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your response. We are glad that you maintain the score. Please let us know if you have any further questions or comments. | Summary: The paper considers the problems of classification and regression under the constraint of local/central pure label DP. The authors derive upper and lower bounds on the excess risk (compared to the non-private Bayes classifier/regression) for these problems, under somewhat standard assumptions on the 'ground truth' randomized label function $\eta$. For regression, both the case where the labels are bounded and have bounded moments are considered. For the lower bounds, the authors develop extensions of techniques from minimax estimation to label DP. For upper bounds, authors propose some algorithms combining 'binning' different examples with a privacy mechanism chosen according to the problem setting. The upper/lower bounds are matching in each setting up to logarithmic factors. For local label DP, the authors show the minimax excess risk with $N$ samples matches the non-private bounds using $N \min\{\epsilon^2, 1\}$ samples. In other words, with $\epsilon = \Omega(1)$ the minimax risk asymptotically matches the non-private risk, and otherwise there is an inherent separation. For central label DP, the minimax bound is one that approaches the non-private bound as $N \rightarrow \infty$ for any fixed $\epsilon$, showing a qualitative difference. For local "full" DP, i.e. the features are also private, even for $\epsilon = \Omega(1)$ and large $N$ one cannot achieve the non-private rate.
Strengths: * Derives optimal (up to log factors) upper and lower bounds for several different variants of classification/regression under label DP.
* To derive these bounds, introduces some new technical tools for minimax analysis of DP algorithms that might be useful in future work.
* Label DP is a variant of DP that is seeing attention in practice, and classification/regression are fundamental problems, so the results in the paper can readily have practical impact.
* The authors do a good job making clear the comparison between the results in different settings. e.g. Table 1 is a very concise summary that allows one to draw all the essential comparisons between the different settings, and there are discussions like Remark 1 that give qualitative interpretations of the quantitative results, and also discuss other baselines to compare to.
Weaknesses: The main issue is with the presentation. Specifically, the presentation does a great job explaining what the final results are and helping the reader contextualize them, but at some points the techniques used to obtain the results are discussed at a very high level in the main body, and why they work remains obscure even after reading the proof outlines multiple times. In some cases the authors do a good job concisely describing a proof; e.g., Theorem 6's proof outline is very concise but still gives a good idea of what the proof looks like, even if the reader would have to check the appendix for details. But for others, like Theorems 1/2/3, the proof outline is not very informative. See Questions for more details.
I understand the authors are constrained by space requirements, but I think the allocation of space in the main body can be better thought out. For example, I think it might be better to try to give the reader a very good understanding of classification and/or bounded label regression (e.g., Lemma 1 from the Appendix could be brought to the main body without its proof, and the authors could explain how it is used), and omit all but the top-level points on bounded label moment regression, rather than giving a sparse understanding of all three.
Technical Quality: 4
Clarity: 2
Questions for Authors: These are some examples of points I think are not clear from the main body which a reader interested in these problems might be able to understand better if the presentation were improved.
* What techniques from local DP / minimax theory does Theorem 1 use / how would it vary from e.g. the proof for the corresponding non-private lower bound? Saying you use techniques from certain areas/papers without specifying what they are is not particularly informative to a reader. (even giving a high-level summary of the technique would be ok)
* How do you decide the number of bins in your upper bound for Theorem 2? Presumably by optimizing some tradeoff between the number of examples per bin and having narrower bins where $\eta$ cannot vary too much within a bin, but I'm not confident about this from reading the main body.
* $\mathcal{X}$ is a vector space, what does "a bin of length $h$" mean for bins defined over a vector space? I believe it means something like each bin has radius at most $h$ in some appropriate norm, but not sure.
* In Theorem 3, the proof outline's first and second halves seem disconnected. The authors mention the model complexity needs to increase with $N$, how does the second half of the proof outline address this necessity?
Confidence: 3
Soundness: 4
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your valuable suggestion on improvement of presentation.
We agree that Lemma 1 in the appendix is important. In our revised paper, we will move this lemma to the main body of the paper. Moreover, we will also provide a better idea of the proof.
The outline of proving theorem 1, 2 and 3 can be written in another way. In particular, in the revised paper, we will add some discussions (for the contents, see the answers of question 1 and 4 below).
**Question 1: What techniques from local DP / minimax theory does Theorem 1 use / how would it vary from e.g. the proof for the corresponding non-private lower bound? Saying you use techniques from certain areas/papers without specifying what they are is not particularly informative to a reader. (even giving a high-level summary of the technique would be ok)**
(1) **Techniques from non-private minimax theory.** In a minimax lower bound, one takes the minimum over all methods and the maximum over all possible distributions. We can then narrow the whole set of distributions to a small set parameterized by a binary vector $v\in \\{-1,1\\}^m$. Similar analyses can be found in the proofs of Theorem 6 in [1], Theorem 3.5 in [2], and Theorem 2 in [3].
[1] K. Chaudhuri et al. "Rates of convergence for nearest neighbor classification." NeurIPS 2014.
[2] J. Audibert et al. "Fast learning rates for plug-in classifiers." Annals of Statistics, 2007.
[3] P. Zhao et al. "Minimax rate optimal adaptive nearest neighbor classification and regression." IEEE Transactions on Information Theory, 2021.
These discussions are also used to reply to reviewer kpNG.
(2) **Techniques from local DP.** We use Theorem 1 of [4] in step (c) of eq. (42).
[4] Minimax optimal procedures for locally private estimation. Journal of the American Statistical Association. 2018
(3) **New techniques that are not from either local DP or classical minimax theory.** The label local DP problem lies in between the full DP and non-private cases, so we cannot directly bound the divergence using existing techniques. In this paper, we design a new bound on the divergence between two joint distributions that have both private and non-private parts; see eq. (46) and eq. (47) in the paper.
Theorem 1 is relatively easier than later theorems. We will also provide more discussion about other theorems in the paper.
**Question 2: How do you decide the number of bins in your upper bound for Theorem 2? Presumably by optimizing some tradeoff b/t the number of examples per bin and having narrower bins where $\eta$ cannot vary too much within a bin, but I'm not confident about this from reading the main body.**
Yes, the number of bins is selected to achieve a bias-variance tradeoff: bins that are too large induce a strong bias, while bins that are too narrow make the random fluctuation of $\eta$ high.
For the optimal number of bins: in line 189, we have determined $h$. Note that the volume of each bin is $h^d$, thus $G=V/h^d$, with $V$ being the total volume of the support $\mathcal{X}$. Therefore $G\sim (N(\epsilon^2\wedge 1)/\ln K)^{d/(2\beta+d)}$.
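For readers who want the heuristic behind this choice, the balance can be sketched as follows (our reconstruction, consistent with the scaling stated above, not the paper's exact derivation): equating the within-bin bias $h^\beta$ with the per-bin stochastic fluctuation gives

```latex
h^\beta \;\asymp\; \sqrt{\frac{\ln K}{N h^d (\epsilon^2 \wedge 1)}}
\quad\Longrightarrow\quad
h \sim \left(\frac{\ln K}{N(\epsilon^2\wedge 1)}\right)^{\frac{1}{2\beta+d}},
\qquad
G = \frac{V}{h^d} \sim \left(\frac{N(\epsilon^2\wedge 1)}{\ln K}\right)^{\frac{d}{2\beta+d}}.
```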
**Question 3. $\mathcal{X}$ is a vector space, what does "a bin of length $h$" mean for bins defined over a vector space? I believe it means something like each bin has radius at most $h$ in some appropriate norm, but not sure.**
Here $\mathcal{X}$ is a compact set. Note that Assumption 1(c) requires $f(x)\geq c$, which means that $f$ is bounded away from zero. Since the pdf is normalized to 1, the support must be bounded. $\mathcal{X}$ is partitioned into many cells, and the side length of each cell is $h$. We will clarify this in our revised paper.
**Question 4. In Theorem 3, the proof outline's first and second halves seem disconnected. The authors mention the model complexity needs to increase with $N$, how does the second half of the proof outline address this necessity?**
We agree that the first and second halves are disconnected. The first half argues that the proof cannot simply use existing methods (i.e., the packing method), since the packing method is only suitable for problems with fixed complexity; therefore, this problem requires new techniques.
We think it would be better to move the first half later in the outline, in order to avoid confusion.
In general, we thank you again for these valuable suggestions on improving the readability of this paper. We will revise the paper accordingly.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. I am hopeful that the authors can incorporate the ideas presented in the rebuttal to improve the presentation. At this point, I agree with the sentiment of Reviewer kpNG - I do not oppose accepting the paper, though a round of reviews to help validate that the presentation has improved may be good. So I am keeping my score.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response. As the final version allows one additional page, there will be enough space for us to provide more detailed and informative outlines for each theorem. Please kindly let us know if you have further questions or comments. We will add all the new results and discussions to the final paper. Thank you very much! | Summary: This paper investigates the minimax risks of classification and regression (with both bounded and heavy-tailed noise) under label differential privacy (DP) in both central and local models.
Strengths: The paper provides a comprehensive analysis by considering both upper and lower bounds for the minimax risks.
It explores both central and local DP models and different settings, covering a broad spectrum of scenarios.
Weaknesses: The writing quality needs improvement to meet publication standards. Several sections are challenging to understand. Specific issues include:
(1) Around line 178, the output of the mechanism for classification is unclear. Why is it not a one-hot vector, or at least why is the L1 norm not equal to 1?
(2) Some notations are overused. For example, "c" refers to the lower bound of the density function in Assumption 1 and also denotes the classifier in line 186 and subsequent proofs.
(3) The description of the algorithm before Theorem 2 is vague and lacks clarity.
(4) The proofs in the appendix are hard to follow without explanations or discussions. For instance, how is $\phi$ defined in Equation (35), and what purpose does it serve? Why does the construction satisfy the assumptions? There seem to be some typos or missing elements in Equations (39) and (40).
Technical Quality: 2
Clarity: 2
Questions for Authors: The paper discusses non-private baselines. Are there citations provided for these baselines?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See weaknesses and questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review and valuable suggestions. We will update our paper in our revised version accordingly.
**1. Around line 178, the output of the mechanism for classification is unclear. Why is it not a one-hot vector, or at least why is the L1 norm not equal to 1?**
As long as the privacy requirement is satisfied, the output is not required to be a one-hot vector; forcing one-hot encoding would induce an unnecessary loss of information. If we use standard randomized response:
$P(M(x,y)(j)=1)=e^\epsilon/(e^\epsilon + K - 1)$ if $y=j$,
and
$P(M(x,y)(j)=1)=1/(e^\epsilon + K - 1)$ if $y\neq j$,
then as $K$ increases, the probability of retaining the original label decreases significantly. As a result, the classification risk also increases.
Moreover, this privatization mechanism is similar to RAPPOR [1].
Also, the $\ell_1$ norm is not necessarily 1 since we use sigmoid instead of softmax activation in the last layer.
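To make this concrete, here is a minimal sketch of the standard $K$-ary randomized response mechanism discussed above (our illustration only; the paper's actual mechanism is the RAPPOR-like bit-flipping scheme, not this one). It shows how the retention probability shrinks as $K$ grows:

```python
import math
import random

def rr_keep_prob(K, eps):
    """Probability that K-ary randomized response outputs the true label."""
    return math.exp(eps) / (math.exp(eps) + K - 1)

def randomized_response(y, K, eps, rng=random):
    """Standard K-ary randomized response: output the true label y with
    probability e^eps / (e^eps + K - 1), otherwise a uniformly random other
    label.  Output probabilities for any two inputs differ by at most a
    factor e^eps, so the mechanism is eps-locally label-DP."""
    if rng.random() < rr_keep_prob(K, eps):
        return y
    return rng.choice([k for k in range(K) if k != y])
```

For $\epsilon = 1$, the retention probability is about 0.73 for $K=2$ but drops below 0.03 for $K=100$, which is exactly the information loss described above.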
**2. Some notations are overused. For example, "c" refers to the lower bound of the density function in Assumption 1 and also denotes the classifier in line 186 and subsequent proofs.**
Thanks! There is indeed an overuse of notation $c$. We will use different notations in our revised paper.
**3. The description of the algorithm before Theorem 2 is vague and lacks clarity.**
Thanks for pointing these out. We thought that this part was easier than the later theorems, so due to space limitations we only described the algorithm concisely.
Here we explain the idea of eq.(11) again. The label can be one hot encoded to a vector first. However, this is not differentially private. To meet the privacy requirement, we randomly flip each element in the vector with probability $1/(e^{\epsilon / 2}+1)$. The vector after such random flipping satisfies $\epsilon$-local label DP.
Divide the support into $G$ bins. For each bin $B_l$, we count the number of samples (denoted $S_{lj}$) with $Z_i(j)=1$ and $X_i\in B_l$. This count reflects the conditional probability of the label being $j$ given that the feature $x\in B_l$. Therefore, the classification rule is to take the argmax of $S_{lj}$ over $j$.
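The two steps just described (bit-flip privatization of one-hot labels, then per-bin counting) can be sketched as follows; the one-dimensional support $[0,1)$, the equal-width bins, and all variable names are our assumptions for illustration:

```python
import math
import random

def privatize_label(y, K, eps, rng=random):
    """One-hot encode label y in {0,...,K-1}, then flip each bit
    independently with probability 1/(e^{eps/2} + 1).  Two one-hot vectors
    differ in two coordinates and each coordinate channel is (eps/2)-DP,
    so the released vector satisfies eps-local label DP."""
    p_flip = 1.0 / (math.exp(eps / 2.0) + 1.0)
    z = [1 if j == y else 0 for j in range(K)]
    return [1 - b if rng.random() < p_flip else b for b in z]

def binned_classifier(xs, zs, K, G):
    """Histogram rule on [0, 1): partition into G equal-width bins,
    accumulate S[l][j] = #{i : Z_i(j) = 1, X_i in bin l}, and classify a
    new point by argmax_j S[l][j] in its bin."""
    S = [[0] * K for _ in range(G)]
    for x, z in zip(xs, zs):
        l = min(int(x * G), G - 1)
        for j in range(K):
            S[l][j] += z[j]
    def predict(x):
        row = S[min(int(x * G), G - 1)]
        return max(range(K), key=row.__getitem__)
    return predict
```

With a moderate privacy budget, the per-bin counts for the true label dominate the noise-induced counts, so the argmax rule recovers the majority label of each bin.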
**4. The proofs in the appendix are hard to follow without explanations or discussions. For instance, how is $\phi$ defined in Equation (35), and what purpose does it serve? Why does the construction satisfy the assumptions? There seem to be some typos or missing elements in Equations (39) and (40).**
We thank the reviewer for reading the appendix carefully. Although we are sure about the correctness of the proof, we agree that there are indeed some points that require more explanation.
**(1) About $\phi$.** As stated in line 489 and eq. (35), $\phi$ can be any function satisfying the $\beta$-Hölder assumption, i.e. $|\phi(x)-\phi(x')|\leq L||x-x'||^\beta$, that is supported on $[-1/2,1/2]^d$ and satisfies $0\leq \phi\leq 1$. It is not necessary to specify the form of $\phi$ exactly, as any of the infinitely many functions satisfying these conditions suffices for our proof.
Our construction satisfies Assumption 1. For (a), i.e. the Holder assumption with parameter $\beta$, from the construction eq.(36), $|\eta_v(x)-\eta_v(x')|\leq |\phi(\frac{x-c_k}{h})-\phi(\frac{x'-c_k}{h})|h^\beta\leq L(||x-x'||/h)^\beta h^\beta\leq L||x-x'||^\beta$, satisfying Assumption 1(a).
For (b), from (36), $|\eta_v(x)|\leq h^\beta$. Therefore Assumption (b) holds for $t>h^\beta$. From eq.(37), $m\lesssim h^{\gamma \beta-d}$, we have $mh^d\lesssim h^{\gamma \beta}=(h^\beta)^\gamma$, indicating that Assumption (b) holds for $t=h^\beta$. The case with $t<h^\beta$ can be proved by the continuity of function $\phi$.
Our construction is common in minimax analysis in nonparametric statistics. We refer to the proof of Theorem 6 in [2], the proof of Theorem 3.5 in [3] and proof of Theorem 2 in [4] for other examples that use similar ideas.
**(2) About eq. (39) and eq. (40).** Thanks for pointing these out. In (39), $P(sign(\hat{\eta}(x)))$ should be $P(sign(\hat{\eta}(x))\neq \eta_v(x))$, i.e., the probability of making a wrong classification at $x$. The typo in (40) is similar.
We thank the reviewer for the suggestion. We will add more discussion in the revised version.
**Questions. The paper discusses non-private baselines. Are there citations provided for these baselines?**
The related works on non-private classification and regression are summarized in line 73-78 in the paper.
(1) Classification. We refer to [2], Theorem 4(b), which gives $O(n^{-\alpha(\beta+1)/(2\alpha+1)})$. The notations differ: $\alpha$ in [2] corresponds to $\beta/d$ in our paper, and $\beta$ in [2] corresponds to $\gamma$ in our paper. After this adjustment, the rate becomes $n^{-\beta(\gamma+1)/(2\beta+d)}$ in our notation.
(2) Regression. Without privacy constraints, there are no significant differences between bounded and unbounded label noise. We refer to Theorem 5 in [4]. [4] considers heavy-tailed distributions, while the comparison here focuses on the bounded case; thus $\beta$ in Theorem 5 of [4] can be regarded as infinite. $p$ in [4] corresponds to $\beta$ in our paper.
Nonparametric classification and regression have been widely studied, and the related works are far beyond those listed above. We refer to the book [5] for a complete overview of nonparametric classification and regression.
In our revised version, we will cite related works directly in the table.
# References
[1] P. Kairouz et al. "Discrete distribution estimation under local privacy." ICML 2016.
[2] K. Chaudhuri et al. "Rates of convergence for nearest neighbor classification." NeurIPS 2014.
[3] J. Audibert et al. "Fast learning rates for plug-in classifiers." Annals of Statistics, 2007.
[4] P. Zhao et al. "Minimax rate optimal adaptive nearest neighbor classification and regression." IEEE Transactions on Information Theory, 2021.
[5] A. Tsybakov. "Introduction to Nonparametric Estimation." 2009.
---
Rebuttal Comment 1.1:
Comment: Thank you to the authors for the detailed response. After reviewing other comments, it appears that many reviewers share concerns about the presentation, which makes the paper difficult to read, especially for those unfamiliar with the existing literature. There are discussions in the paper whose correctness I am unable to verify or appreciate due to my unfamiliarity with the existing literature and the poor presentation. If the consensus is that these flaws do not overshadow the merits of the content, I would not oppose its acceptance. However, I also support the idea of rejecting this version to allow for further refinement and improvement in a future submission.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply
Comment: Thanks for your prompt response. Our feeling is that, in general, the reviewers raise good comments, indicating that they understand our paper well. The presentation issues do not significantly affect readability. We will definitely fix these issues in our revision.
Rebuttal: We thank all the reviewers for your careful reading and valuable comments. We are encouraged that the reviewers are positive about the novelty and value of our work. We have also received detailed comments about definitions, notation, and presentation that can be further improved, some raised by more than one reviewer. Thank you very much for these comments! We will definitely revise the paper based on them.
Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds | Accept (poster) | Summary: This article presents a nice extension to the broad class of penalty method problems, particularly for the case when the objective functions have a Hölderian error bound. This assumption for this class of problems appears new, and a thorough coverage of interesting results are claimed.
Strengths: - New results presented for bilevel optimization for the Hölderian error bound setting
- New relationships between the minimizers of a penalty problem and the exact problem; this is a particularly mature field, and finding new results here is meritorious.
- Presentation of a standard prox-based method for solving these bilevel problems with their new theory, with complexity details explained.
Weaknesses: - The article does not specify which type of subdifferential is used. Since the authors also mention the applicability of their results to nonconvex objectives, this is a crucial point. Many classical results in optimization and convex analysis are proven using the *convex* subdifferential; however, even for differentiable nonconvex functions (e.g., $-x^2$), the convex subdifferential is empty, while other notions like the Clarke subdifferential are nonempty.
- Assumption 3.2 is a very strong one; while I agree with the author(s) that the prox of a sum is widely studied, the list of settings in which one can compute this is still quite limited. However, this is not a major drawback for me, since handling HEB bilevel optimization even in the L-smooth setting appears to be new.
- In my opinion, leaving lines 140--142 to stake claim to results for non-convex which are exclusively in the appendix seems improper. The appendix is technically "unreviewed" material; if the formal statement of the result cannot fit into this article, then I do not think it is worth including. I suggest that the authors either try to find a way to incorporate these results into the article (since they are indeed quite interesting), or save the results for another work.
- The graphs of the numerical experiments are too small for me to read, especially in print.
Minor comments:
- The fact that the argmin of a convex lsc function is closed and convex is a classical result which has been known for decades, so it seems odd to cite [Beck and Sabach, 2014] instead of a classical book, or the article where it was first proven.
- The article refers to functions $f_2$, $g_2$ well before they are introduced, which is confusing to the reader.
- Names "Pock, Tseng, etc." are not capitalized in the citations.
- Citation on lines 391-392 is missing volume number.
Technical Quality: 4
Clarity: 3
Questions for Authors: The classical article on penalty methods,
Dmitri P. Bertsekas, "Necessary and Sufficient conditions for a penalty method to be exact", Math. Program., vol. 9, pp. 87 - 99, 1975.
basically states that, if the penalty function is smooth near the solution set, then the penalty must go to infinity in order to achieve exact penalization; and, if the penalty function is nonsmooth near the solution set, then a finite penalty will suffice. My question is: are your results consistent with this article? In particular,
- In the setting where $G$ is smooth, can you confirm that $\gamma^*\to\infty$ as $\varepsilon\to 0$?
- On the other hand, if $G$ is nonsmooth at its solution set, does $\lim_{\varepsilon \to 0} \gamma^*$ approach a finite value? Violating this second requirement would not contradict the aforementioned article, but it would significantly strengthen your results.
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors do a good job of explaining the required assumptions for their results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the valuable insights you have provided on our paper.
# Weakness 1:
Thank you for pointing this out. Our definition of the subdifferential is derived from convex analysis. We use the subdifferential for a convex function $f$: $\partial f(x) = \\{ g : f(y) - f(x) \ge g^T(y - x) \text{ for all } y \\}$ (cf. [1, Section 4.2]). In our analysis, the usage of the subdifferential is limited to convex functions (see Assumption 2.1 and equation (8)). For nonconvex functions, we do not use Assumption 2.1 but instead directly assume the Lipschitz continuity of $F$ to simplify our analysis.
For the general nonconvex case, we can also use other subdifferentials for 'stationary points' as discussed in Section 4 of [2], such as B(ouligand)-stationary points or C(larke)-stationary points, using the B-subdifferential or C-subdifferential, respectively. However, it is often challenging to obtain a simple definition of stationarity for the original simple bilevel optimization problem when the lower-level problem is nonconvex. Therefore, in our future work, we will explore the relationship between stationary points of the original problem and the penalized problem using general subdifferentials in scenarios where the upper-level problem is nonconvex but the lower-level problem is convex.
# Weakness 2:
Thank you for recognizing the novelty of our work. The requirement for the nonsmooth term to be prox-friendly is widely adopted in optimization and is satisfied in many machine learning applications, such as $\ell_1$- and $\ell_2$-norms. It is also important to note that our assumption is more general than those found in existing literature.
Specifically, in the simple bilevel literature, when employing proximal mappings, researchers often consider the scenario where only one level contains a nonsmooth term (see, e.g., [3,4,5]). The proximal mapping of the sum $f_2 + \gamma g_2$ is then reduced to the proximal mapping of either $f_2$ or $g_2$, which is a more easily satisfied condition.
A similar case occurs when $f_2 = \beta g_2$ for some $\beta > 0$. For instance, when minimizing both the validation loss and training loss of the LASSO problem simultaneously, the nonsmooth terms of the upper- and lower-level objectives are identical, both being $\ell_1$-norms. In this situation, the proximal mapping of $f_2 + \gamma g_2$ corresponds to the proximal mapping of a $\lambda \ell_1$-norm, for some $\lambda>0$, which is straightforward to compute.
Generally, the prox-friendly properties of $f_2 + \gamma g_2$ are challenging to maintain. However, some studies have identified conditions under which the sum of two proximal mappings can be easily computed (e.g., [6,7]).
For example, in regression problems, it is common to encounter situations where the nonsmooth parts of the upper- and lower-level objectives are the indicator functions of an $\ell_1$-norm ball and an $\ell_2$-norm ball, respectively. The joint proximal mapping is then the projection onto the intersection of these two balls, and the method in [8] can be used to compute it. Another case involves one level having an $\ell_2$-norm regularizer and the other having an $\ell_1$-norm regularizer. The sum $\lambda_1\\|x\\|\_2^2 + \lambda_2\\|x\\|\_1$ ($\lambda_1, \lambda_2 > 0$) is known as the elastic net, for which the proximal mapping has a closed form.
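To make the elastic-net case above concrete, here is a minimal sketch of its closed-form proximal mapping (coordinate-wise soft-thresholding followed by a shrinkage factor); `prox_elastic_net` and its parameter names are our own illustrative choices, not taken from the paper:

```python
import numpy as np

def prox_elastic_net(v, t, lam1, lam2):
    """Proximal map of t * (lam1 * ||x||_2^2 + lam2 * ||x||_1).

    Minimizing, per coordinate, 0.5*(x - v)^2 + t*lam1*x^2 + t*lam2*|x|
    yields soft-thresholding by t*lam2 followed by division by 1 + 2*t*lam1.
    """
    soft = np.sign(v) * np.maximum(np.abs(v) - t * lam2, 0.0)
    return soft / (1.0 + 2.0 * t * lam1)

# Each entry is soft-thresholded by t*lam2 = 1, then shrunk by 1/(1 + 2*t*lam1) = 1/2.
print(prox_elastic_net(np.array([3.0, -0.5, 1.5]), t=1.0, lam1=0.5, lam2=1.0))
```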
# Weakness 3:
Thank you for recognizing the significance of our analysis of non-convex problems. Due to space constraints, we initially placed the results in the appendix. In our revision, we will restructure the paper to include the analysis of non-convex problems in the main paper.
# Weakness 4:
We apologize for any inconvenience. In our revision, we will adjust the size and format of the graphs to improve their readability and comprehensibility. Furthermore, we have included the experimental results of our main paper in 'Figure 4' and 'Figure 5' of the PDF in 'Author Rebuttal'.
- Minor comment 1: Thank you for pointing this out. The correct references should be [1, Proposition 1.2.2 and Page 49].
- Minor comment 2: We apologize for the omission of the introduction of $f_2$ and $g_2$ before their usage.
- Minor comments 3 and 4: Thank you for pointing out these typos. We will correct them in our revised paper.
# Questions:
Thanks for bringing Bertsekas [9] to our attention. We found that the assumptions and conditions in [9] are different from ours.
[9] considers a scalar penalty function $p: \mathbb{R}\to \mathbb{R}$ satisfying that $p$ is convex, $p(t)=0$ for all $t\le 0$, and $p(t)>0$ for all $t>0$. However, in our paper, we use $p(t)=\gamma \cdot t$, which violates the condition $p(t)=0$ for all $t\le 0$. Nevertheless, let us compare the results.
First, in Section 2, [9] requires an assumption (A.2) that problem $(P_{\text{val}})$
$$
\min F(x)\quad \text{s.t.} \quad G(x)-G^*\le0.
$$
has at least one optimal Lagrange multiplier. This assumption often fails (the multiplier is often $\infty$) in simple bilevel optimization as the Slater condition fails for $G(x)-G^*\le0$.
In the case of $\alpha=1$ in the HEB assumption, the multiplier exists. For $p(t)$ satisfying the conditions in [9], Proposition 1 in [9] says that a necessary condition for exact penalization is $\lim_{t\to0^+}\frac{p(t)}{t}\ge y$ and a sufficient condition is $\lim_{t\to0^+}\frac{p(t)}{t}>y$, where $y$ is an optimal Lagrange multiplier. Unless some multiplier is zero (this means $(P_{\text{val}})$ is essentially unconstrained), the latter happens only if $p$ is nonsmooth. We believe this is what motivated your question. Note, however, that the relevant smoothness is that of the penalty function $p$, not of $G$.
In our case, we always have $\lim_{t\to0^+}\frac{p(t)}{t}=\gamma>0$. But as A.2 and C.2 in [9] fail here, there is no contradiction between [9] and our results.
These observations indicate that the results of [9] do not apply to our paper.
We sincerely hope that the preceding discussion addresses your inquiries. We appreciate your constructive questions once again.
---
Rebuttal Comment 1.1:
Comment: I thank the authors very much for their clear and thorough responses to both my questions and concerns. I am quite happy to hear that the revised version will include the precise subdifferential definition used, as well as the nonconvex results. I have decided to increase my score.
---
Reply to Comment 1.1.1:
Comment: We deeply appreciate your acknowledgement and decision to raise the score.
---
Rebuttal 2:
Title: References
Comment: [1] Bertsekas, Dimitri, Angelia Nedic, and Asuman Ozdaglar. Convex analysis and optimization. Vol. 1. Athena Scientific (2003).
[2] Cui Y, Liu J, Pang J S. Nonconvex and nonsmooth approaches for affine chance-constrained stochastic programs[J]. Set-Valued and Variational Analysis, 2022, 30(3): 1149-1211.
[3] Doron, Lior, and Shimrit Shtern. Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems. Mathematical Programming 201.1 (2023): 521-558.
[4] Jiang R, Abolfazli N, Mokhtari A, Hamedani EY. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem. In International Conference on Artificial Intelligence and Statistics. PMLR (2023): pp. 10305-10323.
[5] Merchav, Roey, and Shoham Sabach. Convex Bi-level Optimization Problems with Nonsmooth Outer Objective Function. SIAM Journal on Optimization 33.4 (2023): 3114-3142.
[6] Yu, Yao-Liang. On decomposing the proximal map. Advances in neural information processing systems 26 (2013).
[7] Pustelnik, Nelly, and Laurent Condat. Proximity operator of a sum of functions; application to depth map estimation. IEEE Signal Processing Letters 24.12 (2017): 1827-1831.
[8] Liu, Hongying, Hao Wang, and Mengmeng Song. Projections onto the intersection of a one-norm ball or sphere and a two-norm ball or sphere. Journal of Optimization Theory and Applications 187 (2020): 520-534.
[9] Bertsekas, Dimitri P. Necessary and sufficient conditions for a penalty method to be exact. Mathematical programming 9.1 (1975): 87-99. | Summary: This paper deals with a simple bilevel (the lower-level problem has no dependence on the upper-level variable) optimization problem, where both the upper- and the lower-level objectives are convex and potentially non-smooth. To simplify the original bilevel problem the authors consider a penalty-based single-level reformulation and establish the relationship between the solutions of the two problems. An accelerated proximal gradient method (PB-APG) is proposed to solve the penalty problem. Under a Holderian error bound condition and certain additional assumptions about the convexity and smoothness of the objectives, convergence of the PB-APG is shown to approximate global optimal solutions of the original bilevel problem.
Strengths: * Differently from most of the works in literature, this paper deals with a problem in which the global solutions of the original bilevel problem are attainable. As a result, from a theoretical perspective, this is an interesting problem class. In addition, the proposed method is shown to (approximately) converge to these global solutions (of the original bilevel problem) and not to the solutions of some reformulation, as it is usually the case in other bilevel works.
* The authors provide a comprehensive and detailed summary of algorithms developed for solving simple bilevel optimization problems (in table 1).
Weaknesses: * From an applications perspective, the problem class considered here does not seem very interesting. The bilevel problem is simple, the objectives are (strongly) convex, a Holderian error bound holds and the non-smooth term is required to be proximal-friendly. This significantly restricts the number and complexity of potential applications. Indeed, the applications presented in the appendix seem to revolve around (regularized) linear least-squares problems and simple learning models like logistic regression.
* The proposed algorithm solves the reformulated minimization problem $P_{\Phi}$ rather than the original bilevel. This minimization problem looks fairly standard, and it seems that known algorithms are applied directly for its solution. Thus, it appears that there is no significant novelty in terms of algorithm design.
Technical Quality: 4
Clarity: 4
Questions for Authors: * Are Holderian error bounds satisfied in problems involving the training of complex learning models, like neural networks?
* In Table 1 some of the algorithms depend on parameters $\alpha$ and b, but the property to which these parameters correspond is not specified. Please clarify this.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: No negative societal impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate your valuable insights and thoughtful feedback regarding this paper.
# Weakness 1:
We should emphasize that the problem class considered here has received a lot of interest in the literature.
- Simplicity: Although simple bilevel problems may appear straightforward, they have numerous applications and have garnered significant attention in the optimization and machine learning communities. Examples include dictionary learning [1], lexicographic optimization [2], and lifelong learning [3].
- Convexity: Almost all existing papers on simple bilevel optimization (SBO) investigate cases where the objectives at both levels are convex (see Section 1.1 and Table 1 of our paper). Although some practical models do not meet this assumption, they often exhibit local convexity in the vicinity of their minimizers. Additionally, we have considered the nonconvex case (refer to l. 140-142 and Appendix D).
- Holderian error bound (HEB): In the literature on SBO, the HEB is a commonly utilized assumption [4,5]. In Appendix C of our paper, we demonstrate that this assumption applies to many practical problems.
- Prox-friendly property: The requirement for the non-smooth term to be prox-friendly is widely adopted in optimization and is satisfied in many machine learning applications, such as $\ell_1$- and $\ell_2$-norms. It is important to note that our assumption is more general than those found in the existing literature.
In the simple bilevel literature, when employing proximal mappings, researchers often consider the scenario where only one level contains a nonsmooth term (see, e.g., [3,4,5,6]). The proximal mapping of the sum $f_2 + \gamma g_2$ is then reduced to the proximal mapping of either $f_2$ or $g_2$, which is a more easily satisfied condition.
A similar case occurs when $f_2 = \beta g_2$ for some $\beta > 0$. For instance, when minimizing both the validation loss and training loss of the LASSO problem simultaneously, the nonsmooth terms of the upper- and lower-level objectives are identical, both being $\ell_1$-norms. In this situation, the proximal mapping of $f_2 + \gamma g_2$ corresponds to the proximal mapping of a $\lambda \ell_1$-norm, for some $\lambda > 0$, which is straightforward to compute.
Generally, the prox-friendly properties of $f_2 + \gamma g_2$ are challenging to maintain. However, some studies have identified conditions under which the sum of two proximal mappings can be easily computed (e.g., [7, 8]). For example, in regression problems, it is common to encounter situations where the nonsmooth parts of the upper- and lower-level objectives are the indicator functions of an $\ell_1$-norm ball and an $\ell_2$-norm ball, respectively. The joint proximal mapping is then the projection onto the intersection of these two balls, and the method in [9] can be used to compute it. Another case involves one level having an $\ell_2$-norm regularizer and the other having an $\ell_1$-norm regularizer. The sum $\lambda_1\\|x\\|\_2^2 + \lambda_2\\|x\\|\_1$ ($\lambda_1, \lambda_2 > 0$) is known as the elastic net, for which the proximal mapping has a closed form [7, Example 5].
- Applications: The applications in Appendix A are widely adopted in the literature [3,4,6,10]. Additionally, there are other applications of SBO, such as sparsity representation learning, fairness regularization, and lexicographic optimization [2]. Following your suggestion, we conducted three additional, more practical, and complex simulations. The problem settings and experimental results are detailed in the global 'Author Rebuttal'.
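As a generic (if slower) alternative to the specialized projection method of [9] mentioned above, the joint projection onto the intersection of an $\ell_1$-ball and an $\ell_2$-ball can be sketched with Dykstra's alternating-projection algorithm; the names below (`dykstra`, `proj_l1_ball`, `proj_l2_ball`) are our own illustrative choices:

```python
import numpy as np

def proj_l2_ball(v, r):
    """Euclidean projection onto {x : ||x||_2 <= r}."""
    n = np.linalg.norm(v)
    return v if n <= r else v * (r / n)

def proj_l1_ball(v, r):
    """Euclidean projection onto {x : ||x||_1 <= r} (sort-based method)."""
    if np.abs(v).sum() <= r:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]          # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, v.size + 1) > css - r)[0][-1]
    tau = (css[rho] - r) / (rho + 1)      # soft-threshold level
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def dykstra(v, r1, r2, iters=100):
    """Dykstra's algorithm: projection onto the intersection of the two balls."""
    x = v.astype(float)
    p = np.zeros_like(x)
    q = np.zeros_like(x)
    for _ in range(iters):
        y = proj_l1_ball(x + p, r1)
        p = x + p - y
        x = proj_l2_ball(y + q, r2)
        q = y + q - x
    return x

print(dykstra(np.array([2.0, 0.0]), r1=1.0, r2=1.0))  # approaches [1. 0.]
```

Unlike plain alternating projections, the correction terms `p` and `q` make the iterates converge to the exact Euclidean projection onto the intersection, which is what the joint proximal mapping requires here.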
# Weakness 2:
Utilizing the penalization method to solve the original SBO problem is a novel approach. Among the various studies on SBO (refer to Section 1.1 and Appendix B of our paper), we are the first to employ the penalization method. While Tikhonov regularization appears similar to our framework, its origin differs. Implementing Tikhonov regularization requires the "slow condition" ($\lim_{k\to\infty} \sigma_k = 0, \sum_{k=0}^{\infty}\sigma_k=+\infty$), necessitating iterative solutions for each $\sigma_k$. In contrast, our method only requires solving a single optimization problem for a given $\gamma$, which has clear theoretical significance. Theoretically, we establish the relationship between the approximate solutions of the original bilevel problem and those of the reformulated single-level problem $P_{\Phi}$ with a specific $\gamma$. This constitutes the first theoretical result linking the original bilevel problem to the penalty problem $P_{\Phi}$, with an optimal non-asymptotic complexity result.
Moreover, since $\gamma$ relies on many parameters, as demonstrated in Section 3.1.2 of our paper, determining $\gamma$ can be challenging. To address this, we propose an adaptive version of PB-APG that updates $\gamma$ dynamically and invokes PB-APG with varying $\gamma$ and solution accuracies.
# Question 1:
Table 2 of our paper provides examples and references [4,5] about the HEB, which is widely used in learning models. For instance, the piecewise maximum function and least-squares loss function can be employed to train classification and regression tasks, respectively. For problems involving the complex learning models of neural networks, although these models are generally nonconvex, they may have local convex structures and also KL (or PL) conditions, which are equivalent to the HEB in the convex case [11]. Indeed, [12, Proposition 2] demonstrates that many applications in neural networks (e.g., DNNs) satisfy the KL inequality. We will explore this in our future work.
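As a concrete instance of the HEB mentioned above (a standard fact, stated here for illustration rather than quoted from Table 2): a $\mu$-strongly convex lower-level objective $G$ satisfies the bound with exponent $\alpha = 2$:

```latex
% Strong convexity of G with modulus \mu > 0 gives quadratic growth:
G(x) - G^{*} \;\ge\; \frac{\mu}{2}\,\mathrm{dist}(x, X_{\mathrm{opt}})^{2},
% which rearranges to the Holderian error bound
\mathrm{dist}(x, X_{\mathrm{opt}})^{2} \;\le\; \frac{2}{\mu}\,\bigl(G(x) - G^{*}\bigr),
\qquad \text{i.e., } \alpha = 2,\ \rho = \tfrac{2}{\mu}.
```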
# Question 2:
We apologize for omitting the definitions of some parameters in Table 1. For IR-CG [13], the range of $p$ is $p\in(0,1)$. For our methods, the ranges of $\alpha$ and $\beta$ are $\alpha \geq 1$ and $\beta > 0$, respectively. We will clarify this in our revision.
---
Rebuttal Comment 1.1:
Title: Comment from Reviewer fmaf
Comment: I would like to thank the authors for their responses. I am raising my score to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our response, and for raising the score.
---
Rebuttal 2:
Title: References
Comment: [1] Beck, Amir, and Shoham Sabach. A first order method for finding minimal norm-like solutions of convex optimization problems. Mathematical Programming 147.1 (2014): 25-46.
[2] Gong, Chengyue, and Xingchao Liu. Bi-objective trade-off with dynamic barrier gradient descent. NeurIPS 2021 (2021).
[3] Jiang R, Abolfazli N, Mokhtari A, Hamedani EY. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem. In International Conference on Artificial Intelligence and Statistics. PMLR (2023): pp. 10305-10323.
[4] Doron, Lior, and Shimrit Shtern. Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems. Mathematical Programming 201.1 (2023): 521-558.
[5] Sepideh Samadi, Daniel Burbano, and Farzad Yousefian. Achieving optimal complexity guarantees for a class of bilevel convex optimization problems. arXiv preprint arXiv:2310.12247, 2023
[6] Merchav, Roey, and Shoham Sabach. Convex Bi-level Optimization Problems with Nonsmooth Outer Objective Function. SIAM Journal on Optimization 33.4 (2023): 3114-3142.
[7] Yu, Yao-Liang. On decomposing the proximal map. Advances in neural information processing systems 26 (2013).
[8] Pustelnik, Nelly, and Laurent Condat. Proximity operator of a sum of functions; application to depth map estimation. IEEE Signal Processing Letters 24.12 (2017): 1827-1831.
[9] Liu, Hongying, Hao Wang, and Mengmeng Song. Projections onto the intersection of a one-norm ball or sphere and a two-norm ball or sphere. Journal of Optimization Theory and Applications 187 (2020): 520-534.
[10] Mostafa Amini and Farzad Yousefian. An iterative regularized incremental projected subgradient method for a class of bilevel optimization problems. In 2019 American Control Conference (ACC), pages 4069–4074. IEEE, 2019.
[11] Bolte, Jerome, et al. From error bounds to the complexity of first-order descent methods for convex functions. Mathematical Programming 165 (2017): 471-507.
[12] Zeng, Jinshan, et al. Global convergence of block coordinate descent in deep learning. International conference on machine learning. PMLR, 2019.
[13] Khanh-Hung Giang-Tran, Nam Ho-Nguyen, and Dabeen Lee. Projection-free methods for solving convex bilevel optimization problems. arXiv 2023. | Summary: This work proposes a penalty based algorithm for simple bilevel optimization problems. The paper studies the relationship between the solutions of the penalized problem and the original bilevel problem. It extends the existing results on general bilevel optimization problem, which are established under PL condition, to the more generic Hölderian error bounds. The results are also extended to non-smooth upper and lower-level functions.
Strengths: 1) The paper is well written and easy to follow.
2) The generalization of the existing results on PL condition to Hölderian error bound is interesting.
3) The extended results on non-smooth objectives are good contributions.
Weaknesses: 1. Missing literature comparison/review: The method proposed in this paper is a penalty-based method. There is a large body of work on penalty-based bilevel optimization methods, yet the related work section in the main text contains no detailed review of or comparison with previous penalty-based bilevel optimization algorithms. It might be beneficial if the authors could discuss the advantages of the proposed method compared with the existing ones.
2. In the current state of the paper, the experiments seem to be purely synthetic and might be a little too toy. Though the paper is majorly a theoretical one, it would benefit to have more practical experiments. It is also of great importance to find meaningful applications for the simple bilevel optimization problem.
3. The method proposed in this work falls into the penalty-based methods. It might be interesting if the authors include a comparison with existing penalty-based bilevel optimization methods in, e.g.,
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The proposed method is for convex simple bilevel optimization problems, which might be restrictive. Can the author explain how the algorithm can be extended to the non-convex case?
2. How does this algorithm compare to the penalty-based algorithms for general bilevel optimization problems? Are there any advantages demonstrated by this algorithm as compared to the generic ones?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: no potential social impact
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Your expert insights are crucial to our research, and we greatly appreciate your guidance and suggestions.
# Weaknesses 1:
We should point out that for simple bilevel optimization (SBO), there is no existing penalty method, although there exists a closely related method that is based on Tikhonov regularization (please refer to l.44-65 in our paper). Your point is correct for general bilevel optimization methods, as referenced in [1, 2, 3]. However, the general bilevel optimization model, with the form $\min\ F(x,y)$ $\text{s.t.}$ $x \in \text{argmin}_x G(x,y)$, is different from SBO, as the former is generally nonconvex while the latter is convex. Indeed, our penalty method is partially motivated by the penalty method for general bilevel optimization in [1], but our method exploits many properties arising from the structure of SBO. Nonetheless, following your suggestion, we will include a comparison of our method with existing general bilevel penalty-based methods in our revised paper.
# Weaknesses 2:
Thanks for the comment. In fact, these experiments are widely adopted in the SBO literature [10, 11, 12, 13]. We agree that more practical experiments would be more persuasive. Following your suggestion, we conducted three more practical and complex simulations. The problem settings and experimental results are presented in the global 'Author Rebuttal'. Our experimental results show that our methods exhibit faster convergence rates and achieve the smallest optimality gaps compared to the baseline methods for both upper- and lower-level objectives. This finding confirms our theoretical predictions in the context of this practical simulation. Moreover, the third set of experimental results (fair classification problem) also suggests that our methods may perform well even in non-convex cases.
# Weaknesses 3:
Please refer to the answer to Weakness 1.
# Question 1:
We actually considered the nonconvex case in l.140-142 of our main paper. However, due to the page limit, we put our discussion in Appendix D.1 (please refer to "full\_paper.pdf" in the supplementary zip), where we extend the penalization framework to the setting where $F$ is non-convex under the assumption that $F$ is Lipschitz (or locally Lipschitz) continuous.
# Question 2:
- For a comparison with penalty-based methods for general bilevel optimization problems, please refer to 'Weaknesses 1'. Our method explores the structure of SBO and specifies the penalty parameter \\(\gamma\\) to establish a relationship between the original SBO and the penalty formulation.
- For general bilevel optimization problems, there have been recent results on convergence guarantees [1,14,15,16,17]. Among those, the one most related to ours is [1]. [1] investigates the case when the upper-level objective is nonconvex and gives convergence results under additional assumptions [1, Theorems 3 and 4]. However, as the general bilevel optimization problem is nonconvex, the algorithms in the literature often converge to weak stationary points, while our method for SBO converges to a global optimal solution.
---
Rebuttal 2:
Title: References
Comment: [1] Han Shen and Tianyi Chen. On penalty-based bilevel gradient descent method. ICML 2023.
[2] Lu, Zhaosong, and Sanyou Mei. First-order penalty methods for bilevel optimization. SIAM JOPT 2024.
[3] Kwon, Jeongyeol, Dohyun Kwon, Steve Wright, and Robert Nowak. On penalty methods for nonconvex bilevel optimization and first-order stochastic approximation. arxiv 2023.
[4] Phillips, David L. A technique for the numerical solution of certain integral equations of the first kind. JACM 1962.
[5] Doron, Lior, and Shimrit Shtern. Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems. MP 2023.
[6] Jiang, R., Abolfazli, N., Mokhtari, A. and Hamedani, E.Y. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem. AISTATS 2023.
[7] Gong C, Liu X. Bi-objective trade-off with dynamic barrier gradient descent[J]. NeurIPS 2021, 2021.
[8] Cao J, Jiang R, Abolfazli N, et al. Projection-free methods for stochastic simple bilevel optimization with convex lower-level problem[J]. Advances in Neural Information Processing Systems, 2024, 36.
[9] Cui Y, Liu J, Pang J S. Nonconvex and nonsmooth approaches for affine chance-constrained stochastic programs[J]. Set-Valued and Variational Analysis, 2022, 30(3): 1149-1211.
[10] Mostafa Amini and Farzad Yousefian. An iterative regularized incremental projected subgradient method for a class of bilevel optimization problems. In 2019 American Control Conference (ACC), pages 4069–4074. IEEE, 2019.
[11] Ruichen Jiang, Nazanin Abolfazli, Aryan Mokhtari, and Erfan Yazdandoost Hamedani. A condi- tional gradient-based method for simple bilevel optimization with convex lower-level problem. In International Conference on Artificial Intelligence and Statistics, pages 10305–10323. PMLR, 2023.
[12] Sepideh Samadi, Daniel Burbano, and Farzad Yousefian. Achieving optimal complexity guarantees for a class of bilevel convex optimization problems. arXiv preprint arXiv:2310.12247, 2023.
[13] Lingqing Shen, Nam Ho-Nguyen, and Fatma Kılınç-Karzan. An online convex optimization-based framework for convex bilevel optimization. Mathematical Programming, 198(2):1519–1582, 2023.
[14] Risheng Liu, Yaohua Liu, Shangzhi Zeng, and Jin Zhang. Towards gradient-based bilevel opti- mization with non-convex followers and beyond. Advances in Neural Information Processing Systems, 34:8662–8675, 2021.
[15] Daouda Sow, Kaiyi Ji, Ziwei Guan, and Yingbin Liang. A primal-dual approach to bilevel optimization with multiple inner minima. arXiv preprint arXiv:2203.01123, 2022.
[16] Lesi Chen, Jing Xu, and Jingzhao Zhang. On bilevel optimization without lower-level strong convexity. arXiv preprint arXiv:2301.00712, 2023.
[17] Feihu Huang. On momentum-based gradient methods for bilevel optimization with nonconvex lower-level. arXiv preprint arXiv:2303.03944, 2023. | Summary: The authors propose algorithms to solve simple bilevel optimization problems of the form $\min_x F(x)$ s.t. $x$ minimizes $G$, in the case where $F$ and $G$ are composite convex functions (i.e. some of 2 convex functions, one of which is also smooth), and assuming Lipschitz continuity of $F$ on the set of minimizers of $G$ and $G$ verifying a Holder error bound.
The proposed algorithms actually tackle a simple minimization problem, solving a relaxed version of the original problem that authors previously proved to be equivalent.
Strengths: The link between the solutions of the original problem and the ones of the relaxed one is interesting and allows considering a simpler problem to tackle.
Weaknesses: I am not very familiar with this specific piece of literature, so I cannot pretend to be sure of the novelty of the results.
In particular, in line 47, the authors say « Tikhonov regularization most related to $P_\gamma$ », but I do not see any difference between the two formulations. Can the authors expand on this? What is exactly done in Tikhonov's paper? And what is new here?
Minors:
- Usually, a proximal-based algorithm can be applied to the particular case where the non-smooth part of the problem is null, then the prox map is simply the identity map, and all the guarantees hold. But in this paper's setting, the non-smooth part is essential and cannot be considered null as a particular case. Indeed, otherwise, $G$ would be smooth, therefore upper bounded by a quadratic function, and $G$ is also assumed to grow faster than $\|x-\bar{x}\|^{\alpha}$, which cannot happen (if $\alpha<2$).
- A natural oracle assumption in the case where $F$ and $G$ contain each a non-smooth part would be to assume access to the proximal map associated with each. However, authors assume access to the proximal of all the linear combinations of the 2 non-smooth parts, and this choice seems to be only motivated by the reformulation of the problem. Doing this makes the derivation of PB-APG trivial once Thm 2.5. has been proven. But how could we tackle the same problem with the sole knowledge of the 2 prox maps independently?
Technical Quality: 3
Clarity: 3
Questions for Authors: - l.32: Why exclude the particular case where $X_{opt}$ is a singleton? This assumption is not used, therefore if we do not know in advance whether $X_{opt}$ is a singleton or not, one can still run this paper's algorithms, and guarantees will follow.
- l.39: Even if what $G^*$ is seems clear, I think it is worth mentioning it before using it in line 35.
- l.50: What about the upper problem?
- l.50: « where b in (0, 0.5) ». The authors should be more specific here. Is it true for one specific b in the interval, depending on some parameters of the problem? Is it true for any (one proposed algorithm for each value of b)?
- l.55: same remark
- l.61: It would be more convenient if the authors keep a consistent way of displaying complexity.
- l.68: What is g_2? Not defined yet at this stage. Moreover, note that the authors could not look at what happens in their case when $g_2=0$, as discussed in the "weakness" section.
- l.101: What is f_2? Again not defined. And I think here that the authors meant $F$.
- l.193: typo: « an » is repeated.
- l.205: « while samadi … » If they consider the smooth case only, studying $\alpha<2$ would be pointless because it is empty.
- It would be great if authors could add explicit stopping criteria in their algorithms. Indeed, if $F^*$ and $G^*$ are not known, I guess we stop PB-APG after K iterations where K is set according to Th3.3. But then, PB-APG is not only a function of $\phi$, $\psi$, $L_{f_1}$, $L_{g_1}, x_0, \epsilon$, but also $\ell_F$, $\alpha$ and $\beta$. Which is important to actually understand the iterates of aPB-APG.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: No limitation
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for providing these valuable suggestions.
# Weakness:
Solodov [1] applied Tikhonov regularization (TR) [2] to solve simple bilevel optimization problems. Although the formulation of TR (l.47-48 in our paper) is similar to our method (l.36-37 in our paper), their origins and theories differ. Implementing TR requires the "slow condition" ($\lim_{k\to\infty} \sigma_k=0,\ \sum_{k=0}^{\infty}\sigma_k=+\infty$) [2], while we do not have this constraint; our method provides non-asymptotic convergence rates, whereas TR only provides asymptotic results on either the upper-level or the lower-level problem (l.537 in Appendix B); and we give an explanation of $\gamma$ while they do not.
- Minor weakness 1: Thank you for pointing this out. The $\alpha$ in the error bound assumption is not arbitrary; it is determined by the lower-level problem. Whenever there exists some $\alpha$ such that $\text{dist}(x,X_{\text{opt}})^\alpha\le\rho p(x)$, the subsequent analysis goes through. For your example, suppose the function $G$ is $L$-smooth and $x^*$ is unique; then $G(x)-G^*\le \frac{L}{2}\|x-x^*\|^2$. In this case, there is no contradiction if $G$ enjoys quadratic growth ($\alpha=2$), i.e., there exists $\mu>0$ such that $G(x)-G^*\ge\frac{\mu}{2}\|x-x^*\|^2$. But in the $L$-smooth case, we cannot find any $\alpha<2$ satisfying our assumption, as you observed: combining the two inequalities would give $\|x-x^*\|^{\alpha-2}\le\rho L/2$, which fails as $x\to x^*$.
- Minor weakness 2: You are right that Assumption 2.2 is strong in some scenarios. However, we should emphasize that it is more general than existing literature. In the simple bilevel problem, when using proximal mappings, people often only consider specific cases. E.g., [3] explores 'norm-like' upper functions, [4,5] require both $F$ and $G$ to be smooth, and [6] assumes that the upper-level objective is smooth and strongly convex. Additionally, although [7] assumes the nonsmooth terms of $F$ and $G$ are prox-friendly, respectively, their complexity, $O(\max\{1/\epsilon_F^{\frac{1}{1-a}},1/\epsilon_G^{\frac{1}{a}}\})$ with $a\in(0.5,1)$, is significantly worse than ours.
Furthermore, in practice, the proximal mapping of the sum can be reduced to the proximal mapping of either $f_2$ or $g_2$ alone. One example is when $f_2=\beta g_2$ for some $\beta>0$. E.g., when we want to minimize the validation loss and the training loss of the LASSO problem simultaneously, the non-smooth terms of the upper- and lower-level objectives are identical, both being $\ell_1$-norms. Then the proximal mapping of $f_2+\gamma g_2$ is the same as the proximal mapping of $\lambda\|\cdot\|_1$ for some $\lambda>0$, which is easy to obtain.
In addition, there also exist studies that investigate the sum of two proximal mappings ([8] decomposes it into the sum of individual proximal maps; [9] computes the sum of proximal mappings efficiently). For example, in regression problems we often encounter the situation where the nonsmooth parts of the upper- and lower-level objectives are the indicator functions of an $\ell_1$-norm ball and an $\ell_2$-norm ball, respectively. The joint proximal mapping is then the projection onto the intersection of these two balls, and [10] shows how to compute it. Another case is when one level has a squared $\ell_2$-norm regularizer and the other an $\ell_1$-norm regularizer. The sum $\lambda_1\|x\|_2^2+\lambda_2\|x\|_1$ ($\lambda_1,\lambda_2>0$) is known as the elastic net, whose proximal mapping admits a closed form [8, Example 5].
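As a concrete illustration of the elastic-net case above, here is a minimal NumPy sketch (ours, not code from the paper) of the soft-thresholding operator and the resulting closed-form proximal map, using the componentwise identity $\mathrm{prox}_{t(\lambda_1\|\cdot\|_2^2+\lambda_2\|\cdot\|_1)}(v)=\mathrm{soft}(v,t\lambda_2)/(1+2t\lambda_1)$:

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal map of tau * ||.||_1 (componentwise soft-thresholding)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_elastic_net(v, t, lam1, lam2):
    # closed-form prox of t * (lam1 * ||x||_2^2 + lam2 * ||x||_1):
    # soft-threshold first, then shrink by 1 / (1 + 2 * t * lam1)
    return soft_threshold(v, t * lam2) / (1.0 + 2.0 * t * lam1)
```

For instance, with $t=1$, $\lambda_1=0.5$, $\lambda_2=1$, the point $v=3$ maps to $1$, which indeed zeroes the subgradient of $\frac12(x-3)^2+\frac12 x^2+|x|$.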
# Questions:
- l.32: Yes, you are correct. The algorithm presented in our paper can still be executed without knowing whether $ X_{\text{opt}}$ is a singleton. We will fix this in our revision.
- l.39: Sorry for the confusion. We will introduce $G^*$ before using it in our revised version.
- l.50: [11] guarantees asymptotic convergence for the upper-level problem as shown in Appendix B in our paper. We will clarify this in our revision. For the parameter $b\in(0,0.5)$, it does not depend on any parameter of the problem and is arbitrary in $(0,0.5)$, as stated in [11, Theorem 2]. Thank you for pointing out the ambiguity.
- l.55: According to [12, Theorem 3.3 and Corollary 3.4], parameter $b$ is also arbitrary in $(0,0.5)$. It is not related to any parameter of functions but is instead related to the adaptive choice of step size $\eta_k$ in Algorithm 2.1 and 3.1, where $\eta_k=\frac{\eta_0}{(k+1)^b}$.
- l.61: Thanks for your suggestion. In l.61, the complexity '$O(\epsilon^{-0.5})$' is equivalent to the rate '$O(1/K^{2})$' if we use the total number of iterations $K$. We will modify it in our revised paper.
- l.68: Sorry, we should define $g_2$ before its first use; alternatively, the sentence "when $F$ is strongly convex and $g_2 = 0$" should be modified to "when $F$ is strongly convex and $G$ is smooth". Moreover, for $g_2=0$, please refer to Minor weakness 1.
- l.101 and 193: Yes, we mean $F$ instead of $f_2$. Thanks for pointing out these typos.
- l.205: If they consider the smooth case only, it is still possible for $g_1$ to satisfy the error bound with $\alpha \ge 2$; please refer to Minor weakness 1.
- Thanks for your constructive suggestions to our algorithms. For the stopping criterion and $K$:
- In Algorithm 1, we can use the following stopping criterion: we stop the loop of l.3-4 once $\frac{2(L_{f_1}+\gamma L_{g_1})R^2}{(k+1)^2} \le\epsilon$, where $R$ is such that $\|x_0-x^*\|\le R$.
- In Thm 3.3, the complete expression of $K$ is $K=\sqrt{\frac{2(L_{f_1}+\gamma L_{g_1})}{\epsilon}}\|x_0-x^*\|-1$.
- In both the above criterion and $K$, parameter $\gamma$ is chosen as:
- If $\alpha>1$, $ \gamma=\rho l_F^{\alpha}(\alpha-1)^{\alpha-1}\alpha^{-\alpha}\epsilon^{1-\alpha}+2l_F^{\beta}\epsilon^{1-\beta}$;
- If $\alpha=1$, $ \gamma=\rho l_F+l_F^{\beta}\epsilon ^{1-\beta}$.
- In Algorithms 2,3,4, and the corresponding theorems, we can also provide criteria and the value of $K$. We will include these in the revision.
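Putting the above together, the budget $K$ and the penalty weight $\gamma$ from Thm 3.3 could be computed as in this small sketch (variable names are ours; $R$ is any bound with $\|x_0-x^*\|\le R$):

```python
import math

def choose_gamma(eps, rho, l_F, alpha, beta):
    # penalty weight gamma as stated in the rebuttal, split by alpha
    if alpha > 1:
        return (rho * l_F**alpha * (alpha - 1)**(alpha - 1) * alpha**(-alpha)
                * eps**(1 - alpha) + 2 * l_F**beta * eps**(1 - beta))
    return rho * l_F + l_F**beta * eps**(1 - beta)  # alpha == 1 case

def iteration_budget(eps, L_f1, L_g1, gamma, R):
    # K from Thm 3.3: stop once 2*(L_f1 + gamma*L_g1)*R^2 / (k+1)^2 <= eps
    return max(0, math.ceil(math.sqrt(2 * (L_f1 + gamma * L_g1) / eps) * R - 1))
```

The `iteration_budget` value is simply the smallest integer $k$ for which the stopping criterion in Algorithm 1 above holds.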
---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: I thank the authors for responding to my minor concerns.
Overall, the paper is well-written, sound and, I think, interesting for the Neurips community.
If the authors process the discussed modifications/clarifications, then I am in favor of the acceptance of this paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging our paper and response, and for the positive score.
---
Rebuttal 2:
Title: References
Comment: [1] Mikhail Solodov. An explicit descent method for bilevel convex optimization. Journal of Convex Analysis, 14(2):227–237, 2007.
[2] Andre Nikolaevich Tikhonov and V. I. A. K. Arsenin. Solutions of ill-posed problems. Wiley, 1977.
[3] Doron, Lior, and Shimrit Shtern. Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems. Mathematical Programming 201.1 (2023): 521-558.
[4] Jiang R, Abolfazli N, Mokhtari A, Hamedani EY. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem. In International Conference on Artificial Intelligence and Statistics. PMLR (2023): pp. 10305-10323.
[5] Khanh-Hung Giang-Tran, Nam Ho-Nguyen, and Dabeen Lee. Projection-free methods for solving convex bilevel optimization problems. arXiv preprint arXiv:2311.09738, 2023.
[6] Sabach, Shoham, and Shimrit Shtern. A first order method for solving convex bilevel optimization problems. SIAM Journal on Optimization 27.2 (2017): 640-660.
[7] Merchav, Roey, and Shoham Sabach. Convex Bi-level Optimization Problems with Nonsmooth Outer Objective Function. SIAM Journal on Optimization 33.4 (2023): 3114-3142.
[8] Yu, Yao-Liang. On decomposing the proximal map. Advances in neural information processing systems 26 (2013).
[9] Pustelnik, Nelly, and Laurent Condat. Proximity operator of a sum of functions; application to depth map estimation. IEEE Signal Processing Letters 24.12 (2017): 1827-1831.
[10] Liu, Hongying, Hao Wang, and Mengmeng Song. Projections onto the intersection of a one-norm ball or sphere and a two-norm ball or sphere. Journal of Optimization Theory and Applications 187 (2020): 520-534.
[11] Mostafa Amini and Farzad Yousefian. An iterative regularized incremental projected subgradient method for a class of bilevel optimization problems. In 2019 American Control Conference (ACC), pages 4069–4074. IEEE, 2019.
[12] Kaushik, Harshal D., and Farzad Yousefian. A method with convergence rates for optimization problems with variational inequality constraints. SIAM Journal on Optimization 31.3 (2021): 2171-2198. | Rebuttal 1:
Rebuttal: In this global 'Author Rebuttal', we upload our additional experimental results, as well as clearer and more readable versions of the experimental results of Sections 4.1 and 4.2 of our paper.
# Linear regression problem
The first experiment is the sparse linear regression problem on the same data ($3,000$ instances of 'YearPredictionMSD') as used in Section 4.2 of our paper. We allocate $60\%$ of the data to the training set $(A_{tr},b_{tr})$, $20\%$ to the validation set $(A_{val},b_{val})$, and the rest to the test set $(A_{test},b_{test})$, with $\frac{1}{2}\|A_{test}x - b_{test}\|^2$ as the test error. The sparse linear regression problem has the following form:
$$
\min_{x}\frac{1}{2}\|A_{val}x-b_{val}\|^2 \quad \text{s.t.}\quad x\in\operatorname{argmin}_z ~\frac{1}{2}\|A_{tr}z-b_{tr}\|^2+\|z\|_1.
$$
For the experimental results, please refer to 'Figure 1' in the PDF.
# Integral equation problem
In the second experiment, we explore the regularizing effect of the minimal-norm solution on ill-conditioned inverse problems arising from the discretization of Fredholm integral equations of the first kind [1]. Following the same setting as [2], we solve the following problem:
$$
\min_{x} x^T Q x\quad \text{s.t.}\quad x\in\operatorname{argmin}_{z\ge0}\frac{1}{2}\|Az-b\|^2,
$$
where $[A,b_T,x_T] = phillips(100)$, $b = b_T + 0.2w$ with $w$ sampled from a standard normal distribution, and $Q = L^T L+I$ with $L=get\_l(100)$ and $I$ the identity matrix. The functions '$phillips$' and '$get\_l$' come from the ''Regularization Tools'' MATLAB package [3].
For the experimental results, please refer to 'Figure 2' in the PDF.
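The '$phillips$' and '$get\_l$' routines are MATLAB-specific and are not reproduced here, but the lower-level problem $\min_{z\ge0}\frac{1}{2}\|Az-b\|^2$ can be handled by projected gradient descent; a minimal sketch (ours, with a generic matrix standing in for the Phillips test problem, not the code used in the experiments):

```python
import numpy as np

def projected_gradient_nnls(A, b, steps=500):
    # projected gradient for min_{z >= 0} 0.5 * ||Az - b||^2,
    # with the classical step size 1 / ||A||_2^2
    L = np.linalg.norm(A, 2) ** 2
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        z = np.maximum(z - (A.T @ (A @ z - b)) / L, 0.0)
    return z
```

Each iteration takes a gradient step on the least-squares term and projects back onto the nonnegative orthant.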
# Fair classification
Moreover, in the third experiment, we address the fair classification problem as described in [4], which features a non-convex upper-level objective. We randomly sample $3,000$ instances as the training set and $2,000$ as the test set. The domain of the lower-level objective is set to $\{x:\|x\|_1\le10\}$.
For the experimental results, please refer to 'Figure 3' in the PDF. Given the non-convex nature of the upper-level objective, this experiment demonstrates that our algorithm can effectively handle some practical non-convex scenarios.
# Experimental results of our main paper
Here, we present the experimental results of our main paper (Sections 4.1 and 4.2) after resizing and formatting. For the detailed results of Sections 4.1 (Logistic regression problem (LRP)) and 4.2 (Least squares regression problem (LSRP)), please refer to 'Figure 4' and 'Figure 5' in the PDF, respectively.
# References
[1] Phillips, David L. A technique for the numerical solution of certain integral equations of the first kind. JACM 1962.
[2] Doron, Lior, and Shimrit Shtern. Methodology and first-order algorithms for solving nonsmooth and non-strongly convex bilevel optimization problems. MP 2023.
[3] Hansen, Per Christian. Regularization tools version 4.0 for Matlab 7.3. Numerical algorithms 46 (2007): 189-194.
[4] Jiang, R., Abolfazli, N., Mokhtari, A. and Hamedani, E.Y. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem. AISTATS 2023.
Pdf: /pdf/e57314a1652d979f0f0527101e11104a3b521fb3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Referencing Where to Focus: Improving Visual Grounding with Referential Query | Accept (poster) | Summary: This paper focuses on the visual grounding task, which proposes a query adaption module that can be seamlessly integrated into CLIP. By strategically inserting this module into different layers of CLIP, the learnable query can adaptively learn target-related information from multi-level image feature maps, and iteratively refine the acquired information layer by layer. Additionally, it can offer prior information to the decoder, effectively mitigating the learning difficulty of the decoder, and accurately concentrating on the target object. Extensive experiments on five visual grounding benchmarks (RefCOCO/+/g, ReferItgame, Flickr30k) validate the effectiveness of the proposed method.
Strengths: 1.This paper introduces a query adaption module that can adaptively learn target-related information from different layers in CLIP, and provides the visualization results of the attention map in experiments. This design attracted me and made me feel interested.
2.The proposed QA module can not only capture target information adaptively, but also act as an adapter to avoid adjusting Backbone's parameters. Compared with the previous fine-tuning Backbone's work, this module has advantages and achieves better performance with smaller training parameters.
3.In this task, unlike most prior DETR-like research that primarily concentrates on decoder design, this paper focuses on learnable query optimization, making it innovative in its approach.
4.The authors conduct experiments on multiple visual grounding benchmarks (RefCOCO/RefCOCO+/RefCOCOg, Flickr30K, and ReferItGame) and also experiment on the RES benchmark, achieving improved performance. This provides robust empirical support for the effectiveness of the method.
Weaknesses: 1.It is better to provide the experiment results without using auxiliary loss, which can further observe the influence of auxiliary loss on the referential query.
2.Is it feasible to directly utilize the referential query without subsequent decoding operations? I am interested in the accuracy of the referential query.
3.Typo: On line 72, "we propose a query adaption module, RefFormer...". It appears that there might be an error in this sentence.
Technical Quality: 3
Clarity: 4
Questions for Authors: See weakness, please. If the author can solve my above confusion well, I will consider raising the score.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: In the limitations section, the author states that this method still has room for improvement in RES tasks. In addition, by visualizing the attention map, I find that this method accurately locates the area of the target object. Therefore, I believe this method holds great potential in RES tasks, and I hope the author can make further advancements in future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1:It is better to provide the experiment results without using auxiliary loss, which can further observe the influence of auxiliary loss on the referential query.
We conduct the ablation study with auxiliary loss on RefCOCOg, and the results demonstrate the effectiveness of auxiliary loss. By employing auxiliary loss, the reference query can capture the target-related visual contexts more effectively.
| Method | val | test |
|--------------------|-------|-------|
| w/o auxiliary loss | 74.24 | 73.82 |
| Ours | 76.33 | 75.33 |
Thank you, we will add this experiment to the ablation studies.
> Q2: Is it feasible to directly utilize the referential query without subsequent decoding operations? I am interested in the accuracy of the referential query.
The referential query is designed to provide prior information to the decoder. Since the channel dimension in the QA module is lower, the referential query alone may not accurately predict the coordinates of the targets. We conduct the experiment on RefCOCOg, and the results are shown below.
| val | test |
|-------|-------|
| 52.92 | 51.87 |
Thank you, we will add this explanation to the method.
> Q3: Typo: On line 72, "we propose a query adaption module, RefFormer...". It appears that there might be an error in this sentence.
Thank you. We will rectify "RefFormer" to "QA".
---
Rebuttal Comment 1.1:
Title: Re-response to the author:
Comment: Thanks for the authors' careful response. After carefully reviewing the authors' responses to my concerns and those of other reviewers, in which they included a large number of experiments for further explanation and analysis, my appreciation for the significance of this paper has deepened.
The authors have excellently addressed my concerns. As a result, I have decided to raise my score. | Summary: This paper addresses the generation of queries for the decoder. The authors propose RefFormer to generate the referential query with the prior context. A query adaption module is proposed to capture extensive target-related context and provide valuable referential knowledge for the decoder. Extensive experiments validate the effectiveness of the proposed method.
Strengths: 1. This paper proposes a new method for query initialization, where multimodal information is continuously embedded during the feature extraction stage to provide the query with prior knowledge.
2. Extensive experiments validate the proposed method.
Weaknesses: This idea seems straightforward but lacks some innovation. The method mentioned in the paper has been applied in other fields, such as R2-tuning. The proposed module is also simple to implement. Although the model achieves good results, I believe the paper does not meet the standards of NIPS.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Could you provide the number of trainable parameters and the inference speed?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. Limitations are addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Q1: This idea seems straightforward but lacks some innovation. The method mentioned in the paper has been applied in other fields, such as R2-tuning.
Thank you for your question. We analyze the differences between our approach and R2-tuning [a] from two aspects: motivation and implementation:
**[Motivation]:** [a] designs an **image-to-video transfer framework** that applies CLIP to video temporal grounding tasks without fine-tuning. In contrast, our work focuses on **improving learnable queries in DETR-like structures** for visual grounding tasks. Specifically, [a] designs a parameter-tuning strategy to model spatial-temporal information from coarse to fine using CLIP, which can serve as an adapter. Instead, our work aims to generate prior information for the decoder to improve the learnable query by integrating the QA module into the CLIP layers.
**[Implementation]:** In our work, we focus on improving learnable queries rather than merely tuning gradient flows. The comparison is shown as follows:
1. *The feature interaction in CLIP layers.* It primarily encompasses three key differences: **1) the generated features (frame-level spatial-temporal features vs. referential queries), 2) vision-language interaction (unidirectional vs. bidirectional), and 3) gradient flow tunning (visual side vs. visual and language sides)**. Specifically as follows:
+ [a] introduces the $R^2$ blocks to let the patch-level representations interact with language representations so as to adaptively pool spatial features, and then combines the pooled features with the [CLS] token to extract frame-level spatial-temporal features. In our work, we incorporate the learnable referential queries to interact with the language and visual features for capturing target-related visual contexts.
+ The interaction in [a] is unidirectional, focusing solely on extracting query (language)-modulated spatial features. Conversely, the interaction in our work is bidirectional, not only highlighting language-related context within visual features but also integrating context-aware visual features into language features.
+ In [a], the tuned gradient flows through CLIP are confined to the vision encoder and specific to the [CLS] token, while in our method they are adjusted in both the vision and language encoders and apply to patch features.
2. *Decoding.* **[a] applies traditional convolution layers** to spatial-temporal features to predict the temporal boundary, while **our work adopts a DETR-like structure** and employs **referential queries** to provide prior information for the decoder.
3. *Multi-level features.* We introduce language-guided multi-level fusion to **aggregate the image features from the different layers** rather than applying **1D convolutions to generate the temporal feature pyramid in [a]** for subsequent decoding.
To further validate the effectiveness of our approach, we integrate the method from [a] into our framework. The results, presented below, demonstrate that our method achieves superior performance.
| Method | val | test |
|--------------------------------------------------|-------|-------|
| Decoding with convolution layers | 70.02 | 69.87 |
| Tuning on the visual side and using [CLS] token | 66.97 | 66.12 |
| Unidirectional interaction | 72.45 | 72.02 |
| Ours | 76.33 | 75.33 |
**Additionally, we would like to highlight that [a] is a concurrent work that has been accepted by ECCV.** We will reference it and discuss it in the related work.
[a] Liu Y, He J, Li W, et al. $R^2$-Tuning: Efficient Image-to-Video Transfer Learning for Video Temporal Grounding[J]. arXiv preprint arXiv:2404.00801, 2024.
>Q2: Could you provide the number of trainable parameters and the inference speed?
We provide the number of trainable parameters and the inference speed below and compare them with other open-source methods. Compared to TransVG and RefTR, which also use a DETR-like structure, our approach requires fewer trainable parameters and achieves faster inference.
| Method | Trainable parameters(M) | Inference speed(ms) |
|---------|----------------------|-----------------|
| ReSC [39] | 174.85 | 53 |
| Transvg [7] | 152.61 | 62 |
| RefTR [19] | 144.73 | 44 |
| Ours | 29.20 | 32 |
Thank you, we will add this experiment to the experiments.
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's rebuttal. I still think the novelty of this paper is limited. So I keep my score.
---
Rebuttal 2:
Title: Please let us know whether you have any follow-up questions
Comment: Dear Reviewer DDuF,
We hope you are doing well. As the discussion period is coming to an end (Aug 13), we wanted to reach out to see if you have any follow-up questions. If so, we would appreciate the opportunity to respond before the discussion period ends. We believe our above messages should have addressed your concerns, and therefore may warrant an increase in score if you find them helpful as well. Would you please let us know if you have any further questions or concerns? We are happy to continue the discussion.
Thank you very much again for your thoughtful review and help in improving the paper. We appreciate your time and consideration.
Regards, Authors
---
Rebuttal 3:
Title: Response to Reviewer DDuF
Comment: Thank you for your feedback. Our approach fundamentally differs from [a]. As previously mentioned, our study concentrates on improving learnable queries within DETR-like architectures, as opposed to solely concentrating on parameter adjustments as in [a]. Furthermore, experiments have demonstrated significant improvements resulting from our approach.
We would like to emphasize once again our motivation:
1) We focus on improving the learning process of the learnable queries, different from the previous work that emphasizes the design of sophisticated multi-modal decoders.
2) We propose a query adaption module that not only adaptively captures the target-related context, providing valuable referential knowledge for the decoder, but can also serve as the adapter.
3) By strategically inserting the QA module into different layers of CLIP, the query adaptively learns target-related information from multi-level image feature maps, and iteratively refines the acquired information layer by layer.
Additionally, we conduct extensive experiments to validate the effectiveness of our method and provide the visualization results to demonstrate the reasonability of our proposed referential query.
We sincerely hope you will reconsider our paper and appreciate your valuable suggestions for improving this work. If there are any other questions or areas you'd like to discuss, we welcome further conversation. | Summary: The existing one-stage visual grounding methods suffer from cross-modal learning difficulty and focus simply on the deepest visual features. This paper designs a query adaption (QA) module to provide target-related referential queries for the decoder. The proposed architecture Reformer is based on a CLIP model with multiple QA modules inserted in different layers of CLIP. The proposed method can be applied to both REC and RES tasks. The performance is validated on various benchmarks.
Strengths: - The paper is generally well-written and easy to follow.
- The motivation for addressing two issues present in the existing one-stage grounding model is clear, and the idea of enhancing cross-modal interaction is intuitive.
- The attention maps show the refining process across different layers as expected.
- It works for both object-level and dense grounding.
Weaknesses: - The proposed module QA has to be inserted into specific positions inside the VLM to improve grounding performance. Experiments in Table 5 indicate the importance of layer selection. However, it is unclear why the 5 layers perform the best and why these indices are selected. As mentioned in the introduction, the low and mid-level features are crucial for grounding (line 48), yet lower layers (e.g., 2nd) containing rich low-level features diminish the performance. In other words, the selection of layers is not explained theoretically, while 3 combinations are not enough to justify the selection empirically.
- In condition aggregation and multi-modal fusion (CAMF) module, it seems that the query interacts more with the visual representations (i.e., the upper part in Figure 3), while the alternative that mainly integrates query with textual representations seems feasible (i.e., place the query to the lower part in Figure 3). The motivation and advantage of design are missing.
- The backbone and training data information (especially the version of Reformer) is not provided in Tables 2 and 3.
Technical Quality: 2
Clarity: 4
Questions for Authors: - How does the paper select the number of inserted QA modules? How does the paper select the indices of inserted layers? Why do more layers or lower layers hurt the performance?
- In the condition aggregation and multi-modal fusion (CAMF) module, the query seems to interact more with the visual representations (i.e., the upper part in Figure 3). Why is it designed in such a way? What is the advantage over mainly based on textual representations?
- Does QA support different model architectures?
- What is the version of the reformer in Tables 2 and 3?
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: While the motivation is clear and the idea is intuitive, the technical implementation of essential modules/structures lacks theoretical or empirical justification.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: How does the paper select the number of inserted QA modules? How does the paper select the indices of inserted layers? Why do more layers or lower layers hurt the performance?
We provide more experiments on layer selection on RefCOCOg below. We categorize layers [1-4], [5-8], and [9-12] as low-level, mid-level, and high-level, respectively. As shown in the table below, we observe that introducing more layers can improve performance. However, with continued layer addition, the performance gains become less significant. Therefore, **to strike a balance between performance and computational cost, we opt for [4,6,8,10,12]**. Compared to IDs 7, 9, and 10, ID 8 shows a slight performance decline. This could be attributed to the fact that shallower layers focus on local details and convey less semantic information, which **may introduce noise**. Similarly, in object detection, RPN-based models [a,b] do not use C1 feature maps for the same reason. Additionally, we explore other combinations of layers, and the performance is not sensitive to the exact choice.
| ID | Layer | val | test |
|------|----------------|-------|-------|
| 1 | None | 65.50 | 65.54 |
| 2 | 3,7,11 | 73.94 | 72.94 |
| 3 | 4,8,12 | 74.08 | 73.82 |
| 4 | 3,5,7,11 | 74.43 | 74.18 |
| 5 | 4,6,8,12 | 74.82 | 74.20 |
| 6 | 3,5,7,9,11 | 75.06 | 74.94 |
| 7 | 4,6,8,10,12 | 76.33 | 75.33 |
| 8 | 2,4,6,8,10,12 | 75.84 | 75.32 |
| 9 | 4,6,8,9,10,12 | 76.40 | 75.31 |
| 10 | 4,6,8,10,11,12 | 76.51 | 75.43 |
Thank you, we will improve Table 5 and the corresponding experimental analysis.
[a] Ren S, He K, Girshick R, et al. Faster R-CNN: Towards real-time object detection with region proposal networks[J]. IEEE transactions on pattern analysis and machine intelligence, 2016, 39(6): 1137-1149.
[b] He K, Gkioxari G, Dollár P, et al. Mask r-cnn[C]//Proceedings of the IEEE international conference on computer vision. 2017: 2961-2969.
> Q2: In condition aggregation and multi-modal fusion (CAMF) module, it seems that the query interacts more with the visual representations (i.e., the upper part in Figure 3), while the alternative that mainly integrates query with textual representations seems feasible (i.e., place the query to the lower part in Figure 3). The motivation and advantage of design are missing.
We would like to clarify that **the referential queries aim to capture the target-related visual context.** Initially, they interact with the expression to aggregate the text condition (in CAMF). Subsequently, they interact with the visual features, guided by the text condition, to capture and refine visual contexts about the target object (in TR). If we placed them on the text side instead, they would not be able to capture target-related visual features based on the conditional information. Additionally, we provide the performance comparison on RefCOCOg below. The results further demonstrate the effectiveness of our design.
| Method | val | test |
|------------------------|-------|-------|
| Query on the text side | 74.21 | 73.52 |
| Ours | 76.33 | 75.33 |
Thank you, we will add this experiment to the ablation studies.
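The two-stage interaction described above (text-condition aggregation, then target-related visual context) can be sketched as single-head cross-attention in NumPy; the shapes, token counts, and names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cross_attention(q, kv):
    # single-head cross-attention: each query forms a convex
    # combination of the key/value tokens (softmax over tokens)
    w = q @ kv.T / np.sqrt(q.shape[-1])
    w = np.exp(w - w.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)
    return w @ kv

rng = np.random.default_rng(0)
queries = rng.standard_normal((4, 64))          # learnable referential queries
text_tokens = rng.standard_normal((12, 64))     # expression features
visual_tokens = rng.standard_normal((196, 64))  # patch features from a CLIP layer

queries = cross_attention(queries, text_tokens)    # aggregate the text condition
queries = cross_attention(queries, visual_tokens)  # gather target-related context
```

Placing the queries on the text side would swap the second call's operands, so they would no longer pool visual tokens, which is the design choice discussed above.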
> Q3: The backbone and training data information (especially the version of Reformer) is not provided in Tables 2 and 3.
For a fair comparison, we utilize the ViT-Base 32 and train on a single dataset (Flickr30K Entities and ReferItGame, respectively) to compare with other methods. Consistent with prior approaches, we perform ablation studies (Table 3-6) using ViT-Base 32 and train on a single dataset (RefCOCOg).
Thank you, we will further improve the experimental details in our paper.
> Q4: Does QA support different model architectures?
We apply our method to single-modal encoders, i.e., Swin-base + BERT, and conduct experiments on RefCOCOg. The results, presented below, demonstrate that our method is also compatible with single-modal encoders.
| Method | val | test |
|-----------------|-------|-------|
| Swin-base + BERT| 75.25 | 75.61 |
| CLIP-base | 76.33 | 75.33 |
Thank you, we will add this experiment to the experiments.
---
Rebuttal Comment 1.1:
Comment: The authors have addressed my concerns, and it would be better if they could include the supplement experiments in their revision. I tend to accept the paper after the rebuttal. | Summary: This paper proposes a novel visual grounding framework, called RefFormer, aims to improve the learning process of learnable queries. Specifically, it introduces a query adaption module (QA) that can be seamlessly integrated into different layers of CLIP, which can not only provide prior information to the decoder, but also act as an adapter to learn task-specific knowledge. The effectiveness of the proposed method is extensively validated through experiments conducted on the five popular visual grounding benchmarks, namely RefCOCO, RefCOCO+, RefCOCOg, Flickr30K, and ReferItGame. Furthermore, the authors extend this approach to dense grounding tasks, demonstrating its effectiveness and generalization.
Strengths: 1. The proposed method is novel: it integrates query learning and an adapter into one module, effectively leveraging various levels of feature information within CLIP.
2. By introducing the QA module, the model can adaptively extract target-related information from the backbone and provide prior information to the decoder instead of relying on randomly initialized queries.
3. This paper achieves better performance with fewer training parameters and data.
4. Sufficient experiments have been conducted to show the promising results of the proposed method.
5. The writing of this paper is good and clear.
Weaknesses: While the paper is clearly written, some areas can still be improved. Some suggestions are as follows:
1. Line 72 should correct "RefFormer" to "QA".
2. A more detailed description is needed for the text side in the QA module, such as how the interaction with visual features is initiated.
3. Why is a direct use of the referential query not preferred, and why is an additional learnable query introduced in the decoder?
4. Specify the dimension along which the concatenation operation in Section 4.1 is performed.
5. Correct "[r_t, F^i_t]" to "[r_t; F^i_t]" in Equation 9.
6. Provide experimental details regarding the channel dimension of the QA module in the down-projection process.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. Why is a direct use of the referential query not preferred, and why is an additional learnable query introduced in the decoder?
2. Provide experimental details regarding the channel dimension of the QA module in the down-projection process.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: See the above weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > Q1: Line 72 should correct "RefFormer" to "QA".
Thank you. We will correct the typos in our paper.
> Q2: A more detailed description is needed for the text side in the QA module, such as how the interaction with visual features is initiated.
In the CAMF block, we take the language features as the query, and interact them with the image features using cross-attention. By doing so, we can incorporate rich visual context into language features to better indicate the target object. Subsequently, in the TR block, we employ self-attention to enhance the context-aware language features produced above to refine the expression condition.
Thank you, we will add this description in Sec 4.1.
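For concreteness, the interaction described above can be sketched as a single cross-attention step (an illustrative NumPy toy with our own names and shapes, not the paper's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(lang_feats, img_feats):
    """One CAMF-style step: language tokens act as queries, image patches
    as keys/values, folding visual context into the language features."""
    d = lang_feats.shape[-1]
    scores = lang_feats @ img_feats.T / np.sqrt(d)  # (L, P) token-patch similarity
    attn = softmax(scores, axis=-1)                 # each token's weights over patches
    return attn @ img_feats                         # (L, d) context-aware language features

rng = np.random.default_rng(0)
lang = rng.standard_normal((4, 8))   # 4 text tokens, dim 8
img = rng.standard_normal((16, 8))   # 16 image patches, dim 8
out = cross_attention(lang, img)
```

Each language token attends over all image patches, so the returned features carry the visual context used to better indicate the target object.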
> Q3: Why is a direct use of the referential query not preferred, and why is an additional learnable query introduced in the decoder?
As outlined in line 189, we feed the referential query into $\phi_q(\cdot)$ to adjust its significance. When the referential query is inaccurate (i.e., its significance approaches zero), the query in the decoder degenerates to the vanilla query.
Thank you, we will add this explanation in line 189.
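One way to read this mechanism (our own illustrative sketch; the actual form of $\phi_q(\cdot)$ is defined in the paper) is as a learned gate on the referential query:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decoder_query(ref_query, vanilla_query, w_gate):
    """Blend the referential query with a learnable vanilla query via a
    scalar gate (a stand-in for phi_q). When the gate approaches zero
    (i.e. the referential query is judged inaccurate), the decoder query
    degenerates to the vanilla learnable query."""
    gate = sigmoid(ref_query @ w_gate)  # significance of the referential query
    return vanilla_query + gate * ref_query

ref = np.ones(4)
vanilla = np.array([0.5, -0.5, 0.5, -0.5])
# a gate weight that suppresses the (assumed inaccurate) referential query
q = decoder_query(ref, vanilla, w_gate=-30.0 * np.ones(4))
```

With the gate driven toward zero, `q` is numerically indistinguishable from the vanilla query, matching the degeneration behavior described above.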
> Q4: Specify the dimension along which the concatenation operation in Section 4.1 is performed.
Thank you for your suggestion. We will further improve the description details in our paper. In Eqs. 7, 9, 11, and 12, the concatenation is along the patch dimension.
> Q5: Correct "[r_t, F^i_t]" to "[r_t; F^i_t]" in Equation 9.
Thank you. We will correct the typos in our paper.
> Q6: Provide experimental details regarding the channel dimension of the QA module in the down-projection process.
Thank you for your suggestion. In our experiments, we set the channel dimension of the down-projection to 128. Furthermore, we provide an ablation study on the channel dimension of the down-projection below. Increasing the dimension improves performance, but to strike a balance between performance and computational cost, we set the channel dimension to 128.
| Dimension | val | test |
|-----------|-------|-------|
| 64 | 74.15 | 73.88 |
| 128 | 76.33 | 75.33 |
| 256 | 76.62 | 75.39 |
| | | |
Thank you, we will add this experiment to the ablation studies.
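As a hedged illustration of the down/up projection (our own names and shapes; the real QA module is defined in the paper), a bottleneck adapter with hidden width 128 can be sketched as:

```python
import numpy as np

def bottleneck_adapter(x, w_down, w_up):
    """Adapter-style bottleneck: down-project (e.g. 768 -> 128), apply a
    ReLU, up-project back, and add a residual connection."""
    h = np.maximum(0.0, x @ w_down)  # down-projection + nonlinearity
    return x + h @ w_up              # up-projection with residual

rng = np.random.default_rng(0)
x = rng.standard_normal((10, 768))               # 10 tokens, 768-dim features
w_down = 0.02 * rng.standard_normal((768, 128))  # bottleneck width 128
w_up = 0.02 * rng.standard_normal((128, 768))
y = bottleneck_adapter(x, w_down, w_up)
```

The bottleneck width is the knob ablated in the table above: a wider hidden dimension adds capacity (and cost), which is why 128 is chosen as the balance point.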
---
Rebuttal 2:
Title: Please let us know whether you have any follow-up questions
Comment: Dear Reviewer RwV9,
We are glad for the recognition of our work. Please feel free to raise any further questions, and we are more than happy to continue the discussion with you. Thanks again for your great efforts and constructive advice in reviewing this paper!
Regards, Authors
---
Rebuttal Comment 2.1:
Title: Comment to the author
Comment: Thanks to the author's response, which has effectively addressed my concerns regarding the details of this paper. Overall, I find this work to be impressive, I am inclined to maintain my recommendation to accept it. I hope the author can address the aforementioned issues, as well as those raised by other reviewers, in the revised version. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enhancing Efficiency of Safe Reinforcement Learning via Sample Manipulation | Accept (poster) | Summary: This paper proposes a safe RL algorithm that adjusts the sample number during one on-policy update based on the conflict between the reward gradient and the cost gradient. When the policy's cost value is near the constraint threshold and there is a gradient conflict, a larger sample number is used; otherwise, a smaller sample number is applied. The paper also provides theoretical results about convergence rates, reducing oscillation, and sample efficiency. Finally, the paper evaluates the algorithm on two safe RL benchmarks.
Strengths: - It is an interesting and innovative attempt to improve sample efficiency for safe RL by dynamically adjusting the sample number of on-policy updates based on gradient conflict.
- Some efforts on theoretical analysis are provided to justify the proposed approach.
- Empirical results demonstrate the capability of this work to improve performance and sample efficiency.
Weaknesses: - The sample manipulation is an interesting mechanism, and the gradient conflict, as a manipulation signal, occurs widely in many safe RL algorithms, including both primal and primal-dual methods. However, it is solely applied to PCRPO, which may weaken its generalization and persuasiveness and makes it seem like a minor improvement tailored to PCRPO.
- The description of the main backbone PCRPO is insufficient, making it difficult to understand the whole algorithm pipeline without prior knowledge of PCRPO. See question 1, 2.
- Some points about the theoretical results remain to be clarified. See questions 3 and 4.
- Some experimental settings are debatable. See question 5.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Eq. (3)(4) of PCRPO, serving as an essential component of your algorithmic pipeline, is kind of indigestible. How to set $x^r_t$, $x^c_t$? And how is Eq. (3) functioning as 'projecting reward and cost gradients onto their normal planes'? It would be better to add more details or illustrations about PCRPO (at least in the appendix).
2. I see the $h^+,h^-$ dynamically change in rows 4, 7 in Algorithm 1 ESPO. Is it an existing component of PCRPO or a new trick in ESPO?
3. From the description of Theorem 4.1 and Proposition 4.2, I cannot find the association between sample manipulation and these two theoretical results. Can I view them as conclusions about PCRPO and not tailored to ESPO?
4. It seems that Assumption A.7, which bridges sample size and performance, serves as a very important theoretical base to verify the sample size manipulation. There should be more intuitive explanations and discussions to justify this assumption.
5. I guess you use a fixed number of training epochs for all algorithms. But training epochs are not a good metric for comparison, especially when each epoch corresponds to different sample steps, so maybe the x-axis of Figures 2 and 3 should be set to sample steps rather than training epochs to fairly demonstrate the sample efficiency of all algorithms. Besides, Table 1 should report the sample steps used to reach the same performance rather than the sample steps in a fixed number of epochs.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reply to Reviewer qrLu
We appreciate the reviewer for recognizing our contributions in both practice and theory, and for providing constructive suggestions.
> **Q1:** The sample manipulation is an interesting mechanism with the gradient conflict as a signal. Is it solely applied to PCRPO or can be extended to other safe RL algorithms?
**A1:** Many thanks for the reviewer's recognition of the potential generalization of our sample manipulation approach. The reviewer is insightful that using the gradient conflict as a manipulation signal (metric) can improve the sample efficiency of extensive safe RL algorithms, not only PCRPO. Please refer to **Q1** in the general response.
> **Q2:** How to set $x^r_t$ or $x^c_t$ ? Explain Eq. (3)(4) of PCRPO and how is Eq. (3) functioning as 'projecting reward and cost gradients onto their normal planes'? Add more details about PCRPO.
**A2:** Thank the reviewer for the clarification suggestions.
- $x_t^r$ and $x_t^c$ represent the weights of the reward gradient and the safety cost gradient in the final gradient $w_{t+1}$. Please refer to **Q2** of the general response for more details, where we explain how to choose/set $x_t^r,x_t^c$ and also **conduct new ablation studies using varying $x_t^r$ and $x_t^c$**.
- As the reviewer suggested, we have added an intuitive description for Eq.(3) in the main text and a more detailed introduction and illustration for PCRPO in the appendix. Recall that the first line of Eq.(3) serves as the update rule when there is some conflict between the reward gradient and the safety cost gradient. In that case, to explain "project the reward and cost gradients onto their normal planes", we take the first term of Eq.(3) $g_r - \frac{g_r \cdot g_c}{ ||g_r||^2} g_c$ as an example. $\frac{g_r \cdot g_c}{ ||g_r||^2} g_c$ represents the vector projection of the reward gradient $g_r$ on the cost gradient $g_c$. The corresponding surrogate reward gradient $g_r - \frac{g_r \cdot g_c}{ ||g_r||^2} g_c$ becomes a vector that is perpendicular to $g_c$ --- the surrogate reward gradient not only improves reward but also won't conflict with the cost gradient $g_c$ (since it lies in the plane perpendicular to $g_c$, which we call the normal plane).
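For concreteness, this projection can be sketched in a few lines (an illustrative toy of our own; note we use the standard vector-projection denominator $\|g_c\|^2$, matching the correction noted later in this discussion):

```python
import numpy as np

def project_to_normal_plane(g_r, g_c, eps=1e-12):
    """Surrogate reward gradient: remove from g_r its component along g_c,
    using the standard vector projection (g_r . g_c / ||g_c||^2) * g_c,
    so the result is perpendicular to the cost gradient."""
    return g_r - (g_r @ g_c) / (g_c @ g_c + eps) * g_c

# a conflicting pair: the reward and cost gradients have negative inner product
g_r = np.array([1.0, 2.0, -1.0])
g_c = np.array([-1.0, 0.5, 1.0])
g_r_surrogate = project_to_normal_plane(g_r, g_c)
```

The surrogate gradient has (numerically) zero inner product with `g_c`, so following it improves reward without pushing against the cost direction.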
> **Q3:** I see the dynamically change in rows 4, 7 in Algorithm 1 ESPO. Is it an existing component of PCRPO or a new trick in ESPO?
**A3:** We apologize for this confusion. The dynamic change in rows 4 and 7 of Algorithm 1 (ESPO) is indeed an existing component from PCRPO, not a new trick introduced in ESPO. This mechanism in PCRPO is designed to dynamically adjust the range of the soft region (by setting $h^+, h^-$) to adapt the optimization behavior. This trick improves the trade-off between the reward performance and safety constraint objectives, and we retain it in ESPO to leverage its efficiency.
The contributions of ESPO focus on developing the new sample manipulation paradigm using gradient conflict as the metric/signal, which shows powerful advantages in sample efficiency, as shown in the paper and our new experiments (please refer to the previous answers **A1** to Q1).
> **Q4:** Are the theoretical results in Theorem 4.1 and Proposition 4.2 conclusions about PCRPO but not tailored to ESPO?
- **A4:** This is indeed one of the essential theoretical contributions of this work. The reviewer is correct that the convergence guarantees (Theorem 4.1) and the advantages of reduced optimization oscillation (Proposition 4.2) for our ESPO also hold for the prior work PCRPO, which are direct implications of our results. Notably, PCRPO is currently one of the state-of-the-art safe RL algorithms, while still lacking theoretical guarantees. This work closes this gap for PCRPO. In addition, ESPO is a broader algorithm framework --- PCRPO can be seen as a special case with identical sample size choices for all training iterations, for which all the theoretical results hold.
> **Q5:** Assumption A.7 serves as a key to verify the sample size manipulation. Give more intuitive explanations and discussions to justify this assumption.
- **A5:** We have added discussions to justify Assumption A.7 in the new version: In words, Assumption A.7 assumes a local Lipschitz property of the Q function error term $\delta_t = \|Q_r^{\pi_{w_t}} - \overline{Q}_t^r\|_2$ with respect to the sample size $s_t^B$ used for estimating $\overline{Q}_t^r$ at any time step $t$. Typically, this can be satisfied easily since $\delta_t$ generally decreases monotonically as the sample size increases, at a polynomial rate such as $\delta_t \approx O(\sqrt{\frac{1}{s_t^B}})$, which usually holds with high probability by concentration inequalities from high-dimensional statistics.
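The $O(\sqrt{1/s_t^B})$ behavior can be checked with a quick Monte Carlo toy (our own example, not the paper's Q-function estimator): the RMS error of a sample-mean estimate shrinks like the square root of the sample size.

```python
import numpy as np

def rms_error_of_sample_mean(sample_size, trials=1000, seed=0):
    """RMS error of estimating the true mean (0) of a unit-variance
    Gaussian from `sample_size` draws, averaged over many trials."""
    rng = np.random.default_rng(seed)
    estimates = rng.standard_normal((trials, sample_size)).mean(axis=1)
    return float(np.sqrt(np.mean(estimates ** 2)))

err_small = rms_error_of_sample_mean(100)   # roughly 1/sqrt(100)  = 0.1
err_large = rms_error_of_sample_mean(2500)  # roughly 1/sqrt(2500) = 0.02
```

Multiplying the sample size by 25 shrinks the RMS error by roughly a factor of 5, consistent with the square-root rate that Assumption A.7's local Lipschitz property captures.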
> **Q6:** Change the x-axis of Figures 2 and 3 to sample steps rather than training epochs for fair comparisons. Besides, Table 1 should report the sample steps used to reach the same performance rather than that used in a fixed number of epochs.
**A6:** As the reviewer suggested, to provide a more fair and informative evaluation for the proposed method, we have made the following changes to show the results:
- For Figures 2 and 3: We have revised them to use the number of sample steps as the x-axis rather than training epochs, shown in **Figures 5 and 6 in the pdf of the general response**. This more reasonable illustration further underscores the sample efficiency of our proposed ESPO compared to baselines.
- For both Tables 1 and 2: We have revised them (Tables 1 and 2 in the general response pdf) to report the number of sample steps required to reach reasonable performance thresholds, rather than after a fixed number of epochs.
- Using these fairer metrics that the reviewer suggested, the results demonstrate an even greater advantage of our ESPO over the baselines, in terms of not only sample efficiency, but also reward and safety performance.
---
Rebuttal 2:
Comment: Thanks for the authors' response. The supplementary experimental results are comprehensive, and the additional clarifications address most of my confusions. Thus, I decide to raise the score to 4. However, I still hold a slightly negative opinion due to two main concerns: (1) the limited applicability of sample manipulation to broader algorithms (the paper structure seems overly tailored to PCRPO, even though some additional experimental results about TRPO-Lag and CRPO are provided), and (2) the weak connection between the theoretical analysis and the main contribution (sample manipulation).
Just a reminder, it seems there is a typo in your response A2 where $g_r-\frac{g_r\cdot g_c}{||g_r||^2}g_c$ should be corrected to $g_r-\frac{g_r\cdot g_c}{||g_c||^2}g_c$ according to the paper.
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
Thank you for engaging in the discussion with us and for appreciating our new experimental results! We address your further insightful comments below. We have corrected the typo to $g_r - \frac{g_r \cdot g_c}{\|g_c\|^2}g_c$. Regarding your other comments:
- **Adapting the presentation to highlight the generalization of sample manipulation to extensive safe RL algorithms and problems.** The reviewer is correct that we did a step-by-step presentation to introduce the proposed algorithm ESPO; it is indeed built on PCRPO, which serves as an example in our algorithm framework. As the reviewer suggested, we have decided to adapt the presentation for further clarification (in Section 4 and the appendix) following the structure below:
- **Showing the key module sample manipulation** --- depending on the conflict signal of reward and cost gradients.
- **How to integrate it into diverse safe RL algorithms**: introducing PCRPO and then the corresponding ESPO (PCRPO + sample manipulation) as a detailed example; introducing other examples regarding the primal method CRPO and the primal-dual method TRPO-Lagrangian. (The experimental results are presented in Section 5 and **Q1** in the general response.)
- **How to integrate it into more complex safe RL problems** --- multi-objective safe RL problems. (The experimental results are presented in **Q1** in the general response.)
We appreciate the reviewer for enabling us to verify the generalization power of our sample manipulations and helping us improve the current paper organization.
- **Main contributions on the theoretical side:** Thanks for raising this question, which is indeed an essential contribution in our theoretical part. A brief answer would be: our theoretical analysis is a more general framework that applies not only to the sample manipulation module, but to extensive primal-based safe RL algorithms. The separation of the theoretical framework from sample manipulation is not a flaw, but an intentional advantage for generalization.
Specifically, recall that we provide three provable advantages for ESPO --- 1) convergence in terms of both the optimal reward and the constraint requirement (Theorem 4.1); 2) efficient optimization with reduced oscillation (Proposition 4.2); 3) sample efficiency with sample size manipulation (Proposition 4.3).
- **The sample efficiency guarantee directly results from sample manipulation.** The reviewer is correct in noting that the provable advantages of ESPO also depend on other modules since all modules influence the optimization process. But sample manipulation plays a key role in supporting the sample efficiency guarantees for ESPO.
- **The separability of the theoretical framework from sample manipulation is not a flaw, but an intentional advantage for generalization.** We would like to highlight that our theoretical guarantees (also the technical tools) from the optimization perspective --- convergence (Theorem 4.1) and the advantages of reduced optimization oscillation (Proposition 4.2) --- can potentially work for many more primal-based safe RL algorithms even without sample manipulation (such as PCRPO). This is attributed to the fact that the theoretical analysis can be (mostly) decomposed into an optimization part and a statistical part, where sample manipulation primarily has an impact. Notably, PCRPO is currently one of the state-of-the-art safe RL algorithms, while still lacking theoretical guarantees for convergence. Our theoretical results for ESPO also hold for the prior work PCRPO, which are direct implications of our results. This work closes this gap for PCRPO and can be useful for providing convergence guarantees for extensive primal-based safe RL algorithms.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer,
Thank you for your valuable comments. We have responded to your comments regarding the differences compared to PCRPO and our theoretical contributions. If you have any further questions or comments, please don't hesitate to let us know.
As the rebuttal deadline is approaching, we hope our response can address your concerns. We appreciate your time and expertise in reviewing our work.
With gratitude
Authors | Summary: The paper introduces an approach, Efficient Safe Policy Optimization (ESPO), aimed at improving the efficiency of safe reinforcement learning (RL). ESPO tries to enhance sample efficiency through sample manipulation, addressing the challenges of sample inefficiency in safe RL, which often requires extensive interactions with the environment to learn a safe policy. The proposed method dynamically adjusts the sampling process based on the observed conflict between reward and safety gradients.
Strengths: 1. The paper is well motivated, articulates the challenges in existing safe RL methods and justifies the need for dynamic sample manipulation.
2. ESPO's dynamic sample manipulation based on gradient conflicts seems relevant to the field of safe RL, potentially reducing computational costs and improving learning efficiency.
3. The paper provides both theoretical analysis and empirical validation.
Weaknesses: 1. While the paper evaluates ESPO on two benchmarks, additional evaluations on more diverse and complex environments would strengthen the generalizability claims.
2. More in-depth comparisons with a broader range of SOTA methods could provide a clearer picture of ESPO's relative performance.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. How does ESPO perform in environments with high-dimensional state and action spaces compared to low-dimensional ones?
2. What is the impact of noisy gradient estimates on the performance and stability of ESPO?
3. Can ESPO be effectively integrated with off-policy or model-based RL methods to further enhance sample efficiency?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: 1. ESPO's reliance on gradient conflict signals for sample manipulation might limit its applicability in environments where gradient estimation is noisy or unreliable.
2. The method's scalability to very large or real-time environments is not thoroughly explored, raising questions about its practical deployment in such settings.
3. The sensitivity of ESPO to various hyperparameters, such as learning rates and sample size thresholds, needs further investigation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reply to Reviewer by8k
Many thanks to the reviewer for recognizing our contributions in terms of theory and practice.
> **Q1:** Adding evaluations on more diverse, complex, or real-time environments would strengthen the generalizability claims.
**A1:** Thank the reviewer for insightful comments. We agree that testing on additional benchmarks or safe RL tasks is essential to substantiate the generalizability of ESPO. Following the reviewer's suggestion,
- **We add three new sets of experiments on either a more complex class of safe RL problem --- multi-objective safe RL --- or on integrating our proposed technical modules with other safe RL algorithms to show the generalizability**. Please refer to **Q1** in the general response for details. The results from these additional experiments affirm that our method not only generalizes well to diverse safe RL problems but can potentially bring benefits for extensive safe RL algorithms.
- We leave more experiments on large and real-time environments to future work: (1) extending ESPO to safe MARL tasks, which involve multi-agent strategic interactions while maintaining safety constraints, with large-scale applications in collaborative robotics or autonomous systems. (2) Deploying ESPO in real-robot control tasks, which requires addressing real-world challenges such as sensor noise, actuator delays, and environmental uncertainties.
> **Q2:** More in-depth comparisons with a broader range of SOTA methods
- **A2:** Thank you for raising this point. First, we recall that we already include both SOTA and classical baselines from primal methods (CRPO, PCRPO) and primal-dual methods (PCPO, CUP, PPOLag), the two main classes of safe RL algorithms. So, instead of including more similar baselines, we choose to extend our ESPO to a new safe RL problem --- multi-objective safe RL --- and compare it to this problem's SOTA baseline CRMOPO. Details can be referred to in the previous answer **A1**. These additional experiments verify the efficiency of ESPO (or its variants) not only in the standard safe RL problems, but also in more complex tasks.
> **Q3:** How does ESPO perform in environments with high-dimensional state and action spaces compared to low-dimensional ones?
**A3:** Thank you for your insightful question. We observe that the proposed method ESPO demonstrates superior performance over baselines in both low-dimensional tasks (SafetyHopperVelocity-v1 (action: 3D, state: 10D) and SafetyReacher-v4 (action: 2D, state: 10D)) and relatively high-dimensional tasks (SafetyAntVelocity-v1 (action: 8D, state: 27D), SafetyWalker-v4 (action: 6D, state: 17D), and SafetyHumanoidStandup-v4 (action: 17D, state: 45D)), with particularly strong performance in low-dimensional tasks. For details, please refer to Figures 5 and 6 in the general response file.
- It is an interesting direction to implement ESPO for tasks with higher-dimensional states and actions, such as using an image as the state. This will involve more challenges, such as representation learning and computer vision. ESPO has potential in such high-dimensional tasks: its sample efficiency advantage over prior art matches the pressing need to reduce the required samples in those tasks --- for example, the video game benchmark Atari (with images as input) is typically one of the hardest benchmarks to solve and is costly in sample collection [1].
[1] Ye, Weirui, et al. "Mastering atari games with limited data." NeurIPS 2021.
> **Q4:** What is the impact of noisy gradient estimates on the performance and stability of ESPO due to its reliance on gradient conflict signals?
**A4:** We appreciate the reviewer's insightful question. Typically, for all policy-based or actor-critic (safe) RL algorithms, the noisy gradient estimates can bring challenges since they are usually estimated with a batch of samples but not infinite samples.
- **Inherent challenges for safe RL problems:** The reviewer is correct that noisy gradient estimates bring challenges for ESPO, but these stem not primarily from the algorithm design, but from the inherent daunting challenges of safe RL problems --- a well-balanced update direction is hard to find using noisy reward and safety cost gradient estimates (conflicts may arise between reward and cost).
- **The sample manipulation using gradient conflict signals (our key technical module in ESPO) can potentially reduce the effects of noisy gradient estimates.** Instead of limiting the applicability, the proposed sample manipulation is actually inspired by the noisy gradient estimate issue and designed to reduce its effect. In summary, the sample manipulation module uses more samples to estimate the final gradient when more accuracy is needed (when there is possibly a conflict between the reward and cost and a balance is required) and fewer samples when more error can be tolerated --- improving sample efficiency. We acknowledge that the gradient conflict signal metric may also be noisy, but it does not directly affect the final gradient direction. ESPO's superior performance implicitly shows that it is not sensitive to gradient conflict signal errors.
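This decision rule can be sketched as follows (thresholds and batch sizes are illustrative placeholders of our own, not ESPO's actual hyperparameters):

```python
import numpy as np

def choose_sample_size(g_r, g_c, cost_value, cost_limit,
                       small=2000, large=8000, slack=0.1):
    """Illustrative sample-manipulation rule: use a larger batch when the
    policy is near the cost limit AND the reward/cost gradients conflict
    (negative cosine similarity); otherwise a smaller batch suffices."""
    cos = (g_r @ g_c) / (np.linalg.norm(g_r) * np.linalg.norm(g_c) + 1e-12)
    near_limit = abs(cost_value - cost_limit) <= slack * cost_limit
    return large if (near_limit and cos < 0.0) else small
```

Conflicting gradients near the constraint boundary trigger the large batch (more accurate gradient estimates where a delicate balance is required); otherwise the small batch is used, saving samples.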
> **Q5:** Can ESPO be effectively integrated with off-policy or model-based RL methods?
- **A5:** The answer is Yes. The core capability of ESPO lies in its use of sample manipulations based on reward and cost gradients to improve sample efficiency. This mechanism can be seamlessly incorporated into the frameworks of off-policy and model-based approaches, where gradients are readily available. While it has not been tested, its fundamental principles suggest that such integration is feasible and potentially very beneficial. This is an exciting direction for future work.
> **Q6** The sensitivity of ESPO to various hyperparameters, such as learning rates and sample size thresholds.
- **A6:** Thank the reviewer for the comments on hyperparameters. Please refer to **Q3** in the general response for details.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer,
We sincerely appreciate your valuable comments on our paper. In response, we have deployed our method to more challenging tasks and other algorithms, including: (1) Primal-based methods (e.g., CRPO [1]); (2) Primal-dual based methods (e.g., TRPOLag [2,3]); (3) Safe multi-objective reinforcement learning (e.g., CRMOPO [4]). The results of these experiments demonstrate that our method exhibits superior performance compared to state-of-the-art baselines across diverse tasks and algorithms in terms of safety, reward, and sample efficiency. These findings suggest that our sample manipulation approach can serve as a general method for safe RL and potentially extend to multi-objective learning scenarios.
If you have any further questions or require additional clarification, please don't hesitate to let us know. As the rebuttal deadline approaches, we hope our response can address your concerns, and we are grateful for your time and expertise in reviewing our work.
With gratitude,
The Authors
> [1] Xu, T., Liang, Y., & Lan, G. (2021, July). Crpo: A new approach for safe reinforcement learning with convergence guarantee. In International Conference on Machine Learning (pp. 11480-11491). PMLR.
[2] Ray, A., Achiam, J., & Amodei, D. (2019). Benchmarking safe exploration in deep reinforcement learning. arXiv preprint arXiv:1910.01708, 7(1), 2.
[3] Ji, J., Zhou, J., Zhang, B., Dai, J., Pan, X., Sun, R., ... & Yang, Y. (2023). Omnisafe: An infrastructure for accelerating safe reinforcement learning research. arXiv preprint arXiv:2305.09304.
[4] Gu, S., Sel, B., Ding, Y., Wang, L., Lin, Q., Knoll, A., & Jin, M. (2024). Safe and Balanced: A Framework for Constrained Multi-Objective Reinforcement Learning. arXiv preprint arXiv:2405.16390. | Summary: The paper introduces Efficient Safe Policy Optimization (ESPO), an approach that enhances safe reinforcement learning by dynamically adjusting sample sizes based on gradient conflicts. ESPO optimizes reward and safety, improves convergence stability, and reduces sample complexity. The experiments show that the proposed approach outperforms existing methods, achieving both higher reward and improved safety with fewer samples and less training time.
Strengths: 1. The paper introduces Efficient Safe Policy Optimization (ESPO), which dynamically adjusts sample sizes based on observed conflicts between reward and safety gradients. The approach is novel.
2. The paper provides a comprehensive theoretical analysis of ESPO, including convergence rates and optimization stability.
3. The paper conducted experiments on the Safety-MuJoCo and Omnisafe benchmarks, where ESPO demonstrates significant improvements over existing primal-based and primal-dual-based methods.
Weaknesses: The paper experimented on SafetyReacher-v4, SafetyWalker2d-v4, and SafetyHopper/AntVelocity. Those safety tasks do not test generalization, such as those with Safety-Gym.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discussed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reply to Reviewer oXwv
> **Q1:** The paper experimented on SafetyReacher-v4, SafetyWalker2d-v4, and SafetyHopper/AntVelocity. Those safety tasks do not test generalization, such as those with safety gym.
**A1:** We appreciate the reviewer's insightful comments. To the best of our understanding of the reviewer's question, we clarify the relationship between the benchmarks ([Safety-MuJoCo](https://github.com/SafeRL-Lab/Safety-MuJoCo) and [Omnisafe](https://github.com/PKU-Alignment/omnisafe)) used in this paper and the prior popular benchmark (Safety-Gym). We have added this important clarification in the new version.
- [Safety-MuJoCo](https://github.com/SafeRL-Lab/Safety-MuJoCo) is a new environment benchmark adapted from [MuJoCo](https://github.com/google-deepmind/mujoco) and [Omnisafe](https://github.com/PKU-Alignment/omnisafe) is an algorithm benchmark (does not include new safe RL environments) that support the environments in [Safety-Gymnasium](https://github.com/PKU-Alignment/safety-gymnasium). Here, Safety-Gymnasium is a popular suite, an evolved version of the original Safety-Gym, developed to support the latest gym API and maintain compatibility with ongoing research needs, given that [Safety-Gym](https://github.com/openai/safety-gym) is no longer actively maintained.
- We use SafetyReacher-v4, SafetyWalker-v4, SafetyHumanoidStandup-v4 in [Safety-MuJoCo](https://github.com/SafeRL-Lab/Safety-MuJoCo). [Safety-Gymnasium](https://github.com/PKU-Alignment/safety-gymnasium) employs cost constraints such as velocity limits and rewards based on the robot’s speed. In contrast, [Safety-MuJoCo](https://github.com/SafeRL-Lab/Safety-MuJoCo) benchmarks enable more kinds of safety cost constraints, such as the robot’s health—monitoring falls and joint forces— which haven’t been included/emphasized in [Safety-Gymnasium](https://github.com/PKU-Alignment/safety-gymnasium). This provides a more comprehensive evaluation of the algorithms to generalize across more types/numbers of safety-critical constraints.
- Other environments (SafetyHopper/AntVelocity) that we used in this paper are exactly from [Safety-Gymnasium](https://github.com/PKU-Alignment/safety-gymnasium). Omnisafe mainly serves as an algorithm platform that is used to test safe RL baselines in environments from Safety-Gymnasium.
---
Rebuttal 2:
Comment: Apologies for the confusion.
Notice that the experiments conducted in the paper mainly use velocity as the safety constraint, which is a fixed threshold.
In the original safety gym environment (openai/safety-gym), the safety constraints are typically obstacles and hazards (e.g., a car reaching the goal without hitting obstacles/overlapping with hazards). The location of the obstacles/hazards varies in each run, thus testing generalization in some sense.
I felt the safety-velocity benchmarks are easier than the openai safety-gym original settings (commonly used in SafeRL community). However, I acknowledge this is a weakness but not a solid rejection reason.
---
Rebuttal Comment 2.1:
Comment: Thank you for engaging in the discussion and providing further clarification of the question. We really appreciate the reviewer's constructive suggestion to improve the quality of our work, making the generalization ability of ESPO more convincing and clearer. We provide **new experiments** as well as more discussions. The results show the generalization ability of the proposed method ESPO to not only varying unsafe factors, but also diverse types of safety constraints.
- **New experiments on SafetyCarGoal1-v0 and SafetyPointGoal1-v0: ESPO has generalization ability to handle varying unsafe factors**. As the reviewer suggested, we conduct experiments on two new benchmarks, SafetyCarGoal1-v0 and SafetyPointGoal1-v0: the car or the robot ball needs to navigate to the Goal’s location while circumventing Hazards whose locations vary in each run. Hazards bring risks that could result in costs when an agent enters unsafe regions [5].
We follow the same experimental settings as in this paper. The results report the number of training steps each method requires to reach the desired performance (reward) while satisfying the safety constraints, which demonstrates the superior sample efficiency of the proposed method ESPO.
- **The proposed algorithm ESPO also has the generalization ability to handle diverse types of safety constraints and combinations of them.** Notably, we would also like to highlight that we conducted experiments on safe RL with constraints not only on velocities (SafetyHopperVelocity-v1 and SafetyAntVelocity-v1), but also on others such as the robot’s control force energy (SafetyReacher-v4, SafetyWalker-v4, SafetyHumanoidStandup-v4, see Section 5.1). ESPO showed superior performance not only in sample efficiency, but also in reward performance and safety satisfaction.
**Table 1: Comparisons of the required sample steps for achieving the same desired reward while ensuring safety (cost limit: 15) on SafetyCarGoal1-v0 and SafetyPointGoal1-v0.**
| Task \ Algorithm | ESPO (Ours) | CUP [1] | PPOLag [2,3] | PCPO [4]|
|-----------------------------------|-------------|------|-------|---|
| SafetyCarGoal1-v0 (Reward:6.6) | 1.9 M | 4+ M | 2.4 M | 2.3 M|
| SafetyPointGoal1-v0 (Reward:3.7) | 0.7 M | 1.1 M | 4+ M | 1.7 M|
> [1] Yang, L., Ji, J., Dai, J., Zhang, L., Zhou, B., Li, P., ... & Pan, G. (2022). Constrained update projection approach to safe policy optimization. Advances in Neural Information Processing Systems, 35, 9111-9124.
[2] Ji, J., Zhou, J., Zhang, B., Dai, J., Pan, X., Sun, R., ... & Yang, Y. (2023). Omnisafe: An infrastructure for accelerating safe reinforcement learning research. arXiv preprint arXiv:2305.09304.
[3] Ray, A., Achiam, J., & Amodei, D. (2019). Benchmarking safe exploration in deep reinforcement learning. arXiv preprint arXiv:1910.01708, 7(1), 2.
[4] Yang, T. Y., Rosca, J., Narasimhan, K., & Ramadge, P. J. Projection-Based Constrained Policy Optimization. In International Conference on Learning Representations, 2020.
[5] Ji, J., Zhang, B., Zhou, J., Pan, X., Huang, W., Sun, R., ... & Yang, Y. (2023). Safety gymnasium: A unified safe reinforcement learning benchmark. Advances in Neural Information Processing Systems, 36.
---
Reply to Comment 2.1.1:
Comment: Dear Reviewer,
Thank you for your constructive comments. In response, we have deployed our method to Safety-Gym tasks to demonstrate its generalizability, including experiments on safe robot and car navigation in obstacle/hazard environments (SafetyCarGoal1-v0 and SafetyPointGoal1-v0 tasks).
If you have any further questions or comments, please don't hesitate to let us know. As the rebuttal deadline approaches, we hope our response adequately addresses your concerns.
We sincerely appreciate your time and expertise in reviewing our work.
With gratitude,
The Authors | Summary: This paper presents a novel algorithm for safe reinforcement learning, ESPO,
which independently collects gradient information for reward optimization and
constraint satisfaction. It then makes a dynamic choice about how to combine
these two gradients in order to find an optimal safe policy. In addition, the
proposed algorithm dynamically adjusts the number of samples used for each
gradient computation resulting in a more sample-efficient learning process than
existing techniques. Theoretical results show that ESPO achieves near-optimal
reward and constraint satisfaction with high probability. Additional results
show that ESPO spends more time in safe regions during training than prior work.
In experiments, ESPO is able to achieve comparable or better reward and safety
behavior to prior approaches while requiring less training time.
Strengths: - Safe RL is quite an important area, and advancements in efficient safe RL make
it more applicable to real-world scenarios.
- The proposed algorithm is quite intuitive and handles a tricky issue
(oscillation) appearing in other safe RL algorithms
- The algorithm is well-grounded in theory with Theorem 4.1. I also appreciate
4.2, showing the improved constraint satisfaction at training time compared to
existing work.
- The experimental results are promising. ESPO achieves comparable or better
results in terms of both reward performance and constraint satisfaction to
existing work, but uses fewer samples.
Weaknesses: - There is minimal discussion of the coefficients $x_t^r$ and $x_t^c$ even
though these seem like they should be critical to the algorithm's performance.
- It seems that the hyperparameters of ESPO require careful tuning which likely
inhibits the deployment of this algorithm in practice. (I'm looking at Tables
5 and 6 for this claim which show that the hyperparameters are quite different
for different benchmarks.)
- In some cases either $h^-$ or $h^+$ is infinite, meaning the algorithm
never performs a pure constraint satisfaction step or a pure reward
optimization step.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How are the coefficients $x_t^r$ and $x_t^c$ computed? What impact do they
have on performance?
- How are the other hyperparameters ($\chi^+$, $\chi^-$, $h^+$, $h^-$) chosen?
Does it require careful tuning for each environment?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately discuss limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Reply to Reviewer qn1v
> **Q1:** More discussion of the critical coefficients $x_t^r$ and $x_t^c$ of the algorithm's performance. How are the coefficients $x_t^r$ and $x_t^c$ computed?
**A1:** Thanks for raising this point! We conduct new ablation experiments and have added a detailed discussion for $x_t^r$ and $x_t^c$ in the algorithm design section with an intuitive explanation. Please refer to **Q2** in the general response.
> **Q2:** Do the hyperparameters of ESPO require careful tuning? (Tables 5 and 6 use different hyperparameters for different benchmarks.)
**A2:** Thanks for raising this point; we appreciate the opportunity to clarify it.
- Tuning hyperparameters for different tasks is not unique to ESPO, but is widely considered in designing safe RL algorithms. For instance, PCPO [1] requires fine-tuning the projection distance $D$ within a primal-dual safe RL framework. CUP [2] involves fine-tuning the safety hyperparameter $v$, and FOCOPS [3] necessitates fine-tuning the safety temperature $\lambda$. For practical considerations, hyperparameter tuning is somewhat necessary for heavily distinct tasks, while it is often a one-time process for each new kind of environment.
- Table 5 and Table 6 show the choices of the hyperparameters for different tasks: sample size adjustment parameters ($\zeta^+, \zeta^-$ defined in Equations (6-7)), safety constraint threshold (limit $b$), and soft region threshold parameters ($h^+, h^-$). **We indeed provide an ablation study in Figure 4 in Appendix C, lines 688-709**, over the benchmark SafetyWalker2d. Specifically, Figures 4(a)-(c) show the results of ESPO when the safety constraint threshold (limit $b$) is different (30 or 40) with comparisons to the baseline CRPO; ESPO exhibits similar and stable reward performance as the limit $b$ varies. In addition, Figures 4(d)-(f) show the ablation w.r.t. the sample size adjustment parameters ($\zeta^+, \zeta^-$) and demonstrate the stability of the performance under different ($\zeta^+, \zeta^-$) settings. We acknowledge the need for reasonable hyperparameter fine-tuning and are actively working on adaptive techniques in future work.
- > [1] Yang, T., et al. (2020). Projection-Based Constrained Policy Optimization. ICLR 2020.
> [2] Yang, L., et al. (2022). Constrained update projection approach to safe policy optimization. NeurIPS 2022.
> [3] Zhang, Y., et al. (2020). First order constrained optimization in policy space. NeurIPS 2020.
> **Q3:** In some cases either $h^-$ or $h^+$ is infinite, meaning the algorithm never performs a pure constraint satisfaction step or a pure reward optimization step. How are they chosen and do they require tuning for each environment?
**A3:** Our ESPO algorithm framework intentionally allows for such choices. Different tasks have different preferences for optimizing rewards versus prioritizing safety constraints. So generally, we need to choose/tune $h^+, h^-$ to match both the goal of a given task and the efficiency of the learning process, with the intuitions below:
- We can choose $h^- = -\infty$ (i.e., always consider the safety constraint objective even if constraints are already satisfied). Such a choice potentially produces less oscillation around the safety constraint threshold, since the reward objective and safety constraint objective are always updated together. This choice fits tasks where the agent must remain safe during the learning process, not only at the end of it, and the reduced oscillation can accelerate learning.
- We can choose $h^+ = \infty$ (i.e., always consider the reward objective even if constraints are violated). This choice is suitable for tasks that place a lower priority on satisfying the safety constraint, or where reducing the oscillation of the optimization process can accelerate learning.
- When $h^-, h^+$ are finite (the common choice): this creates a soft region $[h^- +b, h^+ +b]$. When the safety cost objective falls into this soft region around the required limit $b$, both the reward objective and the safety cost objective are considered in the optimization update rule, which reduces the oscillation around the safety limit to some extent. With such finite choices, the algorithm retains the possibility of optimizing only the reward or only the safety cost objective when the safety objective is comfortably satisfied or largely violated, respectively.
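For concreteness, the soft-region branching described above can be sketched as follows (an illustrative Python sketch with hypothetical names and sign conventions, not the paper's exact implementation; we treat `g_reward` as an ascent direction on reward and `g_cost` as a descent direction on cost):

```python
def select_gradient(cost, b, h_minus, h_plus, g_reward, g_cost, x_r=0.5, x_c=0.5):
    """Pick the update direction based on where the safety cost falls
    relative to the soft region [b + h_minus, b + h_plus]."""
    if cost < b + h_minus:
        # well below the limit: pure reward optimization step
        return g_reward
    if cost > b + h_plus:
        # badly violated: pure constraint satisfaction step
        return g_cost
    # inside the soft region: weighted combination of both objectives
    return x_r * g_reward + x_c * g_cost
```

Note how $h^- = -\infty$ disables the first branch (never a pure reward step) and $h^+ = \infty$ disables the second (never a pure constraint step), matching the intuitions above.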
> **Q4:** How are the other hyperparameters ($X^+$, $X^-$, $h^-$, $h^+$) chosen? Does it require careful tuning for each environment?
**A4:** The reviewer raises an important point about hyperparameter selection. To the best of the authors' understanding, the reviewer is asking how to choose the sample size adjustment parameters ($\zeta^+, \zeta^-$ defined in Equations (6-7)) and the soft region threshold parameters ($h^+, h^-$).
- The intuition for choosing $h^-$, $h^+$ is provided in answer **A3** above. Note that different tasks have different preferences for optimizing rewards versus prioritizing safety constraints, so generally we need to choose/tune $h^+, h^-$ to match both the goal of a given task and the efficiency of the learning process.
- The sample size adjustment parameters $\zeta^+$ and $\zeta^-$ determine the final sample size $X(1+\zeta^+)$ or $X(1+\zeta^-)$ in different scenarios (whether a gradient conflict occurs). As such, the final sample size is typically set close to the original sample size $X$. **We provide an ablation study w.r.t. $\zeta^+, \zeta^-$ in Figures 4(d)-(f) in Appendix C, lines 688-709**, over the benchmark SafetyWalker2d. Figures 4(d)-(f) demonstrate the stability of the performance under different ($\zeta^+, \zeta^-$) settings. This verifies that ESPO is not particularly sensitive to $\zeta^+, \zeta^-$ and only needs reasonable tuning.
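As a rough sketch of the mechanism (illustrative only; the exact conflict criterion and the roles of $\zeta^+, \zeta^-$ are defined in Equations (6-7) of the paper, and the default values below are hypothetical):

```python
import numpy as np

def gradients_conflict(g_reward, g_cost):
    # One common conflict test: a negative inner product between the two
    # gradient directions. (Illustrative; not necessarily the paper's exact test.)
    return float(np.dot(g_reward, g_cost)) < 0.0

def adjust_sample_size(X, conflict, zeta_plus=0.2, zeta_minus=-0.2):
    # Enlarge the batch when the objectives conflict (noisier combined
    # gradient needs more samples), shrink it otherwise.
    return int(X * (1 + (zeta_plus if conflict else zeta_minus)))
```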
---
Rebuttal Comment 1.1:
Comment: Thank you for your response and I apologize for the error in my question. I did indeed mean to ask about $\zeta^+$ and $\zeta^-$ rather than $\chi^+$ and $\chi^-$. I will keep my score in favor of accepting the paper.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your follow-up message and for clarifying the focus of your question. We are grateful for your favorable scoring towards our paper.
Your insights are valuable to our work. Thank you once again for your thorough review and consideration.
Best regards,
The Authors | Rebuttal 1:
Rebuttal: # General Response:
We thank the reviewers for their careful reading of the paper and their insightful and valuable feedback. Here, we provide **new experimental results** and discussions to answer some common questions raised by reviewers.
**We attached a pdf file to show the required new experimental results/updated figures** for all reviewers, listed below:
* Figure 1: Integrate our algorithmic techniques in primal-dual based safe RL.
* Figure 2: Integrate our algorithmic techniques in primal based safe RL.
* Figure 3: Integrate our algorithmic techniques in a more complex class of problems: safe multi-objective RL.
* Figure 4: Conduct new ablation experiments regarding hyperparameters of gradient update rules or learning rates.
* Figures 5 & 6: Update figures to use the number of sample steps as the x-axis rather than training epochs.
**Q1: Generalization of our key technical module in the algorithm (sample manipulation) to other safe RL algorithms or problems (Q1 of reviewer qrLu)/ Broader evaluation of the proposed method ESPO (Q1 of reviewer by8k).**
**A1:** We appreciate the reviewers' acknowledgment of the potential for our sample manipulation approach as a general method. Following the reviewers' suggestions, we conduct **three new sets of experiments** to demonstrate the generalization and advantages of our sample manipulation module (the key technical module in our algorithm) by integrating them with representative existing safe RL algorithms:
- **1) Integrating with primal-dual method --- [TRPO Lagrangian](https://cdn.openai.com/safexp-short.pdf) for standard safe RL problems:** see Figure 1 in the general response file.
- **2) Integrating with primal method --- [CRPO](https://proceedings.mlr.press/v139/xu21a/xu21a.pdf) for standard safe RL problems:** see Figure 2 in the general response file.
- **3) Adapting ESPO to a class of more complex safe RL problems --- multi-objective safe RL problems:** The results are shown in Figure 3 in the general response file. We adapt ESPO to the multi-objective safe RL problem (termed as ECRMOPO) and evaluate it on a [safe multi-objective RL benchmark](https://arxiv.org/pdf/2405.16390), with comparison to the SOTA baseline [CRMOPO](https://arxiv.org/pdf/2405.16390).
Summing up, the results from these experiments affirm that our sample manipulation method can improve the sample efficiency of diverse safe RL problems and also extensive safe RL algorithms (including both primal-based and primal-dual-based methods). These results also serve as a broader and more comprehensive evaluation of our proposed sample manipulation approach. We will emphasize the broad power of the sample manipulation design in the revised manuscript.
**Q2: Questions for choosing and tuning hyperparameters: $x_t^r$ and $x_t^c$ of the optimization update rule.**
**A2:** We have added a detailed discussion for $x_t^r$ and $x_t^c$ in the algorithm design section and provide an intuitive explanation here:
- **How to choose $x_t^r, x_t^c$.** $x_t^r$ (resp. $x_t^c$) represents the weight of the reward gradient (resp. the safety cost gradient) in the final gradient $w_{t+1}$. So $x_t^r + x_t^c=1$ all the time and for instance, $x_t^r=1$ (resp. $x_t^c=1$) means we only use reward gradient (resp. safety cost gradient), and $x_t^r = x_t^c = 0.5$ means reward and safety cost objectives are considered equally important in the overall gradient. In general, $x_t^r$ and $x_t^c$ are hyperparameters in the framework that we can either pre-set as a fixed value or adaptively adjust during the running process as needed. For instance, we can set $x_t^r$ to be larger if we care more about the reward performance; otherwise, we set $x_t^c$ to be larger to enhance safety. Throughout our experiments, we just set $x_t^r = x_t^c = 0.5$ for simplicity, which also showed superior performance than prior arts (see Section 5).
- **As the reviewers suggested, we add an ablation study for the hyperparameters $x_t^r$ and $x_t^c$ to evaluate their effect on performance, shown in Figures 4(a)-(c) in the general response pdf file.** To test the sensitivity of the performance w.r.t. the hyperparameters $x_t^r$ and $x_t^c$, we add experiments using two other combinations of $x_t^r$ and $x_t^c$ ($x_t^r=0.4, x_t^c=0.6$ or $x_t^r=0.6, x_t^c=0.4$). The results show that our proposed ESPO performs well under these different $x_t^r, x_t^c$ settings and sometimes even outperforms $x_t^r = x_t^c = 0.5$ (the setting in our paper).
> **Q3: Ablation studies for other parameters, such as learning rates and sample size thresholds.**
- **A3:** We appreciate the reviewers' valuable comments. We indeed have ablation studies in Appendix C (due to the space limit of the main text), and we have **conducted more ablations on learning rates and gradient weights**; see Figure 4 in the general response file.
- **During rebuttal, we add ablation studies on key hyperparameters --- learning rates ($l_r$) and the weights of gradients ($x_t^r, x_t^c$), shown in Figure 4 in the general response file.** The results reveal that ESPO is robust to variations in learning rates and performs well under reasonably varying learning rates and gradient weights.
- **An ablation study for the sample size thresholds ($\zeta^+,\zeta^-$) is provided in Figures 4(d)-(f) in Appendix C, lines 688-709 of the paper**, over the benchmark SafetyWalker2d. Figures 4(d)-(f) demonstrate the stability of the performance under different ($\zeta^+, \zeta^-$) settings. This verifies that ESPO is not particularly sensitive to $\zeta^+, \zeta^-$ and only needs reasonable tuning.
Pdf: /pdf/82a468d0ee48c753b86b7df06ef44d4995908701.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Label is Worth A Thousand Images in Dataset Distillation | Accept (poster) | Summary: The paper studies the effect of synthetic image soft labels on training performance, and show that the success of DD methods is attributed to the use of informative labels. The authors showed that the structured information in soft labels is important, and there is a tradeoff between knowledge and data. Generally, the paper conducted extensive ablations to analyze the synthetic soft labels.
Strengths: 1. The paper studies the interesting and novel aspect of the role of synthetic labels in DD.
2. Thorough experiments are done and the comparisons are fair and meaningful.
3. The experimental results and analysis provide important and interesting insights that are beneficial for the DD community.
Weaknesses: I do not observe major weaknesses of the paper. However, I do have some minor concerns that I would like to discuss with the authors:
1. Observing Figure 4, it is interesting that when swapping the top-1 label with the last one, the relative performance can still be even preserved to relatively 20% and even 50% (IPC=1). The top-1 label should be the correct prediction, and swapping makes all images wrongly labeled. I wonder why there is still 20%~50% relative performance when all the training data are wrongly labeled.
2. For the experiments corresponding to Figure 7 (treatment 2), why no re-normalization? This results in probabilities that do not sum up to 1 and may cause unexpected consequences during training.
3. It seems that there are two types of labels used for analysis. One type is the directly downloaded synthetic dataset (image-label pairs) in section 4. Another type is generating soft labels via ensembling in section 5. Does the second guarantee that the ensembled soft labels are correct (same one-hot encodings as the original pairs)? Also, using early-stopping experts has the drawback that it does not provide correct label information (e.g. Figure 3). To what extent (how early) may the experts be useful for label generation?
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: Yes, limitations are adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer’s constructive comments and are glad they found our work interesting. We respond to their specific comments below:
__Response to Weakness__
> *Figure 4, regarding performance gain when data are wrongly labeled*
The argmax certainly contains a lot of information. However, our intuition for why performance survives replacing such an important logit with a wrong one lies in the fact that not all useful information is stored in the top-1 prediction. For example, for IPC=1, the labels are generated with an expert who has only trained for 7 epochs on the real data. As shown in Figure 3, panel 2, the argmax at epoch 7 only has a 2% probability. In fact, the top 5 predictions all share roughly the same likelihood. These top logits contain semantic information such as class feature similarities (see Figure 10 in the Appendix). Therefore, even with the top-1 label being incorrect, the rest of the logits still contain useful and correct information for the model to learn from.
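For clarity, the swap manipulation discussed above can be sketched as follows (an illustrative sketch, not the paper's exact code): swapping only the top-1 probability with the smallest one makes the argmax label incorrect while leaving the rest of the ranking structure (the informative top-k tail) intact.

```python
import numpy as np

def swap_top1_with_bottom(soft_label):
    """Swap the largest probability with the smallest one in a soft label,
    making the argmax class wrong but preserving the remaining structure."""
    p = np.asarray(soft_label, dtype=float).copy()
    i, j = int(np.argmax(p)), int(np.argmin(p))
    p[i], p[j] = p[j], p[i]
    return p
```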
> *For the experiments corresponding to Figure 7 (treatment 2), why no re-normalization?*
It is a perfectly valid concern regarding re-normalization. During our experiment design, we carefully weighed the pros and cons of using normalization for label treatment. The rationale behind not using re-normalization is two-fold.
First, as shown in Figure 2 (right), label entropy significantly impacts final model performance. By using re-normalization, we would alter the label entropy of the treatment group. Second, for each training input, re-normalization only scales the gradient by a constant factor. Therefore, we believe that whether we perform re-normalization should not matter.
To eliminate the possibility that using un-normalized logits causes downstream effects, we re-ran the treatment 2 group with re-normalization and compared it against the original results (no re-normalization). The results, shown in the attached PDF, confirm that re-normalization does not make a statistically significant difference to the experimental outcomes.
> *It seems that there are two types of labels used for analysis. One type is directly downloaded synthetic dataset (image-label pairs) in section 4. Another type is generating soft-labels via ensembling in section 5. Does the second guarantee that the ensembled soft-labels are correct (same one-hot encodings as the original pairs)?*
In Section 3, for the SOTA methods we compare our soft label baselines to, we obtain soft labels directly from the synthetic dataset (i.e., downloaded). In our soft label baseline, the images are randomly sampled from the training data, and labels are generated by experts with epoch tuning.
In Section 5, we further demonstrate that ensembling can bring additional improvements to the soft label baseline. We do not impose the restriction that ensemble soft labels are correct. In other words, no restrictions are imposed to ensure that the argmax of the ensemble labels corresponds to the “correct” class for that particular image.
To our understanding, correctness is less important in the dataset distillation setting, which we discuss further below.
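As a concrete illustration of the ensembling described above (a hypothetical sketch; function names and the temperature parameter are ours), the soft label is simply the average of the expert checkpoints' softmax outputs, with no constraint tying the argmax to the ground-truth class:

```python
import numpy as np

def softmax(z, temperature=1.0):
    z = np.asarray(z, dtype=float) / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

def ensemble_soft_labels(expert_logits_list, temperature=1.0):
    """Average the softmax outputs of several expert checkpoints into one
    soft label; the resulting argmax is not forced to be 'correct'."""
    probs = [softmax(z, temperature) for z in expert_logits_list]
    return np.mean(probs, axis=0)
```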
> *Also, using early-stopping experts have drawbacks that it does not provide correct label information (e.g. Figure 3). To what extent (how early) may the experts be useful for label generation?*
You have pointed out a very counterintuitive phenomenon! Indeed, early experts generate labels that are “incorrect.” Here, we can loosely define “incorrect labels” as those whose argmax does not lead to a correct classification. In Figure 6 (right), we used experts trained until various stages (from early to late) as the labelers for different data budgets (i.e., different IPC values). We observe that on TinyImageNet, experts as early as Epoch 11 could be useful for label generation under small data budgets (IPC=1). As we increase the data budgets, it becomes more optimal to use later experts.
Our intuition regarding why incorrect labels are more optimal, especially under low data budget regimes, is that when the student model is learning with a limited amount of data, it benefits from mimicking behaviors from a less well-trained teacher, learning only simple features and functions. For example, the student network may learn features/functions that can distinguish between broad categories like dogs (corresponding to many classes in TinyImageNet) and vehicles (also corresponding to many classes in TinyImageNet). Such a crude classifier is the best the model can achieve given the data limit. In other words, with so little data, the student network learns a simpler function because it does not have enough data to learn a complex decision boundary to distinguish between a golden retriever and a German Shepherd. In these “data-limited regimes,” a less well-trained teacher provides better guidance for the student model.
---
Rebuttal Comment 1.1:
Title: Thank you for the rebuttal
Comment: The rebuttal has addressed my concerns. I believe the paper is insightful and meaningful for the DD community. I am keeping my score and vote for acceptance. | Summary: This paper investigates the importance of soft labels for dataset distillation methods, conducting detailed ablation experiments on the role of labels under various settings. It deeply explores the impact of labels on learning, providing an in-depth analysis and study of the intrinsic properties of labels. The work also provides empirical scaling laws that characterize the effectiveness of soft labels as a function of images-per-class in the distilled dataset and establishes an empirical Pareto frontier for data-efficient learning.
Strengths: 1. This paper clearly demonstrates the crucial role that soft labels play in the effectiveness of data distillation methods. This aspect has never been carefully studied in previous dataset distillation work; it was usually considered an additional trick and did not receive much attention. This work points out new directions for data-efficient learning.
2. The designed experiments are interesting and comprehensive, presenting what constitutes good soft labels, the importance of knowledge (soft labels) in learning, and how to effectively obtain higher-quality soft labels.
Weaknesses: 1. According to Table 7, the expert model (epoch) used to produce soft labels in the soft label baseline appears to be carefully selected. Does the choice of epoch introduce additional costs? Additionally, do the other mentioned dataset distillation methods also use the best soft labels from these epochs to ensure a fair comparison?
2. MTT and SRe2L are not the SOTA methods currently. I would like to know if more advanced methods like DATM [1] and G-VBSM [2] still heavily rely on soft labels and whether their performance still can not surpass the soft label baseline.
Minor: line 185 In Figure 3 left.
[1] Towards lossless dataset distillation via difficulty-aligned trajectory matching. ICLR 2024
[2] Generalized large-scale data condensation via various backbone and statistical matching. CVPR 2024
Technical Quality: 3
Clarity: 4
Questions for Authors: Please refer to weaknesses.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for the reviewer’s constructive comments and thoughtful insights. We respond to their specific comments below:
__Response to Weakness__
> *According to Table 7, the expert model (epoch) used to produce soft labels in the soft label baseline appears to be carefully selected. Does the choice of epoch introduce additional costs?*
Yes, tuning the expert epoch is used for our soft label baseline, and it does come at a cost. Other SOTA methods spend the majority of compute on generating images. For example, Ra-BPTT requires more than 100 GPU hours to generate distilled images for the results shown in Figure 1. In contrast, the soft label baseline simply uses randomly selected training images, incurring no costs on image generation or selection.
For methods like FrePo and SRe2L, generating the best images also requires extensive hyper-parameter tuning in their distillation pipelines. Therefore, we do not believe the additional cost incurred for epoch tuning will be the bottleneck.
Table 7 might give the impression that the epoch needs to be “carefully” selected; however, we report these hyperparameters mainly to ensure easy reproducibility. In Figure 6 (right), we showcase how to choose the “optimal expert epoch” by establishing a Pareto frontier. From this figure, our understanding is that we can establish a robust Pareto front with only five expert epochs, covering the optimal expert for IPC values ranging from 1 to 200.
> *Additionally, do the other mentioned dataset distillation methods also use the best soft labels from these epochs to ensure a fair comparison?*
Thank you for the question! We address this concern in the global response with new plots included in the PDF attachment.
> *MTT and SRe2L are not the SOTA methods currently. I would like to know if more advanced methods like DATM [1] and G-VBSM [2] still heavily rely on soft labels and whether their performance still can not surpass the soft label baseline.*
The field has been advancing rapidly. DATM introduces innovation to MTT based on “difficulty-level” trajectory matching. They claim that by using late trajectories, they improve performance for larger synthetic sets (i.e., large IPC) and achieve lossless distillation (i.e., matching distilled data to the quality of training data). Section 4.3 in DATM shows that they also heavily rely on soft labels. For TinyImageNet, they achieve 39.7% “lossless” test performance with IPC=50. Our soft label baseline achieves 35.6%. While the soft label baseline underperforms compared to DATM, the comparison still reflects the great importance of labels.
Similarly, reading Section 4.1 in G-VBSM, our understanding is that G-VBSM also heavily relies on soft labels. Several improvements they leverage, including ensemble techniques, align with our observations. Since G-VBSM builds on SRe2L and the soft label baseline performs on par with SRe2L, G-VBSM is able to achieve further improvements.
We do not aim to outperform all SOTA methods using such a simple baseline; instead, we hope to bring some understanding of what information is being distilled that leads to data-efficient learning, and to correct the misconception that labels are less important than images in dataset distillation.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply
Comment: I have read this paper and the author rebuttal (regarding all the reviewers' questions) thoroughly. I appreciate the contribution of this work and keep my score. | Summary: This paper introduces soft probabilistic labels to the dataset distillation task. Specifically, it finds that the labels should consider structured information and perform unequally. Experiments on diverse datasets demonstrate its effectiveness.
Strengths: 1) This paper proposes the introduction of soft labeling in dataset distillation and provides an interesting analysis of the intrinsic properties of data distillation, rather than focusing on improving the base module of specific methods.
2) The method is well-supported, and the results are reliable.
Weaknesses: 1) The motivation needs to be stated more clearly. Why are label-level methods regarded as superior to image-level methods?
2) In the introduction, the author introduces several techniques such as 'expert' models, Pareto frontier, and knowledge-data scaling laws but lacks detailed explanation.
3) Section 3.2 requires reorganization to enhance readability.
4) The structure information needs to be expressed more clearly and intuitively.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1) In the Method section, could different soft-labeling strategies (excluding cutmix) influence the entire pipeline?
2) Does early stopping result in varied performances across different baselines and datasets? How do the authors address this issue?
3) In the experiment, the author uses the swap test to validate the structure information and claims “Top labels contain structured information and the non-top contain unstructured noise”. Why not just remove the unstructured noise?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: 1) The writing requires improvement as the author attempts to cover too much information without adequately establishing connections and justifying the necessity of the key technologies.
2) The tables and figures need reorganization to enhance clarity. The reviewer currently struggles to understand their intended messages.
3) Some typos need to be corrected, such as those in the representation of Figure 4.
4) The limitations mentioned in the Conclusion are not clearly stated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback! We respond to the specific comments below:
__Response to Weakness__
> *The motivation needs to be stated more clearly. Why are label-level methods regarded as superior to image-level methods?*
To our best understanding, the dataset distillation community does not consider label-level methods superior to image-level methods. In fact, it appears that more work has been dedicated to image-level methods, often treating labels as a minor additional “trick” to boost performance.
The main motivation behind our study is to emphasize the importance of soft probabilistic labels in the dataset distillation task. We aim to correct this misconception and argue that labels are, in fact, quite central to data distillation. Overall, we do not believe that label-level methods are necessarily superior to image-level methods. We hope our work inspires future research to explore methods that effectively utilize both images and labels to achieve the best outcomes.
> *In the introduction, the author introduces several techniques such as 'expert' models, Pareto frontier, and knowledge-data scaling laws but lacks detailed explanation.*
Thank you for the constructive feedback! Our introduction will indeed be clearer with a more detailed explanation of key terms when we first introduce them. We will incorporate your suggestions in the camera-ready version for better readability. We have also included a detailed explanation in an official comment (Glossary of Key Terms).
> *The structure information needs to be expressed more clearly and intuitively.*
We define structured information as the softmax values in logits that contain semantic information, such as class or feature similarities. These are also referred to as “dark knowledge” [1] or “supervisory signals” [2] by the knowledge distillation community. For example, a soft label for a goldfish image may assign a 70% likelihood to the “goldfish” class and a 10% likelihood to the “orange” class. The 10% likelihood assigned to “orange” is not noise; it indicates that the two classes are similar, likely due to color. We consider these logits to contain “structured noise.” We expect this pattern (goldfish-orange) to appear consistently in many goldfish images.
Conversely, if the model randomly assigns a 2% likelihood to the “pizza” class due to spurious features or other noise, we do not expect all goldfish images to have a consistent pattern of being slightly identified as pizza. We consider this as “unstructured noise.”
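The distinction can be made concrete with a toy soft label; all class names and logit values below are invented for illustration, not taken from the paper's experiments:

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into a probability distribution; a higher
    temperature spreads mass toward the non-top classes."""
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical 4-class problem: goldfish, orange, pizza, truck.
classes = ["goldfish", "orange", "pizza", "truck"]

# Structured pattern: the expert consistently gives "orange" a meaningful
# secondary logit (colour similarity), while "pizza" only receives a tiny,
# image-specific amount -- the unstructured noise.
logits = [4.0, 2.0, 0.1, 0.0]
soft_label = softmax(logits)
top_class = classes[soft_label.index(max(soft_label))]
```

The argmax class is unchanged by the secondary mass; what the soft label adds over a one-hot label is exactly that secondary structure.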
__Response to Questions__
> *In the Method section, could different soft-labeling strategies (excluding cutmix) influence the entire pipeline?*
Yes, different labeling strategies can impact the distilled outcome. In this paper, we employ a simple labeling strategy, which involves using experts trained at different epochs as the labelers to generate labels. Specifically, our soft label baseline uses this expert-based soft labeling strategy on randomly selected training images. We demonstrate that this simple baseline achieves performance comparable to state-of-the-art distillation results.
In Section 5, we explore a few alternative labeling strategies and show how these strategies impact the distillation pipeline and outcomes, still within the setting where images are randomly selected from the training data. We hope that future work will further explore and develop labeling strategies to improve the dataset distillation pipeline.
> *Does early stopping result in varied performances across different baselines and datasets? How do the authors address this issue?*
Based on the dataset and the data budget (IPC), the optimal early stopping epoch varies, as shown in Figure 6 (right). In this figure, we vary both the expert epoch and the data budget. For larger data budgets (IPCs), it is more effective to use a later epoch to generate labels. From this observation, we understand that the optimal information stored in soft labels can differ significantly depending on the data budget.
We believe it would be an interesting follow-up for future work to study how one can predict the optimal epoch given a data budget for different datasets. In this work, we aim to shed light on the fact that early stopping plays an important role in determining the quality of labels. For instance, in Figure 2 (left), we demonstrate this effect on ImageNet-1K.
> *In the experiment, the author uses the swap test to validate the structure information and claims “Top labels contain structured information and the non-top contain unstructured noise”. Why not just remove the unstructured noise?*
Indeed, we completely agree that this is a valid suggestion. By removing unstructured noise, we might further improve the performance of our soft label baseline. In our analysis, we also identified additional modifications to our soft label baseline for boosting performance, such as ensembling (Section 5.1) and temperature smoothing (Section 4.2). Practitioners could benefit from these additional enhancements, potentially including the removal of unstructured noise.
In our reported soft label baseline, we did not include any of these add-ons because we aimed to keep our baseline as simple as possible. Our goal was to demonstrate that soft labels are crucial for dataset distillation.
__Response to Limitations__
> The limitations mentioned in the Conclusion are not clearly stated.
Thank you for pointing out that our work could benefit from a more in-depth exploration of limitations. We appreciate this valuable feedback. Here is a brief version of what we intend to add to our conclusion to further address these limitations:
> Writing and Presentation
We will carefully revise writing and presentation for the camera ready version.
[1] Distilling the Knowledge in a Neural Network https://arxiv.org/abs/1503.02531
[2] Rethinking soft labels for knowledge distillation: A Bias-Variance Tradeoff Perspective https://arxiv.org/pdf/2102.00650
---
Rebuttal 2:
Title: Glossary of Key Terms
Comment: __Expert Models__: Also referred to as teacher models, these are models that have been trained on the original training data and are used to generate soft labels.
__Pareto Frontier__: In the context of dataset distillation, the objective is to optimize for model performance and data budgets. The Pareto frontier represents the set of points where each point corresponds to the best model accuracy achievable for a given data budget. One cannot achieve better model performance with the same data budget.
__Knowledge-Data Scaling Laws__: Data scaling laws describe how the performance of a model (e.g., measured by test accuracy) improves as a function of dataset size. We propose knowledge-data scaling laws to describe how the use of expert knowledge (i.e., soft labels) can shift the standard scaling law.
---
Rebuttal 3:
Title: Limitations
Comment: __Soft Label Baseline__: We have highlighted the importance of soft labels using a simple soft label baseline. We leave it for future work to explore the best ways to optimize both labels and images during distillation, and to study how each can impact student learning in different ways. Additionally, future work can investigate what other information, beyond expert knowledge, can be distilled to achieve data compression.
__Label Generation Strategy__: We have explored label generation strategies based on existing methodologies, including using pretrained experts and Ra-BPTT. We believe future research could further explore optimal ways to generate more informative labels.
__Data Modality__: Similar to most dataset distillation work, we have primarily focused on image classification tasks. While we believe our conclusions can generalize to other data modalities, a limitation of this work is the diversity of tasks explored. | Summary: This paper analyses the role of soft labels used in dataset distillation. Experiments with different ablation studies show that the performance of soft labels based data distillation approaches is primarily attributed to the use of soft labels. Secondly, the authors study the various types of soft labels and their effect on model performance. Additionally an empirical scaling law is provided to characterize the relation between effectiveness of soft labels and image per class in distilled dataset.
Strengths: The paper is written clearly with well-presented motivation and is easy to follow
Extensive analysis and ablations are presented providing a better understanding of role of labels in dataset distillation
The paper focuses on a largely overlooked aspect of dataset distillation methods: the degree of contribution of soft labels.
Weaknesses: The paper could benefit from a theoretical analysis of why soft labels are so effective. Additionally, the generalizability of the findings to data distillation in other modalities could be insightful.
Minor comments:
1. The description in line 222 appears inconsistent with results in figure 4. IPC=1 appears more robust to swapping of top labels compared to IPC=10
2. A few writing and grammar errors exist in related work, line numbers 87-90. Also, 'generation' should be 'general' in line 167
3. In Figure 2, it might be better to use the same dataset to highlight the dependence on expert accuracy and label entropy.
Technical Quality: 3
Clarity: 3
Questions for Authors: I am concerned about the use of experts at tuned epoch for comparison of soft label baseline with previous methods. Are the expert epochs also tuned for previous approaches?
Can the authors provide insight as to why structured information is more beneficial in soft labels in the case of dataset distillation while the opposite is true for knowledge distillation as mentioned in related work?
It is unclear how past approaches in figure 1 right (which as I understood to originally use soft labels) were adapted to hard labels. Is it done by performing argmax on the soft labels?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the constructive feedback! We are glad that you found our work interesting. We respond to the specific comments below:
__Response to weakness__
> *The paper could benefit from a theoretical analysis of why soft labels are effective. The generalizability of the findings to data distillation in other modalities could be insightful.*
Soft labels are effective because, compared to their hard label counterparts, they contain a probability for each class for a given image. These logits convey not only the class of the given image but also other information, such as class similarities. While it can be challenging to build a theoretical framework for a large ConvNet on an image classification task, we believe one can demonstrate the effectiveness for smaller convex optimization problems, such as simple regression. Specifically, one can show how providing logits reduces the sample complexity required to learn the same classifier. We believe formalizing this from a learning theory perspective is an exciting avenue for future work.
To date, almost all dataset distillation work has focused exclusively on image classification tasks. To our knowledge, the only work extending this line of research beyond image classification is [1], where the authors explored dataset distillation for vision-language models. We believe that the conclusions drawn from our work, particularly the importance of labels, should apply to other vision tasks and classification tasks in other modalities. As future dataset distillation research begins to explore other modalities, we hope the soft label baseline we established will serve as a strong starting point for future work. We acknowledge that our work could benefit from additional results in other modalities, and we are currently considering including language modeling tasks for the camera-ready version.
> *The description in line 222 appears inconsistent with results in figure 4. IPC=1 appears more robust to swapping of top labels compared to IPC=10*
Thank you for pointing out the mistake. Indeed, the description in line 222 is not entirely accurate. If we only consider $i=1$ in the "Swap i-th label" test, IPC=1 is more robust to swapping top labels compared to IPC=10. However, when we consider $2≤i≤32$ (swapping top-k labels for $k>1$), IPC=1 relies more on top-k labels than IPC=10, and thus is less robust to swapping top-k (k>1) labels than IPC=10. We will ensure this clarification is made in the camera-ready version.
__Response to questions__
> *I am concerned about the use of experts at tuned epoch for comparison of soft label baseline with previous methods. Are the expert epochs also tuned for previous approaches?*
Thank you for the question! We address this concern in the global response with new plots included in the PDF attachment.
> *Can the authors provide insight as to why structured information is more beneficial in soft labels in the case of dataset distillation while the opposite is true for knowledge distillation as mentioned in related work?*
You have pointed out a very interesting contradiction! Our understanding is that the key difference between data distillation and knowledge distillation lies in data budgets. In knowledge distillation, the student network has full access to the entire training dataset, along with soft labels generated by the expert (teacher). Conversely, in the data distillation setting, we impose a strict limitation on the size of the data. As a result, the student network must rely more heavily on the labels generated by the teacher. Therefore, it relies more on structured information in data-poor settings.
> It is unclear how past approaches in figure 1 right (which as I understood to originally use soft labels) were adapted to hard labels. Is it done by performing argmax on the soft labels?
For SRe2L, MTT, and FRePo, the distilled images are initialized with real images from the training data. To obtain hard labels, we use the label of the "initialization" image. Specifically, in MTT (TESLA) and FRePo, the original works include comparisons of hard versus soft labels, and both chose to use hard labels in the same way (i.e., hard labels based on the initialization image). We maintained the choice made by the original authors for consistency and reproducibility. However, we suspect that the outcome would be the same if we used the argmax of the soft labels to obtain hard labels, as each distilled image is supposed to be representative of its respective class.
For Ra-BPTT, where images are initialized with random Gaussian noise, we use the argmax of the soft labels as the hard labels.
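The argmax conversion described for Ra-BPTT amounts to the following trivial sketch (the probability values are illustrative):

```python
def to_hard_label(soft_label):
    """Collapse a soft probability vector to its argmax class index."""
    return max(range(len(soft_label)), key=soft_label.__getitem__)

def to_one_hot(class_index, num_classes):
    return [1.0 if i == class_index else 0.0 for i in range(num_classes)]

soft = [0.70, 0.25, 0.05]                  # e.g. an expert-generated label
hard = to_one_hot(to_hard_label(soft), 3)  # one-hot on the top class
```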
[1] Vision-Language Dataset Distillation https://arxiv.org/abs/2308.07545 | Rebuttal 1:
Rebuttal: We want to thank all the reviewers for the detailed and thoughtful responses! We are glad that reviewers have found our work interesting and scientifically sound. We have carefully read through all the comments, and we believe that all your feedback will bring improvements to our work!
We address some common questions below.
__Soft label generation with epoch tuning__
Among all the reviews, we noticed a common theme regarding experiment details for Figure 1, where we compare our soft label baseline with SOTA methods. Specifically, there are questions regarding how soft labels are generated for SOTA methods. We provide further details below, and the new experiment results can be found in the attached PDF.
We have compared our soft label baselines to four existing methods, each with their own soft label generation strategies proposed by the original authors. In Figure 1, we applied the original soft label generation strategy used by each method. Overall, some of these methods already include epoch tuning (MTT/TESLA), while others do not (SRe2L), and for some, the concept of epoch tuning is not well-defined (Ra-BPTT and FRePo).
To address your concern, we have also included a version of Figure 1 where we apply the exact same label generation strategy used for the soft label baseline (expert label generation with epoch tuning) on SRe2L. Further analysis is provided in the supplementary PDF.
We clarify how soft labels are obtained for each of the methods below:
* SRe2L: The original method uses labels generated from a fully trained expert without epoch tuning. Therefore, in our reported results for SRe2L, the soft labels are not epoch-tuned.
* MTT (TESLA): In the original method, the expert used to generate labels is already epoch-tuned, and the epoch also impacts the image generation pipeline.
* Ra-BPTT: The original method generates labels along with images using a bi-level optimization objective (Eq 1), so no experts are trained during the process. As labels are not generated by experts, epoch tuning is not applicable. We experimented with using pre-trained experts to generate labels for Ra-BPTT generated images but observed that experts trained on real images perform poorly on Ra-BPTT generated data. This is likely because the generated images are too out-of-distribution for experts trained on real training data. Thus, the original labels generated by Ra-BPTT should be considered optimal for this method.
* FRePo: Similar to Ra-BPTT, FRePo is based on Back-Propagation in Time (BPTT), and no experts are trained during the distillation process. Like Ra-BPTT, labels are learned during the distillation process along with images.
We included a revised version of Figure 1 in the PDF. Our additional results show that (1) soft label strategy used by the original method is more effective for the given method than epoch tuning (2) our soft label baseline remains competitive. Please refer to the supplementary PDF for further details.
__Figure 7 revision__
In addition, we included a revised version of Figure 7 (data-efficient experiment with zero-shot learning). In this new figure, we address the concerns regarding the softmax re-normalization raised by reviewer Zv9o. Please refer to the supplementary PDF for further details.
Pdf: /pdf/23dc620558e52e4d211ec6e42a192ad6fc9f3c2c.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Prompt-Agnostic Adversarial Perturbation for Customized Diffusion Models | Accept (poster) | Summary: The paper presents Prompt-Agnostic Adversarial Perturbation (PAP), a method to enhance privacy and security in customized text-to-image diffusion models like Stable Diffusion. These models, while enabling high-quality image synthesis, pose risks of privacy breaches and unauthorized artwork usage. Existing adversarial methods, limited by their reliance on specific prompts, fail with unseen prompts. PAP addresses this by modeling the prompt distribution with a Laplace approximation, generating robust perturbations effective against diverse prompts. Experiments on datasets such as VGGFace2 and Wikiart show PAP's superior performance and robustness. This method provides a significant improvement in protecting images from unauthorized use and manipulation in diffusion models.
Strengths: 1. I appreciate the idea of designing a prompt-agnostic adversarial perturbation for privacy protection.
2. The method is concise and paper organization is logical.
3. The formulas, tables, and figures in the paper are easy to follow.
Weaknesses: 1. Although the motivation for designing a prompt-agnostic adversarial perturbation is reasonable, the method seems too complex. That is, maybe we can expand the prompt with LLM to get a lot of prompts (different scenes) containing the keyword. Then we can use all these prompts together as a set to calculate a gradient for adversarial attack at each iteration.
2. For definition 3.1, the assumption of Z = p(x0, c0) as a constant lacks evidence.
Minor suggestion:
1. For tables 1, 2, 3, and 4, there are some left parentheses that have no space between the previous metric names.
Technical Quality: 3
Clarity: 4
Questions for Authors: Please see weakness
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer Q3Lh,
Thank you for your thorough review and constructive feedback. Your perceptive comments and suggestions have helped us improve our work.
Q1: a) PAP seems too complex? b) Simply expand prompts with LLM and gradient ensemble?
> a) The final implementation of PAP, as presented in **Algorithm 1**, is notably straightforward and exceptionally user-friendly.
The pipeline of the entire model involves initially modeling a prompt distribution through Laplace approximation, with the development of two estimators to compute the distribution parameters. Subsequently, Monte Carlo sampling is applied to each input distribution to optimize a disturbance expectation. The expressions for the two estimators are depicted in Equations (9) and (12) in the paper. On this basis, our implementation merely entails sampling attacks on Gaussian distributions with means from Equations (9) and variances from Equation (12), as Reviewer JLDf remarked, **"PAP is simple and easy to use."**
In order to further expound on the precision and interpretability of our estimation methods, we rigorously assess the errors between the two estimators and the ground truth, and proceed with deriving upper bounds for these errors in Appendix A. Hyperparameters are meticulously selected to ensure that these error bounds stay within manageable limits. While the derivation of this error assessment may have initially appeared complex, these efforts are aimed at streamlining the final algorithm's implementation. Through above detailed mathematical reasoning, we have arrived at a **concise** algorithm and implementation code outlined in **Algorithm 1**, which stands as a significant highlight of our work.
> b) The approach you mentioned involves expanding the prompt using a Large Language Model (LLM) to generate multiple prompts (different scenes) containing the keyword. Subsequently, all these prompts are utilized collectively as a set to calculate the gradient for the adversarial attack at each iteration.
This gradient ensemble approach relies on a crucial **assumption**: through a straightforward ensemble technique like averaging the gradients of loss concerning all $g_i$ during the adversarial attack process, one can obtain an ensembled $g$ that maximizes the overall loss for each prompt. However, this fundamental premise has faced criticism for "often overlooking the unique attributes of each model, resulting in suboptimal outcomes" **[1]**, making it challenging to achieve the desired optimal results.
Contrary to this simple ensemble strategy, by modeling the prompt distribution and conducting targeted attacks based on the probabilistic sampling within the modeled distribution, our method accounts for the **diverse characteristics** of each prompt and their respective influences on the overall attack, thus leading to a more adaptive and effective adversarial strategy.
Q2: The assumption of $Z$ as a constant in definition 3.1
> In Definition 3.1, $x_0$ represents the given input image, while $c_0$ represents the given descriptive text. Both are given inputs rather than variables, and as such, $Z = p(x_0, c_0)$ is a constant.
Q3: For tables 1, 2, 3, and 4, there are some left parentheses that have no space between the previous metric names.
> We appreciate your kind reminder. If there are any additional suggestions or concerns, please feel free to share them with us.
We sincerely appreciate your considerate review and the insightful feedback you provided for our paper. It seems that you have viewed our work favorably, for which we are grateful. Would you kindly consider adjusting the rating to better align with your positive sentiments towards our research? Your understanding and support mean a great deal to us.
[1]An Adaptive Model Ensemble Adversarial Attack for Boosting Adversarial Transferability | Summary: This paper proposes a novel adversarial training method for text-to-image diffusion models, enhancing robustness against prompt-agnostic attacks. Specifically, the authors utilize prompts (embeddings) from a prompt distribution rather than a specific prompt for adversarial training. The proposed method is evaluated in the contexts of face privacy and artistic style protection.
Strengths: 1. This paper aims to bridge the gap between prompt-specific protection and prompt-agnostic protection, which is essential for text-to-image diffusion models.
2. The proposed method is principled and effective.
3. The empirical evaluation is comprehensive and convincing.
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is the proposed protection method robust against adversarial prompts?
2. Is it possible to demonstrate the connection between the test prompt and the prompt distribution used for training? Alternatively, can sampled embeddings from the prompt distribution be projected into natural language? Such studies would be beneficial for illustrating that the constructed prompt distribution adequately covers potential user prompts.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discussed the limitations in Appendix H.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer nNzN,
We appreciate the time and effort you put into providing feedback on our work. Your insightful comments have contributed to the enhancement of our paper.
Q1: Is the proposed protection method robust against adversarial prompts?
> We demonstrate the robustness of PAP against adversarial prompts from two perspectives:
> a) **Theoretical Perspective**: Unlike previous approaches, we target the entire prompt distribution. Optimizing adversarial perturbations in this manner yields effective results even when faced with unknown prompts, thus achieving more robust attack effectiveness against different adversarial prompts. Our modeling of the prompt distribution is based on the Laplace approximation. We offer a detailed derivation in Appendix A, encompassing the Laplace-modeled distribution and two estimators for the mean and variance, along with upper bounds on the estimation errors. This series of derivations demonstrates that our estimations fall within controllable margins.
> b) **Experimental Perspective**: In the paper, Table 1 showcases the average results of PAP across three datasets with 10 different test prompts each. Figure 2 displays visual representations of the results for different test prompt inputs, while Figure 3 exhibits the outcomes for combined inputs of different test prompts. Table 10 reveals the results of test prompts with other pseudo words used as inputs. Additionally, Appendix I presents further visual results of different training and test prompts. Collectively, these results indicate that PAP is robust against adversarial prompts.
Q2: Connection between the test prompt and the prompt distribution
> Thank you very much for your constructive feedback.
As emphasized in Appendix H.1, selecting prompt samples that approximate the mean and convey meaningful semantic information is vital for bridging the semantic gap between the estimated prompt distribution and natural language. In our ongoing efforts, we propose a restriction module to discretize the Gaussian distribution, filtering out semantically irrelevant prompts to improve the generation of more meaningful text options, as detailed in Appendix H.
However, we acknowledge the challenge of projecting sampled embeddings from the prompt distribution into natural language in the current version.
Despite these challenges, we are committed to enhancing visualization. Thus, we visualize the projection of natural language into embeddings to examine its relationship with the modeled prompt distribution. Specifically, in our visual experiment presented in **Figure R2** of the rebuttal.pdf, we reduce the dimensionality of embeddings for $c_N$ and 10 test prompts and utilize **PCA** for visualization. This illustration showcases how the test prompts' principal components are discretely distributed within the modeled prompt distribution, indicating a flexible probability of selection for adversarial attacks. This underscores the **adaptability** of our modeling in encompassing a broad spectrum of natural language inputs in the semantic space.
We are grateful for your continued support of our work and the invaluable insights you have shared. Your feedback is crucial in refining our efforts, and we truly appreciate your contributions as we endeavor to enhance our work.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: Thank you to the authors for their efforts in addressing my concerns and conducting additional experiments. I will maintain my original rating. | Summary: - This work is about a method to craft perturbation images to protect users/artists from personalized text to image diffusion methods (specifically DreamBooth and Textual inversion), that generalize better to unseen prompts than previous works.
- The core algorithm is "Prompt-Agnostic Adversarial Perturbation" (PAP). Instead of crafting the perturbation with a fixed condition prompt, PAP model the prompt distribution as a Gaussian dist and sample prompt embedding from it during the perturbation crafting process.
- The prompt distribution is modeled as a Gaussian using Laplace approximation. The mean is estimated by minimizing the diffusion loss starting from a reference prompt. The variance is approximated using a simplified formula based on the difference between the reference and estimated prompts, and their respective loss values.
- Experiments show PAP outperforms previous prompt-specific methods on metrics like FID, CLIP similarity, and LPIPS across different datasets (CelebA-HQ, VGGFace2, Wikiart) and generalization to unseen test prompts.
(after rebuttal)
The authors addressed my main concerns by including results from newer methods with PAP, and clarifying the inconsistent implementation. Despite some issues regarding efficiency and application to more recent personalized techniques/model, I've increased my score from 4 to 5.
Strengths: - The paper is generally well-written, with consistent improvement on unseen prompts when compared with previous works.
- PAP is simple and easy to use, and not very costly to add on top of other protection methods.
- The explanation of the method is easy to understand
- The paper also includes performance under DiffPure, and the results under DiffPure seem positive
- The supplementary is informative, with extensive evaluation
Weaknesses: The experimental evaluation needs to be improved:
- Apply PAP to other methods like AdvDM + PAP, not just Anti-DreamBooth ASPL, and other recent methods such as Diff-Protect [1], to better demonstrate that PAP is a good plug-in to the current methods.
- Lacking a simple baseline of adding Gaussian noise to $c_N$ to show the value of approximating H.
- Questionable metric choices: using the LAION aesthetic score for the CelebA-HQ and VGGFace2 datasets is not well-justified for face images, and images generated from protected models often contain high-frequency and colorful patterns, which may lead to unreliable LAION aesthetic scores.
- The evaluation also needs to include an identity score (as used in Anti-DreamBooth) for face datasets, which is crucial to ensure generated images don't contain the user's identity.
- A suggestion would be to add an assessment against recent encoder-based personalized techniques, such as InstantID [2], for a more comprehensive comparison.
Technical Quality: 2
Clarity: 3
Questions for Authors: - I wonder if PAP can help reduce the sensitivity to the initial prompt when crafting the perturbation across different domains. For example, when testing on human faces or artwork datasets like Wikiart, how would the performance change if, in both datasets, $c_0$ is initialized with a very general term, such as an empty string ""
- The pseudo code seems inconsistent with the implementation. In your provided implementation, $c_N$ is recalculated at every step in M, while in the pseudo code $c_N$ is calculated only once at the start. I wonder if this could affect the performance of the method
- Nitpick: L296 Typo BROSQUE. Table 4 needs to include metrics such as FID and LPIPS to be more comprehensive
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Since PAP still builds upon Anti-DreamBooth, 4-6 minutes on an A800 to craft a set of images is still very impractical. Again, I want to see the performance of PAP on top of more recent/efficient methods
[1] Xue, Haotian, et al. "Toward effective protection against diffusion-based mimicry through score distillation." The Twelfth International Conference on Learning Representations. 2023.
[2] Wang, Qixun, et al. "Instantid: Zero-shot identity-preserving generation in seconds." arXiv preprint arXiv:2401.07519 (2024).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer JLDf,
We are grateful for your constructive criticism and insightful comments on our work. We greatly value your feedback and have made significant improvements based on your suggestions:
Q1: Apply PAP to other methods including AdvDM + PAP/Diff-Protect
> Per your advice, we have conducted further experiments by integrating PAP with the AdvDM and Diff-Protect methods, in addition to Anti-DB. Additionally, based on your feedback on metrics, we have also included additional identity metrics such as the detectable face rate (FDFR) and identity score (ISM) from Anti-DB. The consistent performance showcased in **Table R3**, surpassing baselines across metrics, validates PAP as an effective general plug-in to enhance the robustness of adversarial attack methods with minimal computational cost.
> **Table R3**: Integrating PAP with AdvDM/Diff-Protect on the VGGFace2 dataset.
| Method| LPIPS($\uparrow$) | FDFR($\uparrow$) | ISM($\downarrow$) | BRISQUE($\uparrow$) | Time($\downarrow$) |VRAM($\downarrow$) |
|---|---|---|---|---|---|---|
|AdvDM|0.66|0.58|0.42|31.41|262s|22G|
|+PAP|0.69|0.61|0.40|33.43|270s|22G|
|Anti-DB|0.69|0.65|0.38|28.95|288s|28G|
|+PAP|**0.70**|**0.67**|**0.34**|35.02|297s|29G|
|Diff-Protect|0.66 |0.63|0.44|31.72|191s|16G|
|+PAP|0.69|**0.67**|0.37|**36.70**|198s|17G|
|No Defense|0.55|0.05|0.52|25.67|-|-|
Q2: Lacking a simple baseline of adding Gaussian noise to $c_N$ to show the value of approximating H.
> We have conducted a simple baseline experiment by directly adding Gaussian noise (with variances 1, 5, 10, and 20) to $c_N$ to evaluate the value of approximating H. As shown in **Table R4**, our proposed PAP method, with the variance estimate $\sigma=H$, achieves the best performance across all metrics. Specifically, it outperforms the second-best method by **3\%($\uparrow$), 2\%($\uparrow$), 3\%($\downarrow$), 2.27($\uparrow$)** on the LPIPS, FDFR, ISM, and BRISQUE metrics.
These findings underscore the necessity of estimating the variance $H$ to generate more effective adversarial perturbations.
> **Table R4**: Simple Baseline of Adding Gaussian Noise to $c_N$ on VGGFace2 datasets
| Variance | LPIPS($\uparrow$) | FDFR($\uparrow$) | ISM($\downarrow$) | BRISQUE($\uparrow$) |
|---|---|---|---|---|
| 1 | 0.67 | 0.64 | 0.38 | 31.92 |
| 5 | 0.67 | 0.65 | 0.38 | 32.75 |
| 10 | 0.66 | 0.62 | 0.40 | 29.21 |
| 20 | 0.64 | 0.60 | 0.44 | 27.01 |
| $H$ | **0.70** | **0.67** | **0.34** | **35.02** |
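The baseline above amounts to perturbing $c_N$ with isotropic Gaussian noise of a fixed variance instead of using the estimated $H$; a toy sketch of that sampler (with a made-up embedding dimension, not the actual implementation) might look like:

```python
import numpy as np

rng = np.random.default_rng(0)
c_N = rng.normal(size=768)  # stand-in for the estimated mean prompt embedding

def sample_prompt_fixed_variance(c_N, variance, rng):
    """Baseline sampler: c ~ N(c_N, variance * I), ignoring the estimated H."""
    return c_N + rng.normal(scale=np.sqrt(variance), size=c_N.shape)

# Draw one perturbed prompt embedding per tested variance.
samples = {v: sample_prompt_fixed_variance(c_N, v, rng) for v in (1, 5, 10, 20)}
print(len(samples), samples[1].shape)  # 4 (768,)
```

Each sampled embedding would then play the role of a random prompt condition during perturbation crafting, in place of samples drawn from the Laplace-approximated distribution.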
Q3: Evaluation need to include identity score for face datasets
> In **Table R3**, we have added the calculation of the detectable face rate (FDFR) and identity score (ISM). In both of these metrics, method + PAP continues to significantly outperform the method itself, including the recent method Diff-Protect.
Q4: Sensitivity to the initial prompt
> In **Table R5**, we present the outcomes when $c_0=$"", showcasing a significant decline in performance compared to the original PAP. This is attributed to:
a) $c_0$ serves as a crucial initialization prior for estimating $c_N$, facilitating rapid iteration (only 20 steps) to achieve a reliable approximation of $c_N$;
b) $c_0$ is involved in the modeling of H estimation. The approximate expression for H estimation is based on the Taylor expansion modeling of $c_0$ and $c_N$.
> **Table R5**: Results with $c_0=$""
Dataset | FID($\uparrow$)|CLIP-I($\downarrow$)|LPIPS($\uparrow$)|LAION($\downarrow$)|BRISQUE($\uparrow$)|CLIP($\downarrow$)
---|---|---|---|---|---|---|
Celeb-HQ|154.57| 0.72|0.50|5.83|30.18|0.32
VGGFace2|232.34|0.61|0.60|5.65|29.20|0.29
Wikiart |320.79|0.69|0.71|5.72|31.88|0.30
Q5: Inconsistency between pseudo code and implementation
> Initially, we recalibrated the prompt distribution model for each iteration, as reflected in the supplementary material code. Subsequently, we optimized this process and derived the algorithm in the paper, with all experimental results based on this improved algorithm. We intend to upload the code for the current version soon.
Q6: Table 4 in the paper need to have metrics such as FID, LPIPS to be more comprehensive
> Per your advice, we have supplemented the evaluation with results for FID, CLIP-I, and LPIPS metrics to provide a more comprehensive assessment, as shown in **Table R6**.
> **Table R6**: Performance comparison after applying DiffPure
| | FID(↑)| CLIP-I(↓)| LPIPS(↑)|
|---|---|---|---|
| AdvDM+DiffPure|301.22(91.48-)| 0.71(0.06+)| 0.70(**0.06-**) |
| Anti-DB+DiffPure|335.94(50.46-)| 0.69(0.04+)| 0.68(**0.06-**) |
| IAdvDM+DiffPure|271.02(**118.98-**)| 0.72(0.01+)| 0.68(0.03-)|
| PAP+DiffPure|**379.60**(68.70-)| **0.64(0.08+)**| **0.72(0.06-)**|
| No Defense|198.71| 0.77|0.62|
Q7: 4-6 minutes on an A800 when crafting a set of images is impractical.
> In **Table R3**, we present the time and VRAM required for each method, including the time required when integrating PAP with other methods. The results demonstrate that PAP requires under 300s of computation time and under 30G of memory on average to process a set of images. In the future, we aim to further optimize the algorithm to reduce the time to within 4 minutes and lower memory consumption to below 24G, thus enabling the model to be used on a GTX 3090.
Q8: Add an assessment against recent encoder-based personalized techniques, such as InstantID.
> In Table 3 of our paper, we have demonstrated the results on popular customized models such as LORA, TextInversion, and Dreambooth, effectively illustrating the generalizability of our method across different customized models. However, to further continuously validate the reliability and generalizability of our method, we will include results for the InstantX series (including InstantID, InstantStyle) in our future work.
We have made an effort to address your questions and have added additional experiments based on your guidance. We look forward to engaging in fruitful discussions with you and welcome any suggestions for improvement. I hope you will consider providing a more favorable evaluation. Thank you.
---
Rebuttal 2:
Title: Addressed my concerns
Comment: I appreciate the additional experiments and clarifications you provided. In general, the performance when applying PAP to recent methods seem to be positive, and the authors also provide the missing metrics. I have few comments:
- Regarding the Gaussian noise baseline, I notice the variance magnitudes tested (1, 5, 10, 20) are quite high, with a trend that the higher the variance, the worse the performance. I wonder if smaller variances (0.1, 0.5) could yield improved performance.
- I expect that PAP could help in terms of initial prompt sensitivity when crafting protection on different domains (since the first step is approximating $c_N$). But the quality being only slightly lower than the Clean version makes me wonder whether incorporating PAP could make the protection even more sensitive to the initial prompt (usually a string like "a photo of a person" for the human face domain, which we both know is not an optimal choice)
- One additional question: For Stable Diffusion 2.x models, which incorporate masking in the text encoder (unlike the fixed 77 tokens cross attention in SD1.x), how does the PAP algorithm adapt to handle variable-length prompts?
(Additional comments)
- In my opinion, current protection methods still struggle against recent encoder-based personalization, since these personalized methods enable the UNet to condition not only on the input latent (which is usually OOD because of the added adversarial noise), but also on additional visual information from other encoder(s), so they can bypass the protection. To effectively protect against this threat, incorporating these encoder(s) into the protection mechanism could be a potential future direction that you might try.
- The community is moving towards more recent models and methods, and it's like a cat-and-mouse game. While PAP shows promise in improving protection on SD, I encourage the authors to continue working on and improving the method, particularly as new personalization techniques and new models emerge
While I still have some concerns, the authors did clarify most of my questions and provided results for the missing experiments. I will raise my score to 5
---
Rebuttal Comment 2.1:
Title: Thanks for your feedback. We hope we can further address your concerns and engage in constructive discussions.
Comment: Thank you for your valuable reply. We are glad to address the additional questions you've raised, and we hope our responses will further address your concerns and facilitate constructive discussions on the future work of PAP.
> Q1: Whether a smaller variance (0.1, 0.5) could yield improved performance.
Firstly, within the tested variance range (1, 5, 10, 20), the performance trend is indeed a slight initial rise (from 1 to 5) followed by a decline (from 5 to 20). Additionally, to better address your concern, we have supplemented **Table R7** with control experiments using variances of 0.1 and 0.5 as you mentioned.
**Table R7**: Simple Baseline of Adding Gaussian Noise
| Variance | LPIPS(↑) | FDFR(↑) | ISM(↓) | BRISQUE(↑) |
|----|-----|------|---------|------|
| 0.1 | 0.66 | 0.62 | 0.41 | 30.98|
| 0.5 | 0.66 | 0.63 | 0.39 | 31.74|
| H | 0.70 | 0.67 | 0.34 | 35.02|
> Q2: Incorporating PAP could make the protection even more sensitive to the initial prompt.
Regarding the quality in **Table R5** being slightly lower than the Clean version, we note that these results used the same setup as described in the paper: 20 gradient descent steps starting from "" to obtain $c_N$. We plan to adapt our approach by dynamically tuning the learning rate and step size. Moreover, instead of using an empty string, further experiments are needed to identify a generic $c_0$ and optimization settings, thereby eliminating the need to manually select prompts for different tasks.
> Q3: How does the PAP algorithm adapt to handle variable-length prompts?
Currently, for SD2.x, our approach is to pad all prompts to a fixed length of 77 tokens for processing, consistent with our approach for SD1.x. Although this may not be the optimal solution, we adopted this strategy for the following reasons:
1. Prompt Length Sufficiency: In general, a length of 77 tokens is adequate to accommodate most prompts, as they rarely exceed this limit in practical applications.
2. Optimization Landscape Smoothness: Maintaining a consistent prompt length ensures a smoother optimization landscape, which is advantageous for solving the optimization problem effectively.
Despite this approach's potential limitations, the results in **Figure R1** from the rebuttal.pdf demonstrate that our method still achieves satisfactory performance.
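The fixed-length strategy above can be sketched as simple token padding; the pad-token id and example token ids below are hypothetical placeholders, not taken from the actual implementation:

```python
def pad_to_fixed_length(token_ids, max_len=77, pad_id=0):
    """Pad (or truncate) a tokenized prompt to a fixed length of 77 tokens,
    mirroring the fixed-length cross-attention context of SD1.x."""
    return (list(token_ids) + [pad_id] * max_len)[:max_len]

short = pad_to_fixed_length([101, 202, 303])  # hypothetical token ids
overlong = pad_to_fixed_length(range(100))    # truncated back to 77
print(len(short), len(overlong))  # 77 77
```

With every prompt normalized to the same length, the embedding space the perturbation is optimized over keeps a fixed shape across SD1.x and SD2.x.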
> Additional comment 1: Incorporating more encoders into the protection mechanism
While the paper considers attacks using the VAE encoder, which is one of the most popular and widely used image encoders in SD, other encoders are worth considering for integration into the attack process. This could help enhance the generalizability of the proposed method.
To this end, in our future work, we will further design attack strategies for image encoder-agnostic scenarios. Specifically, we will consider several of the most commonly used image encoders, and by integrating these encoders with our modeling of image feature distributions, we will sample and attack the feature distributions of images across various encoders. This will be further fused with our original PAP approach in the future.
We anticipate that the above proposed **Prompt & Encoder-Agnostic Perturbation (PEAP)** method will be robust against various attack prompts and image encoders, aligning with our pursuit of image protection that better reflects real-world scenarios.
> Additional comment 2: Continue working on and improving the method as new personalization techniques and new models emerge
Thank you for your encouragement. We are aware of the rapid development in the field of customized generation, such as the InstantX series. We will continue to enhance PAP by experimenting with new personalization techniques and models, while also striving to improve the attack efficiency and reduce the computational cost of PAP.
Finally, we sincerely appreciate your suggestions and encouragement for our work. We welcome further discussions on the ideas mentioned, and we are deeply grateful for your willingness to increase your score.
---
Rebuttal Comment 2.2:
Title: A gentle reminder
Comment: Dear Reviewer JLDf,
We greatly appreciate your comprehensive review and valuable feedback.
As noted in your recent response, we have addressed most of your concerns, and you kindly expressed your intention to raise the score.
As the discussion deadline approaches, we respectfully remind you of your previous commitment to increase the score and sincerely request your final assessment.
Best,
Authors | Summary: The authors propose a prompt-agnostic adversarial perturbation (PAP) method for customized diffusion models. They first use Laplace approximation to model the prompt distribution. Then they derive the attacks by maximizing the disturbance expectation. Extensive experiments on three datasets validate their performance.
Strengths: Experiments on three datasets.
Interesting topic.
Mathematical Proof.
Weaknesses: Potential semantic gap between the estimated prompts and natural language.
Experiment Scope: experiments are only conducted on SD 1.5.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. This paper only mentions the time complexity of their approach in Line 226, but what are the time comparisons with baselines?
2. Only SD 1.5 is considered; however, more versions of Stable Diffusion and other diffusion models should also be considered. Could you please provide more results on other models to show the generalization of your approach?
3. The authors only show the average performance across 10 similar prompts. What is the performance of the trained prompts?
4. The authors use a Laplace approximation for the prompt distribution, but why is it a proper approximator? Additionally, in my point of view, the approximation is in the semantic space. However, users will input natural language. Therefore, how to combat the domain gap is another concern.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer JA4c,
Thank you for your valuable feedback and insightful comments on our work. We appreciate your suggestions and have made the following improvements:
Q1: Potential semantic gap between the estimated prompts and natural language
> a) We emphasize the importance of considering prompt samples that are not only close to the mean but also embody meaningful semantic information to bridge the semantic gap between the estimated prompt distribution and natural language in Appendix H.1. As part of our future work, a proposed restriction module could discretize the Gaussian distribution, thereby sieving out prompts devoid of semantic relevance to facilitate the generation of more meaningful text options, as detailed in Appendix H.
> b) To further explore the relationship between the estimated prompts and natural language, we have conducted a visual experiment in **Figure R2** of the rebuttal.pdf. By reducing the dimensionality of 10 test prompts' embeddings and $c_N$ using PCA, we visualize the estimated prompt distribution and test prompts projected in a low-dimensional space for the tasks of "Facial Protection" (left) and "Preservation of Artistic Style" (right) respectively. The figures demonstrate that the two principal components of test prompts are discretely distributed within the modeled prompt distribution, indicating a flexible probability of being selected for adversarial attacks. This illustrates that our modeling effectively covers a range of natural language inputs in the semantic space.
Q2: More versions of stable diffusion models should be considered.
> Per your advice, we have conducted additional experimental evaluations on the Wikiart dataset using SD1.4 and SD2.0.
The results in **Figure R1** of the rebuttal.pdf indicate that the PAP method continues to show significant improvements on various diffusion versions, demonstrating the generalization of our approach.
Q3: Time comparisons with baselines
> In **Table R1**, we present the time required for each method. The results demonstrate that PAP requires under 300s of computation time on average to process a set of images. In the future, we aim to further optimize the algorithm to reduce the time to within 4 minutes.
> **Table R1**: Time and VRAM comparisons with baselines.
| Method | AdvDM | Anti-DB |IAdvDM | PAP|
|---|---|---|---|---|
| Time | 262s | 288s | 204s | 297s |
Q4: What is the performance of the trained prompts?
> In Table 6 and Figure 5 of Appendix F, we have already presented the performance of previous methods for each prompt, including the trained prompts. Furthermore, we have extended the display of the performance of the trained prompts for all methods. In **Table R2**, our method slightly trails behind the SOTA by 0.01/39.57 in LPIPS/FID respectively. However, our method still maintains a leading position in ISM, FDFR, BRISQUE, and CLIP metrics.
> **Table R2**: Performance of the trained prompts.
| Method| LPIPS($\uparrow$) | FDFR($\uparrow$) | ISM($\downarrow$) | BRISQUE($\uparrow$) | FID($\uparrow$) | CLIP($\downarrow$) |
|---|---|---|---|---|---|---|
| AdvDM| 0.65| 0.61| 0.39| 33.68| **301.55**| 0.25|
| Anti-DB| **0.71**| **0.68**| 0.34| 32.24| 277.24| **0.24**|
| IAdvDM| 0.65 | 0.57| 0.43| 33.55| 296.31| 0.28|
| PAP| 0.70|**0.68**| **0.33**| **36.43**| 261.98| **0.24**|
| No Defense|0.50| 0.01| 0.55| 23.22| 128.31| 0.38|
Q5: Why is Laplace Approximation a proper estimator?
> a) Approximation of Gaussian distributions: The Laplace approximation often yields a Gaussian distribution. In many cases, especially for large amounts of observed data and problems applicable to the central limit theorem (which aligns with the prompt embedding space), the true distribution may approximate a Gaussian distribution. This implies that, for large samples, the Laplace approximation provides a good asymptotic approximation of the shape of the probability density function;
> b) Computational simplification: Compared to more complex methods such as Monte Carlo simulations, the Laplace approximation often has computational advantages. It provides a relatively straightforward way to approximate the true distribution, especially when the analytic form of the distribution is difficult to handle or unavailable (our ideal prompt distribution is challenging to solve analytically as discussed in Section 3.2). This simplification makes the Laplace approximation practical in real-world problems.
> c) We analyze the properties that the ideal prompt embedding distribution should meet: **P1**. The distribution should be centered around the extreme points $c_x$, with probability decreasing as semantic relevance diminishes, and with a large number of samples; **P2**. The analytical form of the distribution is unavailable. **P1** aligns with the use of Gaussian distribution for approximation, as discussed in a), while **P2** corresponds well with the situation described in b). These indicate that the scenario we are addressing is highly suitable for Laplace modeling.
Subsequently, by Taylor expanding at $c_x$, we determine an approximate Gaussian distribution (Section 3.3.1) and introduce two estimators (Section 3.3.2): one minimizing the generation loss and the other performing a Taylor expansion around $c_x$ to estimate their mean and variance, respectively. We also provide a detailed explanation of the error bounds resulting from the approximation of these estimators in Appendix A, thereby theoretically supporting the validity of our Laplace estimation. Furthermore, the extensive experimental results presented in the paper (Tables 1/3/4/10, Figures 2/3/4/6, visualization results in Appendix I) empirically reinforce the validity of our approach.
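As a loose one-dimensional illustration of the two estimators described above (gradient descent for the mean, and a second-order finite-difference-style proxy for the variance), with a toy quadratic loss standing in for the actual diffusion loss:

```python
def loss(c):
    """Toy 1-D stand-in for the diffusion loss over prompt embeddings."""
    return (c - 2.0) ** 2 + 1.0

def grad(c):
    return 2.0 * (c - 2.0)

# Estimator 1: the mean c_N is found by gradient descent starting
# from a reference prompt c_0.
c_0 = -1.0
c = c_0
for _ in range(100):
    c -= 0.1 * grad(c)
c_N = c

# Estimator 2: a variance proxy H from c_0, c_N and their loss values,
# in the spirit of a second-order (Taylor-expansion) approximation.
H = 2.0 * (loss(c_0) - loss(c_N)) / (c_0 - c_N) ** 2
print(round(c_N, 3), round(H, 3))  # 2.0 2.0
```

For this quadratic toy loss the proxy recovers the exact curvature at the minimum, which is the property the Laplace approximation relies on.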
In conclusion, we look forward to further discussions and insights on our work. Your feedback has been invaluable in shaping our research, and we are eager to continue this dialogue. Thank you for your time and consideration.
---
Rebuttal Comment 1.1:
Comment: Thanks for the rebuttal; part of my concerns are addressed. I raise my score to 6. | Rebuttal 1:
Rebuttal: Dear AC and all the reviewers,
We would like to express our sincere gratitude to you for your comprehensive evaluation of our manuscript, as well as your insightful feedback and constructive suggestions.
We have tried our best to answer all of the reviewers' questions about our paper. We wonder whether our responses have addressed all of the concerns.
Thank you all!
Pdf: /pdf/eb6ee846a6811731fe4f75b50cab8cb7a3fef937.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Self-Taught Recognizer: Toward Unsupervised Adaptation for Speech Foundation Models | Accept (poster) | Summary: This paper proposed a Self-TAught Recognizer (STAR) that leverages unlabeled data to enhance the robustness of automatic speech recognition (ASR) systems in diverse target domains. The proposed method includes a novel indicator that empirically integrates step-wise information during decoding to assess the token-level quality of pseudo labels without ground truth, as well as utterance-level pseudo-label filtering. STAR seems to have sufficient novelty for publication. Sufficient experiments are presented to show the effectiveness of the proposed method. The work should be reproducible since the code will be released.
Strengths: The strengths of the papers are:
1. The proposed method is novel.
2. Sufficient experiments and good analysis.
3. Good presentation; the paper is easy to follow.
Weaknesses: Major concerns about the paper.
1. It was shown in Figure 3 that UDA converged with about 1 hour of speech data for the target domain, and the gain is very small with more data. Given that the performance of STAR is still notably worse than (though sometimes close to) the true-label model (the upper bound of STAR), I am concerned about the usage of STAR in real scenarios. Transcribing one or several hours of data is sometimes affordable for some organizations, so they might get better numbers than STAR.
2. In other words, UDA has an upper bound while the true-label model does not. Did the authors run experiments showing how many hours of true-label fine-tuning data are equivalent to the performance of STAR? This might help readers better understand when to select UDA versus human transcription in different scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: Some minor concerns or suggestions:
1. Maybe I missed it in the paragraph, but I cannot find a description of which layer or head of the attention weights is used for the indicator A. It is worth mentioning, especially around Equation 5.
2. Continuing on indicator A, did the authors compare the confusion matrices for other layers? In other words, does the attentive score confusion matrix behave similarly to Figure 2 for specific layers or for all layers (heads)?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of the method can be restricted to certain use cases. However, the proposed method is still helpful to the community.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer 8Y8C for the valuable and constructive comments. Please find detailed responses below:
- ***Weakness 1 & 2***
Thanks for your feedback. We clarify that the true-label models use all labeled data from the training set of each domain. Therefore, how good this upper-bound is also depends on the size of their training set, with more training details provided in Appendix F.
Furthermore, fitting a 1B model to a few hours of labeled data can lead to overfitting issues. Without corresponding measures to mitigate this, the model's performance in other domains would significantly degrade. However, the training labels of STAR are self-generated, and reweighting is merely an attempt to selectively update without forcing the model to overfit to a new data distribution. This is also why STAR avoids catastrophic forgetting.
- ***Question 1 & 2: Which layer or head of the attention weights are used for the indicator A.***
We appreciate your suggestion. In our experiment, the self-attention matrix is drawn from the last transformer layer of the foundation model, averaging on all heads. We also found that the performance is not sensitive to the selection of layers or heads. The last two layers, with a single head, can also achieve comparable performance.
For the confusion matrix, we confirm that the attentive score from the last five layers is very similar to Figure 2. The earlier layers can also show similar patterns but are somewhat less indicative of the pseudo-label quality, as the high layers learned more linguistic semantics.
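The described choice, the last transformer layer averaged over all heads, can be sketched as follows; random weights here stand in for real decoder self-attention matrices, so this is an illustration of the aggregation only, not the actual STAR indicator code:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for decoder self-attention weights with shape
# (layers, heads, seq_len, seq_len); rows normalized like softmax outputs.
attn = rng.random(size=(12, 8, 20, 20))
attn /= attn.sum(axis=-1, keepdims=True)

# Indicator A draws on the last layer, averaged over all heads.
A = attn[-1].mean(axis=0)  # (seq_len, seq_len)
print(A.shape)  # (20, 20)
```

Averaging row-stochastic matrices keeps each row summing to one, so the aggregated matrix can still be read as attention mass per decoded token.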
---
Rebuttal Comment 1.1:
Title: Reply the rebuttal
Comment: Thank you for the rebuttal. I am keeping my score which is already an accept. Please make sure the explanations can be reflected in the revised paper.
---
Reply to Comment 1.1.1:
Title: Thank you for feedback
Comment: Dear Reviewer 8Y8C,
Thank you very much for positive feedback! Please do not hesitate to reach us if you have any further questions or comments.
Best,
Paper 10759 Authors | Summary: The paper proposes STAR, a novel algorithm for unsupervised domain adaptation (UDA) of speech foundation models (e.g., one that has a decoder like Whisper). The UDA setting is the semi-supervised setting where unlabeled data from the target domain is available, and the speech foundation model is available. The unlabeled data is first pseudo-labeled and then used to train the ASR model. The paper specifically proposes the STAR Indicator, which combines the advantages of the auto-regressive confidence score and the attentive score (the attentive score is self-defined by the authors and derived from attention scores) to finally produce a score that can improve the fine-tuning process on pseudo-labels (named as "informed fine-tuning" by the authors and a method prevalent in literature where you guide the training process in a re-weighting manner).
Strengths: The strengths of the paper are as follows:
- The paper is well-written and easy to follow. The illustrations are also well-made.
- The algorithm proposed for uncertainty estimation is novel, to the best of my knowledge.
- The evaluation setup chosen is sufficient, and no further additions can be made to the best of my knowledge.
- I like the pre- and post-result analysis of the paper. Provides intuition into the proposed algorithm and results.
- Section 5.2 and 5.3 is interesting. These kind of analysis is important for speech papers and should be promoted (which is generally not found in other papers).
- The Appendix is well done and provides nice extra details
Weaknesses: - As also claimed by the authors, the overall methodology of using uncertainty estimation for improving UDA is not novel. Only the estimation score proposed is novel. However, these works are properly cited. This makes the paper sound, but not very exciting to me. However, I also acknowledge that this is not a primary reason to reject.
- I am not convinced why only Whisper (and some other models in Table 4) was employed for evaluation (a model that does not have many training details revealed). I have generally found methods evaluated solely on Whisper not to work in a real-world setting. For example, what would have happened if we had adapted a purely LibriSpeech-trained model to the domains mentioned in the paper with the STAR algorithm? Though I understand that the paper title says "Speech Foundation Models," this makes the applicability of STAR a bit narrower to me. I would leave this decision to the other reviewers.
- Some key baselines are missing: What happens to STAR without utterance-level filtering? Why was the LM-rescoring baseline not used? LM-rescoring is the most traditional method for UDA, where the language model is further trained on target-domain texts; maybe you can fine-tune the LM on pseudo transcripts? (Open to discussion if this is not valid.)
Technical Quality: 3
Clarity: 3
Questions for Authors: The algorithm is simple and sound. The evaluation setup is sufficient except the Whisper part. I might have further questions during the rebuttal period after I see other perspectives. But overall, the paper is well done (again, I do not find it too exciting but very sound) and I am inclined towards accepting. I hope the authors can respond to my points in Weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: - I don't see the Limitations of this paper discussed in the main paper. Can the authors please add it to the main paper? Limitations is a very important section of a NeurIPS paper. Additionally, can the authors please elaborate the limitations section with more details like: Latency and resource requirement over plain UDA fine-tuning, high noise scenarios where UDA may fail, etc.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer zXq9's valuable and constructive comments. Please find our detailed responses below:
- ***Q1: References to previous work***
We sincerely appreciate your feedback. While this is not a completely new topic, our proposed method only requires a downloadable foundation model and one hour of unlabeled speech to enhance the model in this domain without catastrophic forgetting. With the introduction of more speech foundation models, we believe this is a meaningful exploration.
- ***Q2: What would have happened if we had adapted a purely LibriSpeech-trained model to the domains?***
Thanks for your question. Allow us to point out that a model trained only on LibriSpeech is insufficiently robust to serve as a pseudo-labeler. For instance, taking the ATIS test set as an example, the LibriSpeech ASR model cannot recognise many city names. In comparison, Whisper, a widely used foundation model trained on much larger and more diverse datasets, is far more robust and can achieve one-to-many adaptation to accommodate different speech domains. Related discussions are included in Appendix A (Q1 & Q2).
- ***Q3: What happens to STAR without Utterance-level Filtering?***
Thanks for your question. The ablation study without utterance-level filtering is in Table 8 and discussed in Appendix D. Table 1 can no longer accommodate additional columns, and utterance-level filtering merely removes particularly difficult-to-recognize samples; its effect is similar across different methods, so we have placed it in the appendix.
- ***Q4: Why was the LM-rescoring baseline not used?***
Thanks for your suggestion. We need to point out that in the STAR experiments, each domain used only about one hour of unlabeled speech, which amounts to a few hundred utterances (Figure 3). This amount of text is insufficient to directly train, or even adapt, a useful LM for LM rescoring. Furthermore, all of our experiments were based on a single end-to-end model, while an additional LM-rescoring stage would increase complexity and make the system no longer purely end-to-end.
---
Rebuttal Comment 1.1:
Title: Thank You for the rebuttal
Comment: Thank You for the rebuttal. I am keeping my score which is already an accept. The paper takes a decent step towards advancing the domain.
---
Reply to Comment 1.1.1:
Title: Thank you for feedback
Comment: Dear Reviewer zXq9,
Thank you very much for the positive feedback! Please do not hesitate to reach out to us if you have any further questions or comments.
Best,
Paper 10759 Authors | Summary: This paper investigates the use of audio-only data to enhance ASR performance for domain adaptation in Speech Foundation Models. The approach is straightforward: recognition results are used to compute a confidence score for each token (e.g., BPE in this paper), which then weights the loss function, as shown in Equation (4). The key innovation is the computation of token-level confidence scores by adopting attention scores instead of the softmax output for confidence estimation. Unlike traditional methods that operate at the utterance level, this approach focuses on token-level confidence. The concept is simple: use the attention weight as the confidence score rather than the predicted next-word probability. The proposed STAR method combines information from both token-level probabilities and the attention matrix. ASR performance appears promising, especially when compared to the strong baseline model Whisper large v3.
Strengths: 1. I find the proposed confidence estimation using attention scores intriguing.
2. The authors conducted extensive experiments on various ASR corpora to demonstrate the effectiveness of the proposed STAR algorithms, along with comprehensive ablation studies.
3. Significant performance improvements are reported over the strong baseline, which is Whisper large v3.
Weaknesses: 1. Several details are missing. How is the attention matrix calculated given multiple layers and heads? Are the attention weights from cross-attention or self-attention used?
2. Do you need to tune the threshold for each corpus, or simply use a fixed threshold for all datasets? What is the specific value for the threshold lambda?
3. The definition of conflict is A_l^2/C_l, which seems intuitive. Is there any particular reason for using this formulation?
4. Many studies have attempted to use entropy from the output layer instead of probability for confidence estimation in end-to-end ASR systems.
5. I am interested in whether the informed fine-tuning shown in Equation (4) is proposed in this paper. The confidence score is used to filter utterances and weigh the importance of each word. Do you have any ablation studies demonstrating the effectiveness of this approach?
6. It would be better to combine Equations (6) and (7) into an equation array to clearly state the formulation.
7. In Equation (5), the attention score is computed using the attention weight for the current word to the previous words and from future words to the current word. Is there any particular reason for this choice? Is it acceptable for the A_l to be larger than 1?
Overall, this paper seems to rely heavily on empirical or intuitive settings, making it difficult to extend to a theoretically sound approach. Additionally, many key details are missing. It would be appreciated if the authors could provide more details and present the information more clearly.
I am open to reconsidering my rating based on the authors' rebuttal.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have listed my questions in the above section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate Reviewer LswZ's valuable and constructive comments, and we believe our detailed responses below will address your concerns about missing details. Please let us know if you have further questions or recommendations.
- ***Q1: Several missing details: Are the attention weights from cross-attention or self-attention used? How is the attention matrix calculated given multiple layers and heads?***
Thanks for your question. We use self-attention weights, as introduced in the paper (**line 65** and **line 180**).
In our experiments, the attention matrix is obtained by averaging all heads in the last transformer layer, as the higher layers learn more linguistic semantics. We also found that the performance is not sensitive to the selection of layers or heads: the last two layers, with a single head, can also achieve comparable performance. More details can be found in our submitted code.
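For concreteness, a minimal sketch of this head-averaging step (the array shapes and names are illustrative, not our exact implementation):

```python
import numpy as np

def head_averaged_attention(attn):
    """Average self-attention weights over all heads of the last layer.

    attn: array of shape (num_layers, num_heads, seq_len, seq_len), where
    each (seq_len, seq_len) slice is a row-stochastic attention matrix.
    """
    return attn[-1].mean(axis=0)  # last layer, averaged over heads

# toy example: 2 layers, 4 heads, sequence length 3
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 4, 3, 3))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # row softmax
A = head_averaged_attention(attn)
assert np.allclose(A.sum(axis=-1), 1.0)  # averaged rows remain distributions
```

Using the last two layers, or a single head, would only change the slicing above.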
- ***Q2: Do you need to tune the threshold for each corpus, or simply use a fixed threshold for all datasets? What is the specific value for the threshold lambda?***
Thanks for your question. We apply a fixed threshold of $\lambda=2$ for all corpora, and it works well generally, as mentioned in **line 264**.
- ***Q3: The definition of conflict is $A_l^2/C_l$, which seems intuitive. Is there any particular reason for using this formulation?***
Thanks for your question. The design of $A_l^2/C_l$ can be decoupled into two terms, $A_l/C_l$ and $A_l$, which means that to match the “conflict” criterion, we not only require $A_l$ to be much larger than $C_l$, but the absolute value of $A_l$ should also be large. This avoids special cases with two small scores where one is many times the other (e.g., $A_l = 0.1$, $C_l = 0.01$). We have mentioned this particular design in **lines 212 to 214** and **footnote 2**.
The specific equation defining “conflict” is not unique, as long as the above condition is roughly met.
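For concreteness, a minimal sketch of this criterion (illustrative only; the threshold and names follow the discussion above, not necessarily our exact implementation):

```python
def is_conflict(a_l, c_l, lam=2.0, eps=1e-8):
    """Flag a 'conflict' token: A_l^2 / C_l > lam requires both that A_l is
    much larger than C_l and that A_l itself is large, ruling out pairs of
    tiny scores with a large ratio (e.g., A_l = 0.1, C_l = 0.01)."""
    return a_l ** 2 / (c_l + eps) > lam

assert is_conflict(2.0, 0.5)       # 2.0^2 / 0.5 = 8 > 2
assert not is_conflict(0.1, 0.01)  # 0.1^2 / 0.01 = 1 <= 2
```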
- ***Q4: Many studies have attempted to use entropy from the output layer instead of probability for confidence estimation in end-to-end ASR systems.***
Thanks for your question. Entropy is indeed an alternative choice for confidence estimation. However, in our case, since the probability $P$ cannot reliably indicate pseudo-label quality (Figure 2, Figure 5, and Figure 6), the negative entropy $P\log P$, which is positively correlated with $P$, cannot serve as a reliable indicator either. Our preliminary results have confirmed this, but since $P$ is the more typical and intuitive choice, we only select $P$ as the confidence-score baseline.
- ***Q5: I am interested in whether the informed fine-tuning shown in Equation (4) is proposed in this paper. The confidence score is used to filter utterances and weigh the importance of each word. Do you have any ablation studies demonstrating the effectiveness of this approach?***
Thanks for your question. The informed fine-tuning with a confidence score in Eq. (4) follows prior works, as discussed in the Introduction (**lines 59 to 62**). However, we observe that the confidence score is unreliable in assessing pseudo labels, so we propose a reliable attentive score based on the self-attention matrix. Finally, we combine the conventional confidence score and the proposed attentive score into our final STAR indicator, which guides the token-level reweighting. On the other hand, utterance-level filtering is implemented by the Monte-Carlo sampling introduced in Section 3.3.
For experiments, Table 1 presents the ablation study of utterance-level filtering, and token-level reweighting (confidence score, attentive score, STAR score).
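A minimal sketch of this token-level reweighting, in the spirit of Eq. (4) (the names are illustrative and the exact weighting in the paper may differ):

```python
import math

def reweighted_nll(token_log_probs, token_weights):
    """Pseudo-label loss where each token's negative log-likelihood is
    scaled by its indicator score (e.g., a STAR-style token-level weight)."""
    assert len(token_log_probs) == len(token_weights)
    terms = [-w * lp for w, lp in zip(token_weights, token_log_probs)]
    return sum(terms) / len(terms)

# down-weighting a likely-wrong pseudo-label token reduces its influence
loss_uniform = reweighted_nll([math.log(0.9), math.log(0.1)], [1.0, 1.0])
loss_informed = reweighted_nll([math.log(0.9), math.log(0.1)], [1.0, 0.2])
assert loss_informed < loss_uniform
```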
- ***Q6: It would be better to combine Equations (6) and (7) into an equation array to clearly state the formulation.***
Thanks for your suggestion and careful check, we will revise it in the next version.
- ***Q7: In Equation (5), the attention score is computed using the attention weight for the current word to the previous words and from future words to the current word. Is there any particular reason for this choice? Is it acceptable for the A_l to be larger than 1?***
Thanks for asking this question. For the choice of history and future tokens in calculating the attentive score (Eq. (5)), we use both of them to capture the comprehensive global context, to better assess the role of the current token in terms of its semantic significance in the entire sentence. Table 7 presents an ablation study on this choice, indicating that both history and future tokens are helpful. $A_l$ is normalized by the sequence length after calculation, so the absolute value of $A_l$ does not matter; we will add these details in the next version.
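A minimal sketch of this aggregation (index conventions and the length normalization follow the description above; details may differ from Eq. (5) in the paper):

```python
import numpy as np

def attentive_score(attn, l):
    """Aggregate decoder self-attention around token l: attention paid by
    token l to its history plus attention paid to token l by future tokens,
    normalized by the sequence length.

    attn: (seq_len, seq_len) matrix with attn[i, j] = attention from i to j.
    """
    seq_len = attn.shape[0]
    history = attn[l, :l].sum()     # current token attending to the past
    future = attn[l + 1:, l].sum()  # future tokens attending back to token l
    return (history + future) / seq_len

rng = np.random.default_rng(1)
logits = rng.normal(size=(5, 5))
attn = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
scores = [attentive_score(attn, l) for l in range(5)]
assert all(0.0 <= s <= 1.0 for s in scores)
```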
---
Rebuttal 2:
Title: Regarding the deadline of author-reviewer discussion period
Comment: Dear Reviewer LswZ,
Thank you for your kind efforts in providing initial reviews for our work! We have taken them into careful consideration and provided the responses accordingly.
Since the deadline of author-reviewer discussion period is approaching, could you please take some time to confirm whether our responses have satisfactorily addressed your concerns? If you believe so, may you please consider adjusting your initial score accordingly? Please do not hesitate to reach us if you have any further questions or comments.
Best,
Paper 10759 Authors
---
Rebuttal Comment 2.1:
Title: Thank You for the rebuttal
Comment: I appreciate the authors' detailed rebuttal addressing the concerns I highlighted in my review. I think the rebuttal addresses some of my questions effectively. I have adjusted my score accordingly. | Summary: The paper proposes STAR, a novel ASR domain adaptation technique that requires no labeled data and only a few unlabeled samples. STAR utilizes the confidence score and self-attention score obtained during decoder inference to calculate the reliability score (STAR indicator) for each token. The score of each token is then used as a multiplier to adjust the fine-tuning loss, which is based on generated pseudo-labels. In addition, STAR employs utterance-level filtering to remove noisy predictions. Extensive experiments across various ASR models, datasets, and fine-tuning techniques demonstrate that STAR achieves significant accuracy improvements compared to the baseline self-training approach.
Strengths: * The logical flow of this paper is very interesting, and the authors provide empirical evidence for each step. The motivations behind the research and method are inspiring.
* The authors have conducted comprehensive experiments using multiple datasets, models, and noisy conditions. I appreciate the authors’ effort on this.
* STAR does not seem to suffer from catastrophic forgetting, and this is a very important advantage.
Weaknesses: * The proposed method is designed for a Transformer-based model with auto-regressive generative decoder architectures. As such, it may not be easy to adopt STAR for CTC or RNN-T-based ASR models (as the authors also discussed in Appendix A).
* It would be important to discuss the differences and similarities between STAR and noisy student-based training [1][2]. NST also employs pseudo-labeling, heavy filtering, and iterative refinement.
* [1] Pushing the Limits of Semi-Supervised Learning for Automatic Speech Recognition
* [2] Improved Noisy Student Training for Automatic Speech Recognition
Technical Quality: 3
Clarity: 4
Questions for Authors: * The attentive score A (Equation 5) seems to be affected by the total length of the generated transcript. The longer the transcript, the (potentially) higher the attentive score. In contrast, confidence scores are bounded to [0, 1]. Maybe missing a normalization term in Equation 5?
* Any intuitive reasons not to incorporate cross-attention scores?
* Comparing Table 1 and 2, the numbers of “Frozen” are the same but “Self-train” and “STAR” are different. For example, TED3 WER is (5.2, 4.9, 4.1) in Table 1 but (5.2, 5.3, 5.0) in Table 2. What’s the difference?
* How many beams are used in the beam search? Are the beam search-based pseudo-labels also used for self-training baselines?
* It would strengthen the paper’s claim if the pseudo-label and STAR score could be cross-transferred between different models (for example, Whisper-Large generates training resources for Whisper-Tiny).
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: * The paper adequately addresses the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely appreciate Reviewer vnSx's valuable and constructive comments. Your suggestions are also instructive for further analysis and our future work.
Please find detailed responses below:
- ***Q1: Applying STAR to CTC or RNN-T-based ASR models.***
Thanks for your feedback. As illustrated in Section 3, when applying STAR to CTC or RNN-T models, the auto-regressive Whisper is still employed as a pseudo-labeler to provide pseudo transcriptions, as well as token-level and utterance-level indicators. Therefore, the RNN-T model can be adapted based on the provided information. We acknowledge that the process requires compositional effort, but this connection to popular ASR variants also has high potential application value.
- ***Q2: Discussion of the differences and similarities between STAR and noisy student-based training [1][2].***
Thanks for your feedback. We will add a paragraph of related discussion to the paper. Although both STAR and NST contain a pseudo labeling and filtering process, they have several distinctions listed as follows:
- **Motivation:** STAR focuses on domain adaptation, and NST focuses on leveraging abundant unlabeled data to improve general ASR accuracy, where the unlabelled data are from similar domains (LibriSpeech and LibriLight datasets).
- **Data:** STAR only requires **1 hour** of unlabeled data from any target domain, NST requires both labeled data (960 hours) and abundant unlabeled data (60k hours).
- **Method:** STAR focuses on the token-level indicators for informed finetuning (which could be attributed to the strong linguistic ability of the Whisper decoder), and NST focuses on utterance-level data filtering.
- ***Q3: Missing a normalization term in Equation 5.***
Thanks for your valuable question. When we obtain the attention matrix, it is normalized according to the sequence length. We will clarify this point with a note.
- ***Q4: Why not use the cross-attention matrix?***
Thank you for this discussion. As we described in the paper, self-attention in the decoder better reflects the **linguistic acceptability** of the predicted transcription. Cross-attention, in contrast, is more directly linked to the acoustic representation, which is vulnerable to varying speech domains. Self-attention is text-based, and Whisper's multi-task pre-training also includes text-only learning objectives; therefore, self-attention potentially carries more linguistic knowledge. Our preliminary results also confirmed that the cross-attention matrix could not provide a reliable indicator.
- ***Q5: Difference between Table 1 and Table 2.***
Thanks for your question. Table 1 shows the main results where Whisper is both finetuned and tested on each domain individually. Table 2 aims to explore the forgetting issue of STAR, where Whisper is finetuned on the CHiME-4 dataset and then tested on other datasets. Therefore, Table 2 presents worse results than Table 1. We will clarify this in the captions of the tables.
- ***Q6: How many beams are used in the beam search? Are the beam search-based pseudo-labels also used for self-training baselines?***
Thanks for your question. For beam search, we set beam$=5$. Our preliminary results show that beam search-based pseudo labels show almost the same self-training performance as the greedy search ones. Therefore, for higher efficiency, we use greedy search only.
- ***Q7: Can Whisper-Large generate training resources for Whisper-Tiny?***
Thanks for the question. Yes, we confirm that STAR can distill knowledge from large models to smaller models (but not vice versa). We will include this advantage in the next version.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thank you for addressing my questions. I am keeping my score toward accept, and I believe this paper can inspire many following studies.
---
Reply to Comment 1.1.1:
Title: Thank you for feedback
Comment: Dear Reviewer vnSx,
Thank you very much for the positive feedback! Please do not hesitate to reach out to us if you have any further questions or comments.
Best,
Paper 10759 Authors | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
AdaPKC: PeakConv with Adaptive Peak Receptive Field for Radar Semantic Segmentation | Accept (poster) | Summary: The PeakConv (PKC) model specialized for radar signal analysis effectively characterizes the target signatures of radar signals. However, the fixed predefined peak receptive field limits the performance of PKC due to significant variations in target features and associated interference within radar signals. To solve this problem, the authors propose AdaPKC, aimed at adaptively adjusting the peak receptive fields in PKC through two data-adaptive band-pass filtering mechanisms. The experimental results show that the proposed method is advanced to some extent.
Strengths: Originality: This paper introduces an adaptive method for adjusting the PRF based on an existing baseline, demonstrating some innovation.
Quality: The paper is technically sound. The proposed dataset and the results of the proposed model are analyzed in detail.
Clarity: Overall, the paper is clearly written, although some sections require clearer articulation.
Significance: The general applicability of the proposed method needs further validation.
Weaknesses: 1.The explanation of Figure 1 is unclear, and the captions for Figures 2 and 3 do not sufficiently summarize the contents depicted, resulting in poor readability.
2.The experiments lack an evaluation of the model's computational efficiency.
3.The effectiveness of the proposed method requires further validation.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1.Please provide a clearer explanation of the content presented in Figure 1. Which part of the Figure 1(a) does the red ellipse in the first row of radar frequency map represent? Why does it disappear in the second radar frequency map of the first row? Why does the purple ellipse reappear in the second row? What do the corresponding image frames look like, and how are they related to these radar frequency maps? Please also supplement the captions for Figures 2 and 3 to facilitate reader comprehension.
2.In Section 3.1.1, "...we divide the observed radar signals into three subsets...". How are these subsets divided? How are the expressions for each segment in Equation 1 derived?
3.Please add an analysis of the computational efficiency of the model. How much computational cost is added by the two modules proposed?
4.Dilated convolution can also adaptively adjust the receptive field of the convolution kernel. What are the advantages of your method compared to dilated convolution techniques?
5. Would the method proposed in this paper still work if other baselines similar to PKC was used?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See Weakness and Questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your constructive and thoughtful comments. We appreciate the recognition of the strengths of our work: the innovation of our method, technically sound paper and overall clarity in writing. We are glad to answer all your questions.
**Q1**: The correspondence between interference in radar frequency maps and that in image frames presented in Figure 1.
**A1**:
Thank you for your constructive feedback. However, the fluctuating interference in the radar frequency map is indeed difficult to correlate with the optical images, primarily due to the following reasons:
1. **Dissimilar Coordinate Systems and Fields of View**: The radar frequency maps and the optical images have different coordinate-system definitions and fields of view, making it challenging to establish a direct correspondence between them. For instance, in Range-Doppler (RD) maps, targets that are spatially distinguishable in the optical image but share the same radial distance will be compressed into the same range bin of the RD map, thus probably becoming indistinguishable. Furthermore, since the Doppler domain reflects the velocity of the target, it cannot be mapped into the optical image.
2. **Different Imaging Mechanisms**: The radar frequency maps and optical images are formed through distinct mechanisms. The amplitude in a radar frequency map actually represents the strength of the received echoes, which can be affected by invisible interference noise in the optical spectrum and system noise within the radar system itself. These factors do not directly translate into optical images, making it difficult to visually correlate the fluctuating interference seen in the radar frequency maps with the corresponding optical images.
Therefore, while we can observe fluctuating interference in the radar frequency map, it is challenging to visualize this interference using the corresponding optical images.
**Q2**: A clearer caption for Figure 2 and 3.
**A2**: We will add the following descriptions into the captions of corresponding figures.
1. *Caption for Figure 2:* The illustration of AdaPRF in AdaPKC$^\xi$. (a) illustrates the definition of PRF in PKC, whose area is governed by the reference bandwidth and guard bandwidth; (b) describes the estimation process of AdaPRF in AdaPKC$^\xi$, including denoting K candidate PRFs for each CUT, translating these PRFs into metric scores, and finally selecting an appropriate PRF as the AdaPRF with these metric scores.
2. *Caption for Figure 3:* The illustration of AdaPRF in AdaPKC$^\theta$. (a) illustrates an example of candidate PRFs in AdaPKC$^\theta$, where the guard bandwidth is in quadruple form; (b) describes the flowchart of the optimal guard bandwidth estimation network, which consists of two parallel branches that sample representative points in their corresponding directions, then automatically measures and selects the optimal guard bandwidth.
**Q3**: Analysis of the computational efficiency of the model.
**A3**: Please see our comments to all authors that clarify the **[Complexity and FPS]** of our work and PKC.
**Q4**: The effectiveness of the proposed method requires further validation.
**A4**: The effectiveness of our proposed method has been widely validated within multi-view and single-view radar semantic segmentation frameworks and across three large-scale radar datasets. We are also collecting new real-measured radar datasets to validate the effectiveness of our method and other works more extensively.
**Q5**: More clearer explanation of the division of observed radar signals and derivation of Eq. (1).
**A5**: It is natural to divide the radar signals into three subsets: (1) signals reflected from a target, $S_t$; (2) noise with signals that leak out of the target, $S_{t-n}$; and (3) noise, $S_n$. For simplicity, we introduce an attenuation factor $\eta<1$ to indicate the leaking of signals from targets. For some CUT $x_c=\psi(s;W)$ and its candidate reference unit $x_r=\psi(s';W)$,
- if $s'\in S_t$, then $s'$ and $s$ belong to the same target, so $s'=s$ and $E(x_c x_r^T)=E(\psi(s;W)\cdot \psi(s';W)^T)=E(||\psi(s;W)||_2^2)$
- if $s'\in S_{t-n}$, then $s'=\eta\cdot s$ and $E(x_c x_r^T)=\eta E(||\psi(s;W)||_2^2)$
- if $s'\in S_{n}$, then $E(x_c x_r^T)=E(\psi(s;W))\cdot E(\psi(s';W)^T)=0$
From this equation, we can see that the inner product transformation assigns three statistical boundaries to $x_r$ depending on which of the three subsets it comes from, and this property facilitates the subsequent localization of reference units from $S_{t-n}$.
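These three boundaries can be checked numerically (a simplified sketch assuming, for illustration only, that the leaked reference feature is simply $\eta x_c$; the actual $\psi$ is a learned mapping):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n, eta = 16, 20000, 0.5
x_c = rng.normal(size=(n, d))  # CUT features, so E(||x_c||^2) = d

same = np.einsum('nd,nd->n', x_c, x_c).mean()        # s' in S_t:     -> d
leak = np.einsum('nd,nd->n', x_c, eta * x_c).mean()  # s' in S_{t-n}: -> eta * d
x_n = rng.normal(size=(n, d))                        # independent noise
noise = np.einsum('nd,nd->n', x_c, x_n).mean()       # s' in S_n:     -> 0

# three well-separated statistical boundaries
assert abs(same - d) < 1.0 and abs(leak - eta * d) < 1.0 and abs(noise) < 0.5
```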
**Q6**: Advantages of AdaPKC compared to dilated convolution techniques.
**A6**: Compared to dilated convolution, AdaPKC has the following advantages: (1) The receptive field of dilated convolution can only be manually adjusted by setting different dilation rates, whereas AdaPKC can adaptively adjust the receptive field for each CUT based on the distribution of the CUT and its surrounding area; (2) While dilated convolution indirectly establishes a guard field around the CUT, it always couples the CUT with its surrounding points during interference estimation and band-pass filtering. In contrast, AdaPKC decouples the CUT from the surrounding points, explicitly estimates interference noise, and performs adaptive band-pass filtering for each CUT.
**Q7**: Would the method proposed in this paper still work if other baselines similar to PKC was used?
**A7**: Yes! Our AdaPKC is indeed a universal convolution operator. If you review our code, you will find that AdaPKC can be used just like any other convolution operator and can be seamlessly integrated into any convolutional framework designed for radar signal processing.
---
Rebuttal Comment 1.1:
Title: Reply for rebuttal
Comment: Thanks to the authors for their explanation. I suggest the current version be considered for final acceptance.
---
Reply to Comment 1.1.1:
Title: Replying to Official Comment by Reviewer i1nF
Comment: Dear Reviewer i1nF,
We are genuinely appreciative of your decision to upgrade the score to a weak accept and to recommend our work for final acceptance! Your insightful feedback will be incorporated into the revision.
Additionally, we would like to make an effort to see if we can earn an even higher evaluation from you. ***If you have any further questions or suggestions, please don't hesitate to reach out!*** We are eager to provide any additional clarification needed and look forward to continuing discussions that will enrich the revision of our paper.
Warm regards,
Authors of Paper 9766 | Summary: This paper presents a radar semantic segmentation method, AdaPKC, which combines PeakConv and Adaptive Peak Receptive Field (APRF) concepts. The authors demonstrate the extensive applicability of AdaPKC in radar perception, including autonomous driving, drone surveillance, and ocean monitoring. The method significantly enhances radar semantic segmentation performance with robustness and scalability.
Strengths: 1.The method's performance and efficiency have been demonstrated through experiments to exceed a strong baseline in radar semantic segmentation.
2.The author successfully achieved incremental optimization in PeakConv [1], reaching state-of-the-art performance.
3.Rigorous ablation studies were conducted, providing solid evidence of the proposed method's efficacy.
4.The paper includes a good review of existing work and contributes to the development of radar semantic segmentation.
Weaknesses: This method builds upon existing methods and is an improved version of the existing PeakConv (PKC). It combines them in a novel way. The novelty is limited for NeurIPS.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1.Can you provide a more detailed analysis of the computational complexity?
2.Can you provide a more detailed analysis of the impact of the proposed AdaPKC on real-time performance?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. We appreciate the recognition of the strengths of our work: performance superior to the SoTA, solid evidence of the method's efficacy from rigorous ablation studies, and contribution to the development of radar semantic segmentation. We are glad to answer all your questions.
**Q1**: This method improves PKC in a novel way. The novelty is limited for NeurIPS.
**A1**: We apologize for not fully summarizing the novelty of our work. Next, we provide a detailed explanation of the innovative aspects of our research.
- Conceptually and in principle, to the best of our knowledge, adaptive peak receptive field (AdaPRF) in this work is **the first attempt** specifically tailored for radar signal processing to dynamically adjust the receptive field (RF) for convolution operators, which is a further great breakthrough compared with PKC. Its design is based on both (1) radar signal principles, including radar signal generating mechanism, the difference between target and interference distribution, and (2) deep learning principles, such as differentiable high-dimensional representation learning for data-driven end-to-end network optimization. With AdaPRF the original PKC is updated into a more data-adapted version, AdaPKC, and is verified in different convolution frameworks, which can also be **extended to more radar-oriented learning algorithms and wider range of radar detection scenarios**. Since **more adaptive interference estimation is always a key topic** in radar signal processing, **AdaPKC is novel and significant.**
- Additionally, **although PKC has been proposed, its inherent limitations cannot be ignored.** The continuous introduction and validation of new ideas and methods is essential for developing new radar sensing algorithms that better meet practical needs, just like the YOLO series.
- From a design perspective, **AdaPRF is innovative compared to existing methods**. In this paper, we first analyze existing mature works in both the RSS and deep learning fields, then deeply examine their shortcomings from a radar perspective, such as PKC's fixed-RF issue, the difficulty of making CFAR's dynamic RF learnable, and the limitations of DCN's dynamic RF mechanism in handling radar signals. Finally, rather than simply using or combining existing research, we propose AdaPKC and validate it with various real-measured radar data, providing new insights for this research area.
- Technically, it is worth noting that **implementing AdaPRF is a non-trivial task**. We have summarized and explained the main challenges and solutions in our paper and code, offering valuable experience for future research, such as:
- How to reliably assess the reference points belonging to appropriate PRF in high-dimensional radar signal representation;
- How to design the assessment method to meet the requirements of smooth differentiability and parallel computation of deep learning;
- How to handle the changing number of sampling points for fixed convolution kernels due to dynamically changing receptive fields;
- How to optimize inference to meet the requirements of practical radar signal processing application.
- Finally, we emphasize that the goal of this work is to meet the practical needs of radar applications and contribute an effective and efficient component for the next generation of radar signal processing paradigms. We are well aware of the many bottlenecks in existing radar signal processing, which has been stagnant for many years, and of the challenges in applying deep learning to radar. Therefore, our goal is to provide a practical extension for new radar signal processing methods, freeing them from existing rigid workflows, which is of significant industrial value.
**For the conference of NeurIPS**, our work presents a practical study of adapting deep learning to radar recognition applications, aligning with the Call for Papers sections of NeurIPS on Applications and Deep Learning, areas that are strongly represented at NeurIPS every year.
**Q2**: A more detailed analysis of the computational complexity and real-time performance.
**A2**: We computed the computational complexity and frame rates of AdaPKC and PKC on a Tesla V100 GPU and summarize them as follows. Compared to PKC, AdaPKC incurs minimal additional computational complexity and inference-speed overhead.
| Dataset | Conv Type | GMACs | Runtime (ms) | FPS |
|---| ----------- | ----------- | :---: | --- |
| CARRADA | PKC | 109.8 | 47.5 | 21.1 |
| CARRADA | $\text{AdaPKC}^{\xi}$ | 109.8 | 48.4 | 20.7 |
| CARRADA | $\text{AdaPKC}^{\theta}$ | 110.1 | 53.2 | 18.8 |
| KuRALS | PKC | 162.4 | 49.4 | 20.2 |
| KuRALS | $\text{AdaPKC}^{\xi}$ | 162.4 | 50.6 | 19.8 |
| KuRALS | $\text{AdaPKC}^{\theta}$ | 162.6 | 52.0 | 19.3 |
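For reference, per-frame runtime and FPS figures like those above are typically obtained by averaging many timed forward passes after a warm-up phase. The sketch below is a generic, hypothetical timing harness, not our actual benchmark script; the `forward` callable stands in for one forward pass of the network, and on a GPU one would additionally synchronize the device before each clock read.

```python
import time

def measure_fps(forward, n_warmup=10, n_runs=100):
    """Average per-call runtime (ms) and FPS for a callable `forward`.

    Warm-up runs are excluded so one-time setup costs (allocation,
    kernel caching) do not skew the average. On a GPU, insert a device
    synchronization before each perf_counter() read.
    """
    for _ in range(n_warmup):
        forward()
    start = time.perf_counter()
    for _ in range(n_runs):
        forward()
    elapsed = time.perf_counter() - start
    runtime_ms = 1000.0 * elapsed / n_runs
    return runtime_ms, 1000.0 / runtime_ms

# toy stand-in for one forward pass
runtime_ms, fps = measure_fps(lambda: sum(i * i for i in range(10_000)))
```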
---
Rebuttal 2:
Comment: Thank you for your response and clarification on the novelty of the work, particularly the introduction of AdaPRF.
Your explanation effectively highlights the innovative aspects of AdaPKC, including the integration of radar signal principles with deep learning techniques, and the unique challenges your method addresses. I appreciate your further explanation of AdaPKC and the additional analysis of computational complexity, which demonstrates a better balance between extra computational complexity and inference speed overhead compared to PKC.
Overall, the author has addressed some of the issues raised during the review process, reinforcing the significance of AdaPKC as an innovative approach in the field of radar signal processing. I look forward to seeing the proposed improvements.
---
Rebuttal Comment 2.1:
Title: Replying to Official Comment by Reviewer mTDV
Comment: Dear Reviewer mTDV,
We are delighted that our rebuttal has effectively clarified the novelty of our work, and we greatly appreciate your recognition of the better balance AdaPKC achieved between computational complexity and inference speed overhead.
We are pleased to inform you that we have thoroughly addressed all the questions and concerns you raised in your reviews. However, **we are puzzled as to why our work did not fully gain your recognition, resulting in a missed opportunity for a higher evaluation**.
***Please do not hesitate to reply if you have any further questions or suggestions!*** We look forward to improving the clarity and depth of our work with your valuable input!
Warm regards,
Paper 9766 Authors
---
Rebuttal Comment 2.2:
Title: We sincerely appreciate your decision to upgrade the score!
Comment: Dear Reviewer mTDV,
We sincerely appreciate your decision to upgrade the score to a borderline accept! Your insightful feedback will be incorporated into the revision.
Wishing you a wonderful day!
Paper 9766 Authors | Summary: This paper proposes an idea of adaptive peak receptive field, and upgrades PKC to AdaPKC based on this idea. Beyond that, a novel fine-tuning technology to further boost the performance of AdaPKC-based RSS networks is presented.
Strengths: The adaptive version of PeakConv (PKC) is motivated by the adaptive selection of reference cells in the classical radar detector, CFAR.
Weaknesses: The adaptive version of PeakConv (PKC) is considered to be incremental work. The numerical results also indicated the AdaPKC performance improvement over PKC is very limited.
The comparison with classical CFAR with adaptive reference cell selection is not included in the validation part.
Technical Quality: 2
Clarity: 3
Questions for Authors: It is not clear how much annotated radar data is sufficient to train the proposed network.
What is the complexity of the proposed network? Is it running faster than CFAR with adaptive reference cell selection?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Limitation of the proposal is not discussed in the submission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments. We are glad to answer all your questions.
**Q1:** The adaptive version of PeakConv (PKC) is considered to be incremental work.
**A1:** Please see our general response to all reviewers, which clarifies the **[Novelty]** of our work.
**Q2:** AdaPKC performance improvement over PKC is very limited.
**A2:** We are sorry that the illustration of the performance comparison may not have been clear enough, which caused some confusion. However, the performance improvement of this work is not small for the Radar Semantic Segmentation (RSS) task.
- **From the perspective of performance improvements in previous state-of-the-art methods**, **these performance improvements are rather hard and slow**. Unlike optical images, radar frequency maps lack shape information for the targets and contain significant interference, making the RSS task particularly challenging. TMVA-Net shows improvements over RAMP-CNN only in the RA view, while exhibiting decreased performance in the RD view; TransRadar introduces numerous Transformer and convolution modules, yet offers only an average improvement of 1.1% over TMVA-Net in the RD view; and PKCIn-Net demonstrates almost no improvement over T-RODNet in the RA view. In summary, from the initial FCN model to our AdaPKC-Net, the mDice in RD view has advanced only from 66.3% to 74.0%. In contrast, our models, with modifications only to fundamental convolution kernels, achieve an average improvement of 1.4% over SoTA methods in the RD view of the CARRADA dataset and an average improvement of 1.5% in the RA view, which is a notably significant enhancement.
- **From the perspective of comparison with DCNv2,** we compare the performance improvements of AdaPKC and DCNv2 over their respective baseline convolution operators ***in Table R2***, and the results show that even when PKC has already reached much higher performance than DCNv1 on the CARRADA dataset, AdaPKC still achieves a larger performance increase than DCNv2, highlighting the effectiveness of our proposed approach.
- **From the perspective of comparison with PKC,** we present the performance improvements of AdaPKC on the Ku band radar dataset ***in Table R3***. It can be observed that even when PKC fails to deliver satisfactory results, AdaPKC is still able to improve the situation and achieve more significant performance enhancements over PKC.
**Q3:** The comparison with classical CFAR with adaptive reference cell selection is not included in the validation part.
**A3:** Thank you for your constructive feedback. We have added a comparison with various CFAR methods ***in Table R1*** of the Rebuttal PDF, and the comparison reveals several limitations of CFAR methods: (1) they can only detect foreground targets but cannot distinguish specific categories of these targets; (2) they show poor target identification performance, struggling with complex target and interference scenarios; (3) they rely on manual parameter tuning and lack adaptive learning capabilities. Therefore, it is both necessary and practical to improve radar target perception paradigms using deep learning methods (it is also one of the motivations for both PKC and AdaPKC).
**Q4:** How many annotated radar data is sufficient to train the proposed network?
**A4:** We use the same amount of annotated data as in previous works such as PKC. For the CARRADA dataset, the training set includes a total of 8088 labeled frames. For the KuRALS dataset, the training set comprises 2064 labeled frames.
**Q5:** Complexity of the proposed network.
**A5:** We computed the computational complexity and frame rates of AdaPKC and PKC on a Tesla V100 GPU and summarize them as follows. Compared to PKC, AdaPKC incurs minimal additional computational complexity and inference-speed overhead.
| Dataset | Conv Type | GMACs | Runtime (ms) | FPS |
|---| ----------- | ----------- | :---: | --- |
| CARRADA | PKC | 109.8 | 47.5 | 21.1 |
| CARRADA | $\text{AdaPKC}^{\xi}$ | 109.8 | 48.4 | 20.7 |
| CARRADA | $\text{AdaPKC}^{\theta}$ | 110.1 | 53.2 | 18.8 |
| KuRALS | PKC | 162.4 | 49.4 | 20.2 |
| KuRALS | $\text{AdaPKC}^{\xi}$ | 162.4 | 50.6 | 19.8 |
| KuRALS | $\text{AdaPKC}^{\theta}$ | 162.6 | 52.0 | 19.3 |
**Q6:** Is it running faster than CFAR with adaptive reference cell selection?
**A6: *In Table R1*** of the Rebuttal PDF, we compare the FPS of AdaPKC with CFAR methods. Currently, AdaPKC's inference speed is lower than that of CFAR methods, constrained by the original inference speed of PKC. However, the detection performance of CFAR methods is much worse than that of AdaPKC. Furthermore, we intend to leverage CUDA acceleration to improve AdaPKC's inference speed going forward.
**Q7:** Limitation of the proposal is not discussed in the submission.
**A7:** Sorry for the confusion! Due to the page limit, we have detailed the limitations in Section G of the Appendix.
---
Rebuttal 2:
Title: We look forward to further improving the clarity and depth of our work with your valuable input!
Comment: Dear Reviewer,
We want to express our sincere gratitude for the time and effort you've dedicated to reviewing our paper. We're pleased to inform you that we've taken great care in addressing each of the questions and concerns you raised in your reviews. ***Please do not hesitate to reply if you have any further questions or suggestions!*** We look forward to further improving the clarity and depth of our work with your valuable input!
Warm regards,
Paper 9766 Authors
---
Rebuttal 3:
Title: We sincerely hope you could take the time to review our responses!
Comment: Dear Reviewer qbsg,
We have thoroughly analyzed the questions and concerns you raised in your reviews, dedicating substantial time and effort to providing comprehensive explanations and corresponding revisions. We believe our responses effectively address the issues you raised. **We sincerely hope you could take the time to review our responses**. If our response is **adequate**, we kindly ask you to consider a **fair score upgrade**. Should you have any **other concerns**, we are eager to engage in **further discussion with you**. Thank you for your valuable input in enhancing the quality of our paper, and we also appreciate your respect for our hard work.
9766 Authors | Summary: This paper works on the improvement of Radar semantic segmentation. Motivated by the limitation of learning ability of the SoTA PKC method due to the fixed peak receptive field (PRF), an adaptive version named AdaPKC is proposed. The method can be metric-based and learning-based. The advantages of the proposed method are validated by the extensive experiments on two datasets.
Strengths: - The motivation of the study is practical, and the paper is well-written and easy to follow. The figures are well-illustrated.
- The experiments, visualization and analysis are extensive.
- The proposed AdaPKCs outperform the state-of-the-art baselines.
Weaknesses: - The novelty of the proposed method is limited, considering the existing PKC work.
- Although the method is practical and lightweight, the performance gain is not significant.
- I also have concerns about the practicality of the problem setting for radar semantic segmentation in this work. It appears to be more similar to the object detection task found in other datasets. Additionally, other modalities such as cameras or LiDAR are usually available and can provide complementary information even in adverse weather conditions. Moreover, scanning Radar and 4D Radar sensors (such as ORR, Radiate, and K-Radar datasets), which offer much higher resolution, are also becoming increasingly popular.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How are the frame rates for $\text{AdaPKC}^{\theta}$, $\text{AdaPKC}^{\xi}$, and PKC?
- Is there any relationship between the guard bandwidth and the specifics of the radar sensor?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitation of the evaluation of the method for pulse-Doppler radar is discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful comments on our work. We appreciate the recognition of the strengths of our work: the practical motivation and good presentation, extensive experiments and analysis, and superior performance over SoTA. We are glad to answer all your questions.
**Q1:** The novelty of the proposed method is limited, considering the existing PKC work.
**A1:** Please see our general response to all reviewers, which clarifies the **[Novelty]** of our work.
**Q2**: Although the method is practical and lightweight, the performance gain is not significant.
**A2**: We are sorry that the illustration of the performance comparison may not have been clear enough, which caused some confusion. However, the performance improvement of this work is not small for the Radar Semantic Segmentation (RSS) task.
- **From the perspective of performance improvements in previous state-of-the-art methods**, **these performance improvements are rather hard and slow**. Unlike optical images, radar frequency maps lack shape information for the targets and contain significant interference, making the RSS task particularly challenging. TMVA-Net shows improvements over RAMP-CNN only in the RA view, while exhibiting decreased performance in the RD view; TransRadar introduces numerous Transformer and convolution modules, yet offers only an average improvement of 1.1% over TMVA-Net in the RD view; and PKCIn-Net demonstrates almost no improvement over T-RODNet in the RA view. In summary, from the initial FCN model to our AdaPKC-Net, the mDice in RD view has advanced only from 66.3% to 74.0%. In contrast, our models, with modifications only to fundamental convolution kernels, achieve an average improvement of 1.4% over SoTA methods in the RD view of the CARRADA dataset and an average improvement of 1.5% in the RA view, which is a notably significant enhancement.
- **From the perspective of comparison with DCNv2**, ***in Table R2*** we compare the performance improvements of AdaPKC and DCNv2 over their respective baseline convolution operators, and the results show that even when PKC has already reached much higher performance than DCNv1 on CARRADA, AdaPKC still achieves a larger performance increase than DCNv2, highlighting the effectiveness of our proposed approach.
- **From the perspective of comparison with PKC**, ***in Table R3***, we present the performance improvements of AdaPKC on the Ku band radar dataset. It can be observed that even when PKC fails to deliver satisfactory results, AdaPKC is still able to improve the situation and achieve more significant performance enhancements over PKC.
**Q3**: The practicality of the problem setting for radar semantic segmentation (RSS) in this work.
**A3**: We clarify each of the concerns you have raised:
- ***“It appears to be more similar to the object detection task found in other datasets”.*** In datasets like CRUW, radar object detection (ROD) tasks use a single point on the radar frequency map as the target label. However, in practical scenarios, multiple positions of a target reflect echoes, and the Fast Fourier Transform used in generating radar frequency maps causes unavoidable spectral expansion. This results in more than one pixel belonging to the target region, so the mask labels in the RSS task better cover the target range. Furthermore, during training for ROD tasks, researchers (e.g., RODNet, T-RODNet) typically expand the single-point labels using methods like Gaussian smoothing to better cover the target area, whereas RSS labels inherently match the target distribution without requiring such preprocessing.
- ***“Other modalities such as cameras or LiDAR are usually available and can provide complementary information even in adverse weather conditions”.*** This work focuses on developing general perception methods for different radar systems and in various scenarios, while modalities such as cameras and LiDAR may fail to provide effective information in certain scenarios. For instance, the Ku-band radar used in our work for UAV surveillance and marine monitoring tasks has a maximum detection range of 6375 meters, far exceeding the effective detection range of cameras and LiDAR.
- ***“Scanning Radar and 4D Radar sensors (such as ORR, Radiate, and K-Radar datasets), which offer much higher resolution, are also becoming increasingly popular”.*** High-resolution radars are more expensive and generate much larger data volumes within the same detection range, making them unaffordable for the aforementioned long-range detection scenarios. Additionally, datasets including ORR, Radiate, and K-Radar only provide bounding box labels, which offer a lower level of granularity in depicting target areas compared to the mask labels used in RSS tasks.
**Q4**: Frame rates for AdaPKC and PKC.
**A4**: Please see our general response to all reviewers, which illustrates the **[Complexity and FPS]** of our work and PKC.
**Q5**: Relationship between the guard bandwidth and the specifics of the Radar sensor.
**A5**: The guard band is indeed influenced by the specific characteristics of the radar sensor. Setting the guard band ensures that the energy at the center point does not leak into the interference estimation process. Appropriate guard band settings are affected by: i) the electromagnetic scattering characteristics of the target, including the effective radar cross section (RCS), surface material, and motion characteristics; ii) the detection environment of the radar, such as environmental clutter and weather, which causes different interference distributions; and iii) the radar's own operating mode, frequency band, waveform modulation, and transmit power, which result in different range and Doppler resolutions. In short, many factors affect the guard bandwidth setting. Hence, motivated by the impact of these factors on the guard bandwidth, we designed an adjustment mechanism that is both automatically learned (absent in CFAR) and data-driven (missing in PKC), which leads to AdaPKC.
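As background, the role of guard cells in classical cell-averaging CFAR can be illustrated with a minimal 1-D sketch. This is purely illustrative and unrelated to our implementation; the window sizes and the threshold factor `alpha` are assumptions chosen for the toy example.

```python
def ca_cfar(signal, n_guard=2, n_ref=4, alpha=3.0):
    """1-D cell-averaging CFAR: for each cell under test (CUT), estimate
    interference from n_ref reference cells on each side, skipping
    n_guard guard cells adjacent to the CUT so target energy does not
    leak into the estimate. Declare a detection when CUT > alpha * estimate."""
    detections = []
    n = len(signal)
    for i in range(n):
        ref = []
        # leading reference window (skips the guard cells next to the CUT)
        for j in range(i - n_guard - n_ref, i - n_guard):
            if 0 <= j < n:
                ref.append(signal[j])
        # trailing reference window
        for j in range(i + n_guard + 1, i + n_guard + n_ref + 1):
            if 0 <= j < n:
                ref.append(signal[j])
        if ref and signal[i] > alpha * (sum(ref) / len(ref)):
            detections.append(i)
    return detections

# a strong peak at index 10 over a unit-level noise floor
sig = [1.0] * 21
sig[10] = 20.0
print(ca_cfar(sig))  # → [10]
```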
---
Rebuttal 2:
Title: We look forward to further improving the clarity and depth of our work with your valuable input!
Comment: Dear Reviewer,
We want to express our sincere gratitude for the time and effort you've dedicated to reviewing our paper. Your feedback has proven to be invaluable in elevating the quality of our work. We're pleased to inform you that we've taken great care in addressing each of the questions and concerns you raised in your reviews. ***Please do not hesitate to reply if you have any further questions or suggestions!*** We look forward to improving the clarity and depth of our work with your valuable input!
Warm regards,
Paper 9766 Authors
---
Rebuttal 3:
Title: We sincerely hope you could take the time to review our responses!
Comment: Dear Reviewer ixjc,
We have thoroughly analyzed the questions and concerns you raised in your reviews, dedicating substantial time and effort to providing comprehensive explanations and corresponding revisions. We believe our responses effectively address the issues you raised. **We sincerely hope you could take the time to review our responses**. If our response is **adequate**, we kindly ask you to consider a **fair score upgrade**. Should you have any **other concerns**, we are eager to engage in **further discussion with you**. Thank you for your valuable input in enhancing the quality of our paper, and we also appreciate your respect for our hard work.
9766 Authors | Rebuttal 1:
Rebuttal: We extend our gratitude to the reviewers for their valuable feedback. In this section, we commence by tackling the concerns that have been collectively raised. These shared concerns correspond to the three keywords in the title:
**[Novelty] What is the novelty of this work compared to existing PKC and other works?**
- Conceptually and in principle, to the best of our knowledge, the adaptive peak receptive field (AdaPRF) in this work is **the first attempt** specifically tailored to radar signal processing that dynamically adjusts the receptive field (RF) of convolution operators, a substantial advance over PKC. Its design is based on both (1) radar signal principles, including the radar signal generating mechanism and the difference between target and interference distributions, and (2) deep learning principles, such as differentiable high-dimensional representation learning for data-driven end-to-end network optimization. With AdaPRF, the original PKC is updated into a more data-adapted version, AdaPKC, which is verified in different convolution frameworks and can also be **extended to more radar-oriented learning algorithms and a wider range of radar detection scenarios**. **Since more adaptive interference estimation is always a key topic** in radar signal processing, **AdaPKC is novel and significant.**
- Additionally, **although PKC has been proposed, its inherent limitations cannot be ignored.** The continuous introduction and validation of new ideas and methods is essential for developing new radar sensing algorithms that better meet practical needs, just like the YOLO series.
- From a design perspective, **AdaPRF is innovative compared to existing methods**. In this paper, we first analyze existing mature works in both the RSS and deep learning fields, then deeply examine their shortcomings from a radar perspective, such as PKC's fixed-RF issue, the difficulty of making CFAR's dynamic RF learnable, and the limitations of DCN's dynamic RF mechanism in handling radar signals. Finally, rather than simply using or combining existing research, we propose AdaPKC and validate it with various real-measured radar data, providing new insights for this research area.
- Technically, it is worth noting that **implementing AdaPRF is a non-trivial task**. We have summarized and explained the main challenges and solutions in our paper and code, offering valuable experience for future research, such as:
- How to reliably assess the reference points belonging to appropriate PRF in high-dimensional radar signal representation;
- How to design the assessment method to meet the requirements of smooth differentiability and parallel computation of deep learning;
- How to handle the changing number of sampling points for fixed convolution kernels due to dynamically changing receptive fields;
- How to optimize inference to meet the requirements of practical radar signal processing application.
- Finally, we emphasize that the goal of this work is to meet the practical needs of radar applications and contribute an effective and efficient component for the next generation of radar signal processing paradigms. We are well aware of the many bottlenecks in existing radar signal processing, which has been stagnant for many years, and of the challenges in applying deep learning to radar. Therefore, our goal is to provide a practical extension for new radar signal processing methods, freeing them from existing rigid workflows, which is of significant industrial value.
**[Performance Improvement] The performance gain is not significant.**
We are sorry that the illustration of the performance comparison may not have been clear enough, which caused some confusion. However, the performance improvement of this work is not small for the Radar Semantic Segmentation (RSS) task.
- **From the perspective of performance improvements in previous state-of-the-art methods**, **these performance improvements are rather hard and slow**. Unlike optical images, radar frequency maps lack shape information for the targets and contain significant interference, making the RSS task particularly challenging. TMVA-Net shows improvements over RAMP-CNN only in the RA view, while exhibiting decreased performance in the RD view; TransRadar introduces numerous Transformer and convolution modules, yet offers only an average improvement of 1.1% over TMVA-Net in the RD view; and PKCIn-Net demonstrates almost no improvement over T-RODNet in the RA view. In summary, from the initial FCN model to our AdaPKC-Net, the mDice in RD view has advanced only from 66.3% to 74.0%. In contrast, our models, with modifications only to fundamental convolution kernels, achieve an average improvement of 1.4% over SoTA methods in the RD view of the CARRADA dataset and an average improvement of 1.5% in the RA view, which is a notably significant enhancement.
- **From the perspective of comparison with DCNv2**, ***in Table R2*** we compare the performance improvements of AdaPKC and DCNv2 over their respective baseline convolution operators, and the results show that even when PKC has already reached much higher performance than DCNv1 on the CARRADA dataset, AdaPKC still achieves a larger performance increase than DCNv2, highlighting the effectiveness of our proposed approach.
- **From the perspective of comparison with PKC**, ***in Table R3***, we present the performance improvements of AdaPKC on the Ku band radar dataset. It can be observed that even when PKC fails to deliver satisfactory results, AdaPKC is still able to improve the situation and achieve more significant performance enhancements over PKC.
**[Complexity and FPS] The computation complexity and frame rates of AdaPKC and PKC.**
***In Tables R4 and R5*** of the rebuttal PDF, we summarize the computational complexity and frame rates of AdaPKC and PKC. Compared to PKC, AdaPKC incurs minimal additional computational complexity and inference-speed overhead.
Pdf: /pdf/b831f3132357d15f6825d380d7c802d1a601957a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Masked Pre-training Enables Universal Zero-shot Denoiser | Accept (poster) | Summary: The paper proposes a novel zero-shot image denoising method named Masked Pre-train then Iterative fill (MPI). This method leverages a pre-trained model on vast natural images using a masking strategy to learn generalized image distributions, enabling effective denoising without prior knowledge of the specific noise type.
Strengths: - Novel and sound idea, with clear benefits.
- Good generalization performance for unseen noise types.
- Thorough analysis of their method and results.
Weaknesses: - Regarding real noise removal, it is well known that spatial correlation of the real noise makes pixelwise masking based methods like N2V or N2S to fail at denoising, making them inappropriate comparatives. Several self-supervised denoising methods have been designed to remove real world noise (e.g., AP-BSN [1], MM-BSN [2], etc.). While these are not zero-shot methods, zero-shot modifications of these blind spot networks would serve as better comparative methods, especially since the authors have already modified N2N and N2S into zero-shot versions.
- Understanding and discussion of the masking ratio can be improved. Perhaps employing a random masking ratio within a certain range (e.g., $p\in[0.3, 0.6]$) would be more effective and could result in an optimal point that is applicable to both synthetic and real noise situations. And why does the masking ratio differ for different noise types? What is the main factor in choosing the best masking ratio?
Also, if this masking ratio is a crucial hyperparameter that changes depending on the image and noise types, it could weaken the paper's key contribution regarding its generalization ability for any noise types.
References
[1] Lee, Wooseok, Sanghyun Son, and Kyoung Mu Lee. "Ap-bsn: Self-supervised denoising for real-world images via asymmetric pd and blind-spot network." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022.
[2] Zhang, Dan, et al. "Mm-bsn: Self-supervised image denoising for real-world with multi-mask based on blind-spot network." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: Please check weaknesses.
1. What is the main reason that higher masking rate $p$ is required for real-world noise?
Minor points and typos
- I cannot see any difference in the images in Figure 2, even for the noisy image. The same applies to Figure 6; it would be better if there were enlarged views.
- Missing space in line 89 i.e.Masked -> i.e. Masked
- typo in line 248? forward pass 2.3 ?
- Maybe a mistake? In the caption Table 4, what does the `defaults in gray` mean?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{Q1: More comparison with zero-shot modifications of blind spot methods}$
$\textbf{A1:}$ Thank you for your thorough consideration. We have revised AP-BSN, MM-BSN, and PUCA (as shown in the overall rebuttal to all reviewers), and the results are listed in $\textit{Table 1}$ of the provided PDF. Implementation details are as follows:
We followed the original settings of these methods. In each iteration, we cropped 8 same-size patches from a noisy image to form a batch for training. Every 10 iterations, we performed inference on the full image to obtain denoised images. These denoised images were then combined using the same ensemble strategy as our method to ensure fairness.
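The training/inference protocol just described can be sketched as follows. This is a hypothetical outline, not our released code: `crop_patches`, `train_step`, and `denoise` are stand-ins for the actual cropping, optimization, and network-inference routines, and a simple pixel-wise average stands in for the ensemble strategy.

```python
def zero_shot_adapt(noisy_image, crop_patches, train_step, denoise,
                    n_iters=100, batch=8, infer_every=10):
    """Train on patches of a single noisy image; every `infer_every`
    iterations run full-image inference, then pixel-wise average the
    collected outputs (a stand-in for the ensemble strategy)."""
    outputs = []
    for it in range(1, n_iters + 1):
        patches = crop_patches(noisy_image, batch)  # same-size crops
        train_step(patches)                         # one optimization step
        if it % infer_every == 0:
            outputs.append(denoise(noisy_image))    # full-image inference
    n = len(outputs)
    return [sum(col) / n for col in zip(*outputs)]

# toy stand-ins: the "image" is a flat list of floats
out = zero_shot_adapt(
    [1.0, 2.0, 3.0],
    crop_patches=lambda im, b: [im] * b,        # replicate the image
    train_step=lambda patches: None,            # no-op optimization
    denoise=lambda im: [v * 0.5 for v in im],   # fake network inference
    n_iters=20, infer_every=10)
print(out)  # → [0.5, 1.0, 1.5]
```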
These methods can effectively denoise spatially correlated noisy images. However, using only one noisy image for training can lead to overfitting and produce artifacts, as shown in $\textit{Fig. 5}$ of the PDF.
$\textbf{Q2: More discussion about masking ratio}$
$\textbf{A2:}$ Thank you for your suggestion; it is very insightful. We are still exploring this issue. However, we are concerned that using a large masking ratio during inference might degrade synthetic noise removal performance. Here is a summary of the impact of different masking ratios $p$ on denoising:
Small $p$: Risk of learning noise patterns from real noisy images.
Large $p$: Loss of detail in the denoised image.
The difference in masking ratios for various noise types is mainly because real noise has strong spatial correlation, necessitating a larger masking ratio to avoid learning the noise distribution (see more analysis in $\textbf{A3}$).
Choosing the masking ratio is crucial. When noise is spatially uncorrelated (e.g., Gaussian, Poisson, S&P noise), a consistent $p$=30 works well across all cases. However, the spatial correlation of real noise complicates this issue. Many blind-spot networks (e.g. AP-BSN) designed for self-supervised denoising also aim to address this problem.
$\textbf{Q3: Main reason for different $p$}$
$\textbf{A3:}$ This primarily depends on the spatial correlation of the noise in the image. Synthetic noise is spatially uncorrelated: noise values at neighboring positions are independent. In contrast, real noise, after passing through a series of ISP processes, has a much more complex distribution, resembling blurred spots rather than independent points (refer to $\textit{Fig. 2}$ in PDF). For synthetic noise, choosing a small masking ratio helps quickly recover more details in the image. However, for real noise, with a small masking ratio the model can fit the noise distribution from neighboring pixel values; in this case, a larger masking ratio can mitigate the impact of noise. One paper [Ref1] also discusses using different dropout ratios for different types of images.
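To make the role of the masking ratio concrete, pixel-wise random masking could be sketched as below (a hedged NumPy sketch; we express the ratio as a fraction, so the $p$=30 quoted above corresponds to `p=0.3`, and all names are our own):

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_image(img, p=0.3):
    """Drop each pixel independently with probability p (the masking ratio).
    Returns the masked image and the binary keep-mask (1 = pixel kept)."""
    keep = (rng.random(img.shape[:2]) >= p).astype(img.dtype)
    if img.ndim == 3:
        keep = keep[..., None]  # broadcast the mask over channels
    return img * keep, keep
```

A larger `p` hides more of each pixel's local neighborhood, which is what prevents the model from fitting spatially correlated real noise from adjacent pixels.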
$\textbf{Q4: Minor points and typos}$
$\textbf{A4:}$ Thanks very much for your corrections.
1. Regarding Fig. 2 and Fig. 6 in main text, we have provided enlarged views, as seen in $\textit{Fig. 1 and Fig. 4}$ in PDF.
2. Yes, we have addressed the spacing issue on line 89.
3. "2.3" on line 248 refers to Section 2.3, which we have clarified in the main text.
4. Due to formatting constraints, we shortened the caption. "Defaults in gray" means that the gray background in the table indicates the default settings used in our work for comparison with other methods.
$\textbf{Ref:}$
Ref1: Self2Self+: Single-Image Denoising with Self-Supervised Learning and Image Quality Assessment Loss, Arxiv'23
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response and the additional comparisons. I hope the points raised during this review are well reflected in the final version.
---
Reply to Comment 1.1.1:
Title: Thanks for replying
Comment: Thank you very much for replying so quickly. We will carefully revise the article, and all modifications made in the rebuttal will be reflected in the next version of our paper. If you have any further questions, please let us know. | Summary: This paper proposes a method that can handle image denoising regardless of noise type and intensity. To achieve this, the proposed method includes two crucial steps: first, the model is pre-trained on a large number of images (with masking); second, the pre-trained model is fine-tuned on the given noisy image to perform denoising. The authors validate the efficiency of their method on images corrupted by different noises.
Strengths: 1) The paper is well written and it is easy to follow.
2) The authors have conducted experiments on different types of images including natural images and medical images.
3) In addition to synthetic noises, the authors also conducted experiments on real-world noise.
Weaknesses: 1) The idea sounds rather trivial and the novelty is limited. It is similar to MetaDIP[ref1] and DGP[ref2].
2) The idea of "ensemble for total T steps" is also proposed by DIP. DIP has shown that by averaging the outputs, the performance could be improved. So, the question here is: when the authors compare the performance with DIP, have you applied this similar "average smoothing" to DIP's outputs? If not, the comparisons here may not be fair.
3) For comparisons, the authors may consider other DIP models such as Ref3.
4) How this proposed method compare with other SOTA models such as Diffusion Models Ref4.
Ref1: Zhang, Kevin, et al. "MetaDIP: Accelerating deep image prior with meta learning." arXiv preprint arXiv:2209.08452 (2022).
Ref2: Pan, Xingang, et al. "Exploiting deep generative prior for versatile image restoration and manipulation." IEEE Transactions on Pattern Analysis and Machine Intelligence 44.11 (2021): 7474-7489.
Ref3: Jo, Yeonsik, Se Young Chun, and Jonghyun Choi. "Rethinking deep image prior for denoising." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
Ref4: Wang, Yinhuai, Jiwen Yu, and Jian Zhang. "Zero-shot image restoration using denoising diffusion null-space model." arXiv preprint arXiv:2212.00490 (2022).
Technical Quality: 3
Clarity: 3
Questions for Authors: I have listed my questions in the section of [Weaknesses].
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I have listed the limitations in the section of [Weaknesses].
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{Q1: Similarities and differences compared to MetaDIP and DGP}$
$\textbf{A1:}$ Thank you for pointing this out. Our method shares some similarities with MetaDIP and DGP. MetaDIP learns denoising by obtaining initial weights beneficial for downstream tasks, while our method uses masked training on natural images to enhance downstream denoising. However, MetaDIP and DGP seem to require known degradation models, which may not be applicable for some noise types. In contrast, our method learns to recover denoised images from masked noisy ones, without relying on models tailored to specific degradation types. This eliminates the need for degradation modeling, making it adaptable to unknown noise types, real noise, and various image types.
$\textbf{Q2: Comparison fairness issues caused by ensemble}$
$\textbf{A2:}$ Thanks for your attention. We acknowledge that the ensemble technique is an existing method. Our focus is on demonstrating the advantages of masked pre-training priors.
To ensure a fair comparison, we explained this in the experimental section of the main text. We used the EMA ensemble for DIP and FasterDIP, as detailed in $\textit{lines 170-173}$ of the main text. Results for other comparison methods (N2V and N2S) with the EMA ensemble are shown in $\textit{Supplementary Materials G.2 (lines 600-608, Tables 12 and 13)}$. Our “faster” version does not perform best in some settings, but the 1000-step version consistently leads.
We chose the EMA version for other methods because our experiments showed that EMA yields the best results compared to other ensemble methods (like averaging), as detailed in $\textit{Table 5}$ of the main text.
$\textbf{Q3: Comparison with other DIP methods}$
$\textbf{A3:}$ We considered the work you mentioned (DIP-SURE) when selecting comparison methods. They designed denoising solutions for Gaussian and Poisson noise to achieve high-quality denoising. However, their method requires additional noise variance as input, which could lead to unfair comparisons. Additionally, their method seems to be specific to certain types of noise, and their official code only supports Gaussian and Poisson denoising. This is why we did not compare with their method initially.
We included results for DIP-SURE using EMA ensemble, reporting both peak performance and final performance. For a fair comparison, we should compare our method with the final performance, as peak PSNR is not known without ground truth. Please refer to $\textit{Table 1}$ in PDF for a comparison between our method and DIP-SURE.
For real datasets (SIDD, PolyU, and FMD), we computed the variance from the difference between noisy and clean images to obtain denoised results. Their approach cannot remove real-world noise well, and some artifacts exist (refer to $\textit{Fig. 5}$ in the provided PDF).
$\textbf{Q4: Comparison with other Diffusion methods}$
$\textbf{A4:}$ Existing SOTA diffusion models, like DDNM, recover degraded images by decomposing into the null-space and adjusting the noise scheduler for denoising. Although diffusion models are Gaussian denoisers and can handle Gaussian noise well with proper adjustments (see PDF $\textit{Table 1}$), they also require known noise variance in advance. Additionally, when set for denoising, the diffusion model relies solely on the noise scheduler, effectively becoming a Gaussian denoiser. This limitation may prevent the model from fully removing more complex real-world noise (see $\textit{Fig. 5}$ in PDF).
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the detailed rebuttal. My previous concerns have been addressed partially. I think MetaDIP and DGP may generalize to other noise types, which at least should be explored/experimented/compared. Also, I think there are other Diffusion models that could handle other noises except Gaussian noise, which again should be compared in the experiments.
---
Reply to Comment 1.1.1:
Title: Reply to VFTG
Comment: Since there are fewer than 12 hours before the discussion period ends, time may not be enough.
$\textbf{DGP:}$
We did not find the code for MetaDIP during the rebuttal period, and we believe DGP and other diffusion methods share the same issues in denoising, so we did not compare with MetaDIP. Here we provide DGP results; see the table below:
| Method | DGP |
|-------------------------|:------------:|
| CSet+Gauss $\sigma$=25 | 28.72/0.746 |
| SIDDval | 23.21/0.452 |
| PolyU | 32.18/0.890 |
| FMD | 22.19/0.308 |
(due to time limitation, we chose 100 random patches from SIDDval for comparison)
DGP, like the DDNM and DDPG models we compared, relies on a generative prior. Although DGP uses a GAN loss to make the model robust to degradation, its loss involves applying known degradation operators or attackers to the generated images, during which unknown noise degradation can also interfere, resulting in a gap between synthetic and real degradation; it also seems to perform poorly on images that differ significantly from natural images (such as FMD).
Using attackers to add perturbations can theoretically remove various types of degradation, but it may lead to longer inference times and increased training difficulty. However, DGP's exploration of adversarial defense seems limited to jigsaw-puzzle tasks, and the code they provide does not include examples of adversarial defense. The remaining time is not enough for us to explore using adversarial defense to achieve real-world denoising, although we believe the denoising results under synthetic noise (Gaussian, $\sigma$=25) can to some extent illustrate the problem, as the degradation disturbance in this case is consistent with that in the real image (which may lead to excessively smooth denoising results or GAN-induced artifacts).
In addition, the generative ability of GANs is slightly weaker than that of diffusion models, especially for unconditional image restoration.
$\textbf{Diffusion models:}$
Existing diffusion work mostly focuses on other image restoration tasks (such as super-resolution and deblurring) and may suffer from long inference times. We haven't found a training-free diffusion-based method that can adapt well to unknown real-world noisy images yet.
However, I think it is possible to use diffusion for training-free real-world denoising, but it may require additional design, and it can be a good research direction.
---
Rebuttal 2:
Title: Reminder: Reviewers please do acknowledge the rebuttals and react to them.
Comment: Dear reviewer,
thanks for your review. Please look at the rebuttal and give the authors some feedback whether they could address your concerns.
Best regards Your AC. | Summary: This paper proposes a zero-shot image denoising method called Masked Pre-train then Iterative fill (MPI). The key idea is to pre-train a model on natural images using masked image modeling, then apply this pre-trained model to denoise new images in a zero-shot manner through an iterative optimization process. The authors demonstrate that their approach outperforms existing zero-shot denoising methods across a variety of noise types and datasets, while also being more computationally efficient.
Strengths: 1. Masked modelling in image denoising and Iterative processing is interesting.
2. The fact that the above idea works makes it even more interesting.
3. Somewhat good results.
Weaknesses: 1. I have to say that the writing of this paper is problematic. The method of this paper is not complicated, and it is even a little simple. However, the presentation does not showcase its core method. Some of the descriptions are irrelevant, while important analysis, such as the core of the method, is missing.
2. This paper uses the idea of pre-training, but follows zero-shot. A natural question is, what does the model learn from pre-training, and how is it applied to zero-shot denoising? I understand that it is difficult to give a theoretical answer, but I think there is at least an abstract explanation to verify and demonstrate the essence of its method.
3. Assume that the model learns the distribution of images from pre-training. But according to its experiments, the trained model can be used for unnatural images. If the model learns denoising from pre-training, the model can generalize to other noises to a certain extent. Unless it learns a common property of some noise. The key is, what is this property?
4. The iterative approach is interesting, and I think the author may have been inspired by Diffusion to a large extent. But it is not enough to simply assume without a clearer theoretical motivation to support this approach.
5. The choice of denoising is also questionable. In fact, I think the method in this article may be helpful for any image restoration task that meets the conditions. Why do we only discuss denoising?
Technical Quality: 2
Clarity: 2
Questions for Authors: Is it possible to combine mask prediction and iterative filling, and actually use the statistical properties of noise to denoise? Make predictions under different masks each time, and integrate multiple predictions through iterative filling, and use methods similar to finding the mean between multiple predictions to achieve some denoising purpose? This is about the working nature of this method. I think it is very necessary to use experiments to prove what the model has learned and why it denoises.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: $\textbf{Q1: Lack of essence of our method}$
$\textbf{A1:}$ Thank you for pointing this out. The core of our method lies in the inherent denoising capability of a model pre-trained with masked natural images. This motivation is demonstrated in the main text. The pixel-level random masking we employ can be viewed as a form of noise that disrupts the image structure, and the model is trained to restore these structures. Because it is trained on a large set of natural images, the limited model parameters cannot precisely fit all image distributions and instead tend to prioritize learning the main features of the images while discarding noise that varies across images, thus acquiring a certain denoising capability. Similar masked training strategies are used in many self-supervised denoising tasks [Ref4, Ref5, Ref6 in author rebuttal], where models are trained on a large set of noisy images to learn the corresponding clean images. We further extend this to a zero-shot version, combining it with a large amount of easily accessible natural images to enhance the performance of zero-shot methods.
$\textbf{Q2: What the model learns from pre-training and how it is applied to zero-shot denoising}$
$\textbf{A2:}$ We address your concerns from three aspects:
$\textit{What does the model learn from pre-training?}$
The model learns to restore randomly masked image content from a large set of natural images. This restoration process is somewhat noise-resistant. In short, a model pre-trained on a large dataset tends to recover denoised image content, functioning as a kind of denoising autoencoder (see $\textit{Fig.1}$ in the PDF for motivation).
$\textit{How is it applied to zero-shot denoising?}$
The pre-trained weights are iteratively optimized on a noisy image through random masking, and predicted pixels are integrated using an exponential moving average, resulting in a denoised image. The learned denoising representation provides a better initial weight and helps avoid over-fitting to the noisy image.
$\textit{Analysis of why pre-trained features helped zero-shot denoising.}$
We analyzed the impact of pre-trained features on zero-shot denoising at the hidden layer level (see $\textit{Fig.3}$ in the PDF). Features extracted with pre-trained weights significantly differ from those extracted from scratch (Baseline, i.e., the usual zero-shot denoising approach). The pre-trained model restores the complete image, with more distinct features between layers, whereas the baseline model's features are less differentiated between layers, tending to only restore the masked parts, leading to local optima.
$\textbf{Q3: Why a model that does not learn denoising can denoise and generalize to unnatural images}$
$\textbf{A3:}$ As per $\textbf{A1}$, we believe that the model pre-trained with masked images acquires a certain robustness to noise, enabling it to perform denoising. Regarding its applicability to non-natural images, the use of pixel-level random masking allows the model to restore masked content based on the unmasked areas. Since the unmasked pixels are spatially distributed evenly, the model tends to focus more on local, low-level features such as texture and color rather than high-level semantic features. These low-level features share some commonalities across all types of images, allowing the model to be applicable to significantly different types of images, such as medical images and natural images.
$\textbf{Q4: Motivation of our approach}$
$\textbf{A4:}$ Thank you for pointing this out. Our motivation comes from blind-spot networks in self-supervised denoising, which learn to reconstruct noisy images corrupted by blind spots (much like pixel-wise random masking) in order to denoise noisy datasets that do not include ground truth. This is a widely explored field, and we further extend it by using natural images to obtain a better zero-shot denoising algorithm.
$\textbf{Q5: Why only discuss denoising}$
$\textbf{A5:}$ Thank you for your suggestion. Our motivation stems from findings in the self-supervised denoising field, so this paper focuses solely on denoising. As per $\textbf{A1}$, the trained model tends to acquire smooth representations beneficial for denoising. Since our current model uses a small number of parameters for efficient denoising, it might struggle to generate sufficient details in other tasks such as deblurring or super-resolution. However, we believe that with further improvements, this method has great potential in other types of degradation tasks.
$\textbf{Q6: Answer to Questions}$
$\textbf{A6:}$
1. During the inference phase, we generate predictions using random masks and iterative filling, with the final denoised image coming from the exponential moving average of multiple inferences for each pixel. The only difference is that we use weights pre-trained on natural images with random masking as the initial weights, which speeds up inference and reduces the risk of over-fitting.
2. Regarding what the model has learned and why it denoises, we believe that masked pre-training on large datasets causes the model to focus on the main features of images (relatively lower-frequency, easier-to-recover features), which gives the model robustness to noise and enables it to perform denoising.
3. We have analyzed the differences in features between pre-trained and from-scratch models during inference to demonstrate the advantages of pre-training over starting from scratch.
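The masked prediction plus per-pixel exponential-moving-average aggregation described in point 1 above could be sketched as follows (an illustrative NumPy sketch under our own naming; the actual implementation may differ):

```python
import numpy as np

def ema_fill(preds, masks, decay=0.99):
    """Combine per-iteration predictions into one denoised image via a
    per-pixel EMA, updating each pixel only at iterations where it was
    masked (i.e. actually predicted). preds/masks are lists of (H, W) arrays."""
    out, seen = None, None
    for pred, m in zip(preds, masks):
        m = m.astype(bool)
        if out is None:
            out = np.zeros_like(pred)
            seen = np.zeros(pred.shape, dtype=bool)
        first = m & ~seen
        out[first] = pred[first]  # initialise a pixel on its first prediction
        upd = m & seen
        out[upd] = decay * out[upd] + (1 - decay) * pred[upd]
        seen |= m
    return out
```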
---
Rebuttal 2:
Title: Reminder: Reviewers please do acknowledge the rebuttals and react to them.
Comment: Dear reviewer,
thanks for your review. Please look at the rebuttal and give the authors some feedback whether they could address your concerns.
Best regards Your AC.
---
Rebuttal 3:
Title: Response to the rebuttal
Comment: I have read the author's response, as well as the comments and discussions with other reviewers. The author has partially addressed my concerns. However, I still think that the presentation of the paper is lacking at this stage. I will improve my score. However, since I cannot see the revised paper, I cannot judge whether the final presentation meets the requirements of NeurIPS.
---
Rebuttal Comment 3.1:
Title: Reply to iS6g
Comment: Thank you for your valuable suggestions. They have indeed helped improve the quality of the paper. We will carefully revise the manuscript and incorporate the theoretical analysis from the response into the main text to make the paper clearer and more insightful.
This article explores the practical role of masked pre-training in denoising. Perhaps masking-based generative pre-training can go further by providing clearer theoretical grounding and demonstrating its effectiveness in more low-level downstream tasks. | Summary: The paper introduces a novel zero-shot image denoising paradigm called Masked Pre-train then Iterative fill (MPI). The key contributions are:
MPI first pre-trains a model on a large dataset of natural images using a pixel-wise masking strategy. This allows the model to learn the underlying distribution and representations of natural images.
During the zero-shot inference stage on a single noisy image, MPI optimizes the pre-trained model to predict the masked regions, and only uses the predictions of the masked regions to assemble the final denoised output.
This approach leverages the generic knowledge from pre-training, preventing overfitting during the inference stage and enabling high-quality denoising with a marked reduction in inference time compared to prior zero-shot methods.
The paper demonstrates MPI's superior performance across various noise types and its ability to generalize to medical images, which are distinct from the natural images used in pre-training.
Strengths: Originality:
The paper presents a novel zero-shot denoising paradigm that is significantly different from prior approaches. The key innovation is the use of masked pre-training on natural images to build a generic and robust model for zero-shot denoising. This is an original idea that departs from previous zero-shot methods that relied on carefully designed priors or network architectures. Leveraging masked pre-training for this task is a novel and creative approach.
Quality:
The technical details and experimental evaluations in the paper are of high quality. The authors provide a clear and thorough explanation of the MPI framework, including the masked pre-training and iterative optimization steps. The experimental setup is comprehensive, analyzing performance on various noise types, real-world datasets, and even medical images. The results demonstrate significant improvements over prior zero-shot methods, validating the effectiveness of the proposed approach.
Clarity:
The paper is well-written and easy to follow. The introduction provides a clear motivation and background for the problem. The method section explains the MPI framework in a structured and logical manner. The experimental section is organized in a way that allows the reader to easily understand the different evaluations and findings. Overall, the clarity of exposition is a strength of this paper.
Significance:
The problem of zero-shot image denoising is an important and challenging task in computer vision. Prior methods have limitations in terms of generalization and computational efficiency. The MPI approach presented in this paper addresses these limitations in a novel way. If successful, this could lead to significant practical impact by enabling high-quality denoising with minimal user intervention or computational overhead. The ability to generalize to diverse noise types and even medical images further enhances the significance and potential impact of this work.
Weaknesses: Comparison to other zero-shot methods:
The paper primarily compares MPI to a few selected zero-shot denoising methods. However, it would strengthen the work to include a more comprehensive comparison to a wider range of zero-shot techniques, including recent advances. This would help contextualize the performance gains of MPI and highlight its specific advantages over the state-of-the-art.
[1] Xie Y, Yuan M, Dong B, et al. Unsupervised image denoising with score function[J]. Advances in Neural Information Processing Systems, 2024, 36.
[2] Jang H, Park J, Jung D, et al. PUCA: patch-unshuffle and channel attention for enhanced self-supervised image denoising[J]. Advances in Neural Information Processing Systems, 2024, 36.
[3] Garber T, Tirer T. Image restoration by denoising diffusion models with iteratively preconditioned guidance[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 25245-25254.
Technical Quality: 3
Clarity: 3
Questions for Authors: no
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: see weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your suggestions. We studied the three works you provided carefully. The first is a score-based denoising algorithm, the second is an improvement on blind-spot networks (PUCA), and the third is a diffusion-based image restoration method (DDPG). The first two appear to be dataset-based unsupervised denoising approaches, which learn, without supervision, to recover denoised images from a dataset containing only noisy images.
For the first paper, we could not find their code on GitHub. Reproducing this paper within a week is challenging, so we did not compare our method with theirs.
For the second paper, we provide results for its modified zero-shot version PUCA* and the original PUCA in the table below. The zero-shot method uses an ensemble approach to ensure a fair comparison.
For the third paper, we provide results (DDPG) in the table below. This diffusion method requires the noise variance as known information. We use the variance calculated from the difference between the noisy and clean images to ensure the best performance for this method (though this might risk ground truth leakage).
We have also compared several other self-supervised and diffusion methods. If interested, please see $\textit{Table 1}$ in PDF. For the visual results on SIDD, see $\textit{Fig. 5}$ in PDF.
| | PUCA* | PUCA | DDPG |
|:--------------------:|:---------------:|:----------------:|:---------------:|
| CSet+Gauss σ=10 | - | - | 32.43/0.826 |
| CSet+Gauss σ=25 | 24.74/0.640 | - | 27.07/0.606 |
| CSet+Gauss σ=50 | - | - | 15.95/0.183 |
| SIDD validation | 33.52/0.816 | 37.49/0.880 | 29.84/0.612 |
| PolyU | 33.31/0.927 | - | 35.79/0.887 |
| FMD | 30.22/0.808 | - | 30.41/0.735 |
| Avg. Infer. time (s) | 450.0 | - | 24.3 |
$\textbf{Analysis:}$
Blind-spot methods like PUCA handle strong spatially correlated noise well in real-world scenarios. However, in its zero-shot version, it may risk over-fitting due to a lack of sufficient training data.
Diffusion methods like DDPG can handle various types of degradation and are inherently robust to noise due to their training on Gaussian noise. However, they struggle to generalize to real-world noise scenarios.
---
Rebuttal Comment 1.1:
Title: I have read through the rebuttal, and the author has addressed all of my questions.
Comment: I have read through the rebuttal, and the author has addressed all of my questions.
---
Reply to Comment 1.1.1:
Title: Thanks for your reply
Comment: Thank you very much for your prompt reply. If you have any further questions of interest, please feel free to ask me. | Rebuttal 1:
Rebuttal: We sincerely appreciate the time and efforts of all the reviewers, as well as their valuable suggestions provided during the review process. We are encouraged by the reviewers' recognition of our work and acknowledge that there are still many weaknesses in our current work. We carefully considered and responded to every question from the reviewer to the best of our ability.
Here we list some common weaknesses and corresponding brief responses, more details can be found in responses to each reviewer.
$\textbf{1. More comparison methods}$ including additional zero-shot methods, such as DIP-based (DIP-SURE [Ref1]) and diffusion-based methods (DDNM [Ref2], DDPG [Ref3]), and zero-shot modifications from unsupervised methods (AP-BSN [Ref4], MM-BSN [Ref5], PUCA [Ref6]), refer to $\textit{Table 1}$ in provided PDF. The implementation details of each method can be found in rebuttal to each reviewer.
$\textbf{2. Analysis of the Priors Learned from Pre-training}$ (refer to $\textit{Fig. 3}$ in the provided PDF). In the main text, we explained that masked pre-training not only provides a better starting point for the model but also helps prevent over-fitting to some extent (see $\textit{Section 4.1}$, Pre-trained weights, in the main text), offering a more stable zero-shot denoising process. Here we analyze the representations learned by the pre-trained model and try to explain why this helps avoid over-fitting. In Fig. 3, we used CKA analysis [Ref7] to show that the image features extracted by the pre-trained model differ significantly from those extracted by the model trained from scratch (“Baseline”). Due to insufficient data, the baseline model tends to acquire similar representations across layers and focuses more on recovering the masked parts of the image. This can lead to local optima and early over-fitting to noise. In contrast, the pre-trained model learns from a larger variety of images and provides a better starting point: it focuses more on recovering the entire image rather than just the masked parts, which is smoother and reduces the risk of over-fitting.
$\textbf{3. Restatement of Motivation and Focus in This Work.}$ This paper focuses on leveraging a large amount of easily accessible natural images to achieve better zero-shot denoising performance. To ensure the method is simple and efficient, and to highlight the importance of pre-training, we do not extensively explore more complex network mechanisms, since it is not the focus of our paper. By relying on minimal dependencies for denoising, our approach can be further improved when combined with existing methods like random sub-samples (see $\textit{Section 4.2}$ in main text for discussion).
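For reference, the linear variant of the CKA similarity index [Ref7] used in the layer-wise analysis of point 2 above can be computed as follows (a standard formulation written by us, not the authors' exact analysis script):

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA between two feature matrices of shape (n_samples, n_features).
    Returns a value in [0, 1]; values near 1 indicate highly similar
    representations (invariant to orthogonal transforms and isotropic scaling)."""
    X = X - X.mean(axis=0)  # centre the features
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, 'fro') ** 2
    den = np.linalg.norm(X.T @ X, 'fro') * np.linalg.norm(Y.T @ Y, 'fro')
    return num / den
```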
$\textbf{Ref:}$
Ref1: Rethinking deep image prior for denoising, ICCV’21
Ref2: Zero-shot image restoration using denoising diffusion null-space model, ICLR’23
Ref3: Image restoration by denoising diffusion models with iteratively preconditioned guidance, CVPR’24
Ref4: Ap-bsn: Self-supervised denoising for real-world images via asymmetric pd and blind-spot network, CVPR’22
Ref5: Mm-bsn: Self-supervised image denoising for real-world with multi-mask based on blind-spot network, CVPR’23
Ref6: PUCA: patch-unshuffle and channel attention for enhanced self-supervised image denoising, NIPS’24
Ref7: Similarity of Neural Network Representations Revisited, Arxiv’19
Pdf: /pdf/5b9e815b911c961f7fab5038277e3379b91a28d8.pdf | NeurIPS_2024_submissions_huggingface | 2024 |
Grounding and Validation of Algorithmic Recourse in Real-World Contexts: A Systematized Literature Review | Reject | Summary: This paper provides a review of previous works that study "algorithmic recourse", i.e. conceptual and practical approaches for giving people actionable recommendations to change how they are impacted by algorithmic systems. This literature is deeply connected with counterfactual explanations and understanding models through small changes to test data, answering questions such as "how would the model M produce a different output if changed attribute x about myself". The authors review 127 archival publications and answer 9 questions about how these works frame and study algorithmic recourse.
Strengths: In terms of originality, quality, and clarity:
- While the primary novel contributions of this draft are to highlight themes in previous work, the overall level of novelty is reasonable. Some concerns here, see below.
- Quality: the "Systematized review" methods are described such that they are replicable and seem justified. I don't expect readers to have major issues with inclusion criteria of papers, or any of the analyses presented.
- Clarity: Writing is clear throughout.
In terms of significance, the paper could have impact on future work studying algorithmic recourse, and might motivate NeurIPS community members (including those in companies or working with governments) to support recourse methods. This would be a large positive impact.
This kind of review can certainly be useful to researchers trying to incorporate ideas or findings from recourse-related research. The calls to engage with HCI and systems-level thinking are reasonable (though, some of the broader discussion/motivation in the paper is more convincing on this front than any of the empirical results from the 127 recourse-related papers). If a version of this paper were able to unify definitions in the recourse space, this could be powerful (though further expansion of Section 2.2 might be necessary: the paper does note that reference [70] is highly similar -- the current draft was a bit vague in comparing these and clarifying the added contribution here).
A few other notes: There are 9 overall sub-research questions answered. Overall, these results seem likely to be useful to researchers entering the algorithmic recourse field (though, see below, some of these felt very general and not domain-specific in the current draft). The paper does fit into the "Social and economic aspects of machine learning" category listed in the CFP this year.
Weaknesses: Overall, I do think the current draft may not achieve the full impact that a future revision could provide.
The current discussion section feels like it largely echoes other calls in the community to apply systems thinking to ethical/responsible/pro-social AI/ML initiatives, and while each of the suggestions has some connection to one of the analyses, the current draft is not totally clear about the extent to which these recommendations stem from the findings vs. are motivated by first principles. The paper is overall very critical of the AR field, i.e. "Why hasn't this field engaged with any real-world deployments?". However, it's also not entirely clear in the current draft how any of the general recommendations would be applied in specific AR domains.
One aspect of the paper that I think would have been most directly helpful to the NeurIPS audience in particular would be to take a stance on how recourse should be defined -- is the definition on line 62 "endorsed" by the paper? Is the "imagine a counterfactual input x*" framing an advisable approach to take for future work? Does this review support the definition, highlight core definitional or epistemic issues, etc.?
Ultimately, given the intended goals of this draft, it seems success and impact (on top of the core empirical contribution provided by writing a systematized review) here are dependent on the ability of the provided recommendations to shape future research positively. While the five recommendations here could have some positive impact, taking a stronger stance on the core definitions and framing of recourse "tasks" could have an even larger impact.
Technical Quality: 2
Clarity: 3
Questions for Authors: The primary questions that I would pose to the authors would be:
- Is it possible to use the current "data" (i.e. selected papers) to provide more actionable domain-specific recommendations and/or pragmatic guidance about which contexts are likely to see real-world engagement with AR?
- What is needed to get organizations that develop and/or operate algorithmic systems to engage with recourse? Are there circumstances that the current literature treats as "futile"?
I think using the data that's already been collected and analyzed, focusing on domain-specificity of recommendations could go very far in strengthening the draft.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: - No major concerns regarding unmentioned social impacts.
- Regarding the limitations of systematized literature review, the current draft discusses these reasonably.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for this insightful review and feedback!
As mentioned in the "global rebuttal" we cut some parts of Section 5 to fit within the nine-page limit. As we see it, we are able to address your points using the data that we have already collected and processed.
---
> If a version of this paper were able to unify definitions in the recourse space, this could be powerful
> is the definition on line 62 "endorsed" by the paper? (...)
We quote the definition on line 62 as it comes from the work that introduced AR, but we believe that it misses out on some important aspects that have been brought up by later authors. Hence, on lines 360-361, we propose the following operational definition of recourse: *"the aim of AR [is] the provision of recommendations aligned with the preferences of non-expert users in an attempt to help them improve outcomes in an ADM setting"*. It accounts for the concepts identified in Section 4.2:
1. AR is a recommendation, i.e., a form of *"advice about what is best to do"* (from Cambridge Dictionary)
2. The recommendation should be (where applicable) aligned with the preferences of their recipients to facilitate actionability.
3. While most authors agree that AR is aimed at the end-users, we found arguments that it may be provided to other stakeholders (lines 224-226). Ultimately, the goal remains the same: it should be understandable to individuals who are not necessarily knowledgeable about algorithms.
4. The goal of AR is to improve outcomes (the most common theme among the definitions), but the ability to improve an outcome will depend on the context and the willingness of the individual to implement it, hence the word "attempt".
5. Finally, the ADM settings seem to be assumed given the nature of existing work.
We do not define what "actionability" entails because this concept ultimately depends on factors such as the domain, the context, or the stakeholders (see our answer to Reviewer pd5W).
---
> the paper does note that reference [70] is highly similar -- the current draft was a bit vague in comparing these and clarifying the added contribution here.
Our draft is distinct from the earlier literature reviews in that we take a step back from evaluating technical *solutions*, instead focusing on the practical understanding of the AR *problem*. We explain the differences with [70] further in the response to Reviewer pd5W.
---
> Is it possible to use the current "data" (i.e. selected papers) to provide more actionable domain-specific recommendations and/or pragmatic guidance about which contexts are likely to see real-world engagement with AR?
Very good point! Yes, we can address this based on our data. In the current draft, we briefly touch upon the aspects that should be considered for specific applications in Section 5. More concretely, one of the goals of our recommendations is to encourage thinking about the meaning of AR for a specific domain, e.g.:
* [Recommendations. 1 and 3] What (types of) stakeholders will interact with the recourse mechanisms?
* [Rec. 2] What constraints need to be satisfied to ensure that recourse can fulfill its goals?
* [Rec. 3] What procedures are already in place? What will be the added value of recourse?
* [Rec. 4] What (unsafe) group-level dynamics should we expect when recourse is implemented into the system?
* [Rec. 4] If we should expect unsafe group-level dynamics, how can we mitigate them?
* [Rec. 5] Which existing AR solutions could be applied in this domain? How to tailor them to its specific requirements?
Regarding the second part of the question, we can make some predictions from the data. On the one hand, we note that many researchers discuss recourse in banking: this goes from commonly used examples to the dominance of banking datasets in the evaluations, although the availability of high-quality datasets may influence these choices. On the other hand, we see a variety of domains in the applications and case studies: education, medicine, public administration, or employment.
A common characteristic in the latter domains is that the model owners have shared interests with the end-users in achieving the best outcomes possible (see also lines 318-325), or other responsibilities towards them. This may stem, e.g., from legal acts such as Art. 22(3) of the European Union's GDPR which bestows the right to obtain human intervention in certain situations (see also [151]).
Finally, there may even exist settings where recourse adds value to ADM systems using models that tend to be perceived as "explainable". For instance, governance typically employs "decision trees" that implement rules from legislation, but even these trees can grow to a point where they are hard to understand (e.g., tax systems).
---
>What is needed to get organizations that develop and/or operate algorithmic systems to engage with recourse? Are there circumstances that the current literature treats as "futile"?
This question is perhaps best answered by looking at the challenges for future work (Section 4.6). Several major categories — accounting for causality, robustness, or even ensuring actionability — relate to a problem that will be highly relevant for the engagement with recourse in real-world contexts: how can we ensure that the recommendations lead to meaningful improvement *and* guarantee a better outcome for the individual? This results in Rec. 3, which states that organizations may want to consider societal solutions as well.
We have not identified any circumstances that would be considered completely "futile" in that for all challenges recognized in the literature, we have observed at least some suggestions on how they could be resolved, although not necessarily extensive follow-up work (yet). In our opinion, AR should not be treated as a cure-all solution for problems brought upon by non-interpretable models. This insight tends to be missing in the literature, even though it does not make AR research any less valuable.
---
Rebuttal Comment 1.1:
Comment: Thank you for these detailed answers to the questions. This rebuttal
- answers above questions about defining / redefining recourse. I think this can easily be improved in a camera ready
- clarifies relation to prior work
- provides quite a few concrete additional empirical insights that can be added to flesh out some additional aspects of the paper
Overall, this addresses some of my concerns. I expect the main lingering concern will be the question of venue fit. As argued in the global rebuttal, I can see this kind of paper fitting into the "catch all" category. Furthermore, as noted in my original review I agree this is a strong argument for trying to publish this work in venues that are most likely to impact recourse implementations in practice. That said, I expect this may come down to subjective interpretation of the CFP here.
---
Rebuttal 2:
Comment: Thank you for the reply and the increased score! We are very happy that our answers satisfied your concerns about the contents of the draft — your suggestions have been very helpful, we have no doubts that going further in the recommendations will strengthen the document.
We also appreciate your repeated support for our choice of the venue.
Should you have any further questions, please let us know. | Summary: The paper provides a comprehensive review of the algorithmic recourse research literature, concentrating on understanding the recourse research "in the wild", by focusing on the practical application of these techniques in real-world scenarios. The authors then provide some suggestions to practitioners to push future research to better practical applications.
Strengths: The paper is well-written and well-structured. Considerable effort has been put into this work to provide a comprehensive review of the area, highlighting the need for a more down-to-earth approach when considering recourse. The data collection and analysis are well-motivated and described sufficiently (Section 3 and Section 4). The recommendations in Section 5.1 are on point and all true, and they highlight issues that everyone in the community is aware of but that are largely ignored.
Weaknesses: I feel NeurIPS is not the right venue for this kind of contribution, since this paper does not provide the level of technical novelty required by the conference. Being a review, I think it does not fit the requirement of "new and original research" given by the Call for Papers. I suggest the authors not be discouraged, since I think the contribution is still valuable for the community. Potential other venues I believe are more in line with the scope of this work could be the following (the order is random):
- IJCAI Survey Track (https://ijcai24.org/call-for-papers-survey-track/)
- ACM FAccT (https://facctconference.org/)
- AAAI/ACM AIES (https://www.aies-conference.com/2024/)
- ICML Position Papers Track (https://icml.cc/Conferences/2024/CallForPositionPapers)
- ACM Computing Surveys (https://dl.acm.org/journal/csur)
- TMLR (https://jmlr.org/tmlr/)
Lastly, I would like to point out some potential additional papers on algorithmic recourse which could complement some remarks made by the authors:
- Line 182 "We did not identify any applications evaluated with humans in the loop": there has been some development in providing human-in-the-loop algorithms to identify better recourse options:
- [1] De Toni, Giovanni, et al. "Personalized Algorithmic Recourse with Preference Elicitation." Transactions on Machine Learning Research, https://openreview.net/forum?id=8sg2I9zXgO
- Recommendation 4, "Accounting for emergent effects": there has been some research regarding providing recourse to multiple individuals, where they are competing for a limited pool of resources, looking also at the fairness of these systems:
- [2] Fonseca, João, et al. "Setting the right expectations: Algorithmic recourse over time." Proceedings of the 3rd ACM Conference on Equity and Access in Algorithms, Mechanisms, and Optimization. https://dl.acm.org/doi/pdf/10.1145/3617694.3623251
- [3] Bell, Andrew, et al. "Fairness in Algorithmic Recourse Through the Lens of Substantive Equality of Opportunity." arXiv preprint arXiv:2401.16088, https://arxiv.org/pdf/2401.16088
I also point the authors to some new papers considering human-in-the-loop interfaces for recourse (Recommendation 1, Section 5.1):
- [4] Esfahani, Seyedehdelaram, et al. "Preference Elicitation in Interactive and User-centered Algorithmic Recourse: an Initial Exploration." Proceedings of the 32nd ACM Conference on User Modeling, Adaptation and Personalization. https://dl.acm.org/doi/pdf/10.1145/3627043.3659556
- [5] Koh, Seunghun, Byung Hyung Kim, and Sungho Jo. "Understanding the User Perception and Experience of Interactive Algorithmic Recourse Customization." ACM Transactions on Computer-Human Interaction. https://dl.acm.org/doi/pdf/10.1145/3674503
Technical Quality: 2
Clarity: 3
Questions for Authors: I do not have any questions for the authors.
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors have highlighted the limitations of their work in Section 5.2.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for this feedback, we are very happy that you perceive our work as useful for the community!
---
> I feel NeurIPS is not the right venue for this kind of contribution, since this paper does not provide the level of technical novelty required by the conference.
We understand your reasoning and really appreciate the effort you took to propose other venues. As Reviewer pd5W brought up a similar concern, please see the "global rebuttal" for an explanation of why we decided to submit this publication to NeurIPS and still believe that it is a suitable venue.
---
> I would like to point out some potential additional papers on algorithmic recourse which could complement some remarks made by the authors
Thank you for bringing these further publications to our attention. We checked them against the complete set of 3092 records that have been collected for the review. None of the papers occur in our database, so we are glad to say that their omission is not an error in the screening process. We will not include them in Section 4 (Results) to keep our analysis consistent, but we will make sure to highlight them in the other sections of the final draft!
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications provided in the rebuttal. After reading the rebuttal and the other reviewers, I am still unconvinced that NeurIPS is the right venue for this contribution, so I will not change my evaluation.
My main concern is still the lack of novel methodological contributions required by the Call for Papers. I praise the work done by the authors, but the points raised by the paper (although all true) are far from giving a novel and original view on the topic.
Regarding point 1 made by the authors in the general rebuttal, I find it difficult to imagine raising awareness of algorithmic recourse and engaging with industry and governmental partners through a simple poster session. Probably, a nicer way to attract attention to the shortcomings of algorithmic recourse could have been to propose either a tutorial (https://neurips.cc/Conferences/2024/CallForTutorials) or a workshop on the topic (https://neurips.cc/Conferences/2024/CallForWorkshops).
Moreover, other surveys on algorithmic recourse and, more broadly, counterfactual explanations (e.g., [1,2]) gained a lot of traction even if not submitted to NeurIPS.
[1] Karimi, Amir-Hossein, et al. "A survey of algorithmic recourse: contrastive explanations and consequential recommendations." ACM Computing Surveys 55.5 (2022): 1-29.
[2] Verma, Sahil, et al. "Counterfactual explanations and algorithmic recourses for machine learning: A review." ACM Computing Surveys (2020).
---
Rebuttal 2:
Comment: > I am still unconvinced that NeurIPS is the right venue for this contribution, so I will not change my evaluation.
> the points raised by the paper (although all true) are far from giving a novel and original view on the topic
We can see where you are coming from, but we have a different reading of the Call For Papers.
While we cannot infer how the authors of the CFP understand the "new and original research" clause, we still believe that the draft fulfills its requirements. As mentioned in the "global rebuttal" and the response to Reviewer pd5W, the novel aspects of our work include:
1. evaluating how the *problem* ("tasks") of algorithmic recourse is understood by authors;
2. quantifying the insights about this problem, showing, e.g., little interest in looking at AR in the context of specific domains;
3. providing a toolkit for other researchers and practitioners to facilitate this systems-oriented outlook on recourse.
We acknowledge and cite in the draft other works that provide various recommendations for AR research that are pertinent (although typically they are challenges for *research* rather than challenges for *research practices*), so we understand if the third point cannot be considered a fully novel contribution of our work.
Nonetheless, to the best of our knowledge points 1. and 2. have not been addressed in the literature before. Given that AR is a broad challenge, we find the lack of its shared understanding, as seen in our results, an important obstacle for the research field.
In our opinion, the "original research" requirement of the CFP is fulfilled due to our choice of the research method: the systematic (systematized) character of this literature review makes it empirical in nature. It involved data collection, analysis, and interpretation, akin to more common forms of contributions at NeurIPS. Thus, we go beyond "only" summarizing the existing literature, as would generally be the case in non-systematic reviews.
---
> I find it difficult to imagine raising awareness of algorithmic recourse and engaging with industry and governmental partners through a simple poster session
We agree with you that increasing industry and governmental engagement with algorithmic recourse cannot be achieved by any single publication, but we believe that our work is a good point of departure in that direction. On the one hand, the draft is introductory enough to be valuable for people who are not yet familiar with the field (as also recognized by Reviewer pd5W who commended Section 2 on providing solid background). On the other hand, even researchers who have much experience with research on algorithmic recourse can benefit from a more grounded, shared understanding of the problem.
In any case, we focused our recommendations on key factors that will be relevant for researchers regardless of their exact interest in the field. For instance, the lack of open-source code or documentation will be an important obstacle to the uptake of all forms of algorithmic recourse solutions. We are aware that such recommendations can be perceived as "simple", but they relate to substantial shortcomings that we have identified based on our results.
Following the discussion with Reviewers Wg7R and pd5W we will also make sure that practitioners receive further guidance on how to apply this form of systems-oriented analysis to implement recourse in specific domains.
---
> Probably, a nicer way to attract attention to the shortcomings of algorithmic recourse could have been either propose a tutorial (...) or a workshop on the topic (...)
We appreciate the suggestion on tutorials or workshops, this is something we will definitely consider as a next step! | Summary: The paper is a review of algorithmic recourse (AR) literature. The authors deploy a systematic framework to investigate research trends in algorithmic recourse and evaluate their incorporation of practical concerns like societal and institutional considerations of AR, or lack thereof. The review finds that current research is focused on methods and technical considerations. The authors encourage researchers in AR to consider real-world implications of their work and conduct user studies.
Strengths: - Paper is well-organized and easy to follow
- Section 2 provides solid background information on algorithmic recourse
- The questions in Section 4 are pertinent
Weaknesses: While I agree with the points being made and appreciate the findings in the paper, I question their novelty. As mentioned in Section 4.6, there are papers (albeit in smaller numbers than we would want) that already provide real-world examples and attempt to discuss ethics within recourse. Previous work by [Doshi-Velez and Kim](https://arxiv.org/pdf/1702.08608) and [Vaughan and Wallach](https://www.jennwv.com/papers/intel-chapter.pdf) has called for more user studies in interpretable ML, which resulted in studies like [Sixt et al.](https://openreview.net/pdf?id=v6s3HVjPerv). Considering that many researchers working on recourse are also in the field of ML interpretability, I am not sure if the paper's results and call for more user studies are very substantive.
Spending more time differentiating this work from other related works (especially other literature reviews like [70]) rather than listing their contributions in Section 2.2 may be helpful in making your case.
A more thorough discussion and evaluation of results (attempted in Section 5) may resolve some of these questions. As it stands, there is a disconnect between Section 4 and 5. The message of the first part of Section 5 (lines 318-350) is not clear. The second paragraph of the section does not seem to be a discussion of survey results but rather an argument the authors are trying to make (without using the results). The paper would benefit from expanding on the contents in lines 352 to 356, pointing to results in Section 4 and bridging them to the suggestions in Section 5.1.
The paper reads more like a position paper, trying to convince researchers in algorithmic recourse to not only focus on technical methods (which, again, I agree with). But I am not sure whether a literature review or a position paper is suitable for NeurIPS; its call for papers does not seem to suggest so.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What was the motivation for this literature review?
- Why do you say that recourse should be aligned with the *preferences* of the end-users?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and feedback points, we appreciate it!
---
> As mentioned in Section 4.6, there are papers (albeit in smaller numbers than we would want) that already provide real-world examples and attempt to discuss ethics within recourse.
We believe that the main novel contribution of our work is elsewhere. We recognize that several great works on algorithmic recourse, and/or interpretable ML have called for user studies or considered aspects such as the ethics of AR. In fact, we refer to a few of them in the draft.
Instead, our results point to a broader problem associated with the potential implementation of AR in real-world systems: existing research tends to look at it as a generic mechanism whose implementation does not depend on the context where it would be employed. As one example, even though "actionability" seems to underpin recourse, over 50% of the reviewed publications restrict their definitions of what is actionable to (at best) acting on mutable features. We cannot blame them: this concept is very difficult to define in a vacuum, without accounting for a specific domain.
We hold fundamental research in high regard. Nonetheless, for algorithmic recourse to become useful, it must be able to address the needs of the systems that it aims to support. Yet, we found few papers that attempt to apply AR. We believe this may stem from the lack of guidance on how it could be operationalized, hence we formulate our recommendations.
---
> Spending more time differentiating this work from other related works (especially other literature reviews like [70]) rather than listing their contributions in Section 2.2 may be helpful in making your case.
We would be very happy to spend an extra paragraph in the camera-ready version to elaborate on the differences. We will address [70] here, as it was also mentioned by Reviewer Wg7R. If you believe this would strengthen the draft, we are also happy to explain in more depth the differences with other surveys.
The great work by Karimi et al. focuses on the available solutions, while our draft looks at the problem that these solutions aim to address. As the authors note, a major contribution of theirs is a comparison of 60 counterfactual explanation and AR algorithms on technical criteria such as supported models (e.g., neural networks), desiderata (e.g., plausibility), or data types (e.g., tabular). Thus, the analysis in [70] is primarily concerned with publications that propose algorithms; publications with other focus are used in a supporting manner. Moreover, [70] takes an unsystematic approach to reviewing existing publications: while the authors collect an impressive number of documents, it is not clear to what extent they are representative of the field. Their results are also qualitative, rather than quantitative.
Our survey takes a step back compared to [70]: we focus on the understanding of the AR problem and the challenges that authors perceive as necessary to address. Hence, we pose questions about the *characteristics* of research on algorithmic recourse, rather than the *outcomes* of this research. This is also why our analysis is not limited to publications that propose new methods.
---
> A more thorough discussion and evaluation of results (attempted in Section 5) may resolve some of these questions. As it stands, there is a disconnect between Section 4 and 5.
Again, we agree with you. As explained in the "global rebuttal" we decided to shorten the discussion due to space considerations, but we see your point that in its current shape Sections 4 and 5 may be disconnected. This is something that we can address, as ultimately the entirety of Section 5 follows from our results, even if this is understated in the current draft.
---
> What was the motivation for this literature review?
Before commencing with this literature review, we had already been relatively familiar with the AR landscape. While we had seen lots of high-quality theoretical work, we were interested in the reasons why it attracts little applied interest. Recourse is a fascinating challenge because it is *necessarily* socio-technical, so we also wanted to evaluate in what ways the social components of recourse are considered in the existing research.
AR is slowly becoming a mature research field, and thus it may impact ADM systems in the future, but as we observe pilot applications remain few and far between, and several challenges may need to be addressed before the industry and governance can benefit from the existing research. Ultimately, we believe that researchers in this domain want to see engagement with the technologies they are creating; our goal was to evaluate how algorithmic recourse is understood by the authors, and through that, why engagement with AR is still missing.
---
> Why do you say that recourse should be aligned with the _preferences_ of the end-users?
We emphasize the preferences of end-users to address the actionability desideratum of recourse. In many publications, actionability is assumed to be specified a priori, so we cannot say with certainty where these types of constraints are expected to come from, though many publications mention domain experts as the source of actionability constraints. Nonetheless, we also note that a large majority of authors understand algorithmic recourse as the process of helping end-users overturn undesirable (algorithmic) decisions. Thus, a common understanding of AR seems to be that it is a service to the individual end-users.
For AR to function as a service and give the end-users a real opportunity to overturn undesirable decisions, we see accounting for individual preferences as a crucial aspect of an operational definition. Whether a system can account for these preferences in the form of constraints on features, ranges of features, or something else entirely is a secondary consideration here. Of course, the extent of this alignment is likely domain-dependent.
---
Rebuttal Comment 1.1:
Comment: Appreciate the detailed answers!
Since I slightly missed the main contribution of the paper, it might be worth taking some time to make that clearer in the paper. Perhaps the reason why I understood the main takeaway as calling for more user studies is from the title "grounding and validation of algorithmic recourse in real-world contexts", which one would achieve by running user studies (especially in the context of academic research).
Additional things I missed in my first review:
- I find the term "socio-technical" vague and unnecessary --- readers reading papers on algorithmic recourse already know its close relation to our lives
- The same goes for "society and institutional components". It's an abstract concept that is not easy to grasp (at least for me).
Provided that the camera ready version includes an improved discussion section that weaves in its findings, I am willing to adjust my score.
Having said that, I am still not confident that the content of the paper is suitable for NeurIPS. As Reviewer yANZ pointed out in their review, the Call for Papers mentions "new and original research" (the paper doesn't fit in the "interdisciplinary" category). I doubt 1) whether the findings are sufficiently novel and 2) contributions are technical enough for NeurIPS, despite their significance in AR research. I will defer this decision to ACs.
---
Rebuttal 2:
Comment: Thank you for getting back to us!
---
>Since I slightly missed the main contribution of the paper, it might be worth taking some time to make that clearer in the paper.
That is very good feedback. We will outline the contribution of this work in Section 1. Unless you have other suggestions in that regard, we will make use of the (summarized) explanation provided above in our response.
---
>I find the term "socio-technical" vague and unnecessary --- readers reading papers on algorithmic recourse already know its close relation to our lives
>The same goes for "society and institutional components". It's an abstract concept that is not easy to grasp (at least for me).
We fully agree with you that readers, in general, will understand that recourse would have impacts on people's lives. We talk about these social and institutional dimensions of the algorithmic recourse problem in a somewhat broader sense, similar to how they are understood in fields such as systems safety, e.g.:
* Leveson, N. G. (2012). _Engineering a Safer World: Systems Thinking Applied to Safety_ (p. 69). The MIT Press.
* De Bruijn, H., & Herder, P. M. (2009). System and actor perspectives on sociotechnical systems. _IEEE Transactions on systems, man, and cybernetics-part A: Systems and Humans_, _39_(5), 981-992.
With this, we aim to emphasize that the understanding of recourse in a specific domain will be influenced by factors such as the involved stakeholders or the organizational processes (see, e.g., lines 377-379), besides the technical factors that are the focus of the existing body of research. Moreover, many problems associated with algorithmic recourse, such as the meaning of actionability or its propensity to lead to unexpected emergent dynamics, can be understood only when accounting for all these components. In any case, we are happy to provide explicit definitions for these terms in the Introduction for ease of exposition.
---
> Provided that the camera ready version includes an improved discussion section that weaves in its findings, I am willing to adjust my score.
Of course! To recap, we will reorganize Section 5 so that it starts with a longer discussion of the results, including what is now lines 352-356 to better explain the provenance of our five recommendations. Next, we will associate with each recommendation the guidelines for practitioners (from our answer to Reviewer Wg7R), explaining how these can be implemented in specific domains. Finally, we will integrate the current lines 318-350 as they follow from the recommendations, reducing the "disconnect" that you have highlighted in your initial review.
We are confident that these changes can be duly implemented within ≈three-fourths of the additional page in the camera-ready version of the draft, leaving enough space to better differentiate our work and the earlier reviews, as we also discussed.
---
> I doubt 1) whether the findings are sufficiently novel and 2) contributions are technical enough for NeurIPS, despite their significance in AR research. I will defer this decision to ACs.
We respect your judgment and appreciate your deferral of this decision to the ACs.
---
*Edit:* We have not received any notification from OpenReview that the above comment was received, so we are updating it to ensure that a notification was sent to the Reviewer. There are no changes in our response compared to its earlier version. | Summary: The authors present a survey of the scientific literature on algorithmic recourse. In their work, the authors analyze what types of contributions authors choose to make to AR research, what criteria are covered in the authors’ definitions of AR, what criteria are covered in the authors’ definitions of actionability, the roles of end users, what types of real-world considerations motivate existing research, what types of real-world considerations are seen as challenges for future work, what types of group-level dynamics are addressed in the existing research, what approaches are taken to the realistic evaluation of proposed methods, and what the open-source and documentation practices in AR research are. They conclude their paper by providing recommendations on how to make future algorithmic recourse solutions better suited for real-world needs.
Strengths: - the authors invested much effort into explaining the procedure followed to ensure a high-quality survey
- the authors succinctly review the scientific literature related to algorithmic recourse and provide great insight into the field within a few pages
- the authors reviewed a vast amount of literature (165 references!)
Weaknesses: We did not identify important weaknesses. While an extensive survey could be created following this one, providing in-depth details for each of the sections, we understand this cannot be done within the constraints established for this venue.
Technical Quality: 3
Clarity: 2
Questions for Authors: We consider the work to be interesting and relevant. We consider the review to be concise yet relevant, and one that could be extended later to provide more fine-grained insights on the topic of algorithmic recourse. Nevertheless, we would like to point to the following improvement opportunities:
(1) - the authors in their abstract mention multiple times the actionable component of algorithmic recourse, which is present in some of the subsections, but in others, the link to this aspect is less clear and could be enhanced.
(2) - did the authors find any works considering actionability in a wider sense, e.g., that some action could be taken even by machines? What are the implications and concerns in such cases?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors have adequately acknowledged the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your positive evaluation of our paper!
---
> (1) - the authors in their abstract mention multiple times the actionable component of algorithmic recourse, which is present in some of the subsections, but in others, the link to this aspect is less clear and could be enhanced.
We are happy to enhance this connection wherever possible. While reviewing the existing literature, we have observed that actionability is understood as the primary consideration for algorithmic recourse, distinguishing it from "passive" explanations. In the current draft, we emphasized the actionability considerations when they influenced the contributions of other authors. For instance, authors who understand actionability as *"acting on mutable features"* tend to incorporate this requirement into their AR solutions, but not more (e.g., not an interface that would allow the end-user to specify their preferences).
---
> (2) - did the authors find any works considering actionability in a wider sense, e.g., that some action could be taken even by machines? What are the implications and concerns in such cases?
This is a very interesting question, especially since, as we note in the draft, the understanding of actionability tends to be relatively limited in the existing research. We have not identified any works where an automated system would take actions on behalf of the end-users (e.g., a machine canceling a customer's lines of credit to improve their credit score). Nonetheless, the considerations of Venkatasubramanian and Alfano [142], who proposed that in some settings algorithmic recourse may require (human) fiduciaries, or even of Slack et al. [125], who considered that model owners are not guaranteed to be trustworthy, seem to apply here. Ultimately, this is a question of control and accountability, e.g.:
* What types of decisions can an algorithm take on behalf of an end user?
* Who is responsible if the automated action leads to an adverse outcome?
Also, certain design decisions in recourse solutions inherently involve broadening or restricting the actionability of the generated recommendations on behalf of the user. For instance, many authors postulate that generating diverse recommendations improves their actionability, but we have also noted dissenting voices such as Albini et al. [6] who suggest that diversity may lead to cognitive overload for the end-users. Finally, we note that some works attempt to automatically discover what is actionable, e.g., the work of Kelechi and Jiao [72] on quantifying actionability. That line of work is relatively unexplored.
---
Rebuttal Comment 1.1:
Comment: Thank you for the comments. We acknowledge we have reviewed the responses and have no further comments.
---
Rebuttal 2:
Comment: We also acknowledge having seen the response of Reviewer bnWe and thank them for the confirmation. | Rebuttal 1:
Rebuttal: Dear Reviewers,
First and foremost, we would like to thank you for your insightful and comprehensive comments. We know that the review process tends to be time-consuming, and we are grateful that you took this time to read our paper in depth and produce reviews of such high quality.
We are also delighted that — regardless of the preliminary scores — your reviews commend the quality of our work and the relevance of the literature review that we have undertaken.
We address the specific points you have raised in the individual responses. In this "global rebuttal", we instead want to focus on the two main objections that seem to have resulted in the three rejections. More specifically, we:
1. explain why we have decided to submit this work to NeurIPS and still believe that it is suitable for the venue; and
2. outline how we can address the suggestion of Reviewer Wg7R that our draft is not achieving its full potential within the additional one page of content in the camera-ready version, should you decide to accept our paper.
-----
**Regarding point 1.**, we agree with the comments that a literature review is not a typical contribution for NeurIPS. We decided to submit this work to NeurIPS, because we believe that algorithmic recourse could become a valuable safety mechanism in algorithmic decision-making systems, provided that it attracts interest from industry and governmental partners. NeurIPS actively engages with these partners and, as also recognized by Reviewer Wg7R, they form an integral part of its audience. In our literature review, we have observed that research on algorithmic recourse remains driven by academia with relatively little interest from other communities to pilot such mechanisms. In our opinion, this work can help bridge the gap and increase engagement with (existing) research on AR. We are very happy that the potential of this work was also recognized by the Reviewers.
Furthermore, we believe that our work complies with the Call for Papers in that NeurIPS invites papers on *"social and economic aspects of machine learning"* and also *"interdisciplinary submissions that do not fit neatly into existing categories"*. While we acknowledge that other authors have previously pointed out the lack of user studies in interpretable ML (as noted by Reviewers pd5W and Wg7R), we reckon that reinforcing this sentiment is only a part of our work.
Most of all, our contribution is empirical in nature in that we review the landscape of algorithmic recourse and *quantify* the insights underlying existing work. We are able to put numbers to these insights because — differently from the previous reviews in the field — we follow a systematized methodology. Based on the empirical results, we explain that for algorithmic recourse to satisfy the socio-technical requirements of systems where such mechanisms would be applied, future research requires a broader scope: not only by involving users but also, crucially, by looking at AR solutions in the context of specific real-world domains.
-----
**Regarding point 2.**, due to the nine content pages limit we decided to reduce the length of Section 5 (Discussion) to better introduce the results in Section 4. We believe that with the additional page of content in the camera-ready version, we will be able to properly address the well-grounded concerns of the Reviewers.
In particular, we decided to only briefly explain the practical aspects of introducing algorithmic recourse into specific domains in lines 338-350, where we discuss the stark differences in what AR would entail in two contexts: education and medicine. This analysis highlights that several important questions for AR research cannot be answered without attending to its operational aspects. For example, in Section 4.3 we have looked at the definitions of "actionability" and noted that while this concept is crucial for algorithmic recourse (frequently equated with *"actionable counterfactual explanations"*), its understanding is limited.
In our opinion, actionability can only be understood for a *specific* problem: domain, application, context, stakeholders, etc. As we explain, in a setting such as providing recommendations to improve learning outcomes almost any suggestion may be acted upon by an affected student. Meanwhile, in a more involved setting such as attempting to improve health outcomes, there will exist a variety of additional constraints on the recommendations (e.g., availability of medications) and the involved stakeholders (e.g., a clinician implementing recourse on behalf of the patient). Moreover, algorithmic recourse may even be altogether impossible if improving the health outcomes of a patient is beyond the capabilities of medicine.
We understand that this analysis is understated in the current draft and we will be happy to elaborate on the steps that can be taken to put our recommendations into practice in specific real-world contexts (see also our answer to the comments of Reviewer Wg7R). This way we can also address the feedback of reviewer pd5W who noted that it is currently not clear how the first part of Section 5 ties to the results in Section 4. The five recommendations in Section 5.1 directly follow from the main "problems" we have observed in the literature review (we very briefly note this in lines 352-356), and we will be able to explain in more detail how they can be put into practice in a specific real-world context with the additional page of content. In our opinion, this requires only a minor revision of the paper.
---
We are looking forward to reading your further comments,
All the best,
The Authors | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Fully Distributed, Flexible Compositional Visual Representations via Soft Tensor Products | Accept (poster) | Summary: The authors propose the use of the tensor product to model the interactions between object properties and their values, in contrast to the usual concatenation-based fusion for compositional representations. Extensive experiments on a large variety of image datasets are performed, where performance gains are often significantly higher than those of the baselines considered.
Strengths: - Modeling “multiplicative interactions” has proven to be a powerful mechanism for deep learning [1], but is largely neglected in modern times. This paper proposes an interesting way of fusing compositional representations of object properties and their values through the tensor product (and subsequent summation). Beyond the TPR’s success in this paper, I imagine the insights here through the proposed model form could help the design of more expressive deep learning architectures in future work more generally.
- The experiments are exhaustive, the technical exposition clear and crisp, and for the most part, every architectural design and decision is justified in great detail. Overall, the paper is remarkably digestible given the technical sophistication, yet offers many insights.
---
- [1]: Jayakumar, Siddhant M. et al. “Multiplicative Interactions and Where to Find Them.” *International Conference on Learning Representations* (2020).
Weaknesses: # [W1] Multiplicatively large dimensionality of the TPR representations & issues scaling to larger settings
My main criticism of the paper is the resultant TPR's dimensionality. In particular, the soft TPR representations live in a $D_f\cdot D_r$ dimensional space. Due to the use of the Kronecker product, the dimension of the TPR representations grows multiplicatively with the two terms.
Whilst the datasets studied in the paper are relatively simple and only 10 factors of variation are modeled, it seems prudent to acknowledge that the TPR size could grow prohibitively large for increased values of $D_f, D_r$ on more complex datasets outside of controlled settings, where significantly larger numbers of FoVs exist. In particular, the NeurIPS checklist states: `The authors should discuss the computational efficiency of the proposed algorithms`.
For example, I see from [L1008] that $N_R:=10$, and $N_F$ is as high as $106$ for Cars3D. Even in this regime with a very small number of roles, the TPR representations are (presumably?) larger than the concatenation-based representations of the baselines. A comparison of FLOPs/multiply-adds needed for the compared methods would be appreciated to better understand the methods’ drawbacks through the use of the (often computationally expensive) tensor product.
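To make the multiplicative growth concrete, here is a minimal numpy sketch (the dimensions `D_f` and `D_r` are purely illustrative, not taken from the paper's configurations):

```python
import numpy as np

# Illustrative dimensions only -- not the paper's actual settings.
D_f, D_r = 16, 10            # filler / role embedding dimensions
filler = np.random.randn(D_f)
role = np.random.randn(D_r)

# Concatenation-based fusion: dimensionality grows additively.
concat = np.concatenate([filler, role])
assert concat.shape == (D_f + D_r,)    # 26 dims

# Tensor (Kronecker) product binding: dimensionality grows multiplicatively.
tpr = np.kron(filler, role)
assert tpr.shape == (D_f * D_r,)       # 160 dims
```

The gap widens quickly as either factor grows, which is the scaling concern raised above.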
Technical Quality: 4
Clarity: 4
Questions for Authors: **[Q1]** As expanded upon in [L933], the “role” embedding matrix $M_{\xi_R}$ is initialized with $n$ orthogonal columns but not trained. We are told that it is “intuitive” that role embeddings do not need to convey semantic information, but I think it would be interesting to see an ablation study on this.
In particular: if it is the case that the embedding matrix does not need to learn semantic representations, why not make the simpler choice of initializing $M_{\xi_R}$ as a (truncated) identity matrix (this shares the same property of semi-orthogonality)? The decision not to make this parameter learnable is peculiar, and I would be interested in seeing its impact explored experimentally (for example, through the DCI score as in Table. 7).
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Limitations are discussed well throughout (and in further detail throughout the appendix), but an extra discussion of the potential drawbacks of increased computational costs should be made.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comprehensive and insightful review.
**W1: Scalability Issues - Multiplicative Dimensionality of (Soft) TPR**
As noted, the Soft TPR lives in a $D_{F} \cdot D_{R}$ dimensional space, which grows multiplicatively. However, several factors mitigate scalability concerns:
1. Independent Dimensionality: The dimensions of the role and filler embedding spaces ($D_{R}$ and $D_{F}$) can be fixed independently of the number of roles, $N_{R}$, and the number of fillers, $N_{F}$. Thus, Soft TPR’s dimensionality ($D_{F} \cdot D_{R}$) can be smaller than $N_{F} \cdot N_{R}$, the number of roles (FoV types) multiplied by the number of fillers (FoV tokens), which may be large in complex visual domains. In this rebuttal, Table* 5 is added to illustrate this, with the dimensionality of the TPR being smaller than $N_{F} \cdot N_{R}$ in both the Shapes3D and MPI3D datasets.
2. Relaxing Orthogonality: While $D_{R} \geq N_{R}$ is needed for semi-orthogonality of the role embedding matrix, $M_{\psi_{R}}$, which guarantees faithful (L841-858) and computationally efficient (L917-929) recoverability of constituents, it is possible to relax this constraint to further reduce dimensionality. When $D_{R} < N_{R}$, semi-orthogonality of $M_{\psi_{R}}$, and thus faithful recoverability of constituents cannot be guaranteed, but some less stringent guarantees on unbinding outcomes can still be provided (see 291 of [1] for details).
Table* 6 is added in this rebuttal to compare Soft TPR’s dimensionality with other baselines. Scalar-tokened symbolic representations have a low dimensionality of 10 ($N_{R}$) at the expense of expressivity (each representational constituent $\psi_{i}(a_{i})$ is scalar-valued). In contrast, Soft TPR has vector-valued constituents (i.e. approx $\psi_{F}(f_{m(i)}) \otimes \psi_{R}(r_{i})$), like VCT and COMET. When compared to these models, our Soft TPR has significantly lower dimensionality than VCT and is comparable to COMET.
Despite these mitigations, we acknowledge that the tensor product is computationally expensive. To more concretely address this concern, Table* 7 is added to indicate the FLOPs for a single forward pass on a batch of size 16 using fvcore [F]. This data demonstrates that, despite the tensor product's computational cost, the mathematically-informed derivation of our model allows it to obtain compositional representations with vector-valued constituents at a significantly lower cost than relevant baselines (2 orders of magnitude less than VCT and 4 orders of magnitude less than COMET).
Future work could explore the use of tensor contraction techniques [G,H] to reduce computational expense. For instance, [I] uses a Hadamard product based tensor product compression technique. This reduces computational cost from $nm$ (tensor product of 2 vectors) to $n$ (Hadamard product), but compromises the theoretical guarantees of constituent recoverability. We believe developing tensor contraction techniques within the TPR framework is an important direction for future research, to ensure efficient TPR-based representations with provable recoverability of constituents.
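The trade-off can be sketched in a few lines of numpy (a hypothetical illustration only; `n` and the vectors below are not drawn from our experiments):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8  # illustrative dimension shared by roles and fillers

f, r = rng.standard_normal(n), rng.standard_normal(n)

# Full tensor-product binding: n*n entries and n*n multiplies.
full = np.outer(f, r)

# The full binding supports exact unbinding given the role vector.
f_recovered = full @ r / (r @ r)
assert np.allclose(f_recovered, f)

# Hadamard-product compression ([I]-style): only n entries and n multiplies,
# but the TPR's provable recoverability guarantees no longer hold once
# several such bindings are superposed.
compressed = f * r
assert compressed.size == n
```

This is why we view tensor contraction within the TPR framework — efficiency without abandoning recoverability — as an open research direction rather than a drop-in fix.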
**Q1: Non-learnability of role embedding matrix**
Semi-orthogonality of the role embedding matrix, $M_{\psi_{R}}$ ensures that the unbinding vector, $u_{i}$ (defined as the $i$-th column of $M_{\psi_{R}}$’s left inverse (L841-850) corresponds to the $i$-th column of $M_{\psi_{R}}$ (L925-927). The Unbinding Module leverages this to efficiently unbind soft fillers from $z$, using columns of $M_{\psi_{R}}$ as unbinding vectors instead of performing costly matrix inversion. If $M_{\psi_{R}}$ is not semi-orthogonal, the Unbinding Module cannot reliably produce soft fillers from $z$, as its columns are not true unbinding vectors.
Extracting the soft fillers from the Soft TPR via unbinding ensures that the TPR decoder produces $\psi_{tpr}^{\*}$ as per Eq 5. Minimising $||z-\psi_{tpr}^{\*}||_{2}^{2}$ (term 1 in Eq 6) encourages the encoder, $E$, to produce outputs that are continuous relaxations of explicit TPRs.
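The unbinding mechanism described above can be sketched as follows (sizes are illustrative, and we use a QR factorisation merely as one convenient way to obtain a semi-orthogonal matrix standing in for the fixed $M_{\psi_{R}}$):

```python
import numpy as np

rng = np.random.default_rng(0)
D_F, D_R, N_R = 6, 5, 3   # illustrative sizes, with D_R >= N_R

# Fixed role embedding matrix with orthonormal columns (semi-orthogonal);
# QR here is just a stand-in for the fixed random initialisation.
M_R, _ = np.linalg.qr(rng.standard_normal((D_R, N_R)))
assert np.allclose(M_R.T @ M_R, np.eye(N_R))   # semi-orthogonality

fillers = rng.standard_normal((N_R, D_F))      # one filler per role

# Explicit TPR: superposition of filler-role outer products.
z = sum(np.outer(fillers[i], M_R[:, i]) for i in range(N_R))

# Unbinding: because M_R is semi-orthogonal, its i-th column acts as the
# i-th unbinding vector -- each filler is recovered exactly, with no
# matrix inversion.
for i in range(N_R):
    assert np.allclose(z @ M_R[:, i], fillers[i])
```

If $M_{\psi_{R}}$ were learnable and drifted away from semi-orthogonality, the final assertion would fail: the columns would no longer be true unbinding vectors, which is the failure mode discussed above.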
In response to the reviewer's suggestion, we add Table* 8 in this rebuttal with results of the ablation where $M_{\psi_{R}}$ is learnable. This retains a competitive edge over disentanglement baselines, but the substantially lower FactorVAE and DCI scores empirically demonstrate the importance of fixing $M_{\psi_{R}}$ for effective Soft TPR Autoencoder performance. Keeping $M_{\psi_{R}}$ fixed leverages the unique mathematical properties of the Soft TPR/TPR framework, which are essential for effectively learning the representation.
We thank the reviewer for the suggestion to use a truncated identity matrix. Results in Table* 8 show that a truncated identity matrix retains an edge over a learnable $M_{\psi_{R}}$, again highlighting the need for semi-orthogonality. However, the lower disentanglement results suggest potential benefits of random initialisation. Unbinding with columns of a truncated identity will force soft fillers to be axis-aligned ($\tilde{f}\_{m(i)} := z u_{i}$), potentially limiting the flexibility and richness of learned representations. Additionally, a more varied set of weights in $M_{\psi_{R}}$ produced by random initialisation helps to break symmetry in subsequent layers and injects noise, which can improve overall learning dynamics [J, K].
[F]: Fvcore https://github.com/facebookresearch/fvcore/tree/main/docs
[G]: Kossaifi et al. "Tensor Contraction Layers for Parsimonious Deep Nets." CVPR 2017.
[H]: Sharir et al. "Neural Tensor Contractions and the Expressive Power of Deep Neural Quantum States." Physical Review 2022.
[I]: Schlag et al. "Enhancing the Transformer with Explicit Relational Encoding for Math Problem Solving." 2019.
[J]: Glorot and Bengio. “Understanding the difficulty of training deep forward neural networks”. AISTATS 2010.
[K] Noh et al. “Regularising Deep Neural Networks by Noise: Its Interpretation and Optimization.” Neurips 2017.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for their thorough response.
The rebuttal was a pleasure to read; the extra experiments and ablations are insightful, and the reviewers make a good point that—despite the dimensionality of the space growing multiplicatively—the independent dimensionality of the two factors means this is not always too problematic.
The “weaknesses” raised in my initial review were only very minor in the first place—the authors have acknowledged the multiplicative growth of the space, and I see no critical reasons why the paper should not be accepted.
---
Rebuttal 2:
Comment: Dear Reviewer xKnt,
We are glad you found our additional experiments and ablations insightful. Your feedback on our approach, especially regarding the dimensionality, is greatly valued. Thank you again for your time and thoughtful review.
Best,
Submission14814 Authors. | Summary: This work explores compositional representations -- considered a crucial capability underlying intelligent human behaviour in deep learning systems. It argues that there is an incompatibility between discrete symbolic compositional representations—e.g. as obtained through traditional disentanglement approaches—and the continuous vector spaces underlying deep learning systems. To address this, the authors introduce a novel continuous compositional representation that builds on the Tensor Product Representation (TPR) approach, akin to a soft approximation to the TPR. They introduce a method for learning soft TPRs with weakly supervised data called the Soft TPR Autoencoder, and apply it to visual representation learning. This model demonstrates state of the art disentanglement for representation learning and improved sample efficiency for downstream models using those representations.
Strengths: - importance: compositional representation learning, and compositional generalisation and sample efficiency is an important and relatively under-explored area in deep learning research. By focusing on an approach for representation that aims to be more compatible with deep learning this work makes important contributions to this area.
- novelty: the method presented is interesting & new but builds on the well established TPR framework.
- clarity: the method is explained well and in enough detail to understand and reproduce it.
- evaluation: the paper provides a thorough evaluation of the model, including comparisons with the relevant baselines, convergence rates, and performance in low sample regimes. The better performance on downstream tasks w.r.t. number of training examples is particularly interesting and supports some of the motivations for compositional representation.
- interested to see where this work might develop in future work exploring hierarchical compositional representations.
Weaknesses: - motivation for the approach & model could be unpacked a bit more (see questions)
- other domains: the authors focus on applying the work to visual data, outside the typical domain of TPR models. In general, a strong feature, enabling them to tackle the messier world of complex visual data (and weaker supervision) and compare the model to the many disentanglement approaches that have previously been applied in the visual domain. However, it could be interesting to see how the _Soft_ TPR approach compares to traditional TPR in its typical domain of language.
- no high impact is shown for downstream utility. The improvements seen for downstream tasks are certainly interesting, particularly in the low-data regime; however, only two tasks are explored. Are there any more good downstream tasks to evaluate the utility of their learned representations? Could there even be more speculation as to where future work might really leverage what can be learned with this approach?
- need for supervision (albeit weak): could the work be extended to use different forms of supervision, or less/no supervision? Noting that their ablation showed the importance of the supervision, perhaps the authors could explore more variants on this ablation and speculate on extensions of the model that could reduce the need for supervision.
Technical Quality: 3
Clarity: 3
Questions for Authors: A couple of questions, both on the motivations of the approach in the paper and how it could be improved. In both cases, these are more questions focused on the conceptual motivations and understanding than what the results show:
- In terms of motivating the approach, can you be more specific about how representations obtained with traditional disentanglement approaches (typically continuously-valued themselves) could create an incompatibility with the continuous vector space of deep learning systems, and lead to suboptimal performance for the representation learning and downstream models? Any hypothetical or concrete examples (and/or references discussing this point)?
- Relative to TPR, what is the relaxation in Soft TPR supposed to be helping with, and why? The manifold example is given, e.g. in Figure 1c, but seems quite vague. Also discussed is that the data being explained may be only approximately compositional, but why would TPR fail with this, and why would we specifically expect Soft TPR to help? Could this be unpacked more, perhaps with some concrete examples?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the reviewer’s detailed and thoughtful feedback.
**W1, Q1, Q2: Motivation for approach and model**
Thank you for the detailed questions on the motivation. We clarify these points in the General Response.
**W2: Limited Domains**
Applying Soft TPR to language is an intriguing future direction, especially as language can deviate from strict algebraic compositionality. E.g., idiomatic expressions like ‘spill the beans’ cannot be understood as a function of their constituents alone. Traditional TPR, relying on strict algebraic compositionality (Eq 1) may be less effective in such cases. Soft TPR's more flexible, approximately compositional structure might better handle such complexities.
To adapt our framework to language, we replace our Conv/Deconv encoder/decoders with simple RNNs, retain our TPR decoder and remove the semi-supervised loss, using Eq 6 as the full loss. Preliminary results (Table* 4 in this rebuttal) on the BaBI dataset [A] are compared with TPR baselines from AID [36]. Our Soft TPR Autoencoder does not yet surpass AID, but notable points include:
1) Our use of simpler RNN-based encoders and an MLP-based downstream network, unlike the more sophisticated architectures of [36]
2) Soft TPR retains its performance improvement over the corresponding explicit TPR into which it can be quantised
3) The smaller gap between systematic vs non-systematic dataset splits in our model compared to TPR-RNN (+AID) and FWM
4) We train our representation learner using self-supervision (reconstruction loss) alone, only employing supervision on the downstream prediction network, while the baselines employ strong supervision and end-to-end training to produce representations
**W3: Downstream Utility**
Our focus on the 2 selected tasks (FoV regression/classification and abstract visual reasoning) aligns with the standard framework for assessing the quality and downstream utility of compositional representations [19,23,26,32,B,C]. While explicitly compositional representations enhance downstream sample efficiency over non-compositional representations [23,26], which we improve upon (C.5.1), their broader utility remains an open question [15,32,33,C].
Theoretical arguments [D,E] posit that explicitly compositional representations are crucial for productive, systematic, and inferentially coherent thought/processing – 3 key properties of human cognition. Using compositional representations to produce empirical benefits along these dimensions is a crucial direction of future research. While preliminary results [32,33,C] do not find strong evidence that compositional representations enhance compositional generalisation (a form of systematicity), [C] suggests that compositional representations are necessary, but not sufficient to promote systematicity; an explicitly compositional form of processing is also required.
We believe our theoretical formalism offers a unified framework for future research to achieve empirical results aligned with [D,E]. In particular, as our unbinding module provably recovers structured role-filler constituents from the Soft TPR (L189-197, L841-858), our model enables the systematic rearrangement of a representation’s roles/fillers to form novel composite representations. This potentially advances the development of compositional processing methods as suggested by [C] and holds potential in domains such as concept learning and compositional generalisation.
We will add some discussion in the paper regarding this point.
**W4: Need for Weak Supervision**
To produce a compositional representation $\psi(x) = C(\psi_{1}(a_{1}), \ldots, \psi_{n}(a_{n}))$ (L116), each representational constituent $\psi_{i}(a_{i})$ must map directly to a data constituent, $a_{i}$. Without supervision, this is challenging because the data constituents are unknown.
In generative models, [19] showed that unsupervised disentanglement is theoretically impossible for essentially this reason: without supervision, there is no mechanism to match representational constituents with ground truth generative factors (role/filler bindings). Although our non-generative model avoids this impossibility result, the proof’s intuition is compelling. Thus, we use weak supervision [7,16,24,26,28] by presenting the model with data pairs $(x, x')$ differing in a subset of FoVs, indexed by set $I$. This weak supervision is minimal, only providing knowledge of the changing FoV types (e.g. ‘floor colour’) and not the FoV tokens (i.e., the specific floor colours in $x, x’$).
As suggested by the reviewer, Table* 3 is added in the rebuttal to present additional variants on the supervision ablation from the perspective of disentanglement performance (please refer to Eq 7 for definitions of $\lambda_{1}, \lambda_{2}$).
We outline some possible extensions to reduce weak supervision:
1. In the visual domain, some roles e.g. ‘object position' correspond to affordances. Embodied agents may be able to reduce the need for explicit supervision by collecting $(x, x'), I$ through interaction.
2. Initialising the filler embedding matrix, $M_{\psi_{F}}$ with embeddings learnt by a pre-trained vision model could impart knowledge of domain-agnostic fillers (e.g. colours, shapes), reducing the need for weak supervision.
3. Segmentation masks for each constituent may potentially reduce the need for weak supervision.
[A] Weston et al. 2015. Towards ai-complete question answering: A set of prerequisite toy tasks.
[B] Yang et al. 2022. Towards Building a Group-based Unsupervised Representation Disentanglement Framework. ICLR.
[C] Montero et al. 2022. Lost in Latent Space: Examining Failures of Disentangled Models at Combinatorial Generalisation. NeurIPS.
[D] Fodor. 1975. The Language of Thought: A Theory of Mental Representation.
[E] Chomsky. 1957. Syntactic Structures.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed and clear follow-up, many of my concerns have been addressed and I have raised my score.
---
Rebuttal 2:
Comment: Dear Reviewer 93av,
We are glad our response addressed many of your concerns. Thank you again for your time and insightful review. We really appreciate your thoughtful feedback.
Best,
Submission14814 Authors. | Summary: This paper introduces a novel framework for representation learning known as Soft Tensor Product Representations (Soft TPR), aimed at capturing the compositional structure of data more effectively. The authors propose a continuously-valued compositional representation that contrasts with traditional symbolic methods. The paper makes several key contributions in the realm of representation learning, including the conceptualization of Soft TPR, the Soft TPR Autoencoder, and the demonstration of the benefits of this representation for both representation learning and downstream models.
Strengths: The idea of Soft TPR brings a fresh perspective to the field of representation learning, offering a new way to represent compositional structures in a continuous manner. It provides a thorough theoretical exploration of Soft TPR, complete with detailed mathematical proofs and framework extensions. The paper clearly articulates the differences between Soft TPR and existing methods, holding significant potential for enhancing model interpretability and improving robustness to covariate shift.
Weaknesses: 1. No MPI scores for disentanglement (in Table 1) are reported.
2. The construction of the model's loss function involves a multitude of hyperparameters, suggesting that the model might require a complex and intricate tuning process. Compared to other models like VCT, the process of adjusting parameters to achieve optimal results could involve additional complexity and such a requirement might pose challenges in terms of computational resources and time.
3. The reason why Soft TPR is better than TPR is not fully explored.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the disentanglement performance of TPR? The paper only reports the downstream results of TPR in Table 6; is it possible to report the disentanglement performance of TPR in Table 1?
2. How many pairs are provided for the weak supervision? For example, in Table 4, are the numbers the total samples for training, or the labeled sample pairs?
3. VCT uses attention-based operations (cross-attention and self-attention) to extract concept vectors, and reconstruct images from the concept tokens, what is the difference between those attention-based operation and the unbinding module or TPR constructor in this paper?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: I don't see any negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their detailed and thoughtful review.
**W1**: To the best of our knowledge, the MPI score is not an established metric in the disentanglement learning literature [19,23-26,31,32,34,38].
Consistent with VCT [34], we evaluate disentanglement using 3 datasets (Cars3D, Shapes3D, MPI3D) and 4 metrics (FactorVAE, DCI, MIG, BetaVAE) commonly used in disentanglement studies [19,23-26,29,31-34]. Table 1 in our paper compares our method with SoTA approaches using FactorVAE and DCI scores. For MIG and BetaVAE results, please see Table 17 of C.3.3.
To avoid confusion, we will clarify in Table 1’s caption that additional results are in the supplementary.
**W2**: Our loss function in Eq 7 has 2 tuneable hyperparameters, $\lambda_{1}$ and $\lambda_{2}$ in the weakly supervised loss (we set $\beta$ in $\mathcal{L}_{u}$ of Eq 6 to 0.5 (L1006)).
The use of hyperparameters for loss functions is common in compositional representation learning. Canonical VAE disentanglement methods [4,8,9,11,18] and our baselines [31,26,29] each have 1-2 loss hyperparameters.
VCT’s lack of tuneable loss hyperparameters is a strength, but VCT is far more computationally expensive to train than our model.
To illustrate this point, we add Table* 7 in this rebuttal comparing FLOPs. Table* 1 shows the total time for tuning our model on an RTX 4090, including 2 loss hyperparameters and 4 architectural hyperparameters (details in B.3). Table* 2 shows the cost of tuning our model relative to the total time required for training a single VCT model on a V100 GPU for 200,000 iterations, excluding the 48-144 hours for training VCT’s VQ-VAE image tokeniser.
Furthermore, ablation experiments (C.6.2) demonstrate our model’s robustness to both architectural and loss hyperparameters indicating multiple viable hyperparameter points that ease the tuning process.
**W3**: Please refer to our detailed explanation of the differences between Soft TPR and TPR in Section 2 of our General Response.
**Q1**: The disentanglement performance of the Soft TPR and TPR are identical. To establish why, we outline our method of extending existing disentanglement evaluation procedures to the continuous instantiation of compositional structure of the Soft TPR/TPR (please refer to C.3.2 for additional details).
Existing disentanglement metrics assume a symbolic representation of compositional structure, requiring a vector $v$ with a dimension $N_{R}$ matching the number of roles (FoV types). Each dimension of $v$ is populated with a scalar-valued filler (FoV token). To produce $v$ from the Soft TPR we follow the procedure in C.3.2:
1) Quantised fillers $\{\psi_{F}(f_{m(i)})\}$ are extracted from $z$ via Unbinding and Quantisation modules.
2) Each dimension of $v$ is populated with the index of the quantised filler for role $i$ (i.e. $v_{i} = m(i)$).
As the explicit TPR, $\psi_{tpr}^{*}$ is built from the Soft TPR’s quantised fillers (Eq 5), the disentanglement scores for both are identical.
We use the quantised fillers of the Soft TPR to construct $v$, not its soft fillers $\{\tilde{f}_{m(i)}\}$ for 2 reasons:
1) There is no natural way of quantising soft fillers into scalars other than using PCA, which incurs significant information loss.
2) Quantised fillers align with the intent of disentanglement metrics, which assess perfect (not approximate) compositionality.
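As a hedged illustration only (the role/filler embeddings and flattening convention below are toy assumptions, not the paper's actual trained modules), the quantisation procedure above can be sketched as: unbind each role from a (Soft) TPR, snap the recovered soft filler to its nearest filler embedding, and populate $v$ with the winning filler indices.

```python
# Toy role/filler embeddings; roles are orthonormal so unbinding is a
# simple projection. Indices: fillers 0=purple, 1=green, 2=square.
roles = [[1, 0], [0, 1]]                      # e.g. shape role, colour role
fillers = [[1, 2, 3], [2, 2, 3], [0, 0, 1]]   # purple, green, square

def unbind(t, role_vec):
    """Recover the (soft) filler bound to a role from a flattened TPR."""
    n_f = len(fillers[0])
    return [sum(role_vec[r] * t[r * n_f + f] for r in range(len(role_vec)))
            for f in range(n_f)]

def quantise(soft_filler):
    """Index of the nearest filler embedding (squared Euclidean distance)."""
    dist = lambda u, v: sum((a - b) ** 2 for a, b in zip(u, v))
    return min(range(len(fillers)), key=lambda m: dist(fillers[m], soft_filler))

# A slightly perturbed (Soft) TPR of {square/shape, purple/colour}:
z = [0.01, -0.02, 1.003, 0.99, 2.01, 3.0]
v = [quantise(unbind(z, r)) for r in roles]
print(v)  # [2, 0] -> square for the shape role, purple for the colour role
```

Because the explicit TPR is rebuilt from exactly these quantised fillers, running the same procedure on it yields the same $v$, matching the identical disentanglement scores reported above.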
The Soft TPR, $z$, does not enhance disentanglement compared to the explicit TPR, $\psi_{tpr}^{*}$. However, it is important to note that our motivation for continuously relaxing the TPR into the Soft TPR is not to more explicitly represent compositional structure, but rather to further align TPR-based structure with continuous vector spaces (details in Section 2 of General Response). This enhances downstream performance and representation learner convergence compared to the TPR (C.6.1). Our model is also able to always recover precise, algebraic compositional structure (with form $\sum_{i} \psi_{F}(f_{m(i)}) \otimes \psi_{R}(r_{i})$) from the Soft TPR, $z$, by quantising it using the TPR decoder. Our model can thus leverage the duality of the Soft TPR’s (approximately) compositional structure.
**Q2**: The term refers to the number of individual data points, not pairs (i.e., 100 samples=50 pairs). We will add a footnote for clarification.
**Q3**: The key differences are as follows:
1) Motivation: VCT applies attention to generate concept/image tokens, whereas our framework uses the 3 modules of the TPR decoder to generate a highly specific element $\psi_{tpr}^{\*}$ as per Eq 5. VCT uses these concept tokens to produce a symbolic form of compositional structure, whereas we use $\psi_{tpr}^{\*}$ to minimise $||z - \psi_{tpr}^{\*}||^{2}_{2}$ (term 1 in Eq 6), which encourages the Encoder, $E$, to produce a Soft TPR, a continuous representation of (approximately) compositional structure.
2) Continuous vs Symbolic Representation: VCT represents compositional structure symbolically, through direct sums, with each constituent $c_{i}$ embedded in a separate subspace. Our method embeds all constituents in the same vector space and additively superimposes them. This continuous form of compositional structure produces empirical benefits (C.3.3, C.4.2, C.5).
3) Constituent Structure: VCT’s unstructured constituents (concept tokens) do not guarantee systematic relations between constituents with the same filler/role. Soft TPR, however, with its structured (soft) role-filler bindings ensures systematic relations between constituents with the same filler/role, resulting in a disentanglement improvement.
4) Mathematical Basis: Our modules are mathematically designed to recover soft fillers, quantised fillers, and explicit TPRs (Eqs 3-5). Attention, as employed by VCT, cannot theoretically guarantee the production of these structures.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. Most of my concerns are addressed. I have raised my score.
---
Rebuttal 2:
Comment: Dear Reviewer 6VtP,
We are glad that our response addressed most of your concerns. Thank you again for your time and thoughtful feedback regarding our approach.
Best,
Submission14814 Authors. | null | null | Rebuttal 1:
Rebuttal: (Please note all rebuttal tables are included in the 1-page PDF)
**1) Incompatibility between disentangled representations and deep learning’s continuous vector spaces (Reviewer 93av Q1)**
Traditional disentanglement methods, although producing continuously-valued representations $\psi_{d}(x)$, use a symbolic, direct-sum approach to represent compositional structure (L37-40,122-124). Specifically, each FoV/data constituent $a_{i}$ is represented by a single dimension/contiguous subset of dimensions, $\psi_{i}(a_{i})$, which are concatenated together to form $\psi_{d}(x) = \psi_{1}(a_{1}) \oplus \ldots \oplus \psi_{n}(a_{n})$
Here, each representational component $\psi_{i}(a_{i})$ representing a FoV is embedded into a separate, independent subspace $V_{i}$ within the overall vector space. For example, consider 2 features, colour and shape, with embeddings $\psi_{col}(purple) = (1\\;0)^{T}, \psi_{sh}(square) = (1\\;0)^{T}$.
A disentangled representation is: $\psi_{d}(purple\\;square) = \psi_{col}(purple) \oplus \psi_{sh}(square) = \begin{bmatrix} 1 \\\ 0 \end{bmatrix} \oplus \begin{bmatrix} 1 \\\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\\ 0 \\\ 1 \\\ 0 \end{bmatrix}$.
Here, colour and shape are allocated to distinct subspaces of the representational space $\mathbb{R}^{4}$ (i.e. dims 1&2 for colour, dims 3&4 for shape). This discrete allocation mirrors symbolic systems where discrete symbols occupy separate spaces in the representation, although $\psi_{d}(x)$ is continuously-valued.
Deep learning systems rely on gradient-based optimisation. Disentanglement’s symbolic method of representing constituency structure (i.e., allocating constituents to discrete subspaces of $\mathbb{R}^{4}$) complicates this process, as discrete subspace boundaries (i.e. between dims 1&2, dims 3&4) must be managed for each constituent. This complicates gradient-based processes, as changes must navigate abrupt transitions between subspaces rather than smooth, continuous alterations.
Additionally, this symbolic method of representing constituency structure requires subspace alignment to ensure the overall representation in the larger vector space, $V_{F}$ is semantically meaningful (e.g. the colour subspace having significantly larger magnitudes than others). Guaranteeing this may be challenging.
In contrast, TPR-based representations combine features in a unified vector space (e.g. $\psi_{col}(purple) \otimes \psi_{sh}(square) = (1\\;0)^{T} \otimes (1\\;0)^{T} \cong (1\\;0\\;0\\;0)^{T}$), meaning that constituents (e.g., colour, shape) are integrated into the same vector space and cannot be separated into discrete parts of the representational space, $\mathbb{R}^{4}$. This continuous approach of representing constituency structure avoids the issues of discrete subspace boundaries and subspace alignment, facilitating smooth gradient updates and more effective learning to enhance representation learner (C.3.3, C.4.2) and downstream (C.5) performance.
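The contrast between the two embedding schemes can be sketched in a few lines of Python. This is purely illustrative (the vectors are the toy values from the example above, and the flattened outer product is one common convention), not the paper's implementation:

```python
def direct_sum(*vecs):
    """Symbolic scheme: concatenate constituents into disjoint subspaces."""
    out = []
    for v in vecs:
        out.extend(v)
    return out

def tensor_product(a, b):
    """TPR-style scheme: flattened outer product; constituents share one space."""
    return [x * y for x in a for y in b]

psi_col_purple = [1, 0]
psi_sh_square = [1, 0]

# Disentangled (symbolic): dims 1-2 hold colour, dims 3-4 hold shape.
print(direct_sum(psi_col_purple, psi_sh_square))      # [1, 0, 1, 0]

# TPR-style (continuous): superimposed in R^4, no subspace boundaries.
print(tensor_product(psi_col_purple, psi_sh_square))  # [1, 0, 0, 0]
```

In the direct-sum output, a constituent can be read off from a fixed block of dimensions; in the tensor-product output, no contiguous block of $\mathbb{R}^{4}$ "belongs" to colour or shape alone.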
**2) Why the relaxation of the TPR into the Soft TPR (Reviewer 6VtP W3, 93aV Q2)**
For concreteness, consider the set of roles and fillers $R = \\{object\\;colour, object \\; shape\\}, F = \\{purple, green, square\\}$. We define role embedding $\psi_{R}: R \rightarrow \mathbb{R}^{2}$ and filler embedding $\psi_{F}: F \rightarrow \mathbb{R}^{3}$ functions as follows: $\psi_{R}(object\\;shape) = (1\\;0)^{T}, \psi_{R}(object\\;colour) = (0\\;1)^{T}$ and $\psi_{F}(purple) = (1\\;2\\;3)^{T}$, $\psi_{F}(green) = (2\\;2\\;3)^{T}$, $\psi_{F}(square) = (0\\;0\\;1)^{T}$
2.1. Discrete Mapping: The possible TPRs that can be produced are:
1. $\psi_{tpr}(purple/object\\;colour, square/object\\;shape) = \psi_{F}(purple) \otimes \psi_{R}(object\\;colour) + \psi_{F}(square) \otimes \psi_{R}(object\\;shape) \cong (0\\;0\\;1\\;1\\;2\\;3)^{T} $
2. $\psi_{tpr}(green/object\\;colour, square/object\\;shape) = \psi_{F}(green) \otimes \psi_{R}(object\\;colour) + \psi_{F}(square) \otimes \psi_{R}(object\\;shape) \cong (0\\;0\\;1\\;2\\;2\\;3)^{T}$
The possible TPRs form a 2-element subset, $T$, of the underlying vector space $\mathbb{R}^{6}$. Relaxing TPRs to Soft TPRs (L191) gives a larger set of points, $T_{s} = \\{(0\\; 0\\;1\\;2\\;2\\;3)^{T} + \alpha, (0\\;0\\;1\\;1\\;2\\;3)^{T} + \alpha : |\alpha| < \epsilon\\}$, which includes points like $(-0.0001\\;0.0002\\;1\\;1\\;2.004\\;3.009)^{T}$. As $T_{s}$ has strictly more points than $T$, there should be more functions parameterising the map from the observed data to $T_{s}$, the set of Soft TPRs, than to $T$, the set of TPRs. This should make the Soft TPR representation easier to learn and extract information from than the TPR, as reflected in our empirical results in C.6.1.
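The two TPRs of 2.1 can be reproduced with a short, self-contained sketch (the role-then-filler flattening of the outer product below is an assumption chosen to match the vectors shown above, not code from the paper):

```python
def kron(a, b):
    """Flattened outer product a (x) b (role-then-filler ordering assumed)."""
    return [x * y for x in a for y in b]

def vec_add(u, v):
    return [x + y for x, y in zip(u, v)]

# Role and filler embeddings from the worked example above.
role = {"shape": [1, 0], "colour": [0, 1]}
filler = {"purple": [1, 2, 3], "green": [2, 2, 3], "square": [0, 0, 1]}

def tpr(bindings):
    """Explicit TPR: superposition of role (x) filler binding embeddings."""
    out = [0] * 6
    for r, f in bindings:
        out = vec_add(out, kron(role[r], filler[f]))
    return out

t1 = tpr([("colour", "purple"), ("shape", "square")])
t2 = tpr([("colour", "green"), ("shape", "square")])
print(t1)  # [0, 0, 1, 1, 2, 3]
print(t2)  # [0, 0, 1, 2, 2, 3]

# A Soft TPR is any point within epsilon of an explicit TPR, e.g.:
soft = [ti + di for ti, di in zip(t1, [-0.0001, 0.0002, 0, 0, 0.004, 0.009])]
```

Only `t1` and `t2` are valid explicit TPRs; `soft` belongs to the strictly larger set $T_{s}$ while remaining arbitrarily close to `t1`.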
2.2. Quasi-Compositional Structure: The TPR enforces a strict algebraic definition of compositionality (i.e. $\sum_{i}\psi_{F}(f_{m(i)}) \otimes \psi_{R}(r_{i})$). The relaxation of this constraint in Soft TPR (L191) enables it to represent structures that only approximately satisfy the TPR’s strict algebraic definition of compositionality (e.g., in French liaison consonants, where a weighted sum of multiple fillers, rather than a single filler, binds to a role [L]).
2.3. Serial Construction: Building explicit TPRs requires tokening constituents (role-filler binding embeddings) before the compositional representation can be produced [13,17,22,27,30,36,37]. Soft TPRs allow the encoder $E$ to produce any arbitrary element of $V_{F} \otimes V_{R}$ (in this case $\cong \mathbb{R}^{6}$) provided it is sufficiently close to a TPR. Thus, once the Soft TPR Autoencoder is trained, it is theoretically possible to remove the TPR Decoder and exploit vector space continuity to create approximately compositional representations directly from the Encoder, $E$, without needing to token the constituents.
[L]: Smolensky and Goldrick. Gradient Symbolic Representations in Grammar: The case of French Liaison. 2016.
Pdf: /pdf/19397ba7b674a26b6e2593de686057105346f701.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
NeuralSteiner: Learning Steiner Tree for Overflow-avoiding Global Routing in Chip Design | Accept (poster) | Summary: This paper proposes NeuralSteiner, a two-phase global routing scheme to optimize both wirelength and overflow. It also demonstrates the capability of generalizing to unseen large-scale circuits. Experiments on public benchmarks show NeuralSteiner reduces overflow by 99.6\% with a wirelength increase of only 1.8\%.
Strengths: 1. NeuralSteiner proposes a scheme to realize partial parallel routing tasks by group nets whose bounding boxes share no common area or overlap.
2. NeuralSteiner is able to optimize the overflow in global routing. It utilizes SOTA global router CUGR to perform global routing stage and construct expert datasets. These routing results are congestion-avoided, assisting the candidate prediction model to learn the position of Steiner points and corner points.
3. Besides, it also proposes NAG representation and a RST construction algorithm to reduce the overflow in global routing.
Weaknesses: 1. Typos in line 282: "enven arger than 2000".
2. NeuralSteiner is not an end-to-end approach, still relying on a post-processing/greedy algorithm to construct the final routing results. But this greedy algorithm is simple and efficient, and utilizing deep learning for RST construction is left for future work.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Where is the edge weight in NAG used?
2. How to calculate the "distance" mentioned in line 222 to select the two nearest connected components?
3. In line 154, the authors mention they use CUGR to perform global routing to construct the congestion-avoid datasets. And they also propose some algorithms such as NAG, overflow-avoided RST construction algorithm. Between the datasets and algorithms, which one is the key to the overflow reduction of NeuralSteiner?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: RST construction still relies on a heuristic post-processing algorithm, but this could be left for future work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable feedback. Please note our global comment with additional experimental results. Below we will address specific questions.
W1: Thanks again for your meticulous review, and we will correct typos in the revised version.
> **W2: NeuralSteiner is not an end-to-end approach, still relying on the post-processing/greedy algorithm to construct the final routing results.**
We understand the reviewer's concerns about NeuralSteiner not being an end-to-end method. The complexity of obstacle-avoiding Steiner tree generation in global routing makes it very challenging for the neural network to simultaneously optimize wirelength and congestion, and ensure connectivity in an end-to-end manner.
Methods like PRNet [1] and [3] have attempted to merge these tasks using a single network but faced significant connectivity issues. Hubrouter [2] addresses these two tasks with different networks, solving the problems of RST connectivity and wirelength optimization.
Our NeuralSteiner method achieves optimization of wirelength and congestion simultaneously through a two-stage setup. However, since the existing candidate points still need to be processed as discrete values for subsequent RST construction, differentiable learning is difficult to apply. In the future, we will explore the efficient generation of overflow-avoiding RSTs using an end-to-end neural network.
> **Q1: Where is the edge weight in NAG used?**
In Section 3.4, Eq.8 defines the weights used for edges in the NAG.
These weights consider both the wirelength and the congestion the edge passes through, with parameters $w_d$ and $w_o$ reflecting the trade-off between the two. We used $w_d = 1$ and $w_o = 2$ in all experiments. However, we realize that the usage of the symbol $O(x,y)$ causes some confusion, as it has the same meaning as $o_{ij}$ in Eq.6 of Section 3.3, representing the value of the overflow map at that location. We will clarify this ambiguity in future revisions.
> **Q2: How to calculate the "distance" mentioned in line 222 to select the two nearest connected components?**
When calculating the distance between two connected components, we compute the shortest path length on the NAG between every pair of points drawn from the two components respectively (where the path length is the sum of the weights of all edges on that path). The minimum of all these shortest path lengths is taken as the distance between the two connected components.
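A minimal sketch of this computation (illustrative only; the graph and weights below are hypothetical, and in the paper the edge weights would follow Eq. 8, combining wirelength and overflow via $w_d$ and $w_o$): a multi-source Dijkstra started from all nodes of one component terminates as soon as it settles any node of the other, which yields exactly the minimum over all pairwise shortest paths.

```python
import heapq

def component_distance(adj, comp_a, comp_b):
    """adj: {node: [(neighbour, weight), ...]}; comp_a/comp_b: sets of nodes.
    Returns the minimum shortest-path length between the two components."""
    dist = {n: 0.0 for n in comp_a}           # multi-source initialisation
    heap = [(0.0, n) for n in comp_a]
    heapq.heapify(heap)
    while heap:
        d, u = heapq.heappop(heap)
        if u in comp_b:
            return d                          # first settled target is nearest
        if d > dist.get(u, float("inf")):
            continue                          # stale heap entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return float("inf")                       # components not connected

# Toy NAG: a path 0 - 1 - 2 - 3 with edge weights 1, 3, 1.
adj = {0: [(1, 1.0)], 1: [(0, 1.0), (2, 3.0)],
       2: [(1, 3.0), (3, 1.0)], 3: [(2, 1.0)]}
print(component_distance(adj, {0, 1}, {3}))  # 4.0
```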
Please refer to the Q3 in global responses for detailed analysis of RST construction algorithm and the acceleration methods.
> **Q3: Between the datasets and algorithms, which one is the key to the overflow reduction of NeuralSteiner?**
Thank you for raising this question. We believe it is of significant importance to the advancement of this field. In Section 3.3, we use the expert router CUGR to solve the routing results of nets under congestion and construct an expert routing dataset. The neural network learns from this expert dataset, enabling its output to form candidate points for potential low-congestion RSTs, and provides a solution space for subsequent NAG and the overflow-avoiding RST construction algorithm.
The NAG and the corresponding RST construction algorithm use a greedy algorithm within this solution space to find the final routing result that ensures connectivity while optimizing wirelength and congestion. As shown in the ablation experiments Table 3 and Table 4 for adaptec05, even though the inclusion of NAG and the RST construction algorithm significantly reduces OF without using the candidate points generated by neural network, the addition of the neural network trained on the expert dataset further avoids 95% of the remaining challenging congestion.
> **L: RST construction still relies on a heuristic post-processing algorithm, but this could be left for future work.**
As mentioned in reply to Q1 in the global responses and our response to W2, the inclusion of current heuristic post-processing is due to the multi-task nature of overflow-avoiding global routing. We hope to explore more efficient end-to-end approaches in the future to replace the existing heuristic post-processing schemes.
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Reference:
[1] The policy-gradient placement and generative routing neural networks for chip design, NIPS 2022.
[2] HubRouter: Learning Global Routing via Hub Generation and Pin-hub Connection, NIPS 2023.
[3] Late breaking results: A neural network that routes ics, DAC 2020.
---
Rebuttal Comment 1.1:
Title: Response to Author Rebuttal
Comment: Thank you for your detailed response.
> Q1 & Q2: Distance Calculation
>
I have a clear understanding of the distance calculation in line 222. I recommend that the authors provide a clearer explanation of the distance calculation in final version: 1) we calculate the distance on the weighted graph NAG based on XXX algorithm, and 2) edge weight is determined according to Eq. 8.
> Q3: Effectiveness of NAG-based RST Construction
>
In Table 4's ablation study, HubRouter + NAG also demonstrates effective overflow reduction and comparable wirelength. I would like to inquire about the dataset on which HR-GAN (w/o Mask) + NAG was trained. Is this dataset aware of the overflow reduction?
> Comparison of CUGR + NeuralSteiner and Original CUGR in Rebuttal PDF
>
CUGR + NeuralSteiner showcases a reduction of 5% and 20% in short and space respectively, highlighting NeuralSteiner's efficacy in overflow reduction. It would be beneficial to include this comparison in the main paper.
---
Reply to Comment 1.1.1:
Comment: Thank you for your valuable comments and kind words about our work.
> **Q1 & Q2: Edge weight and distance calculation**
In the final version of the paper, we will ensure a clearer explanation of the edge weight and distance is provided:
1. We will include more detailed code regarding the distance calculation process in Algorithm 4 of Appendix B.5 in the supplementary material and provide a thorough explanation in the main paper.
2. We will explicitly state that the edge weight is determined according to Equation 8.
> **Q3: Effectiveness of NAG-based RST Construction**
Thank you for pointing out the effectiveness of HubRouter + NAG in Table 4's ablation study. Regarding your inquiry about the dataset on which HR-GAN (w/o Mask) + NAG was trained: yes, this dataset is the same as that we use in the training of NeuralSteiner, which is aware of overflow reduction and generated by expert routing tool CUGR. We will clarify this in the revised paper.
> **Comparison of CUGR + NeuralSteiner and original CUGR**
Thank you for the suggestion. We will incorporate this comparison of CUGR + NeuralSteiner with the original CUGR, as showcased in the Rebuttal PDF, into the main paper to underscore the effectiveness of our approach.
Thank you once again for your insightful comments and suggestions. We will address each point in the final version of our paper to improve clarity and comprehensiveness. | Summary: This paper tackles the challenging task of global routing in VLSI design, with the aim of addressing the overflow limitation inherent in current learning-based methodologies. The authors introduce NeuralSteiner, a novel approach that builds upon a previous method known as HubRouter. Like HubRouter, NeuralSteiner divides the global routing process into two stages: the first predicts the locations of probable candidate points, and the second connects these points and pins with a focus on minimizing overflow. In the first stage, NeuralSteiner introduces a new concept – the Candidate point, and generates an expert routing dataset using CUGR and ISPD07. This dataset is then used to train a neural network for the point prediction task. In the second stage, NeuralSteiner employs a straightforward and heuristic construction method that avoids overflow while ensuring connectivity.
The paper is well-structured, with clear motivations and easy-to-follow content. NeuralSteiner effectively addresses the significant overflow issue in learning-based global routing, making it applicable. Experimental results show that NeuralSteiner significantly reduces overflow, while maintaining comparable wirelength.
Strengths: + NeuralSteiner is empirically proven to be effective in reducing overflow in the global routing task. Equipped with the CNN structure, NeuralSteiner is capable of recognizing the candidate points, and the RST construction further reduces overflow significantly.
+ A parallel technique is proposed to route the non-conflicting nets simultaneously, which can relieve the time-consuming plight.
+ The experiments show a significant improvement in overflow. In particular, for the case of newblue01, the wirelength and overflow both surpass most traditional global routing approaches.
Weaknesses: - The paper lacks an analysis of time complexity in the construction stage, leaving uncertainty about the scalability of the second stage.
- In the last paragraph of Section 3, a simple post-processing algorithm is proposed to shorten the wirelength, but it lacks detail and also misses the analysis of time complexity.
- I note that the performance of wirelength and overflow in Table 2 is pretty good for the case ‘newblue3’. This could be potentially confusing as ‘newblue03’ is part of the training dataset.
- Typo: Line 282, enven arger.
- Suggestion: figure 2 could be moved to the next page to better correspond with Section 3.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Given that the training set is within HPWL <= 128 or the scale of 64x64, how are NeuralSteiner and HubRouter applied to ISPD07 cases with much larger scales?
- Besides the case newblue03 which is already included in the training dataset, I note that the performance of wirelength and overflow in Table 2 is also pretty good for the case ‘newblue1’, outperforming most traditional approaches. However, the overflow for the case ‘ibm01’ is 2033 in Table 3, which is much higher than traditional methods. What are the inherent reasons for this? For instance, the overflow for ‘ibm01’ and ‘newblue1’ are 0 and 400 respectively, while the corresponding results for NeuralSteiner are 2033 and 5.
- Based on the above question, I am curious about what the WL and overflow are for newblue01 and newblue03 when using CUGR, the approach used to generate the expert routing dataset.
- Is the RST construction conducted by Python or C++?
- Will the code be made available upon acceptance of the paper?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and valuable feedback. Please note our global comment with additional experimental results. Below we will address specific questions.
> **W1&W2: Analysis of time complexity in the construction stage.**
We understand the reviewer's concerns regarding NeuralSteiner's optimization of wirelength and algorithm complexity. In Q3 of the global responses, we analyze the time complexity of the RST construction algorithm. The time complexity for constructing an overflow-avoiding RST is $O((N_{pin})^4 \log N_{pin})$, and we further reduce it to $O((N_{pin})^3 \log N_{pin})$ using an acceleration that limits the component nodes considered.
Additionally, according to Table 3 in the rebuttal PDF, our NeuralSteiner method does not add an excessive number of extra candidate points to the nets. This means that the number of nodes in the NAG remains small for the majority of nets, which have few pins. Therefore, the method is very efficient for the vast majority of nets; for nets with more pins, heuristic methods such as subtree division can be used to reduce the computation time.
> **W3: The performance of wirelength and overflow in Table 2 is pretty good for the case 'newblue3'. This could be potentially confusing as 'newblue03' is part of the training dataset.**
Thank you for pointing this out. We also realize that the use of names in ISPD07 datasets has caused some confusion. In fact, our datasets for testing and training do not overlap.
Our experimental setup follows the work of Hubrouter, which combines the ISPD07 and ISPD08 datasets into a single dataset and re-divides the training and test sets. Specifically, the 'newblue3_3d' (with 6 routing layers) nets from ISPD08 are routed using expert routers and included in the training set, while the 'newblue3_2d' (with 2 routing layers) nets from ISPD07 are in the test set. The 'newblue3' presented in Table 2 is the 'newblue3_2d' from ISPD07. We realize this was not clearly stated, and we appreciate the reviewer pointing it out. We will clarify this in the revised version by correcting it to 'newblue3_2d' from the ISPD07 dataset.
> **W4: Typo.**
Thanks again for your meticulous review, and we will correct the typos in the revised version.
> **W5: Figure 2 could be moved to the next page to better correspond with Section 3.**
Thank you very much for your suggestion. We will move Figure 2 to the relevant section in the revised version to improve readability.
> **Q1: How are NeuralSteiner and HubRouter applied to ISPD07 cases with much larger scales?**
Both our NeuralSteiner method and the networks used in Hubrouter employ a CNN-based backbone without including structures like fully connected layers that require fixed input sizes. Therefore, the trained network weights can accept inputs of any spatial shape. However, there still exist generalization issues. To mitigate the generalization problems brought by large-scale nets (such as non-locality relations between different pins), we introduce the RCCA mechanism in Section 3.3 (lines 166-175) and App. B.1.
> **Q2: Different performance on ibm01 and newblue01.**
Thank you very much for your detailed reading and questions. We have clarified the source of the newblue3_2d test set in response to W3.
Regarding the comparison between NeuralSteiner and Boxrouter on ibm01 and newblue01_2d: it is evident from Table 6 in App. D.1 of Hubrouter and Table S1 in App. A of our paper that ibm01 has fewer routing resources and more pins per net on average, while being much smaller than newblue01_2d. This poses a challenge for the NeuralSteiner method. However, Boxrouter's Robust Negotiation-Based A* Search allows nets to bypass resource-constrained areas over a larger range, which is reflected in Boxrouter's longer wirelength (62659).
In the case of newblue01_2d, due to the ISPD07 competition setup, Boxrouter's wirelength and congestion are also influenced by vias (a cross-layer connection structure), making a direct comparison with NeuralSteiner impractical. This is why we do not include it in Table 2.
> **Q3: Results of CUGR.**
The ISPD07 data format does not match the input format required by CUGR. We input the net information obtained through FLUTE + edge shifting and the corresponding resource distribution into CUGR, invoking its rip-up and reroute algorithm to construct congestion-free nets. The wirelengths for newblue01_2d and newblue03_2d are 2,462,361 and 7,655,784, respectively, with congestion values of 0 and 29,683.
> **Q4: Is the RST construction conducted by Python or C++?**
Our overflow-avoided RST construction algorithm is primarily implemented in C++.
> **Q5: Will the code be made available upon acceptance of the paper?**
Thank you very much for your interest in our method. Yes, our code will be open-sourced.
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
---
Rebuttal Comment 1.1:
Comment: Thanks for your response. I have read the rebuttal and other reviews. Currently I have no further question and keep the current positive rating. Wish you good luck.
---
Reply to Comment 1.1.1:
Comment: Thank you again for reviewing our manuscript. Your constructive comments and insights are greatly helpful in improving our work. | Summary: The paper presents NeuralSteiner, a method to improve chip design routing by minimizing overflow. By using a neural network to predict Steiner points and a graph-based algorithm for point selection, it ensures connectivity and reduces congestion. NeuralSteiner outperforms existing methods, achieving up to a 99.8% reduction in overflow with only a 1.8% increase in wirelength, and scales effectively to large nets without needing modifications for new designs.
Strengths: 1. This paper is well-written and the figures are very easy to understand.
2. The experiments are very thorough, comparing the routing results of different routers and verifying the effectiveness of the method in optimizing overflow.
3. The Overflow-avoiding RST Construction proposed by the author is novel and effective. Meanwhile, NeuralSteiner is the first learning-based approach capable of optimizing both wirelength and overflow.
Weaknesses: 1. Typo: line 282: “enven arger”->”even larger”
2. It is recommended that the paper should be extended to a full 9 pages.
3. According to the results, this method still faces challenges when optimizing the wire length and running time.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Table 2, do you compare the Boxrouter’s routing results?
2. In Table S2, why is the correctness rate 100%? Is it due to the small problem size?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. Please note our global responses with additional experimental results. Below we will address specific questions.
> **W1: Typo.**
Thanks again for your meticulous review, and we will correct the typo in the revised version.
> **W2: The paper should be extended to a full 9 pages.**
Thank you for your comments and for affirming our method. We will expand the article based on the constructive comments in subsequent revisions.
> **W3: Challenges when optimizing the wire length and running time.**
We understand the reviewer's concerns about NeuralSteiner's optimization of wirelength and its scalability to large-scale nets.
First, wirelength is also one of our important metrics. In the construction of NAG, wirelength is considered within the weight, which allows us to achieve minimal wirelength loss while significantly optimizing congestion.
We analyze the complexity of the overflow-avoiding RST construction algorithm in Q3 of the global responses and use acceleration methods to further reduce it to $O((N_{pin})^3 \log N_{pin})$.
Additionally, according to our latest statistics in Table 3 of the rebuttal PDF, NeuralSteiner does not add an excessive number of extra candidate points when constructing the NAG, meaning that the number of nodes remains at the scale of $O(N_{pin})$. Therefore, the method is very efficient for the vast majority of nets, which have few pins. For nets with more pins, heuristic methods such as subtree division can be used to reduce the computation time.
> **Q1: In Table 2, do you compare the Boxrouter's routing results?**
We understand the reviewer's concerns regarding the comparison of NeuralSteiner with traditional routers like Boxrouter. Boxrouter is not included in the ISPD07 results in Table 2 for the following reasons:
1. Unlike ISPD98, the ISPD07 global routing competition considers not only the consumption of routing resources by wire segments but also the consumption by cross-layer vias (structures connecting segments in different layers, which consume routing resources). Boxrouter's reported wirelength results also include part of the via usage, making them numerically incomparable with the results of neural network-based algorithms.
2. NeuralSteiner inherits the abstraction of the routing environment used in Hubrouter to consider the impact of routing segments on resources. However, this abstraction does not effectively model the effect of vias, making via optimization more challenging, which is a direction for future exploration and improvement.
> **Q2: Correctness rate is 100% in Table S2. Is it due to the small problem size?**
Thank you very much for your meticulous reading. The correctness of the net reflects the proportion of successfully connected nets in the test set, considering only connectivity. This metric comes from Hubrouter. Our method achieves 100% net correctness because our NAG-based RST construction algorithm ensures net connectivity, irrespective of the net size. In Table S2, we mainly aim to demonstrate that our method ensures connectivity like Hubrouter. Other methods, such as PRNet, fail to ensure connectivity for smaller nets and are therefore not included in Table S2 for comparison.
Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
---
Rebuttal Comment 1.1:
Comment: Thank you very much for your detailed response. I hope the authors can release the code soon as promised.
---
Reply to Comment 1.1.1:
Comment: Thank you again for reviewing our manuscript and for your interest in our work. As mentioned, we will make the source code publicly available upon acceptance of the paper. We appreciate your anticipation of its release. | Summary: This paper presents a learning-driven approach for overflow avoiding Steiner tree construction. The authors propose a two-stage framework that initially predicts the locations of potential Steiner points, followed by a post-processing algorithm that constructs the Steiner tree based on these predicted points. The effectiveness of the proposed method is then evaluated through a series of experiments.
Strengths: 1. The paper is well organized.
2. Detailed background introduction is included.
Weaknesses: 1. The idea lacks novelty.
2. Many important experimental settings are not disclosed.
3. The experimental results are not convincing.
4. The writing can be difficult to follow and occasionally confusing.
Technical Quality: 1
Clarity: 2
Questions for Authors: 1. The methodology presented in this work appears to align closely with prior works, such as Hubrouter, and employs trivial techniques. Could you please elaborate on the unique technical contributions of your study?
2. The primary aim of this paper is to optimize overflow during the construction of the Steiner tree, a process that occurs prior to routing. However, there tends to be a significant difference between pre-routing and post-routing overflow. Additionally, most routing tools will also take overflow reduction into account. Hence, optimizing pre-routing overflow may not yield significant benefits. Could you please shed some light on this?
3. In Section 3.4, could you clarify how the congestion value O(x,y) in Eq. (8) is calculated? From which stage are the O(x,y) values extracted? How are points that do not satisfy the two conditions handled? How are the values of the weights w_d and w_o in Eq. (8) determined?
4. The paper lacks clarity regarding the evaluation metrics used. Could you specify the stage from which the overflow values are extracted, for instance, is it immediately after RST construction or after global placement? Could you also share the tool used for global routing during testing and whether it considers congestion?
5. It would be beneficial if the authors could provide a comprehensive description of the test flow, including the source of the inputs, how the proposed framework integrates with the routing tool, and the stage from which the evaluation metrics are extracted.
6. The results in Table 3 seem weird. The data suggests that the overflow loss reduces both overflow and wirelength, which is unusual, as these two metrics typically involve a trade-off. If the overflow loss is designed to reduce overflow, it should result in an increase in wirelength. Could you provide some clarification on this?
7. I'm interested to know the number of Steiner points inserted by the proposed method. This is crucial as simple methods can also reduce overflow by adding more Steiner points to circumvent congested regions. However, an increase in Steiner points could significantly limit the optimization space of the subsequent routing process. Could you please include the number of Steiner points in the results tables?
8. To enhance reader comprehension, could you provide an example of an actual solution?
Confidence: 4
Soundness: 1
Presentation: 2
Contribution: 2
Limitations: Listed in questions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and thoughtful comments. Please note our global comment with additional experimental results. Below we will address specific questions.
> **W1/Q1: Novelty and contribution of the method**
Thank you for your valuable comments. Here, we further elaborate on the novelty and contribution of our method in comparison to the previous HubRouter approach, highlighting the differences and innovations in motivation, workflow, and neural network architecture:
1. The motivation of NeuralSteiner is to construct overflow-avoiding RSTs based on the deep learning method, which is crucial for practical global routing applications, rather than merely ensuring connectivity and optimizing wirelength.
2. We utilize a ResNet-based backbone network to learn the encoding of the overflow and pin map (lines 160-164). Additionally, we introduce the RCCA mechanism to address long-range association issues in large-scale nets (lines 165-175) and the cost loss to encourage overflow-avoiding candidate points generation.
3. Compared with other state-of-the-art learning-based methods, NeuralSteiner achieves a significant reduction in congestion with only a minimal sacrifice in the wirelength quality of RSTs, fulfilling the motivation of our method.
4. Further experiments on ISPD18/19 indicate that NeuralSteiner, when combined with traditional routing tools, can effectively reduce the number of violations after detailed routing.
> **W2/Q4/Q5: Experimental settings, evaluation metrics and test flow.**
We recognize the importance of clarifying these issues. Please refer to Q2 in the global responses for a detailed discussion of the experimental setup, the metric extraction method, and the different test flows that integrate the abstract resource model or routing tools.
> **W3: The convincingness of experimental results.**
Since our environment resource modeling and dataset division are the same as those in previous works, we can make a fair comparison between the NeuralSteiner and other neural network-based RST generation methods on the same routing test set.
Tables 1, 2, and Figure 3 in the main paper compare the RST generation results of different methods on ISPD98 and ISPD07. NeuralSteiner achieves a significant reduction in congestion while maintaining wirelength quality, and it also demonstrates higher solving efficiency. Tables 3 and 4 explore the roles of our different design modules from various perspectives.
Experimental results of integrating our method with the routing tool CUGR for global routing are shown in Table 2 in the rebuttal PDF, which indicates the potential of the NeuralSteiner method to reduce overflow when combined with routing tools.
> **W4: Writing clarity.**
Thanks again for your meticulous review, and we will correct the confusing parts of the writing in the revised version.
> **Q2: The necessity of optimizing pre-routing overflow.**
As you mentioned, the relationship between pre-routing congestion and post-routing congestion is quite complex. However, we still believe that optimizing pre-routing congestion is of significant importance:
1. Many global routers use rip-up and reroute to eliminate the overflow caused by initially generated RSTs without considering congestion. Introducing congestion mitigation (such as NeuralSteiner) at the RST generation stage can provide a resource-friendly initial solution for subsequent stages.
2. Table 2 in the rebuttal PDF shows that by integrating NeuralSteiner into CUGR, we achieve a 4.4% and 19.1% reduction in shorts and spaces, respectively, in the detailed routing results, with minimal losses in wirelength and vias. This demonstrates that our pre-routing overflow mitigation method is beneficial for reducing overflow in the post-routing results.
3. The latest routing tool DGR [1], introduces concurrent optimization techniques in the RST generation field, optimizing wirelength, vias, and congestion simultaneously. This also leads to a significant reduction in DRV in the detailed routing results, further supporting our opinion.
> **Q3: Questions about Eq.8, overflow extraction, and edge conditions.**
Please refer to the Q4 in global responses for the detailed clarification of $O(x,y)$ and the parameters $w_d$ and $w_o$.
As shown in Figure 1(a)-(d) on page 2, the overflow map of the net with black pins is extracted in real time from the environmental model.
There will be no edges between points that do not meet both conditions. If this results in a disconnected NAG, we will add turning points and edges between the disconnected components according to the 2 conditions.
> **Q6: The role of cost loss.**
Thank you very much for pointing out this informative detail. In our loss setting, we combine focal, dice, and cost loss. The first two primarily learn candidate points from expert routing data. The inclusion of cost loss introduces candidate points not present in the expert data, thereby altering the topology of the final RST generated.
Additionally, our post-processing method constructs RSTs using a greedy algorithm, which does not guarantee the shortest wirelength. Including cost loss provides a larger solution space for RST construction, enabling optimization of both wirelength and congestion.
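For concreteness, the loss combination described above might look like the following sketch. The exact functional forms of the focal, dice, and cost terms, the weights, and all names here are our assumptions for illustration, not the paper's definitions; in particular, we assume the cost term penalizes predicting candidate points on congested cells.

```python
import math

def focal_loss(pred, target, gamma=2.0):
    """Focal loss over per-cell candidate-point probabilities (assumed form)."""
    eps = 1e-7
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1 - eps)
        pt = p if t == 1 else 1 - p  # probability of the true class
        total += -((1 - pt) ** gamma) * math.log(pt)
    return total / len(pred)

def dice_loss(pred, target):
    """Soft dice loss: overlap between predicted and expert point maps."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1 - 2 * inter / (sum(pred) + sum(target) + 1e-7)

def cost_loss(pred, overflow):
    """Assumed cost term: discourage candidate points on high-overflow cells."""
    return sum(p * o for p, o in zip(pred, overflow)) / len(pred)

def combined_loss(pred, target, overflow, w_focal=1.0, w_dice=1.0, w_cost=0.1):
    return (w_focal * focal_loss(pred, target)
            + w_dice * dice_loss(pred, target)
            + w_cost * cost_loss(pred, overflow))
```

Because the cost term does not depend on the expert labels, it can reward candidate points absent from the expert data, which matches the rebuttal's point that it enlarges the solution space for RST construction.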
> **Q7: Include the number of Steiner points in the results tables.**
Thank you very much for your important suggestion. In Table 3 of the rebuttal PDF, we have summarized the predicted candidate points for ISPD07. The average number of candidate points added by NeuralSteiner is not significantly more than the average number of pins, which means that for the vast majority of nets, the number of nodes in the NAG will remain at a small scale.
> **Q8: Could you provide an example of an actual solution:**
Thank you for your suggestion. We have added a comparison of the actual solutions between NeuralSteiner and Hubrouter in the rebuttal PDF.
Reference:
[1] DGR: Differentiable Global Route, DAC 2024
---
Rebuttal Comment 1.1:
Title: Reply
Comment: I appreciate the authors' effort in answering my questions and addressing my concerns. Therefore, I would like to change the score from 4 to 5.
---
Reply to Comment 1.1.1:
Comment: Thank you very much for your positive feedback and for taking the time to review our responses. We greatly appreciate your willingness to reconsider the score and are glad that our explanations addressed your concerns. Your constructive comments and insights are greatly helpful in improving our work. | Rebuttal 1:
Rebuttal: Dear Area Chairs and Reviewers,
We would like to thank all reviewers for providing constructive feedback that helps us improve the paper. We are encouraged that the reviewers acknowledge the novelty (4X1w, 1MAW) and effectiveness (4X1w, 1eER, 1MAW) of our approach and the thoroughness of our experiments (4X1w, 1eER), and that they are interested in the comparison and integration with traditional routers (JcjD, 1eER). We particularly appreciate the recognition of our significant improvements in overflow over state-of-the-art learning-based methods (4X1w, 1eER, 1MAW), the affirmation of our method's generalization capabilities (1MAW), and the acknowledgment of our method's contributions to the field (4X1w, 1eER).
Beyond the positive feedback, some concerns from the reviewers are common, so we give the following global responses:
**Please see the attached PDF for a one-page PDF with a summary of added experimental results.**
> **Q1: Why not adopt an end-to-end model?**
We understand the reviewers' concerns regarding NeuralSteiner not being an end-to-end approach. This is determined by the complexity of the global routing problem, which requires learning-based methods to 1) learn a representation of the routing resources, and 2) generate routing results that ensure connectivity of all pins while minimizing wirelength and overflow. NeuralSteiner achieves the first goal by learning the expert candidate Steiner points and corner points. However, processing these candidate points as discrete values for the subsequent RST construction makes differentiable learning challenging.
Our NeuralSteiner method significantly outperforms other end-to-end methods such as PRNet [1] and two-stage methods like HubRouter [2] in terms of total time cost and overflow. In the future, we will explore using continuous probabilistic candidate point maps and investigate end-to-end learning with neural networks for generating overflow-avoiding RSTs.
> **Q2: Experimental setup and comparison with routing tools.**
First, our environment modeling of routing resources is consistent with HubRouter [2]. Metal layers are projected onto a 2-dimensional grid graph with a horizontal and a vertical layer. The NeuralSteiner interacts with this resource model, extracting overflow map in real-time and updating the resource of corresponding edges in the model based on the routing results (Figure 2).
Second, NeuralSteiner extracts the metrics by calculating the total wirelength and overflow in the environmental model after routing all nets. This metric extraction method is also the same as that in previous work [2].
Third, we do not integrate our method with routing tools in the main experiments of the paper because they mainly evaluate the metrics above of the RSTs generated by different learning-based methods.
Finally, the comparative experimental results of integrating our method with the routing tool CUGR for global routing are shown in Table 2 in the rebuttal PDF. In these experiments, the RSTs generated by NeuralSteiner replace the RSTs generated by FLUTE + edge shifting in CUGR, and subsequent detailed routing is performed using DRCU. The results on larger ISPD18/19 benchmarks confirm the potential of the NeuralSteiner method to reduce overflow when applied to routing tools.
> **Q3: Time complexity and scalability of the RST construction algorithm.**
The distance between two connected components is the minimum, over pairs of points with one from each component, of the shortest-path length (the sum of edge weights along the path) between them. Here, we use Dijkstra's algorithm to search for the shortest path between every pair of points. Given that the NAG is a sparse graph, with the number of edges and nodes being $O(N_{pin})$ according to Table 3 in the rebuttal PDF, Dijkstra's algorithm with a binary heap yields a time complexity of $O(N_{pin} \log N_{pin})$. The total time complexity to compute the shortest paths from all points in a connected component is therefore $O((N_{pin})^2 \log N_{pin})$. In each iteration, we connect the two components with the shortest distance and update the distance matrix, ending with a single connected component containing all pins. So, in the worst case, the total time complexity of the RST construction algorithm is $O((N_{pin})^4 \log N_{pin})$.
Table 3 in the rebuttal PDF shows that the pin count for most nets in the dataset is no more than 10, indicating that the algorithm will be efficient in practice.
Additionally, the algorithm can be accelerated by simple heuristic rules. When calculating the distance from one connected component to another, we compute and filter out a fixed number of node pairs with the shortest Euclidean distance, performing Dijkstra's algorithm only on these limited node pairs. This reduces the complexity of the RST construction algorithm to $O((N_{pin})^3 \log N_{pin})$. As shown in Table 1 in the rebuttal PDF, the accelerated algorithm significantly improves solving efficiency while achieving similar wirelength and overflow compared to the original version.
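To make the component-merging procedure described above concrete, here is a minimal Python sketch. All names are ours, not the paper's, and this toy version is simplified: real NAGs are larger, and the paper's algorithm also absorbs intermediate path nodes into the merged component, which this sketch omits.

```python
import heapq

def dijkstra(adj, src):
    """Shortest-path distances from src over a weighted adjacency dict."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

def greedy_rst_cost(adj, pins):
    """Greedily merge pin components along the cheapest shortest path.

    Each pin starts as its own component; in every iteration the two
    components at minimum shortest-path distance are merged, until one
    component contains all pins. Returns the total merge cost.
    """
    components = [{p} for p in pins]
    total = 0.0
    while len(components) > 1:
        best = None  # (cost, i, j)
        for i in range(len(components)):
            for u in components[i]:
                dist = dijkstra(adj, u)
                for j in range(len(components)):
                    if i == j:
                        continue
                    for v in components[j]:
                        c = dist.get(v, float("inf"))
                        if best is None or c < best[0]:
                            best = (c, i, j)
        cost, i, j = best
        total += cost
        merged = components[i] | components[j]
        components = [c for k, c in enumerate(components) if k not in (i, j)]
        components.append(merged)
    return total
```

The quartic worst case discussed in the rebuttal corresponds to the nested loops here: each merge iteration runs Dijkstra from every node of every component.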
> **Q4: Parameters used in NAG's edge weight.**
We realize that the use of $O(x,y)$ in Eq. 8 may be misleading. It is identical in meaning to $o_{ij}$ in Eq. 6 in Section 3.3, representing the value at the corresponding position on the overflow map. Eq. 8 defines the weights used for edges in the NAG. These weights account for both the wirelength and the congestion that the edge traverses, with the parameters $w_d$ and $w_o$ reflecting the trade-off between them. In all experiments, we use $w_d = 1$ and $w_o = 2$. We will explicitly state this in Section 3.4 in the final version.
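As one concrete reading of this weighting scheme, the sketch below computes such an edge weight. The names are ours, and we assume the wirelength term counts traversed grid cells while the congestion term sums $O(x,y)$ over those cells, which the rebuttal does not fully specify; the defaults $w_d = 1$ and $w_o = 2$ are the values stated above.

```python
def edge_weight(segment_cells, overflow_map, w_d=1.0, w_o=2.0):
    """Weight of a candidate NAG edge: wirelength vs. traversed congestion.

    segment_cells: grid cells (x, y) the candidate edge passes through.
    overflow_map:  mapping (x, y) -> overflow value O(x, y); missing cells
                   are treated as congestion-free.
    """
    wirelength = len(segment_cells)
    congestion = sum(overflow_map.get(cell, 0.0) for cell in segment_cells)
    return w_d * wirelength + w_o * congestion
```

With these defaults, a two-cell edge crossing one cell of overflow 3 costs 1*2 + 2*3 = 8, so congested cells quickly dominate the weight and shortest paths avoid them.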
In the following, we provide detailed answers. We are glad to give further responses for informed evaluation.
Reference:
[1] The policy-gradient placement and generative routing neural networks for chip design, NIPS 2022
[2] HubRouter: Learning Global Routing via Hub Generation and Pin-hub Connection, NIPS 2023.
Pdf: /pdf/d6f967a35e28f20b2a5634c9e3f3336c94fc4d06.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Weight decay induces low-rank attention layers | Accept (poster) | Summary: The paper studies the effect of weight decay on losses where the trained parameters include two matrices that are multiplied together, and specifically the bias of such losses towards low rank. It is shown theoretically that under certain conditions, local minima of the L_2-regularized loss coincide with minima of the L_* loss, regularized by the nuclear norm, a convex surrogate for low rank. Also, it is shown that under gradient flow, the distance between the minima of the L_2 and L_* losses converges exponentially fast to 0.
Several experiments are given, mostly on transformers, that verify these findings empirically.
Strengths: I think that overall this is a nice paper that should be accepted. The results, although pretty simple mathematically, show a nice observation on the bias of certain parameterizations towards low rank. The experiments seem thorough and cover both toy examples with smaller models, and larger-scale transformers.
Weaknesses: My biggest issue with the paper is its framing. The theoretical results seem to apply more to compressed sensing or matrix completion, where the loss is L(A^\top B) for some differentiable L. However, the paper (and also the title) seems to discuss mostly transformers. It might be popular to write papers about transformers these days, but this specific paper looks more like a paper about implicit bias in general, with experiments on transformers.
Second, the discussion about linear models focuses on the underspecified regime, since the matrix S is invertible (i.e., the input dimension is larger than the number of samples). This is not the standard regime for linear regression. To make this discussion more relevant to what happens in practice, I would suggest discussing kernel regimes (e.g., the NTK), where it is standard for the kernel's dimension to be larger than the number of samples.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What happens if L in Eq. (2) is non-differentiable? In practice, L is non-differentiable for transformers since it incorporates an MLP with a ReLU activation.
- In Figure 3 - what is the performance of each setting? It is not clear, for the more realistic values of weight decay (i.e., those with the best performance), what the rank is.
- Line 212 - How do the authors define an unstable equilibrium?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations of their results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and constructive criticism. Below, we address your concerns point by point.
> The theoretical results seem to apply more for compressed sensing or matrix completion, where the loss is $L(A^\top B)$ for some differentiable $L$. However, the paper (and also the title) seem to discuss mostly transformers.
This is a valid point. Yet, we stress that the motivation of our research is to understand the inductive bias of weight decay in attention layers. Weight decay is virtually always used by default, and attention layers are almost ubiquitous in modern architectures. Therefore, we believe shedding light on this inductive bias is of great relevance to the community, and justifies the current framing despite our results ultimately being useful in other fields.
> The discussion about linear models focuses on the underspecified regime since the matrix $S$ is invertible. This is not the standard regime for linear regressions.
We may have misunderstood the reviewer here, but we would like to clarify that our setting considers the case where $D$ (number of data points) is larger than $d$ (dimension of the input), since we assume the matrix $XX^\top$ is the identity. As you point out, this is the standard regime for linear regression. Furthermore, even in the case where $d < D$, as long as $XX^\top$ and $YX^\top$ are still co-diagonalizable (a convenient assumption also made by Saxe et al. [2013]), we believe the derivation to be very similar, and that the weight regularization would simply set the weights corresponding to the degenerate dimensions to zero.
> What happens if $L$ in Eq. (2) is non-differentiable? In practice, $L$ is non-differentiable for transformers since it incorporates an MLP with a ReLU activation.
This is indeed an important question. We point out that modern transformers, and in particular LLMs, use the GeLU or SwiGLU activation, making them differentiable.
However, when the model is merely almost-everywhere differentiable, we do not know whether the correspondence of the local minima between $L_\star$ and $L_{L2}$ still holds, and in particular whether Lemma 3.2 does. Note that if the Lemma does hold, then, given that our proof of Theorem 3.3 does not use differentiability, the correspondence result would hold as well.
As for our result on the optimization dynamics (Theorem 3.4), our theory uses the gradient flow limit to study the problem. The flow is, of course, undefined at non-differentiable points. However, the Brownian noise accounting for the minibatch noise would allow escape from any non-differentiable point, so one could still define it soundly (e.g., setting the gradient to zero or to any subgradient at those points). Thus, almost everywhere differentiability is enough for the dynamics to be well described by our approximation.
> In Figure 3 - what is the performance of each setting? It is not clear for the more realistic values of weight decay (i.e., those with the best performance) what is the rank.
The performance of the LLM experiment for various values of weight decay can be found in Table 1. For ViT, interestingly, we found that removing the weight decay from the attention layers did not result in a significant gain in performance. In fact, removing weight decay even slightly degraded performance, indicating that for visual tasks, low rank doesn't seem to hurt as much as it does for language tasks.
This highlights a subtlety in our message that we believe was not clear in the original manuscript. The takeaway of our work should not be that practitioners should always turn off weight decay from attention layers. Our work aims to uncover the confounding, low-rank-inducing inductive bias of weight decay coupled with attention layers and demonstrate the relevance of this inductive bias in real applications, including popular foundation models. The benefit of this bias is problem- and architecture-dependent. By taking an off-the-shelf model and hyperparameters and showing that performance can be improved by turning off weight decay in its attention layers, we aim to highlight the need for practitioners to take this inductive bias seriously.
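Concretely, turning weight decay off for attention layers while keeping it elsewhere is usually implemented via optimizer parameter groups. Below is a minimal, framework-agnostic sketch of the name-based split; the `"attn"` substring filter is an assumption on our part, since the right filter depends on how the model names its attention parameters:

```python
def split_weight_decay_groups(named_params, no_decay_substrings=("attn",), wd=0.1):
    """Partition parameters into a decayed and a non-decayed group by name.

    named_params: iterable of (name, parameter) pairs,
                  e.g. model.named_parameters() in PyTorch.
    """
    decay, no_decay = [], []
    for name, param in named_params:
        if any(s in name for s in no_decay_substrings):
            no_decay.append(param)   # attention matrices: no weight decay
        else:
            decay.append(param)      # everything else: keep weight decay
    return [
        {"params": decay, "weight_decay": wd},
        {"params": no_decay, "weight_decay": 0.0},
    ]

# The resulting groups can be passed directly to an optimizer, e.g.:
# optimizer = torch.optim.AdamW(
#     split_weight_decay_groups(model.named_parameters()), lr=3e-4)
```

This is the same mechanism commonly used to exclude biases and normalization parameters from weight decay, applied here to attention matrices instead.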
We will include in the final manuscript the performance on ViT, and add the above clarification point.
> Line 212 - How do the authors define an unstable equilibrium?
In this setting, we mean by "unstable equilibrium" a stationary point that is not a local minimum, i.e., the gradient is null but the Hessian is not positive semi-definite.
----
We thank the reviewer again for their review and appreciation of our work. We remain available if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response.
My point about S being invertible may have been due to confusion on my side, and the authors clarified it.
All the rest of my questions have been answered and I keep my score. I think this paper should be accepted. | Summary: This paper studies the effect of weight decay on the product of matrices, which appear in the attention layers. This paper shows that weight decay will have an effect of reducing the rank hence hurting the generalization. The theoretical results are verified in extensive experiments.
Strengths: 1. Understanding the attention layers is an important question hence is of great interest.
2. The experiments are extensive and well support the theoretical results.
Weaknesses: The novelty of this paper is quite weak:
- Some theoretical results have already been shown by Wang and Jacot [2023]. Specifically, as mentioned by the authors, Theorem 3.1 in Wang and Jacot [2023] is a general version of Theorem 3.3.
- Theorem 3.4 is for gradient flow which seems far from the algorithms used in practice.
- The empirical observation that low rank weights hurt the generalization also has been shown in [Sharma et al., 2023].
Please correct me if I miss any key contributions in this paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. About Theorem 3.4, how to see that "... long before stationary points are found"?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: The contribution of this paper, as outlined in the weaknesses section, does not seem substantial enough.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their feedback. We address your concerns point by point below:
> Some theoretical results have already been shown
We stress that while Wang and Jacot [2023] show a similar result, theirs does not apply to transformers, where $AB^\top$ is not full rank due to an architectural bottleneck. More importantly, their result only describes what happens once training has converged to an equilibrium point, a setting that is practically never achieved. Our theoretical contribution goes beyond this and considers what happens *during* optimization, which is theoretically far more complex and meaningful than studying behaviour at stationary points. Our results establish that rank regularization already happens early in training. This not only provides a theoretical explanation of past empirical observations (e.g., Khodak et al. [2022]), but is especially relevant for understanding the (online) training of large foundation models such as LLMs, where optimization is typically never brought to completion. Our empirical results, and in particular the analyses we performed on the pretrained foundation model weights, clearly demonstrate this relevance. We hope this clarifies the novelty and relevance of our work.
> Theorem 3.4 is for gradient flow which seems far from the algorithms used in practice.
We note that we provide in the appendix a similar result when considering gradient flow with noise, as well as with momentum and decoupled weight decay.
As for the continuous dynamics, we believe this to be a benign approximation. Indeed, for the non-stochastic version (Lemma B.1), the proof remains unchanged. One obtains an exponential decrease in $(1-\lambda\eta)^2$, where $\eta$ is the learning rate. Note that for $\eta \to 0$, we retrieve the $2\lambda\eta$ factor from the continuous case. For the stochastic version, one would get an exponential decrease of $A^\top A - B^\top B$ until it becomes of the order of $\sigma M$, where $\sigma^2$ is the variance modeling the stochasticity of SGD, and $M$ is an upper bound on $A$ and $B$. Things get more complicated if we want to model discrete time, stochasticity, and momentum. The discreteness of time adds additional interaction terms between the noise terms and the matrices $A$ and $B$ that need to be taken into account. The continuous SDE offers a nice approximation of the real dynamics, as $\eta$ is often chosen to be small, while still capturing all the intuition.
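As a concrete illustration of this decay, here is a toy numpy sketch (a synthetic quadratic loss of our own choosing, not the paper's experimental setup): discrete gradient descent with weight decay on $L(AB^\top) = \frac{1}{2}\|AB^\top - W^\star\|_F^2$ drives the balance gap $\|A^\top A - B^\top B\|_F$ toward zero.

```python
import numpy as np

rng = np.random.default_rng(1)
d, r = 6, 3
A = rng.standard_normal((d, r))
B = 0.3 * rng.standard_normal((d, r))   # deliberately unbalanced start
W_star = rng.standard_normal((d, d))
eta, lam, steps = 0.01, 0.5, 2000       # learning rate, weight decay, iterations

gap0 = np.linalg.norm(A.T @ A - B.T @ B)
for _ in range(steps):
    G = A @ B.T - W_star                # gradient of L at W = A B^T
    # Simultaneous gradient-descent step with L2 regularization on A and B
    A, B = A - eta * (G @ B + lam * A), B - eta * (G.T @ A + lam * B)

gap = np.linalg.norm(A.T @ A - B.T @ B)
print(gap / gap0)   # shrinks roughly like (1 - lam*eta)**(2*steps), up to transients
```

The loss-dependent terms largely cancel in the update of $A^\top A - B^\top B$, so the gap contracts by approximately $(1-\lambda\eta)^2$ per step regardless of how far the loss itself is from convergence.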
We agree however that the quality of these approximations to the practical training dynamics is nonetheless an important point to discuss, and will add a short paragraph in the final manuscript.
> The empirical observation that low rank weights hurt the generalization also has been shown in [Sharma et al., 2023].
We respectfully disagree with this assessment. Sharma et al. [2023] show that *after pretraining*, when surgically reducing the rank of the MLP weight matrices, the performance of various LLMs improves on *downstream tasks*. As part of their hyperparameter search, they found no setting in which rank reduction of (even subsets of) attention matrices improved performance, and show some evidence that it may even hurt on, e.g., the CounterFact dataset.
In contrast, to the best of our knowledge, we are the first to show that the low-rank regularization induced by weight decay on attention layers *during optimization* can hurt the perplexity of LLMs.
Furthermore, our empirical results demonstrate, by showing the equality of the entries of e.g. $W_KW_K^T$ and $W_QW_Q^T$ (Fig. 4 and Proposition 3.1), that the attention weights of popular foundation models, such as Llama 2 and ViTs, are being rank regularized through the mechanism we describe.
These are, to the best of our knowledge, very different insights from Sharma et al., 2023. We welcome the reviewer to clarify and point out results in Sharma et al., 2023 where the two aforementioned points were observed empirically.
> About Theorem 3.4, how to see that "... long before stationary points are found"?
This is a good point and deserves some additional clarification in the main text. What we theoretically show is that, under reasonable conditions, the timescale at which the rank regularization can be observed is independent of the rest of the optimization (with a characteristic timescale set by $\lambda$). This means that for long enough optimization, such as that of a foundation model, there is ample time for the co-optimization of the two regularizations to happen. Our analyses on pretrained model weights in Fig. 4 once again support that view.
----
We hope we could clarify our contribution to the reviewer and convince them that the paper deserves acceptance. Please let us know if you have further questions or concerns.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. I do not think my concerns are addressed. I will clarify them further below.
1. _...More importantly, their result only describes what happens when training is converged to an equilibrium point...._
Regarding Theorem 3.3, I believe a similar result has been shown in [Wang and Jacot, 2023]. Specifically, I believe Theorem 3.3 also describes what happens at the equilibrium point hence there is not much difference between your results and their results. Additionally, I do not think the $AB^T$ not being full rank is a big technical difficulty.
2. _...note that we provide in the appendix a similar result when considering gradient flow with noise..._
Gradient flow with noise, as well as continuous SDE is also far from practice. I am not sure how much insight we can obtain from analyzing these dynamics. I am not saying the insights shown in this paper are wrong. I just want to emphasize the results are not very strong.
3. _..to the best of our knowledge, we are the first to show that the low-rank regularization induced by weight decay on attention layers during optimization can hurt the perplexity of LLMs..._
This is what you said in line 322-324:
>these findings complement the recent observation that reducing the rank of language model MLP matrices post-training improves their reasoning performance, while doing the same for attention layer matrices mostly hurt it [Sharma et al., 2023].
I am confused. Didn't [Sharma et al., 2023] already show the results for attention layers?
4. _What we theoretically show is that under reasonable conditions, the timescale at which the rank regularization can be observed is independent from the rest of the optimization._
Note that Theorem 3.4 requires the norm of $A$ and $B$ to be __uniformly__ bounded during the whole training process. You should make it clear in the statement of Theorem 3.4. Furthermore, this is a very strong assumption hence making Theorem 3.4 quite weak. While it might be empirically true, it cannot be proved analytically.
5. _... long before stationary points are found_
To me this cannot be easily seen from Theorem 3.4. If it is straightforward, I would suggest adding a corollary and proving the result, otherwise, it is just an empirical observation and should be made clear.
Overall I think this is an interesting paper and has many insightful observations. However, the main focus of the paper is not very clear. This paper seems to make theoretical contributions as it presents the theory first and then uses empirical results to verify it. To me, the theoretical results are not significant enough for NeurIPS as I have outlined in the weakness and response. I would suggest shifting the focus to empirical contributions and then providing theoretical insights.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We will address your points below.
For our theoretical contributions: The reviewer finds it "not very strong" since e.g. gradient flow with noise is far from practice. We reiterate that we also show our result for stochastic gradient flow with momentum and decoupled weight decay, which is by no means trivial. This is not only an obviously good approximation of practical training dynamic when using the still widely used SGD with momentum and decoupled weight decay (as we have explained previously, continuous dynamic is a benign approximation), it is a theoretically tractable approximation that is as close as one can get to the popular AdamW optimizer dynamic. Besides, we proved a set of novel and general matrix inequalities in order to link the norm $||A^\top A-B^\top B||$ to the discrepancy between the L2 norm and nuclear norm - and its generalization to the product of arbitrary many matrices. We believe these inequalities are nontrivial, and can be of interest for a larger community. Together, our results go beyond and complement the description at equilibrium, and provide an understanding of the training dynamic of weight decay applied to attention layers, shedding light on empirical observations that were made in previous works.
As for our empirical contributions: we refer the reviewer to our rebuttal where we have clarified what the contributions of [Sharma et al., 2023] were. Given the clarification, if the reviewer disagrees that our empirical contributions are novel, as we requested, could they clarify and point out results in Sharma et al., 2023 where our empirical contributions were already made? If not, together with the clarification on our theoretical result, we kindly ask whether the reviewer still believes our contribution is "Poor", i.e. the lowest possible score.
> Note that Theorem 3.4 requires the norm of $A$ and $B$ to be uniformly bounded during the whole training process. You should make it clear in the statement of Theorem 3.4. Furthermore, this is a very strong assumption hence making Theorem 3.4 quite weak.
We are confused by this point. We clearly state the boundedness assumption in Theorem 3.4, i.e.
"(...) If $||A||, ||B||$ remain bounded during training, then (...)".
As those values are real values, the only meaning for boundedness is that there exists M such that those values remain smaller than M. Uniform boundedness would have made sense if we were considering a family of functions, which we do not. Also, the boundedness constants of $||A||$ and $||B||$ may differ, should the confusion come from this.
As for the necessity of the assumption itself: one can easily find sufficient conditions on the loss such that the boundedness assumption provably holds. We would happily elaborate if the reviewer is interested. However, it is also possible to construct pathological losses such that the norm of $A$ or $B$ diverges. In that case, little can be said about $|L\_{L2}(A, B) − L\_\star(AB^\top)|$. Ultimately, we used the boundedness assumption because any training dynamic that converges will trivially satisfy it. This is the practical scenario we are interested in, given that in practice, virtually any stable training on a realistic loss, coupled with weight decay, results in a converging dynamic.
> "long before stationary points are found". To me this cannot be easily seen from Theorem 3.4.
We will reformulate Theorem 3.4 in the following way for clarity:
"(...) then we have that $|L\_{L2}(A, B) − L\_\star(AB^\top)|$ converges exponentially to 0, *with a characteristic timescale equal to $2 \lambda$*".
This hopefully highlights the point we made in the rebuttal. Furthermore, to better reflect our point, clarified in the rebuttal, we will also modify our sentence from
"(...) long before stationary points are found."
to
"(...) *potentially* long before stationary points are found."
We acknowledge it may have been confusing, and thank the reviewer for pointing it out to us.
-------
We hope we could address the remaining concerns, and that together with our other additional clarifications, it convinces the reviewer to reconsider the score. | Summary: This paper explores how applying weight decay to matrices affects the rank of their product. They show that L2 regularization of the operands is equivalent to regularizing the nuclear norm (sum of singular values) of the product which could result in a lower rank. The attention block of transformers contains several plain matrix products that this theory applies to. The authors experiment with different transformer variants, showing that applying weight decay to these matrices results in a loss of rank which is detrimental to performance. They suggest not applying weight decay to these specific matrices, while keeping it for the rest.
Strengths: + Interesting topic of high relevance to the community, weight decay is almost universally used for transformer training and improved guidelines and understanding could result in practical performance gains.
+ The theory looks sound with simple experiments that support the main conclusion.
+ The empirical evaluation on real transformers supports the theory.
+ The paper is well written overall although a bit dense at times. The figures are sufficiently clear although a bit small and not in a vector format.
Weaknesses: - The theory relies on analyzing the converged solution with gradient flow. I’m not sure how well this corresponds to real training (it would be nice to discuss this).
- The experiments could be stronger to eliminate some potential confounding explanations (see details below).
- (minor) Missing a couple of related work on weight decay (see details below).
Technical Quality: 3
Clarity: 3
Questions for Authors: **Experiments**:
- The ViT experiments show a much larger loss of rank but the performance impact of this is not quantified, why not?
- For the GPT experiments the loss of rank seems very small, as you point out. I’m not convinced that this is the root cause for the performance loss, rather than some temperature effects for the softmax.
- Many related works (see below) explore weight decay in terms of effective learning rates. They would suggest that when changing the weight decay the learning rate should be adjusted to compensate. Otherwise it is unclear if the performance degradation truly results from changes in the rank and weight decay or changes in the effective learning rate. Showing that these rank effects occur from the effective learning rate as well would strengthen the results and relate it to existing lines of work.
- Suggested experiment: Disentangle the softmax temperature effects (from the magnitude of the weight matrices / activations) from the rank effects. Maybe you could repeat the GPT or ViT experiment using a scale-invariant softmax from [1], reporting the rank and performance again. Since this softmax alternative does not depend on the scale of the inputs, it would eliminate this confounding effect.
- Suggested experiment: Disentangle the effective learning rate effects from the rank effects. When changing the weight decay, keep the weight decay * learning rate product constant. This will keep the effective learning rate in the steady state constant as described by [5] for AdamW, but also change the resulting weight magnitude, so it should be combined with something like the scale-invariant softmax to eliminate that effect.
**Related work**:
- Overall you provide a good overview of prior weight decay literature but are missing one important line of work that explores weight decay in terms of the effective learning rate. Here are a couple of notable works from this line: [2] [3] [4] [5]
**Other Questions**:
- In my experience high (effective) learning rates cause a loss of rank (at least certain measures like the stable rank) even in standard matrices (not just the products). Is this something you observe in your experiments (e.g. a loss of rank in A and B that you apply weight decay to, not just the product AB)?
[1]: Li, Zhiyuan, Srinadh Bhojanapalli, Manzil Zaheer, Sashank Reddi, and Sanjiv Kumar. "Robust training of neural networks using scale invariant architectures." In International Conference on Machine Learning, pp. 12656-12684. PMLR, 2022.
[2]: Li, Zhiyuan, and Sanjeev Arora. "An exponential learning rate schedule for deep learning." ICLR 2020.
[3]: Wan, Ruosi, Zhanxing Zhu, Xiangyu Zhang, and Jian Sun. "Spherical motion dynamics: Learning dynamics of normalized neural network using sgd and weight decay." Advances in Neural Information Processing Systems 34 (2021): 6380-6391.
[4]: Li, Zhiyuan, Kaifeng Lyu, and Sanjeev Arora. "Reconciling modern deep learning with traditional optimization analyses: The intrinsic learning rate." Advances in Neural Information Processing Systems 33 (2020): 14544-14555.
[5]: Kosson, Atli, Bettina Messmer, and Martin Jaggi. "Rotational equilibrium: How weight decay balances learning across neural networks." In International Conference on Machine Learning, 2024.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper does not really discuss limitations in any depth. I would not expect any particular adverse societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the reviewer for providing us with valuable feedback. We address your concerns and questions point by point below.
> The theory relies on analyzing the converged solution with gradient flow. I’m not sure how well this corresponds to real training (it would be nice to discuss this).
We clarify that we have three main theoretical contributions: the correspondence of the local optima of the two regularized losses, the matrix inequalities, and the exponential decay of the difference between the two losses during optimization. The first two results are independent of the optimization method and are therefore relevant for real training. Specifically, if the solution found is an approximation of a local (or global) minimum of the L2-regularized loss, it is also an approximation of a minimum of the nuclear-norm regularized loss.
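The correspondence builds on the classical variational characterization of the nuclear norm, $\|W\|_* = \min_{W = AB^\top} \tfrac{1}{2}(\|A\|_F^2 + \|B\|_F^2)$, attained by the balanced SVD factors. A few lines of numpy can sanity-check this (an illustration of our own, not code from the paper's experiments):

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.standard_normal((5, 4))
U, s, Vt = np.linalg.svd(W, full_matrices=False)

nuclear = s.sum()                       # nuclear norm = sum of singular values

# Balanced factorization W = A B^T with A = U sqrt(S), B = V sqrt(S)
A = U * np.sqrt(s)
B = Vt.T * np.sqrt(s)
balanced_cost = 0.5 * (np.linalg.norm(A) ** 2 + np.linalg.norm(B) ** 2)

# Any unbalanced factorization of the same W pays strictly more L2 penalty
A_bad, B_bad = 3.0 * A, B / 3.0
unbalanced_cost = 0.5 * (np.linalg.norm(A_bad) ** 2 + np.linalg.norm(B_bad) ** 2)

print(np.allclose(A @ B.T, W), np.isclose(balanced_cost, nuclear),
      unbalanced_cost > nuclear)
# → True True True
```

The balanced factors achieve the nuclear norm exactly, while rescaling one factor up and the other down preserves the product but inflates the Frobenius penalty, which is why L2 regularization of the factors acts as nuclear-norm regularization of the product.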
As for the result on the training dynamics, we provide results for both the simple gradient flow regime and stochastic gradient flow with momentum. There are indeed several apparent sources of discrepancy between these dynamics and real training dynamics, e.g., continuous vs. discrete dynamics, how to model the minibatch noise exactly, and how to approximate the theoretically intractable AdamW dynamics. While we believe some of these approximations are benign (see our 2nd response to reviewer fDM3), others may not be (see, e.g., [1]). Ultimately, we had to strike the right balance between theoretical tractability and proximity to real training, such that relevant insights may be extracted without overcomplicating proofs, and we believe our empirical verifications validate the modeling. We agree that these are nonetheless important points to discuss, and will add these points as a limitation of the current theory in the final manuscript.
[1] Dynamic of Stochastic Gradient Descent with State-Dependent Noise. Qi Meng et al. arXiv 2020
> (minor) Missing a couple of related work on weight decay (see details below).
Thank you for providing these references. The relationship between weight decay and the effective learning rate is very relevant in our setting, as you pointed out. We appreciate the reviewer for bringing this to our attention. We will incorporate these references into the final version of the manuscript.
> The ViT experiments show a much larger loss of rank but the performance impact of this is not quantified, why not?
This is indeed a good point that needs clarification. In our experiments, we found that removing the weight decay from the attention layers in ViTs did not result in a significant gain in performance. In fact, removing weight decay even slightly degraded performance, indicating that for visual tasks, low rank doesn't seem to hurt as much as it does for language tasks.
This highlights a subtlety in our message that we now clarified in the revised version. The practical takeaway of our work should not be that practitioners should always turn off weight decay in attention layers. In fact, we speculate that when increasing, for example, the key-query dimension, the rank regularization of weight decay may become beneficial, even in language modeling tasks. Our work aims to uncover the confounding, low-rank-inducing inductive bias of weight decay coupled with attention layers and demonstrate the relevance of this inductive bias in real applications, including popular foundation models. The benefit of this bias is problem- and architecture-dependent. By taking an off-the-shelf model and hyperparameters and showing that performance can be improved by turning off weight decay in its attention layers, we aim to highlight the need for practitioners to take this inductive bias seriously.
> Disentangling the softmax temperature effects from the rank effects.
This is an excellent point. During our experimentation, we tried selectively turning off the weight decay on the key-query matrices only, as well as on the value-projection matrices only. Both of these changes, in fact, improved upon the baseline where weight decay was left at 0.1, yielding about half of the improvement achieved by turning weight decay off in all attention matrices. This suggests that low rank in both $W_{KQ}$ and $W_{PV}$ is beneficial and that the benefits are cumulative.
Now, since turning the weight decay off in the value-projection matrices does not affect the effective temperature of the softmax attention, we believe the performance improvement could be (at least partially) disentangled from it.
> Disentangling the effective learning rate effects from the rank effects.
This is also an excellent point. We thank the reviewer for bringing this confounding effect to our attention. We conducted the following experiment: to understand whether reducing the effective learning rate of attention layers can account for the performance improvement, we reused the off-the-shelf hyperparameters (where weight decay is set to 0.1 on all attention layers) and modulated the learning rate of the value-projection matrices by 1, 0.1, and 0.01. We left the key-query matrices untouched to disentangle the softmax temperature effect, as you suggested. Our early results suggest that reducing the learning rate in this manner results in significantly worse performance than the baseline, with a decrease of about 1% for the 0.1 modulation and 3% for the 0.01 modulation.
While it is difficult to truly disentangle the various confounding effects, we hope these new pieces of evidence are sufficient to convince the reviewer that the improved performance is, at least partially, due to the reduction in rank.
We thank the reviewer again for their inputs, particularly for their great ideas for analyzing confounding factors, which we also will add to our discussion. We believe that the various new clarifications and the emphasis on the new controls strengthen our paper. We hope the reviewer agrees, and will raise their score accordingly.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their detailed responses and clarifications. The additional experiments and discussion alleviate my concerns about confounding effects to a degree (although the scale-invariant versions would have been more convincing), and strengthen the paper overall. I will raise my review score to 6. | Summary: The authors investigate the landscape of two different optimization problems. For a general objective function $L$ defined on a matrix space, they consider two regularized objectives $\mathcal L_*(B, A) = L(AB^\top) + \lambda ||AB^\top||_*$, $\mathcal L_2(B, A) = L(AB^\top) + \frac{\lambda}{2} (||A||_F^2 + ||B||_F^2)$.
The authors prove that these two objective functions share the same set of critical points up to equivalence. Moreover, along integral lines of $\nabla \mathcal L_2$, the difference $|\mathcal L_2-\mathcal L_*|$ decays exponentially if both $||A||$ and $||B||$ remain bounded, effectively showing an implicit bias towards low-rank solutions. These results naturally apply to transformer training, for which the authors present some numerical results.
Strengths: The work is very well-presented, and the contribution is self-contained. The theoretical analysis covers cases of practical interest. Presenting the results through gradient flows enhances readability.
Weaknesses: 1. Lack of an additional lemma addressing the effect of time discretization in the gradient flow scenario.
2. The stochastic case is studied through SDEs, which may differ significantly from the exact setting encountered in deep learning.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. The balance condition is often enforced directly at initialization and can be proven [1] to be conserved during the continuous-time flow. By the use of proposition 3.1, this would imply that the gradient flow of $\mathcal L_2$ is **exactly the same gradient flow** of $\mathcal L_*$. This kind of condition is for example satisfied by spectral initialization, which is commonly used. I believe a comment about this would be interesting to see in the manuscript.
2. Do the authors see any way to address the two weaknesses? I believe further clarifications in this direction in the manuscript would make the result sound more solid.
3. I suggest to organize better the references section, to make them all uniform in style.
[1] S.S.Du, W. Hu, J.D. Lee. "Algorithmic Regularization in Learning Deep Homogeneous
Models: Layers are Automatically Balanced", NeurIPS 2018.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I have no suggestions for the authors.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and interest. Below, we address your concerns point by point.
> Lack of an additional lemma addressing the effect of time discretization in the gradient flow scenario. The stochastic case is studied through SDEs, which may differ significantly from the exact setting encountered in deep learning.
We thank you for bringing up this point. We would like to briefly comment on how our result could be extended to the discrete dynamic setting and why we omitted it in the manuscript.
For the non-stochastic version (Lemma B.1), the proof remains unchanged. One essentially obtains an exponential decrease in $(1-\lambda\eta)^2$, where $\eta$ is the learning rate. Note that for $\eta \to 0$, we retrieve the $2\lambda\eta$ factor from the continuous case.
For a stochastic version, assuming we model the minibatch noise similarly, one would get an exponential decrease of $A^\top A - B^\top B$ until it becomes of the order of $\sigma M$, where $\sigma^2$ is the variance modeling the stochasticity of SGD, and $M$ is an upper bound on $A$ and $B$.
Things get one order more complicated if we want to model discrete time, stochasticity, and momentum. The discreteness of time adds additional interaction terms between the noise terms and the matrices $A$ and $B$, that we now need to take into account. While this is totally within reach, it would add a lot of technicalities that don’t provide any additional insights or intuition. The proofs would be barely readable, and even the phrasing of a precise proposition would be much more complicated. As you rightfully pointed out, the continuous SDE offers a nice approximation of real dynamics, as $\eta$ is often chosen to be small, while still capturing all the intuition.
We agree, however, that the quality of these approximations to the practical training dynamics is nonetheless an important point to discuss, and will add a short paragraph in the final manuscript.
> The balance condition is often enforced directly at initialization and can be proven [1] to be conserved during the continuous-time flow. By the use of proposition 3.1, this would imply that the gradient flow of $L2$ is exactly the same as the gradient flow of $L^\star$. This kind of condition is, for example, satisfied by spectral initialization, which is commonly used. I believe a comment about this would be interesting to see in the manuscript.
This is an intriguing point, and we thank the reviewer for bringing it to our attention. There is indeed a deep connection with spectral initialization, which will now have a dedicated paragraph in the discussion for the final manuscript.
However, we stress that $\mathcal{L}\_\star(A_t,B_t)=\mathcal{L}\_{L2}(A_t,B_t)$ for all $t$ during optimization with respect to $\mathcal{L}_{L2}$ does not imply the equality of the gradients of the two losses. We can theoretically show that the equality of the gradients would in fact hold whenever the singular values of $AB$ are all equal, but this does not hold in general.
Obviously, however, the equality $\mathcal{L}\_\star(A_t,B_t)=\mathcal{L}\_{L2}(A_t,B_t)$ implies that $\mathcal{L}\_\star$ is co-optimized from the beginning, thus the rank of $AB$ is regularized, and given our theoretical result, optimization will find a local minimum of $\mathcal{L}\_\star$. We stress that while directly optimizing $\mathcal{L}\_\star$ would also find a local optimum, it may take a different trajectory and reach a different optimum. The study of this difference may be interesting future work.
> I suggest organizing the references section better to make them all uniform in style.
We fully agree with this point, and we have now fixed it. We thank the reviewer for bringing this to our attention. We will keep on improving our manuscript for the final version.
---
We thank the reviewer again for bringing intriguing connections to our attention, which will help improve the discussion. We hope we have addressed your concerns and remain at your disposal for any further questions.
---
Rebuttal Comment 1.1:
Comment: I wish first of all to thank the authors for their thorough rebuttal.
I apologize for the imprecise statement, I agree that the flows of $\mathcal L_2$ and $\mathcal L_*$ are not the same even under spectral initialization, but they are co-optimized during the whole path.
I am satisfied with the answers and I will keep my score. | Rebuttal 1:
Rebuttal: We sincerely thank all the reviewers for their efforts in evaluating our work. Please find your personalized responses addressing your specific concerns. We welcome any further questions and remain open to continued discussions. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this paper, the authors study the role of weight decay in the training of matrices, especially when they appear in multiplicative form. After first establishing the equivalence between the L2-regularized and nuclear-norm-regularized losses at stationary points and local minima, they show how even training on the L2 loss invariably leads to the latter, and hence to low-rank solutions at the minima. Using this main observation, they empirically validate their claim for various practical LLMs and provide interesting insights about their behavior.
Strengths: The authors consider a well-motivated problem and characterize it mathematically with substantive evidence. Given the impact of transformers and the underlying need to study them fundamentally, such analyses are indeed quite insightful and useful. In particular, a rigorous theoretical analysis of how low-rank solutions emerge is indeed a very interesting observation.
Weaknesses: While overall I enjoyed reading the paper, I wish the authors did a slightly better job at the following two things which could have made it even more solid:
1. The writing and exposition could be greatly improved; at various places, especially concerning mathematical statements, it feels a bit loose, though the details are provided in the appendix. For example, Theorem 3.4 concerns the gradient flow analysis, but neither the flow equation nor the precise statement of the "L2-nuclear norm" loss gap going to zero is written mathematically. Since this is an important result, it would be better to present enough mathematical details and expound upon them later. Similarly in Theorem 3.3, regarding the rank-r: it's not clear what this "r" is supposed to be, whether it is attained at the optimum or fixed beforehand. Things like this should be fixed. There are also a couple of typos, such as Eq. (8), which should end with a full stop, and line 176, which reads better with "two losses" rather than "2 losses" (the same comment applies at various places, like line 173), etc.
2. While the L2 loss in equation (2) concerns a pair of matrices (A,B) with the loss of the form L(AB^\top), given that the main motivation of the authors stems from transformers, it would have been nice to have a discussion about how their analysis extends to Eq.(1), let's say. Here there are two such multiplicative terms with $PW_V$ and $E^\top W_k^\top W_Q E$. In this case, can your results about low-rank minima still carry over? Or not? Some insights about this?
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the above weakness part for questions.
-------------------------
Minor note: In a recent work, the authors observe a similar low-rank structure when training transformers with Markov chains. They reconstruct the formula for these low-rank matrices and indeed it seems your low-rank optimal conditions seem to be satisfied there (Appendix C I think). Thought I would share with the authors if they find interesting (not to compare as such): https://arxiv.org/abs/2402.04161
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their encouraging feedback and constructive criticism. We address your concerns point by point below.
> The writing and exposition could be greatly improved
We really appreciate all the feedback about the writing. We fully agree with all the above points, all of which are now taken into account in the revised version.
> Can the results about low-rank minima still carry over when the multiplicative terms are $PW_V$ and $E^\top W_K^\top W_Q E$?
This is indeed a crucial point which could be clearer. We thank the reviewer for bringing it to our attention. For our theory to hold, the paramount condition is that two matrices, $A$ and $B$, enter the unregularized loss $L$ only as their multiplied form, $AB$. That unregularized loss may depend on other parameters and inputs, which obviously may interact with $AB$. But as long as this condition holds (as is the case for the attention matrices), the dependence of $L$ on other quantities can be safely ignored without loss of generality.
For any such two matrices $A$ and $B$, we can write the loss as $\mathcal{L}: (A, B, \Theta) \mapsto L(AB^\top, \Theta)$, where $\Theta$ denotes all the remaining parameters. Stationary points of $\mathcal{L}$ are also stationary in $A$ and $B$ so Lemma 3.2 still holds. For Theorem 3.3, the exact same proof would allow us to show that $A, B, \Theta$ is a local minimum of $\mathcal{L}\_{L2}$, if and only if $AB, \Theta $ is a local minimum of $\mathcal{L}_\star$ (constrained to ...). For Theorem 3.4, in the warm-up proof on line 534, $G$ hides a dependence on $\Theta$, but the result still holds as $G$ cancels out. The same applies to the gradient flow proof on line 542, where $G_t$ now depends on time, and thus indirectly on $\Theta$.
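As a sketch of why the argument carries over, writing $G := \nabla_M L(M, \Theta)$ evaluated at $M = AB^\top$, the chain rule gives gradients of the same form as in the two-matrix case (our notation, added for illustration):

```latex
% Gradients of \mathcal{L}(A, B, \Theta) = L(AB^\top, \Theta),
% with G := \nabla_M L(M, \Theta) evaluated at M = AB^\top:
\nabla_A \mathcal{L}(A, B, \Theta) = G\,B , \qquad
\nabla_B \mathcal{L}(A, B, \Theta) = G^\top A .
% \Theta enters these gradients only through G, so the stationarity and
% local-minimum arguments in (A, B) go through with \Theta held fixed.
```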
We realize we did not provide enough explanation of why the loss can be assumed to depend solely on $A$ and $B$ without loss of generality. In the updated paper, we now very clearly motivate, using the specific example of transformers, why studying our loss in this manner is the right thing to do.
---
We thank the reviewer again for these suggestions and questions, which we believe help improve the paper. We remain at your disposal if you have any further questions, and hope our clarification convinces the reviewer for a strong acceptance of the paper.
---
Rebuttal Comment 1.1:
Title: Acknowledgement of the rebuttal
Comment: I thank the authors for addressing my concerns and entrusting them with the promised changes. I will keep my score. | null | null | null | null | null | null |
Self-playing Adversarial Language Game Enhances LLM Reasoning | Accept (poster) | Summary: This paper explores an adversarial language game named Adversarial Taboo, where an attacker and a defender engage in a conversation centered around a target word visible only to the attacker. The attacker's goal is to prompt the defender to unconsciously utter the target word, while the defender strives to avoid doing so and deduce the word from the conversation history. Using LLaMA-2-7B and Baichuan-2-13B models, the study demonstrates that applying reinforcement learning to game outcomes enhances the reasoning capabilities of large language models.
Strengths: - The paper is well-written.
- The authors investigate an intriguing adversarial language game called Adversarial Taboo, demonstrating that training on game data helps to enhance general reasoning abilities.
- The study includes extensive experiments on two large language models (LLMs) across various datasets.
Weaknesses: - The entire procedure involves standard training on the game data. However, the insight that this training not only increases the game's win rate but also enhances general reasoning ability is quite interesting.
- There is no discussion or analysis conducted on why training within the game can increase general reasoning ability.
- No ablation study was conducted on training techniques, such as filtering negative advantages.
Others:
- PPO objective (Eq.2): KL (\pi || \pi_sft) ?
Technical Quality: 3
Clarity: 3
Questions for Authors: Regarding the two baselines (SP-20Q and SP-GuessCity), the authors describe them as self-play models for non-adversarial games. How is self-play conducted in the context of non-adversarial games?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper includes a section on limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the suggestions and comments from Reviewer 1AP3. To address the reviewer's concerns:
1. About **why reasoning ability improves by training within the game**: We provided a brief explanation of why self-playing adversarial games can improve the LLM's reasoning in the abstract and introduction sections: in adversarial language games such as Adversarial Taboo, both the attacker and the defender need high-level language capacities, in terms of expression, understanding, and knowledge, to reason about the target word. In our opinion, the adversarial language game is to reasoning improvement what sports competition is to fitness. For example, a person can play basketball, play tennis, or run competitively to strengthen their body. We imagine that in the future, LLMs can also strengthen their capabilities by self-playing many different kinds of language competitions, of which Adversarial Taboo can be one candidate.
2. We began our experiments with negative advantages but failed to obtain the desired self-play performances, which is why the current draft uses the ReST method, which filters out negative advantages. Since our experiments with negative advantages failed (the performances were even worse than the base models), we did not report them, as we have not yet determined whether the failure is due to our RL implementation or to the instability of negative policy gradients. What we can confirm is that the SPAG framework works effectively without negative advantages. We will recheck our implementation of negative advantages and provide a more comprehensive discussion of them in the next revision.
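For concreteness, the filtering described above amounts to discarding trajectories with non-positive advantage before a weighted imitation update; a minimal sketch, where the episode records and advantage values are illustrative stand-ins rather than the actual data format:

```python
# ReST-style filtering: keep only positive-advantage self-play episodes and
# use the advantage as the weight of each episode's imitation (log-likelihood) loss.
episodes = [
    {"dialogue": "attacker wins ...", "advantage": 0.8},
    {"dialogue": "attacker loses ...", "advantage": -0.5},  # filtered out below
    {"dialogue": "defender wins ...", "advantage": 0.3},
]

train_set = [ep for ep in episodes if ep["advantage"] > 0]
loss_weights = [ep["advantage"] for ep in train_set]
```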
3. About the **PPO objective**: Although in recent RLHF papers, the PPO (or more precisely, the LLM alignment objective) is with regularizer KL(\pi || \pi_ref), in the original TRPO[2] and PPO[1] paper the regularizer is KL(\pi_ref || \pi ). I think the term KL(\pi || \pi_ref) is more convenient for DPO[3] to derive the closed-form solution for the optimal policy.
Reference:
[1] Proximal Policy Optimization Algorithms, 2017
[2] Trust Region Policy Optimization, 2015
[3] Direct Preference Optimization: Your Language Model is Secretly a Reward Model, 2023
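Since the two KL directions are easy to conflate, here is a tiny numerical example showing that they generally differ (the token distributions below are made up purely for illustration):

```python
import math

def kl(p, q):
    # KL(p || q) for discrete distributions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

pi_model = [0.7, 0.2, 0.1]  # hypothetical current-policy token probabilities
pi_ref   = [0.4, 0.4, 0.2]  # hypothetical reference (SFT) policy

forward = kl(pi_model, pi_ref)  # KL(pi || pi_ref): common in recent RLHF objectives
reverse = kl(pi_ref, pi_model)  # KL(pi_ref || pi): as in the original TRPO/PPO papers
```

Both values are positive, but they are not equal, so the choice of direction is a genuine modeling decision.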
---
Rebuttal 2:
Title: Reply to rebuttal
Comment: Thanks for the reply. Regarding the second point about negative advantages, how would the method perform if negative-advantage filtering were kept but other data (such as data from non-self-play scenarios) were used instead?
---
Rebuttal Comment 2.1:
Comment: Thank you for your reply. I think if non-self-play data were used, the proposed SPAG objective would degenerate into an offline PPO (or A-LoL) objective with negative advantage filtering, which can be regarded as an off-policy variant method for RLHF (or simply RL). The major difference is that in SPAG we collect the positive advantages for **both the attacker and defender**, but without a self-play scenario, we can only collect one type of advantage from the **unique** environment outcome. Therefore, the proposed SPAG objective can be regarded as a PPO customization to the self-play scenarios. | Summary: The paper introduces a self-play method, Adversarial Taboo, to bolster the reasoning ability of LLMs. By engaging in a two-player game that requires strategic communication around a target word, LLMs demonstrate improved performance across several reasoning benchmarks. The method leverages reinforcement learning to refine the models' strategies over successive rounds of play.
Strengths: 1. The proposed method is interesting, leveraging adversarial gaming dynamics.
2. Models post-Adversarial Taboo training outperform baselines on benchmarks like BBH, ARC, and WinoGrande, across different models.
3. Utilizes self-generated data, minimizing reliance on external datasets and reducing data collection costs.
Weaknesses: 1. I concur with the use of a general-purpose instruction-tuning dataset to prevent overfitting to the game. However, I remain concerned that the method proposed in this paper may compromise the language model's broader capabilities in favor of enhancing reasoning skills, as evidenced by the examples provided in the appendix. The authors have only presented performance metrics for a few reasoning-related tasks. To bolster the argument, I believe it would be more persuasive if the authors could compare the results of their method with those of Alpaca SFT on general tasks.
2. The enhancements brought by the method proposed in this paper are not as pronounced when compared to the IM+AlpacaSFT baseline, and there is even a decline under mutual conditions. I look forward to the authors elaborating on the potential of this method for scaling or multi-round iterations.
Technical Quality: 3
Clarity: 3
Questions for Authors: The table in Figure 1 is quite difficult to read. Could you please add more explanatory information in the caption to facilitate the reader's understanding?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer R1Zh for the constructive comments and suggestions. To address the reviewer's concerns:
1. About the **evaluation on general tasks**: we agree with the reviewer that evaluating general language capacities is important to the SPAG models. We actually have reported the general language understanding evaluation of SPAG models on the Massive Multitask Language Understanding (MMLU) dataset in Table 1&2, which might give the reviewer an overview of the general capacity of SPAG models. Besides, the uniform performance gains of SPAG on various reasoning benchmarks also reflect the generalization ability of SPAG models, since these benchmarks are based on diverse reasoning challenges. Moreover, the SPAG models obtain uniform reasoning gains just by self-playing the adversarial language game without any training in the reasoning training sets (many reasoning benchmarks have their own training sets but we didn't use any of them). This setup also supports the generalization ability of our SPAG method.
2. About the **IM+AlpacaSFT baseline performance**: we agree that the IM+AlpacaSFT model can achieve performance comparable to the SPAG-1 model. However, IM+AlpacaSFT has already shown the upper-bound performance of Alpaca-SFT on imitation-learned models. In contrast, SPAG models can be continuously enhanced through iterative self-play and RL training processes. Besides, we did notice that IM+AlpacaSFT outperformed SPAG with Baichuan-2 as the base model on the Mutual benchmark. Due to limited time, we did not dive deeply into the reason for this observation in the current draft. One potential reason might be that the Mutual dataset has a domain or text pattern similar to the text in the Alpaca set. We will provide a more detailed analysis of the Mutual benchmark in the next revision.
3. About the **Figure 1**: We are sorry about the confusion caused by the notations in Figure 1. We will add more explanations and descriptions to the notations in Figure 1 in the next revision.
---
Rebuttal Comment 1.1:
Comment: I've read the rebuttal. Thanks for your response! | Summary: This paper explores the effects of fine-tuning LLMs on adversarial language games on standard NLP benchmarks such as MMLU, BIG-Bench Hard, etc. The primary game studied is that of "adversarial taboo", an adversarial variant of the well known game Taboo that was first introduced in prior work. This variant is finite-horizon and zero-sum, where the communicator (aka the attacker) tries to induce the utterance of the target word in the listener (aka the defender) while the defender tries to identify the target word and gets one guess (by answer in the form "I know the word! It is ‘target word‘") to identify the target word. If the attacker is able to induce an utterance in the defender not in the form of a guess, or if the defender guesses incorrectly, then it wins the round, and if the defender correctly guesses the target word, it wins the round. Otherwise, the outcome is considered a tie. The authors collect self-play data from GPT and fine-tune Llama-2 7B and Baichuan 13B on the collected game traces, in addition to standard instruction-tuning data in the form of Alpaca. They find that compared to the base model, as well as Alpaca-only fine-tunes, their models trained to also imitate GPT's gameplay achieve higher performance on the tasks of interest. The authors also compare against imitation learning on other adversarial language games, such as 20 questions and "Guess My City" (proposed in earlier work), and find that models trained on these other games are much weaker.
Strengths: * Presentation of technical content is very clear. The summarization of recent advances in RL with LLMs and how they could be used in adversarial language games was quite helpful
* Authors tackle an important question (how to make LLMs stronger reasoners) with an interesting and creative approach
Weaknesses: * The authors don't actually have any results with RL, even though much of the paper is focused on MDP/RL formalisms and the details of RL algorithms.
* Insufficient baselines to determine whether the imitation learning on GPT self-play traces is actually the reason for improved benchmark performance. The comparisons to Alpaca-only models are interesting, but not enough since the self-play traces are still additional training data. The authors should compare to data that looks similar to the self-play traces, such as multi-turn dialogues about the target word without playing the game directly
Technical Quality: 2
Clarity: 4
Questions for Authors: * An alternative explanation for why this fine-tuning appears to work is that the SPAG taboo data is simply a rich conversational dataset on which supervised learning is helpful for general LLM capabilities, rather than the game objective or self-play or adversarial nature. This would also explain why the model fine-tuned on 20Q data performs so badly even though the game is very similar in structure, since the version of 20Q used only has 158 different unique targets while the taboo game has 50,000 unique targets. How do we know it's actually taboo that is important?
Confidence: 3
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate reviewer ib6k's detailed comments and review. To address the reviewer's concerns:
### 1. About **RL results**:
we utilized the MDP/RL formalisms to introduce our SPAG objective in equation (13), which is a policy-gradient-based off-policy RL loss. We then conducted RL training with this SPAG loss (Eq. 13) in an offline scheme on the imitation-learned LLMs. For each LLM, we iteratively conducted three epochs of RL training, denoted as SPAG-1, SPAG-2, and SPAG-3. The RL results of these SPAG models are reported in Table 1, Table 2, and Figure 4. From these RL experimental results, we observed clear performance gains in each RL iteration (SPAG-1 $\rightarrow$ SPAG-2 $\rightarrow$ SPAG-3) in terms of both LLM reasoning and game-playing.
### 2. About the **contribution of imitation learning**:
although imitation learning on GPT self-play traces improves the LLM reasoning uniformly on benchmarks, we did not count these performance gains as the contribution of our paper. The reason for conducting imitation learning is that the original open-source LLMs (LLaMA-2 and Baichuan-2) have limited instruction-following capability to obey the game rules of Adversarial Taboo. Without imitation learning, it is hard for us to collect legal self-play game traces of open-source LLMs. By cloning the behavior of GPT-4, both LLaMA-2 and Baichuan-2 learn to play the adversarial game legally. At the same time, the reasoning abilities of these imitation-learned models also increase naturally because of the SFT from GPT-4 as a much better LLM.
However, the imitation-learned models are the baselines to show the effectiveness of the proposed SPAG method. We iteratively conduct three-epoch RL training based on the self-play traces of the imitation-learned models and continuously observe the performance gains on reasoning benchmarks (SPAG-1 $\rightarrow$ SPAG-2 $\rightarrow$ SPAG-3 in Table 1&2). These results support the effectiveness of the proposed SPAG method. In these rollout & RL processes, the only supervision data we used is the Alpaca dataset, which helps to preserve the general instruction-following ability of LLMs.
To ablate the impact of the Alpaca SFT data, we also continuously train the imitation-learned models with the Alpaca set and show their performances in Table 1&2 (denoted as IM+AlpacaSFT), which have lower performances than SPAG-trained models (SPAG-1 can be regarded as IM+AlpacaSFT+winning-selfplay). This ablation study also supports the effectiveness of the proposed SPAG compared with continuous SFT with the same amount of SFT data (Alpaca).
To sum up, we agree with the reviewer that imitation learning of GPT-4 increases the reasoning of LLaMA-2 and Baichuan-2. However, this process only provides us with better baseline models and does not weaken the effectiveness of the SPAG method, since all the SPAG-trained models are based on these imitation-learned models.
### 3. About the **Reviewer's Question**:
We agree that the GPT-4 self-play data on the adversarial taboo game can be regarded as a rich conversational dataset for supervised learning. However, as discussed above, we do not count the performance gains from GPT-4 imitation into the contributions of our paper. After the imitation learning stage, all the taboo conversation data is collected **synthetically** from the model outputs. The only additional supervision is the Alpaca dataset. As mentioned above, we have conducted ablation studies (`imitation model+Alpaca` vs `imitation model+Alpaca+SPAG`) to show the effectiveness of SPAG.
For the comparison with 20Q, we agree that the target sizes of the two games are different. However, the target sizes are entangled with the game designs, e.g. GuessMyCity can only use city names as targets. Although the target sizes differ, we have made efforts to control other experimental factors. In Appendix Section E, we used similar self-play sampling sizes (20K for imitation and 10K for self-play RL training) for the non-adversarial games. From the perspective of supervision, we used the same amount of imitation data (20K GPT-4-played games) and the same SFT data (Alpaca). All other conversation data are sampled by the learning models themselves, in line with the concept of reinforcement learning. Therefore, the performance gaps between the adversarial Taboo game and the non-adversarial games support the effectiveness of this adversarial game design.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! My apologies, I misread the ReST paper and had previously understood it to be just performing imitation learning on high-scoring samples (e.g. winning GPT-4 traces) but now see that it and your work are using more sophisticated objectives that do require the MDP formalism, and that the SPAG-1/2/3 models differ more substantively from each other as a result. I'd be happy to increase my score.
---
Reply to Comment 1.1.1:
Comment: Thank you so much for raising your score. We are glad to find your concerns being addressed with our explanation! | Summary: This research investigates a novel self-play training method for Large Language Models (LLMs) using an adversarial language game called Adversarial Taboo. In this game, one LLM acts as an attacker trying to get another LLM (acting as a defender) to say a secret word without revealing it directly. Both sides need strong reasoning to succeed: the attacker to nudge the defender, and the defender to infer the word. The authors find that training LLMs with self-play in this game improves their reasoning abilities across various benchmarks.
Strengths: - Self-play is a very interesting approach to improve models that has seen recent successes.
- The fact that performance improves with epochs is a strong signal that the method is useful.
Weaknesses: - The choice of the second model seems arbitrary. Why not just show it on two LLaMa models?
Technical Quality: 4
Clarity: 3
Questions for Authors: - Would it be beneficial if we had $n>2$ models?
- How well do you think the results will transfer to other models?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limitations addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We express many thanks for the reviewer's supportive comments. For the reviewer's concerns:
1. About **the second model choice**: we chose Baichuan-2-13B as the second base model mainly because the full self-play experiments (imitation learning, SPAG-1, SPAG-2, SPAG-3) for one base model are already heavy enough, especially with limited computational resources. So we gave this opportunity to an open-source LLM totally different from LLaMA-2-7B, with a different model size, major language, and even company region: Baichuan-2 is pretrained with more focus on Chinese-language scenarios by a Chinese startup. In this setup, we can test the universality of the proposed SPAG method with totally different LLMs. Fortunately, SPAG has shown LLM reasoning improvements on both model bases.
2. About **N>2 models**: We think it would be imaginative and promising to have more than two LLMs engaged in language games. These LLMs could play Adversarial Taboo or other multi-player language games. We believe that, with elaborate game designs, LLMs' capacities can be further enhanced via self-play, just like human activities in society. This can be a very interesting direction for future work.
3. About the **performance of SPAG transferred to other LLMs**: as mentioned above, we chose two different LLMs to test the proposed SPAG method and observed satisfying reasoning improvements, which supports the universality of our method. However, it remains unclear whether the SPAG method has an upper bound, since open-source LLMs' reasoning abilities are continuously improving. But we can still design more challenging self-play games and conduct SPAG for LLMs to practice and reinforce. For example, we could let two LLMs implement game AIs to play in AI game programming competitions, which could be a more hard-core SPAG setup to enhance LLMs' coding and reasoning. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
GAVEL: Generating Games via Evolution and Language Models | Accept (poster) | Summary: The paper presents a method to generate board games in a domain specific language using an LLM tuned on that language. An LLM is finetuned using fill-in-the-middle training to complete descriptions of board games from a previously gathered dataset. This LLM is used as a mutation operator in a quality-diversity algorithm. Evaluations start generation from a held-out set of seed games and show the algorithm produces more variation of high quality than an ablation. Preliminary expert human evaluations are favorable for the quality of the newly generated games.
Strengths: # originality
Low
- Using LLMs as mutation operators was previously established as a methodology (as referenced).
- Fine-tuning LLMs to produce domain-specific code is also well-explored.
- The primary novelty is the target dataset used.
# quality
Reasonable
- Great to have statistical testing of differences!
- The new domain makes it hard to have alternative baselines, but this makes it difficult to know how successful the algorithm is at solving the base domain task. The quality-diversity metric is "internal" to the algorithm in the sense of being a proxy metric to produce the desired "good games." The preliminary expert evaluations are promising, but not a strong piece of evidence (yet).
# clarity
Good
- Clearly articulates the problem and approach. Introduces domain-specific information to help readers understand the DSL and its features.
- The paper is direct about limitations and potential extensions.
# significance
Modest
- Of interest to the game generation community.
- Potentially of some interest to the code generation community, but the techniques are not particularly novel.
- The expert human evaluation of potentially novel games is a positive step toward impact.
Weaknesses: Fundamentally the paper rests on the success of the algorithm at generating games (given the relatively established techniques being employed). This is hard to gauge from the experiments so far: the QD metric does not directly measure success against a ground truth and the comparison to GAVEL-UCB is an ablation (thus relative change) without baseline algorithms to compare against.
How could the paper provide more persuasive evidence that GAVEL is advancing the problem of board game generation?
Perhaps consider comparing to few-shot prompted code LLMs as a more "naive" baseline. How does GAVEL compare in terms of number of tokens needed to produce games and the quality of the best of those games?
Or perhaps there is a simple ablation that replaces MAP-Elites with a reasonable alternative and shows that GAVEL produces more diverse and/or better games when consuming the same number of steps of generation (or raw tokens produced through LLM inference). These experiments (unfortunately perhaps too much for the discussion period) would make it clearer that the more direct ways to solve this problem are not sufficient. Matching on inference (token) budget would further provide evidence of better efficiency. Or perhaps this would show GAVEL can asymptote to greater quality in final artifacts even if it demands a larger inference budget.
For the human evaluation, is there recorded feedback or other narrative information from responses that might strengthen the claims that GAVEL is succeeding?
My core concern is the techniques used are not particularly novel in isolation and in combination, but the domain is too novel to know if the algorithm is succeeding from the experiments so far. I would be persuaded to increase my score if I saw stronger results that GAVEL is truly solving game generation, or is a superior approach to reasonable alternatives.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Section 3.1
- How many games were filtered by each step of the process? It looks like ~1/2 the games were filtered (from 1182 to 574+14).
- Section 4.2
- How is the concept vector defined for new games? Is it derived from the seed game concept vector (potentially mutated)?
- (minor) lines 225-227: "In cases like ours where fitness values can be negative, each fitness value is incremented by the minimum possible fitness (i.e. -2) before being summed to ensure that the QD score increases monotonically over time."
- Perhaps this is a confusion on my part: if the minimum is fixed and samples are only added to the archive when increasing in fitness, this (seems to) imply that the score can only increase. Offsetting by the minimum may shift the minimum value obtained to be positive, but should not change the process of monotonic improvement.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 1
Limitations: Yes. The paper describes the limitations of the current approach and provides potential extensions to mitigate these weaknesses. This includes generating unused code, limited diversity, construction of the archive, and automating human evaluation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful comments and critiques. We hope to address their primary concern through additional baseline experiments that were performed during the response period.
Specifically, we compare GAVEL against directly sampling from our fine-tuned language model and few-shot prompting with GPT-4o. For the direct sampling approach, we randomly select a game from the validation set and mutate it using the same selection and re-sampling process and repeat this process 4500 times (this is to mirror the 500 generations of GAVEL, since in each generation we randomly select 3 games and perform 3 mutations on each). We then evaluate each of the generated games for fitness and determine the cell they would occupy based on their concepts. This allows us to construct a MAP-Elites archive (i.e. by keeping the most fit game in each cell) for the baseline in order to directly compare against GAVEL. We perform a similar process for GPT-4o, except we provide each of the 14 validation games as part of the prompt and ask the model to create a modification of one randomly-selected validation game. Due to constraints on budget we obtain only 900 samples from GPT-4o and thus compare to GAVEL after 100 generations. In a camera-ready version of the paper we will extend the GPT-4o baseline to the full 4500 samples and experiment with including additional context in the prompt.
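For concreteness, a minimal Python sketch of the archive construction described above, with hypothetical `cell_of` and `fitness` stand-ins for the concept-based binning and the fitness evaluation (not the paper's actual implementations):

```python
def build_archive(samples, cell_of, fitness):
    """Keep only the fittest sample in each behavioral cell."""
    archive = {}
    for game in samples:
        cell = cell_of(game)
        score = fitness(game)
        if cell not in archive or score > archive[cell]:
            archive[cell] = score
    return archive

# Toy stand-ins: cell = parity of the sample, fitness = the value itself.
samples = [3, -1, 4, 1, 5, -9, 2, 6]
archive = build_archive(samples, cell_of=lambda g: g % 2, fitness=lambda g: g)
print(archive)  # {1: 5, 0: 6}
```

Keeping only the fittest sample per cell is what makes the pure-sampling baseline directly comparable against a MAP-Elites archive.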
The results of these baseline experiments are presented in the accompanying PDF, but the short summary is that both baseline methods fall far short of GAVEL in terms of QD score. Pure sampling appears to obtain dramatically fewer high-fitness samples both across the archive and in new cells. While GPT-4o produces a similar number of high-fitness samples in new cells it appears to produce dramatically fewer high-fitness samples overall. In both cases, it seems that GAVEL is doing a better job exploring the space of possible games. We will include these results in a camera-ready version of the paper.
In addition, we provide individual responses to the specific questions raised:
- Q1: How many games were filtered by each step of the process?
- A: From the 1182 games in the full dataset, we first filter down to 1033 games in the “board” category. Removing the Mancala-style games brings the dataset to 825 games, and filtering by tokenized length brings us to our final 574 training games and 14 validation games.
- Q2: How is the concept vector defined for new games?
- A: Ludii can automatically compute the concept vector for any game that successfully compiles: each feature is derived from the combination of “ludemes” (essentially, keywords) that are used in the game description.
- Q3: Doesn’t the QD score increase monotonically even without the offsetting of individual fitness values?
- A: In most MAP-Elites implementations (and in ours), a sample can get added to a previously-empty archive cell regardless of its fitness score. This means that, in our case, games in novel cells with negative fitness scores can cause the QD score to decrease. We will make this distinction more clear in the camera-ready version of the paper.
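To illustrate this point with a toy example (the fitness values here are hypothetical; only the minimum possible fitness of -2 comes from the discussion above): adding a negative-fitness game to a previously-empty cell lowers the raw QD score, while offsetting each fitness by the minimum keeps the score non-decreasing.

```python
MIN_FITNESS = -2  # minimum possible fitness in this toy example

def qd_score(archive, offset=0.0):
    """Sum of (offset-adjusted) fitness values across occupied cells."""
    return sum(f - offset for f in archive.values())

archive = {"cell_a": 1.5}                           # one occupied cell
before_raw = qd_score(archive)                      # 1.5
before_off = qd_score(archive, offset=MIN_FITNESS)  # 3.5

archive["cell_b"] = -1.0  # a novel cell is filled regardless of fitness
after_raw = qd_score(archive)                       # 0.5 -- raw score decreased
after_off = qd_score(archive, offset=MIN_FITNESS)   # 4.5 -- offset score increased

print(before_raw, after_raw, before_off, after_off)
```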
---
Rebuttal Comment 1.1:
Comment: Thank you for the direct answers and additional experiments!
I'm persuaded by these results that naive baselines would not produce as good results on the chosen metrics. I remain somewhat concerned that QD score is not necessarily a good metric for the objective of the work and that MAP-Elites is designed to optimize that score (thus should do better on this rating). That said, the results against random sampling and even a "simple" LLM (GPT-4o being SOTA at this time!) clearly show that both MAP-Elites (vs random sampling) and FITM (vs GPT-4o) are good choices for this problem.
Overall this raises my score.
> filtering by tokenized length brings us to our final 574 training games
Thank you for the details on the filtering! Hopefully improvements in LLM context lengths will yield improvements to the FITM training and baseline LLM being used. (This is not something I expect for this submission, but a comment about future developments of the work)
> Q2
Ah, I see. That remains a bit opaque, but I'm comfortable with that given it's a detail others would only need to know if reproducing without rerunning the code. It may help to detail that in the appendix for the sake of reproducibility.
> Q3
Thank you for the clarification; that makes sense. | Summary: The paper presents GAVEL (Games via Evolution and Language Models), a system designed for automated game generation. The authors utilize the Ludii game description language (L-GDL) to encode game rules and leverage a combination of evolutionary computation and large language models (LLMs) to generate new and novel games. The system is evaluated both quantitatively and qualitatively, demonstrating the capability to produce playable and interesting games distinct from those in the existing Ludii dataset.
Strengths: - The integration of evolutionary algorithms with large language models for game generation is a promising approach, expanding the capabilities beyond traditional rule-based or heuristic methods.
- The paper provides a thorough evaluation of the generated games, using both automated metrics and expert human analysis, which strengthens the validity of the results.
- The paper clearly describes the training process of the language model, the evolutionary search algorithm, and the evaluation metrics, allowing for reproducibility and further exploration by other researchers.
Weaknesses: - While innovative, the application of this research might be seen as niche within the broader NeurIPS community. The practical impact and scalability of such a system to other domains should be further tested in the future. Additionally, the impact on the broader ML community should be further highlighted.
- There exists some related work on LLMs for generating games and levels, including work combining LLMs with more open-ended search methods like novelty search, that seems to be missing, e.g.
- Sudhakaran, Shyam, et al. "Mariogpt: Open-ended text2level generation through large language models." Advances in Neural Information Processing Systems 36 (2024).
- Todd, Graham, et al. "Level generation through large language models." Proceedings of the 18th International Conference on the Foundations of Digital Games. 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: - What do the authors imagine could be the impact of the work on the larger ML community
- Which other application areas outside of games could the system be used for?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful comments and critiques and we hope to address their general concerns in our response. We also appreciate the pointers to additional related work and will include them in a camera-ready version of our paper.
While it is the case that the results we present are specific to games and the Ludii description language, we feel that our approach is general enough to apply to most domain-specific languages. DSLs are used in a variety of contexts and many of them are not well-represented in the training data of LLMs -- motivating a fine-tuning approach. Our fitness function is also general-purpose enough that it would work with minor modifications for other game-generation domains and the broad approach of a heuristic fitness function should in principle be applicable to an even wider range of domains. In this way, the broad GAVEL approach could be adapted for effective code synthesis in relatively data-sparse settings.
However, even if our method only works effectively for board game generation, this would in our opinion be both significant and general. Most of the world’s population play board games, at least occasionally, and the Ludii dataset contains more than a thousand games from all over the world. In the sense that our model is tasked with producing modifications to an existing game, it has demonstrated a broad and wide-ranging ability to do so. Further, the procedural generation of novel games presents an avenue for future research in open-ended learning.
Finally, we would also like to stress the significance of the empirical results of our paper. We are the first to generate not one, but several new and interesting board games. The only directly comparable work is Cameron Browne’s Ludi system from 2009, but that operated in a significantly smaller search space consisting only of connection and line-completion games.
---
Rebuttal Comment 1.1:
Comment: Thank you for the clarifications. Given the additional experiments and results, I'm happy to increase my score. I still think it would be good to extend the related work sections with other relevant work on LLMs and games. [As a side note, I would suggest not posting to arxiv during the review period since it makes the double blind review process more difficult] | Summary: The paper targets on generating interesting games automatically. To do this, it proposes an evolution-based algorithm, which iteratively mutates the game components using the generalizability of an LLM. The work is based on a previous large-scale game datasets Ludii.
Strengths: 1. It is an interesting task to me, to automate the process of game design using AI. The authors do a good job in presentation. Some of the details were genuinely enlightening.
2. The authors leverage LLMs to evolve the games iteratively. This is a nice idea, or at least a nice application of today's AI.
3. The work is solid. The experiments are based on roughly 1000 different games. While the games are limited to board games, it is still a great and solid piece of work.
Weaknesses: 1. **Lack of illustrative cases of generated games.** I am interested in how the generated games vary over the course of evolution (maybe a case at generation 10 and another at generation 100?). Second, the cases provided in the paper are in the Ludii language, which is hard for readers to parse immediately. It would be better to provide at least a short natural-language description.
2. **Evaluation is fair.** The evaluation is acceptable to me, though not novel enough. Most results are predictable and lack inspiration.
3. **It is still a hard task to evaluate how interesting a game is.** The authors propose a number of approaches to do so. Though these approaches are sound to me, most of them have been discussed in previous work. Recent approaches are still far from providing faithful and comprehensive evidence that a game is interesting.
4. **Some training and evaluation details are missing.** Please see below.
Despite some limitations, I still believe that this paper offers a solid and interesting work for specific communities.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Is the used CodeLLaMA model the base-version or instructed version? Or which one performs better in your fine-tuning process?
2. The authors mention that the LLM would perfectly reproduce the missing section of code in the process of mutation. The proposed solution is to mutate a set of unseen games. It is surprising to me that the authors do not try top-k decoding or noisy decoding, since these methods seem more intuitive and easier to me. Or are these methods not good in practice?
3. I am not sure whether this work or the proposed method is greatly reliant on a high-level descriptive game language, or not? If we put this work in broader scenes, e.g. role-play games, do the principles and algorithms in the paper also work?
4. It is not clear how to judge the correctness of the generated code, since the new games are represented in code (if I am wrong please correct me). Solely checking the compilation is not enough to determine if the code is right or not.
5. I hope the authors can discuss more recent work in AI generating games, e.g. Instruction-Driven Game Engines on Large Language Models (https://arxiv.org/abs/2404.00276). In this work, the authors automate the development process of poker games based on LLMs.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful comments. We acknowledge the reviewer’s points about the difficulty of evaluation and will include a more thorough discussion of these challenges in a camera-ready version.
We also provide individual responses to the specific questions raised:
- Q1: Is the used CodeLLaMA model the base-version or instructed version? Or which one performs better in your fine-tuning process?
- A: We used the base version since we were fine-tuning on a single task. However, in some of our early experiments, we also fine-tuned instruct models to translate board game instructions from natural language to the Ludii DSL. We eventually dropped that line of research due to the poor quality of the English descriptions accompanying Ludii games, but we didn’t notice any significant difference in performance between the fine-tuned instruct and non-instruct versions of CodeLLaMA.
- Q2: The authors mention that the LLM would perfectly reproduce the missing section of code in the process of mutation. The proposed solution is to mutate a set of unseen games. It is surprising to me that the authors do not try top-k decoding or noisy decoding, since these methods seem more intuitive and easier to me. Or are these methods not good in practice?
- A: We agree that diversity-boosting strategies like top-k or noisy decoding are intuitive for increasing the novelty of mutations. However, our early experiments showed that these decoding strategies were ineffective at generating mutations that were both novel and valid. Given our relatively small pool of initial seed games, we found that simply excluding them from the training data was more effective. We will clarify this point in the camera-ready revision of the paper.
- Q3: I am not sure whether this work or the proposed method is greatly reliant on a high-level descriptive game language, or not? If we put this work in broader scenes, e.g. role-play games, do the principles and algorithms in the paper also work?
- A: The approach we propose is applicable to most formal languages, including DSLs and general-purpose programming languages. The main constraint is that candidate games (or programs) need to be evaluated using a reward function. We achieve this by compiling the game descriptions, playing through them, and using heuristics to quantify the quality of the playthrough. Defining similar heuristics for games that feature a stronger human element than strategy games, such as role-playing games, may be more challenging.
- Q4: It is not clear how to judge the correctness of the generated code, since the new games are represented in code (if I am wrong please correct me). Solely checking the compilation is not enough to determine if the code is right or not.
- A: Indeed, compilability is only one of the metrics we consider when evaluating the quality or correctness of a game. Instead, we focus on evaluating the quality of playthroughs. We compile sample games into executable programs (they compile to Java bytecode), collect a series of playthroughs using self-play between MCTS agents, and evaluate each game’s playthroughs in aggregate using heuristics that quantify the game’s balance, decisiveness, completion, agency, and coverage. You can find more details about this process on pages 5 and 6.
- Q5: I hope the authors can discuss more recent work in AI generating games, e.g. Instruction-Driven Game Engines on Large Language Models (https://arxiv.org/abs/2404.00276). In this work, the authors automate the development process of poker games based on LLMs.
- A: We thank the reviewer for pointing us towards this paper. While its approach is quite different from ours - they use LLMs as a game engine to predict next states, while we use them to define rulesets in a DSL - the authors' strong results are exciting. We will consider including it as related work in the camera-ready version of the paper.
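As a rough sketch of the playthrough-based evaluation from A4 above -- the heuristic names follow the answer, but the playthrough format, the particular lambdas, and the equal-weight aggregation are assumptions for illustration, not the paper's exact formulas:

```python
def evaluate_game(playthroughs, heuristics):
    """Average each heuristic over self-play playthroughs, then combine."""
    scores = {name: sum(h(p) for p in playthroughs) / len(playthroughs)
              for name, h in heuristics.items()}
    return sum(scores.values()) / len(scores), scores

# Toy playthroughs: (winner, total_moves, moves_with_real_choice)
playthroughs = [("p1", 40, 30), ("p2", 35, 28), ("draw", 60, 10)]

heuristics = {
    # decisiveness: fraction of playthroughs not ending in a draw
    "decisiveness": lambda p: 0.0 if p[0] == "draw" else 1.0,
    # agency: fraction of moves where the player had more than one option
    "agency": lambda p: p[2] / p[1],
}

fitness, per_heuristic = evaluate_game(playthroughs, heuristics)
print(round(fitness, 3))  # 0.619
```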
---
Rebuttal Comment 1.1:
Comment: Thanks for further explanations. I will keep my rating and support acceptance. | Summary: The authors consider the problem of generating sets of diverse and interesting multi-player games. They instantiate this problem in the subspace of board-game like games using a recent domain-specific language called Ludii. They then use a quality-diversity algorithm (specifically, map-elites with a language-model-based mutation operator) as their generation algorithm. They name their overall approach GAVEL.
For the mutation operator, the authors fine-tune a 13b codellama model with a fill-in-middle objective to reconstruct structured portions of Ludii games in a train dataset of ~500 games. For the behavioral characteristic, the authors use a set of game features provided by the game engine on which they have performed PCA (fitted on the train set). For the fitness function, the authors use a set of 3 filters allowing to remove invalid, trivial, severely unbalanced games (among others) and then compute separate scores along several metrics of playablility derived from domain insight and expertise (e.g. amount of time a player has more than 1 option to play, proportion of games that don't end in a draw, etc). The map-elites archive is seeded with 14 games novel to the mutation model.
The authors report qualitative as well as quantitative evaluations of map-elites on their domain. They demonstrate that the method outperforms (by QD-score) a version where portions of code that have been successfully mutated beforehand are preferentially mutated (GAVEL-UCB). They show that GAVEL is able to create games in novel cells compared with the original Ludii set. They showcase some of their created games, selected from the set of promising games by human experts.
Strengths: * The paper tackles automatic game generation, which is an interesting area of research where modern machine learning methods have much to contribute;
* The paper is excellently written and very clear (to a reader with knowledge of the area at least). Presentation, pacing and discussion are well executed; it was a pleasure to read. The illustrations are nice;
* The methodology is very sound, all steps are explained in detail and would be simple to reproduce.
* The application of evolution through large model (ELM) like methods to game generation seem novel;
* The cherry-picked examples of games created by the method look engaging, demonstrating the interest of the method;
* Concept vectors are a representative behavioral characteristic (BC) in this domain;
* The fitness function, while domain-specific, is precise and captures fine-grained notions of enjoyableness of games;
* Several seeds of all experiments are given, allowing to quantify variability;
* The authors are very upfront about the limitations of their methods and the claims being made are well supported with evidence.
Weaknesses: * I worry that the contribution of the paper is limited. While using Map-Elites + LLMs on Ludii is novel, both the fitness function and the BC space used are very specific to games (of the kind presented in the paper), and the paper presents no algorithmic improvement. I would have welcomed methods that work across domains (for instance ask an LLM to self-refine a fitness function and BC space?)
* While generally, applying a method to a novel domain is interesting in its own right, I worry about the lack of comparative studies in the experiments of the paper. It is good to study map-elites on this new domain but what are the lessons learned? Where are comparisons to other methods, could other generation algorithms (like the ones cited in the related work, or sampling from the model repeatedly without QD, or few-shot prompting a large model with a larger context window) function just as well? Right now this paper reads more like a technical report than a scientific paper, since I do not feel I learned how this complex algorithm works on this domain by careful ablations and comparisons.
* There could be more information on the study of dynamics of evolution of games in the paper. Why did the authors stop at 500 evolution steps? Was the discovery process saturating? If yes, why? Could more OOD games have been discovered if the process would have been run for longer? If more samples would have been used to seed the archive?
Technical Quality: 2
Clarity: 4
Questions for Authors: ### Questions
* line 104 did you try not removing these games, does anything bad happen?
* would CVT-mapelites have worked in this case (to avoid training the PCA on train set games and collapsing variation that doesn't exist in the train set?)
* Do you have ideas on how to characterize game diversity in spaces other than the BC space? How can we validate that this space adequately captures what humans mean by diverse games?
* finetuning vs in-context learning: why have you fine-tuned a smaller model instead of asking a large, capable instruction model to perform the task (with examples?) Did you experiment with this? (cost is a valid reason to try using a smaller model, but I am curious on how larger models perform on the task).
* Open question on fitness: could the various playability measures be collapsed to a single metric such as learnability, or a computational measure of interestingness, do you have ideas on how this could work or whether it is feasible? (See for instance the interesting but empirically ungrounded work in https://arxiv.org/abs/0812.4360, somewhat related to a recent formulation of open-endeness: https://arxiv.org/abs/2406.04268). This is beyond the current scope of the paper, but I am curious to know if the authors have any thoughts as to how this relates to game domains.
* Why do you think the UCB variant works less well?
* The limitation section discusses biasing mutation, do you have any idea on how this could work in practice?
### Suggestions
* (minor) The middle figure of figure 2 could show function expansion;
* l121 should have a footnote specifying when the training was performed because these things change very fast. (eg codestral can do fill-in-the-middle too now, but was released after the neurips deadline);
### Related work
Here is some work related to the current approach:
* There is a recent trend for game generation with LLMs, mostly focused on generating geometrical information. The seminal work here is MarioGPT (https://arxiv.org/abs/2302.05981) but there have been several works in this vein since, like https://arxiv.org/abs/2403.12014, https://arxiv.org/abs/2302.05817 or https://arxiv.org/abs/2406.04663
* On LLM-augmented QD, rainbow-teaming is also an important paper https://arxiv.org/abs/2402.16822
Confidence: 5
Soundness: 2
Presentation: 4
Contribution: 2
Limitations: The authors acknowledge both a large set of limitations of their work as well as its broader impact. I would add that an important limitation of the paper is that the proposed implementation of LLM-augmented map-elites lacks generality: fitness, mutation operator and BC all are specific to the Ludii environment.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for taking the time to carefully appraise our work. We hope we can adequately address their comments here.
First, while it is the case that our results are specific to games and the Ludii description language, we feel that our approach is general enough to apply to most domain-specific languages. DSLs are used in a variety of contexts and many of them are not well-represented in the training data of LLMs -- motivating a fine-tuning approach. Our fitness function is also general-purpose enough that it would work with minor modifications for other game-generation domains and the broad approach of a heuristic fitness function should in principle be applicable to an even wider range of domains.
Next, we take the reviewer’s point about the lack of comparative studies and attempt to address it through additional baseline experiments. Specifically, we compare GAVEL against directly sampling from our fine-tuned language model and few-shot prompting with GPT-4o. For the direct sampling approach, we randomly select a game from the validation set and mutate it using the same selection and re-sampling process and repeat this process 4500 times (this is to mirror the 500 generations of GAVEL, since in each generation we randomly select 3 games and perform 3 mutations on each). We then evaluate each of the generated games for fitness and determine the cell they would occupy based on their concepts. This allows us to construct a MAP-Elites archive (i.e. keeping the most fit game in each cell) in order to directly compare against GAVEL. We perform a similar process for GPT-4o, except we provide each of the 14 validation games as part of the prompt and ask the model to create a modification of one randomly-selected validation game. Due to constraints on budget we obtain only 900 samples from GPT-4o and thus compare to GAVEL after 100 generations.
The results of these baseline experiments are presented in the accompanying PDF, but the short summary is that both baseline methods fall far short of GAVEL in terms of QD score. Pure sampling appears to obtain dramatically fewer high-fitness samples both across the archive and in new cells. While GPT-4o produces a similar number of high-fitness samples in new cells it appears to produce dramatically fewer overall. In both cases, it seems that GAVEL is doing a better job exploring the space of possible games. We will include these results in a camera-ready version of the paper.
Finally, we provide individual responses to the specific questions raised by the reviewer:
- Q1: Did you try not removing the Mancala-style games?
- A: In initial experiments, our fitness evaluation had difficulty getting reasonable approximations for these Mancala-style games in the limited computational budget we provided. In addition, these games represent a fairly distinct cluster in terms of the concepts and ludemes used, and we omitted them in order to increase the relative “generality” of the training dataset, though future work could include them.
- Q2: Would CVT-MAP-Elites have worked in this case?
- A: This is a very reasonable idea, and we did explore CVT-MAP-Elites in an initial experiment as well. Unfortunately running the CVT algorithm over the full concept vector space produced either a very sparse archive (when the number of cells was high) or a relatively collapsed archive (when the number of cells was small), which we attribute to the fact that the tessellation occurs uniformly over the space while the games are not distributed uniformly.
- Q3: Do you have ideas on how to characterize game diversity in other spaces than the BC space?
- A: Capturing human notions of diversity in gameplay remains an open research question. Prior work does indicate that Ludii concepts capture certain high-level judgements of game genre (see citation [54] in our paper), but outside of this the gold standard would likely remain a human user study. We are in the process of organizing such a user study for future work / a camera-ready version of this paper.
- Q4: Why have you fine-tuned a smaller model instead of asking a large, capable instruction model to perform the task?
- A: See baseline experiment above
- Q5: Could the various playability measures be collapsed to a single metric such as learnability, or a computational measure of interestingness?
- A: While this idea is reasonable, we think that the main challenge would be in tractability. Using a measure of learnability as a fitness function would effectively add an inner loop to what is already the main bottleneck of the evolutionary algorithm. Perhaps such an approach would work in domains where generated game code could be compiled to something that runs on the GPU, but it would likely be infeasible for Ludii.
- Q6: Why do you think the UCB variant works less well?
- A: One hypothesis is that it is very difficult to make any kind of improvement in the fitness landscape GAVEL operates over. The UCB MAP-Elites selector relies on the intuition that cells which produced successful mutations in the past are more likely to do so again. However, if improvement in any given cell is basically equally (un)likely, then the UCB algorithm would effectively waste time re-selecting previously successful cells instead of a more efficient uniform sampling.
- Q7: The limitation section discusses biasing mutation, do you have any idea on how this could work in practice?
- A: In the context of Ludii, the most likely way to accomplish this would be with a heuristic-based method. It should in principle be possible to write a set of rules that link certain kinds of errors (e.g. unused components in the “equipment” section) with certain program regions (e.g. player actions in the “rules” section). Another possibility is an instruction-based mutation operator to which additional context (e.g. “this game has unused components”) could be provided.
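The intuition behind the UCB MAP-Elites selector discussed in A6 above can be sketched as standard UCB1 over archive cells (the cell counts below are hypothetical, and this is an illustrative sketch rather than the paper's implementation):

```python
import math

def ucb1_select(cells, c=math.sqrt(2)):
    """Pick the cell maximizing mean mutation success rate plus an exploration bonus."""
    total = sum(n for n, _ in cells.values())
    def ucb(cell):
        n, successes = cells[cell]
        if n == 0:
            return float("inf")  # always try unvisited cells first
        return successes / n + c * math.sqrt(math.log(total) / n)
    return max(cells, key=ucb)

# (times selected, successful mutations) per cell
cells = {"a": (10, 4), "b": (2, 1), "c": (0, 0)}
print(ucb1_select(cells))  # prints "c" -- unvisited cells are explored first
```

If improvement is roughly equally likely in every cell, the exploitation term carries no signal, which is consistent with the hypothesis that UCB wastes selections relative to uniform sampling.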
---
Rebuttal Comment 1.1:
Comment: I thank you for your response, it answers all of my questions and clarifies some points. I am very happy that you performed additional experiments with gpt4 and repeated sampling from the model and I think this strengthens the paper.
I still feel that my point on the generality of the method stands (as well as the fact that LLM-augmented map-elites is not new), so I will not be raising my grade. | Rebuttal 1:
Rebuttal: We would like to thank each of the reviewers for taking the time to consider our work. In the attached PDF we include the results of additional baseline experiments performed during the response period that compare GAVEL to a pure sampling approach and few-shot prompting with GPT-4o. The details and take-aways of these experiments are included in the table captions.
Pdf: /pdf/71f996868c33ffb5cc8dc21f9841f63a312d28ad.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Exact Random Graph Matching with Multiple Graphs | Reject | Summary: This paper considers the graph matching problem, where the goal is to produce a mapping between vertices of multiple graphs which maximizes similarities among them. The authors study graph matching from a theoretical perspective, in which one observes multiple (appropriately correlated) Erdös-Rényi (ER) graphs that have ground-truth latent mappings between them. The authors' goal is to characterize the information-theoretic threshold for exactly recovering the latent mappings between all of the observed ER graphs. Prior work has settled the information-theoretic thresholds for 2 correlated ER graphs, and this paper settles it for more than 2 ER graphs.
To determine the information-theoretic threshold for exact graph matching, the authors establish matching achievability and converse results. The converse is based on a simple reduction to a graph matching problem with two ER graphs, combined with known results on impossibility results for exactly matching two ER graphs. For the achievability results, two algorithms are discussed. The first is the MLE, which is optimal for exact graph matching. The authors show that it has a clean, easy-to-understand form: the MLE outputs vertex mappings which maximize the number of edges in the corresponding union graph. However, the authors do not directly analyze this algorithm due to technical complexities. Instead, they propose an algorithm which involves two phases. (1) For each pair of graphs, a partial, fully-correct mapping is computed via the $k$-core estimator, and (2) unmatched vertices are matched through a "transitive closure" procedure. This algorithm provably outputs the full, correct set of vertex mappings in the parameter region that complements the converse.
Finally, a few numerical experiments are presented, showing that the transitive closure procedure can be combined with known computationally efficient algorithms for pairwise graph matching to derive algorithms for matching multiple graphs in a principled manner.
Strengths: Almost all the existing theoretical work on graph matching concerns two graphs, except for some trivial results (to the best of my knowledge). The extension of the theoretical framework to multiple graphs is a natural and important follow-up, and may inspire several future works as well.
While the algorithms and analysis are largely adapted from prior work (e.g., the $k$-core estimator), a key novelty is the transitive closure step, which provides a principled (and optimal!) bridge between pairwise graph matching and $m$-ary graph matching. As the authors highlight, this step can be used to extend practical algorithms for pairwise matching to the $m$-ary case in a black-box manner. I imagine that this technique could be useful in practice.
Additionally, the paper is well-written.
Weaknesses: To me, the main weakness of the paper is in the discussion of transitive closure's implications. The authors make a striking observation that one can use their transitive closure technique (at least heuristically) to generalize pairwise graph matching to $m$-ary graph matching. However, several details are lacking in the simulations section. For instance, what are the graph parameters ($n$ and $p$)? What is the error rate before and after the transitive closure boosting? How do the results shown compare to the accuracy of Algorithm 2? (Even though the $k$-core matching is not efficient to compute, the result of the matching procedure is a function of the ground-truth permutations, so I believe the algorithm's accuracy can be simulated efficiently).
There are a couple other minor weaknesses. One is that Algorithm 2 is computationally inefficient. However, making such an algorithm efficient is likely a challenging research question itself, and is appropriate for future work. Another weakness is that there is no nice figure to visualize Algorithm 2. I feel that the reader's understanding could be greatly improved if one could create a representative figure for the transitive closure boosting.
Technical Quality: 4
Clarity: 3
Questions for Authors: Here I've listed a number of minor questions / comments in addition to the weaknesses above.
- (p. 1, line 21) I'm not sure if it's clear at this point what "latent correspondence" means. Perhaps motivate this notion through social networks first, where the latent correspondence is derived from the same users across the two networks.
- (p. 1, line 30) I'd suggest adding a sentence or two at the end of the introduction summarizing your contributions. But this is more of a stylistic comment.
- (p. 2, line 39) I'm not sure if it's clear at this point what "matching all vertices correctly" means. I think you mean to say something along the lines of *exactly* learning the *entire* latent correspondence. (I'm just worried that using *matching* and *correspondence* without clarification of equivalence could confuse readers outside of this domain.)
- (p. 2, line 50) I would mention that the literature on establishing information-theoretic limits (including the $k$-core estimator) largely uses computationally inefficient algorithms.
- (p. 3, line 93) I would emphasize that permuting node labels is equivalent to removing node-level information. Hence any algorithm must use topological features, rather than node-level features, to do matching in this setting.
- (p. 3, line 106) Emphasize that matchings need not be complete, for clarity.
- (p. 4, line 116) I think you mean that $Cs (1 - (1 - s)^{m - 1}) \ge 1$ is a necessary condition. Similarly, in the next line you should say something like "this condition is also sufficient (except for the equality case) to exactly match $m$ graphs with probability going to 1."
- (p. 4, line 139) Can you say something about *why* the analysis of the MLE is cumbersome?
- (p. 5, Algorithm 1) Is the algorithm environment here really necessary? Maybe for brevity (and for minimal confusion) you can simply write, in the main text,
$$
(\widehat{\pi}_{12}^{ML}, \ldots, \widehat{\pi}_{1m}^{ML}) \in \arg\max |E(G_1 \vee G_2^{\pi_{12}} \vee \cdots \vee G_m^{\pi_{1m}})|
$$
- (p. 5, line 146) The sentence "If a node $v$ is unmatched..." could really use an illustrative figure, as this idea is really the essence of transitive closure boosting. Also, I would add for clarity that the sequence of graphs must consist of unique elements, and need not use all the graphs.
- (p. 7, Definition 7) In light of Lemma 8, it seems that $\mathcal{H}(v)$ can be derived from $\widetilde{\mathcal{H}}(v)$ by just thresholding the edge weights. Is there a strong reason why the analysis is done on the (seemingly more complex) weighted graph?
- (p. 7, Theorem 9) What's the intuition behind the threshold $\frac{m^2}{4} \left( k + \frac{1}{Cs^2} \right)$?
- (p. 9) Perhaps put the simulations and discussion of transitive closure into its own section.
- (p. 9, bullet point on Beyond ER graphs) The theoretical literature on the $k$-core estimator allows for quite general classes of random graphs. What specific aspects of your analysis would be challenging to extend beyond ER?
- (general comment) For two graphs, the information-theoretic threshold has a nice qualitative interpretation: it is the connectivity threshold of the common (intersection) graph. Is there a similar qualitative interpretation for the $m$-ary graph matching threshold?
- (general comment) It would be useful to note that when $s$ is small, the information-theoretic threshold is reduced by a multiplicative factor of $m -1$, compared to the $m = 2$ case. This follows since $1 - (1 - s)^{m - 1} \approx (m - 1) s$ for small $s$.
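To make that last point concrete, here is a tiny numerical check of the first-order approximation (our illustration; the particular values of $s$ and $m$ are arbitrary):

```python
# Check that 1 - (1 - s)**(m - 1) ≈ (m - 1) * s when s is small:
# expanding (1 - s)**(m - 1) to first order in s gives 1 - (m - 1) * s.
def exact(s, m):
    return 1 - (1 - s) ** (m - 1)

def approx(s, m):
    return (m - 1) * s

for m in (3, 5, 10):
    for s in (0.01, 0.001):
        rel_err = abs(exact(s, m) - approx(s, m)) / exact(s, m)
        print(f"m={m}, s={s}: exact={exact(s, m):.6g}, "
              f"approx={approx(s, m):.6g}, rel_err={rel_err:.2%}")
```

The relative error shrinks with $s$, confirming the multiplicative factor of $m-1$ in the small-$s$ regime.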
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: Limitations have been largely discussed. The authors could expand upon implications of graph matching to protecting / breaking privacy in anonymized social networks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Our original submission uses $(n,p) = (10^3,0.1)$ in the simulation, and we have now included details such as error rates before and after boosting through transitive closure in the PDF file accompanying our global response. These results are for the simulated output of the $k$-core estimator, which can be efficiently obtained as you pointed out. Please see the global response for some remarks on this.
Thank you also for the many suggestions on improving the presentation of the paper. We will make sure to incorporate these if the paper is accepted. Below, we would like to address your questions.
**On analyzing the MLE:** For any set of permutations $(\pi_{12},\cdots,\pi_{1m})$, the random variable $X(\pi_{12},\cdots,\pi_{1m})$ of interest is the number of edges in the corresponding union graph $G_1 \vee G_2^{\pi_{12}} \vee \cdots \vee G_m^{\pi_{1m}}$. When $m=2$, it is possible to obtain MGF bounds for $X(\pi)$ based on the orbit decomposition of $\pi$ and the number of correctly matched nodes in $\pi$, but we are unable to obtain an analogous result for general $m$. The difficulty is in getting a handle on how different orbits in the decompositions of $\pi_{12},\cdots,\pi_{1m}$ interact with each other to determine the distribution of $X$.
**On using the weighted graph $\widetilde{\mathcal{H}}(v)$**: We chose the weighted graph representation because our analysis uses the fact that for any vertex cut in $\widetilde{\mathcal{H}}(v)$: if the *sum* of the weights of all edges crossing the cut is larger than a threshold $\tau$, then the graph $\mathcal{H}(v)$ must have at least one crossing edge for the same cut. This allows us to threshold the sum of the weights over all edges, rather than the individual weights of each edge, and the former statistic is easier to compare across different cuts.
**Intuition behind the threshold $\frac{m^2}{4}\left( k + \frac{1}{Cs^2}\right)$:** Our choice of this threshold is an artifact of the analysis. If the total crossing cost in $\widetilde{\mathcal{H}}(v)$ for a $1$-cut is larger than $\frac{m^2}{4}\left(k+\frac{1}{Cs^2}\right)$, then the total crossing cost for any other cut (which has stochastically larger crossing cost) is also larger than $\frac{m^2}{4}\left(k+\frac{1}{Cs^2}\right)$. The extreme case is an $m/2$-cut, for which there are $m^2/4$ possible edges that cross the cut: our threshold choice ensures that even in this extreme case there is a crossing edge $(i,j)$ in $\widetilde{\mathcal{H}}(v)$ whose cost is larger than $\left(k+\frac{1}{Cs^2}\right)$, so the corresponding crossing edge must exist in $\mathcal{H}(v)$. This argument rules out all possible cuts and allows us to conclude that $\mathcal{H}(v)$ must be connected under our necessary condition. Note that any threshold larger than our choice would also work for this argument.
**On extending the framework beyond ER graphs**: Indeed, the $k$-core estimator works well for pairwise graph matching in general settings such as inhomogeneous random graphs [RS23]. The bottleneck is to prove an analogous result to Lemma 8, i.e. sufficient condition for whether a node is included in the $k$-core of a graph based on its degree. Our proof of Lemma 8 crucially exploits the fact that: given the size of the $k$-core of an ER graph $G$, the set of nodes that form the $k$-core are all equally likely. This symmetry is lost in non-ER models. Even so, the analysis may be tractable for inhomogeneous graphs - we leave this for future work.
**On interpretation via connectivity threshold:** By the proof of the impossibility result (Theorem 2), the threshold $Cs(1-(1-s)^{m-1})$ can be viewed as the connectivity threshold of the intersection graph between $G_m$ and the union of the other $m-1$ graphs.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed response and further discussion of transitive closure! I will maintain my positive score. | Summary: This paper studies the information-theoretic limits for matching multiple correlated random graphs. Based on a correlated Erdos-Renyi random graph model, the authors provide both a lower bound and an achievable bound for the condition to correctly match all nodes with high probability. These bounds match each other. A highly interesting insight is that, even when exactly matching two graphs is not possible, the proposed algorithm can leverage more than two graphs to produce an exact matching among all the graphs. The achievable algorithm exploits the transitivity among partial matchings through $k$-cores, which is also quite interesting.
Strengths: 1. The novelty of the paper is high in dealing with graph matching among multiple correlated graphs.
2. The necessary and sufficient conditions for exact matching meet each other.
3. The proposed algorithm can exploit transitivity to match all graphs, even when any two graphs alone cannot be exactly matched. This is a very insightful result.
Weaknesses: 1. The proposed algorithms do incur high complexity.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In Algorithm 2, first the $k$-core for each pair of graphs is found. However, there is no discussion of how to choose the parameter $k$.
2. Does Proposition 5 depend on the sub-sampling probability $s$?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Limitations are discussed in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. We address the two questions below:
1. The theoretical guarantees for the $k$-core estimator hold for any constant $k \geq 13$ (this is an artifact of the analysis). In the PDF file (global response) with simulation results, we use $k \in \lbrace 13,14 \rbrace$ because smaller values of $k$ correspond to larger $k$-cores of the graph.
2. The statement of Proposition 5 is true for any constant $s \in (0,1]$.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response! I wish to maintain my current score. | Summary: This theoretical paper gives tight conditions for exact graph matching with multiple correlated random graphs. This problem has been extensively studied recently for the case of 2 graphs, and it is shown here that with more than 2 graphs, there is a regime where pairwise alignment is not possible, but with the information provided by all graphs, it is possible to align all of them. This is a nice theoretical result.
Strengths: This paper studies a natural extension of a well-studied problem from 2 graphs to more graphs and shows a surprising effect: making partial pairwise matching is sufficient to get the exact recovery. The proof outlines give the main insights into the technical proof.
Weaknesses: The resulting algorithm is not practical as it does not run in polynomial time (as it is mentioned by the authors).
Technical Quality: 4
Clarity: 4
Questions for Authors: This paper is mostly theoretical and would probably benefit from being published in a more math-oriented journal. The impact on the NeurIPS community and the feedback here will probably be limited.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The authors are very clear with the limitations of their work in section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review. Please see our response to reviewer Yg8a for a note on contextualizing our work with respect to NeurIPS, and the PDF in our global response for experimental evaluation of our algorithm in ER and non-ER models.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal. I still think your paper is a very solid contribution and would like to see it published in a more theoretical venue (which is not incompatible with a presentation at NeurIPS). | Summary: The paper aims to find alignments between $G_1$ and $G_2, \ldots, G_m$, under the assumption that they are all essentially sampled from an ER graph distribution. The paper presents one impossibility result (i.e., a necessary condition to estimate such an alignment) and two sufficiency results for solving the underlying problem.
Strengths: The paper tackles an interesting problem and it is written clearly.
Weaknesses: (1) I am not too confident that the paper is appropriate for the NeurIPS audience. I think the paper is better suited to a conference like ISIT. The paper has barely any learning component, and the practical utility is not very clear.
Also, the primary area assigned by the authors "Probabilistic methods (for example: variational inference, Gaussian processes)" is probably not correct.
(2) The paper only tackles a very simple graph model (the ER graph model). While I understand that theoretical analysis for complex graph models is difficult, I would recommend the authors discuss that in a comprehensive manner. To elaborate concretely: suppose $G_1, G'_2, \ldots, G'_m$ are *not* generated from an ER model, but $G_2, \ldots, G_m$ are generated using an ER-like model with constant edge deletion probability $s$. In such a case, can one characterize the necessary and sufficient conditions?
Note that the area is not too new in the literature. There has been work already in this line of research [CK17, WXY22 in the paper]. While I will not call this work a mere extension, the theoretical contribution, given the existing works, is not very interesting (going from $m=2$ to an arbitrary $m$, for example).
(3) There is no experimental analysis. I would have increased my rating if the authors had done a thorough study on the implications (including limitations) of their work on graphs from other models. For example, if we apply the same algorithm to other graph models, how would it perform? Since this line of work is not new, I would not say the theoretical results have strong enough impact to overlook the poor experiments.
Technical Quality: 2
Clarity: 2
Questions for Authors: See above.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Restrictive graph model; poor experiments and incremental contribution.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review.
**On relevance to NeurIPS:** NeurIPS and other learning conferences have been the venue of choice for other works in graph matching that establish information-theoretic recovery limits in various settings, such as [1]-[4] below. Since graph matching is an important data processing step for various downstream machine learning tasks (for example in vision and NLP), and since our transitive closure step provides an efficient black-box approach to extend $2$-ary matchings to $m$-ary matchings, we feel that NeurIPS is a relevant venue for our work.
**On theory and experimentation:** Our principal objective is a theoretical analysis for ER graphs to illustrate a key insight: pairwise matching followed by boosting is an optimal bridge between $2$-ary and $m$-ary graph matching. We leave the analysis of more general models such as stochastic block model and inhomogeneous random graphs to future work (please see our response to Reviewer tQgp for a comment on bottlenecks to this extension). Even so, we agree that experimental verification of this insight would bolster our work. In the attached PDF (global response), simulation results for ER graphs with high and low correlation, as well as a non-ER model (stochastic block model with 5 communities) is presented. We feel that these results are promising for the use of transitive closure as a subroutine to combine information from multiple graphs in practice. If the paper is accepted, we will use the extra page allowed in the camera-ready version to contextualize these simulation results.
**References**
[1] Racz, M., & Sridhar, A. (2021). Correlated stochastic block models: Exact graph matching with applications to recovering communities. In *Neural Information Processing Systems (Spotlight Paper)*, 34, 22259-22273.
[2] Wang, Z., Wang, W., & Wang, L. (2024). Efficient Algorithms for Attributed Graph Alignment with Vanishing Edge Correlation. In *Conference on Learning Theory* (pp. 4889-4890). PMLR.
[3] Ameen, T. & Hajek, B. (2024). Robust Graph Matching when Nodes are Corrupt. In *International Conference on Machine Learning*, (235: pp 1276-1305). PMLR.
[4] Gaudio, J., Racz, M. Z., & Sridhar, A. (2022). Exact community recovery in correlated stochastic block models. In *Conference on Learning Theory* (pp. 2183-2241). PMLR.
---
Rebuttal Comment 1.1:
Title: Response
Comment: I have read the rebuttal, but I still believe it may not attract the audience of NeurIPS. The results are OK, but they are still on synthetic datasets. I am not convinced of its real-world applicability yet. Though I have increased the score by +1, I am not too positive.
Rebuttal: We would like to address experimental evaluation of our proposed algorithm in the global response, since this was raised by multiple reviewers.
As Reviewer tQgp pointed out: the $k$-core estimator is not efficient, but its output can be efficiently simulated by computing the $k$-core of the true intersection graph. Theoretical guarantees from [CKMP19, RS23, GRS22] establish that the output of the $k$-core estimator coincides with the $k$-core of the true intersection graph with probability $1-o(1)$ as $n\to\infty$, when $k \geq 13$ and $p = \Theta(\log(n)/n)$. In our simulations, we set $n = 10^3$ and $k \in \lbrace 13,14 \rbrace$ for different values of $p$ and show that the transitive closure step can significantly improve the simulated output of the $k$-core estimator for both ER and non-ER models.
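As a concrete (and purely illustrative) sketch of this simulation shortcut, assuming the subsampling model where each parent edge survives in each child independently with probability $s$: the true intersection graph is itself ER with edge probability $ps^2$, and its $k$-core can be computed directly by iterative peeling. The function names below are our own, not from the paper's codebase.

```python
import random

def k_core_nodes(adj, k):
    """Return the vertex set of the k-core: iteratively peel vertices of
    degree < k until none remain. `adj` maps each vertex to its neighbor set."""
    adj = {v: set(ns) for v, ns in adj.items()}  # work on a copy
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if v in adj and len(adj[v]) < k:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True
    return set(adj)

def simulated_kcore_match(n, p, s, k, seed=0):
    """Simulate the matched set of the k-core estimator for two graphs
    subsampled from a parent ER graph G(n, p). Each parent edge survives in
    each child independently with probability s, so the true intersection
    graph is ER with edge probability p * s * s; per the guarantees cited
    above, the estimator's output coincides whp with its k-core."""
    rng = random.Random(seed)
    adj = {v: set() for v in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p * s * s:  # parent edge kept in both children
                adj[i].add(j)
                adj[j].add(i)
    return k_core_nodes(adj, k)
```

This avoids the exponential search over permutations entirely, since the ground-truth intersection graph is available to the simulator.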
In the attached PDF, the $k$-core estimator is simulated to obtain a partial matching between any two graphs $G_i$ and $G_j$. The fraction of correctly matched nodes for each pairwise matching is tabulated for the example case of $m=6$. Next, each pairwise matching is boosted using transitive closure, and the fraction of correctly matched nodes after boosting is tabulated alongside. By averaging these fractions over the $\binom{m}{2}$ pairwise matchings, we obtain the mean fraction of correctly matched nodes before and after transitive closure. This is plotted against $m$ in the accompanying figures.
The simulations consider three scenarios:
1. Erdos-Renyi graphs with low correlation, $s=0.25$: Each graph provides a lot of new information in this setting, and so the fraction of correctly matched nodes among any pair of graphs is significantly boosted with transitive closure.
2. Erdos-Renyi graphs with high correlation, $s=0.8$. Since the graphs are all similar to each other, the boosting is less prominent compared to the low correlation scenario.
3. Non Erdos-Renyi graphs (stochastic block model, SBM). To demonstrate the versatility of the transitive closure step, we simulate correlated SBMs. The correlated SBMs are obtained from the subsampling model where the parent graph is an SBM with 5 equal-sized clusters. Within each cluster, edges are independently present with probability $p$, whereas they are independently present with probability $q$ between clusters. Here too, there is a significant boost in the mean fraction of correctly matched nodes (see Table 3 and Figure 3). This suggests that transitive closure can be a useful bridge between $2$-ary and $m$-ary graph matching for more general models in practice.
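To illustrate the transitive closure step itself, here is a minimal Python sketch (our own simplified illustration, not the authors' implementation): each partial pairwise matching is stored as a dict, and an unmatched node of $G_1$ is matched to a node of $G_m$ by chaining through intermediate graphs, with each graph used at most once.

```python
from collections import deque

def boost(partial, m, v):
    """Match node v of graph 1 to a node of graph m by chaining partial
    pairwise matchings. partial[(i, j)] is a dict sending already-matched
    nodes of graph i to their partners in graph j. Each graph is visited
    at most once (the chain must consist of distinct graphs). Returns the
    matched node in graph m, or None if no chain exists."""
    seen = {1}
    queue = deque([(1, v)])  # BFS over (graph index, node) states
    while queue:
        i, u = queue.popleft()
        for j in range(1, m + 1):
            if j in seen or (i, j) not in partial:
                continue
            w = partial[(i, j)].get(u)
            if w is None:
                continue
            if j == m:
                return w
            seen.add(j)
            queue.append((j, w))
    return None
```

For instance, if $\pi_{12}$ matches $v \mapsto a$ and $\pi_{23}$ matches $a \mapsto b$ while $\pi_{13}$ leaves $v$ unmatched, the chain through $G_2$ recovers the match $v \mapsto b$.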
Pdf: /pdf/c09f5ef08f1c650f672f4187b76f147e3c6867d0.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Identifying and Solving Conditional Image Leakage in Image-to-Video Diffusion Model | Accept (poster) | Summary: This paper argues that I2V-DMs tend to overly rely on the conditional image at large time steps, resulting in videos with less dynamic motion.
To address this, they propose using the KL divergence between the initial noise distribution and the actual marginal distribution to enhance the amplitude of motion while preserving video quality.
Besides, the paper proposes a time-dependent noise distribution for the conditional image during the training process.
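For context (our own reference sketch, not the paper's derivation), the KL divergence mentioned above has a closed form when both distributions are Gaussian, which is what makes an analytic choice of the initial noise parameters tractable:

```python
import math

def kl_gaussians(mu_q, var_q, mu_p, var_p):
    """KL( N(mu_q, var_q) || N(mu_p, var_p) ) for scalar Gaussians, using the
    standard closed form:
    0.5 * (var_q / var_p + (mu_p - mu_q)**2 / var_p - 1 + log(var_p / var_q))."""
    return 0.5 * (var_q / var_p + (mu_p - mu_q) ** 2 / var_p
                  - 1 + math.log(var_p / var_q))

print(kl_gaussians(0.0, 1.0, 0.0, 1.0))  # 0.0: identical distributions
print(kl_gaussians(1.0, 1.0, 0.0, 1.0))  # 0.5: unit variance, mean shifted by 1
```

Minimizing such an expression over the initial noise mean and variance is what allows the start of the reverse process to be matched to the actual marginal in closed form.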
Strengths: Motivation. The challenge of conditional image leakage presented in the paper is a credible issue and represents a significant academic problem.
The paper is exceptionally well-organized, making complex concepts easily digestible for the reader.
Weaknesses: 1. Claim. Regarding the conditional image and the diversity of content changes (referred to as 'motion' in this paper), it is, in fact, a matter of trade-off. The essence of image to video generation is to leverage the conditional image to enhance video quality. Therefore, the Proposition 1 and Training Strategy proposed by the authors are essentially fine-tuning techniques that favor the diversity of motion.
2. Additionally, the foundation model of video generation guided only by the first frame inherently contains uncertainty in its output; it does not know whether the user wants to generate a video with significant or minimal motion. Some recent studies (last-frame guidance [1], motion control [2]) have demonstrated that incorporating additional guidance signals can indeed achieve large and precise motion.
Particularly, the works DragNUWA and MotionCtrl have shown that for the image-to-video generation task, it is still possible to generate significant motion. Therefore, the SVD foundation model has the potential to directly generate substantial motion. In this context, the fact that the initial version of SVD could only generate minor motion might be due to the lack of a specific condition or guidance signal for significant motion, rather than an inherent incapability. From a subjective standpoint, not all applications require substantial motion.
[1] ToonCrafter: Generative Cartoon Interpolation
[2] MotionCtrl: A Unified and Flexible Motion Controller for Video Generation
[3] DragAnything: Motion Control for Anything using Entity Representation
[4] DragNUWA: Fine-grained Control in Video Generation by Integrating Text, Image, and Trajectory
3. Experiments. The experiments appear to be somewhat confusing and not entirely convincing.
Regarding the variables of the related training strategy displayed in Figure 8, the reviewer has not found the corresponding ablation study, which is a core contribution of this paper. Additionally, the author has not provided an ablation study on the 'M' of Proposition 1, such as testing performance changes between 0.6T and 1T, which is central to the first contribution. If the reviewer has overlooked it, please point it out.
Additionally, Figure 9 also appears to be somewhat confusing. Was a user study not conducted on the Analytic-Init? What does '-tune' signify—does it mean finetuning (training strategy)? If so, why does it appear in the inference phase?
--------------------
After reading the author’s and other reviewers' feedback, I have adjusted my score to 5.
Technical Quality: 3
Clarity: 2
Questions for Authors: Can the authors provide more insights into the selection of hyperparameters (Line 230 Implementation Details) for the time-dependent noise distribution?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: yes, the authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer f9AW
We thank reviewer f9AW for the valuable and constructive comments. We address the concerns as follows.
## Q1: Claim. The conditional image and the diversity of content changes are, in fact, a matter of trade-off. The essence of image-to-video generation is to leverage the conditional image to enhance video quality. Proposition 1 and the Training Strategy proposed by the authors are essentially fine-tuning techniques that favor the diversity of motion.
We appreciate the reviewer for the comment but disagree with our highest respect. While the conditional image for video quality and motion diversity can be a trade-off, **we emphasize that our results simultaneously achieve dynamic motion and high video quality, without sacrificing either aspect**. As demonstrated in Table 1, we achieve superior FVD and IS metrics compared to all baselines, indicating higher video quality. The user study in Figure 9 further confirms that our method outperforms all baselines in overall aspects, including video quality and dynamic motion. The results in **Rebuttal Table A** show that our method achieves a lower motion score error than all baselines, underscoring that our method helps to control motion degrees more precisely.
## Q2. Recent studies show incorporating additional guidance signals can achieve significant motion and control the motion degree. From a subjective standpoint, not all applications require substantial motion.
We clarify that our method is orthogonal to approaches that introduce additional signals, and that it enables them to control motion degrees more precisely. We validate this by consistently achieving a lower error between the motion score of the generated video and the given desired motion score, regardless of the input motion score in SVD (see **Rebuttal Table A**). More details are provided in Common Concern 1. DragAnything, DragNUWA, MotionCtrl, and ToonCrafter are novel controllable methods that achieve great performance on controllable video generation. We will include a discussion of these methods in the Related Work section in the final version.
## Q3: Ablation studies.
We did a systematic analysis of each hyperparameter in our proposed strategy in the original paper.
For the start time $M$ in Analytic-Init, the qualitative and quantitative results are reported in Figure 4, Table 7 and Table 8. As discussed in lines 144-146 and lines 159-162 of the original paper, an appropriate $M$ can enhance performance by increasing motion without compromising other performance. A too-small $M$ delivers poor visual quality due to the training-inference gap. Analytic-Init helps to mitigate it, especially with a smaller $M$.
For the two hyperparameters in the training strategy, the qualitative results and corresponding analysis are shown in Figure 5 and described in lines 183-195 in the original paper. Specifically, higher noise levels, such as those produced by a concave function ($a$ < 1), enhance dynamic motion but reduce temporal consistency and image alignment. For the maximum noise level $\beta_m$, a higher $\beta_m$ enhances dynamic motion but decreases temporal consistency and image alignment, while a lower $\beta_m$ leads to less motion.
## Q4: In Figure 9, was a user study not conducted on the Analytic-Init? What does '-tune' signify—does it mean finetuning (training strategy)? If so, why does it appear in the inference phase?
The inference strategy in Figure 9 is denoted as Analytic-Init. The < method >-tune represents a naively finetuned baseline using the same training setup and dataset as our TimeNoise training strategy, ensuring a fair comparison. We validate our inference strategy on both the original baseline and a naively finetuned baseline. The < method >-tune baseline is crucial because I2V-DMs often use private datasets with varying data filtering, resolutions, and unknown training settings. Controlling these variables ensures a fair comparison. We will rename < method >-tune as < method >-naive-tune and label the inference strategy as Analytic-Init for clarity in the final version.
## Q5: Insights into the selection of hyperparameters
As discussed in lines 184-185 of the original paper, firstly, the principle of $\mu(t)$ is to increase monotonically with time step. This ensures that higher noise levels are favored at later time steps, which carries a higher risk of leakage. To formalize this, we define $\mu(t)$ as a power function: $\mu(t)=2t^a-1, a>0$. This choice allows us to flexibly tune $a$ to achieve various monotonic behaviors, where smaller values of $a$ indicate that more of the later time steps will sample higher noise levels. Specifically:
- $a=1$ corresponds to a linear increase,
- $a<1$ indicates a concave function, suggesting a faster increase in noise levels over time, and
- $a>1$ represents a convex function, indicating a slower increase in noise levels over time.
For the choice of the maximum noise level $\beta_m$, the only principle is that it should be greater than 0. We refer to popular choices of maximum noise levels in VE-SDE[1,2] (e.g. 80-100) and experiment with parameters around these values.
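A small sketch of the monotonic behavior described above (illustrative only; $\mu(t)$ here is just the power function from this rebuttal, with the rest of the noise distribution omitted):

```python
def mu(t, a):
    """Power-function schedule from the rebuttal: mu(t) = 2 * t**a - 1,
    for t in [0, 1]. Monotonically increasing from -1 to 1 for any a > 0."""
    return 2 * t ** a - 1

# Smaller a (concave) pushes mu(t) up earlier, so more of the later time
# steps favor high noise levels; larger a (convex) delays the increase.
for a in (0.5, 1.0, 2.0):
    vals = [mu(i / 10, a) for i in range(11)]
    assert all(x < y for x, y in zip(vals, vals[1:]))  # monotone in t
    print(f"a={a}: mu(0.25)={mu(0.25, a):+.3f}, "
          f"mu(0.5)={mu(0.5, a):+.3f}, mu(0.75)={mu(0.75, a):+.3f}")
```

At any fixed interior time step, the concave schedule ($a<1$) yields a strictly larger $\mu(t)$ than the linear and convex ones, matching the behaviors listed above.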
[1] Score-based generative modeling through stochastic differential equations.
[2] Elucidating the Design Space of Diffusion-Based Generative Models.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response, which addressed most of my concerns. I will finalize my score after discussing with other reviewers, and it will likely be between 4 and 5. Thanks.
---
Rebuttal 2:
Title: Author Response to Reviewer f9AW
Comment: Dear Reviewer f9AW,
Thank you for your feedback. We are pleased to hear that our responses have addressed most of your concerns. We believe the additional experiments, analysis, and explanation have significantly improved the quality and clarity of our submission. We hope that you and other reviewers may regard this as a sufficient reason to raise the score.
Best, Authors | Summary: The paper points out that existing image-to-video diffusion models can lead to videos without significant/desired amount of motion. Some evidence is presented to show that this is due to what the authors call "conditional image leakage" where the model places too much emphasis on the conditional image and all the generated frames look too similar to the conditional image. Two strategies are proposed to overcome this issue. The inference strategy starts the reverse diffusion process at an earlier time and uses improved initial noise distribution parameters using a closed-form expression to match the noise parameters at training time. The training strategy develops a time-dependent noise distribution to have higher noise at larger t to place lesser emphasis on the conditioning image for larger t values. Experiments indicate improved video generation using either or both of these strategies.
Strengths: 1. The methods presented in the paper are well-motivated and make intuitive sense (although there appear to be some technical issues as noted in the weaknesses below).
2. Results: The quantitative results show clear improvements over baseline. The video examples submitted also show clear improvements over previous methods
3. Relationship to existing works: the authors have done a good job presenting related works carefully and how existing methods may also inherently overcome some issues, if not all.
Weaknesses: 1. Issues with derivation: For deriving the expression of better noise initialization, the authors present their proof in Appendix A. The second and third terms of (7) are considered constants for the purpose of optimizing the noise distribution. I don't follow this and think there may be a technical error here. It is written that since $q_M(X_M)$ and $p_M(X_M)$ are independent, the entropy $H(\mathcal{N}(X_M; \mathbf{\mu}_q, \Sigma_q))$ and $H(q_M(X_M))$ can be considered constants for optimizing the parameters $\mathbf{\mu}_q$ and $\Sigma_q$. I don't think this is correct as these parameters influence both $H(\mathcal{N}(X_M; \mathbf{\mu}_q, \Sigma_q))$ and $H(q_M(X_M))$. Can the authors please explain this in detail?
2. Quantifying motion: The authors say that with previous works the models could in principle just output the conditioning image several times and still lead to lower error or improved quality metrics like FID and IS. However, they still use the same metrics for their experiments. Perhaps quantifying amount of motion or even better, quantifying realism of motion directly is important.
3. Figure 9 is not clear. What are orange and blue colors indicating. Please explain this figure in detail and how it shows that the proposed method is considered better by the user study. I see very little differences going from <method> to <method>-tune in the figure.
Technical Quality: 2
Clarity: 3
Questions for Authors: No additional questions.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Yes, authors have addressed some limitations of their method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer hjyn
We sincerely thank Reviewer hjyn for the constructive and valuable comments. The concerns are addressed as follows.
## Q1: Issues with derivation.
We clarify that our objective is to optimize $\mu_p$ and $\sigma_p^2$, the parameters of $p_M(X_M)=N(X_M; \mu_p, \sigma_p^2 I)$, rather than $\mu_q$ and $\Sigma_q$ (as stated in Proposition 1, line 155 of the original paper). Note that $\mu_q$ and $\Sigma_q$ represent the expectation and covariance matrix of the forward marginal distribution $q_M(X_M)$, which is determined by the forward diffusion process and is independent of the $\mu_p$ and $\sigma_p^2$ in $p_M(X_M)$. Given this independence, the second and third terms in Equation (7) can be disregarded for the purpose of optimization. Therefore, the optimization objective can be formulated as:
$$
\min_{\mu_p,\sigma_p^2} D_{KL}(q_M(X_M)||p_M(X_M)) \Leftrightarrow \min_{\mu_p,\sigma_p^2} D_{KL}(N(X_M; \mu_q, \Sigma_q)||N(X_M; \mu_p, \sigma_p^2 I))
$$
We will improve the writing of Appendix A in the final version.
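For intuition, the minimizer on the right-hand side has the standard Gaussian closed form, $\mu_p^* = \mu_q$ and $\sigma_p^{2*} = \mathrm{tr}(\Sigma_q)/d$. A small NumPy check (illustrative only; the concrete values and the helper name are invented for this sketch, not taken from the paper's appendix):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
mu_q = rng.normal(size=d)        # forward-marginal mean (fixed by the forward process)
A = rng.normal(size=(d, d))
Sigma_q = A @ A.T + np.eye(d)    # forward-marginal covariance (SPD, fixed)

def kl_to_iso(mu_p, var_p):
    """KL( N(mu_q, Sigma_q) || N(mu_p, var_p * I) ), standard closed form."""
    diff = mu_p - mu_q
    logdet_q = np.linalg.slogdet(Sigma_q)[1]
    return 0.5 * (np.trace(Sigma_q) / var_p + diff @ diff / var_p
                  - d + d * np.log(var_p) - logdet_q)

# analytic minimizer: mean matching + trace-matched isotropic variance
mu_star, var_star = mu_q.copy(), np.trace(Sigma_q) / d
best = kl_to_iso(mu_star, var_star)
assert best >= 0.0
# any perturbation of the optimized parameters increases the KL
assert kl_to_iso(mu_star + 0.1, var_star) > best
assert kl_to_iso(mu_star, 1.5 * var_star) > best
```

Because $\mu_q$ and $\Sigma_q$ are constants of the forward process, only the two terms involving $\mu_p$ and $\sigma_p^2$ vary, which is exactly why the remaining entropy terms in Equation (7) can be dropped.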
## Q2: The authors say that with previous works the models could in principle just output the conditioning image several times and still lead to lower error or improved quality metrics like FID and IS. However, they still use the same metrics for their experiments. Perhaps quantifying amount of motion or even better, quantifying realism of motion directly is important.
We clarify that models that repeatedly output the conditioning image might achieve a lower **training loss**, as defined by Eq. (2) in the original paper, rather than a lower evaluation error or improved video quality metrics like FVD and IS. In fact, as demonstrated in Table 1 of the original paper, such repetitive behavior can lead to poorer FVD and IS, since these metrics evaluate the dynamic qualities of videos, and static conditioning images lack such dynamics. Note that the FVD and IS we report are video evaluation metrics computed with an action-classification model, rather than image-based metrics such as FID; they therefore reflect video dynamics as well as video quality.
As suggested, we also incorporate the motion score metric to quantify the amount of motion, which is computed by obtaining dense optical flow maps with RAFT [1] between adjacent frames, calculating the magnitude of the 2D vector for each pixel, and summing these magnitudes across the frame. As demonstrated in **Rebuttal Table F**, our approach yields substantially higher motion scores compared to all baseline methods. We will add this in the final version. Quantifying the realism of motion is indeed crucial, yet, to the best of our knowledge, there currently exists no specific automatic quantitative metric to evaluate this aspect directly. Our user study, which covers overall aspects, takes this into account. We welcome any suggestions from the reviewers regarding additional metrics for evaluating motion realism.
[1] Raft: Recurrent all-pairs field transforms for optical flow.
**Rebuttal Table F**. Motion score results on the ImageBench dataset. < Method >-naive-tune represents a naively fine-tuned baseline using the same training setup and dataset as our TimeNoise training strategy, ensuring a fair comparison.
| Model | Motion Score ↑ |
|-------|-----------------|
| DynamiCrafter | 50.96 |
| + Analytic-Init | **71.04** |
|||
| DynamiCrafter-naive-tune | 31.68 |
| + Analytic-Init | **50.08** |
| + TimeNoise | **72.32** |
|||
| VideoCrafter1 | 63.36 |
| + Analytic-Init | **139.04** |
|||
| VideoCrafter1-naive-tune | 62.72 |
| + Analytic-Init | **65.12** |
| + TimeNoise | **64.80** |
|||
| SVD | 16.64 |
| + Analytic-Init | **19.68** |
|||
| SVD-naive-tune | 9.60 |
| + Analytic-Init | **20.64** |
| + TimeNoise | **20.96** |
## Q3: Explanation of Figure 9.
Figure 9 shows user preference percentages for our methods versus baselines. The left side displays baselines, while the right side shows our inference (Analytic-Init) and training (TimeNoise) strategies. Orange indicates the preference for our strategies and blue for the baselines. For instance, in the first row at the top left, 84.2% preferred our inference strategy over the original DynamiCrafter's output (15.8%) from an overall perspective. The < method > and < method >-tune denote the original model and naively finetuned versions, serving as baselines. The < method >-tune uses the same training settings and dataset as our TimeNoise strategy, ensuring a fair comparison. This is crucial because I2V-DMs often use private datasets with varying data filtering, resolutions, and unknown training settings. Controlling these variables ensures a fair comparison.
In summary, we validate our inference strategy against the original method and a naively finetuned version, and our training strategy against a naively finetuned version. As discussed in the original paper (lines 238-245), our strategies significantly enhance video dynamism and natural motion, maintaining image alignment and temporal consistency, thus achieving superior results overall. We will rename < method >-tune as < method >-naive-tune for clarity and improve the caption of Figure 9 in the final version.
---
Rebuttal Comment 1.1:
Title: Thank you for your response
Comment: I thank the authors for their response.
I understand the derivation better now; I think I misunderstood some parts initially.
The additional results quantifying motion could be of value in the final paper, if accepted. And they are appreciated.
Overall, the authors have addressed my concerns, and I believe they have addressed most of the other reviewers' concerns.
I am raising my score to a 6 based on NeurIPS rating scale.
---
Reply to Comment 1.1.1:
Title: Thanks for the feedback
Comment: Thank you for the appreciation of our response and the update on the score. We highly appreciate it. | Summary: This paper investigates and proposes solutions for the problem of “conditional image leakage” in image-to-video generation. The authors claim that existing image-to-video diffusion models over-rely on the conditional image at large diffusion timesteps when the inputs are too noisy, leading to static video output. To address this issue, the authors propose an inference-time generation process that skips those large timesteps. The authors also propose adding noise to the conditional image during training to mitigate conditional image leakage.
Strengths: This paper identifies the problem of conditional image leakage and shows that existing methods suffer from static motion at large diffusion timesteps. The two proposed solutions empirically boost FVD and IS on UCF101 and bring significant improvements to motion magnitude while maintaining a similar image alignment and temporal consistency in the user study. These show that the proposed approach is indeed useful in generating more dynamic videos.
Weaknesses: There are a lot of prior works [a, b, c] that deal with noise initialization in video diffusion models that are highly related to the problem of conditional image leakage and the proposed inference technique. There are, however, no discussions on the differences between these techniques. Notably, FreeInit [a] and FrameInit [c] preserve the low-frequency component of the initial image – which is at odds with the proposed theory. I believe an in-depth discussion and a comparison with these approaches would help to establish the thesis of this paper. Additionally, PIA [d] also allows motion control – comparing the proposed motion-adding techniques with PIA’s motion control would also be helpful.
[a] FreeInit: Bridging Initialization Gap in Video Diffusion Models
[b] Preserve Your Own Correlation: A Noise Prior for Video Diffusion Models
[c] ConsistI2V: Enhancing Visual Consistency for Image-to-Video Generation
[d] PIA: Your Personalized Image Animator via Plug-and-Play Modules in Text-to-Image Models
Secondly, I am not entirely convinced by the authors’ explanation of conditional image leakage. The authors claim that models learn a shortcut by copying the initial image at large diffusion timesteps, leading to static video output. However, models like VideoCrafter and SVD do produce outputs of diverse motion of some degree, as shown in their papers. What are the authors’ explanations for these motions? Where are they coming from? Trying to copy the conditional image when the input is otherwise noise is likely the optimal solution and would be learned – but that is not a sufficient condition for static output.
Thirdly, how are evaluations normalized for motion magnitude? Methods like SVD and PIA have motion control. Since the evaluation is focused on “dynamic motion”, perhaps choosing a different motion parameter for these methods would lead to different results in the user study. How are those parameters chosen? Do baseline SVD, SVD after finetuning, and SVD with the inference trick interpret the same motion scale parameter in the same way? One way to validate this is to plot the input motion scale parameter against the output motion magnitude (e.g., optical flow) for parameter selection.
Technical Quality: 3
Clarity: 2
Questions for Authors: What are the motion magnitudes of SVD in Figure 3 and Figure 7?
Adding different noises to the conditioning image at different time steps requires the conditioning image to be embedded multiple times. How does this affect inference time?
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer NwFJ
We thank Reviewer NwFJ for the valuable comments.
## Q1: Add comparisons.
### Q1-1: Add comparison with related noise initialization methods.
(1) As suggested, we add comparisons with FreeInit[a], Progressive Noise[b] and FrameInit[c]. Our Analytic-Init outperforms all baselines across all metrics, as demonstrated in **Rebuttal Table C**. Specifically:
- Compared with FrameInit: we achieve much higher motion scores. This is because FrameInit duplicates the conditional image into a static video and uses it as coarse layout guidance, which can limit the motion degree.
- Compared with FreeInit: we reduce inference time by more than half, while achieving a slight enhancement in performance. This is because FreeInit's iterative refinement process, involving the sampling of a clean video, adding noise, and regenerating the video, increases its overall inference time.
- Compared with Progressive Noise: we achieve better performance. This is because our inference framework generates videos from an earlier time step, while the baseline cannot handle this training-inference gap.
(2) Additionally, we clarify that our initialization method enhances temporal consistency (referred to as 'low frequency' in related work), consistent with findings in prior work (see 5th vs 9th columns in Figure 4 of the original paper). Our high-motion capability (referred to as 'high frequency' in related work) is improved by starting the generation process at an earlier time step rather than by initialization, avoiding the unreliable late-time steps of I2V-DMs.
Thank you for highlighting these important related works. We will incorporate the above discussions in the final version.
**Rebuttal Table C.** Comparison of different noise initialization methods. Output MS is the motion score of the generated videos, computed via an optical flow network. User Rate. indicates the average rank assigned by users across all methods from overall aspects. Inference Time refers to the duration required to generate a video.
| Method | FVD ↓ | IS ↑ | Output MS ↑ | User Rate ↑ | Inference Time ↓ |
|-------------------|---------|--------|-------------|-------------|------------------|
| FrameInit | 380.7 | 20.09 | 32.16 | 4.57 | 24.3s/it |
| FreeInit | 347.4 | 22.66 | 46.24 | 1.95 | 49.6s/it |
| Progressive Noise | 358.1 | 21.35 | 49.76 | 3.31 | 23.3s/it |
| Analytic-Init | **342.9** | **22.71** | **50.08** | **1.77** | **22.6s/it** |
### Q1-2: Add comparison with PIA.
As suggested, we apply our inference strategy to PIA. Our method consistently enhances the motion degree, regardless of the initial motion level in PIA, as demonstrated in **Rebuttal Table D in the response PDF**. This improvement suggests the presence of conditional image leakage in PIA. It is important to note that our method is orthogonal to approaches that introduce additional signals, such as PIA and SVD, and enables them to control motion more precisely (see Common Concern 1).
## Q2: The authors claim models learn a shortcut by copying the image, yet they still produce outputs with motion.
We clarify that the static video example in Figure 2 is a schematic representation. We aim to highlight that these models tend to generate videos with **reduced motion** due to over-reliance on the conditional image at large diffusion timesteps (see lines 6, 34, 103 in the original paper). In some cases, this over-reliance might lead to nearly static videos, as exemplified in Figure 2. We repeat the experiments three times using different random seeds and conduct a statistical analysis using a two-sample t-test between the baselines and ours. As demonstrated in **Rebuttal Table E in the response PDF**, our method achieves higher motion scores with a significance level of p < 0.05 compared to all baselines. We will make this clear in the final version.
## Q3: The principle for motion score chosen. Plot the input motion score against the output motion score.
For the choice of the input motion score, we selected a range of motion scores between 5 and 200 on ImageBench. A score of 20 achieves high visual quality, including high temporal consistency and natural motion, and is used as a baseline. In the remaining experiments (Table 1, Figure 7, Figure 9), we directly use this motion score of 20 for the baseline SVD, SVD-tune, and SVD with Analytic-Init. As suggested, we report the input motion scale against the output motion scale in **Rebuttal Table A** and **Figure 1 in the response PDF**. The results show that our method enables SVD to control the motion degree more precisely by consistently achieving a lower error between the motion score of the outputs and the input motion score. See more detailed analysis in Common Concern 1.
## Q4: The motion score in Figure 3 and Figure 7
In Figure 3, we use the default motion score of 127 in SVD, which tends to show camera movement with poor temporal consistency. In Figure 7, as detailed in Q3, we use a motion score of 20 for its high visual quality. As discussed in Common Concern 1, our method consistently achieves a lower motion score error, regardless of the input motion score in SVD.
## Q5: How does TimeNoise affect inference time?
During inference, we use the conditional image with a fixed noise level across all time steps, similar to CDM[1], which does not affect inference time. This is because, during training, the model learns to handle conditional images with varying noise levels. In our initial experiments, we tested noise levels at 0, 5, and 10, finding that levels 0 and 5 exhibit similar performance, while a level of 10 sacrifices image alignment and temporal consistency. Based on these findings, we directly use the clean conditional image across all time steps for simplicity. We will include these details in the final version.
[1] Cascaded diffusion models for high-fidelity image generation.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the response.
Follow-up questions:
- Where is "output motion score" defined? I cannot find it in the rebuttal and the mentions of "motion score" in the main paper seem to be all related to the input, not the output.
- Q3 -- The authors chose a score of 20, much lower than the default 127. The authors claim that this "achieves high visual quality". Does this suggest a score of 127 does not achieve high visual quality? Can we back this up? If so, why not 15, or 25? Some tuning has been done for the proposed method (e.g., Table 8) so it would be fair to perform similar tuning for SVD and the baselines.
- Also Q3 -- "consistently achieving a lower error between the motion score of outputs and input motion score" -- suggests that the input motion score is supposed to be calibrated to the output motion score. Again I am not sure how the output motion score is defined.
- Comparisons with PIA -- Again, this does not show how the input motion score for PIA is selected.
---
Reply to Comment 1.1.1:
Title: Further Response to Reviewer NwFJ
Comment: We sincerely appreciate the reviewer’s feedback. Below, we address the further concerns in detail.
## Q1: Where is "output motion score" defined?
Sorry for the unclear description. Since the official code is not available, we implement the output motion score following the guidelines for calculating the input motion score as detailed in the Optical Flow section (Appendix C) of the SVD paper. This approach ensures consistency in the calculation of both the input and output motion scores. Consequently, this consistency enables the error to accurately reflect the precision of motion control achieved.
## Q2: Some tuning has been done for the proposed method so it would be fair to perform similar tuning for SVD and the baselines.
As presented in Rebuttal Table A, we conduct tuning for the input motion score of the two baselines (SVD and SVD-naive-tune) within the range of 5 to 200, including the default score of 127. The results indicate that our method consistently outperforms these baselines, achieving both higher user preference and lower motion score error, regardless of the input motion score.
## Q3: A lower error suggests that the input motion score is supposed to be calibrated to the output motion score. I am not sure how the output motion score is defined. Comparisons with PIA -- Again, this does not show how the input motion score for PIA is selected.
As addressed in our response to Q1, we ensure consistency between the input and output motion scores by implementing the output motion score calculation following the guidelines from the SVD paper. Regarding PIA, since it categorizes motion degree into three discrete levels (small/moderate/large) labeled as 0/1/2, it's not feasible to compute a precise motion score error. Nevertheless, as presented in Rebuttal Table D of the response PDF, regardless of the input motion level, our method consistently outperforms PIA in user preference metrics. Combined with the main baseline SVD results in Rebuttal Table A, these findings underscore the effectiveness of our approach.
We sincerely appreciate the reviewer’s constructive suggestions and believe that the additional experiments, analysis, and explanations significantly improve the quality of our submission. We hope that this provides a sufficient reason to consider raising the score.
---
Rebuttal 2:
Title: Further Response to Reviewer NwFJ
Comment: We appreciate the reviewer’s quick feedback. Below, we address the further concerns.
## Q1: Details about the experiments across motion scores.
### Q1-1: Please detail the computation of the motion score
We compute the motion score by resizing the video to $800\times450$, extracting dense optical flow maps using RAFT [1] between adjacent frames, calculating the magnitude of the 2D vector for each pixel, averaging these magnitudes spatially, and then summing them across all frames.
[1] Raft: Recurrent all-pairs field transforms for optical flow.
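Given flow maps already extracted between adjacent frames (e.g., with RAFT; the flow computation itself is not shown), the score described above can be sketched as follows, with an illustrative helper name:

```python
import numpy as np

def motion_score(flows):
    """flows: array of shape (T-1, H, W, 2), dense optical flow between
    adjacent frames (e.g., produced by RAFT on the resized video).

    Per the recipe above: per-pixel 2D magnitude -> spatial average
    per frame pair -> sum across all frame pairs.
    """
    mags = np.linalg.norm(flows, axis=-1)      # (T-1, H, W) magnitudes
    per_frame = mags.mean(axis=(1, 2))         # spatial average per pair
    return float(per_frame.sum())              # sum over frame pairs

# A constant (3, 4)-pixel flow over 7 frame pairs has magnitude 5 everywhere,
# so the score is 7 * 5 = 35.
flows = np.tile(np.array([3.0, 4.0]), (7, 8, 8, 1))
print(motion_score(flows))  # 35.0
```

In this convention, longer videos accumulate a larger score, which is consistent with summing (rather than averaging) across frames as described above.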
### Q1-2: Why it would make sense to directly compare the input/output motion scores.
Training SVD involves learning a conditional probability distribution $p(X|y)$, where $y$ represents the motion score condition. During inference, the generated output $X' \sim p(X|y)$ is expected to match the given condition $y$. Therefore, the error between the motion score of generated output $y'$ and the given input score $y$ can reflect the discrepancy between the generated outputs and the target distribution in terms of motion magnitude.
We will include the above details in the main paper in the final version.
## Q2: Comparisons with PIA: "it categorizes motion degree into three discrete levels (small/moderate/large) labeled as 0/1/2" -- this is not true. The motion value from PIA comes from pixel-level L1 distance in the HSV space which obviously has more than three levels.
Thanks for highlighting the misunderstanding about PIA. The default three levels (small / moderate / large) of motion degree in PIA correspond to three predefined input affinity scores $S_{in} = \{s_{in}^i\}_{i=1}^n$, which are negatively correlated with the motion score, where $n$ is the number of frames and $s_{in}^i \in [0,1]$. Consistent with the experiments from Q1, we evaluate the precision of motion control by computing the error between the input and output affinity scores. We implement the output affinity scores $S_{out} = \{s_{out}^i\}_{i=1}^n$ following the guidelines for calculating the input affinity scores as described in the PIA paper's Inter-frame Affinity section (3.2). The error between them is computed as:
$$
error = \sum_{i=1}^n \frac{|s_{in}^i - s_{out}^i|}{n}.
$$
As shown in **Rebuttal Table G**, our method consistently outperforms the baseline by achieving a lower affinity score error, regardless of the input affinity score. We will include the above experiments in the main paper in the final version.
**Rebuttal Table G.** The error comparison between PIA and PIA with our Analytic-Init. The three motion degree levels of Input Motion Level correspond to three predefined input affinity scores.
|Input Motion Level|Error(PIA/Ours)$\downarrow$|
|--|--|
|Small Motion|0.309/**0.257**|
|Moderate Motion|0.230/**0.221**|
|Large Motion|0.182/**0.124**|
---
Rebuttal Comment 2.1:
Comment: Thank you for the responses. They addressed most of my concerns. I have increased the rating to weak accept.
---
Rebuttal 3:
Title: Thanks for the feedback
Comment: Thank you for the feedback and for updating the score. We greatly appreciate your patience and the multiple rounds of communication, as your engagement has significantly enhanced the quality of our work. | Summary: This paper presents an approach for image-to-video (I2V) generation. The authors start with a conditional image leakage problem in existing works where the conditional image significantly influences the generation process thereby impacting the dynamism of the video. The authors propose an inference time and a training time strategy to mitigate the problem. The approach is evaluated on UCF101 and the authors' ImageBench dataset. The results are impressive.
Strengths: 1. The authors identified a crucial problem of conditional image leakage with I2V generation.
2. The authors proposed two strategies to address conditional leakage. An inference time strategy that can be readily applied to existing approaches. Another approach is applied during the training time.
3. The paper is well-organized and easy to follow.
4. Authors conducted a user study for qualitative evaluation. The results are impressive.
Weaknesses: 1. The authors are encouraged to provide more details about the ImageBench dataset.
2. While performing the user study, what qualities are evaluated by the users?
3. How is the accuracy of the generated videos estimated corresponding to the text prompt? Do the users rate the matching score of the generated video corresponding to the text prompt?
4. The realism of the generated videos is lacking. This is common for diffusion-based generated visual content. Have the authors considered how to improve the realism of generated videos?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please address the comments in the weaknesses section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: # Author Response to Reviewer iEBW
We sincerely thank Reviewer iEBW for the recognition of our work and for providing constructive comments.
## Q1: Details about the ImageBench dataset.
Our ImageBench dataset is designed based on two key aspects: breadth of categories and logical complexity. For breadth, we include popular categories such as nature, humans (both single and multi-person), animals, plants, buildings, food, arts, and vehicles. For complexity, we incorporate elements like numerals, colors, and complex scenes. Following these principles, we collected 100 images from various sources and T2I models like SDXL[1] and UniDiffuser[2]. We will add this in the revised version.
[1] Sdxl: Improving latent diffusion models for high-resolution image synthesis.
[2] One transformer fits all distributions in multi-modal diffusion at scale.
## Q2: While performing the user study, what qualities are evaluated by the users?
As noted in lines 226-228 of the original paper, users were asked to conduct pairwise comparisons between our method and the baselines, assessing dynamic motion, temporal consistency, image alignment, and overall performance.
## Q3: Add the evaluation about the text alignment.
We appreciate the constructive feedback provided. Following the suggestion, we add text alignment evaluation using the CLIP-text score[1] and a user study in **Rebuttal Table B**. Since SVD does not utilize text prompts as input, we conducted these evaluations only for VideoCrafter1 and DynamiCrafter. The results demonstrate that our approach achieves comparable text alignment performance with the baselines, suggesting that our strategy does not adversely affect text alignment quality.
[1]: Learning Transferable Visual Models From Natural Language Supervision
**Rebuttal Table B.** Text alignment performance comparison on the ImageBench dataset. < Method >-naive-tune and < Method >-TimeNoise represent finetuned models with naive strategy and our training strategy respectively, using the same setup and dataset for a fair comparison. User preference indicates the percentage of users who preferred our method over the baselines. DC. and VC. denote DynamiCrafter and VideoCrafter1.
| Model | CLIP-text Score ↑ | User Preference ↑ |
|-------|--------------------|------------------|
| DC. / DC. + Analytic-Init | 0.274 / **0.277** | **53%** / 47% |
| DC.-naive-tune / + Analytic-Init | **0.269** / 0.266 | 48% / **52%** |
| DC.-naive-tune / DC.-TimeNoise | **0.269** / 0.267 | 49% / **51%** |
||||
| VC. / VC. + Analytic-Init | **0.252** / 0.251 | **52%** / 48% |
| VC.-naive-tune / + Analytic-Init | 0.247 / **0.255** | 45% / **55%** |
| VC.-naive-tune / VC.-TimeNoise | **0.247** / 0.242 | 46% / **54%** |
## Q4: Future work on improving the realism of generated videos.
We appreciate the insightful comments. We see two promising directions for enhancing the realism of the generated videos. One is to explore advanced model architectures, such as the Transformer architecture: by refining the architecture to better capture long-range dependencies and spatial-temporal coherence, we may enhance the realism and consistency of the generated videos. Another is to scale up the diffusion model, which can involve increasing the depth and width of the network, as well as using larger datasets to improve quality. | Rebuttal 1:
Rebuttal: # Common Concerns from reviewers
## Common Concern 1 (from NwFJ and f9AW): Incorporating additional guidance signals can achieve significant motion and control the motion degree. Show the input motion score against the output motion magnitude.
We clarify that **our method is orthogonal to the approaches that introduce additional signals and enables them to control the motion degree more precisely**. As demonstrated in **Rebuttal Table A** and **Figure 1 in the response PDF**, our method consistently achieves a lower error between the motion score of the generated video and the given desired motion score, regardless of the input motion score. Notably, the baseline tends to generate videos with a significantly lower motion score than the input motion score, suggesting the presence of conditional image leakage. Our method effectively reduces this discrepancy.
In addition, we emphasize that our method not only enables more precise control of the motion degree but also **enhances its naturalness**. As discussed in the original paper (lines 123-130) and illustrated with more examples in **Figure 2 of the response PDF**, the baseline often produces static objects with pronounced camera movements to meet high motion scores. In contrast, our approach generates videos with natural and vivid object movements. This suggests that a simple scalar condition may not be sufficient to address the fundamental challenge facing I2V-DMs, which is to generate clean videos primarily from noisy inputs. Our method offers a more effective solution, enhancing realism and fluidity in the generated videos.
**Rebuttal Table A.** Results on the ImageBench dataset. SVD-naive-tune and SVD-TimeNoise denote SVD finetuned with naive finetuning and with our TimeNoise, respectively, using the same setup for a fair comparison. Input MS is SVD's input motion score; Output MS is the average motion score of the generated videos; Error is the absolute difference between them; User Preference gives the percentage of users favoring each method. Note that the motion score is implemented based on the description provided in the SVD paper, as the actual code is unavailable. For the first two experiments (SVD-naive-tune and SVD-TimeNoise), both the input and output MS are computed consistently, so the error measure is reliable. For the original SVD (the third experiment), the error may not be as robust, but it is included for reference.
| Input MS | Output MS $\uparrow$ | Error $\downarrow$ | User Preference $\uparrow$ |
|----------|-------------|---------|-------------------|
| | **SVD-naive-tune / + Analytic-Init**|
| 5 | 2.08 / **4.64** | 2.92 / **0.36** | 32.0% / **68.0%** |
| 10 | 5.12 / **8.16** | 4.88 / **1.84** | 29.5% / **70.5%** |
| 20 | 9.60 / **20.64** | 10.4 / **0.64** | 16.5% / **83.5%** |
| 40 | 19.84 / **34.08** | 20.16 / **5.92** | 21.0% / **79.0%** |
| 80 | 52.96 / **65.12** | 27.04 / **14.88** | 32.5% / **67.5%** |
| 100 | 55.84 / **83.68** | 44.16 / **16.32** | 19.5% / **80.5%** |
| 127 | 55.52 / **111.20** | 71.48 / **15.80** | 34.0% / **66.0%** |
| 200 | 64.16 / **133.92** | 135.84 / **66.08** | 27.5% / **72.5%** |
| | **SVD-naive-tune / SVD-TimeNoise** |
| 5 | 2.08 / **6.72** | 2.92 / **1.72** | 19.5% / **80.5%** |
| 10 | 5.12 / **9.44** | 4.88 / **0.56** | 9.0% / **91.0%** |
| 20 | 9.60 / **20.96** | 10.4 / **0.96** | 7.2% / **92.8%** |
| 40 | 19.84 / **44.80** | 20.16 / **4.80** | 11.5% / **88.5%** |
| 80 | 52.96 / **80.48** | 27.04 / **0.48** | 15.5% / **84.5%** |
| 100 | 55.84 / **97.12** | 44.16 / **2.88** | 18.5% / **81.5%** |
| 127 | 55.52 / **113.76** | 71.48 / **13.24** | 23.0% / **77.0%** |
| 200 | 64.16 / **150.24** | 135.84 / **49.76** | 35.5% / **64.5%** |
| |**SVD / +Analytic-Init** |
| 5 | 4.32 / **5.12** | 0.68 / **0.12** | 23.0% / **77.0%** |
| 10 | 8.96 / **9.92** | 1.04 / **0.08** | 31.5% / **68.5%** |
| 20 | 16.64 / **19.68** | 3.36 / **0.32** | 16.5% / **83.5%** |
| 40 | 31.04 / **40.48** | 8.96 / **0.48** | 21.0% / **79.0%** |
| 80 | 72.16 / **78.72** | 7.84 / **1.28** | 16.0% / **84.0%** |
| 100 | 92.96 / **104.64** | 7.04 / **4.64** | 38.5% / **61.5%** |
| 127 | 112.96 / **125.28** | 14.04 / **1.72** | 42.0% / **58.0%** |
| 200 | 190.40 / **206.40** | 9.60 / **6.40** | 42.5% / **57.5%** |
Pdf: /pdf/53aa310894a75d4b2dda110088027c60527f6834.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Attention Temperature Matters in ViT-Based Cross-Domain Few-Shot Learning | Accept (poster) | Summary: This paper deals with ViT-based cross-domain few-shot learning. It first presents an observation regarding ViT-based models: in cross-domain learning, the attention module in ViT seems to hurt model performance in the target domain. Based on this observed phenomenon, the paper proposes a fix that introduces an additional temperature to reduce the effect of the attention map (essentially making the attention map uniform).
Strengths: This paper proposes an interesting phenomenon about ViT-based model in the cross-domain application
Weaknesses: 1) The whole paper is based on the phenomenon (that the attention module in ViT hurts cross-domain performance) analyzed in Sec 2. However, I do not think you can draw such conclusions convincingly from the analysis in Sec 2.
Sec 2 uses the simplest possible method to train the model parameters, i.e. the model (a backbone plus a classification head) is learned on the source data, then the backbone is directly used in the target domain with prototype-based classification. This is the simplest baseline method of model training. There are far more sophisticated training methods (e.g. MAML, ProtoNet, etc.) designed for few-shot learning. It is not clear whether the phenomenon observed in this paper is due to this simplistic learning method (i.e. maybe the phenomenon would not appear with a more advanced learning method)
2) This paper deals with few-shot cross-domain learning. But it is not clear whether the observed phenomenon is due to "few-shot" or due to "cross-domain". The analysis in Sec 2 only considers the "cross-domain" aspect and does not address the "few-shot" aspect. It is not entirely clear whether this phenomenon arises from the domain shift between the source and target domains, or from the semantic difference between the classes observed in the source and target domains. I would like to see a more rigorous analysis that separates the two confounding factors
3) It is possible that the phenomenon is merely an overfitting issue, i.e. with the learnable attention module (query, key matrices) in ViT, the model simply overfits to the source domain. If that is the case, the observed phenomenon may not be that interesting (i.e. it is just overfitting), and the proposed solution simply reduces the overfitting through regularization
4) This paper proposes an observation about some phenomenon, but it does not provide a convincing explanation of why this phenomenon happens or matters. This paper simply says that the attention module (learned on source data) does not generalize to the target domain. But I think this is too generic -- it is well-known that if you train a model on the source domain, the model usually does not generalize to a target domain (this is the main motivation for research in domain adaptation). It seems the conclusion of this paper is just a specific manifestation of this well-known fact.
5) In the previous papers used for comparison (e.g. [10,49]), those papers reported results on more datasets. What is the reason for omitting those additional datasets in the experiment? This makes me wonder whether the results are cherry-picked.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1) Please explain how you can be sure that the phenomenon in the paper is not just overfitting, but something more interesting about ViT.
2) Please explain why not all datasets used in [10,49] are included in the experiments.
3) What exactly is the role of "few-shot" in this paper? It seems most of the paper is discussing the domain shift issue.
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate your valuable comments. In the following, we respond to the concerns.
## W1. Implementing with other baseline methods.
We implemented our method with ProtoNet. Our method continues to be effective with this learning method and improves the performance on the target domain.
| Method | Crop. | Euro. | ISIC. | Ches. | Ave. |
| --------------- | --------- | --------- | --------- | --------- | --------- |
| ProtoNet | 93.59 | 86.92 | 46.15 | 25.68 | 63.09 |
| ProtoNet + Ours | **95.07** | **89.46** | **48.64** | **27.14** | **65.08** |
## W2. "cross-domain" vs. "few-shot", and "domain shift" vs. "semantic difference"
#### "domain shift" vs. "semantic difference":
We would like to point out that the miniImageNet performance is evaluated on the novel classes of this dataset, while the model is trained on the base classes of miniImageNet. Since there is no overlap between the base and novel classes, a semantic difference exists between these two sets of classes. Since the model shows correct attention on these source-domain novel classes, the phenomenon is mainly due to domain shift rather than semantic difference.
To verify this, we construct datasets with source-domain semantics and target-domain style by swapping the amplitude of source-domain images with the amplitude of target-domain images while maintaining the source-domain phase (following WaveSAN). Then, we measure the performance of the baseline method and ours.
| 5w5s | src semantic + src Style | src + Crop. style | src + Euro. style | src + ISIC style | src + Ches. style | Avg. target style |
| ---- | ------------------------ | ----------------- | ----------------- | ---------------- | ----------------- | ----------------- |
| BL | 97.94 | 79.10 | 64.24 | 71.70 | 60.27 | 68.83 |
| Ours | 97.56 | 81.40 | 70.92 | 76.11 | 62.90 | 72.83 |
From this table, we can see
(1) By swapping the style from the source domain to the target domains, the performance consistently decreases, verifying the rationale of the constructed pseudo-target domains.
(2) With the source-domain (src) style, applying our method slightly decreases the performance. Meanwhile, with the target-domain (Crop., Euro., ISIC, Ches.) styles, our method significantly improves the performance.
Since the semantics are preserved in the constructed pseudo-target domains, we can conclude that the domain shift is the cause of the phenomenon in our paper.
#### "Cross-domain" vs. "few-shot":
Since the ineffective attention is observed by directly extracting features from target-domain images, no specific "few-shot" adaptation method is involved. Therefore, this phenomenon is mainly due to the "cross-domain" aspect.
Moreover, by applying our method, we can also contribute to the "few-shot" aspect. In the paper, we conclude that the query-key parameters tend to overfit the source domain, which makes the model discriminative but less transferable. For the "cross-domain" part, we resist the learning of these parameters during the source-domain stage to avoid overfitting. Conversely, for the "few-shot" part, **we encourage the learning of these parameters during target-domain finetuning to better fit the target datasets**. To further verify the contribution to the "few-shot" part, we compare finetuning the query-key parameters against finetuning the non-query-key ones.
| 5-way 5-shot | Crop. | Euro. | ISIC. | Ches. | Avg. |
| ------------ | ----- | ----- | ----- | ----- | ----- |
| FT non-QK | 95.91 | 90.27 | 54.05 | 27.66 | 66.97 |
| FT QK | 96.01 | 90.36 | 54.30 | 28.31 | 67.25 |
We can see the advantage of finetuning (FT) the query-key (QK) parameters, even though the query-key parameters are far fewer than the non-query-key ones. In all, our contribution covers both the "cross-domain" and "few-shot" aspects.
## W3. Overfitting
We would like to point out that although overfitting is a common phenomenon in machine learning, our contributions are:
1. We are the first to unveil that the query-key parameters are more likely to overfit than other parameters, especially under large domain gaps.
2. We are the first to find that temperature adjustment and abandonment can effectively handle this overfitting problem in ViT.
To the best of our knowledge, we have not found papers with similar contributions. Could you please point us to any specific papers for better comparison?
## W4. Why this phenomenon happens and matters
Please refer to the global rebuttal Q1; we have conducted a theoretical analysis.
## W5. Comparison of more datasets
We would like to point out that many works (e.g., MEM-FS [40], TIP'23) only conduct experiments on the four datasets we experimented with in the paper. Here, we also report our performance on all 8 datasets as in [10, 49]. Please refer to the global rebuttal Q2. As can be seen, our model still achieves state-of-the-art average performance across all 8 datasets.
## Other questions
Q1: Please refer to W3.
Q2: Please refer to W5.
Q3: Please refer to W2.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for providing these additional results. They helped address some of my concerns.
Regarding the "overfitting" issue, I do not see how the theoretical insight is relevant. It is well-known that adding a regularization will make the learning loss smooth, so the theoretical insight is just a reflection of this fact. So I think this paper basically just adds a regularization to address a standard overfitting issue.
---
Reply to Comment 1.1.1:
Comment: Thanks for your response!
We would like to point out that the "overfitting" issue is reflected in the increased eigenvalues of $W^k$ and $W^q$, as proved in (Chen, 2019). Our contributions to the theoretical analysis lie in
(1) pointing out that query-key parameters show a higher tendency to overfit,
(2) linking the increased eigenvalue to domain robustness by the sharpness of loss landscapes, and
(3) verifying our method essentially handles the domain gap problem based on our theoretical and empirical analysis.
As for mitigating overfitting, our method is, to the best of our knowledge, novel in its design and analysis, and achieves top performance. Indeed, our method can be understood as a kind of regularization, **but "regularization" is a very general topic, just like "deep learning"** (could you please provide any specific papers for better comparison?). Therefore, our work is still insightful and novel relative to current research.
Thanks again for your comments. If you still have further comments, please feel free to share them! | Summary: This paper investigates the application of Vision Transformer (ViT) to Cross-Domain Few-Shot Learning (CDFSL). It analyzes the effect of attention on CDFSL performance through experiments and identifies a way to enhance ViT's cross-domain transferability by adjusting the attention mechanism through temperature scaling. Although this adjustment reduces attention maps to uniform distributions, it effectively improves ViT's performance on target-domain datasets, mitigating issues with the query-key mechanism under large domain gaps. The proposed approach limits the learning of query-key parameters while promoting that of non-query-key parameters, consistently outperforming current state-of-the-art methods across multiple CDFSL datasets.
Strengths: 1. The analysis of how attention affects the CDFSL results is comprehensive.
2. The proposed method obtains the SOTA performance.
Weaknesses: 1. Can uniform attention be understood as an operation like randomly initializing attention?
2. Is there any deep theoretical explanation for the conclusion that “Compared with the query-key attention, the non-query-key components in ViT tend to be more transferable but less discriminative than the query-key components”.
3. “which inspires us to improve the generalization of ViT by encouraging the learning of non-query-key parts and resisting the learning of query-key ones.” in lines 174-175. CDFSL requires both transferability and discriminability. Why resist the learning of the query-key parts? If their learning is resisted, can the authors still guarantee excellent performance in the source domain? Or does this paper not focus on source-domain performance even during the training phase, but only on transferability?
4. The method is not novel enough.
5. One alternative would be to randomly downgrade the query-key attention to a uniform map after source-domain training and adjust it during target-domain inference. What is the difference between this approach and the proposed method? What are the advantages of the proposed method?
6. There are too few comparison methods; it is also necessary to compare with some methods that use a CNN backbone to show the superiority of the proposed method with ViT.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please answer and address the above-mentioned problems.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors mention that “We discuss the limitations of the work in the appendix”. However, I could not find any limitation discussion in the appendix. This work does not contain any negative social impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate your valuable comments. In the following, we respond to the concerns.
## W1. Random attention initialization
Since random attention initialization tends to produce a uniform attention map, our attention abandonment method can be viewed as producing randomly initialized attention. However, the difference from re-initialization is that we do not discard the trained parameters and retrain them. Instead, we just "skip" the query-key attention to resist the learning of its parameters, while the trained parameters are kept. To the best of our knowledge, we are the first to propose this design.
## W2. Theoretical explanation
Please refer to the global rebuttal Q1; we have conducted a theoretical analysis.
## W3. Source-domain performance
In Section 2, we conclude that the query-key attention tends to be discriminative but less transferable, while the non-query-key parts tend to be transferable but less discriminative. Since the backbone network is pretrained on the ImageNet dataset, the query-key parts are already discriminative enough. Therefore, it is not necessary to further train them on the source dataset (miniImageNet, a subset of ImageNet) to make them more discriminative. Instead, it is more important to prevent the query-key parts from becoming too discriminative (i.e., less transferable), so we resist their learning.
Conversely, for the non-query-key parts, which are transferable but less discriminative, we need to encourage their learning by abandoning the query-key attention, as in the proposed method. Indeed, for the CDFSL task, target-domain performance is more important, but we do not sacrifice source-domain performance. To verify this, we measure the 5-way 5-shot accuracy on the source dataset for both the baseline method and ours. **The accuracy is 97.94 for the baseline method and 97.56 for ours, only a marginal decrease**.
## W4. Novelty
We would like to point out that our novelties are:
(1) We are the first to unveil the importance of the attention temperature in ViT-based CDFSL methods and interpret it as a remedy for the poorly transferred query-key attention.
(2) We are the first to encourage the learning of non-query-key parameters by abandoning the query-key attention through a random binary temperature, as recognized by Reviewer G6Ww and D8qR.
Could you please provide papers with similar contributions, so that we can address your concerns more effectively?
## W5. Difference of source and target domain operation, and the advantage of our method.
During the source-domain training, we randomly multiply a temperature of 0 or 1 to the attention (before softmax) in each forward pass, so that the query-key attention map randomly switches between a uniform map and the original attention map. If the attention map is downgraded to a uniform map, the query-key attention will be abandoned, so that the query-key parameters will not be trained, thereby resisting the learning of these parameters and encouraging the learning of others.
During the target-domain stage, we set the temperature for each attention map to a fixed value (0.3), because our model is not sensitive to the temperature choice, as validated in Appendix Fig.8.
Our method is simple but effective: as validated in experiments, it improves target-domain performance while maintaining comparable source-domain performance, as recognized by Reviewers G6Ww and D8qR.
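For concreteness, the two-stage temperature scheme described above can be sketched as follows. This is a minimal illustration, not the authors' code; the helper names `softmax` and `attention_row` are ours, and the scalar loop stands in for the batched matrix form used in a real ViT.

```python
import math
import random

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_row(scores, values, tau):
    """One query token's attention: softmax(tau * scores) applied to values.

    tau = 1 keeps the original query-key attention; tau = 0 zeroes the
    pre-softmax scores, so the map degrades to a uniform average of the
    values and the query-key parameters receive no useful gradient signal
    (the 'abandonment' case).
    """
    weights = softmax([tau * s for s in scores])
    return sum(w * v for w, v in zip(weights, values))

# Source-domain training: randomly abandon the query-key attention,
# re-drawing tau as 0 or 1 in each forward pass.
tau_train = random.choice([0.0, 1.0])
# Target-domain inference: a fixed small temperature (0.3 in the rebuttal).
tau_eval = 0.3
```

With `tau = 0` every token receives equal weight, which matches the described downgrade of the attention map to a uniform map.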
## W6. More comparisons with CNN-based methods.
We provide more comparisons with state-of-the-art works in the global rebuttal PDF (Tab.1 and Tab.2), where our method achieves the best performance.
---
Rebuttal Comment 1.1:
Comment: Thank you. For Question 5, I mean that, as the authors mentioned, “Compared with the query-key attention, the non-query-key components in ViT tend to be more transferable but less discriminative than the query-key components”, i.e., the query-key attention is less transferable and more discriminative. What happens if we cold-start the query-key attention during target-domain inference?
---
Reply to Comment 1.1.1:
Comment: Thanks for your response! We report the results of the cold-start-finetuning that you suggested as follows.
| Method | Crop. | Euro. | ISIC. | Ches. | Ave. |
| -------------------------------- | ----- | ----- | ----- | ----- | ----- |
| Baseline | 94.24 | 88.62 | 45.72 | 25.66 | 63.53 |
| Baseline + cold-start finetuning | 84.45 | 80.12 | 42.49 | 23.66 | 57.68 |
| Baseline + finetuning | 94.93 | 90.41 | 48.94 | 25.96 | 65.06 |
| Ours | 95.53 | 90.13 | 53.09 | 27.72 | 66.62 |
| Ours + cold-start finetuning | 88.40 | 84.29 | 46.65 | 24.19 | 60.88 |
| Ours + finetuning | 96.66 | 90.82 | 54.91 | 28.03 | 67.61 |
As can be seen, the performance of the cold-start strategy is still lower than that of regular finetuning. This is because randomly re-initializing the query-key parameters totally abandons the knowledge these parameters transferred from ImageNet pretraining. In the paper (Tab.4, last row), we verified that such transferred knowledge is still useful in the target domain, although not as useful as in the source domain. Therefore, totally abandoning this knowledge and relearning it on the target domain cannot lead to higher target-domain performance.
In contrast, our method can effectively resist the overfitting of the query-key parameters in the source-domain stage, and take advantage of the ImageNet pretraining of these parameters, thereby achieving higher performance.
If you have any other questions, please feel free to let us know! Thanks! | Summary: This paper investigates the effectiveness of the attention mechanism in Vision Transformer (ViT) for solving cross-domain few-shot learning tasks. It finds that the traditional query-key attention operation leans toward discriminability over transferability in their trade-off, leading to degraded target-domain performance when there are large domain gaps. Based on a series of related analyses, this paper proposes an attention abandonment operation for source-domain training and an attention adjustment operation for target-domain finetuning.
Strengths: 1. This paper is well organized, easy to follow and free of typos.
2. From an observed phenomenon between target domain classification accuracy and attention temperature, this paper conducts comprehensive quantitative analyses of the effectiveness of different attention strategies in terms of the trade off between discriminability and transferability. The results fully support the authors’ claim that the non-query-key components in ViT tend to be more transferable but less discriminative than query-key parts.
3. All the operations (Source-Domain Attention Abandonment and Target-Domain Attention Adjustment) newly designed in this paper are supported by the above analyses and thus are technical sound.
4. This paper may provide some insights for developing novel attention operations for ViT based cross-domain few-shot learning methods.
5. Extensive experiments on four datasets demonstrate the effectiveness of the proposed method and also showcase its superiority over other state-of-the-art approaches.
Weaknesses: I personally like this paper, there are just some small weaknesses:
1. In Figure 2(a), 3(a) and 5(a), the CLS token attention values located at the left-top corner are too small. Moreover, the authors have not provided the vector diagram, so I also cannot see these values clearly by zooming in. Could the authors enlarge the CLS boxes in these figures? In addition, it maybe better to provide a color bar for these heat maps to clearly show the range of their values.
2. Some references are not correctly formatted, the conference or journal names as well as page numbers are missing. For example, [4], [10], [13], [14], [15], [23], [29], [30], [47], [48], [49], etc.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In the section 4.2 implementation details, the authors propose to “set the attention of the CLS token to 0 for blocks whose ID is greater than 4” during the target-domain evaluation phase. Is this operation performed before or after the softmax function? Is the purpose of conducting this operation to make the image tokens unaffected by the CLS token? Could the authors explain the mechanism and impact of this operation?
2. In the top plot of Figure 5(b), I have noticed that the CLS token values in the 2nd-10th blocks are near zero on all the datasets. This seems strange because the entire line does not fluctuate within this interval. Does this mean that the CLS token and image patches tokens are orthogonal to each other?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your appreciation of our work!
## W1. Fig. 2a and Fig. 5a
We have added a color bar to the attention maps and enlarged the CLS token for clearer observation. Please refer to the global rebuttal PDF Fig.1 and Fig.2.
## W2. Formatted references
We have checked the references and completed the conference or journal names as well as page numbers.
[4] Yinbo Chen, Zhuang Liu, Huijuan Xu, Trevor Darrell, and Xiaolong Wang. Meta-baseline: Exploring simple meta-learning for few-shot learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 9062–9071, October 2021.
[10] Yuqian Fu, Yu Xie, Yanwei Fu, and Yu-Gang Jiang. Styleadv: Meta style adversarial training for cross domain few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 24575–24584, June 2023.
[13] Patrick Helber, Benjamin Bischke, Andreas Dengel, and Damian Borth. Eurosat: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7):2217–2226, 2019.
[14] Shell Xu Hu, Da Li, Jan Stühmer, Minyoung Kim, and Timothy M. Hospedales. Pushing the limits of simple pipelines for few-shot learning: External data and fine-tuning make a difference. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 9068–9077, June 2022.
[15] Yanxu Hu and Andy J. Ma. Adversarial feature augmentation for cross-domain few-shot classification. In Shai Avidan, Gabriel Brostow, Moustapha Cissé, Giovanni Maria Farinella, and Tal Hassner, editors, Computer Vision – ECCV 2022, pages 20–37, Cham, 2022. Springer Nature Switzerland.
[16] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In Yoshua Bengio and Yann LeCun, editors, 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings, 2015.
[24] Hanwen Liang, Qiong Zhang, Peng Dai, and Juwei Lu. Boosting the generalization capability in cross domain few-shot learning via noise-enhanced supervised autoencoder. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), pages 9424–9434, October 2021.
[30] Cheng Perng Phoo and Bharath Hariharan. Self-training for few-shot transfer across extreme task differences. In International Conference on Learning Representations, 2021.
[47] Jinghao Zhou, Chen Wei, Huiyu Wang, Wei Shen, Cihang Xie, Alan Yuille, and Tao Kong. Image BERT pre-training with online tokenizer. In International Conference on Learning Representations, 2022.
[48] Xiang Zhou, Yuan Zeng, and Yi Gong. Learning to scale temperature in masked self-attention for image inpainting. arXiv preprint arXiv:2302.06130, 2023.
[49] Yixiong Zou, Yicong Liu, Yiman Hu, Yuhua Li, and Ruixuan Li. Flatten long-range loss landscapes for cross-domain few-shot learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 23575–23584, June 2024.
## Q1. Set CLS token to 0
This operation is conducted after the softmax operation. In Fig.5b, we show that the attention value of the CLS token tends to be larger on target datasets than on the source dataset in the first few blocks; therefore, we manually suppress the influence of the CLS token in the target-domain attention map.
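As a rough sketch of this post-softmax rectification (our own illustration, not the authors' implementation; the function name and the choice not to renormalize the rows are assumptions on our part):

```python
def zero_cls_attention(attn_rows, block_id, threshold=4):
    """For blocks whose ID exceeds the threshold, zero the post-softmax
    attention paid to the CLS token, assumed here to sit at index 0.

    attn_rows: one attention map, given as a list of per-query weight rows.
    Rows are returned as-is for shallow blocks and with their CLS entry
    zeroed (without renormalization) for deep blocks.
    """
    if block_id <= threshold:
        return attn_rows
    return [[0.0] + row[1:] for row in attn_rows]
```

This makes the image tokens' outputs unaffected by the CLS token in the deeper blocks, matching the described purpose of the operation.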
## Q2. Near 0 attention on the CLS token
Intuitively, this is because the model focuses more on the image tokens, so the attention on the CLS token significantly decreases. Theoretically, observe that Fig.5b and Fig.3b are similar, with both the CLS token attention (or value) near 0. According to our theoretical analysis (please refer to Q1 in the global response), our model decreases the eigenvalues of the query-key parameters (i.e., $W^k$ and $W^q$), preventing the query-key attention from amplifying perturbations on the representation $X$ [49]. As a result, the influence that the query-key attention brings to its input feature $X$ decreases, so the output attention becomes more similar to the activation map of $X$ shown in Fig.3b. Since Fig.3b shows near-0 activation on the CLS token, the attention map of our model shows similar behavior.
---
Rebuttal Comment 1.1:
Comment: Thank you for your reply!
For weakness 1, it would be better for authors to provide high-resolution vector diagrams, so that the readers can zoom in for details.
For question 1, the authors "manually resist the influence of the CLS token in the target-domain attention map." Does this operation significantly influence the final performance? If so, it seems to cause an unfair comparison, since such an operation is not the main contribution of this paper and has not been applied to the other competing ViT-based methods.
---
Reply to Comment 1.1.1:
Comment: Thanks for your suggestions!
For the images, we promise to include high-resolution images in the final version.
For the manual rectification of the CLS token, the influence is only marginal; we report the results with and without this operation as follows.
| 5-shot | CropDiseases | EuroSAT | ISIC | ChesX | Avg. |
| -------- | ------------ | ------- | ----- | ----- | ----- |
| w/ CLS operation | 95.53 | 90.13 | 53.09 | 27.72 | 66.62 |
| w/o CLS operation | 95.42 | 90.01 | 52.81 | 27.70 | 66.49 |
The reason we include this operation is to be consistent with our finding that the model tends to show high attention on the CLS token. However, **since our model already generates better attention maps, the manual rectification is not greatly needed** (L230-231, verified in Fig.8 of the appendix), so this operation only marginally influences the performance.
For the fairness concern, since the observation of excessive attention on the CLS token is also our contribution, the manual rectification of the CLS token's attention value **can also be understood as a part of our method**, although it only marginally influences the performance. Therefore, the comparison with current works is still fair.
Thanks again for your response! We promise to carefully polish our paper for the final version. If you have any questions, please feel free to ask us! | Summary: This paper studies the effectiveness of Vision Transformer (ViT) for cross-domain few-shot learning (CDFSL). In particular, the authors found that by simply multiplying a temperature to the attention in ViT blocks, the target-domain performance consistently increases, even though the attention map is downgraded to a uniform map. The authors investigated this phenomenon through several experiments and proposed a simple and efficient solution for boosting ViT's transferability by resisting the learning of query-key parameters and encouraging that of non-query-key ones. Extensive experiments demonstrate the effectiveness of the proposed method for CDFSL.
Strengths: 1. The paper studies an important problem in few-shot learning called CDFSL and also investigates the effectiveness of ViT for this problem which is of high practical importance.
2. The paper conducted a detailed analysis of attention temperature on addressing target-domain shift by visualizing the attention maps and quantitative analysis.
3. The proposed solution is simple and effective for addressing the domain gap issue in CDFSL with ViT.
Weaknesses: 1. The paper mostly focuses on empirical analysis while lacking theoretical insights on why query-key features tend to be discriminative but less transferable.
2. The proposed method needs to retrain the ViT in Source-Domain Attention Abandonment which can be costly.
3. In Target-Domain Attention Adjustment, a pre-defined hyper-parameter is needed, which can be difficult to tune in CDFSL.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What does it mean by "non-query-key structures"?
2. In Source-Domain Attention Abandonment, would the training strategy degrade the model performance on source data?
3. Further explanations are needed for Equation (6) to clarify how it can improve learning better attention for CDFSL.
4. The authors primarily consider a 5-way 5-shot scenario. How about increasing the number of samples in each class? For instance, in a 5-way 20-shot problem, would the proposed strategy still be effective?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We truly appreciate your valuable comments. In the following, we respond to the concerns.
## W1. Theoretical insights.
Please refer to the global rebuttal Q1; we have conducted a theoretical analysis.
## W2. Retrain ViT on the source domain
We would like to point out that following current works [10,49], the size of the source-domain dataset (miniImageNet) is not large since this dataset is only a subset of ImageNet and contains only 64 classes with 600 samples in each class. The training is conducted on a single RTX3090 GPU for around 5 hours, which is affordable.
Moreover, we also decrease the number of samples in each class to verify how the source dataset size influences the performance.
| Sample Number | - | 100 | 200 | 300 | 400 | 500 | 600 |
| --------------------- | ----- | ----- | ----- | ----- | ----- | ----- | ----- |
| 5-way 5-shot accuracy | 63.53 | 65.29 | 65.69 | 65.86 | 65.95 | 66.00 | 66.00 |
We can see the performance is not sensitive to the source dataset size: we achieve comparable performance even when the dataset is halved to 300 samples per class. Overall, retraining the ViT on the source dataset is affordable.
## W3. Pre-defined hyper-parameter of Attention Adjustment
The hyper-parameter in the target-domain attention adjustment is the temperature, which is simply set to a fixed value (0.3) for all datasets. As shown in Fig.1b, the average performance plateaus when the temperature gets smaller. We also report the performance of our model w.r.t. the temperature as follows.
| Temperature | 1.0 | 0.9 | 0.8 | 0.7 | 0.6 | 0.5 | 0.4 | 0.3 | 0.2 | 0.1 |
| ------------------- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 5-way 1-shot accuracy | 53.51 | 53.52 | 53.65 | 53.70 | 53.98 | 54.05 | 54.09 | 54.12 | 54.11 | 54.10 |
As can be seen, the performance plateaus when the temperature gets smaller than 0.5. We also evaluated the temperature sensitivity in the appendix Fig.8. Therefore our model is not sensitive to the specific choice of the target-domain temperature, i.e., this hyper-parameter is not difficult to tune.
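The temperature adjustment discussed above amounts to scaling the attention logits before the softmax. Below is a minimal single-head, unbatched sketch; the function name and shapes `(tokens, dim)` are illustrative assumptions, not the authors' code.

```python
import numpy as np

def temperature_attention(q, k, v, temperature=0.3):
    """Query-key attention with an extra temperature on the logits.

    Multiplying the softmax logits by a temperature below 1 flattens the
    attention map toward the uniform (average) map, which is the
    target-domain adjustment discussed above. q, k, v: (tokens, dim).
    """
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d) * temperature
    logits -= logits.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=-1, keepdims=True)       # rows sum to 1
    return attn @ v, attn
```

As the temperature approaches 0, the map degenerates to uniform attention, matching the paper's observation that performance plateaus for small temperatures.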
## Q1. Non-query-key structures
The non-query-key structure refers to the ViT backbone network without the query-key parameters, i.e., with the query-key attention replaced by the identity attention, the cosine attention, or the average attention, as shown in Tab.2. We promise to refine this explanation.
## Q2. Degrade the source-domain performance
The source-domain performance is decreased from 97.94 to 97.56 by applying our method, which is only a marginal decrease and is affordable.
## Q3. Explanation of Eq. 6
Eq. 6 randomly multiplies the query-key attention in each block by a temperature of 0 or 1, where 0 is chosen with probability *p*. If the temperature is 0, the attention degenerates into the average attention in Tab.2; if the temperature is 1, the original query-key attention is maintained.
In Section 2 we conclude that the query-key attention mechanism makes the model discriminative but less transferable, while the non-query-key ViT structures (the ViT network without the query-key parameters) tend to make the model transferable but less discriminative. Therefore, during source-domain training we randomly abandon the query-key attention to encourage the non-query-key attention (i.e., the average attention). Since source-domain training pushes the trained parameters to be discriminative but less transferable, this operation helps the non-query-key structures become more discriminative while resisting the learning of the query-key parameters, preventing them from becoming less transferable.
In short, the query-key part is more discriminative but less transferable, so we resist its training to preserve its transferability; the non-query-key part is more transferable but less discriminative, so we encourage its training to improve its discriminability. As a result, the overall attention on the target domain improves, because the shortcoming of each part is overcome.
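The random abandonment described by Eq. 6 can be sketched per block as follows. This is a single-head, unbatched illustration under assumed shapes `(tokens, dim)`; the function name is hypothetical.

```python
import numpy as np

def abandoned_attention(q, k, v, p=0.5, rng=np.random.default_rng()):
    """Source-domain attention abandonment, a sketch of Eq. 6.

    With probability p the block's query-key attention is multiplied by a
    temperature of 0, degenerating it into average attention; otherwise
    (temperature 1) the original query-key attention is kept.
    """
    n, d = q.shape
    if rng.random() < p:                      # temperature 0: average attention
        attn = np.full((n, n), 1.0 / n)
    else:                                     # temperature 1: original attention
        logits = (q @ k.T) / np.sqrt(d)
        logits -= logits.max(axis=-1, keepdims=True)
        attn = np.exp(logits)
        attn /= attn.sum(axis=-1, keepdims=True)
    return attn @ v
```

With p = 1 every block outputs the mean of the value vectors, i.e., pure average attention; with p = 0 the original attention is untouched.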
## Q4. Increase the number of shots
We report our performance with a larger number of shots below.
| 10-shot | CropDiseases | EuroSAT | ISIC | ChesX | Avg. |
| -------- | ------------ | ------- | ----- | ----- | ----- |
| Baseline | 96.33 | 90.59 | 51.35 | 28.42 | 66.67 |
| Ours | 96.85 | 91.42 | 58.52 | 30.19 | 69.18 |
| 20-shot | CropDiseases | EuroSAT | ISIC | ChesX | Avg. |
| -------- | ------------ | ------- | ----- | ----- | ----- |
| Baseline | 97.15 | 91.76 | 55.56 | 30.73 | 68.80 |
| Ours | 97.59 | 92.63 | 62.38 | 32.85 | 71.36 |
We can see our proposed strategy is still effective in these situations.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarifications.
Comment: The rebuttal has addressed my concerns.
---
Reply to Comment 1.1.1:
Comment: Thanks for your appreciation of our work! We will continue to polish our work in the final version! | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable input.
## Q1. Theoretical insights
#### 1. Our method reduces the sharpness of the model's loss landscape.
Theoretically, we analyze our findings through the sharpness of the loss landscape (Foret 2021). That is, each setting of the model weights is viewed as a point in the weight space and corresponds to a loss value. These points and their loss values form the loss landscape, in which the source-domain-trained model is viewed as a minimum. The sharper the minimum, the more vulnerable the model is to domain gaps [49]. Specifically, the sharpness is measured as
$$
R_\rho(\theta)=\max_{\|\epsilon\|_2 \leq \rho} L(\theta + \epsilon) - L(\theta)
$$
where $\theta$ denotes the model weights and $\epsilon$ a perturbation of radius at most $\rho$. With this criterion, the generalization error can be bounded as follows (Foret 2021).
**Theorem 1**. For any $\rho>0$ and any distribution $\mathscr{D}$, with probability $1-\delta$ over the choice of the training set $S \sim \mathscr{D}$,
$$
L_\mathscr{D}(\theta) \leq \max_{\|\epsilon\|_2 \leq \rho} L_S(\theta + \epsilon) + \sqrt{\frac{k \log\left(1+\frac{\|\theta\|_2^2}{\rho^2}\left(1+\sqrt{\frac{\log(n)}{k}}\right)^2\right) + 4 \log \frac{n}{\delta} + \tilde{O}(1)}{n-1}}
$$
Following this criterion, we measure the sharpness given perturbations on different model weights.
| Perturbed Weights | All | Query Key |
| --------------------- | ------ | --------- |
| Sharpness of Baseline | 7.1483 | 7.0679 |
| Sharpness of Ours | 5.9915 | 6.4024 |
We can see that our method indeed reduces the sharpness of the loss landscape, indicating better robustness against domain gaps [49]. Notably, the sharpness with respect to the query-key parameters is also decreased, indicating that the proposed method effectively resists their overfitting to the source domain.
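The sharpness values tabulated above could be estimated, for instance, by sampling perturbations on the $\rho$-sphere and keeping the worst loss increase. This random-sampling variant is an assumption for illustration; Foret et al. (2021) instead take a gradient ascent step toward the worst-case perturbation.

```python
import numpy as np

def estimate_sharpness(loss_fn, theta, rho=0.05, n_samples=64,
                       rng=np.random.default_rng(0)):
    """Monte-Carlo lower bound on R_rho(theta) = max_{||eps||_2 <= rho} L(theta+eps) - L(theta).

    Samples perturbations uniformly on the rho-sphere and records the
    largest observed loss increase relative to the unperturbed loss.
    """
    base = loss_fn(theta)
    worst = 0.0
    for _ in range(n_samples):
        eps = rng.normal(size=theta.shape)
        eps *= rho / np.linalg.norm(eps)          # project onto the rho-sphere
        worst = max(worst, loss_fn(theta + eps) - base)
    return worst
```

For a quadratic loss $L(\theta)=\|\theta\|_2^2$ evaluated at $\theta=0$, every perturbation of norm $\rho$ increases the loss by exactly $\rho^2$, so the estimate recovers the true sharpness.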
#### 2. Why the query-key mechanism increases the sharpness
The query-key attention is calculated as $QK^T = W^k X X^T (W^q)^T$. Compared with the other computations in ViT, only the query-key attention involves this "square term" of $X$, so any perturbation added to the representations [49] is likely to be amplified by the multiplication with $W^k$ and $W^q$. According to current studies (Chen, 2019), a well-trained model tends to increase the eigenvalues of its weights when overfitting happens. Since these eigenvalues influence $W^k X X^T (W^q)^T$ quadratically, through both $W^k$ and $W^q$, the query-key attention is more likely to amplify perturbations added to the representations and parameters. As such perturbations propagate to the classification loss, the query-key attention increases the sharpness of the model, making it more vulnerable to domain shifts.
To solve this problem, our method resists the learning of the query-key attention, preventing $W^q$ and $W^k$ from acquiring large eigenvalues, thereby reducing the sharpness and benefiting transfer. To verify the decreased eigenvalues, we measure the product of eigenvalues as follows.
| | DINO Pretraining | Baseline Training | Ours |
| ---------------------------------------------- | ---------------- | ----------------- | ----- |
| Average eigenvalue product of $W^q$ and $W^k$ | 14.54 | 14.56 | 14.52 |
We can see the baseline training increases the eigenvalue, while our method decreases it, verifying our theoretical analysis.
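One way such a product could be computed is from the top singular values of the query and key weight matrices; for possibly non-square matrices the singular values are the natural analogue of eigenvalues, and their product bounds how strongly the square term $W^k X X^T (W^q)^T$ can amplify a perturbation to $X$. The function name and the choice of the top singular value are illustrative assumptions.

```python
import numpy as np

def qk_amplification(w_q, w_k):
    """Product of the largest singular values of W^q and W^k (a sketch).

    A large product means the query-key square term can strongly amplify
    perturbations to the representations, increasing sharpness.
    """
    s_q = np.linalg.svd(w_q, compute_uv=False)[0]   # largest singular value
    s_k = np.linalg.svd(w_k, compute_uv=False)[0]
    return float(s_q * s_k)
```

For identity weights the amplification is 1; scaling either matrix scales the product proportionally.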
References
Pierre Foret, Ariel Kleiner, Hossein Mobahi, and Behnam Neyshabur. Sharpness-aware minimization for efficiently improving generalization, 2021. 1, 2, 4, 5, 7, 8
Chen, Xinyang, et al. "Transferability vs. discriminability: Batch spectral penalization for adversarial domain adaptation." *International conference on machine learning*. PMLR, 2019.
## Q2. Comparison of more datasets
We would like to point out that many works (e.g., MEM-FS [40], TIP'23) only conduct experiments on the four datasets we experimented with in the paper. Here, we also report our performance on all 8 datasets as in [10, 49].
| 5shot | CUB | Cars | Plac. | Plan. | Crop. | Euro. | ISIC | Ches. | Avg. |
| ----------- | --------- | --------- | --------- | ------------ | --------- | ------------ | --------- | --------- | --------- |
| StyAdv [10] | 95.82 | 61.73 | 88.33 | **75.55** | 94.85 | 88.57 | 47.73 | 26.97 | 72.45 |
| FLoR [49] | 96.18 | 61.75 | 89.23 | 72.80 | 95.28 | **90.41** | 49.52 | 26.71 | 72.74 |
| Ours | **96.28** | **64.26** | **89.25** | *73.24* | **95.53** | *90.13* | **53.09** | **27.72** | **73.69** |
| 1shot | CUB | Cars | Plac. | Plan. | Crop. | Euro. | ISIC | Ches. | Avg. |
| ----------- | --------- | --------- | --------- | ------------ | --------- | --------- | --------- | --------- | --------- |
| StyAdv [10] | 84.01 | 40.48 | 72.64 | **55.52** | 81.22 | 72.15 | 33.05 | 22.92 | 57.75 |
| FLoR [49] | 84.60 | 40.71 | 73.85 | 51.93 | 81.81 | 72.39 | 34.20 | 22.78 | 57.79 |
| Ours | **85.48** | **43.45** | **74.49** | *52.58* | **84.02** | **74.35** | **34.92** | **23.19** | **59.06** |
As can be seen, our model still achieves state-of-the-art average performance across all 8 datasets.
Finally, we appreciate the inspiring comments again and will thoroughly revise the paper accordingly. We hope our explanations have answered your questions.
Pdf: /pdf/ffb3e4879f5dd364788d76898d0f4ea320fb0f24.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Code Repair with LLMs gives an Exploration-Exploitation Tradeoff | Accept (poster) | Summary: This paper studies the code refinement problem: given a candidate program and the reason why the program fails to satisfy the user specification, an LLM is called to generate an improved program. The paper frames this as an arm-acquiring bandit problem and solves this using Thompson sampling. The evaluation is performed on three code generation tasks: competitive programming, visual program reasoning, and loop invariant generation. The results show that while the proposed approach solves only moderately more tasks than baseline given a large-compute limit, it is significantly faster to achieve a certain performance.
Strengths: The paper is very well-written and easy-to-follow. I thank the authors for providing the details regarding the evaluation tasks and the prompts. Moreover, I appreciate the paper’s message on emphasizing the importance of developing better base models over constructing better refinement strategies. While it sounds like a negative result for the paper, it is definitely useful for the community.
Weaknesses: REx seems like a straightforward application of Thompson sampling. Moreover, it only gives moderately improvement over baseline search methods in a large-compute setting. I do appreciate simplicity when it is combined with effectiveness, but this is not the case for REx. Therefore, I consider the technical contribution of this paper rather thin.
Moreover, a major selling point of the paper is to study an exploration-exploitation tradeoff. While the paper provides some preliminary study at Lines 179-182, I believe a more thorough analysis is needed for full understanding.
Another limitation, as also pointed out by the paper, is that only GPT-4 is studied. Would REx give better results in the large-compute setting when the self-repairing capability of the base model is weaker and the search strategy might play a more important role?
Technical Quality: 3
Clarity: 3
Questions for Authors: Please consider addressing the points raised in the “Weakness” section.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The paper has sufficiently addressed the points concerning limitations and societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and for the suggestions. We have run new experiments that we really believe can address your concerns. Please see below.
> only GPT-4 is studied. Would REx give better results in the large-compute setting when the self-repairing capability of the base model is weaker and the search strategy might play a more important role?
Thank you for this idea. Last week we ran other (weaker) language models and discovered that the advantage of REx is sometimes much more pronounced: for example, on GPT-3.5/APPS-competition, our method solves ~40% more problems (relative improvement) than the second-best approach.
Thank you again for suggesting this, which we think significantly strengthens the paper by showing more models as well as incidentally discovering cases where REx makes a much bigger difference than we showed in the original submission.
> Moreover, it only gives moderately improvement over baseline search methods in a large-compute setting. I do appreciate simplicity when it is combined with effectiveness, but this is not the case for REx. Therefore, I consider the technical contribution of this paper rather thin
We hope you can reconsider this in light of the above result from the experiment you suggested. We also hope the cost reduction and increased hyperparameter robustness can be seen as technical contributions.
> Moreover, a major selling point of the paper is to study an exploration-exploitation tradeoff. While the paper provides some preliminary study at Lines 179-182, I believe a more thorough analysis is needed for full understanding.
We'll include videos we made recently of REx searching through refinement trees, which clearly show the balancing of exploration and exploitation. We also provided a theoretical (mathematical) analysis in L120-L132/Fig.2.
Please let us know if you have further questions.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for running the experiments! The rebuttal has addressed my main concerns. Therefore, I am raising my score from 4 to 5. I hope the authors could consolidate the experiments on more LLMs, the application of REx to more challenging tasks, and the analysis of the exploration-exploitation tradeoff. Including these results would help strengthen the paper. | Summary: This paper explores a variety of code generation tasks, taking the approach of using a LLM to generate solutions, and conditioning the generation of each solution on the repair of a previously generated solution. It frames this as an arm-acquiring bandit problem, where each solution generated is an arm, and pulling an arm means refining a previous solution with an LLM (specifically, with GPT-4). The paper applies Thompson Sampling to this problem; specifically the paper's method is to use a heuristic-driven prior for which solution to refine at each step (i.e. the fraction of test cases passed by that solution), and then to penalize a solution each time its refinement does not lead to a perfect solution. The method is conceptually simple, and results in stronger performance (i.e. more problems solved; fewer model calls used) than a few baseline approaches.
Strengths: The method presented is conceptually simple, and is easy to implement and adapt to new tasks. Therefore this method can readily be put into practice by practitioners. (However, see weakness no. 1).
The connection to bandit problems is sound, and potentially a valuable framing of the problem. (However, see weakness no. 3.) With regard to significance, making this connection could suggest a wide range of approaches from bandit literature. That said the paper does not explore this potential benefit of the framing beyond its application of Thompson Sampling.
The range of tasks considered is another strength of the paper, and a key strength of the paper is that the method demonstrates performance improvements on three of the four tasks (the exception being the easiest APPS problems). The variance of the results on NLI and ARC is high though, so while the method is an improvement, the improvement is not substantial or guaranteed.
The modesty/honesty of the limitations section is refreshing too, providing clear commentary on the biggest limitations of the work.
Weaknesses: I noted as the first strength listed that the method can readily be put into practice by practitioners, but I must couch this assessment: the problem statement and method assume access to an efficient checker for whether the task is solved, which restricts the space of tasks to which the method can be applied.
A weakness of the paper is that only GPT-4 is tested. As the paper acknowledges, GPT-4 is among the strongest models for editing code. It is also among the more expensive models to run. By only evaluating the method with GPT-4, we are left to wonder about the cost tradeoffs of using smaller cheaper models with more samples vs this large model with fewer samples. Since one of the stated goals of the paper is to reduce cost (the paper states this as the goal of reducing the number of model calls), this would be valuable to understand. Since the results are high variance and of modest effect size, understanding how the method's performance varies across models would be valuable.
It is not clear that single-sample-refinement is the best framing of the problem; there are lots of ways to prompt a model beyond refinement from a single prior solution, and the paper does nothing to suggest that single-solution-refinement is a promising strategy. I think this is the biggest weakness of the problem statement; it assumes a rather rigid set of ways to apply LLMs to solving these programming tasks, and then optimizes within that rigid constraint, without justifying adequately that it is a valuable constraint.
---
Overall, given the constraints of the problem statement, the method presented is a step forward in terms of performance, formulating an idea from bandits for this program synthesis setting and achieving improved results. A key aspect of the paper's contribution is that the paper frames the search tree as infinite width and infinite depth; I cannot meaningfully comment of the novelty of this framing, though given the limited search budgets of a few hundred LLM calls the claim of infinite width and infinite depth doesn't seem critical. The paper is written clearly and the method presented is simple to follow and implement. The main restriction on the significance of the work is that the constraints made by the problem statement -- that the method works by iteratively refining a single preexisting solution at a time -- are overly constraining and not representative of the range of ways in which people apply LLMs today.
Technical Quality: 3
Clarity: 2
Questions for Authors: In the related work section you state that it is not possible to apply MCTS or any of its standard variants to the problem statement out-of-the-box. Could you elaborate on (or refine) this position, and comment on whether small modifications would be sufficient to apply MCTS to this problem?
Since the LLM is shown the full specification (Page 3, eq (4)), there is risk of it generating solutions that satisfy the specification literally without generalizing to satisfy the intent of the specification. A simple example of this would be if the model writes special case handling for each of the constraints in the specification, rather than capturing the underlying problem to be solved. Do you observe this in practice at all?
How do methods that fit the problem statement, i.e. methods that operate by only refining one solution at a time to produce new solutions, compare with other approaches to using LLMs to solve these tasks. Showcasing what other approaches have been applied and how they perform, or alternatively making a strong case that this particular formulation is a good choice, would improve the paper.
At Line 92 you consider an alternative heuristic of edit-distance to a target output. Am I correct in understanding that having a target output available for the heuristic would obviate the need for applying the method?
One limitation of the approach is that as information comes in about proposed solutions, the system does not use that to learn anything about other previous solutions, even if they are heavily related. Similar to weakness 3 and question 3, approaches that leverage this source of information sit outside the problem statement. Could you comment on the relevance of such approaches, either comparing with them or justifying their absence?
nit: Figure 1, Right: I would encourage you to label the outgoing edges "exploit" and "explore" rather than labeling the nodes or incoming edges as is currently done. This is because the current figure makes it look as if generating labelled nodes was done as exploitation vs exploration, where in fact it is refining those nodes that would be exploitation or exploration.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The authors address the limitations of the work in Section 7, Limitations. They explain the effect of only testing their approach on GPT-4, and discuss the limited effect size of their approach at solving more problems overall.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the helpful review! Please see below for our responses, and see the global response PDF for new experimental results motivated by your suggestions.
> only GPT-4 is tested
Thanks for the suggestion of testing other models. Please see the global response PDF for new results on other LLMs. Although the results are still coming in and the absolute numbers are somewhat different, the overall qualitative conclusion is the same, with one exception: we've discovered that our method seems to help a lot with cheaper models, which means it might be more broadly applicable than we originally sold it as.
> you state that it is not possible to apply MCTS... Could... small modifications would be sufficient to apply MCTS to this problem?
While such modifications could likely be invented in principle, they would probably be more complicated than REx, which is considerably simpler than MCTS despite capturing MCTS-esque dynamics. However, we'd be happy to try any specific modifications you suggest next week during the discussion period.
> Since the LLM is shown the full specification... there is risk of it generating solutions that satisfy the specification literally without generalizing to satisfy the intent of the specification
We evaluate on holdout inputs for ARC. For loop invariants we use a formal verifier to check that the invariant actually holds for all possible inputs. APPS, unfortunately, does not come with designated holdout tests, and conventionally has been evaluated without them [e.g. Olausson ICLR '24; Zelikman NeurIPS '23]. However, APPS averages 27.6 tests/problem, so it would be very hard to "overfit" without recovering the true algorithm, and in manually inspecting 15 program solutions we observe no memorization of isolated test-cases. The revision will mention all these issues.
> other [non-refinement] approaches to using LLMs to solve these tasks
We are happy to try any particular baseline that you suggest next week during the discussion period. We do think that refinement is an especially popular and simple approach, though, so we think focusing on refinement is a reasonable research strategy for this paper.
> Am I correct in understanding that having a target output available for the heuristic would obviate the need for applying the method?
We'd still need to apply our method because we would still need to generate a program that maximizes the heuristic value.
> One limitation of the approach is that as information comes in about proposed solutions, the system does not use that to learn anything about other previous solutions, even if they are heavily related. Similar to weakness 3 and question 3, approaches that leverage this source of information sit outside the problem statement. Could you comment on the relevance of such approaches, either comparing with them or justifying their absence?
This would be a fascinating direction to explore, for example by incorporating a distance metric between programs and using probabilistic kernel methods for the prior (such as a Gaussian process). But this requires a metric between program source code in order to determine what programs are similar, which introduces its own complexities and would likely be more brittle and sensitive to hyperparameters and/or require in-domain training data. Still, it could be an interesting direction to explore, and we will add it to the future work.
Thanks again for the review and for the suggestions of future work. Please let us know if you have any further questions.
---
Rebuttal Comment 1.1:
Comment: Thanks for your rebuttal. The inclusion of additional language models makes the results more compelling.
A key remaining concern is that it is not clear that single-sample-refinement is the best framing of the problem; there are lots of ways to prompt a model beyond refinement from a single prior solution, and the paper does nothing to suggest that single-solution-refinement is a promising strategy compared to these alternatives.
Here are a couple examples of categories of alternatives: single-sample prompting techniques like scratchpad/Chain-of-Thought and all its variants, LLM-chain approaches that generate a single result, like ReAct and its variants (one can view these as improvements on how to refine, rather than what to refine), and refinement approaches like EvoPrompting that consider more than one sample at a time. | Summary: This paper proposes to improve the iterative code refinement process by prioritizing the “good” programs, where the goodness is defined by a heuristic estimator -- the program that passes the more test cases is better. To balance exploration (explore a lesser refined program) and exploitation (refine the best program), the paper formulates this problem as a bandit problem, and solves with Thompson Sampling. The proposed method achieves modest but consistent improvements across various program synthesis benchmarks (loop invariant, visual transformation program, competition problems).
Strengths: - The paper’s formulation of the iterative program refinement problem is insightful.
- The exploration-exploitation tradeoff is an important yet surprisingly overlooked aspect for effective code repair. This paper systematically analyzes different search policies.
- The proposed solution with bandit algorithms is very simple yet effective, and seems to be robust across various repair benchmarks.
Weaknesses: ### **Technique**
The scope of this paper is limited to solving small, isolated programming challenges given a relatively large compute limit. It is unclear whether such techniques can be applied to more realistic settings (where we cannot afford large numbers of full program samples). For example, to repair a bug in a repository, it might be more important to improve the code refinement step itself, either with better prompting or training.
It is also unclear whether similar results can be achieved using more advanced prompting techniques (e.g., few-shot or CoT).
Formulating the problem as a bandit problem seems a little unnatural, as the algorithm terminates as soon as a positive reward is observed. In other words, the accumulated reward is always 0 before termination. It’d be great if the paper can include more theoretical discussion regarding how this special assumption would affect the theoretical guarantee of the algorithm.
### **Evaluation**
The improvement of REx seems marginal.
- Compared with alternative search algorithms, REx often brings only marginal improvements given a large enough budget (Figure 4).
- Also, on one dataset (loop invariant), BFS (with the optimal hyperparameter) is even better than REx under a smaller sampling budget.
The authors should list the cost of baselines for fair comparison.
- In Figure 4, the existing LLM-based methods should be marked as data points in the plot, with their corresponding sample budget (if they use the same base LM).
According to Table 1 in appendix, the Greedy baseline seems to be very strong, even though the paper only considered two values for its single hyperparameter. This leads me to wonder whether such a simple baseline has more potential.
- The hyperparameter, specifically the heuristic value of the initial program, controls when to resample a new program instead of refining existing ones. This seems to be highly problem-specific. Instead of having a fixed value for all problems, can we maintain a (moving) average of all sampled solutions for one particular problem?
Also, the arms are not independent, as the refined program $\rho^\prime$ is correlated with the original program $\rho$.
- I am curious about whether it is possible to better incorporate the observations of the refined program(s) to update the posterior belief of the original program. Although the current reward (refined program $\rho^\prime$ satisfies all input-output pairs) is very sparse, we may use the heuristic metric $h(\rho^\prime)$ as an alternative.
- In fact, MAB has been applied for fuzz testing [a,b], where code mutation is performed instead of code refinement. [a] formulates the problem as hierarchical multi-armed bandits, and [b] also formulates the problem as arm-acquiring bandits. It could be an interesting line of related work to check.
[a] Ran, Dezhi, et al. "Badge: prioritizing UI events with hierarchical multi-armed bandits for automated UI testing." 2023 IEEE/ACM 45th International Conference on Software Engineering (ICSE). IEEE, 2023.
[b] Yang, Chenyuan, et al. "White-box compiler fuzzing empowered by large language models." arXiv 2023
### **Implementation**
Some implementation details are unclear. For example, what is the temperature for ARC and loop invariant experiments? Appendix A5 lists temperature=1.0 for APPS, is it the same for all considered benchmarks?
### **Minor**
Formula (11): $2+C+N_\rho$ should be $2+2C+N_\rho$
Figure-6: The x-axis is somewhat unclear. Does it represent the number of requests needed to achieve performance comparable to the best baselines? Please clarify.
Technical Quality: 3
Clarity: 3
Questions for Authors: Could you please explain and address the weaknesses?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thoughtful input and for your support. Please see below our responses.
> REx often brings only marginal improvements given a large enough budget (Figure 4)
We agree! REx isn't magic: it is simply a more hyperparameter-robust, cost-saving refinement policy that also modestly improves the number of solved problems overall. We're excited about REx because it seems to help generically across the board, so we're optimistic that it could have an impact for many researchers, which is why we've emphasized just how easy it is to implement in Python. (But please see our response to the next comment on where we've taken REx recently, and the global response for some cases where REx makes a big difference when using cheap models.)
> The scope of this paper is limited to solving small, isolated programming challenges given a relatively large compute limit
We have since applied REx to much harder problems involving programs with hundreds of lines of code. We will reference these results in the camera ready. In fact, the original reason we considered a relatively large compute limit was because we believe that these harder problems require many more rounds of refinement.
> the Greedy baseline seems to be very strong, even though the paper only considered two values for its single hyperparameter
We originally did this because greedy's hyperparameter is unique in that it has very few reasonable settings, as it corresponds to the heuristic value of an empty program. Setting it to e.g. 1 would mean that the search policy would never even try doing refinement; setting it close to zero means that it would only ever refine the initial program once. To double-check these intuitions we will rerun the greedy experiments with more hyperparameters.
> Also, on one dataset (loop invariant), BFS (with the optimal hyperparameter) is even better than REx with a smaller sampling budget.
The important point here is that you can't count on BFS always being the best: on a different dataset (APPS), it's actually the worst-performing method! We want methods that are robust across hyperparameters and robust across datasets, which means acknowledging that special hyperparameters can sometimes make a method *seem* superior on a particular dataset for a particular sampling budget. For loop invariants in particular, the box-and-whisker plots show performance across a range of hyperparameters, for which BFS actually tends to be worse than REx (Fig 4/6).
> It is also unclear whether similar results can be achieved using more advanced prompting techniques (e.g., few-shot or CoT).
REx is orthogonal to the prompting technique: Given a refinement prompt (e.g., few-shot or CoT), REx then works through an outer loop that repeatedly uses that prompt.
> what is the temperature for ARC and loop invariant experiments? Appendix A5 lists temperature=1.0 for APPS, is it the same for all considered benchmarks?
Yes, temperature=1 for all considered benchmarks.
> In fact, MAB has been applied for fuzz testing [a,b]...
That work is indeed very related! We will cite and discuss the papers you mention.
Thank you for your review and for your support. Please let us know if we can answer further questions.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: Thanks for answering all my questions! I'll be keeping my score of 6 and support the acceptance of the paper. | Summary: The paper identifies that the LLM refinement process can be formulated as an (arm-acquiring) non-contextual bandit problem, which can be solved optimally (in the limit) using principled bandit algorithms like Thompson sampling, in contrast to heuristic-based solutions. It applies this idea to three code refinement tasks and demonstrates empirical improvements from the approach.
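As a rough illustration of the kind of policy this summary describes, the sketch below implements Thompson sampling with Beta priors over which candidate program to refine next. The prior parameterization via `c` and the role of the heuristic `h` are assumptions for illustration, not necessarily the paper's exact formulas.

```python
import random

class RefinementBandit:
    """Each arm is a candidate program; pulling an arm means refining it."""
    def __init__(self, c=20.0):
        self.c = c        # exploration/exploitation hyperparameter (assumed)
        self.arms = []    # (program, heuristic value h in [0, 1])

    def add_arm(self, program, h):
        # Arm-acquiring: each refined program becomes a new arm.
        self.arms.append((program, h))

    def choose(self):
        # Thompson sampling: draw from a Beta prior shaped by each arm's
        # heuristic value, then refine the arm with the largest draw.
        samples = [random.betavariate(1 + self.c * h, 1 + self.c * (1 - h))
                   for _, h in self.arms]
        return max(range(len(samples)), key=samples.__getitem__)
```

A larger `c` makes the policy exploit high-heuristic programs more aggressively; a smaller `c` keeps the Beta priors flat and encourages exploration across arms.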
Strengths: 1. **Novelty**. The proposed formulation, although simple, is novel in the context of multi-turn LLM applications and provides insights beyond existing works. The proposed solution using Thompson sampling with Beta priors is straightforward but effective at improving efficiency.
2. **Writing and Clarity**. The paper was pleasant to read and provides appropriate context connecting prior works. There is some missing information detailed in weaknesses.
3. **Results**. Comprehensive experiments across three (widely varying tasks) demonstrating empirical effectiveness over simpler heuristic based exploration-exploitation baselines.
Weaknesses: 1. **Non-contextual Bandit formulation**. While the bandit formulation is clean and simple, it leaves more to be desired and makes simplifying assumptions. In particular, the bandit formulation neither contextualizes on the problem statement nor considers the "depth" of an arm as a special attribute. This seems different from expected behavior.
1.1 **Problem Statement**. The problem statement is directly linked with the complexity of the task. Optimal strategies for very challenging tasks would differ from those for easy tasks; for challenging tasks, more exploration across "width" might be expected.
1.2 **Depth of Arm**. It might be reasonable to have different priors for "arms" at different "depths". For example, pulling the "empty program arm" seems special compared against other arms.
At the same time, without explicitly modeling these contextual components, the proposed approach seems strong -- perhaps accounted for by a more "diverse" hyper-parameter search space (discussed further in the Questions section, point 1).
2. **Choice of LLM**. As authors mentioned, choice of the LLM might play a role in the performance of this work. It might be useful to study this axis further and see if the findings change by taking a weaker model like `gpt-3.5-turbo`.
3. **Invariant Task Grading**. The invariant task is a core task studied in the paper and the authors point out that they check
> being a sufficiently strong inductive invariant (i.e., precondition, induction, and postcondition).
The statement is not precise enough and needs to be further clarified. Additionally, is it possible for models to generate trivial invariants (e.g. - `True is True`) and fool the grader?
4. **Contamination**. The APPS benchmark is considerably older and was released before the cutoff date for newer GPT-4 models (the authors do not specify the GPT-4 version used here).
5. **Missing details**. There are some missing/hard-to-find experimental details in the Appendix. Particularly, what feedback is used for refinement of invariant task based on the types of mistakes it makes -- weak invariant, failing invariant, solver timeout.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Hyper-parameter variance. It seems that optimal hyper-parameters vary considerably across tasks (Table 1 in Appendix). Perhaps, it can be viewed that the hyper-parameter choice C allows better control over the exploration-exploitation search space allowing it to exhibit widely different behaviors -- accounting for the lack of contextual features discussion in weaknesses point 1?
2. **Connecting the Appendix**. There are many interesting discussions and details in the Appendix (e.g., A.2) which are not connected to the main paper. I would recommend that the authors add them (or at a minimum link them) to the main paper. Similarly, a lot of experimental settings are scattered throughout the Appendix and should be linked from the main paper via forward references, making them more accessible.
3. **Figure 3**. Figure 3, while it highlights the tasks considered, is perhaps considerably larger than necessary; the authors could truncate the examples and provide more experimental details and results in the main paper.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Thoughtful discussion on limitations of the approach in a.) allowing solving more problems and b.) differences arising from the choice of LLM is already provided.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the feedback and the supportive review. Below we answer your main questions.
> Choice of LLM. As authors mentioned, choice of the LLM might play a role in the performance of this work. It might be useful to study this axis further
Great idea! In the global response we have attached a PDF showing results for GPT3.5, Llama, and Claude. Although the absolute numbers are different, the qualitative outcomes are highly similar, and in fact, the advantage of REx seems larger for gpt3.5.
> Non-contextual Bandit formulation. While the bandit formulation is clean and simple, it leaves more to be desired and makes simplifying assumptions. Particularly, since the bandit formulation neither contextualizes on the problem statement nor considers "depth" of the arm as a special attribute. This seems different from expected behavior
This would be interesting to explore. Conditioning on depth or program/problem embeddings could be very promising, but would introduce more free parameters. We’ll add it to the discussion section as a future direction.
> The invariant task is a core task studied in the paper and the authors point out that they check "being a sufficiently strong inductive invariant (i.e., precondition, induction, and postcondition)." The statement is not precise enough and needs to be further clarified. Additionally, is it possible for models to generate trivial invariants (e.g. - True is True) and fool the grader?
It is not possible to "fool" the verifier with trivial invariants, which we will clarify by revising the main text to explain precondition/induction/postcondition. Briefly, the *precondition* means that the invariant holds at the beginning of the loop; *induction* means that when the invariant is true, it stays true after another loop iteration; and *postcondition* means that if the invariant is true and the loop terminates, then the assertions after the loop are satisfied. `True is True` would not satisfy the postcondition constraint.
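To make the three conditions concrete, here is a hypothetical toy checker (not the authors' verifier) for the loop `while x < n: x += 1`, brute-forcing the checks over a small bounded integer domain. Note how the trivial invariant passes the precondition and induction checks but fails the postcondition check, so it cannot fool the grader.

```python
# Specification of the toy loop `while x < n: x += 1` (illustrative only).
def pre(x, n):   return x == 0 and n >= 0   # state on loop entry
def guard(x, n): return x < n               # loop guard
def post(x, n):  return x == n              # assertion after the loop

def is_inductive_invariant(inv, bound=20):
    """Brute-force the precondition/induction/postcondition checks."""
    states = [(x, n) for x in range(-bound, bound) for n in range(-bound, bound)]
    # 1) precondition: the invariant holds at the beginning of the loop
    precondition = all(inv(x, n) for x, n in states if pre(x, n))
    # 2) induction: one loop iteration (x -> x + 1) preserves the invariant
    induction = all(inv(x + 1, n) for x, n in states if inv(x, n) and guard(x, n))
    # 3) postcondition: invariant + loop exit implies the final assertion
    postcondition = all(post(x, n) for x, n in states if inv(x, n) and not guard(x, n))
    return precondition and induction and postcondition

assert is_inductive_invariant(lambda x, n: x <= n)    # a real invariant passes
assert not is_inductive_invariant(lambda x, n: True)  # the trivial invariant fails
```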
> Contamination. APPS benchmark is considerably older and released before the cutoff date for newer GPT-4 model
We'll revise to mention that this as an important concern for APPS. Originally we (incorrectly) assumed that, given the popularity of benchmarking GPT4 on APPS, there must be good reason to assume no contamination, but upon digging through the GPT4 tech report this week, we could find no such justification. Good catch.
In the global response we show new results on GPT3.5, which, deriving from GPT3, is a lot less likely to have seen APPS in pretraining.
> It seems that optimal hyper-parameters vary considerably across tasks (Table 1 in Appendix). Perhaps, it can be viewed that the hyper-parameter choice C allows better control over the exploration-exploitation search space allowing it to exhibit widely different behaviors -- accounting for the lack of contextual features discussion in weaknesses point 1?
Yes, it's possible that with a richer set of features for the context, there could be a universal optimal hyperparameter setting. We wanted to explore many hyperparameters to understand the sensitivity of different methods, and REx was almost always the best in terms of both median and max hyperparameter settings.
> Missing details. There are some missing/hard-to-find experimental details in the Appendix. Particularly, what feedback is used for refinement of invariant task based on the types of mistakes it makes... There are many interesting discussions and details in the Appendix (e.g. A.2) etc which are not connected in the main paper. I would recommend the authors to add them (and at a minimum link) to the main paper. Similarly, lot of experimental settings are scattered throughout the Appendix which should be linked from the main paper via forward references making them more accessible.
Thank you for suggesting these improvements. We are revising to incorporate this feedback, especially adding links to the appendix from the main text.
Thanks again for your helpful feedback and please let us know if there are any further questions we can answer.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. I will maintain my rating. | Rebuttal 1:
Rebuttal: Multiple reviewers raised the concern that we only evaluated our method on GPT4.
To address this concern we are in the process of running our experiments on GPT3.5, Llama3, and Claude3. The attached PDF shows preliminary in-progress results.
Although the results are still preliminary, the advantage of REx seems like it might be more pronounced for some weaker models such as gpt3.5: speculatively, this may be due to a "saturating" effect when the model is powerful enough. We think this makes the work stronger because it reveals a regime, which we had not previously discovered, where our method especially shines. Thank you to the reviewers who suggested these experiments.
Pdf: /pdf/2eee7740e46bb29144a7c5fe16d3347044cd3e88.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
LookHere: Vision Transformers with Directed Attention Generalize and Extrapolate | Accept (poster) | Summary: This paper proposes LookHere, a novel positional encoding method for Vision Transformers for dealing with high-resolution images. Specifically, LookHere explicitly constrains attention heads to attend in certain directions via 2D masks. With comprehensive experiments, LookHere demonstrates strong performance across different benchmarks. It also achieves better performance than existing positional encoding methods on high-res images.
Strengths: 1. The idea is very clear. High-res images introduce new challenges for positional encoding due to the mismatch between training and inference time. By introducing 2D attention masks, we can assign each head a direction, so that the heads obtain an explicit bias and know how to attend on high-res images.
2. The experiments are very comprehensive and strong, and well demonstrate the effectiveness of the proposed method.
3. This paper is well-written and in a very good shape.
Weaknesses: 1. Actually, the authors have clearly discussed the weaknesses of their method, e.g., hand-designed masks and the single scale of ViT. Besides that, have the authors considered a discussion of ConViT [50]? ConViT introduces a convolutional bias to attention heads, which also restricts each head to only attend to certain areas.
2. (Minor) It would be more promising to see if this technique can be applied into multi-modal LLMs.
Technical Quality: 4
Clarity: 4
Questions for Authors: See the weakness.
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review. In particular, their recognition of our clear ideas, comprehensive experiments, and writing, as well as suggestions to improve the paper. The review asks two questions, which we address in this rebuttal.
$\textbf{Q1)}$ Have you considered a discussion with ConViT [50]?
$\textbf{A1)}$ We agree there are some conceptual similarities between LookHere and ConViT — e.g., local and directional inductive biases. In ConViT, they arise from carefully initializing relative position encoding such that attention behaves like convolutions early in training. ConViT can learn to overcome the initially restricted views via learnable gating. In our estimation, ConViT is most similar to RPE-learn, with a clever and effective initialization scheme. We thank the reviewer for raising this point, and we will add a short discussion of ConViT to our final paper, should it be accepted.
$\textbf{Q2)}$ Can you apply this technique to multi-modal LLMs?
$\textbf{A2)}$ We do not have the resources to train multi-modal LLMs, for instance vision-language models (VLMs) with LookHere position encoding — at least during this rebuttal period. Given our extensive experiments, we fully expect that LookHere can significantly improve the extrapolation ability of VLMs. We also believe LookHere may improve VLMs in other ways. For example, compositional reasoning is a significant limitation in VLMs [R1, R2, R3]; e.g., they may find it challenging to differentiate between “a bed to the left of a dog” and “a dog to the left of a bed.” LookHere explicitly encourages ViTs to learn direction-aware representations since each head attends to a specific direction. We appreciate the reviewer’s suggestion and are excited to bring explicit directional inductive biases to VLMs in future work!
[R1] “SugarCrepe: A benchmark for faithful vision-language compositionality evaluation”, Hsieh et al, NeurIPS 2023
[R2] “Image Captioners Are Scalable Vision Learners Too”, Tschannen et al., NeurIPS 2023
[R3] “CREPE: Can Vision-Language Foundation Models Reason Compositionally?”, Ma et al, CVPR 2023
---
Rebuttal Comment 1.1:
Title: Official Comment from Reviewer
Comment: Thanks for the authors' rebuttal. They have addressed my concerns. | Summary: This paper introduces Lookhere, a novel mask-based positional encoding designed to address the performance degradation of ViT when resolution changes. Lookhere restricts the receptive field of each head in the attention mechanism and enhances the diversity of information perceived by each head. Extensive experiments have demonstrated the effectiveness of Lookhere. Compared to other positional encodings, Lookhere has significantly better resolution scalability.
Strengths: 1. The author's writing and illustrations are very clear, making it easy to understand and follow the method.
2. The authors conducted numerous experiments, thoroughly validating the effectiveness of Lookhere.
3. The issue explored by the authors, namely the resolution generalization of vision models, is a very important but rarely investigated area. The authors' work could potentially guide future research in this direction.
Weaknesses: In summary, I believe this is a very solid work with no obvious issues. I have only one personal concern. From Figure 6, it can be observed that the essence of the proposed positional encoding seems to be limiting the receptive field of each token, so that when the resolution changes, the number of tokens each token attends to does not change too drastically. However, this approach seems to reduce the model's overall receptive field. As a result, the fine-tuning results at a resolution of 384 do not show a particularly significant advantage compared to other positional encodings. I suspect that as the fine-tuning time increases, this advantage may disappear or even be worse than RoPE. Could the authors fine-tune for a longer period (for example, 30 epochs) to verify that this approach does not negatively impact the model's accuracy?
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weakness
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review. In particular, their recognition of our writing and illustrations, extensive experiments, and the importance of resolution generalization in ViTs, as well as suggestions to improve the paper. The review lists one concern, which we address in this rebuttal.
$\textbf{Q1)}$ Will the 2D masks that limit each token’s receptive field harm performance if finetuned for longer, relative to 2D-RoPE?
$\textbf{A1)}$ First, a clarification: each of the eight directed heads are restricted to attend in a given direction, but the concatenation of all heads, including the four unrestricted heads, results in each token attending to the entire image. Second, the primary design motivation and performance advantage is LookHere’s extrapolation ability. We view LookHere’s other features, like performance at the training resolution and adversarial robustness, as an outcome of our 2D masks acting like a regularizer by encouraging attention diversity; in fact, we notice that LookHere consistently achieves higher training loss and lower validation loss than 2D-RoPE (which is the desired outcome of a regularizer).
Regarding finetuning performance at 384, we believe LookHere’s ~1% performance advantage on ImageNet is significant (many papers have been published at NeurIPS showing a 1% improvement on ImageNet). That being said, this advantage at 384 might be due to LookHere’s inductive biases that could potentially harm performance when training for longer — as the reviewer points out. As requested by the reviewer, we finetune LookHere-90 and 2D-RoPE for 30 epochs on ImageNet at 384x384 px. The gap narrows significantly; LookHere-90 achieves 83.18% / 88.31% and 2D-RoPE achieves 83.18% / 88.00% on ImageNet-Val / ReaL. Interestingly, LookHere-90 achieves comparable results — 83.08% / 87.99% (Table 5 in our main submission) — after 5 epochs of finetuning, underscoring LookHere’s sample efficiency. Since finetuning on ImageNet for 30 epochs at 384x384 is expensive — roughly 65 A100 hours or 90% the cost of training for 150 epochs at 224x224 — we firmly believe the ability to quickly finetune models at higher-resolutions is significantly valuable to users.
However, the reviewer raises an excellent question: Do LookHere’s inductive biases achieved via attention masks and distance penalties, that improve ViT sample efficiency and enable extrapolation, eventually limit its performance? To answer this question we train LookHere-90 and 2D-RoPE models from scratch for 600 epochs using function matching [R1] with a highly accurate teacher. LookHere-90 achieves 84.94% / 89.39% and 2D-RoPE achieves 85.06% / 89.39% on ImageNet-Val / ReaL. These results are around the empirical upper bound of ViT-B/16 models trained on ImageNet-1k at 224x224 px. Thus, we demonstrate that LookHere’s inductive biases do not limit its performance. We also find that this LookHere model outperforms this 2D-RoPE model by 15% when tested on ImageNet-Val at 1024x1024 px — maintaining a vast extrapolation advantage.
[R1] “Knowledge distillation: A good teacher is patient and consistent”, Beyer et al., CVPR 2022
---
Rebuttal Comment 1.1:
Title: final review
Comment: Thanks to the authors for the rebuttal. My concerns are well addressed. I have raised my score to 7. | Summary: The authors explore an alternative to existing positional encoding methods for vision transformers. Within attention operations, 2D attention masks (subtracted from the attention matrix, i.e., the key-query inner product, prior to softmax) limit feature mixing to fixed fields of view pointed in different directions. ViTs resulting from the proposed method exhibit improved performance across diverse tasks, especially when extrapolating to input image dimensions larger than the training data. The authors also contribute a new high-resolution ImageNet dataset variant for evaluation.
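A rough sketch of the mechanism this summary describes, with assumed shapes and a toy 1D "attend only leftwards" mask rather than the authors' actual 2D implementation: an additive mask on the key-query logits, applied before the softmax, zeroes out attention to blocked positions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def directed_attention(q, k, v, mask):
    # q, k, v: (tokens, dim). mask: (tokens, tokens), 0 where attention is
    # allowed and a large negative value where blocked; it is added to the
    # key-query logits prior to the softmax, so blocked weights become ~0.
    logits = q @ k.T / np.sqrt(q.shape[-1]) + mask
    attn = softmax(logits)
    return attn @ v, attn

# One head restricted to "look left" on a row of 4 patches:
# query i may attend key j only when j <= i.
n = 4
mask = np.where(np.arange(n)[None, :] <= np.arange(n)[:, None], 0.0, -1e9)

rng = np.random.default_rng(0)
q, k, v = (rng.standard_normal((n, 8)) for _ in range(3))
out, attn = directed_attention(q, k, v, mask)
```

Concatenating several such heads, each with a differently oriented mask, would give the directed fields of view the paper describes while unrestricted heads keep global coverage.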
Strengths: 1. The paper is well written with clear motivation, thorough discussion of prior works, and detailed explanation of methodology.
2. The proposed method is simple and elegant, providing better OOD performance to ViTs. This can be highly useful to the community.
3. Rigorous experimentation including hyper-parameter tuning to all prior works on a common training setup.
4. Extensive evaluations and ablations to establish usefulness of proposed method and to highlight its various interesting aspects.
5. Useful high-res ImageNet dataset that is already released anonymously (in appendix)
Weaknesses: 1. **Translation Equivariance:** Authors claim this to be a result of proposed method. However, this is not proved theoretically or demonstrated through experiments. Please explain this in method section, compare to CNNs, and provide experiments to quantify this is possible. In case I missed something related from appendix, please highlight it better in main paper.
2. **[Minor] Adversarial Attacks:** Only FGSM - maybe try newer methods like AutoAttack?
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. **Learning Parameters**: Have you tried learning the slope values? And maybe even the entire attention masks? These could be learned using some SSL setup like MAE.
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review. In particular, their recognition of our writing, proposed method, rigorous experimentation, extensive ablations, and ImageNet-HR, as well as suggestions to improve the paper. The review lists two concerns, which we address in this rebuttal.
$\textbf{Q1)}$ Is LookHere translation-equivariant?
$\textbf{A1)}$ LookHere masks and biases are translation-equivariant w.r.t. patches. However, the model’s representations are not translation-equivariant due to patchification and edges. In our view, the motivation for translation-equivariant position encoding follows from the motivation for translation-equivariant convolutions — i.e., we want the feature map of an object or a region to be insensitive to its location within the image. This reasoning motivated CNN-ViT hybrids that only encode positions via convolutions [43], also achieving translation-equivariance.
To show that LookHere is translation-equivariant, we may consider two arbitrary patches at P1(x1, y1) and P2(x2, y2); then LookHere’s contribution to the attention matrix from query P1 to key P2 is A. Now impose a patch-level translation T such that the two patches are at P1’(x1 + Tx, y1 + Ty) and P2’(x2 + Tx, y2 + Ty); then LookHere’s contribution to the attention matrix from P1’ to P2’ is B. Here A = B because LookHere’s two components depend only on the relative positions of P1' and P2', which are maintained under translation — i.e., masks are a function of the direction between query-key pairs, and biases are a function of the Euclidean distance between query-key pairs. A formal proof is straightforward from here, which we are happy to provide should the reviewer wish to see it. We thank the reviewer for this question and will elaborate on LookHere’s translation-equivariance in our main paper if it is accepted. We feel this may be valuable to readers.
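The argument can be illustrated with a tiny numeric check. The two functions below are hypothetical stand-ins (toy direction mask and distance penalty, not the authors' code); the point is that any positional contribution depending only on the query-key offset is unchanged by a shared patch-level translation.

```python
import math

def direction_allowed(q, p, heading=(1, 0), fov_deg=90):
    # Toy directional mask: is key p within this head's field of view from q?
    # Depends only on the offset (dx, dy), not absolute positions.
    dx, dy = p[0] - q[0], p[1] - q[1]
    if (dx, dy) == (0, 0):
        return True
    angle = math.degrees(math.atan2(dy, dx))
    center = math.degrees(math.atan2(heading[1], heading[0]))
    diff = abs((angle - center + 180) % 360 - 180)
    return diff <= fov_deg / 2

def distance_penalty(q, p, slope=0.5):
    # Toy distance bias: a function of the Euclidean distance of the offset.
    return -slope * math.hypot(p[0] - q[0], p[1] - q[1])

# Both contributions are identical for (q, p) and their shared translation:
q, p, t = (2, 3), (5, 7), (10, -4)
shift = lambda pt: (pt[0] + t[0], pt[1] + t[1])
assert direction_allowed(q, p) == direction_allowed(shift(q), shift(p))
assert distance_penalty(q, p) == distance_penalty(shift(q), shift(p))
```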
$\textbf{Q2)}$ Why not try newer adversarial attack methods?
$\textbf{A2)}$ We chose FGSM because it is the most well-known and is extremely fast — allowing us to attack all 80 ViTs on 50k Val images, using multiple strengths in a reasonable amount of time. During this rebuttal period, we first considered AutoAttack’s official implementation but found that their fastest attack takes ~2 hours (on an RTX 4090) per model and strength. Alternatively, we test all 10 position encoding methods against PGD attacks during this rebuttal period. Averaged across 4 attack strengths, LookHere-180 / 90 / 45 outperforms 2D-RoPE by 3.5% / 2.8% / 4.3%, and 1D embeddings by 6.8% / 6.2% / 7.6%. Please see Table 1 in our rebuttal PDF for full results. We thank the reviewer for this question and agree that more sophisticated attacks will strengthen our claim of LookHere’s adversarial robustness — space-permitting, we will add these new results to the appendix or our main paper.
The reviewer also asked about learning, rather than hard-coding, components of LookHere. In preliminary experiments, we found that learnable slopes did not improve performance at 224x224 px. During this rebuttal period, we re-ran this experiment and confirmed that learnable slopes perform roughly on-par with fixed slopes at 224x224 px and hurt performance when extrapolating to 1024x1024 px (see Table 3 in our rebuttal PDF). We thank the reviewer for this question and will add it to our ablations section.
We did not try learning masks with LookHere, mainly because other learnable relative position encoding methods do not extrapolate well. For instance, RPE-learn could theoretically learn LookHere’s masks by biasing attention scores with large negative numbers whose entries become zero after the softmax. In preliminary experiments, we also tried another learnable RPE method that did not perform well; this method calculates the relative positions between query-key pairs and processes them with an MLP, which outputs attention biases (inspired by position encoding used in hierarchical ViTs [54]). However, we are excited for the community to build on LookHere and help us develop better position encoding methods — and we welcome learnable solutions!
---
Rebuttal Comment 1.1:
Title: Final Review
Comment: The authors address all concerns; no changes to my original rating of 8 (strong accept).
Highly encourage authors to release results on stronger adversarial attacks at a later point in time as feasible. | Summary: The Vision Transformer is known for its constrained scalability across image resolutions, and this work is designed to address the generalization ability of ViTs on high-resolution images. Specifically, the authors propose LookHere, a drop-in replacement for the positional encoding of standard ViTs, which restricts attention heads to fixed fields of view oriented in different directions using 2D attention masks. It offers translation-equivariance, guarantees attention head diversity, and minimizes the distribution shift encountered by attention heads during extrapolation. Extensive experiments validate the effectiveness of the proposed method.
Strengths: 1. Evaluation is extensive. The authors provide validations across different datasets, and tasks, and offer detailed analysis to demonstrate the effectiveness of the proposed approach.
2. The results seem promising. LookHere exhibits the best performance compared to other baseline methods.
3. The presentation is clear and the paper is easy to follow.
Weaknesses: 1. The generalization ability of LookHere to smaller-scale images remains unverified. Although the authors particularly focus on testing at high-resolution images, it would be better if LookHere could be generalized to smaller images as well.
2. The ablation study on the design choices seems missing. Although the authors provide abundant experiments and analysis, the ablation study on the specific designs is missing and their efficacy is unverified.
3. The size of the proposed ImageNet-HR is too small (5 images per class). The limited number of images in each class may not be representative enough and could have a strong bias implicitly.
Technical Quality: 3
Clarity: 3
Questions for Authors: Apart from the questions in weakness, the reviewer has one additional question:
Is there a particular reason that the authors did not compare LookHere with NaViT, which can also be tested at higher resolutions?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful review. In particular, their recognition of our extensive experiments, LookHere’s performance, and our submission’s presentation, as well as suggestions to improve the paper. The review lists three concerns, which we address in this rebuttal.
$\textbf{Q1)}$ Does LookHere generalize to smaller images?
$\textbf{A1)}$ Our main goal is to improve ViT performance on larger images, which is an exciting challenge with great potential impact, given the long trend of applying computer vision methods to higher-resolution imagery that contain more detailed scene information. Although deploying models on smaller images will hurt performance, some users will benefit from reduced computational costs. We thus thank the reviewer for raising this point and perform new evaluations to address it. We test all 10 position encoding methods on images ranging from 64x64 px to 208x208 px and all sizes in between (at 16 px increments). LookHere outperforms all baselines at every size, averaged across all 6 datasets. In particular, LookHere-45 outperforms 2D-RoPE by an average of 4.1% (top-1 accuracy) at 64x64 px. We display these 6 plots in Figure 1 of our rebuttal PDF, and we will add this Figure to the 10th page of our main paper, should our submission be accepted.
$\textbf{Q2)}$ Is there a missing ablation section?
$\textbf{A2)}$ We believe the reviewer missed our extensive ablations in the appendix. On page 25, we report the performance of 18 ablation runs. Specifically, we report performance on all 6 benchmarks at 224x224 px and 1024x1024 px. In our main paper, we only had the space to summarize the results of these experiments; please see lines 171 - 180. We will move some of these ablation results to our main paper if accepted. Our ablations demonstrate that our novel 2D direction masks and distance penalties are crucial to extrapolate effectively — and that LookHere is robust to both 2D direction mask types and slopes. We believe these ablations are a strength of our submission. Additionally, during this rebuttal period, we ran a 19th ablation (experimenting with learnable slopes), which reviewer cHUb enquired about.
$\textbf{Q3)}$ Is ImageNet-HR too small?
$\textbf{A3)}$ Our introduced ImageNet-HR dataset consists of 5k total images, with 5 images per ImageNet class; this is smaller than most ImageNet benchmarks. However, we do not believe it is too small.
First, although ImageNet-Val is substantially larger, we note that ImageNet-A and ImageNet-v2 comprise 7.5k and 10k images, respectively. Both are widely used (1K+ citations each). Second, we focused on quality over quantity, agreeing with other experts in the field; for instance, Lucas Beyer publicly encouraged the creation of “small, very high-quality test data [R1].” Although ImageNet-Val comprises 50k images, it contains significant annotation error and label ambiguity [4, 105]. For instance, one study estimates that 44% of the “mistakes” made by a ViT are actually correct predictions on mislabeled samples [R2]. Labeling quality has not been thoroughly studied for v2; however, we have observed annotation error and ambiguity via visual inspection. For ImageNet-HR, we manually selected and cropped high-quality and diverse images from Unsplash and flickr. Several rounds of quality control further reduced annotation error and ambiguity. Finally, to demonstrate that performance on ImageNet-HR is a meaningful measure of a model’s capability, we measure the correlation between ImageNet-HR / v2 and ImageNet-ReaL (a re-annotated version of ImageNet-Val). Across 240 paired evaluations, ImageNet-HR top-1 accuracy is highly correlated with ImageNet-ReaL top-1 accuracy ($R^2$=0.997), even more so than between ImageNet-v2 and ImageNet-ReaL ($R^2$=0.983). Please see Figure 3 of our rebuttal PDF.
We believe we’ve addressed all concerns raised by the reviewer. The reviewer also asked why we do not compare LookHere to NaViT, which we answer below.
NaViT is primarily a recipe to efficiently train ViTs on images with variable aspect ratios by processing non-square grids, i.e., variable grid-height and grid-width. It achieves this by “packing” multiple images into a sequence, preventing attention between images, and then creating a batch of these sequences to process. We believe this recipe, along with other innovative ViT training recipes, may be complementary to LookHere. Furthermore, NaViT does not see improved performance when testing on longer sequences than it saw during training (see Figure 10 b in [6]) — whereas LookHere does see improved performance when extrapolating. NaViT also introduces additive factorized position encoding — which is one of the seven baselines in our submission. We thank the reviewer for this question, and we will add some of this text to our main paper, should it be accepted, since we feel it may be valuable to readers.
Based on the reviewer’s question, we examined whether LookHere can also generalize to non-square images like NaViT. During this rebuttal period, we tested the 10 position encoding methods on ImageNet-Val by resizing the largest dimension to 384 and maintaining the native aspect ratio for each image. LookHere generalizes the most effectively to non-square images (see Table 2 in our rebuttal PDF).
[R1] Twitter: https://tinyurl.com/4azematv
[R2] “When does dough become a bagel? Analyzing the remaining mistakes on ImageNet”, Vasudevan et al., NeurIPS 2022
---
Rebuttal Comment 1.1:
Title: Post-rebuttal comment
Comment: The reviewer appreciates the detailed response provided by the authors. The concerns are addressed and the rating is upgraded to weak accept. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful comments and are pleased with the positive reception of our submission. In particular, we appreciate the recognition of our writing, presentation, and extensive experiments from all reviewers. Additionally, we appreciate the suggestions to improve our paper. Should our submission be accepted, we believe we will have space for these additions on a 10th page. There were no shared concerns among reviewers, so we formulate our rebuttal as replies to specific reviewers.
During this rebuttal period, we ran additional experiments that reviewers enquired about. These results can be found in a 1-page PDF attached to this comment.
Pdf: /pdf/f0fc68f62372d5954ce1cf4f166de7ff2c1b0b42.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Constrained Sampling with Primal-Dual Langevin Monte Carlo | Accept (poster) | Summary: This paper aims to solve the constrained sampling problem via a primal-dual method. The authors proposed a new sampling method, PD-LMC, and provided a detailed convergence analysis. Several numerical experiments were conducted to verify the sampling method.
Strengths: The structure of the paper is easy to follow. The authors discussed detailed background and related work. The authors also established solid convergence guarantees for their algorithms under different settings.
Weaknesses: 1. The primal-dual method for the constrained sampling problem is not novel; similar algorithms were also studied in [1]. I notice that PD Langevin in [1] requires the estimation of an expectation with respect to the sampling distribution, but this can be easily addressed by running many particles in parallel. Therefore, I take the theoretical analysis as the most important contribution of this paper. The weaknesses are listed as follows.
2. In the theoretical analysis, the authors did not show the rate of constraint violation, i.e., whether $\mu_k$ satisfies the constraints. This leads to the following point.
3. In Theorem 3.3, when there are equality constraints, $\mu^*$ is supported on a low-dimensional manifold while $\mu_k$ is not, so $KL(\mu_k\\|\mu^*)$ is not well defined. Hence I doubt the correctness of Theorem 3.3. I suggest the authors consider a metric that does not depend on the density ratio, such as the W2 or TV distance.
4. In the numerical experiments, the paper lacks comparisons with other methods, e.g., mirror-Langevin-related methods for sampling on a convex set, and the PD Langevin and Control Langevin of [1] on rate-constrained Bayesian models.
[1] Liu, Xingchao, Xin Tong, and Qiang Liu. "Sampling with trusthworthy constraints: A variational gradient framework." Advances in Neural Information Processing Systems 34 (2021): 23557-23568.
Technical Quality: 2
Clarity: 3
Questions for Authors: Please see weakness part.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The limitations are pointed out in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments on our results, particularly our theoretical guarantees. As the reviewer correctly notes, in contrast to [22] ([1] in the reviewer's comments), our analysis handles two levels of approximation: time- and space-discretization. We next address their questions.
**Q1** Indeed, as we clearly acknowledge in the paper, Algorithms 1 and 2 can be seen as approximations of the PD Langevin from [22]. But there are substantial computational and theoretical differences between these methods.
The dual variable updates in the algorithms from [22] require computing an expectation over the current particle distribution [see (10)], which is intractable. While we agree that it can be *approximated* by running many particles in parallel, this would result in a substantially higher computational cost than the stochastic updates in our paper. Indeed, we show PD-LMC (Algorithm 1) converges using a single-particle approximation.
Furthermore, it is not clear from [22] how these approximation errors impact the convergence of the algorithms since it only provides guarantees for exact dual updates (computing expectations). In contrast, our theoretical analyses exactly characterize the bias introduced by approximation errors (Proposition D.1) and show that they do not affect the convergence in the convex setting (Theorem 3.3). This is confirmed by our experiments. These results hold for discrete-time, expectation-free algorithms.
That being said, we acknowledge that we could have better illustrated the relative performance of PD Langevin and our stochastic version PD-LMC. We have addressed this oversight and provided an illustration of the performance of both schemes as the number of LMC particles $N$ grows for the one-dimensional truncated Gaussian and the rate-constrained Bayesian model (see Fig. 1 and 2 in the pdf attached to the global response). This hopefully illustrates that one can achieve essentially the same sampling performance as PD Langevin at a considerably lower computational cost.
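The single-particle updates discussed above can be sketched in a few lines. The following is a minimal, illustrative NumPy sketch under our reading of Algorithm 1 (one Langevin step with the tilted potential, then a projected stochastic dual-ascent step using that single sample); the toy target and constraint are not from the paper: we take $\pi = \mathcal{N}(0,1)$ with one inequality constraint $\mathbb{E}[g] \leq 0$ for $g(x) = 1 - x$, whose tilted optimum is $\mu^\star = \mathcal{N}(1,1)$ with multiplier $\lambda^\star = 1$ by complementary slackness.

```python
import numpy as np

# Single-particle PD-LMC sketch (our reading of Algorithm 1; illustrative).
# Target pi = N(0,1), i.e. f(x) = x^2/2, with constraint E[g(x)] <= 0 for
# g(x) = 1 - x (i.e. E[x] >= 1).  Tilted optimum: mu* = N(1,1), lambda* = 1.
eta, n_iters = 1e-2, 200_000
rng = np.random.default_rng(0)
noise = rng.standard_normal(n_iters)

x, lam = 0.0, 0.0
xs = np.empty(n_iters)
lams = np.empty(n_iters)
for k in range(n_iters):
    # Primal step: one Langevin update with potential U = f + lam * g,
    # so grad U(x) = x - lam.
    x = x - eta * (x - lam) + np.sqrt(2 * eta) * noise[k]
    # Dual step: projected stochastic ascent using the single sample x.
    lam = max(0.0, lam + eta * (1.0 - x))
    xs[k], lams[k] = x, lam

burn = n_iters // 2  # discard the first half as burn-in
print(np.mean(xs[burn:]), np.mean(lams[burn:]))  # both close to 1
```

Note that no expectation over the current particle distribution is ever computed: the dual update uses only the single current sample, which is the computational point made above.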
**Q2** The reviewer has a point: our results do not *directly* analyze the decrease rate of the constraint violation. Yet, our theoretical guarantees show that PD-LMC converges to $\mu^\star$, which is feasible by definition. Hence, Theorems 3.3 and 3.5 do address the issue of feasibility. Additionally, it is possible to use our results to explicitly control the constraint violation along iterations.
Consider the results in Theorem 3.3, namely (13):
$$\frac{1}{K} \sum\_{k = 1}^K \Big[ \mathrm{KL}(\mu\_{k} \| \mu^\star)+ \frac{m}{2} W\_2^2(\mu\_k,\mu^\star) \Big] \leq\frac{R\_0^2}{\eta K}+ \eta G^2 \bigg( 3 +\sum\_{k = 1}^K \frac{\mathbb{E}\_{\lambda}[\|\lambda\_k\|^2]+ \mathbb{E}\_{\nu}[\|\nu\_k\|^2]}{K}\bigg) \triangleq \Delta\_K.$$
Since $x\_k \sim \mu\_k$ and $\mathbb{E}\_{\mu^\star}[g] \leq 0$, we obtain that
$$\mathbb{E} \bigg[ \frac{1}{K} \sum\_{k=1}^K g(x\_k) \bigg]\leq \frac{1}{K} \sum\_{k=1}^K \int g d\mu\_k - \int g d\mu^\star\leq \frac{1}{K} \sum\_{k=1}^K \Big| \int g d\mu\_k - \int g d\mu^\star \Big|\leq \sqrt{\frac{1}{K} \sum\_{k=1}^K \Big| \int g d\mu\_k - \int g d\mu^\star \Big|^2},$$
where we used the $\ell\_1/\ell\_2$-norm relation.
If $g$ is bounded by 1, the summands are bounded by $TV(\mu\_k,\mu^\star)^2$. Using Pinsker's inequality, we therefore obtain
$$\mathbb{E} \bigg[ \frac{1}{K} \sum\_{k=1}^K g(x\_k) \bigg]\leq \sqrt{\frac{1}{2K} \sum\_{k=1}^K \mathrm{KL}(\mu\_{k} \| \mu^\star)}\leq \sqrt{\frac{\Delta\_K}{2}}$$
If $g$ is $1$-Lipschitz (and $f$ is $m$-strongly convex), we have
$$\Big| \int g d\mu\_k - \int g d\mu^\star \Big|^2\le W\_1^2(\mu\_k,\mu^\star) \le W\_2^2(\mu\_k, \mu^\star),$$
which implies
$$\mathbb{E} \bigg[ \frac{1}{K} \sum\_{k=1}^K g(x\_k) \bigg]\leq \sqrt{\frac{2 \Delta\_K}{m}}.$$
The same arguments hold for $h$.
Hence, our results can be used to control (in expectation) the constraint violation of ergodic averages along the PD-LMC trajectories. These behaviors are also observed in our experiments. For instance, Fig. 3 of the pdf attached to the global response shows the ergodic constraint slacks $\sum h(x\_k)/K$ for the equality constraints of the Bayesian stock market problem (Fig. 2 in the paper). Notice that by the end of the simulation their values are within $3 \times 10^{-3}$ of zero. We thank the reviewer for raising this point and will include these corollaries and discussions in the camera-ready version.
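As a purely illustrative sanity check of the Pinsker step used in the bounded-$g$ case above, one can verify $TV^2 \leq \mathrm{KL}/2$ numerically for unit-variance Gaussians, where both quantities are available in closed form (this check is ours, not part of the submission):

```python
import math

# Check TV(mu, nu)^2 <= KL(mu || nu) / 2 (Pinsker) for unit-variance
# Gaussians N(m1, 1) and N(m2, 1) with mean gap delta = |m1 - m2|:
#   TV = 2 * Phi(delta / 2) - 1   (TV = sup_A |mu(A) - nu(A)| convention)
#   KL = delta^2 / 2

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

for delta in [0.1, 0.5, 1.0, 2.0, 5.0]:
    tv = 2.0 * phi(delta / 2.0) - 1.0
    kl = delta ** 2 / 2.0
    assert tv ** 2 <= kl / 2.0, (delta, tv, kl)
print("Pinsker bound TV^2 <= KL/2 holds on all test gaps")
```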
**Q3** We respectfully, but strongly, disagree with this statement. We believe there may have been a misunderstanding arising from the fact that the constraints in (PI) are *distribution constraints* rather than *constraints on the support of $\mu$*. We refer the reviewer to Section 2.2 for a detailed discussion of the differences between these constraint types. In our case, imposing a moment constraint (even as an equality constraint) does not result in $\mu^\star$ being supported on a lower-dimensional manifold. Another way to see this is by noticing that $\mu^\star \propto \pi\, e^{-(\nu^\star)^\top h}$ [see Prop. 2.2(iv)]. Hence, for $\pi$ full-dimensional and $(\nu^\star, h)$ finite [which is the case, see Prop. 2.2(ii)], $\mu^\star$ will also be full-dimensional. For a concrete example, we refer the reviewer to the response to *Q2 of reviewer 4nfu*, where we derive an explicit solution for (DI) under an equality constraint.
**Q4** We have included results for the Mirrored Langevin in the two-dimensional truncated Gaussian as well as the PD Langevin from [22] for the rate-constrained Bayesian problem (Figs. 4 and 2 respectively in the pdf attached to the global response). We note again that the latter requires an explicit integration that is intractable and that using LMC chains to approximate it reduces to the settings of Algorithms 1-2, whose convergence is guaranteed by our results (Theorems 3.3 and 3.5). As expected, we therefore obtain the same results albeit at a considerable increase in computational complexity.
---
Rebuttal Comment 1.1:
Comment: Thanks for the feedback. Most of my concerns are addressed and I'm happy to raise the score. The rate of violation of constraints is an important proposition that should be added to the main text. I would like to mention that the proposed PD-LMC algorithm is the same as PD Langevin in [22] when the particle number $N=1$, while the theoretical analysis in this paper is solid and novel. | Summary: The paper studies constrained sampling schemes. The objective function is the KL divergence to a target distribution, while the constraint set is given by expectation equalities or inequalities. The authors rewrite the problem into a saddle-point formulation and study the Wasserstein gradient descent and L2 gradient ascent directions. This results in a particular Langevin dynamics with a dual ascent direction. The authors provide a convergence analysis of the proposed algorithm. Several numerical examples demonstrate the effectiveness of the proposed method.
Strengths: The authors clearly explain the constrained optimization problems. They apply the primal-dual gradient descent-ascent algorithm based on the Wasserstein space to solve the optimization problem. Convergence analysis is presented using tools in optimal transport.
Sampling from a convex set is a very good application.
Weaknesses: There is a lack of discussion of the literature on generalized optimization problems in Wasserstein spaces, e.g.:
Wang et al. Accelerated Information Gradient flow. Journal of Scientific Computing. 2022.
Wang et al. Information Newton's flow: second-order optimization method in probability space, 2020.
Tan et al. Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals, 2023.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. For the time update, the authors perform a forward Euler step for the primal and dual variables. Do the authors expect some advantages if one applies proximal steps instead? What is the main difference between analyzing the primal-dual algorithm in Wasserstein space and in the classical Euclidean space?
2. Can the authors provide some simple examples, such as Gaussian target distributions and linear moment constraints, to explain the main result? This could partially answer the first question.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: There are no limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their enthusiastic opinion of our paper. We next address their questions one-by-one.
**Weakness** The reviewer has a point. We focused on regular Langevin dynamics and its literature, but this paper is indeed part of the large line of work of sampling as an optimization in the space of probability measures. It is therefore connected (or could be extended) to many other schemes. We appreciate the suggested references and will include them in the manuscript.
**Q1.1** Since our main focus was on deriving and studying a computationally-friendly algorithm for the constrained sampling problem (PI), we did not consider proximal updates that are more expensive than the fully explicit Euler updates used in this paper (for both primal and dual variables). However, we acknowledge that investigating proximal steps is an interesting future direction and we believe that it could in fact lead to stronger theoretical guarantees (under additional assumptions) as is the case for gradient descent-ascent in Euclidean space (see, e.g., [40]). We will make note of this point in the revised version of the paper.
**Q1.2** To a large extent, our approach is akin to that used to study any optimization algorithm/dynamical system: we construct a Lyapunov function to bound the duality gap $L(\mu,\lambda^\star,\nu^\star) - L(\mu^\star, \lambda, \nu)$ [note from the saddle-point relation (8) that this is the right optimality measure]. The techniques used to bound this gap in Wasserstein space, however, are different.
Indeed, while the dual (DI) remains a (finite dimensional, non-smooth) optimization problem in Euclidean space, the primal (PI) is an (infinite dimensional, smooth) optimization problem in Wasserstein space. The proof must therefore account for the mixed nature of the Lagrangian. Additionally, Algorithm 1 performs stochastic updates in both the primal (step 3) and dual (steps 4-5). Hence, the potential $U$ used to update $x$ in step 3 is a random variable and we have to deal with the joint distribution $(x\_k,\lambda\_k,\nu\_k)$. This is an important distinction with the traditional LMC or when the dual updates (steps 4-5) are performed using exact expectations as in [22]. To address these issues, we instead use conditional laws to obtain our guarantees in expectation over realizations of $(\lambda\_k,\nu\_k)$.
**Q2** Consider a standard Gaussian target, i.e., $\pi \propto e^{-\\|x\\|^2/2}$, and the linear moment constraint $\mathbb{E}[x] = b$, for $b \in \mathbb{R}^d$. This can be cast as (PI) with $f(x) = \\|x\\|^2/2$ and $h(x) = b - x$ (no inequality constraints, i.e., $I = 0$). Clearly, the solution of (PI) in this case is $\mu^\star = \mathcal{N}(b,I)$. What Prop 2.2 claims is that rather than directly solving (PI), we can solve (DI) to obtain a Lagrange multiplier $\nu^\star$ such that $\mu^\star = \mu\_{\nu^\star}$ for $\mu\_{\nu}$ defined as in (5) (line 134).
We can show this is indeed the case here by doing the computations explicitly. Indeed, we have
$$
\mu^\star(x) = \mu\_{\nu^\star}(x) \propto \pi(x) e^{-(\nu^\star)^\top h(x)}
= \exp\big[ -\\|x\\|^2/2 -(\nu^\star)^\top (b-x) \big].
$$
Completing the squares we obtain
$$
\mu^\star(x) \propto \exp\big[ -\\|x - \nu^\star\\|^2/2 + \\|\nu^\star\\|^2/2 -(\nu^\star)^\top b \big].
$$
From the definition of the dual function in (7) (line 139), we can write (DI) explicitly as a ratio of normalizing factors, namely
$$
\nu^\star
= \text{argmax}\_{\nu}\ \log\Big( \frac{\int \pi(x)dx}{\int \mu\_\nu(x) dx} \Big)
= \text{argmax}\_{\nu}\ \log\Big( \frac{\int \pi(x)dx}{\exp(\\|\nu\\|^2/2 -\nu^\top b) \int \pi(x) dx} \Big).
$$
Immediately, we obtain
$$
\nu^\star = \text{argmax}\_{\nu}\ -\\|\nu\\|^2/2 +\nu^\top b = b.
$$
Note that, as we mention in the text, the dual problem is indeed a concave program. To conclude, observe that we indeed have
$$
\mu^\star(x) = \mu\_{\nu^\star}(x) \Bigm\vert\_{\nu^\star = b} \propto \exp\big[ -\\|x - b\\|^2/2 - \\|b\\|^2/2 \big]
\text{, i.e., $\mu^\star(x) = \mathcal{N}(b,I)$.}
$$
Our main results (Theorems 3.3 and 3.5) show that it is not necessary to first determine the Lagrange multipliers $(\lambda^\star,\nu^\star)$ to then sample from $\mu\_{\lambda^\star\nu^\star} = \mu^\star$. Indeed, the stochastic primal-dual Langevin methods in Algorithms 1--2 simultaneously determine $(\lambda^\star,\nu^\star)$ and sample from $\mu\_{\lambda^\star\nu^\star} = \mu^\star$. Additionally, they do so without explicitly evaluating any expectations.
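The closed-form maximization above can also be verified numerically by evaluating the dual function as the log-ratio of normalizing constants and maximizing over a grid. The sketch below is illustrative (one-dimensional, and the scalar value $b = 1.7$ is arbitrary):

```python
import numpy as np

# Numerical check of the worked example: for pi ∝ exp(-x^2/2) and the
# equality constraint E[x] = b, the dual function
#   d(nu) = log( ∫ pi dx / ∫ pi(x) exp(-nu (b - x)) dx )
# should be maximized at nu* = b.
b = 1.7
x = np.linspace(-20.0, 20.0, 40_001)
dx = x[1] - x[0]
pi = np.exp(-x ** 2 / 2)

def integral(f):
    return f.sum() * dx  # simple Riemann sum on a fine grid

nus = np.linspace(-5.0, 5.0, 2_001)
d = np.array([np.log(integral(pi) / integral(pi * np.exp(-nu * (b - x))))
              for nu in nus])
nu_star = nus[np.argmax(d)]
print(nu_star)  # ≈ 1.7
```

Analytically, the log-ratio reduces to $d(\nu) = \nu^\top b - \|\nu\|^2/2$, so the grid maximizer should land on (or next to) $b$.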
Please do not hesitate to let us know if something is not clear. If this explanation indeed clarifies some of our duality derivations, we consider including it in the appendix as a warm-up example.
---
Rebuttal Comment 1.1:
Title: Reply to authors
Comment: The authors carefully address my questions. I also like the example of Gaussian distributions. I suggest the publication of this paper. | Summary: In this work, the focus is on a constrained optimisation problem in the space of measures. Specifically, the goal is to obtain a distribution / samples from a distribution which is close in KL to a target distribution while also satisfying a set of statistical constraints. The paper discusses this somewhat atypical problem, and designs the PD-LMC method to generate approximate sample from such a distribution. Theoretical properties of PD-LMC are also presented, along with some numerical experiments to showcase the working of this method.
Strengths: ### Quality and Clarity
The paper reads well, with a clear description of the objective at hand and the necessary background to help the reader grasp the problem and the proposed solution. For someone used to the classical constrained sampling problem, the examples in Section 2.2 definitely helped contextualize the problem better, and are appreciated. The dual formulation, which is central to this work, is also adequately presented, which helps in understanding the algorithm. It is interesting that the proposed method gets away without computing expectations, replacing them with a single particle.
### Originality and Significance
The idea is novel, and looks like a clean way to adapt ideas from Euclidean optimization for the distributional constrained problem considered here. I cannot quite comment on the significance of this proposed method as I'm not familiar with this flavor of constrained sampling. Judging by the numerical experiments in section 4, the method appears to work well in low-dimensional setting.
Weaknesses: The method is interesting and novel, but there is some loss of intuition for me, which I've tried to express as a question in Q2. Essentially, the proposed method performs two levels of approximation which, while practically appealing, make it hard to follow. I also have a minor issue with the laws of $x_{k}$ and the expectations appearing in Theorem 3.3. Essentially, the concern is that there is no stochasticity on the LHS, and this is where working with laws versus samples becomes confusing to me.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is the purpose of the $x$-updates simply to approximate the discretization in steps 11b and 11c? This was particularly confusing to me because problem (DI) doesn't involve any $\mu$, which is what is sought to be solved.
2. Following up on the previous question, to understand better, the line of reasoning is: Eq. 9 would be ideal, but this requires knowing $\mu_{\lambda_{k}, \nu_{k}}$ and expectations under it. So, [22] proposed Eq. 10 in the equality-constrained setting, which is extended to Alg. 1 in this paper. Since $\mu_{\lambda_{k}, \nu_{k}}$ is a tilted version of $\pi$, what would a proposal like approximately computing the expectations in Eq. 9 using a high-accuracy sampler like MALA do (notwithstanding the sample waste)? I suppose something similar is being done in Algorithm 2?
3. What is preventing the dual variables from blowing up in magnitude? In other words, how large can $\max_{k} \mathbb{E}[\|\lambda_{k}\|^{2}] + \mathbb{E}[\|\nu_{k}\|^{2}]$ be / what rate does it grow at?
4. Why are equality constraints disregarded for PD-LMC with LSI potentials? Since $\nu$ doesn't exist, why does it show up in Algorithm 2?
5. Can you give a concrete example for when Assumption 3.4 is satisfied, i.e., with $g$ being a bounded perturbation, and a bound on $\sigma$?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Not quite, it would be nice if the authors could comment on some limitations of their analysis / methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive comments on our paper.
**Weakness** The reviewer has a point that to obtain a practical algorithm we perform stochastic updates in both the primal (step 3) and dual (steps 4-5), which complicates things. This in fact marks an important distinction with the analyses of the traditional LMC or when the dual updates are performed using exact expectations as in [22], in that we now have to handle the full joint distribution $(x,\lambda,\nu)$. To overcome this challenge, we work with conditional laws to obtain "in expectation" guarantees. That is why the bounds in Theorem 3.3 are not stochastic.
Indeed, note that (13) comes from the inequality (line 617, Appendix C)
$$\frac{1}{K} \sum\_{k = 1}^K \mathbb{E}\_{\lambda,\nu} \Big[\text{KL}(\tilde{\mu}\_{k} \| \mu^\star) + \frac{m}{2} W\_2^2(\tilde{\mu}\_k,\mu^\star)\Big] \leq\frac{R\_0^2}{\eta K}+ \eta G^2 \bigg( 3 +\sum\_{k = 1}^K \frac{\mathbb{E}\_{\lambda}[\|\lambda\_k\|^2]+ \mathbb{E}\_{\nu}[\|\nu\_k\|^2]}{K}\bigg),$$
where $\tilde{\mu}\_k$ is the conditional law of $x\_k \vert \\{\lambda\_{\ell}, \nu\_{\ell}\\}\_{\ell<k}$. Our proof analyzes the evolution of this conditional law [as in (18)] and then applies Jensen's inequality, using the fact that $\mathbb{E}\_{\lambda,\nu} [\tilde{\mu}\_k] = \mu\_k$, the law of $x\_k$ (line 618). Due to space constraints, we did not include this technical discussion in the main text, but we plan to do so in the final version.
**Q1** Note that (DI) involves $\mu$ through its objective, the dual function $d$ from (7). But the reviewer is correct that, in contrast to (PI), which directly seeks $\mu^\star$, the dual problem (DI) seeks $(\lambda^\star,\nu^\star)$ that can then be used to obtain $\mu^\star$ as $\mu\_{\lambda^\star\nu^\star}$, defined in (5)--(6) (line 134). In fact, finding $\mu^\star$ and $(\lambda^\star,\nu^\star)$ are equivalent [Prop. 2.2(iv)]. Hence, the goal of the $x$-update is in fact twofold: sampling from $\mu\_{\lambda^\star\nu^\star}$, i.e., $\mu^\star$, and computing $(\lambda^\star,\nu^\star)$ [using the discrete versions of (11b)--(11c) in steps 4-5 of Algorithm 1].
**Q2** The reviewer is correct. Computationally, (9) is intractable; (10) proposed by [22] is better, but still intractable (due to the expectations); but Algorithm 1 is practical and can be implemented without approximations. From the point-of-view of convergence, (9) performs "dual ascent" and would converge to $(\lambda^\star,\nu^\star)$; the continuous-time counterpart of (10) was shown to converge in [22] (we extend these results in Theorem 3.3 by showing the discrete version itself converges under milder conditions on $g$, as we discuss in line 247); and we prove that Algorithm 1 also converges (again, under milder assumptions).
The reviewer's proposal would indeed yield Algorithm 2 (with an extra Metropolis acceptance step). But while this approach has advantages in the non-convex case (Section 3.2), its benefits do not necessarily outweigh the increased computational cost. For instance, consider the sample mean estimate of the one-dimensional truncated Gaussian from Fig. 1 (main paper) as a function of the number of LMC iterations (step 3) taken in Algorithm 1 (see Fig. 1 in the pdf attached to the global response). I.e., for $N=1$ we have Algorithm 1 and for $N > 1$ we follow the reviewer's proposal. Note that the additional computational cost does not improve convergence in this case (especially when considering the number of "LMC steps" $N$ which is proportional to the computational complexity).
**Q3** This is hard to guarantee in general, even in the Euclidean convex case (see, e.g., [35]). As we mention in the paper (line 235), a common solution is to clip the iterates $(\lambda\_k,\nu\_k)$ in Algorithm 1 by some upper bound on $(\lambda^\star,\nu^\star)$. Such bounds are well-known and can be obtained under a variety of constraint qualifications such as Assumption 2.1, although explicit bounds for equality constraints require additional assumptions (see, e.g., [26,35] or the more general [Gauvin, "A necessary and sufficient regularity condition to have bounded multipliers in nonconvex programming," 1977]). Alternatively, a decreasing "weight decay" regularization can be used (see, e.g., [62]). In some cases, it is even possible to directly bound the iterates $\lambda\_k$ (see Lemma D.3). For these reasons, we chose to present our results in the form of Theorem 3.3, which we find to be more applicable, and then comment on potential solutions after the theorem. We will expand our remark on this point in the camera-ready version.
**Q4** The reviewer is correct: there is a typo in step 4 of Algorithm 2. We omit the equality constraints in the non-convex case to keep the analysis manageable. Deriving results similar to Lemma D.3 and D.4 for equality constraints requires additional assumptions (see response to Q3) and substantial derivations that we felt obscured the results. We note that it is sometimes possible to cast equality constraints as two tight inequalities, although this might lead to numerical issues. We will expand on this remark in the camera-ready version.
**Q5** Let $f(x) = x^2/2$ ($1$-strongly convex) and $g(x) = \sin(x)$ (bounded). We therefore have $\mu\_\lambda \propto e^{-x^2/2-\lambda\sin(x)}$. For $\lambda = 0$, Assumption 3.4 holds with $\sigma = 1$. As $\lambda$ increases, $\sigma$ decreases but $\sigma > 0$ for all finite $\lambda$. In fact, $\sigma \geq e^{-2\lambda}$ (see, e.g., [42, Prop. 5.1.6] or Theorem 1.1 in [Cattiaux and Guillin, "Functional inequalities for perturbed measures with applications to log-concave measures and to some Bayesian problems," 2022]). Since $\lambda^\star$ is finite (see bound in Lemma D.3), we could explicitly clip the iterates $\lambda\_k$ and get an *a priori* bound on $\sigma$. But even without any explicit limit, we show in Theorem 3.5 that $\mathbb{E} \\| \lambda\_k \\|\_1$ is bounded for all $k$.
---
Rebuttal Comment 1.1:
Title: Thank you for your rebuttal
Comment: Q3: I feel like this is an important point to cover. Specifically if the quantity increases faster than $K$, then the analysis would yield a vacuous bound. In your method, 1) how would you obtain a bound, 2) how would you enforce a bound on these variables, and 3) how would this affect the guarantees for the method?
I have no further questions aside from the ones above.
---
Reply to Comment 1.1.1:
Comment: We agree with the reviewer that these are important points and address them in the discussion after Theorem 3.3 in the current version of the manuscript (lines 235-244). There, we argue for the form of Theorem 3.3 rather than modifying Algorithm 1 because Theorem 3.3 also shows there exists a sequence of step sizes such that $(\lambda\_k,\nu\_k)$ are bounded. Additionally, none of our experiments enforce bounds (they implement PD-LMC exactly as described in Algorithm 1). Yet, we always observe the dual variables converging (see, e.g., Fig. 2 in the manuscript), suggesting that pathological cases are not common. That being said, the theory would not be complete without addressing these cases, which we discuss in the remarks of lines 235-244. We expand our treatment of these points, including the discussion below, in the revised version of the manuscript. If any point remains unclear, we are happy to provide further details.
To the reviewer's points:
1) **Bounds on $(\lambda^\star,\nu^\star)$**: Under Assumption 2.1, we provide such a bound on $\\|\lambda^\star\\|\_1$ in Lemma D.3. Explicitly, suppose there exists a $\mu^\dagger$ such that $\mathrm{KL}(\mu^\dagger \\| \pi) \leq C$, $\mathbb{E}\_{\mu^\dagger} [g\_i] \leq -\delta < 0$, and $\mathbb{E}\_{\mu^\dagger} [h\_j] = 0$ (Assumption 2.1). Then, by definition
$$
D^\star = d(\lambda^\star,\nu^\star) \leq \mathrm{KL}(\mu^\dagger \\| \pi) + \sum\_i \lambda^\star\_i \mathbb{E}\_{\mu^\dagger} [g\_i] + \sum\_j \nu^\star\_j \mathbb{E}\_{\mu^\dagger} [h\_j] \leq C - \\|\lambda^\star\\|\_1 \delta
$$
for the dual function in (7). By strong duality, $D^\star = P^\star \geq 0$, which yields
$$
\\|\lambda^\star\\|\_1 \leq \frac{C}{\delta}.
$$
A bound on $\nu^\star$, though more complicated, can be obtained under a similar constraint qualification (see, e.g., the general [Gauvin, "A necessary and sufficient regularity condition to have bounded multipliers in nonconvex programming," 1977]).
2) **Enforcing the bound**: Suppose $\\|\lambda^\star\\|\_1,\\|\nu^\star\\|\_1 \leq B$. It suffices to replace steps 4 and 5 in Algorithm 1 by
$$
\lambda\_{k+1} = \Big[ \lambda\_k + \eta\_k g(x\_k) \Big]\_0^{B}
\quad \text{and} \quad
\nu\_{k+1} = \Big[ \nu\_k + \eta\_k h(x\_k) \Big]\_{-B}^{B},
$$
where $[z]\_{L}^U = \min(\max(z,L),U)$. This is a common solution used in the Euclidean case (see, e.g., [35]).
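As a sanity check, the clipped dual updates described above can be sketched in a few lines (an illustrative sketch, not the authors' implementation; the function names are our own):

```python
def clip_interval(z, lo, hi):
    """Scalar projection [z]_lo^hi = min(max(z, lo), hi)."""
    return min(max(z, lo), hi)

def clipped_dual_step(lam, nu, g_x, h_x, eta, B):
    """One dual-ascent step with clipping, replacing steps 4 and 5 of
    the algorithm: lam is kept in [0, B], nu in [-B, B].

    lam, nu: lists of inequality / equality multipliers
    g_x, h_x: constraint evaluations g_i(x_k) and h_j(x_k)
    """
    lam_next = [clip_interval(l + eta * g, 0.0, B) for l, g in zip(lam, g_x)]
    nu_next = [clip_interval(n + eta * h, -B, B) for n, h in zip(nu, h_x)]
    return lam_next, nu_next

# A large constraint violation is clipped at the bound B:
lam1, nu1 = clipped_dual_step([0.5], [0.0], [10.0], [-10.0], eta=1.0, B=2.0)
```

Because the interval projection is nonexpansive and $(\lambda^\star,\nu^\star)$ lies inside the box, clipping never increases the distance to the optimum, which is exactly the property used in the argument below.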
3) **How does this affect the guarantees?** It does not. Indeed, note that the proof of Theorem 3.3 is based on the fact that (17) is a Lyapunov function. This is proved in Lemma C.1. When bounding the Euclidean norm of the dual variables (line 642), the first step is realizing that the projections $\lambda \mapsto [\lambda]\_0^B$ and $\nu \mapsto [\nu]\_{-B}^B$ are contractions, since $\lambda^\star\_i \in [0,B]$ and $\nu^\star\_j \in [-B,B]$. Hence, we can use
$$
\\| [\lambda]\_0^B - \lambda^\star \\|^2 \leq \\| \lambda - \lambda^\star \\|^2
\quad \text{and} \quad
\\| [\nu]\_{-B}^B - \nu^\star \\|^2 \leq \\| \nu - \nu^\star \\|^2
$$
to obtain (26). The proof then proceeds as is and yields the exact same results as in Theorem 3.3, except that (13) [and similarly (14)] can now be rewritten as
$$
\frac{1}{K} \sum\_{k = 1}^K
\mathrm{KL}(\mu\_{k} \\| \mu^\star)
+ \frac{m}{2} W\_2^2(\mu\_k,\mu^\star)
\leq
\frac{R\_0^2}{\eta K}
+ \eta G^2 ( 3 + B^2 ).
$$ | null | null | Rebuttal 1:
Rebuttal: We thank the editors and reviewers for their time and for the positive comments on our paper.
In the sequel, we respond to each of their questions individually. Throughout our responses, we refer to references and equations as numbered in the submitted version of the manuscript. We also refer to figures contained in the pdf attached to this response as "pdf attached to the global response."
Pdf: /pdf/209db16be6169108460e3f1943a1d7233eecbcf5.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Measuring Goal-Directedness | Accept (spotlight) | Summary: The paper proposes a measure of evidence for goal-directedness based on the maximum causal entropy principle, MEG. It is operationalized as the ability to predict variable D based on the hypothesis that it is optimizing a utility function U, where U represents the goal and D the agent’s decisions. Specifically, they try to predict the agents' decisions using the maximum-entropy policy that achieves the same utility as the agent’s own policy. MEG is essentially the negative cross-entropy of this prediction. The authors also describe algorithms for estimating MEG, and demonstrate it in action with some small-scale experiments.
Strengths: - The work is well-motivated and situated among prior work
- The writing is clear and the examples provided are very helpful for understanding
- The MEG satisfies some nice properties such as invariance to translation and scaling
- The authors allow MEG to cover a wide range of use cases, for instance
- It can be estimated even when only a set of trajectories is available, rather than the agent’s policy itself.
- It can be extended to settings where the utility function is unknown
Weaknesses: - It is unclear how useful MEG will be for checking goal-directedness of generalist AI agents acting in the real world, because:
- it is expensive to compute, since it requires optimizing the maximum entropy policy, and their estimation algorithm can only guarantee a lower bound on MEG for the unknown-utility case
- It requires a causal model containing the target variables that the agent might be trying to manipulate.
- The name and framing of MEG is slightly misleading: MEG is not directly a measure of how goal-directed an agent is, but rather the amount of evidence for goal-directedness. The maximum value MEG can take varies based on the number of possible choices in an environment. Thus, it may be difficult to compare the goal-directedness of different agents’ behavior if we could only observe them acting in different environments. (e.g., to determine if agents acting in our constantly changing world are becoming increasingly goal-directed)
- typo: agentcy on line 18
- Capitalization of v at the end of definition 2.1 is inconsistent
Technical Quality: 4
Clarity: 3
Questions for Authors: Why does the maximizing policy in equation 1 always obtain the same expected utility as pi?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review.
**Lower bound.**
It’s true that our algorithm can only guarantee a lower bound in the unknown-utility case, but this is true whenever one performs SGD on a nonconvex loss function to estimate some quantity. With neural networks, we often get good estimates of the quantity regardless. We’ve added a note to clarify.
**Misleading name.**
It’s true that MEG only measures the evidence for goal-directedness exhibited in the behaviour (with respect to the variables we measure), not necessarily the full goal-directedness of the agent. But this is generally true of tests: a high-school maths test cannot measure the full ability of a star mathematician, since there is limited opportunity to provide evidence for it while completing the test. Similarly, depending on the environment in which we measure MEG, we may see relatively little evidence for an agent’s goal-directedness. But we think it’s still fair to say that MEG is measuring goal-directedness, just as the test is measuring mathematical ability.
We should of course bear this point in mind, and think of MEG as measuring goal-directedness with respect to a given environment distribution. We have updated the paper to clarify this.
**Equation 1.**
The fact that the maximising policy obtains the same expected utility as pi is non-obvious but follows from Ziebart’s result. I’m not sure why we didn’t include a proof of this – we have included one now. Thanks for drawing our attention to it.
---
Rebuttal Comment 1.1:
Title: thank you for the response
Comment: **Lower bound** "we often get good estimates of the quantity regardless" this is a bit too vague to assuage my concerns. "Good" might not be useful, depending on how the metric is being used: e.g, if we are trying to determine which of agents A and B is more goal-directed, we might obtain a higher bound for agent A than agent B, even if agent B is in fact more goal directed than agent A.
I am satisfied with the other replies, thank you for clarifying.
---
Reply to Comment 1.1.1:
Title: Agreed
Comment: You are right that this is an important consideration. In some cases we can find MEG via global optimisation or tabular methods, and a good idea for future work is to compare the results of such methods against SGD in these cases. | Summary: * The work introduces a framework for quantifying the goal-directedness of a
decision-making system with respect to a particular utility function or a
family of utility functions.
* The method is grounded in Dennett's *intentional stance* and Jaynes'
*maximum entropy inference*. The authors have made some additional choices
while formalising these principles to arrive at their formulation, which
ends up similar to Ziebart's Maximum Causal Entropy IRL framework.
The resulting formula is essentially a measure of how close the subject
policy comes, in terms of cross entropy, to a maximum entropy policy for
the utility function at some inverse temperature.
* The authors establish that the resulting formula, MEG, satisfies desirable
properties for a measure of goal-directedness, namely being invariant to
the scaling and translation of utility functions, having interpretable
upper and lower bound policies for a given utility function, and measuring
only causal goal-directedness.
* The authors extend their measure of goal-directedness with respect to a
single utility function into an aggregate measure of goal-directedness with
respect to a family of utility functions (simply by taking the maximum MEG
over the family of utility functions). This also allows them to express a
measure of goal-directedness with respect to a set of 'target variables',
by considering the family of utility functions defined over those
variables.
* The authors derive algorithms for computing both kinds of MEG in the
setting of causal influence models derived from MDPs. The algorithms are
based on gradient-based search requiring repeatedly computing maximum
entropy policies for various inverse temperatures (and, in the case of a
family of utility functions, varying family parameters).
* The authors implement their algorithm for some simple MDPs as a 'proof of
concept' experiment, demonstrating how various factors influence the
measure of goal-directedness.
Strengths: I thank the authors for submitting their valuable work which is in a position to make an excellent contribution to the field.
* I think the goals of the work are exceptionally well-motivated. The authors
did not spend much time elaborating on the need for measures of goal
directedness, so I note that such measures have immediate applications in
attempts to rigorously study risks from learned optimisation in deep
learning. Emergent goal-directedness is a core component of the problem of
AI safety (for modern systems and future more capable systems) and scalable
methods for quantifying goal-directedness are sorely needed by researchers
such as myself who want to better understand this phenomenon in order to
reduce or avert these risks.
* The authors have taken careful steps to justify their proposed framework
for measuring goal-directedness, including showing that it satisfies some
sensible properties and showing that in simple examples (including toy
experiments with some MDPs) the values given by the MEG measure are
sensible.
* Overall, the paper is fairly clearly written and I feel I was able to
understand what the authors have to say. I think that this strength is
limited in that there may be some room for improvement in the flow of
sections and in some low-level details of the writing to make the paper
optimally clear and accessible (see below), however overall it was
passable.
Weaknesses: I noted the following issues with the work, based on my understanding.
1. MEG is described as "philosophically grounded" because of its roots in an
interpretation of Dennett's Intentional Stance. However, it does not
appear to be a unique interpretation and this philosophy does not appear
to be the only possible philosophical approach to defining agency.
* Related works Kenton et al. and Orseau et al. both appeal to Dennett's
philosophy but arrive at different formalisms. The authors have not
established that their interpretation is uniquely philosophically
grounded. (Appendix A begins to address this with respect to Kenton et
al.'s prior work, but it would be good to see a discussion of the
relationship with Orseau et al.'s approach.)
* If MEG were a unique formalisation of Dennett's philosophy, it would
still appear to be a behaviourist approach, which stands in contrast to
a mechanistic approach that looks into the internals of a system for
explicit representations of goals and for the implementation of
decision-making processes.
2. I am concerned that the statements of propositions 3.1 and 3.2
   are incorrect as stated.
* **Proposition 3.1.** There must be some missing assumption in this
theorem statement that $a$ is a positive scalar. Otherwise you have
      proven that a MEG policy for any given utility function is also
      goal-directed towards a negated version of that utility function. Could you
please clarify whether this is the case or whether I am missing
something that means that a policy that pursues the negative of a
utility function is somehow goal-directed towards that utility
function?
* **Proposition 3.2.** It is well known that in many examples there are
decisions for which multiple actions lead to equal expected utility and
are therefore equally optimal. In such cases there is no unique optimal
policy. What happens to the bounds in this case? The statement of the
proposition would appear to be ill-defined in case there is no unique
optimal policy. Is it the case that you do not derive an upper bound in
this case? The distinction being then that there still will be some
upper bound (perhaps?) but that you have not characterised it?
When I checked the proofs for these propositions in the appendix I found
them to provide insufficient detail to resolve my concerns or point to a
particular mistake in the reasoning.
I am recommending acceptance of the paper on the assumption that these issues will be easy to dismiss or to correct.
3. In the unknown-utility case, MEG fails to account for complexity of
utility functions within the hypothesis class.
* It is well known in RL that for any policy, a utility function exists
that can rationalise that policy. For example, there is a reward
function that immediately, highly rewards whichever actions that policy
happens to take in each state.
* This leads me to wonder if for any policy it is possible to find some
reward function for which MEG very strongly indicates that this policy
is goal-directed towards that utility function.
   * Especially in the case of MEG for unknown utility functions, it is
     therefore possible that, if the hypothesis class of utility functions is
     too rich, unknown-utility MEG will register almost
     any policy as goal-directed.
* I think something that is missing is some way of accounting for the
     complexity of utility functions. Philosophically, a complex utility
function is not 'useful for predicting behaviour' if it is itself as
complex as the behaviour, even if it is capable of predicting behaviour
well.
* I think a significant weakness of MEG, especially in the
unknown-utility setting, is that it leaves the user of the framework to
rule out too-complex utility functions by the coarse tool of including
them or excluding them from the hypothesis class, rather than providing
a principled tool for managing this complexity.
If the authors are aware of this issue but consider addressing it to be
out of scope, I would like to see it acknowledged as such in the main
text.
4. If I am not mistaken, the scalability of the method for computing MEG
relies on solving an RL problem (computing a maximum entropy policy in
MDPs). If this is the case then I think the scalability of the method is
questionable.
* It is certainly common practice to solve RL problems, and in any case
where we can train a policy that will be the target of MEG evaluation,
we can presumably also train a max-ent policy to measure its MEG.
Perhaps calculating such a policy repeatedly will be expensive but it
should at least be feasible to do so.
* However, the very applications I think motivate this work are settings
where we don't understand the internal motivational system of the
trained system (and so MEG can help quantify that). This would of
course only happen if we are training the subject policy in such a
complex environment that its emergent goal-directedness is in question,
due to, for example, potential goal misgeneralisation (or so-called
'inner alignment failure').
* In such a setting, I suppose we would share the same concerns about the
max-ent policy required for quantifying MEG. If I am not mistaken, if
this probe policy's generalisation could not be trusted, then neither
could we trust the result of using it to calculate the MEG of the
subject policy.
There is a comment in the discussion to the effect that future work can
consider the setting of 'distributional shifts' which is normally
considered to be a prerequisite for the phenomenon of goal
misgeneralisation. However, I think this concern undercuts the motivation
of the present work in its early stages and if there is no feasible way
to reliably compute the MEG for complex learned systems where the
internal goals are in question then the work may not achieve its goals in
the future, and this issue should be raised and (ideally) addressed now.
5. There is insufficient explanation of relationship to prior work, particularly to IRL and agents and devices.
* **Inverse reinforcement learning.** IRL represents a large body of
work including theoretical work aimed at finding goals that
rationalise particular policies, which is clearly closely related to
MEG. The authors' brief comments in the related work section have not
clarified to me the relationship between MEG and IRL.
     In particular, it seems plausible to me that MEG essentially
     functions similarly to a subcomponent of an IRL framework based on
     Bayesian inference.
* A Bayesian approach for IRL for example involves positing a
likelihood of the behavioural data given each goal. This likelihood
function represents a kind of measure of how well the behaviour can
be explained by that goal. This seems similar to MEG, though of
course MEG does not adopt a Bayesian framework.
* Similarly, what is the relationship between unknown-utility MEG
with a given hypothesis class and the model evidence in a Bayesian
formulation of IRL with the same hypothesis class? Of course there
is a difference of integrating versus taking a max, but my point is
that these are different approaches to the same kind of problem.
I think the paper would be strengthened if the authors could clarify
whether MEG is addressing a fundamentally new kind of problem or one
that arises naturally in the course of IRL. If it's not new, that
seems fine, the approach is apparently new---but it would be nice to
see a discussion of how the approach compares to the Bayesian
approach outlined above.
   * **Agents and devices.** This prior work appears to present a
Bayesian framework for distinguishing agents from non-agents, along
with one particular instantiation of this general framework in terms
of epsilon-greedy suboptimality and Solomonoff induction.
The comments in the related work section appear to relate to the
instantiation. What is the relationship between the present work and
the general framework from Orseau et al.?
For example, would it be possible to derive a maximum-entropy version
of Orseau et al.'s framework, and if so, how might that compare to
MEG? Does the difference come down to the difference between Bayesian
inference and maximum entropy inference?
I would have liked to see sections along the lines of appendix A (for
Kenton et al.) drawing out these relationships.
If the authors are able to address each of these issues to my satisfaction I think my assessment of the paper's strength would be greatly improved and I would be willing to substantially strengthen my recommendation to accept the paper.
Technical Quality: 3
Clarity: 2
Questions for Authors: Many of the weaknesses I raised above are partially framed as questions since
I am not fully confident that I have understood the work in forming these
comments (I welcome corrections from the authors).
Here I collect additional questions that are less crucial to the overall
validity of the work but are still either barriers to my understanding or
seem like minor issues in the presentation.
* The introduction claims that "pursuing goals is about having a particular
causal effect on the environment." This appears to be part of the case for
MEG being "philosophically grounded".
I can see how in many cases this correspondence holds, but it is not
immediately obvious to me why there couldn't be cases where the pursuit of
goals is separated from causality. For example, if a goal is always going
to be achieved counterfactually regardless of an agent's actions, then it
becomes possible to 'pursue' this goal without having any causal effect on
its achievement.
Do the authors have some further justification for this framing, or a
citation that gives such justification?
* Equation 1:
* Can the definition of MEG be interpreted as a minimum KL divergence
between the policy and the closest maximum-entropy policy?
  * The cross entropy between two distributions is not symmetric. Why is it
    justified to take the cross entropy in this particular direction?
* What is the relationship between theorem 5.1 and prior work on maximum
entropy RL such as Haarnoja et al. 2017 "Reinforcement learning with deep
energy-based policies"?
* In the discussion section, under "societal implications", there is a
passage "the relationship between goal-directedness and danger is fairly
indirect".
* I wonder, did the authors mean to write "the relationship between
*measuring* goal-directedness and danger is fairly indirect"?
* If not, could the authors please elaborate on this indirectness, since it
seems that the goal-directedness itself is fairly tightly and directly
connected to risks (as outlined in the works cited in the introduction).
Not a question, but while reading the paper I happened to note the following
minor presentation issues the authors may like to address in future versions.
* Line 5: I think the authors mean "adaptation" rather than "adaption".
* Line 14: The word "mundane" seems to downplay the real and potentially
devastating impacts discussed in Chan et al. Consider a different word
or a different angle of contrast (perhaps the scale of the harms?).
* Line 16: Shavit et al. citation and reference missing a date.
* Line 18: "Agentcy".
* Line 22: Incomplete sentence, consider "According to Dennett... [1989],
*we are justified in doing so whenever it* is useful ...".
* Line 31: MEG acronym was introduced in the abstract is undefined in the
scope of the introduction.
* Line 65: Orseau et al. should be cited in text mode.
* Line 78: Something appears to have allowed the bottom margin on page 2
to shrink, causing the final lines on the page to come too close to the
page number.
* Algorithm 1 line 7: Until convergence of what? Not beta, for example.
* Line 294: This and the next paragraph headings are missing periods on
the abbreviation "vs.".
* Line 296: "meg" should be capitalised.
* Footnote 1 is missing a period, at the footnote mark (line 296) AND at
the end of the footnote text.
* Line 300: Potential missing word, "optimal *policies*"?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 4
Limitations: It would have been good to see more acknowledgement of the limitations of
the work (such as those I have enumerated above in the weaknesses section, pending the authors' clarification), throughout the paper or in a dedicated section of the paper.
Some additional points on addressing these limitations in the paper:
* You could more carefully justify or qualify the discussion of the extent to which the
approach is uniquely principled / philosophically motivated.
For example, in the conclusion, MEG is described as "grounded in the
philosophical literature" when the authors appear to just mean "grounded in
the philosophy of Dennett" (as they do qualify elsewhere), the difference between which I outline in Weakness (1).
* The scalability of the work does appear to depend on repeatedly computing a
maximum entropy policy, which would appear potentially many times more
expensive than computing the subject policy, though perhaps feasible. This
is based on my understanding. I would have liked to see the authors clarify
the requirements in comparison to the cost of finding the target policy
explicitly, since they claim their method is "scalable".
See also weaknesses (3) and (4).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your extensive and careful review.
**Motivation.**
Agreed emergent goal-directedness is a core AI safety problem. Updated to be more explicit.
**Terminology.**
We agree our approach is not a unique interpretation of Dennett, and his view is not the only one in the literature. We have changed "the literature" to "Dennett". Would “philosophically motivated” be preferable to “philosophically grounded”?
**Mechanistic vs behavioural.**
Agreed our approach is behavioural rather than mechanistic. We think both approaches should be pursued. Behavioural approaches seem more tractable, but mechanistic approaches will generalise better off-distribution. We have added this point to the discussion.
**Prop 3.1.**
The proposition is correct – MEG considers an agent goal-directed wrt a utility function both if it maximises it and if it minimises it. We have made the proof more explicit.
We admit this may be unintuitive. Arguably trying to pessimise a utility function is just as goal-directed as trying to optimise it. But on the other hand, perhaps MEG should be signed. We did consider this. Would the reviewer argue in favour?
**Prop 3.2.**
The proposition is correct, but was stated ambiguously. We have updated it to say, “there is equality in the upper bound if and only if pi is a *unique* optimal (resp. anti-optimal) policy”. Is the meaning now clear?
**Is unknown-utility MEG always high?**
There does always exist a reward function that rationalises a (deterministic) policy. But this only applies to state-action or state-action-state rewards, not state rewards (ignoring the constant reward function). It’s the same with MEG: all (deterministic) policies are maximally goal-directed with respect to utility functions over <observation,action> pairs. But if the set of functions is over some downstream variable (e.g. states in an MDP), most policies will not be maximally goal-directed. We think MEG gets this right -- it's hard to say an agent is not optimising its own behaviour, but it's easy to say whether it's optimising some variable out in the world.
**Complexity of utility functions.**
Any way of measuring goal-directedness needs a way to avoid the trivial result that all policies are maximally goal-directed. One option is indeed to penalise more complex utility functions. But MEG does it in a different way: by considering utility functions over specific sets of variables. If the variables considered don’t include the agent’s actions, then most policies will not be very goal-directed, for any complexity of utility function. So we think it’s best to present MEG without an additional complexity penalty, although we acknowledge that adding one may be natural. We have now mentioned this in the paper.
**Scalability.**
MEG does require repeatedly solving an RL problem. MEG is about as scalable as MCE IRL, so we expect it to scale better than UTM-based approaches (Agents and Devices). But we have not demonstrated the scalability here, so we have now weakened the language. We agree that much usefulness depends on scaling. However, there is also value in conceptual clarification, and taking a step towards scalable methods.
**Generalisation.**
Agreed distributional shifts are a key motivation. Rather than trusting the imitation policy to generalise correctly, a better approach could be to think of MEG as measuring goal-directedness on a given distribution, and to proactively seek out distributions to test on, e.g. where unknown-utility MEG is much higher than intended-reward MEG. This may let us detect goal misgeneralisation.
**Prior work.**
We have added new sections in the appendix comparing to A&D and IRL. One key difference with IRL: IRL does not try to tell you *how* goal-directed a policy is, just what its goal is. Consider an agent that performs well on the intended reward by default, but badly under a distributional shift. IRL would just tell you that the utility function of the agent changes, but MEG can tell us whether the behaviour is strongly goal-directed on the new distribution, i.e. whether we have capability misgeneralisation (low MEG) or goal misgeneralisation (high MEG).
**Does pursuing goals require a causal effect?**
This is a common view in philosophy (see section 3.1 of the SEP article on agency). But it’s not universal, e.g. *evidential decision theory* is concerned with pursuing goals even if one cannot causally affect them. We think that MEG can be generalised to an “evidential” version, but it’s beyond the scope. We have added a reference and a footnote.
**KL div, cross entropy, Haarnoja.**
- MEG is similar to the minimum KL with a maxent policy. You would remove the second term (the entropy of the uniform policy) and add a term for the negative entropy of the policy itself. The first change is just a translation. The second would mean that a less entropic way of achieving a certain utility would be considered more goal-directed than a more entropic way.
- The direction of the cross entropy gets us a maxent policy that well-predicts the agent’s policy, rather than vice versa. Reversing it would mean maximising the maxent policy's probability of the agent's *highest probability* action, i.e. ignoring other aspects of the policy.
- Haarnoja’s results are essentially a useful alternative formulation of Ziebart’s. We could probably reformulate our results along the lines of Haarnoja’s, but chose to focus on Ziebart.
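To make the asymmetry of the cross entropy concrete, here is a tiny numeric sketch (the distributions are made-up illustrative values, not from the paper):

```python
import math

def cross_entropy(p, q):
    """H(p, q) = -sum_a p(a) * log q(a); not symmetric in p and q."""
    return -sum(pi * math.log(qi) for pi, qi in zip(p, q))

# An agent policy p and a candidate maximum-entropy policy q over 3 actions.
p = [0.7, 0.2, 0.1]
q = [0.5, 0.3, 0.2]

h_pq = cross_entropy(p, q)  # how well q predicts actions drawn from p
h_qp = cross_entropy(q, p)  # the reversed direction differs in general
```

Taking the cross entropy in the $H(\pi, \pi\_{\text{maxent}})$ direction scores how well the maxent policy predicts the agent's whole action distribution, which is the quantity discussed above.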
**Harm.**
We agree the relationship is not indirect. Probably we meant to include the word “measuring”, but this too now seems like an overstatement. We have removed the sentence.
**Limitations.**
We have added the following to the limitations section: scalability; mechanistic vs behavioural approaches; more on distributional shifts. Has the rest been satisfactorily clarified?
Let us know if our response is enough to increase your score, or whether further changes would be needed. Thanks again.
---
Rebuttal Comment 1.1:
Title: Thanks, your rebuttal has addressed my concerns, with notes
Comment: The revisions proposed by the authors in this rebuttal and their other
rebuttals will substantially improve the clarity and quality of the paper.
They will also address all of my concerns (subject to the below notes).
Accordingly I am pleased to strengthen my recommendation for the paper's
acceptance.
**Notes on terminology:**
Once the authors have clarified that the main philosophical grounding is in
Dennett's work, I am satisfied and I leave it to the authors to decide the
best choice of wording between "philosophically motivated" and
"philosophically grounded".
**Notes on proposition 3.1, and goal-directedness of minimisation:**
Thank you for clarifying this point about the unsigned nature of MEG, which
I missed in my initial review.
I think my confusion stemmed from the fact that I picked up the phrasing "a
policy pi is goal-directed *towards* a utility function U".
The phrasing "directed towards" suggests to me that the policy is acting to
maximise the utility function (to some extent), rather than to minimise it.
I note that the authors switch between this phrasing and the sign-neutral
phrase "goal-directed with respect to" in the paper, but I missed this
distinction.
I am satisfied with the following resolution:
1. If they have not already done so, the authors should make sure they
reserve "towards" for cases where the policy is positively directed with
respect to the utility function.
2. The authors should also explicitly clarify this aspect of their
framework prominently in the main text.
Moreover, the authors also asked if I would argue in favour of a signed
measure of goal-directedness. I think I would, but it is not crucial to my
evaluation of the present work, and any substantial revision in this direction
would require further review. It seems best left for future work.
**Notes on proposition 3.2, about multiple (anti-)optimal policies:**
Thank you for clarifying. I now understand the proposition statement. The
wording change you have proposed in the initial rebuttal is sufficient in my
judgement.
**Notes on distributional shift and goal misgeneralisation:**
I appreciate the clarification on the proposed application of MEG for use in
detecting goal misgeneralisation. I think this is an interesting direction
for future work and I am pleased that the authors are expanding their
discussion in their revision.
I maintain my concerns about the specific proposed methods, since in practice
it may not be straightforward to identify novel state distributions or their
corresponding intended reward functions for measuring intended-reward MEG.
However, this issue alone does not undermine my judgement of this paper as
an important theoretical contribution to the field.
---
Reply to Comment 1.1.1:
Title: Prop 3.1.
Comment: Thanks, we have followed your suggestions re: Prop 3.1.
---
Rebuttal 2:
Title: Further Clarification of Prop 3.2
Comment: Just to further clarify Proposition 3.2, as it may still not be clear. The upper bound holds whether or not there exists a unique optimal (or anti-optimal) policy, but iff *there exists* such a policy, *and* pi is that policy, then there is equality in the upper bound. Do you think we should spell this out more explicitly in the statement? | Summary: The paper introduces maximum entropy goal-directedness (MEG), a measure of how much an agent is goal directed towards a given utility function. The authors extend the theoretical framework introduced to the case where the utility function that is being optimized is not known. Moreover, they propose an algorithm to measure MEG in Markov Decision Processes.
To show the practicality of the metric introduced, the authors show the performances of the algorithms empirically. Specifically, they show how MEG relates with the optimality of a $\epsilon$-greedy policy, and how it relates to task difficulty.
Strengths: - The paper is theoretically sound, and well presented
- I found the images and explicit examples to be particularly helpful in understanding how MEG works and why it was defined in a specific way
- Section 4 addresses an important problem, and I found that it provides clear results which are also well explained.
- I have not seen much similar work, and I definitely think it addresses an important problem, especially when talking about future goal-directed LLM systems.
Weaknesses: - As mentioned by the authors, MEG relies on having full access to the causal model of the environment. This can be a highly demanding assumption in many real environments.
- Given the restrictive assumption mentioned above, the paper should at least investigate in more detail how computationally expensive computing MEG is in large-scale settings.
- The experiments are limited to a few toy examples
Technical Quality: 3
Clarity: 3
Questions for Authors: - How do you think one could make MEG more robust to distributional shifts?
- Do you have any example in mind on how MEG could fail when considering how goal-directed an agent is? I think it could be an important point to discuss to give a better perspective to the reader on when one would want to employ such metric. Maybe you have some toy examples which can illustrate this?
- Is it possible for two similar policies to achieve drastic differences in MEG? Do you have any experiments which show how robust a metric it is?
- How could one extend MEG to cases where the causal environment is only partially known?
- Do you have some experiments/comparison/thoughts on how MEG relates directly with agency?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no ethical limitations
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the review.
**Requiring a causal model**.
A full causal model is not strictly necessary for computing MEG. What is needed is a simulator where you can measure the variables of interest under different policies. We have updated the paper to make this clearer. That said, investigating how MEG can be applied in practice is indeed an important direction for future work.
**Robustness to distributional shifts**.
One simple idea is to proactively seek out distributions where MEG is especially high, either by hand or by using MEG as an optimisation target. It would be especially interesting to find distributions such that goal-directedness with respect to the intended reward function is low, but goal-directedness with respect to some other reward function is high. This could be a way to detect goal misgeneralisation (https://arxiv.org/abs/2105.14111).
**How MEG can fail.**
One failure mode is that if you measure goal-directedness with respect to the agent’s own actions (or more correctly, state-action pairs in an MDP, or observation-action pairs in a CID), then the agent will appear maximally goal-directed regardless of its policy. This means that if you measure the goal-directedness of an LLM with respect to the text it generates, for example, it won’t tell you much. Instead you need to measure the goal-directedness with respect to some downstream variable (such as a user’s response, or the outcome of a task). We have updated the appendix to mention this example.
**Can similar policies get very different MEG?**.
Yes, at least under some reasonable notions of policy-similarity. Consider an example where two policies in an MDP agree in every state *except the start state*, where one policy takes action a1 and the other takes a2. If a1 leads to a part of the state space where the policies are both very goal-directed, and a2 leads to one where they are not, then there can be an arbitrarily large difference in goal-directedness, despite the policies agreeing in almost every state. This underlines that MEG measures the goal-directedness on a particular distribution. We have added this example to the appendix.
**The case where the causal model is only partially known.**
As mentioned above, we can tackle this case as long as we can simulate the environment under different policies. The simulation may only partially specify a causal model, as it may not allow interventions on non-policy variables. It may be interesting to explore other forms of partial knowledge.
**How MEG relates to agency.**
Agency is an overloaded term with a range of meanings and connotations. Goal-directedness is often considered a key component of agency. For example, Dennett's intentional stance only applies to systems pursuing goals in some sense, and Dung (https://philarchive.org/rec/DUNUAA) argues that it underlies other aspects of agency such as autonomy, empowerment, and intentionality. Some of these notions are still vague, but it's an interesting line of future research to try to make these other aspects precise, and formally and/or theoretically relate them to the notion of goal-directedness we present here.
---
Rebuttal Comment 1.1:
Title: Thanks for the clarifications
Comment: Dear authors,
Thanks for your answer.
**Requiring a causal model**: This addresses one important critique and concern I had. Thanks.
**Distributional Shift**: I agree that this could be a sensible way of detecting goal misgeneralisation. I would especially be curious to know whether MEG could also be used to identify the 'cause' of the misgeneralisation, and not only whether it is happening. This could be done by considering increasing subsets of the causal model and seeing which of the downstream variables causes the shift in MEG. Obviously this goes beyond the scope of this work, but thanks for clarifying my question
**How MEG can fail** and **Can similar policies get very different MEG**: These examples are clear to me, and I appreciate that they have been added to the appendix. I would like the authors to clarify in the final version that *MEG measures the goal-directedness on a particular distribution*
**The case where the causal model is only partially known** and **How MEG relates to agency**: Thanks for your answer.
To summarize, all the issues I raised were addressed by the authors. I will increase the score accordingly. | Summary: The paper is studying a problem that is both mathematical and philosophical in nature: how can we measure quantitatively, whether an observed behaviour is goal-directed or not (or to what extent it is). It tackles this problem by starting from a causal model of the world, where there are explicit utility variables, state variables, and decision variables. For a known utility function, it proposes to measure goal-directedness by essentially asking: given an observed policy $\pi$, find a policy that is maximizing the utility function but regularised by entropy that is _most predictive_ of $\pi$. Compare its predictive accuracy to uniform prediction, and this quantifies goal-directedness. They also extend this to the case of unknown utilities, by considering parameterized sets of utility functions. The favourable properties of this measure is discussed, and its practicality is demonstrated with a simple empirical result on CliffWorld.
Strengths: - The question of measuring goal-directedness is tremendously significant, since it is a core requirement for agency.
- The proposed solution is sufficiently novel. Even though it builds on the Maximum Causal Entropy IRL, it sufficiently extends it and applies it in a completely new way.
- The paper is written very clearly and it is easy to follow.
Weaknesses: - Societal implications could have been discussed in more detail, especially how this "could enable bad actors to create even more dangerous systems." An extended discussion about this added to the appendix would improve this.
- A more comprehensive empirical demonstration of the practicality would improve the paper.
Technical Quality: 4
Clarity: 4
Questions for Authors: Q: I have one question about the unknown utility setting. If I get this right, in general, if the set of possible utility functions is large enough, and the causal model is rich enough, I can potentially find a utility function making many behaviours goal-directed, even though they may not be. For example, consider the behaviour of a river, as its flow. If my causal model represents enough things relevant (e.g. elevation, geography, physics, weather etc.), I can potentially find a utility function in the lines of _minimize travel distance to the nearest sea/lake_ where elevation incurs a cost, and conclude that the flow of the river is goal-directed. Is this correct? If so, it seems like one must be careful in terms of the class of utility functions they consider.
Bonus Q: I know that the paper assumes a known causal model. However, I am curious. How would we deal with hidden confounders here? Is there no hope? Specifically, could there be reasons internal to the agent that make a seemingly non-goal-directed behaviour in fact goal-directed, unbeknownst to us?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: Paper discusses limitations sufficiently.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review. We agree that the problem of measuring goal-directedness is of great significance and we’re glad you feel we have made a novel contribution here.
**Societal implications**.
We agree this should be discussed in more detail. Thanks for the suggestion of an extended discussion in the appendix about the concern of bad actors using our ideas to create more dangerous systems. We have added a section in which we weigh various considerations, and then explain why we ultimately think it’s not a major concern.
**Empirical demonstration**.
The contribution of this paper is primarily theoretical and conceptual, and so we decided to include experiments mostly for illustrative purposes. Investigating scalability, robustness, and practical applications (like detecting goal misgeneralisation) are promising directions for future work.
**Question about the unknown utility setting.**
Good question. Every system you model will be goal-directed with respect to a utility function over its own actions, but beyond that, it’s not the case that by including enough variables in your causal model, you can always find a utility function towards which the system is goal-directed. In fact, we can apply Theorem 4.1 to this question, and say that if the agent is not very goal-directed with respect to some Markov blanket of variables surrounding its action, then it’s no more goal-directed with respect to any variable further away in the causal model.
Your river example is an interesting one. Depending on how you model it, it may indeed be the case that the river appears quite goal-directed. If we think of the river as being able to change to flow in any direction at various points along its path, but choosing to follow the path it does, then yes, it will have high goal-directedness with respect to its travel distance to the sea, as you suggest. This is in line with famous examples of very simple agents, such as a thermostat.
On the other hand, it’s debatable whether you should model the river as “being able to” flow in any direction. In general, the smaller the set of actions the system has available, the lower its maximum goal-directedness, as we show in Proposition 3.2. Therefore a more reasonable way of modelling the river might lead to much lower goal-directedness. And if we put the river in a setting where some other system sometimes disrupted its path, then since the river would not adapt to overcome these disruptions, this would also lead to low goal-directedness.
**The necessity of assuming a causal model.**
Something we perhaps under-emphasised in the paper is that if you have black box access to the environment or a simulator, then you don’t need an explicit causal model, you just need the ability to measure the variables you’re interested in under various different policies. So that’s one way in which you can avoid worrying about hidden confounders. We have updated the paper to make this clearer.
Of course, we still need to be measuring the important variables. Interpretability-based approaches to measuring goal-directedness may have an advantage over our behavioural approach here. We have added this point to a discussion of the pros and cons of taking a behavioural vs mechanistic approach to measuring goal-directedness in the paper.
The second part of your question is about what happens if an agent is goal-directed with respect to variables we can’t measure because they are internal to the agent. There are two cases here: either the agent is also (perhaps instrumentally) goal-directed with respect to external variables, or it isn’t (as in the case of “wire-heading”). In the first case, we may be able to measure this by considering the external variables. The second case is of less concern to us, since our motivation is about the risks arising from agents pursuing goals in their environment.
Thanks again for the thoughtful review.
---
Rebuttal Comment 1.1:
Comment: Thank you for the interesting response.
"Something we perhaps under-emphasised in the paper is that if you have black box access to the environment or a simulator, then you don’t need an explicit causal model, ..."
- I would definitely emphasise this more in the paper. This makes the paper much stronger than my initial impression.
I have decided to maintain my score.
---
Reply to Comment 1.1.1:
Title: Agreed
Comment: Thanks. We agree this point makes the paper much stronger and will indeed emphasise it a lot more. | Rebuttal 1:
Rebuttal: Thank you to the reviewers for their feedback, which has already allowed us to improve the paper. We are pleased that the reviewers agree that the problem we are tackling is an important one and that our approach is novel.
We have responded to each reviewer’s points individually. Minor presentation issues have all been fixed without comment. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Provable Partially Observable Reinforcement Learning with Privileged Information | Accept (poster) | Summary: This paper offers valuable insights into the use of privileged information in partially observable reinforcement learning (POMDP), presenting theoretical advancements and practical algorithms. However, challenges remain in ensuring the effectiveness of expert distillation, applicability to real-world problems, and bridging the gap between training and test environments. Addressing these limitations is essential for advancing the practical implementation of Privileged Policy Learning.
Strengths: 1. The paper formalizes expert distillation, highlighting its limitations in finding near-optimal policies in observable POMDPs. This formalization provides a structured approach to understanding and improving existing empirical methods.
2. The introduction of the deterministic filter condition broadens the scope of tractable POMDP models and ensures that expert distillation can achieve polynomial sample and computational complexities.
3. The investigation into multi-agent RL (MARL) frameworks with privileged information and the proposed algorithms ensure polynomial complexities.
Weaknesses: 1. Expert policies in practical problems may not be optimal [1-3]. This raises concerns about the effectiveness of Privileged Policy Learning. If the expert policies are suboptimal, the student policies distilled from them may inherit these inefficiencies, potentially leading to suboptimal performance in practical applications.
2. A significant limitation highlighted is the dependency on access to the state of the POMDP environment. In most practical scenarios, the internal states are not observable, restricting the application of the proposed methods. The deterministic filter condition, while theoretically valuable, may not always be applicable in real-world problems where state observability is a significant challenge.
3. The paper discusses the difficulty of obtaining the internal state in actual problems, creating a gap between the training environment (where privileged information is available) and the test environment (where it is not). This discrepancy can lead to performance degradation when the policies trained with privileged information are deployed in real-world scenarios where such information is unavailable.
[1] Wu, Yueh-Hua, et al. "Imitation learning from imperfect demonstration." International Conference on Machine Learning. PMLR, 2019.
[2] Tian, Yuandong, Qucheng Gong, and Yu Jiang. "Joint policy search for multi-agent collaboration with imperfect information." Advances in neural information processing systems 33 (2020): 19931-19942.
[3] Tang, Yu, et al. "Hierarchical reinforcement learning from imperfect demonstrations through reachable coverage-based subgoal filtering." Knowledge-Based Systems 294 (2024): 111736.
Technical Quality: 3
Clarity: 3
Questions for Authors: The paper does not discuss the tightness of the derived bounds or whether they meet the needs of practical problems.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Lack of sufficient experimental verification
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback and believe there are several important misunderstandings we would like to clarify. We address the reviewer's concerns and questions below:
---
## Regarding the potential sub-optimality of the expert
**Firstly, the focus of our paper is to study how to provably distill a student policy from a given (**even perfect**) expert policy.** We agree that, in practice, not having a perfect expert might be an issue. But we believe this is a perspective **orthogonal** to the focus of our paper. **In fact, training a better/optimal expert policy is the focus of RL in MDPs, which has been studied extensively in many existing works.** In contrast, we studied RL in POMDPs with privileged information. Among the **first attempts** at theoretically understanding privileged information, we started with the two most commonly used algorithmic paradigms, Expert Distillation and Asymmetric Actor-Critic, with many empirical successes as justification (see Appendix B for a detailed discussion), and **in the most fundamental and basic setting**. We believe this will serve as the foundation for further studying the extended setting mentioned by the reviewer. We do believe extensions to the case without an “optimal” expert policy are important future work to explore. We will make sure to refer to these papers when motivating and discussing such generalizations.
---
**More importantly, our analyses and results can be readily generalized to the case when the expert policy is sub-optimal.** Specifically, for any $\epsilon>0$, if the expert is only $\epsilon$-optimal, i.e., the performance difference between the expert and the optimal policy is $\epsilon$, then the distilled student policy's performance gap compared with the optimal policy will also increase by at most $\epsilon$, through a direct triangle inequality argument. If needed, we can add this corollary in the next version of our paper.
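For concreteness, the triangle-inequality argument can be written out as follows. Here $J(\pi)$ denotes the value of policy $\pi$, $\pi^\star$ the optimal policy, $\pi^E$ the expert, $\hat{\pi}$ the distilled student, and $\epsilon_{\mathrm{distill}}$ a placeholder for the distillation guarantee — this notation is illustrative, not taken from the paper:

```latex
J(\pi^\star) - J(\hat{\pi})
  = \underbrace{\big(J(\pi^\star) - J(\pi^E)\big)}_{\le\,\epsilon
      \text{ (sub-optimal expert)}}
  + \underbrace{\big(J(\pi^E) - J(\hat{\pi})\big)}_{\le\,\epsilon_{\mathrm{distill}}
      \text{ (distillation guarantee)}}
  \le \epsilon + \epsilon_{\mathrm{distill}} .
```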
---
## Regarding the access to the state
**Depending on the access to the state is exactly the setting that we wanted to understand better theoretically, and the setting that has been widely used in practice.** This was referred to as settings with "privileged information" in the literature and our paper. The cases when such a state is *not accessible* are *not* the theme of our work here, and will have to suffer from the known computational and statistical hardness in general.
Meanwhile, the deterministic filter condition is **weaker** than many existing “statistically tractable” POMDP models, and also allows us to incorporate “computational” considerations when carefully combined with privileged information, i.e., our main results. Empirically, it can also serve as a good criterion for determining whether the empirical paradigm of expert distillation will suffer from sub-optimality or not. We will add clarifications in the revision to make these points clearer.
---
## Regarding the gap between training and testing
We firstly acknowledge that the performance *gap* between training and testing is the major challenge of RL in POMDPs with privileged information, which is **exactly what we hoped and managed to address in our paper**. In other words, all the guarantees (in particular, Theorems 4.5 and 5.3) of our algorithms show that the **deployed policy** is an optimal one for the POMDP (with **partial observations** as input), instead of the training policy (which is allowed **privileged state information** as input). Hence, our results exactly show that **such a performance gap can be conquered with our more careful algorithm design and analyses** (while the naive versions of the algorithms from existing empirical works can fail, see our Sec 3).
---
## Regarding the tightness of the bound
We thank the reviewer for bringing up this important question and discuss the tightness of our bounds here.
- For our results in Sec 4, we actually achieve **polynomial** sample complexity, which matches the best results for sample complexity [31, 39]. For the time/computational complexity, we also achieve **polynomial** complexity, which is thus not improvable either (note that even planning in MDPs is a P-complete problem).
- For our results in Sec 5, we still achieve **polynomial** sample complexity, thus matching the best sample complexity as before. More importantly, we also achieve **quasi-polynomial** time/computational complexity, which is shown to be tight for the *observable POMDP* setting we are considering (see [22]).
---
## Regarding the experimental evaluation
We **have provided new experiments** for more instances of POMDPs of larger sizes. We refer to the uploaded pdf for detailed results.
---
## Regarding the additional references
We thank the reviewer for bringing more related literature [1, 2, 3] to our attention. We will definitely include them in our related work section in our revision.
---
We hope our responses have addressed the reviewer’s concerns, and would be more than happy to answer any other questions the reviewer may have. Please do not hesitate to let us know if there are any other explanations/updates needed that may help re-evaluate our paper.
---
Rebuttal 2:
Title: Have we addressed your concerns?
Comment: Dear Reviewer tn5m,
We hope the reviewer is doing well! Since the discussion period is ending very soon, we would like to kindly ask whether our further responses have adequately addressed your remaining concerns?
We understand that the reviewer has the very valid concern of whether RL in POMDPs with privileged information is a meaningful setting. We admit there might be other paradigms that will complement this privileged-information setting, like the wonderful literature shared by the reviewer (we will make sure to include those references in our revision!). Meanwhile, we also would like to point out that our focus is a **theoretical understanding** of such a valid and popular empirical paradigm that is already extensively employed in empirical applications. Therefore, discussing whether the privileged-information POMDP setting is valid is still quite an important question, but one beyond the focus of our paper.
Finally, we would like to thank the reviewer again for the patience and effort dedicated to reviewing our paper, and we look forward to your replies!
---
Rebuttal Comment 2.1:
Title: Response
Comment: Thanks for the detailed responses that address my concerns. I thus raise the score.
In addition, the authors introduce a belief-weighted optimistic asymmetric actor-critic algorithm, providing insights into MARL with CTDE.
Strengths: - There is a considerable amount of summarization and formalization, such as for the expert distillation paradigm, which is appreciated.
- Theoretical results with the proposed deterministic filter condition is a novel contribution.
- The belief-state learning contribution could be interesting for future works.
Weaknesses: - As with most theoretical papers, the examination of existing empirical paradigms and the evaluation of the proposed method are rather lacking; while understandable, this is still a weakness.
- The deterministic filter condition, if I understand correctly, is weaker than previous conditions but still rather restrictive compared to real-world tasks.
- More toy examples could be helpful alongside the theoretical work.
Technical Quality: 4
Clarity: 3
Questions for Authors: - What is the purpose of developing the deterministic filter condition? Can you give a motivating example?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: The authors are upfront about their assumptions; however, I would like to mention that reliance on specific assumptions for computational tractability might limit the generalizability of the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. Please see our responses below:
## Regarding the examination of existing empirical paradigms
We agree with the reviewer that understanding existing empirical paradigms is important for theory-oriented research. We believe our paper indeed examined the theoretical perspectives of the **two most important empirical paradigms that utilize privileged information** in RL, by first identifying their suboptimality (Prop. 3.1) and inefficiency (Prop. 3.3), and then proposing **new** provable algorithms (Sec 4 and Sec 5). Meanwhile, examining the experimental perspectives of those empirical paradigms is indeed not the main focus of our work, but we believe it is important future work.
---
## Regarding the deterministic filter condition
To further clarify, our deterministic filter condition (Definition 3.2) serves two key purposes:
- **Unification** of existing tractable POMDP models: It unifies several important models of tractable POMDPs, including deterministic POMDPs, Block MDPs, and k-decodable POMDPs (see Appendix E for details and novel examples).
- Criteria for the empirical paradigm of **Expert Policy Distillation**: It represents a general class of structured POMDPs with provable efficiency for the empirical paradigm of *Expert Distillation*.
---
## Regarding more experimental results
We have conducted more extensive experimental evaluations on POMDPs of larger sizes. The pdf results are attached to the global author rebuttal.
---
We hope our responses have addressed the reviewer’s concerns, and would be more than happy to answer any other questions the reviewer may have. Please do not hesitate to let us know if there are any other explanations/updates needed that may help re-evaluate our paper.
---
Rebuttal 2:
Title: Have we addressed your concerns?
Comment: Dear Reviewer 9C6t,
We hope that you are doing well recently! Since the discussion period is ending very soon, we would like to kindly ask whether our responses have adequately addressed your concerns?
We understand that your main concerns are also on our Def 3.2 (deterministic filter condition). We would like to summarize again for our responses.
- On the one hand, Def 3.2 (deterministic filter condition) already represents our best effort at unifying existing problems. It also further extends the boundary of our current knowledge of tractable POMDP problems.
- More importantly, we also admit the potential limitations of such a condition. Therefore, in Sec. 5, we do not assume this condition anymore but rather only require that observations are not totally uninformative (i.e., ruling out the $\gamma=0$ case in Assumption C.8, which is quite a weak assumption in our opinion). The only sacrifice is that the computational complexity becomes quasi-polynomial instead of polynomial. However, such a quasi-polynomial dependency is shown to be unimprovable as well [22].
Finally, we would like to thank the reviewer again for the patience and effort dedicated to reviewing our paper, and we look forward to your replies!
---
Rebuttal Comment 2.1:
Comment: I thank the authors for their clarification. It seems that other reviewers and I are uncertain of how much value this theoretical work would bring. I agree that as far as I can see, there is no straightforward significance. My score remains the same since I'm optimistic of the theoretical results providing insight for future research.
---
Rebuttal 3:
Title: Thanks for your support!
Comment: **We greatly thank the reviewer for appreciating our potential theoretical insights for future research and for not being concerned with the value of theoretical works!** Meanwhile, we also thank the reviewer for bringing up the **value of the theoretical work** and would also like to clarify it for you and (potentially) other reviewers, to help further understand our paper at a high level.
In terms of **theoretical value**, we believe it has been emphasized and summarized in our Table 1 and Figure 1, which give a complete landscape of how our approach with privileged information offers **strict and significant** theoretical benefits in a wide variety of POMDPs. Therefore, we believe our theoretical results are sufficient.
In terms of **empirical value**, we also provide some potential points here
1. **We gave a rigorous criterion for using privileged policy learning/expert distillation in specific problems**. This reminds practitioners to examine how well their applications satisfy our criterion (exactly or approximately) before applying such a method to a given problem. Therefore, it serves not only as a theoretical condition but also as empirical guidance, particularly considering that the limitations of privileged policy learning/expert distillation have not been very well understood.
2. **We revealed the potential inefficiency of vanilla asymmetric-actor critic, and highlighted the importance of the advanced decoupled belief learning + policy optimization pipeline.** Notably, this rigorously answered why it is meaningful to pursue different and advanced variants for vanilla asymmetric-actor-critic algorithms to get better efficiency.
In summary, our Sec. 3, explicitly named as ***revisiting empirical paradigms*** is exactly trying to highlight/justify the value of our later theoretical results for practice. We believe different empirical practitioners can find different values from it according to what empirical paradigm they are adopting and what problem they are trying to solve.
**Finally, we want to sincerely thank you again for being optimistic about the theoretical results providing insight for future research! As theoreticians, we are also making efforts to develop empirically relevant theory (like what we are trying to do in the current submission)!** | Summary: This paper presents a novel theoretical characterization of certain kinds of
POMDPs which admit efficient learning. First, related characterizations are
explored and theoretical results show that these classes of POMDPs suffer from
certain drawbacks when trying to learn policies. Based on this analysis, a new
class of POMDPs is defined using the "deterministic filter condition".
This characterization of POMDPs is used to develop a novel learning algorithm
based on a decomposition of POMDP policy learning into belief learning and a
fully-observable policy learning step. Theoretical results show that policy
learning in POMDPs satisfying the deterministic filter condition is
tractable given access to privileged information at training time. The paper
also presents an extension of the proposed algorithm to handle partially
observable multi-agent RL.
Strengths: - This paper considers a problem of practical importance, namely how to learn
policies when there is more information available at training time than test
time.
- The extension to multi-agent RL is a useful inclusion.
- Giving some theoretical failings of existing approaches helps motivate the
work.
- The theoretical analysis of the proposed algorithm is extensive.
Weaknesses: - Some important sections are deferred to the appendix (related work,
conclusion, and the discussion of experiments). I understand that space in the
main paper is quite limited but I think it's important to include these
sections. Maybe instead some theoretical results could be deferred to the
appendix, especially those in section 3 concerning the shortcomings of
existing approaches.
- More generally, so much of the discussion in the paper is in the appendix that
the main body of the paper is quite hard to read.
- The central definition 3.2 could be explained a bit more (see questions).
- The legends on the graphs in the experimental section cover a lot of the
relevant information in the graph.
Technical Quality: 3
Clarity: 1
Questions for Authors: I'm not sure that my understanding of 3.2 is correct. The intuition given in the
paper is that it corresponds to allowing $\gamma$ to go to 1 in assumption C.8.
But to my understanding C.8 encodes a certain kind of approximate observability
and letting $\gamma$ go to one recovers a fully-observable environment. (I
believe this is related to the restriction that $b^s$ is a one-hot vector in
definition 3.2). But if the environment is observable then there's no need to
consider POMDPs anymore; regular MDPs should be sufficient. Could you clarify
this definition a bit more, and especially how it is distinct from assuming that
the emission function can be inverted?
Confidence: 2
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: Limitations of the proposed approach are discussed, although this discussion is left to the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback. We believe there are several important misunderstandings we want to clarify. Please see our responses below:
---
## Response to Point 1 of weakness
We are thankful for the suggestions on the organization of the paper. We will re-organize it by moving some of the motivating material to the appendix and moving more of the related work and experiment discussions back into the main text.
---
## Response to Point 2 of weakness
We shall explain more about Definition 3.2 in the following responses to the questions. Meanwhile, we will add more discussions on Definition 3.2 in the main paper.
---
## Response to Point 3 of weakness
We have replotted the figures for better readability. Please see the attached PDF in our global author rebuttal.
---
## Response to Questions
Firstly, the reviewer is correct that when $\gamma$ tends to one, the condition implies an invertible emission matrix. However, **we only use this extreme example as a motivation.** It corresponds to only one extreme case of our condition, i.e., the block MDP (Example E.2).
**More importantly, our condition (Def 3.2) is much more general than this relatively trivial case.** For example, we can also impose **no assumptions** on the emission **by allowing the emission to be totally uninformative/unobservable ($\gamma=0$)**, while only requiring the latent state transition to be deterministic (see Example E.1). In general, our (newly identified) condition represents a **joint effect** of transition determinism and observation informativeness, which is **of independent interest** in the POMDP literature. The $\gamma=1$ case and the deterministic latent state transition case just serve as *two extreme examples* of our condition. There are certainly more examples in between, e.g., Example E.3 and the red shade area in Fig. 1.
**Intuitive explanations about the deterministic filter.** Firstly, the terminology of a *filter* refers to the process of (recursively) *estimating the underlying unknown state using a sequence of actions and observations*. Therefore, our condition only assumes that if, at step $h-1$, the *estimate* of the current state is not random, then the state estimate for the next step $h$, after taking the new action $a_{h-1}$ and receiving the new observation $o_h$, is also non-random. **Therefore, it is straightforward to see that requiring $o_h$ to be informative enough that it can be inverted to decode the state at step $h$ is only sufficient, but not necessary.**
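As a toy numerical illustration of this filter intuition (a minimal sketch with made-up two-state numbers, not an environment from the paper), the two extreme cases can be checked directly:

```python
import numpy as np

# Toy check of the filter intuition above. All numbers are hypothetical.
# Standard Bayes filter step:
#   b_h(s') ∝ O(o_h | s') * sum_s T(s' | s, a) * b_{h-1}(s)

def filter_update(b, T, O_col):
    """One filter step: prior belief b, transition matrix T[s, s'] for the
    chosen action, and emission likelihoods O_col[s'] = O(o | s')."""
    post = O_col * (b @ T)   # predict with T, then correct with the observation
    return post / post.sum()

def is_one_hot(b, tol=1e-9):
    return np.max(b) > 1.0 - tol

# Extreme case 1 (cf. Example E.1): deterministic latent transition with a
# totally uninformative observation (gamma = 0). A one-hot belief stays
# one-hot because each state has a unique successor.
T_det = np.array([[0.0, 1.0], [1.0, 0.0]])   # s0 -> s1, s1 -> s0
O_uninformative = np.array([0.5, 0.5])       # observation carries no information
b = filter_update(np.array([1.0, 0.0]), T_det, O_uninformative)
assert is_one_hot(b)   # the state estimate stays deterministic

# Extreme case 2 (cf. Example E.2, block MDP): stochastic transition, but the
# observation decodes the state, collapsing the belief back to one-hot.
T_stoch = np.array([[0.5, 0.5], [0.5, 0.5]])
O_decoding = np.array([1.0, 0.0])            # this o is emitted only by s0
b = filter_update(np.array([1.0, 0.0]), T_stoch, O_decoding)
assert is_one_hot(b)
```

In both cases a non-random state estimate remains non-random after one update, even though neither case requires an invertible emission and a deterministic transition at the same time.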
---
We hope our responses have addressed the reviewer’s concerns, and would be more than happy to answer any other questions the reviewer may have. Please do not hesitate to let us know if there are any other explanations/updates needed that may help re-evaluate our paper.
---
Rebuttal Comment 1.1:
Comment: Thank you for the explanation. I am less concerned about definition 3.2 now, although it still seems quite restrictive to me. I have raised my score slightly (3 -> 4). I am still concerned about the restrictiveness of 3.2, combined with the relatively small experiments (even the new experiments) and the amount of work which is deferred to the appendix.
---
Rebuttal 2:
Title: Thanks for your replies!
Comment: We appreciate the reviewer's feedback and are glad to hear that the previous concern regarding whether Definition 3.2 implies decodability has been resolved. Regarding the remaining concerns, we believe they can be addressed with a few additional clarifications, which we apologize for being unable to detail fully in our previous rebuttal.
---
### Concerning the Restrictiveness of Definition 3.2
1. **Firstly, Definition 3.2 is not an artificial condition disconnected from existing literature, but rather an effort to unify and further relax existing classes of POMDPs that have been extensively studied.** Specifically, this condition is broader and encompasses the following well-known classes of POMDPs:
- **Deterministic POMDP:** This is discussed in our Example E.1 and has been studied in a line of research [1, 2, 3].
- **Block MDP:** This is addressed in our Example E.2 and has been studied in a separate line of research [4, 5].
- $k$**-decodable POMDP:** This is covered in our Example E.3 and has been the focus of yet another research thread [6, 7].
These examples have already been extensively studied, and the structural assumptions they entail are considered reasonable and valuable rather than restrictive. **Thus, we believe that our condition, which unifies these seemingly unrelated assumptions and further relaxes them, should not be regarded as restrictive, especially since these stronger assumptions have been thoroughly examined.**
2. **Secondly, Definition 3.2 pertains only to the first half of our results in Section 4, while the second half of our results in Section 5 does not rely on it anymore.** Specifically, we acknowledge that Definition 3.2 may not always be satisfied in real-world applications. **Therefore, in Section 5, we assume only that the observation is not entirely uninformative, meaning we rule out the** $\gamma = 0$ **case in Assumption C.8 without making any other assumptions.** The trade-off is that the computational complexity increases from polynomial to quasi-polynomial, but this is shown to be unimprovable even in the easier problem of planning [8].
---
### Regarding the Experiments
- Firstly, we emphasize that our primary goal is to develop solid theory rather than specific empirical algorithms. **In fact, none of the theoretical studies we previously cited [1-7] that focus on POMDPs include any experiments (except for [7], which uses a simple environment)**. **Nonetheless, we acknowledge the importance of proof-of-concept experiments and have conducted corresponding validations**. Therefore, we think this should be viewed as a strength rather than a weakness.
- **Regarding the problem size, our problems are not significantly smaller even compared with those examined in the empirical literature on POMDP planning.** For example, the Tiger-grid problem, one of the most famous examples in the POMDP planning literature, has 36 states, 5 actions, 17 observations, and a shorter horizon than ours (a discount factor of 0.95 implies an effective horizon of 20). Notably, experiments of this scale are conducted for the relatively easier problem of planning. Thus, we believe our experiments on the much more challenging learning problems are sufficient for a theory-oriented paper.
---
### Regarding the Deferred Results in the Appendix
We do admit that a long appendix is necessary to present our results sufficiently and accurately. However, we believe that with one additional page in the final revision, along with the reviewer's wonderful suggestions, we can address all remaining concerns. Specifically:
- We will move the results related to revisiting empirical paradigms to the appendix. Based on the reviewer's feedback, we believe that maintaining the core messages of these results in the main text is sufficient.
- We will restore the motivating examples and explanations for Definition 3.2 to the main paper. We believe this will address most of the confusion expressed by all reviewers.
Therefore, we believe these straightforward adjustments will make our presentation much clearer and avoid further confusion. Finally, we hope that our paper will be evaluated primarily on its contributions rather than on these easily fixable presentation issues.
---
We are grateful to the reviewer for their dedicated efforts in reviewing our paper and for engaging in the discussion period, which has undoubtedly helped improve our work! We look forward to your further feedback!
---
Rebuttal Comment 2.1:
Title: References
Comment: [1] Jin, Chi, et al. "Sample-efficient reinforcement learning of undercomplete pomdps." Advances in Neural Information Processing Systems 33 (2020): 18530-18539.
[2] Uehara, Masatoshi, et al. "Provably efficient reinforcement learning in partially observable dynamical systems." Advances in Neural Information Processing Systems 35 (2022): 578-592.
[3] Uehara, Masatoshi, et al. "Computationally efficient pac rl in pomdps with latent determinism and conditional embeddings." International Conference on Machine Learning. PMLR, 2023.
[4] Krishnamurthy, Akshay, Alekh Agarwal, and John Langford. "Pac reinforcement learning with rich observations." Advances in Neural Information Processing Systems 29 (2016).
[5] Jiang, Nan, et al. "Contextual decision processes with low bellman rank are pac-learnable." International Conference on Machine Learning. PMLR, 2017.
[6] Efroni, Yonathan, et al. "Provable reinforcement learning with a short-term memory." International Conference on Machine Learning. PMLR, 2022.
[7] Guo, Jiacheng, et al. "Provably efficient representation learning with tractable planning in low-rank pomdp." International Conference on Machine Learning. PMLR, 2023.
[8] Golowich, Noah, Ankur Moitra, and Dhruv Rohatgi. "Planning in observable pomdps in quasipolynomial time." arXiv preprint arXiv:2201.04735 (2022).
---
Rebuttal 3:
Title: Have we addressed your remaining concerns?
Comment: We hope the reviewer is doing well! Since the discussion period is ending very soon, we would like to kindly ask whether our further responses have adequately addressed your remaining concerns. To briefly summarize and complement our further response above:
- For the restrictiveness: we believe reviewer zFhw shared similar concerns and confusion before. After understanding how our condition generalizes existing, extensively studied problems and obtains stronger theoretical guarantees, reviewer zFhw revised the evaluation from 3 to 5. Therefore, we believe the discussions there can be very informative, and if the reviewer is also interested in more detailed technical discussions, we kindly refer to our further response to reviewer zFhw (Response to Q.2 there).
- For the problem size, we would like to add that there is a large body of traditional literature on planning in POMDPs, which also focuses on tabular POMDPs of the same order of size as ours (as we mentioned in the response above). To connect to the modern deep RL literature, we have also made significant theoretical efforts. **In particular, we have shown that when the observation space is continuous/infinite, our algorithm just needs a simple classification oracle** (the entire Sec. G discusses this setting). Therefore, we believe our algorithm can also scale to more complex applications, which can be a good future direction.
Finally, we would like to thank the reviewer again for the patience and efforts dedicated to reviewing our paper! We do apologize if we have caused some confusion in the original submission and promise to revise it accordingly!
Strengths: - The proposed setting is interesting and has significant potential for practical applications.
- The paper makes a commendable effort to cover various potential cases, enhancing the applicability of the problem settings.
Weaknesses: **High-Level Critique**
- More space should have been dedicated to elaborating on the real technical question that the paper aims to resolve. With access to hidden states, the statistical hardness is virtually eliminated, so the focus could have been purely on the computational aspect. The poly-sample result is not surprising, and with the $\gamma$-observable assumption of (Golowich et al., 2022), quasi-polynomial complexity does not seem surprising either.
- The privileged information combined with the deterministic filter condition is very strong. It is unclear why Definition 3.2 was needed given the strong assumption of privileged information.
- The proposed algorithm does not seem as practical as advertised. Beyond the tabular setting, it is unclear how to implement it with general function classes, which are common in deep RL.
The current manuscript needs significant improvement in terms of writing, as detailed below.
- At the beginning of the introduction, it is confusing whether the paper is about RL in POMDPs with more information or about multi-agent RL in POMDPs. The two components investigated in this paper are orthogonal to each other, and the main result seems to be more about the former.
- The contribution of the paper needs to be more clearly articulated in the introduction. My understanding is that the paper is about (1) an algorithm that achieves both sample and computational efficiency with some new technical conditions, and (2) an extension of the proposed condition and algorithm to the multi-agent setting.
- The sentence in Lines 64-65 is confusing. What do we aim to achieve with actor-critic?
- Proposition 3.1 pertains to a specific algorithm, not to the general hardness of the problem.
- Jargon such as privileged, expert distillation, or teacher-student teaching may not help much in motivating the setting of this paper, as these are not the applications the paper pursued. It would be simpler if the problem were motivated from a theoretical perspective, focusing on technical challenges in existing works and some relevance to practice in the context of actor-critic implementation.
- The decoding function $g$ in Lemma 4.3 is unclear. Definition 3.2 defines a posterior probability given $(s_{h-1}, a_{h-1}, o_h)$, but Lemma 4.3 assumes the posterior is almost deterministic. This could have been clearly stated during the problem setup. This assumption is essentially similar to the block MDP.
Technical Quality: 2
Clarity: 1
Questions for Authors: - Definition 3.2: Does $\gamma$-observability or $\gamma$-separated PSR imply Definition 3.2?
- When (Golowich et al., 2022) already achieved quasi-computational complexity for planning, what does Theorem 5.3 improve upon it? How are the previous results in Section 4 related to this result?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: There are no experiments on practical benchmarks other than synthetic small tabular POMDPs.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the valuable feedback, and noticed that there were several important misunderstandings we would like to clarify.
## Response to Point 1 of the high-level critique:
**For sample complexity, we respectfully disagree with the claim that with privileged information, statistical hardness is virtually eliminated.** This was exactly the point and technical contributions of the references [36,26,67] that only addressed the “sample complexity” improvement. **The key challenge is that the “output” of the learning process is still a “partially observable” policy (with partial observations as input), not one from the MDPs**.
**For computation complexity, our results are not a direct combination of the poly-sample learning result + quasi-poly-time planning result.** Having a quasi-poly-time planning algorithm with model knowledge does **NOT** necessarily imply that learning without model knowledge can also be done in quasi-poly time. In fact, the **time/computational complexity** of **learning** without model knowledge and of **pure planning** with model knowledge can have **strict gaps**, sometimes even in simpler models. For example, [24] recently showed that learning in block MDPs (without privileged information) is computationally (strictly) harder than supervised learning problems, even though planning with model knowledge in a block MDP is computationally easy. **Intuitively, it is the sampling errors in the learning setting that can make the computational nature of the problem fundamentally harder.** This is in the same spirit as the famous problem of *learning parity with noise*, which is also computationally hard, although solving this problem given model knowledge is trivial.
**Back to our setting, access to privileged information never "assumes this challenge of sampling error away".** Therefore, our positive results on the computational complexity are **not** a direct consequence of [21, 22]. In particular, **both** the **algorithms** and **techniques** for showing "poly-sample complexities" in our paper are fundamentally different from those in [21] when there is no privileged information.
## Response to Point 2 of the high-level critique:
**Why deterministic filter + privileged information?**
- **Empirical motivations.** Our work aims to understand practical privileged-information-based algorithms, leading to the identification of the Deterministic Filter condition after recognizing the limitations of Expert Distillation.
- **Privileged information alone imposes no assumptions on the POMDP model itself, implying its PSPACE-hardness computationally (inherited from planning).** This is because **planning with model knowledge** is **easier** than **learning with privileged information**: one can essentially **simulate** the problem of learning with privileged information when the model is known.
- **Unifications of known problem classes (see Appendix E).**
## Response to Point 3 of the high-level critique:
Firstly, we did **not** claim our algorithms are highly practical. Our focus was to understand existing practical paradigms (Expert Distillation and Asymmetric Actor-Critic) in the most fundamental tabular setting, with some abstractions for theoretical analysis.
**More importantly, we did have function approximation results (lines 275-277, and full results in Appendix G), where we only relied on a simple classification oracle, which is indeed quite compatible with current Deep RL implementations.**
## Response to improvements in terms of writing:
**Clarifications for contributions (single-agent v.s. multi-agent):** Our main contribution is understanding empirical paradigms using privileged information and then addressing their limitations. **More importantly, results for single-agent POMDP and multi-agent POSG are both under this privileged information setup, following the same algorithmic principle.** Therefore, they are **NOT** orthogonal.
**Line 45-46 (goal of actor-critic):** Asymmetric Actor-Critic is already a strong empirical algorithm, and our goal is to first **identify its limitations** (Prop 3.3) and then develop both computationally and statistically efficient versions of it (Sec 5).
**Regarding Proposition 3.1:** We hope to firstly **understand** the limitation of existing empirical paradigms through Proposition 3.1. Then we further use it to **motivate** a (unifying) subclass of POMDPs (Def. 3.2) under which we **managed to prove** that the paradigm of Expert Policy Distillation **can be provably efficient** (Sec 4).
**Response to the last point:** The benefits of privileged information are well-studied empirically. **Those terminologies are not what we invented but are already quite standard in empirical works.**
Finally, we **strongly disagree** that this assumption is "essentially similar to the block MDP". **We have provided several examples in lines 919-929, including block MDP, but significantly going beyond it.** In fact, one example of Def 3.2 can include the POMDP with latent deterministic transition **without any assumptions on the emission**.
## Response to Questions:
**Firstly, neither observability nor the well-conditioned/regular/separated PSR condition implies Def 3.2.** To the best of our knowledge, such conditions, instantiated in POMDPs, mainly rule out the case of *uninformative* observations. In contrast, Def 3.2, in extreme cases, can require **no assumptions on the emission**.
**Secondly, Thm. 5.3 improves (Golowich et al., 2022) by achieving both polynomial sample and quasi-polynomial time complexity (see more details in our response to Point 1).**
**Finally, results in Sec 4 are "in parallel" with those in Sec 5.** In Sec 4, we have studied another class of POMDPs, i.e., those satisfying Def 3.2, **instead of assuming observability as in Sec. 5**. Since Def 3.2 and $\gamma$-observability **do not imply each other**, the results of Sec 4 and Sec 5 **do not imply each other, either**.
---
Rebuttal 2:
Title: Thank you for the response
Comment: I thank the authors for clarifying some of my concerns. However, there are several points that are still not very convincing to me:
1. You mention the computational hardness for learning (statistically known to be tractable) POMDPs even with access to hidden states (privileged information). It would be very helpful for me to understand this challenge by answering the following question:
*Why cannot we simply learn the model through sufficiently long (but polynomial) reward-free exploration episodes (but also learn the reward and emission models), and output the result of quasi-poly planning on the estimated model?*
2. It seems that Definition 3.2 lacks some key intuitive explanations. Why are there no examples of $\gamma$-observability provided in Appendix E, and why is this case addressed separately in Section 5? Furthermore, what are the specific differences in the final guarantees (Theorem 4.5) between Examples E.3 and E.4?
---
Rebuttal 3:
Title: Response to further questions of Reviewer zFhw
Comment: We thank the reviewer for the further questions and respond as follows.
---
### Response to Q.1
1. Firstly, the **computational hardness** of learning POMDPs (which are statistically known to be tractable) even with access to hidden states, as we stated before, concerns the setting with **only privileged information** and **without any structural assumptions on the POMDP model.** We use it to reply to your high-level critique as follows
> The privileged information combined with the deterministic filter condition is very strong. It is unclear why Definition 3.2 was needed given the strong assumption of privileged information.
2. Secondly, we only claimed that such hardness of learning problems can potentially exist in the **standard learning setting without privileged information**. **Therefore, we use it to justify why privileged information, a well-motivated empirical paradigm, also offers strict theoretical benefits in terms of computation complexity.**
3. Thirdly, we **really appreciate the reviewer's great question regarding whether reward-free exploration suffices**. In fact, **it was our initial thought as well**. However, we note that **naively extending** the reward-free techniques from MDPs to POMDPs by also learning the emission fails, as detailed below.
To briefly review, the key idea for standard reward-free exploration in MDP is to estimate the transition given some reward-free dataset $D$ at step $h\in[H]$
$$
\hat{\mathbb{T}}_h(s^\prime\mid s, a)=\frac{N_h(s, a, s^\prime)}{N_h(s, a)},
$$
where $N_h(s, a, s^\prime)$ and $N_h(s, a)$ denote the count of such state-action triplets/tuples in the reward-free dataset $D$. Correspondingly, we can get the guarantee of
$$
V_1^{\pi, (\mathbb{T}, r)}(s_1)-V_1^{\pi, (\hat{\mathbb{T}}, r)}(s_1)\le \epsilon,
$$
for any reward function $r$ and policy $\pi$, where $V_1^{\pi, (\mathbb{T}, r)}(s_1)$ denotes the value of policy $\pi$ in the MDP specified by $(\mathbb{T}, r)$, and similarly, the definition extends to $V_1^{\pi, (\hat{\mathbb{T}}, r)}(s_1)$. Therefore, if we can find an optimal solution for the estimated model $(\hat{\mathbb{T}}, r)$, it is also an approximately optimal solution for the true MDP $(\mathbb{T}, r)$.
Back to POMDP, we can indeed estimate the model by
$$
\hat{\mathbb{T}}_h(s^\prime\mid s, a)=\frac{N_h(s, a, s^\prime)}{N_h(s, a)},
$$
$$
\hat{\mathbb{O}}_h(o\mid s)=\frac{N_h(s, o)}{N_h(s)},
$$
where again those $N_h$ denote the counts of appearance of the corresponding quantities in the reward-free dataset $D$. By a similar analysis, we can get a similar reward-free POMDP guarantees of
$$
V_1^{\pi, (\mathbb{T}, \mathbb{O}, r)}(s_1)-V_1^{\pi, (\hat{\mathbb{T}}, \hat{\mathbb{O}}, r)}(s_1)\le \epsilon,
$$
for any reward function $r$ and policy $\pi$, where $V_1^{\pi, (\mathbb{T}, \mathbb{O}, r)}(s_1)$ denotes the value of policy $\pi$ in the POMDP specified by $(\mathbb{T}, \mathbb{O}, r)$, and similarly the definition extends to $V_1^{\pi, (\hat{\mathbb{T}}, \hat{\mathbb{O}}, r)}(s_1)$. Therefore, if we can find an optimal solution for the approximate POMDP $(\hat{\mathbb{T}},\hat{\mathbb{O}}, r)$, it is also an approximately optimal solution for the true POMDP. Up to now, the analysis could be similar to that of reward-free exploration in MDP.
However, even if the original POMDP satisfies $\gamma$-observability, $(\hat{\mathbb{T}},\hat{\mathbb{O}}, r)$ **is not necessarily a $\gamma$-observable POMDP. This is true even if we run the reward-free process for polynomially many episodes**, since the maximum visitation probability for certain states can be exponentially small. This will affect the estimation accuracy for the corresponding rows of the emission $\hat{\mathbb{O}}$, potentially breaking its $\gamma$-observability (to **rigorously see why and how, we kindly refer to the proof of our Theorem H.5**). In other words, although such states that are inherently hard to visit do not affect the **value performance bounds** or planning in the approximate **MDP** (since any MDP is computationally tractable), they do affect the tractability of planning in the approximate POMDP.
To briefly introduce the key idea to circumvent such an issue, **we introduce a new terminal state $s^{\text{exit}}$ and try to redirect the probabilities transitioning to those hard-to-explore states to this terminal state** and **carefully re-define the transition/emission correspondingly to ensure the *misspecified* model is $\gamma^\prime$-observable**, while making sure $\frac{\gamma^\prime}{\gamma}\ge \mathcal{O}(1)$. **Note that none of such construction or analysis along the way is proposed/needed in the standard MDP reward-free exploration framework** since it only needs to care about the **value performance bound**, rather than the **computation tractability in the approximate model.**
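For concreteness, the plain count-based estimators above can be sketched in a few lines of tabular code (the `(s, a, s_next, o)` dataset format and all numbers are hypothetical; the $s^{\text{exit}}$ redirection that restores observability is deliberately not shown):

```python
import numpy as np

# Minimal tabular sketch of the count-based estimators
#   T_hat_h(s' | s, a) = N_h(s, a, s') / N_h(s, a),
#   O_hat_h(o  | s)    = N_h(s, o)     / N_h(s).
# dataset[h] is assumed to hold (s, a, s_next, o) tuples collected at step h
# by some reward-free exploration procedure (format hypothetical).

def normalize(counts):
    """Turn last-axis counts into conditional distributions;
    unvisited rows default to uniform (a common convention)."""
    totals = counts.sum(axis=-1, keepdims=True)
    uniform = np.full_like(counts, 1.0 / counts.shape[-1])
    return np.where(totals > 0, counts / np.maximum(totals, 1), uniform)

def estimate_model(dataset, S, A, O):
    H = len(dataset)
    T_counts = np.zeros((H, S, A, S))
    O_counts = np.zeros((H, S, O))
    for h, tuples in enumerate(dataset):
        for s, a, s_next, o in tuples:
            T_counts[h, s, a, s_next] += 1
            O_counts[h, s, o] += 1
    return normalize(T_counts), normalize(O_counts)

# Tiny usage example with a made-up dataset (H=1, S=2, A=1, O=2).
data = [[(0, 0, 1, 0), (0, 0, 1, 0), (0, 0, 0, 1), (1, 0, 1, 1)]]
T_hat, O_hat = estimate_model(data, S=2, A=1, O=2)
assert abs(T_hat[0, 0, 0, 1] - 2 / 3) < 1e-12  # (s=0, a=0) -> s'=1 seen 2 of 3 times
assert abs(O_hat[0, 1, 1] - 1.0) < 1e-12       # s=1 always emitted o=1
```

The point of the discussion above is that even when every estimated row is close to the truth on average, rarely visited states can leave some rows of `O_hat` so inaccurate that the estimated model loses $\gamma$-observability, which is what the $s^{\text{exit}}$ construction repairs.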
---
Rebuttal 4:
Title: Response to further questions of Reviewer zFhw (Cont'd)
Comment: 4. **More importantly, we would like to emphasize again that our motivation is to understand existing practice and develop corresponding provable algorithms based on it, while the reward-free exploration + planning framework (using the algorithm from (Golowich et al., '23)) is certainly disconnected from this goal.** Specifically, our starting point was to analyze the popular pipeline of belief learning + policy optimization (**asymmetric actor-critic**). Note that **our novel techniques in Point 3 are just for one instantiation of the first step of belief learning under the specific $\gamma$-observability assumption and online exploration setting.** In practice, even if observability is not satisfied, there are many effective empirical methods for the belief learning oracle that can be used, and our policy optimization algorithm, **decoupled from** the belief learning step, is **still effective**. In contrast, the reward-free exploration + planning framework is restricted to the $\gamma$-observable POMDP setting only and differs from empirical practice.
5. Finally, we would like to remind the reviewer that what the reviewer focuses on is just half of our results (Sec. 5), while in **Sec. 4, we do not need any reward-free techniques or observability assumption**, and the corresponding algorithms are also much more natural. Even in the half of the results that the reviewer focused on, our goal is not just developing **an** algorithm to achieve poly sample and quasi-poly computation complexity for the specific observable POMDP with online exploration setting at the same time, but analyzing (the flaws and enhancements of) **the** algorithmic paradigms used in practice.
---
### Response to Q.2
1. **Key intuitive explanations of Def 3.2.** Firstly, the terminology of a filter refers to the process of estimating the underlying unknown state using a sequence of actions and observation (recursively). Therefore, our condition states that if at step $h$, the estimation of the current states $s_h$ is not random, then the state estimation for the next step $h+1$ after taking the new action $a_h$ and receiving the new observation $o_{h+1}$ is also non-random.
2. **Example of $\gamma$-observable POMDP.** We apologize for not further explaining observability. This was because we thought it was one of the standard assumptions in RL theory for addressing POMDPs. Notably, another useful/related assumption is the weakly-revealing assumption [39], which is indeed also equivalent to $\gamma$-observability (up to some problem-dependent factors). For specific examples, we point out that a sufficient condition is that the emission matrix has **full row-rank**, and some simple/natural examples can be found in Example B.1 of [22].
3. **Why is this case addressed separately in Section 5?**
- Firstly, Sec.4 analyzes the empirical paradigm of privileged **policy** learning/**expert policy distillation**, while Sec.5 analyzes the empirical paradigm of privileged **value** learning/**asymmetric actor-critic**. **We have shown that the empirical paradigms in Sec.4 applied to observable POMDPs suffer from sub-optimality (Prop 3.1).** Therefore, we cannot handle this case in Sec. 4. Meanwhile, this sub-optimality further motivates us to propose our Def 3.2 (which is neither stronger nor weaker than $\gamma$-observability).
- A more fundamental reason is that for those problems under Def 3.2, the paradigm of Sec. 4 can actually achieve poly sample + poly computation, while it is known that the quasi-poly computation complexity for $\gamma$-observable POMDPs is **unimprovable** (Golowich et al., '23). Therefore, we analyzed **another** empirical paradigm in Sec.5 and show that it can handle the $\gamma$-observable POMDP case that we could not handle before.
4. **Furthermore, what are the specific differences in the final guarantees (Theorem 4.5) between Examples E.3 and E.4?**
We thank the reviewer for bringing this question that can indeed help us **further justify the provable benefits** of privileged information.
- Firstly, there are **no** specific differences in the final guarantees between Examples E.3 and E.4. In other words, Theorem 4.5 holds as long as the POMDP satisfies Def 3.2.
- Secondly, **the reason why there are no differences is that our guarantees do not suffer from the exponential dependency on the decoding length anymore!** Note that references [A, B] that studied Example E.3 ($k$-decodable POMDP) **have to suffer from the exponential dependency on $k$ both statistically and computationally**, which explains why they have to assume $k$ to be a small constant. In contrast, under privileged information, we show that such exponential dependency on $k$ **can be removed** (with natural and **practically relevant** algorithms), which further explains why we can handle the new case Example E.4 that cannot be handled by the existing literature.
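For concreteness, the filter condition referenced in point 1 of our response to Q.2 above can be sketched as follows; the notation here is illustrative and may differ from the exact statement of Def 3.2 in the paper:

```latex
% Standard Bayes filter update for the belief b_h over the latent state
% (illustrative notation, not necessarily the paper's):
\[
  b_{h+1}(s') \;\propto\; \mathbb{O}_{h+1}\!\left(o_{h+1}\mid s'\right)
  \sum_{s} \mathbb{T}_h\!\left(s'\mid s, a_h\right) b_h(s).
\]
% Condition (informal): if b_h = \delta_{s_h} is a point mass (the state
% estimate at step h is non-random), then for every action a_h and every
% observation o_{h+1} received with positive probability, b_{h+1} is again
% a point mass \delta_{s_{h+1}}.
```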
---
Rebuttal 5:
Title: Response to further questions of Reviewer zFhw (Cont'd)
Comment: ---
We are grateful to the reviewer for their dedicated efforts in reviewing our paper and for engaging in the discussion period, which has undoubtedly helped improve our work. We are looking forward to your further feedback. Thank you again.
---
[A] Efroni, Yonathan, et al. "Provable reinforcement learning with a short-term memory." International Conference on Machine Learning. PMLR, 2022.
[B] Guo, Jiacheng, et al. "Provably efficient representation learning with tractable planning in low-rank pomdp." International Conference on Machine Learning. PMLR, 2023.
---
Rebuttal Comment 5.1:
Title: Thanks!
Comment: I appreciate the detailed responses. Now the picture is more clear to me, and I am now slightly more on the positive side.
However, I still think the submission in its current form needs improvement in terms of writing to clearly convey the key challenges and intuition. In the revised version, it would be nicer if a more concise and crisp version of the responses were included accordingly.
---
Reply to Comment 5.1.1:
Title: Thanks for your dedicated efforts!
Comment: We are excited to hear that our responses help address your concerns and will make sure the discussions with you (that are quite effective in our opinion) are included in the later revised version! | Rebuttal 1:
Rebuttal: ## Additional experimental results
In response to the reviewers, to make our experimental evaluation more comprehensive, we **have added new results** by testing our algorithms on more POMDP problems of larger size than the original problems in the paper. Meanwhile, we also addressed the problem of overlapping legends in some figures. We hope these additional experimental evaluations and re-plotted figures will address the concerns regarding the empirical evaluation of our algorithms.
Pdf: /pdf/bea78f8ecd01af22ca3ad49b28348b0d61cc6430.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Efficient Evaluation of LLMs via Branching Preference Learning | Reject | Summary: In this work, the authors conceptualize the evaluation process as a decision tree, where each node represents an evaluation action, and each path from the root to a leaf node represents a trajectory of evaluation reasoning. The authors demonstrate that within a limited search space, there exist better decision-making behaviors that facilitate the model in making reasonable and accurate judgments.
Strengths: 1. The idea of branching LLM evaluation is interesting and novel.
Weaknesses: 1. The authors missed a lot of key related works, including close-ended benchmarks such as MMLU, MMLU-pro, MixEval, GSM8k, GSM1k, etc; open-ended benchmarks such as Arena-Hard, AlpacaEval, WildBench, Chatbot Arena, etc.
2. I think the writing needs improvement. Right now it's not easy for a reader to get what you are focusing on. If you are doing evaluation, then try to use some pipeline figures and comprehensive captions to describe the core idea. Besides, all captions in this paper are misleading, not telling the reader what is happening in the table or figure; also, some key sections, such as a conclusion, are missing.
3. How to measure the quality of the proposed evaluation? I think just evaluating 5-6 models is far from enough. Beyond that, how is the model rankings related with Chatbot Arena or some other popular benchmarks such as MMLU?
Technical Quality: 2
Clarity: 1
Questions for Authors: Is it necessary to consider evaluations specifically for dialogue settings? I think all LLM evaluations are in a dialogue setting; the difference is just one-turn vs. multi-turn evals. Just calling it open-ended eval would be better aligned with the LLM research community.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review comments.
**For Concerns**:
> Q: The authors missed a lot of key related works
1. We clarify our task and related work (refer to common responses 1, 2, and 4 for more details):
- Our work has broader application scenarios; it is not only for benchmarking model capabilities but also for reward modeling, preference data construction, and correction models [1] (see Sec 5.5 for related experiments). In subsequent experiments, we show our results on Chatbot Arena, which exhibit a strong correlation with human scoring.
- Our work supplements benchmarks like MMLU and GSM8k (as discussed in common responses 1 and 2). While MMLU and similar evaluation tests focus on correctness scoring, many real-world problems don't have strict binary labels (correct or incorrect) and require preference scoring. Our research aims to address the shortcomings of these automated evaluation benchmarks.
- Our work can provide an open-source, automated alternative to replace GPT-4 for evaluations (Like WildBench relies on GPT-4 for evaluation).
- Our contributions (see common response 4 for more details): our aim is not only to provide a better evaluation method for ranking a large number of LLMs but also to assist with tasks that require human evaluation. The core contribution of our paper is transforming the evaluation task into an optimization problem over an evaluation tree, enhancing evaluation capability through preference branch selection.
> Q: I think the writing needs improvement.
2. We will make revisions based on your suggestions:
- Create a figure to illustrate how the evaluation task is performed. As described in section 3 (lines 91-95), the preference evaluation task involves taking two AI responses and determining which one better addresses the query (*win*, *lose*, and *tie*).
- Include key conclusions in the captions. The main conclusions are already included in the paper, but if you have specific concerns, please let us know.
> Q: just evaluating 5-6 models is far from enough. like model rankings?
3. New experiments. **We promise to include these experimental results in the next version**:
The quality and diversity of dialogue data are certainly crucial for training models. While collecting more diverse data is important, it is not the core contribution of our work. Our method can achieve significant performance improvements by incorporating more dialogue data, demonstrating its scalability and effectiveness.
(1) We conduct Model Ranking, referring to [2] (all data from the official GitHub repo). Considering that our approach is more suited for pair-wise evaluation rather than single evaluation, the evaluation results might be biased. Nevertheless, our results exhibit a strong correlation with human scoring. *Experimental setup*: We used the DPO model under the OOD setting to score different models. We present our model's scores (Our Score) and the scores from the Chatbot Arena Leaderboard [2] as a reference (Arena Score). Delta represents the change in ranking.
|Model|Arena Score|Our Score|Delta|
|--|--|--|--|
|gpt-4 |1162|230.13|0|
|gpt-3.5-turbo|1068|222.26| +2|
|claude-v1|1149|219.53|-1|
|wizardlm-13b |1058|218.93| +2 |
|Llama-2-70b-chat|1093|217.80| -3|
|vicuna-7b-v1.3|1005|212.46| +2|
|Llama-2-13b-chat|1063| 210.86| -2|
|Koala-13b|964|205.13|+1|
|Chatglm-6b|924|200.16|+1|
|RWKV-4-Raven-14B|921|197.66|+1|
|Llama-2-7b-chat|1037|192.33| -4|
|Alpaca-13B|901|183.06|0|
|Dolly-V2-12B|822|165.13|0|
|LlaMA-13B |799|160.93|0|
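To make the claimed correlation concrete, the rank agreement between the two score columns above can be checked with a short script (our illustrative computation using the standard Spearman formula; this is not code from the paper):

```python
# Scores copied from the table above, in the same row order.
arena = [1162, 1068, 1149, 1058, 1093, 1005, 1063, 964, 924, 921, 1037, 901, 822, 799]
ours  = [230.13, 222.26, 219.53, 218.93, 217.80, 212.46, 210.86, 205.13,
         200.16, 197.66, 192.33, 183.06, 165.13, 160.93]

def ranks(xs):
    # rank 1 = highest score; neither column has ties, so plain ranks suffice
    order = sorted(range(len(xs)), key=lambda i: -xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

ra, ro = ranks(arena), ranks(ours)
n = len(arena)
d2 = sum((a - b) ** 2 for a, b in zip(ra, ro))
rho = 1 - 6 * d2 / (n * (n * n - 1))  # Spearman's rho without ties
print(round(rho, 3))  # 0.912, i.e. a strong correlation with Arena rankings
```

Since neither column contains ties, the simple formula $\rho = 1 - 6\sum d_i^2 / (n(n^2-1))$ applies directly.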
(2) We conduct experiments with other baselines [3]. It is worth noting that our model was trained using only 7K dialogue data, significantly less than the 200K used by PROMETHEUS.
|Model|Data|Eval-P w/ Tie|Eval-P w/o Tie|MT-bench w/ Tie|MT-bench w/o Tie|
|--|--|--|--|--|--|
|MIXTRAL-INSTRUCT-8X7B| |53.81|73.50|51.85|71.42|
|ULTRA RM (13B)| | |59.85| |56.00|
|PROMETHEUS-2-7B|200k (label)|57.61|73.80|56.18|67.25|
|PROMETHEUS-2-8X7B|200k (label)|58.41|79.98|55.07|71.96|
|**Ours (OOD DPO)**|3k (label) + 4k (unlabel)|56.75|77.23|55.89|72.08|
(3) We add experiments on the more general benchmark **Rewardbench** [4], which covers the AlpacaEval data you mentioned. **Please refer to the common response 5 for detailed experimental results.** These results effectively validate the efficacy of our method. Our model outperforms many commercial APIs and 70B parameter models. (Importantly, our experiments did not use any human labels, providing strong evidence for scalability.)
The RewardBench evaluation dataset contains the following:
- Chat (alpacaeval-easy, alpacaeval-length, alpacaeval-hard, mt-bench-easy, mt-bench-medium)
- Chat Hard (mt-bench-hard, llmbar-natural, llmbar-adver-neighbor, llmbar-adver-GPTInst, llmbar-adver-GPTOut, llmbar-adver-manual)
- Safety (refusals-dangerous, refusals-offensive, xstest-should-refuse, xstest-should-respond, do not answer)
- Reasoning (math-prm, hep-cpp, hep-go, hep-java, hep-js, hep-python, hep-rust)
Our work demonstrates that (1) automated data construction can rival or even surpass human-collected data, and (2) automated preference optimization methods can help evaluation models rapidly identify key evaluation criteria, ensuring the efficiency of the evaluation process.
> Q: Is it necessary to consider evaluations specifically for dialogue settings?
4. Thanks for your question. We will revise any inconsistent expressions in the paper accordingly (see common response 3). Furthermore, our experiments cover a broad range of domains, and we have supplemented experiments on Rewardbench, which should support the claims in our work and demonstrate the effectiveness of proposed method.
[1] LLM Critics Help Catch LLM Bugs (OpenAI 2024)
[2] Judging LLM-as-a-judge with MT-Bench and Chatbot Arena (NeurIPS 2023)
[3] PROMETHEUS 2: An Open Source Language Model Specialized in Evaluating Other Language Models
[4] RewardBench: Evaluating Reward Models for Language Modeling (Allen AI 2024)
---
Rebuttal 2:
Comment: Thank authors for your responses. I keep my original score unchanged.
---
Rebuttal Comment 2.1:
Comment: Hi Reviewer RFGW, could you provide more insights on why / on which point you don't think the authors have sufficiently addressed your concerns (given that you decide to keep your score)? Because there is still one day until the end of the reviewer-author discussion period, they authors could still have chance to clarify and to provide more information to address your concerns. | Summary: This paper proposes a novel approach to efficiently evaluate LLMs using branching preference learning. The authors conceptualize the evaluation process as a decision tree, where each path represents an evaluation reasoning trajectory. They introduce a tree-based data sampling method and preference learning based on the DPO algorithm to improve evaluation capabilities. The method is tested in three settings: in-distribution, out-of-distribution, and transfer evaluation. The authors claim their model significantly reduces dependency on labeled data and demonstrates strong performance across different evaluation settings while reducing inference costs by 90% compared to searching the entire evaluation tree.
Strengths: - The paper's novel approach of framing LLM evaluation as a decision tree problem is a significant strength. This allows for a more nuanced and flexible evaluation process that can adapt to different scenarios and criteria. The use of branching preference learning enables the model to prioritize critical evaluation criteria.
- The authors test their model in multiple settings (in-distribution, out-of-distribution, and transfer evaluation), providing a thorough assessment of its performance. Applaud to that.
Weaknesses: - The biggest concern I have is that in-distribution performance is not better than other baselines it compares to. This begs the question of where the improvement gain is from. If in-distribution evaluation performance is mediocre but out-of-distribution does better, then doesn't the most gain come from a better dataset?
- Another concern is the unnaturalness of using evaluation criteria as individual nodes. How is the coverage of those criteria across different nodes ensured? Are they overlapping with each other or completely different? The paper is quite vague on this.
- Why is each criterion subtree only a binary tree? If using a tree structure, it seems it could easily be extended to multiple nodes rather than just 2 at each layer.
Technical Quality: 3
Clarity: 3
Questions for Authors: - It is unclear how to sample multiple evaluation paths. By using a high temperature? Or is each individual criterion a criterion node?
- What is the Eval-P benchmark? It is not cited in the paper.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for appreciating the novelty of our work. We hope the following responses can further address your concerns:
> Q: If in-distribution evaluation performance is mediocre but out-of-distribution does better, then doesn't the most gain come from a better dataset?
1. We try to explain in detail why counter-intuitive results appeared in the in-distribution (ID) experiments, particularly as it may be related to "the curse of recursion" [1]:
- First, there is a key difference between in-distribution (ID) settings and other (OOD and transfer) settings: the training data in the ID setting consists of high-quality, well-defined domain data collected by humans (including baseline AutoJ and Fennec). Therefore, in ID setting, the initial model, SFT model, and DPO model were all trained **using the same Dialogue data**. In other words, we iteratively synthesized new training data, and the queries for these data are the same. In contrast, the OOD and transfer settings **used different Dialogue data** (as described in lines 201-216 and in the Appendix).
- Recent studies on iterative data synthesis have found that continuously using synthetic training data can lead to model degeneration because the model tends to converge to a single modal distribution. As noted in [1], recursively using model-generated data can lead to "model collapse". This suggests that the training instability in our ID setting is likely caused by the use of synthetic data.
- This phenomenon aligns with our findings in the paper. In the OOD and transfer settings, where new training data is used, training remains stable. The new data typically constrains the model training process, helping it maintain a diverse distribution and preventing it from converging to a single distribution. Therefore, we recommend using dialogue data with different distributions at the SFT or DPO stages to prevent "the curse of recursion."
- Despite this, the ID experiment results were still impressive, achieving the highest agreement score of 57.18 on Eval-P, a significant improvement over Fennec (note that in the ID experiment, our methods and Fennec used the same training data). All above evidence proves the effectiveness of our method.
Regarding "most gain coming from a better dataset," the direct evidence is that the performance of our initial model (trained on our collected data) is only 49.64, which is significantly lower than Fennec's 55.36. This shows that our data did not yield better results through direct training. However, considering another aspect, namely data or task diversity, our collected data is certainly better (see Sec 4.1, lines 133-135). We sampled from a large-scale dialogue dataset rather than a specific data source. We then applied the K-Means algorithm to cluster the data. Subsequently, we sampled data from these clusters, ensuring that the training dataset encompasses a diverse set of dialogue scenarios. **This validates our motivation: even without human-collected or labeled data, the model can achieve comparable or even better performance.** This also indicates that we can significantly reduce labor costs for preference evaluation tasks in the future.
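The cluster-then-sample step described above can be sketched as follows (a minimal illustration with stubbed cluster labels; the actual K-Means pipeline, budgets, and data are assumptions here, not taken from the paper):

```python
import random

def stratified_sample(items, labels, n_total, seed=0):
    """Sample ~n_total items, spreading the budget evenly over clusters
    so that every dialogue cluster is represented (illustrative sketch;
    the paper's actual clustering/sampling details may differ)."""
    rng = random.Random(seed)
    by_cluster = {}
    for item, lab in zip(items, labels):
        by_cluster.setdefault(lab, []).append(item)
    per_cluster = max(1, n_total // len(by_cluster))
    sample = []
    for members in by_cluster.values():
        sample.extend(rng.sample(members, min(per_cluster, len(members))))
    return sample

# toy usage: 3 clusters, budget of 6 -> 2 items from each cluster
items = list(range(12))
labels = [i % 3 for i in items]  # stand-in for K-Means cluster assignments
picked = stratified_sample(items, labels, n_total=6)
print(sorted({labels[i] for i in picked}))  # [0, 1, 2]: all clusters covered
```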
> Q: Are they overlapping each other or completely different?
2. In our experiments, only a minimal chance exists for the model to output semantically similar criteria. Our prompt is provided in Table 10, and Table 11 shows specific examples. It’s important to note that the idea of automatically generating criteria has been proposed in previous work [2]. Our contribution lies in guiding the model to prioritize criteria with high discriminative capabilities. Table 1 shows that we can achieve better results with fewer inference steps (branches).
> Q: Why is each criteria subtree only a binary tree?
3. You are correct that evaluation trees can extend to multiple nodes. However, considering: (1) **Computational efficiency**, each sample to be evaluated includes 10 criteria (k=10), each criterion includes 2 scoring guidelines, followed by 4 different judgments (see lines 151-157). Thus, there are 80 different subtrees. The purpose of building the evaluation tree is to expand the search space, and the **current method already has a high computational complexity, making additional children unnecessary.** (2) Scoring guidelines and judgments are sampled by adjusting temperature and swapping response positions. The diversity obtained by adjusting temperature is limited and requires different conditions (criteria and scoring guidelines). (3) **Another purpose of constructing the evaluation tree is to obtain preference pairs (chosen, reject) for DPO training**. Clearly, a binary tree is sufficient.
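The branching arithmetic above (k = 10 criteria × 2 scoring guidelines × 4 judgments = 80 subtrees) can be enumerated directly; this is only an illustration of the search-space size, using placeholder indices instead of real model outputs:

```python
from itertools import product

# Search-space count described above: each sample has 10 criteria, each
# criterion has 2 scoring guidelines, each guideline has 4 judgments.
K_CRITERIA, N_GUIDELINES, N_JUDGMENTS = 10, 2, 4

paths = [
    (c, g, j)
    for c, g, j in product(range(K_CRITERIA), range(N_GUIDELINES), range(N_JUDGMENTS))
]
print(len(paths))  # 80 distinct root-to-leaf evaluation paths, matching the text
```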
> Q : It is unclear how to sample multiple evaluation path? By using a high temperature?
4. In lines 151-154, we have stated how different tree nodes are generated, i.e., criteria are ensured to be diverse by adjusting the model’s prompt, while scoring guidelines and judgments are obtained by adjusting temperature (specifically, judgments can be obtained by swapping response positions).
> Q: What is the Eval-P benchmark?
5. Eval-P was proposed by AutoJ [3]. We will add the citation at line 199.
We also provide a more detailed background and related work to clarify our work, as well as clearer contributions and new experimental results. Please refer to the common response (Author Rebuttal). We hope this can further address your concerns.
[1] AI models collapse when trained on recursively generated data(Nature)
[2] Branch-solve-merge improves large language model evaluation and generation (Meta 2023)
[3] Generative judge for evaluating alignment (ICLR 2024)
---
Rebuttal Comment 1.1:
Comment: Thank you for the response and addressing my questions. I am still concerned about the ID setting shown in Table 1. On the same dataset, compared to Auto-J and Fennec the proposed method is not really doing better. It is sometimes worse. This is evidence that the methodology is not the reason behind performance gain. Thank you for bringing up model collapse. One can definitely avoid model collapse even during iterative data synthesis [1]. The main problem here is that external data seems to be helping the most rather than the evaluation tree process.
[1] https://x.com/RylanSchaeffer/status/1816535790534701304
---
Rebuttal 2:
Title: Further Discussion
Comment: **Thank you very much for your suggestions, which have helped us improve the quality of our work.** Following your advice, we have conducted new experiments and provide the following clarifications:
+ We have never denied the contribution of data, especially the additional unlabeled data (in the OOD setting) used in our approach. However, it is important to note that this data is unlabeled, making it widely accessible without incurring extra costs.
+ These additional data are simply dialogue data. The purpose of the evaluation tree is to generate the criteria, scoring guidelines, and judgments required to train the evaluation model. Therefore, including only dialogue data is insufficient for training an evaluation model. Hence, we cannot consider the new unlabeled dialogue data as the key to improving performance. Our approach requires combining these dialogue data and constructing SFT and DPO data.
+ **Most importantly**: Our proposed method is based on the observation in Figure 1, which shows that increasing the number of branches can enhance evaluation performance. This means that even without the additional data, the initial model can achieve good evaluation performance (**though it may require > 40 branches**). The SFT and DPO methods are designed to enable the model to efficiently reach the final result (**< 3 branches**). This is also why we employed branch ensemble in Section 4.4, as branch ensembles generally yield better decision results and provide a higher upper bound on performance. The purpose of designing the evaluation tree is to allow the model to more quickly identify the key evaluation paths, which can be derived from the branch ensemble results (as referenced in Section 4.4). **Thus, the evaluation tree is largely aimed at accelerating the evaluation process by sampling fewer (yet more crucial) branches**. We believe our method's design and experimental results are well supported by the substantial evidence mentioned above.
+ We verified whether model collapse was the reason for the impact on ID performance. By mixing the original training data with the newly synthesized data when training the SFT model according to your advice (thanks again), **we observed a significant performance improvement, which supports our initial hypothesis in Section 5.7**. Continuously training the SFT model and DPO model with the same data affects model convergence. However, compared to the OOD setting, there is still a lack of new unlabeled data, making direct comparisons still unfair. If we obtained more unlabeled domain-specific data, we believe our methods could achieve even better results. Considering the rebuttal time limitation, we may not be able to conduct additional experiments. However, our experiments, along with the common response experiments on RewardBench, should be sufficient to address your concerns.
| | branch | Eval-P w/tie | Eval-p w/o Tie |
| ---------------------------------- | ------ | --------------- | --------------- |
| Ours ID SFT (copy from paper) | 1 | 56.68/86.64 | 70.76/89.11 |
| | 5 | 55.96/86.57 | 72.91/88.13 |
| Ours ID DPO (copy from paper) | 1 | 55.24/84.26 | 69.87/86.95 |
| | 5 | 57.18/85.63 | 74.88/88.52 |
| Ours OOD SFT (copy from paper) | 1 | 54.59/87.14 | 70.56/88.52 |
| | 5 | 55.10/87.86 | 73.69/89.99 |
| Ours OOD DPO (copy from paper) | 1 | 55.89/89.44 | 75.76/90.67 |
| | 5 | 56.75/90.37 | 77.23/92.24 |
| Ours ID SFT + Mix Training (new) | 1 | 56.87/87.92 | 70.92/88.99 |
| | 5 | 56.32/87.86 | 75.17/89.79 |
| Ours ID DPO + Mix Training (new) | 1 | 57.54/90.66 | 78.60/93.32 |
| | 5 | **58.41/91.52** | **79.69/93.72** |
Caption: Our Mix Training setup differs from that in the paper by using both original and synthetic data to train the model, whereas the paper only used synthetic data for training the SFT model. As a result, we achieved an Agreement of 58.41 vs. 55.80 and a Consistency of 79.69 vs. 74.14 compared to Fennec.
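For clarity on how the Agreement/Consistency pairs in the table are read, here is a minimal sketch of our understanding of the two metrics (the original judgment is expressed relative to the first response; the swapped-run judgment is the raw output of the run with responses swapped; the paper's exact bookkeeping may differ):

```python
def flip(judgment):
    # a 'win' for the first response becomes a 'lose' once responses are swapped
    return {"win": "lose", "lose": "win", "tie": "tie"}[judgment]

def consistency_and_agreement(records):
    """records: (judgment, judgment_after_swap, human_label) triples.
    Consistency = fraction of swap-consistent evaluations; Agreement =
    fraction that are swap-consistent AND match the human label.
    Returns (agreement, consistency). Illustrative sketch only."""
    consistent = [orig == flip(sw) for orig, sw, _ in records]
    agree = [c and orig == human
             for c, (orig, _, human) in zip(consistent, records)]
    n = len(records)
    return sum(agree) / n, sum(consistent) / n

recs = [
    ("win",  "lose", "win"),   # swap-consistent and matches the human label
    ("win",  "win",  "win"),   # flips under swap -> inconsistent
    ("tie",  "tie",  "win"),   # consistent but disagrees with the human label
    ("lose", "win",  "lose"),  # consistent and matches the human label
]
print(consistency_and_agreement(recs))  # (0.5, 0.75)
```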
---
Rebuttal Comment 2.1:
Comment: Hi Reviewer GhxJ, do you think the authors' new response has addressed your concerns? | Summary: They present an approach to improving LM evaluation by having models first generate an evaluation criteria, then a scoring guideline, and then finally a final judgement. They then develop a procedure for collecting training data corresponding to these three steps by applying branching/pruning approach (sample multiple criteria, from each sample multiple guidelines, etc...). They then use the generated data to train a DPO and SFT model. They find that their method outperforms baseline evaluation approaches according to correlation with human judgement on dialogue evaluation.
Strengths: * The problem of improving LM evaluation is important
* The idea of enabling language models to hierarchically sample evaluations (e.g. first criteria, then guideline, then judgement) is very neat, and similarly the idea of applying a tree-based sampling procedure to automatically generate data is quite cool.
* I think they do fairly thorough experiments and compare to quite a few baselines.
Weaknesses: * The paper is honestly pretty hard to follow. There's a lot of moving parts and it's not explained in an easy to digest way.
* The specific method presented seems a little bit ad-hoc, and could be justified better in the paper (e.g. why use criteria, then guideline, then judgement, why not some other sequence of steps?).
* Looking at Figure 1, it doesn't seem that their method improves all that much over the baseline
Nits:
* The related work seems pretty sparse. There's lots of work on improving LM evaluation in math reasoning settings that isn't discussed.
* Figure 4, the text is really small and hard to read.
Technical Quality: 2
Clarity: 2
Questions for Authors: * The description of the growth / pruning part is pretty confusing and in general a better explanation of what exactly is happening there would be really helpful.
* "Agreement quantifies the proportion of evaluations that meet the criteria for swap consistency and align with human judgments.": why both of these criteria need to be met? Isn't the swap part already covered by the consistency evaluation?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: They do a good job of discussing the limitations. I would also note that it is unclear how effective this is when applied to more challenging tasks like mathematical reasoning (e.g. MATH benchmark) as a limitation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for appreciating our idea of transforming the evaluation task into a tree search problem. We hope the following responses can further address your concerns:
> Q: The paper is honestly pretty hard to follow. There's a lot of moving parts and it's not explained in an easy to digest way
1. Preference evaluation is indeed a rapidly developing field. We have detailed the task background, related work, and our contributions in the common response, hoping to alleviate your concerns. Additionally, considering that Reviewer YdUv (Strengths 2) and Reviewer sKgh (Strengths 1) acknowledge that our work is clear and easy to understand, could you please specify which parts are difficult to comprehend? This will help us make better revisions.
> Q: The specific method presented seems a little bit ad-hoc, and could be justified better in the paper
2. We believe our method for decomposing the evaluation task is reasonable, though not the only possible approach. Using a different decomposition method **would not diminish** our contributions: (1) Human evaluations also rely on criteria, scoring guidelines, and judgments. Recent research [1] has highlighted the need for multiple rounds of negotiating these criteria and guidelines for human evaluators. Another study [2] employed an automated evaluation process. There are certainly better ways to decompose evaluation tasks, and we are open to exploring them further. (2) Our primary focus is not on the decomposition method itself, but on how to avoid relying on human supervision and efficiently identify the key evaluation criterion, as discussed in common response 4.
> Q: Looking at Figure 1, it doesn't seem that their method improves all that much over the baseline
3. (1) Our method shows significant performance improvement over the initial model (w/ tie: 55.89 vs. 49.64; w/o tie: 75.76 vs. 57.02). (2) For the AutoJ and Fennec methods, it's important to note that our settings differ. Our experiments are conducted in OOD scenarios, using training sets sampled from large-scale data rather than human-collected data (Figure 4 illustrates the differences in data domains). **Additionally, we have added new experiments in common response 5** to demonstrate the effectiveness of our method. On RewardBench, our model outperforms the 70B LLaMA3 and Prometheus v2, which was trained on 200K evaluation data, while our training data accounts for only 10% of theirs.
> Q: The related work seems pretty sparse.
4. As discussed in the common responses 1 and 2, we will add more related work to address your concerns. It's important to clarify that the challenges in preference evaluation (LLM evaluation) include not only accuracy evaluations like MMLU and Math Reasoning but also **open-ended generation** that requires multiple evaluation dimensions. Our work aims to address multi-dimensional evaluation in complex and diverse user intent scenarios, as mentioned in the abstract (line 4: Particularly, in complex dialogue scenarios involving diverse and intricate user intents). Furthermore, our experiments include *Code* and *Math* test data in Table 2, reflecting improvements in these areas. **Our new experiments on RewardBench show significant performance improvements in reasoning tasks (nearly 10 points, as referenced in the common response)**.
> Q: Figure 4, the text is really small and hard to read.
5. We will add a Table for Figure 4 to explain the percentage of each category. Figure 4 illustrates the color differences in various domains, indicating significant differences in the constructed OOD data domains, which are more challenging.
> Q: The description of the growth / pruning part is pretty confusing and in general a better explanation of what exactly is happening there would be really helpful.
6. We are not sure what caused the confusion; if you could specify, it would help us make better revisions. In fact, we have attempted a clearer explanation. The purpose of constructing an evaluation tree is twofold: (1) to expand the candidate space of the evaluation process, and (2) to create SFT data for different evaluation paths and (chosen, rejected) DPO data. Our method achieves goal (1) through heuristic sampling and temperature sampling (lines 151-154). Goal (2) is accomplished using two consistency-pruning methods, with consistent samples used as SFT data and inconsistent ones as DPO data (lines 158-161).
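To make goal (2) concrete, here is a hedged sketch of how such consistency pruning could split evaluation-tree branches into SFT samples and DPO pairs. The function and field names are ours, not the paper's, and the chosen/rejected rule is one plausible reading of the described procedure:

```python
from collections import Counter

def partition_branches(branches):
    """Split evaluation-tree branches into SFT samples and DPO pairs.

    Each branch carries a 'judgments' list, e.g. the four results obtained
    by sampling twice under both response orderings (names hypothetical).
    """
    sft_data, dpo_data = [], []
    for branch in branches:
        judgments = branch["judgments"]
        if len(set(judgments)) == 1:
            # All judgments agree: a fully consistent branch becomes an SFT sample.
            sft_data.append(branch)
        else:
            # Inconsistent judgments: treat the majority label as "chosen"
            # and one conflicting label as "rejected" for a DPO pair.
            chosen = Counter(judgments).most_common(1)[0][0]
            rejected = next(j for j in judgments if j != chosen)
            dpo_data.append({"chosen": chosen, "rejected": rejected})
    return sft_data, dpo_data
```

A branch whose four judgments all say "win" would land in the SFT pool, while a 3-to-1 split would yield a (win, lose) DPO pair.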
> Q: why both of these criteria need to be met? Isn't the swap part already covered by the consistency evaluation?
7. Agreement (AGR) reflects the consistency between LLM evaluations and human judgments. Consistency (CNS) represents the extent of bias introduced after swapping positions, indicating the stability of the model's evaluations. Please refer to [2].
[1] A Holistic Approach to Undesired Content Detection in the Real World (OpenAI 2023)
[2] Branch-solve-merge improves large language model evaluation and generation (Meta 2023)
[3] Generative judge for evaluating alignment (ICLR 2024)
---
Rebuttal Comment 1.1:
Comment: I appreciate your response. If you could add some more clarity in the paper about how the AutoJ and Fennec settings differ from yours, that would be great.
Regarding the confusing parts: I think it would be less confusing with a clear example of an LM evaluation using your framework somewhere in the main text (e.g. a concrete example in a figure somewhere). Your method has a large description length, which generally goes against my prior, makes me more skeptical that every aspect of the method is necessary, and also muddles the true contribution of the work. I think positional consistency is a pretty random heuristic. I agree that LMs might be sensitive to position and whatnot, but LMs are sensitive in all kinds of ways to prompts, so why focus on just position; you could invent so many other arbitrary heuristics that would probably be just as good and provide no real insight whatsoever. Pair these criticisms with the fact that one of the evaluation metrics is less standard (e.g. consistency) (essentially created to show that this paper's method works), and I would have a hard time accepting the paper as it stands. That being said, I think the RewardBench evaluation you added looks promising, so I am willing to raise by 1 point.
---
Rebuttal 2:
Title: Further Discussion
Comment: Thanks for your review comments!
> If you could add some more clarity in the paper about how the AutoJ and Fennec settings differ from yours:
We outline the settings and core challenges, highlighting the differences:
+ AutoJ's approach evaluates two different AI responses using a one-step evaluation process. The dialogue data in its training dataset was human-created to match the distribution of the test set, as shown in Figure 4. During training, only the SFT method was used, with no DPO training included.
+ Fennec employs a multi-step evaluation, using criteria and scoring guidelines to alleviate the complexity of the evaluation task. Its training also involved human-selected dialogue data and included only SFT training, with no DPO training.
+ In our approach: 1) There is no human selection or labeling cost: both the sampling of dialogue data and the preference labels are fully automated. 2) We use a multi-step evaluation process to reduce the difficulty of complex evaluation tasks. 3) Most importantly, we incorporate DPO optimization, which helps the model achieve better performance with fewer branches, enhancing evaluation effectiveness.
+ Our method demonstrates that even a 7B-parameter evaluation model has significant potential compared to the 70B model used in [1] for evaluation tasks. This advantage is largely due to our task decomposition (similar to chain-of-thought), which breaks the evaluation process into three steps: generating evaluation criteria, scoring guidelines, and final judgment results. These steps and their intermediate outputs simplify the task, making the 7B model more capable in complex scenarios.
> Your method has a large description length, which generally goes against my prior, and makes more skeptical that every aspect of the method is necessary and also muddles the true contribution of the work.
+ If possible, could you please point out the details that might be unclear to you? This will help us explain our work more clearly.
+ We will also include a new Figure to illustrate how the evaluation task is conducted: The evaluation model receives a user query and two AI responses, and it only needs to assign one of three labels (win, lose, or tie) to indicate which AI response is better, or if they are equal.
+ In the Methods section (Section 4), we cover the following: 1) Section 4.1 explains how we collect the dialogue dataset. 2) Section 4.2 shows how we train an initial model. 3) Section 4.3 details how we construct the evaluation tree. 4) Section 4.4 describes how we collect training data based on the evaluation tree. 5) Section 4.5 demonstrates how we train the SFT and DPO models. Among these, only Section 4.2 follows a process similar to other related work. The remaining sections introduce new approaches developed in our work, which may explain some discrepancies with your prior understanding. Even compared to the latest research [1], our work offers novel contributions: we demonstrate that it is possible to improve evaluation-model performance without relying on human-labeled data, and we also showcase the potential of a 7B model, which is popular in both academia and industry.
> I think positional consistency is a pretty random heuristic. I agree that LMs might be sensitive to position and whatnot, but LMs are sensitive in all kinds of ways to prompts, so why focus on just position
+ The issue of positional consistency was not proposed by us; please refer to references [2] and [3].
+ Given that works [3], [4], and [5] all consider positional consistency and use Agreement and Consistency as evaluation metrics, we do not regard this as a flaw in our work.
+ We fully agree with your point that LLMs exhibit more biases beyond positional consistency. However, since positional consistency is the most direct and most easily improvable one, we believe it is necessary to address it during training. In fact, with pairwise evaluation, if consistency significantly decreases when positions are swapped, the evaluator would be unacceptable for downstream applications such as reward models or model corrections.
Considering that both Reviewer YdUv and Reviewer sKgh find our presentation clear, we hope to engage in further discussion to address any remaining concerns you may have.
[1] Self-Taught Evaluators (Meta 2024,8)
[2] Large language models are not fair evaluators
[3] Generative judge for evaluating alignment
[4] PROMETHEUS 2: An Open Source Language Model Specialized in Evaluating Other Language Models
[5] OffsetBias: Leveraging Debiased Data for Tuning Evaluators
---
Rebuttal Comment 2.1:
Comment: Hi Reviewer cKwh, do you think the authors' new response has addressed your concerns? | Summary: The paper investigates how to improve the quality of automated evaluation through fine-tuning (SFT and DPO). The main algorithm proposed by the paper constructs a search tree whose nodes consist of (criterion, scoring guide, judgment) triples. This tree is later pruned and modified, and the different paths serve as fine-tuning data for SFT and DPO.
My current rating is tentative. If the authors can kindly clarify the details of the paper, I'm happy to raise the score.
Strengths: 1. The paper is very clear and easy to read.
2. The investigation is very thorough. Experiment is comprehensive (the in-distribution, out-of-distribution evaluation setup is great).
3. The main claim of the paper is substantiated (I.e., improving efficiency through fine-tuning).
Weaknesses: I don't think this paper has substantial weaknesses.
1. There are some imperfections of text -- mostly just need to be clarified. Missing notation definitions, etc.
2. The performance improvement over Auto-J on AGR is minor (55.13 -> 57.18). In OOD evaluation, Zephyr-7B AGR is 56.75 and GPT-4 is 62.28 (which is close, but not quite). CNS, however, beats GPT-4. It would be very helpful for me to understand a bit more about what CNS is, and whether beating GPT-4 on this metric is meaningful or not (see Q6).
Technical Quality: 4
Clarity: 4
Questions for Authors: I do however have a few questions:
1. It seems that no human preference/judgment label is used. The initial policy is GPT-4. Then the creation of data for SFT and DPO is through the tree construction. Is my understanding correct? If so, this is a type of self-improving/bootstrapping method, which I find quite interesting. It could also have broader impact on other areas of LLM research.
2. Sec 4.2, consistency pruning. I'm not sure I understand self-consistency -- In Figure 3 left most tree, how did you obtain different versions of J1? The figure label says "red dot is judgment result by swapping the response positions" -- isn't it "positional-consistency" if swapping positions is involved? Please clarify.
3. When you collect preference label (Sec 4.4), for Branch Ensemble, how do you create the ensemble?
4. LLM-as-a-Judge. The writing says additional analysis is in Section 5.4. Sec 5.4 talks more about Auto-J performance in different scenarios. Where is the result of using Auto-J as a judge to create labels for DPO?
5. Table 2 Auto-J dagger (second row). What is this!? The caption did not explain what this means. It might be in the text but I couldn't find it.
6. Can you expand on why AGR and CNS are good evaluation metrics? Appendix A.2 only has training details.
7. I appreciate the honesty in Sec 5.7 -- can you offer some explanation on why DPO is unstable for in-distribution training? It's often hard to explain empirical behaviors -- it's perfectly ok if you don't have an explanation.
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your appreciation of our work. We hope the following responses can address your concerns:
> Q: It seems that no human preference/judgment label is used.
1. You are right that we do not use any human labels (see common response 4). As you can see, many research fields and studies lack sufficient resources (time and money) to employ annotators for labeling preference data. However, the APIs from OpenAI or Claude are also very expensive, and we aim to address these problems by providing open-source evaluation datasets and models to contribute to the community.
> Q: how did you obtain different versions of J1?
2. Yes, there are two types of self-consistency here: (1) Consistency after swapping positions (yellow dot J1 and red dot J1) and (2) Consistency due to sampling temperature (tmp>0) leading to different judgments (J1 and J2). Thus, we obtain four evaluation results, and these judgments should be consistent, based on the same criteria and scoring guidelines.
> Q: how do you create the ensemble?
3. The idea of using a model ensemble stems from our findings in Figure 1, which show that increasing the number of branches significantly improves the model's inference performance. Therefore, we increase the number of branches during inference (to about 80) to collect the model's preference labels (including *'win'*, *'lose'*, and *'tie'*). We then use the ensemble results from all branches as the final preference labels. (It is worth noting that the ensemble is highly effective because the classification labels are limited, although we still approach this as a generative task rather than a classification task.)
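A minimal sketch of this branch-ensemble step, assuming each branch emits one of the three labels (the function name is ours, not the paper's):

```python
from collections import Counter

def ensemble_judgment(branch_labels):
    """Take the majority label across all inference branches as the final
    preference label; voting is effective here because only three labels exist."""
    return Counter(branch_labels).most_common(1)[0][0]
```

With roughly 80 branch outputs, the majority vote over {'win', 'lose', 'tie'} serves as the final preference label.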
> Q: Where is the result of using Auto-J as a judge to create labels for DPO?
4. The results in "w/ GPT-4" are those evaluated using GPT-4 for preference evaluation (rather than dialogue evaluation) and used for training. However, the improvement may not be significant, as shown in Sec 5.4. We suspect this is because the preference evaluation task is also challenging for GPT-4, further demonstrating the importance of the area we are exploring.
> Q: Auto-J dagger (second row). What is this!?
5. In line 218, we explain the meaning of the dagger (it represents our reproduced results). We will also clarify this in Table 2 to ensure a clearer description.
> Q: Can you expand on why AGR and CNS are good evaluation metrics?
6. The purpose of preference evaluation is to measure alignment with human preferences, evaluating whether different AI responses better reflect human behavior or values. Agreement (AGR) [1] reflects the consistency between LLM evaluations and human judgments, serving as a direct metric. Consistency (CNS) represents the extent of bias introduced after swapping positions, indicating the stability of the model's evaluations. These are relatively good metrics until more precise and fine-grained ones are developed.
- To address your observation that *"the performance improvement on AGR is minor, compared to GPT-4"*:
Firstly, it's important to note that **most responses can be judged as either "win" or "lose," while a "tie" typically indicates that an effective discriminative criterion has not been identified (even with human evaluations).**
If a more suitable criterion or strict scoring guidelines were applied, many of these "tie" labels could be classified as either "win" or "lose."
From this perspective, our method effectively identifies more discriminative criteria and scoring guidelines. This allows cases that human evaluators might label as "tie" to be classified as either "win" or "lose." Consequently, this leads to a more noticeable AGR improvement in scenarios "w/o tie" labels. This directly supports our claims and demonstrates the effectiveness of our method.
- *"why CNS however is beating GPT-4?"*
Our evaluation models mitigate position bias by training on data where response positions are swapped. In our DPO training, the consistency of judgments after swapping response positions is included in our training objectives and datasets (lines 158-161).
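One plausible formalization of the two metrics discussed in this response (our own sketch; the paper's exact definitions may differ): AGR is the match rate against human labels, and CNS checks that a judgment is preserved after swapping the two response positions, where a 'win' under one ordering should become a 'lose' under the swapped ordering, and 'tie' stays 'tie'.

```python
def agreement(model_labels, human_labels):
    # AGR: fraction of examples where the model's label matches the human label.
    return sum(m == h for m, h in zip(model_labels, human_labels)) / len(human_labels)

def consistency(labels, swapped_labels):
    # CNS: fraction of examples whose judgment is preserved after swapping
    # the two response positions ('win' <-> 'lose', 'tie' unchanged).
    invert = {"win": "lose", "lose": "win", "tie": "tie"}
    return sum(invert[a] == b for a, b in zip(labels, swapped_labels)) / len(labels)
```

Under this reading, an evaluator with perfect CNS gives the same effective preference regardless of response order, which is why training on position-swapped data can raise CNS independently of AGR.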
> Q: can you offer some explanation on why DPO is unstable for in-distribution training?
7. We are very happy to discuss this issue, particularly as it may be related to the "curse of recursion" [2]:
- First, there is a key difference between in-distribution (ID) settings and other (OOD and transfer) settings: the training data in the ID setting consists of high-quality, well-defined domain data collected by humans. Thus, in our experiments, both the SFT and DPO processes use the same dialogue data. In other words, we iteratively synthesized new training data, and the queries for these data are the same. In contrast, new data is used in the OOD and transfer settings (as described in the Experiments and Appendix).
- Recent studies on iterative data synthesis [2] have found that continuously training on synthetic data can lead to model degeneration, because the model tends to converge to a single-mode distribution.
- This phenomenon aligns with our findings in the paper. In the OOD and transfer settings, where new training data is used, training remains stable. The new data typically constrains the model training process, helping it maintain a diverse distribution and preventing it from converging to a single mode.
- Therefore, we recommend using dialogue data with different distributions at the SFT or DPO stages to prevent the "curse of recursion."
We also provide a more detailed background and related work to clarify our work, as well as clearer contributions and new experimental results. Please refer to the common response (Author Rebuttal). We hope this can further address your concerns.
[1] Generative judge for evaluating alignment (ICLR 2024)
[2] AI models collapse when trained on recursively generated data (Nature)
---
Rebuttal Comment 1.1:
Comment: Hi Reviewer sKgh, do you have any comments regarding the authors' rebuttal? | Rebuttal 1:
Rebuttal: We are grateful to Reviewer cKwh, GhxJ, and RFGW for appreciating the novelty and interest of our approach. We also appreciate the acknowledgment of our writing and presentation by Reviewer YdUv and sKgh.
We need to make the following clarifications: how our research differs from other work (1 and 2); the specific research area our work focuses on (3); the contributions of our work (4); and the new experimental results (5).
1. First, we need to clarify that "preference evaluation" has become one of the most rapidly developing research areas since the advent of LLMs. As noted in the review, there is still relatively little related work in this emerging field. Unlike previous evaluation benchmarks such as MMLU, GSM8K, and many other automated evaluation tasks that contain golden answers to evaluate accuracy, preference evaluation tasks have the following characteristics:
+ These tasks **do not have golden answers**, such as open-domain dialogue or summarization tasks, which largely rely on human evaluation. For instance, in [1], human evaluation scores improved, but automated metrics did not.
+ These tasks **have multiple evaluation dimensions (criteria)** and cannot rely solely on accuracy metrics. For example, in coding and math problems, the *reasoning steps*, *logic*, and *readability of the format* are often more important than just getting the correct answer, as reported in [2].
+ The primary importance of these tasks is their alignment with human preferences, which is a crucial aspect of current alignment training. Therefore, methods such as PandaLM [3], AutoJ [4], Prometheus [5], and BSM [6] aim to enhance consistency with human preferences, providing solid baselines and related work for our research.
Additionally, even in scenarios where correctness is the primary metric, we often need to rank AI responses that are both incorrect, determining which one is better (i.e., requiring a more fine-grained score rather than simply correct or incorrect). Therefore, preference evaluation serves as a valuable supplement to the existing benchmarks.
2. We have indeed discussed and defined "preference evaluation" (lines 3-5) and its distinction from other evaluation methods (lines 23-32) in our work. Additionally, we have discussed the differences between automated metrics (e.g., ROUGE) and LLM evaluations. As previously mentioned, benchmarks like MMLU rely solely on automated metric evaluations and do not encompass multi-dimensional, human-aligned preference evaluations. To provide a clearer and more comprehensive distinction of this task, **we promise to include more relevant references (such as MMLU, MMLU-Pro, MixEval) in the introduction and related work sections**.
3. In our paper, terms like "dialogue evaluation" and "LLM evaluation" might cause confusion. We actually focus on "preference evaluation using LLMs in open-domain dialogue tasks," and we will keep the terminology consistent in the next version.
4. Our goal is to verify **whether we can enhance LLM evaluation capabilities without introducing human-supervised signals**. Our contributions are as follows:
+ Our methods and experiments support this idea: (1) training data distributions are automatically constructed (Section 4.1); (2) the SFT and DPO training data (preference labels) are synthesized by LLMs (Sections 4.3 and 4.4), with the only external signal being the initial model data generated by GPT-4. Thus, compared to the scenario in Figure 1, our exploration is more realistic and challenging.
+ We are the first to model and optimize "how to find appropriate evaluation criteria," significantly reducing the number of branches. This is supported by our observation that **increasing the number of model branches (up to 40 branches) yields better judgment results** (Figure 1 and lines 49-54). Our method optimizes the criteria space, allowing the model to achieve high performance with evaluations involving just 1-3 branches.
+ Scoring preference data for preference evaluation is currently challenging (this refers to evaluation preferences, not just preferences between two dialogues). We discovered that **branch ensembling can provide a better judgment upper limit (though not a golden label)**. Using this approach, we are also, to our knowledge, the first to introduce an evaluation tree and DPO training into the training of evaluation models.
5. We further validated the effectiveness of our method on RewardBench [8], which is very popular in the community for verifying preference alignment. We collected an additional 10K dialogues, resulting in a total of 16K SFT data from the paper and 6K unlabeled data for DPO training. This is still small compared to the 200K training data of Prometheus v2 [7]. We observed that our method significantly improves model performance (76.1 vs. 72.1), especially on reasoning tasks, with nearly a 10-point increase.
| |Score|Chat|Chat Hard|Safety|Reasoning|
|--|--|--|--|--|--|
|GPT4| 85.9|95.3|74.3| 87.2 | 86.9|
|**Ours (DPO)** |76.1| 95.7|51.4|83.5| 73.6|
| Llama3-70B-Instruct|76.0| 97.6 |58.9| 9.2|78.5|
| Claude3-sonnet-0229|75.7|93.4|56.6 |83.7|69.1|
| Prometheus-8X7B-v2.0 |75.3|93.0|47.1|83.5|77.4 |
| Prometheus-7B-v2.0|72.4| 85.5| 49.1| 78.7| 76.5|
| **Ours (SFT)** |72.1| 93.6 | 51.6| 82.7|60.3 |
| Llama3-8B-Instruct | 64.8| 85.5| 41.6|67.5|64.8|
**References:**
[1] Learning to summarize from human feedback (NeurIPS 2020)
[2] Prover-Verifier Games Improve Legibility of LLM Outputs (OpenAI)
[3] PandaLM: An automatic evaluation benchmark for LLM instruction tuning optimization (ICLR 2024)
[4] Generative judge for evaluating alignment (ICLR 2024)
[5] Prometheus: Inducing fine-grained evaluation capability in language models (ICLR 2024)
[6] Branch-solve-merge improves large language model evaluation and generation (Meta 2023)
[7] Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models
[8] RewardBench: Evaluating Reward Models for Language Modeling (Allen AI 2024) | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: In this work, the authors propose a tree-based data sampling method to conceptualize the evaluation process as a decision tree, where each node represents an evaluation action, and each path from the root to a leaf node represents a trajectory of evaluation reasoning. The proposed method involves generating supervised data and preference pairs derived from the evaluation tree for SFT and DPO training. This approach aims to reduce the dependency on labeled data and improve the performance of the evaluation model across in-distribution, out-of-distribution, and transfer evaluation settings. Experimental results demonstrate that the proposed model can enhance evaluation efficiency and performance.
Strengths: 1. The proposed method reduces the dependency on human-labeled data by generating supervised data and preference pairs from the evaluation tree.
2. The paper is well-written.
Weaknesses: 1. Potential Biases --- The initial multi-branch training data is generated using only GPT-4, which could introduce bias into the training data. Moreover, the branch ensemble method could also introduce bias. If the training data is biased or unrepresentative, the model's evaluations may also be biased. The authors should consider labeling a small annotation set to validate the branch ensemble approach.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. In table 1, what is the number of branches generated by Initial, SFT, and DPO for evaluation?
Typos:
1. Line 36: single quote format error
2. Line 256: "" -> ``''
3. Figure 4: font too small, hard to read
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, limitation discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for your review comments, as well as your appreciation of the importance and writing quality of our work.
**For Concerns:**
> Q: Potential Biases
We believe that incorporating synthetic data is essential for the future development of LLMs. Of course, reducing bias is also a crucial issue (points 1 and 2), and while we can attempt to mitigate the harm caused by biases, we cannot completely eliminate them (point 3). In our experiments and settings, the impact of bias is relatively small because our evaluations align with human evaluation results (point 4).
1. The use of synthetic data has become indispensable in both research and industry, enhancing performance [1, 2, 3] and supporting theoretical analysis [4]. OpenAI's GPT-4, one of the most widely used models today, excels in both safety and bias mitigation, having undergone extensive user testing, with no comparable open- or closed-source alternatives.
2. One of our key contributions is exploring how to use LLMs to make improvements **without relying on human supervision (line 39)**. Under this condition, we did not introduce additional high-quality human-supervised data. Of course, we believe that incorporating such data could further enhance model performance and reduce bias.
3. In fact, both humans and LLMs inherently exhibit bias (though not all biases affect practical use). Addressing how to mitigate bias should be part of a comprehensive discussion, and we will further explore the impact of data bias on evaluation in future work. "Labeling a small annotation set to validate the branch ensemble approach" cannot be effectively done until we clearly understand the types of biases introduced.
4. Given that our experimental metric is "agreement and consistency with human preferences," our results demonstrate that our method achieves judgments more aligned with human evaluations (as our experimental evaluations are based on human evaluation results). This indicates that our approach does not introduce significant bias. If any bias exists, it likely aligns more closely with human preferences rather than originating from synthetic data.
**For Questions:**
> Q: what is the number of branches generated by Initial, SFT, and DPO for evaluation?
1. The numbers in the third column of Table 1 (Branch) indicate the number of branches used in inference or evaluation. (For obtaining training data, we sample 10 criteria, each with 2 scoring guidelines, and each score includes 4 judgments, resulting in a total of 80 = 10 * 2 * 4 branches.)
We have also outlined the background and significance of current task, along with our contributions, in the common response (Author Rebuttal), which we hope addresses your other concerns.
[1] LLM Critics Help Catch LLM Bugs (OpenAI 2024)
[2] The Llama 3 Herd of Models (Meta 2024)
[3] Phi-3 Technical Report: A Highly Capable Language Model Locally on Your Phone (MicroSoft 2024)
[4] Physics of Language Models: Part 3.3, Knowledge Capacity Scaling Laws (ICML 2024)
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for the responses. I will keep the current scores that I believe reflects my overall assessment of the paper. | null | null | null | null | null | null |
Is Value Learning Really the Main Bottleneck in Offline RL? | Accept (poster) | Summary: The paper presents an empirical analysis to determine the main challenge in offline RL for control among value function learning, policy extraction, and policy generalization to test-time states. With various deep learning-based experiments, it reaches the conclusion that policy extraction and policy generalization are the main bottlenecks instead of value function learning.
Strengths: 1. The paper is systematic on specifically isolating the three components such as by using decoupled RL algorithms (having value function learning and policy learning phase separately). This is important to avoid confounding factors.
2. The paper covers results across various data dimensions such as coverage, sub-optimality, and amount.
3. The paper has clear takeaways, which are helpful in determining actionable advice.
4. The paper provides an alternative direction (e.g. better policy extraction algorithms instead of value function learning algorithms) for researchers to pursue in trying to improve offline RL.
Weaknesses: 1. It is a bit concerning that the main results in Figures 1, 2, 6 are over only 4 seeds. In my experience, offline RL algorithms can be quite fragile especially with so much variation in coverage, dataset amount etc, that 4 seems quite limited. Given that most of the environments are not image-based, more than 4 seeds seems reasonable.
2. Related to above, the same figures do not report any information on variance of performance, which makes it difficult to determine if the reported means have been accurately reported. I would suggest reviewing some guidelines [1]
3. While the takeaways are nice, the claim to “always use” (above Section 5) is too strong, in my opinion.
4. In section 4.4, it's unclear if the first reason is a good explanation for better performance. While the actions of DDPG + BC are more extreme, I don't think extreme actions are what we should be aiming for. The experiments are presented as though these extreme actions lead to good performance (even if this was not the intention). While it may be true in the evaluated domains, it is unclear if this can be a general finding.
I will be willing to increase the score if the issues above are addressed, especially 1 and 2.
[1] Empirical Design in Reinforcement Learning. Patterson et al. 2023.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. There seems like there is a relation between DDPG + BC’s focus on limited exploration (some improvement but staying close to behavior policy) and test-time generalization. More specifically, by encouraging a policy to be close to the behavior policy, the test-time generalization should naturally be better since it will not deviate much from those states. Section 4 does not comment on this relation, what are the authors thoughts on this?
2. Regarding the limitations mentioned at the end, the authors may be interested in this paper [1] as well which discusses how some value-based objectives may be unreliable for performance. [1] does not give an suggestions, but does report a similar finding for another use-case.
[1] Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error. Fujimoto et al. 2022.
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Yes the authors report on the limitations in Appendix A.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and constructive feedback about this work. We especially appreciate the reviewer's feedback about statistical significance and the clarity of our claims. Following the reviewer’s suggestion, we have added more seeds and variance metrics to improve the statistical significance of the results. We believe these changes have strengthened the paper substantially. Please find our detailed responses below.
---
* **“It is a bit concerning that the main results in Figures 1, 2, 6 are over only 4 seeds.” / “the same figures do not report any information on variance of performance”**
Thanks for raising this point. Following the suggestion, we’ve added 4 more seeds to Figures 1, 2, and 6 by adding **7744 more runs** (now we have **8 seeds** in total for every experiment in the paper) and have reported **standard deviation metrics** in our data-scaling matrices. Please find the improved results in **Figure 1 in the additional 1-page PDF**. Our conclusions remain the same, and we have updated these results in the current version of the paper.
* **About extreme actions**
We fully agree with the reviewer that extreme actions do not necessarily mean they are more optimal. Our original intention was to show that AWR has *limited expressivity* compared to DDPG+BC, because AWR actions *always* lie in the convex hull of the dataset actions (at least in principle), while DDPG+BC can go outside of it. This resulted in more extreme actions for DDPG+BC (Figure 3), but this is not to say that extreme actions are better. We apologize for the confusion and we will clarify this point in the draft to prevent such potential confusion.
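To make the expressivity argument concrete, the two extraction objectives under discussion can be sketched as follows (one common parameterization; exact coefficients and forms vary across implementations):

$$\max_\pi \;\; \mathbb{E}_{(s,a)\sim\mathcal{D}}\big[\exp(\alpha\, A(s,a))\,\log \pi(a \mid s)\big] \qquad \text{(AWR)}$$

$$\max_\pi \;\; \mathbb{E}_{s\sim\mathcal{D}}\big[Q(s, \mu_\pi(s))\big] \;+\; \beta\,\mathbb{E}_{(s,a)\sim\mathcal{D}}\big[\log \pi(a \mid s)\big] \qquad \text{(DDPG+BC)}$$

AWR only reweights the likelihood of actions that appear in the dataset, so (for a Gaussian policy) its optimal mean at each state is a weighted average of dataset actions and thus stays within their convex hull; DDPG+BC differentiates $Q$ with respect to the policy's own action $\mu_\pi(s)$, which can move outside that hull.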
* **“the claim to “always use” (above Section 5) is too strong”**
Thanks for the suggestion. We initially used the term “always” because we found that DDPG+BC is better than or as good as AWR in most of the cases (15 out of 16 settings — please refer to **Table 1 in the new 1-page PDF**). However, following the suggestion, we have toned down the takeaway by removing the word “always” in the current draft.
* **“DDPG+BC’s focus on limited exploration” / “by encouraging a policy to be close to the behavior policy, the test-time generalization should naturally be better since it will not deviate much from those states”**
Thanks for the great question. We also believe that, in general, the higher the BC coefficient in DDPG+BC is, the better it can prevent encountering out-of-distribution states at test time. (We’d also like to note that this also applies to AWR since it has a temperature hyperparameter of a similar role.) However, in practice, due to function approximation or imperfect policy learning, the agent would almost always visit out-of-distribution states, even when there’s *only* the BC term, and thus the generalizability of the policy can still significantly affect performance even in this case. We provide one example of such evidence in Appendix C. That said, as the reviewer mentioned, we expect that a small BC coefficient would incur even further challenges in generalization.
* **About the Fujimoto et al. paper**
Thanks for the pointer! We think this paper indeed points out similar limitations of the Bellman error. We will cite and discuss this paper.
We would like to thank the reviewer again for raising important points about statistical significance and clarity, and please let us know if there are any additional concerns or questions.
---
Rebuttal Comment 1.1:
Comment: Thanks to the authors for addressing my concerns and running more trials! I have updated my score.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: We would like to thank the reviewer for appreciating our changes and adjusting the score accordingly. We believe these updates and clarifications have indeed strengthened the paper. | Summary: This paper attempts to understand the relative importance of policy learning and value learning in offline reinforcement learning. The analysis is broken into two parts: (1) when decoupling the policy and value learning steps the authors test the relative data efficiency of the two steps, and (2) the authors test how well the policy generalizes at test time. Experiments are conducted on a wide variety of domains and the authors argue that they show that policy learning is often the main bottleneck of offline RL.
Strengths: 1. The high level experimental methodology of comparing the relative importance of policy learning and value learning by varying the dataset size in decoupled algorithms is an interesting idea. It could be useful to guide future work to know if there is more upside to improving on policy learning or value learning.
2. The result that IQL+DDPG outperforms IQL+AWR (which was used in the original IQL paper) is an interesting standalone result.
3. While somewhat preliminary and cursory in the paper, the idea of OPEX and TTT to update the policy at test time with a fixed value is interesting as a way to show that the value function has more information in it than the policy extracts.
Weaknesses: 1. The results are very heuristic and often unclear. The definitions of "policy bound" and "value bound" in terms of color gradients is vague and not very informative. Visually, it is hard to get much out of the figures which just throw a ton of data at the reader without a very intuitive way to interpret it. This could maybe be resolved by creating a more clear single metric that indicates the tradeoffs or some summary statistics that show aggregate trends averaging across methods or datasets. Currently, the results are pretty hard to parse and not very convincing as a result.
2. The empirical results are not as conclusive as the analysis/text suggests. For example, the main claim of section 4 that "policy learning is often the main bottleneck of offline RL" does not seem to be the right takeaway or emphasis of the results. Instead the results in both figure 1 and 2 indicate that sometimes policy learning is the bottleneck and sometimes value learning is the bottleneck.
3. The methodology in Section 5 is not clear. It seems that the authors run offline-to-online IQL (line 304), but this would be using AWR, which the previous section suggests not to do. Moreover, the online algorithm updates not just the policy, but also the value. The paper does not consider the generalization of the value function, but only the policy. Perhaps a cleaner way to test interesting out-of-distribution generalization would be to consider the state distribution of the optimal policy? This would of course not test near-optimal states (so maybe some noise could be added, as in later experiments), but could be conceptually cleaner than fully changing the setting to online.
4. It is not clear why there is such a focus on the generalization of the policies and not the values. They seem to both matter, especially for things like OPEX or TTT to work. It would be interesting to see how well the value functions are generalizing as well as the policies, given the paper's main claim of comparing the relative importance of these two steps.
5. In general, the paper tries to cram in too many semi-related results. I would encourage the authors to focus the story a bit more clearly and maybe split some parts into the appendix or into a separate paper.
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes, but could do more to address how the results are not always clear cut.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and constructive feedback about this work. It appears that we were not entirely clear about our main messages in the initial draft, which we believe may have caused some confusion about our claims. Below we describe how we have revised our paper to prevent potential confusion and have added new results about aggregation metrics and generalization analysis.
---
* **The results are not as conclusive as the paper suggests; they indicate that sometimes policy learning is the bottleneck and sometimes value learning is the bottleneck.**
Thank you for raising this valid point. We believe our initial draft was not entirely clear about our analysis framework. We first would like to clarify that there are *two* types of information we can obtain from the data-scaling matrices (Figure 1).
- (1) By looking at each **individual matrix**, we can see scaling behaviors with the amounts of value and policy data in that *specific* setting.
- (2) By comparing different matrices from **different value/policy algorithms**, we can understand the effect of each algorithmic component.
As the reviewer pointed out, if we look at individual matrices (the first perspective), some matrices are value-bounded (i.e., the performances are more affected by the amount of value data) and others are policy-bounded, and the results appear to be less clear and problem-dependent. **However, if we compare different algorithms (the second perspective), we can observe a much clearer trend in our results:** namely, (1) the choice of a policy extraction algorithm often affects performance more than the choice of a value learning algorithm (except antmaze-large), and (2) DDPG+BC is almost always better than AWR. Please refer to our **global response** for aggregation metrics that quantitatively support these observations.
We then find the reason behind this difference between AWR and DDPG+BC **by now looking at individual matrices** (Section 4.3). The gc-antmaze result in Figure 2 shows a clear difference between the two algorithms: with AWR, an increase in value data doesn’t necessarily translate to performance improvement, unlike with DDPG+BC. This suggests that AWR, one of the most widely used policy extraction algorithms, imposes a **“bottleneck”** that inhibits the full use of the learned value function (the “Takeaway” box in Section 4). In this sense, we argue that policy extraction is often the main bottleneck in current offline RL algorithms (esp. any method that involves weighted/filtered BC, including the recent scaling work [4]). Please note that this is different from saying that every *individual* data-scaling matrix is policy-bounded (which is indeed not true).
We believe our initial draft was not very clear about these points, and will revise the manuscript to clearly convey these findings.
* **“the results are pretty hard to parse” / “Lack of clear aggregation metrics”**
Thank you for the helpful suggestion. Following the suggestion, we have added two types of quantitative aggregation metrics. Please refer to our **global response** for the details.
* **“It is not clear why there is such focus on the generalization of the policies and not the values.”**
Thanks for raising this question. First, we would like to note that our main focus in the second analysis (Section 5) is **not** on comparing value generalization and policy generalization (unlike our first analysis); our main point is rather to show that *test-time generalization* (regardless of policy or value) is one of the important yet overlooked issues in offline RL. We used the phrase “policy generalization” in the paper because the policy is what is deployed at test time, but we didn’t intend to mean that policy generalization is necessarily more important than value generalization. Instead, we wanted to show how current offline RL algorithms are often *already* great at learning near-optimal actions on in-distribution states, and how bad they can be on test-time out-of-distribution states. This observation highlights an important (but often overlooked) open question in offline RL, which is very different from the previous main focus of "value learning" research on pessimism and conservatism. We apologize for this potential confusion (which we think stems from some phrases in Sections 1, 3, and 5), and will revise the paper to make this point very clear (e.g., we will use just “generalization”, not “policy generalization”, whenever it’s more appropriate).
That said, we do have some empirical results that allow us to compare policy and value generalization, and we discuss these aspects in the **global response** above in case it’s of interest to the reviewer.
* **The use of IQL+AWR in our offline-to-online RL experiment / MSE under** $d^{\pi^*}$
Thanks for the suggestion. Following the suggestion, we have repeated the same experiment with IQL+DDPG+BC and additionally measured the MSE under $d^{\pi^*}$. We obtained similar results and our conclusion remains the same. Please refer to our **global response** for the details.
* **“In general, the paper tries to cram in too many semi-related results.”**
Our primary motivation in this work is to understand the best way to improve the performance of current offline RL. To this end, we believe conducting a **holistic** analysis of different elements that affect the performance of offline RL is necessary, and thus we carried out such analyses (across value learning, policy extraction, and generalization) in this paper. That being said, to provide more coherent insights, we have substantially revised several subsections and paragraphs in the current version, which we hope helps address this concern.
We would like to thank the reviewer again for raising important points about our claims, which substantially helped strengthen the paper. **Please let us know if we have addressed the reviewer’s concerns and if so, we would be grateful if you are willing to upgrade the score.**
---
Rebuttal Comment 1.1:
Comment: Thanks for the thorough response and additional experiments/plots.
- Indeed, aggregating the metrics does provide significantly more evidence for the main claim of the paper. I would highly recommend making these the main results figures and using the huge block of heatmaps as supporting evidence in the appendix. One other small suggestion would be to add IQM metrics that aggregate the value learning methods *only* across the best policy extraction method per-environment, and the policy learning methods *only* across the best value learning method. While this is using a sort of "oracle" for the other part of the algorithm, it would be nice to see aggregates that are not biased by averaging in the worst choices for the other half of the algorithm.
- And ok, I see that you want the second half of the paper to be about generalization of both policy and value in some sense. If this is true, I would suggest reframing the title/abstract/intro to reflect that this is a distinct point from the policy vs. value comparison. Or, as suggested in the initial review, maybe this makes more sense as two separate papers; the connection is still not super clear to me.
I will increase my score to a 6 to reflect the clearer results and framing, but still with some uncertainty since the proposed changes are substantial and the two halves of the paper do not seem to quite hang together yet.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thanks for engaging in the discussion and adjusting the score accordingly. We are grateful that the reviewer appreciates the changes, and we will update the paper to incorporate the proposed changes and additional results.
With regards to the comment that the "two halves of the paper do not seem to quite hang together", we would like to articulate more clearly our rationale for studying policy extraction and generalization together in this paper. Akin to how any ML algorithm faces two problems, optimization and generalization (e.g., an ERM bound decomposes into two terms: one focusing on optimization and the other on generalization), the performance of any RL algorithm is also affected by the efficacy of policy and value optimization, and the corresponding generalization. If the policy and value functions were optimized perfectly and could generalize perfectly as well, then that offline RL algorithm would attain perfect performance. Conversely, if any of these components falls short, so does the performance. Therefore, to be able to perform a holistic analysis of offline RL challenges, we separated them into (1) value and policy learning, and (2) generalization.
While approaches to analyze and improve policy extraction, value learning, and generalization might look distinct from each other, as the reviewer mentioned, we would like to note that our main goal is to exhaustively highlight the challenges/bottlenecks in an existing research area. This is analogous to how several prior analysis papers in RL also present a diverse set of challenges and propose methods that might appear disconnected: for example, Fu et al. (2019) [1] studied sampling errors (high UTD), replay buffers, function approximation, and sampling schemes all in one paper, without much connection to each other necessarily, and have influenced many follow-up works on individual topics such as Q-function divergence, replay buffer studies, and sampling distributions; Lyle et al. (2023) [2] studied the plasticity loss phenomenon from a variety of perspectives, from supervised learning to RL and from optimizers to metrics and solutions, and these various insights have motivated a range of subsequent works and solutions. Likewise, we hope that our analysis results, across the three bottlenecks in offline RL, motivate future work on techniques to solve each of these challenges, perhaps from the starting points shown in various sections of our paper, resulting in more advanced and effective offline RL algorithms.
Once again, we would like to thank the reviewer for the suggestions, which we believe significantly helped improve the quality of the paper.
[1] Fu et al., Diagnosing bottlenecks in Deep Q-Learning Algorithms. ICML 2019. \
[2] Lyle et al., Understanding plasticity in neural networks, ICML 2023. | Summary: This paper empirically analyzes the bottlenecks in offline RL from three aspects: value learning, policy extraction, and policy generalization at evaluation time. Through the empirical evaluation, two observations were made: 1) the policy extraction algorithms affect the performance of offline RL significantly, often more than its underlying value learning objective. 2) the sub-optimal performance of offline RL agents is mainly attributed to a lack of generalization of the unseen state during evaluation instead of accuracy in the training distribution.
Strengths: **Originality**: Good. This paper tries to analyze the bottleneck of offline RL methods and includes some novel and interesting observations that were not systematically discussed before.
**Clarity**: Good. The paper is well-structured and easy to follow. The takeaway after the empirical analysis helps the reader understand the experiment's results.
**Significance**: This is important. The reason behind offline RL's underperformance is an important topic for the future development of offline RL methods.
**Technical Accuracy**:
1. Thorough evaluation with a decent amount of experiments
2. The experiment designs are motivated and backed by hypotheses
Weaknesses: 1. There is a lack of variance measures, such as standard deviation (std) or confidence intervals (CI), for the data-scaling matrices in Figures 1, 2, and 6, making the numerical results less plausible.
2. For some experiments in empirical analysis 1, an increase in data leads to a decrease in performance. For instance, in Figure 1 (gc-antmaze-large), both IQL+AWR and IQL+DDPG show that the 10-10 configuration performs worse than the 1-10 and 10-1 configurations, which seems to contradict the assumption that more data leads to better value/policy approximation. These observations lack an adequate explanation.
3. I am concerned that one of the main questions (as outlined in the title as well), “is value function learning the main bottleneck of offline RL”, is not sufficiently examined in the empirical experiments. While the amount of data used for training can be an implicit indicator of how well the value function is trained, it does not directly tell us the distribution of approximation errors (e.g., overestimation). Overestimation could still be a significant issue in offline reinforcement learning. It would strengthen the argument if a direct comparison between the predicted value and the true value could be conducted, as in [1] and [2].
[1] Van Hasselt, H., Guez, A. and Silver, D., 2016, March. Deep reinforcement learning with double q-learning. In *Proceedings of the AAAI conference on artificial intelligence* (Vol. 30, No. 1).
[2] Fujimoto, S., Hoof, H. and Meger, D., 2018, July. Addressing function approximation error in actor-critic methods. In *International conference on machine learning* (pp. 1587-1596). PMLR.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. For empirical analysis 2, I am wondering if the online data mainly improves value learning, policy learning, or both of them. The current evaluation only shows the improvement of the evaluation MSE in actions, but I'm wondering if the Q-value estimation loss would follow the same pattern; maybe the authors could provide more insights on this?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations were not discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and constructive feedback about this work. We especially appreciate the reviewer's feedback about statistical significance as well as the question on our claim about value learning. Following the reviewer’s suggestion, we have added variance metrics as well as four more seeds (8 seeds in total) to improve the statistical significance of our results. We believe these changes have strengthened the paper substantially. Please find our detailed answers below.
---
* **“lack of variance measures”**
Thanks for raising this point. Following the suggestion, we have added standard deviations to our data-scaling matrices, and report them in **Figure 1 in the additional 1-page PDF**. We have also updated our paper accordingly.
* **“For some experiments in empirical analysis 1, an increase in data leads to a decrease in performance.”**
As the reviewer mentioned, there are some cases where an increase in data doesn’t necessarily lead to improved performance. We believe this is mainly due to statistical errors. To address this concern, (1) we’ve added 4 more seeds to our data-scaling matrices by adding **7744 more runs** (now we have **8 seeds** in total for every experiment in the paper), and (2) we have added variance metrics to the table. Please find them in **Figure 1 in the new 1-page PDF**. There are still some rare cases where an increase in data doesn't lead to performance improvement, but the new standard deviation metrics tell us that this is likely because of statistical noise. We believe that these changes have significantly strengthened the statistical significance of our results.
* **About the title / overestimation in value learning**
We agree that overestimation is an important issue in offline value learning in general, and would like to clarify the scope of our claims. Concretely, we would like to clarify that we’re *not* suggesting that value learning is less or not important compared to policy learning in general. Rather, our claim is that, (due to recent advancements in offline value function training) *current* state-of-the-art offline RL methods often tend to be at the point where improvements in policy extraction and generalization can lead to larger improvements in performance; in this sense, we claim that policy extraction and generalization are often the main “bottlenecks” in current offline RL, hence the title. As we show in our experiments (Figures 1 and 2), we are at the point where improvements in value functions (by adding more data) often do not translate to policy performance if the policy learning algorithm is not chosen appropriately. Hence, we think it is useful to concretely highlight and emphasize policy learning as a bottleneck at this point since the community has focused on value learning substantially. Thanks for asking this question, and we have revised our paper (Intro and Section 3) to make this point clearer.
* **“For empirical analysis 2, I am wondering if the online data mainly improves value learning, policy learning, or both of them.”**
Thanks for the interesting question. In this paper, we mainly measure the accuracy of the policy because the policy is what is deployed at test time. We expect additional online data would improve both the value and the policy in general, but it might be a bit challenging to faithfully measure the accuracy of the learned value function (as opposed to policy accuracy), given that (1) the Bellman loss often does not necessarily correlate with the actual performance [3] and (2) offline value functions (especially with techniques for mitigating overestimation like IQL/CQL) often involve pessimism/conservatism, which makes it difficult to directly compare against the actual ground-truth Q value. While we do have a comparison between the value and the policy in terms of data-scaling in the paper, we leave dissecting the effect of each component for *generalization* (with additional online data) for future work.
We would like to thank the reviewer again for raising important points about statistical significance and the scope of our claims, and please let us know if there are any additional concerns or questions.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for their detailed responses and extra experiments. The statistical measurements make the results more convincing. I will keep my accepting rating.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: We would like to thank the reviewer for appreciating our changes. We believe these updates and clarifications have indeed strengthened the paper. | Summary: This paper addresses the question of why offline RL often underperforms imitation learning. They formalize the question they choose to ask, " is the bottleneck in learning the value function, the policy, or something else? What is the best way to improve performance given the bottleneck?", and provide three potential explanations: imperfect value function estimation, imperfect, policy extraction from value function, and imperfect policy generalization to novel states during evaluation. They next perform a series of experiments to test each hypothesis, and conclude two main reasons for offline RL's underperformance: policy extraction from value functions, and test-time policy generalization. They use these observations to highlight important takeaways for RL practitioner and RL researchers.
Strengths: I think that this is an exceptionally well-written paper. They make it very clear what their hypotheses are, how they test for each, and what the reader should take away from each subsection.
I further think that this paper addresses important questions in offline RL, and brings to light important directions for future work, which is a valuable contribution.
The full experimental details as well as code are provided, which is very useful for the community in reproducing the results and building off of this work.
Weaknesses: At a high level, there are not many weaknesses I can name in this work. One is perhaps that their contributions are mainly empirical observations, and it would be nice to support these with theoretical results (even in very simple settings full of necessary assumptions), but I believe that even without theory this paper is very strong.
Technical Quality: 4
Clarity: 4
Questions for Authors: From what I can tell, most of the empirical results are in environments with continuous action spaces. Do you expect the results to carry over to environments with discrete action spaces?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors discuss and address limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the positive review and constructive feedback about this work! We especially appreciate the question on discrete-action MDPs. Please find our answer to the question below.
---
* **“... most of the empirical results are in environments with continuous action spaces. Do you expect the results to carry over to environments with discrete action spaces?”**
Thanks for raising this point. There are two main differences between discrete-action and continuous-action MDPs: (1) discrete-action MDPs do not always require a separate policy extraction procedure, as we can directly enumerate over actions to choose the argmax action, and (2) DDPG is not straightforwardly defined in discrete-action MDPs. Hence, one of our takeaway messages (“use DDPG+BC instead of AWR”) may not be directly applicable to discrete-action MDPs. However, we expect that our main findings — namely (1) policy extraction can inhibit the full use of the learned value function (if there’s a separate policy extraction step) and (2) test-time policy generalization is one of the significant bottlenecks in offline RL — still apply to discrete-action MDPs as well. For example, recent works in RL fine-tuning of LLMs, which use a discrete action space, have observed similar findings to us: Tajwar et al. (2024) [1] showed that sampling actions (i.e., responses) from the policy (akin to sampling actions from the policy for making an update in DDPG+BC) lead to better performance than AWR that does not sample a new action from the policy for training, and Cobbe et al. [2] showed the effectiveness of test-time verifier reranking methods, which is closely related to the test-time generalization argument and TTT/OPEX in our paper. While of course we do not focus on LLMs in this study, these prior works illustrate that similar findings are applicable to discrete actions. We will add a discussion about discrete-action MDPs in the paper.
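As a minimal illustration of point (1) above (our hypothetical sketch, not code from the paper): in a small discrete-action MDP, the greedy policy can be read off from the Q-values by direct enumeration, so no separate extraction objective is needed:

```python
def greedy_action(q_values):
    """Extract the greedy action from a list of Q(s, a) values for one state
    by enumerating all discrete actions; no separate policy extraction step
    (AWR/DDPG+BC) is required when the action space is small and discrete."""
    return max(range(len(q_values)), key=lambda a: q_values[a])
```

In continuous action spaces this argmax is itself a nontrivial optimization problem, which is precisely why a separate policy extraction step exists at all.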
We would like to thank the reviewer again for the helpful feedback and please let us know if there are any additional concerns or questions. | Rebuttal 1:
Rebuttal: We appreciate all four reviewers’ detailed feedback and suggestions. We would like to highlight the additional results we provide in the new 1-page PDF.
* **Adding $\mathbf{4}$ more seeds ($\mathbf{8}$ seeds in total) and standard deviation metrics:** Following the reviewers’ suggestions, we have added $\mathbf{4}$ **more seeds** as well as **standard deviation metrics** to our data-scaling matrices by adding $\mathbf{7744}$ **more runs** (**Figure 1** in the new PDF). Now, every experiment in our paper is aggregated over $\mathbf{8}$ **seeds** in total.
* **Aggregation metrics:** Following the suggestion of Reviewer C9VZ, we have added two new aggregation metrics that clearly highlight our takeaways.
* First, we aggregate normalized returns over the entire data-scaling matrices for each value and policy algorithm (by marginalizing over every other factor), and report the interquartile mean (IQM) metrics [5] in **Figure 2** in the new PDF. The results support our finding that the difference between policy extraction methods is often larger than that between value learning methods.
* Second, we aggregate the performances from different policy extraction methods for each environment and value algorithm. **Table 1** in the new PDF shows that DDPG+BC is indeed almost always (15 out of 16 tasks) better than or as good as AWR. We will add these results to the paper.
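For reference, the IQM aggregate used above discards the bottom and top quartiles of runs and averages the middle half. A minimal sketch of this computation (our illustration under that definition, not the authors' evaluation code, which follows [5]):

```python
def iqm(scores):
    """Interquartile mean: drop the lowest and highest 25% of scores
    (e.g., per-seed normalized returns), then average the middle 50%.
    More robust to outlier runs than the plain mean or the median."""
    x = sorted(scores)
    n = len(x)
    lo, hi = n // 4, n - n // 4  # indices bounding the middle half
    mid = x[lo:hi]
    return sum(mid) / len(mid)
```

For example, `iqm([1, 2, 3, 4])` averages only the middle two scores, yielding 2.5.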
* **Offline-to-online RL experiment with IQL+DDPG+BC:** To address a concern of Reviewer C9VZ, we have repeated our offline-to-online RL experiment (Figure 5 in the original draft) with IQL+DDPG+BC, instead of IQL+AWR.
* In the original offline-to-online RL experiment, we used IQL+AWR because (1) we wanted to use the simplest setting to illustrate the generalization issue (note that DDPG+BC requires a separate exploration mechanism in the online phase since it learns a deterministic policy, unlike AWR), and (2) most of the pathologies of AWR (e.g., the mode-covering issue, limited expressivity, etc.) go away if we’re allowed to use *online* rollouts [1].
* That being said, following the suggestion of Reviewer C9VZ, we repeated the same offline-to-online experiments with IQL+DDPG+BC with Gaussian exploration noises, and report the results in **Figure 3** in the new PDF. The new results show a very similar trend to the original results, suggesting that the conclusion remains the same.
* In addition, following the suggestion of the reviewer, we also measure the MSE metric under the state-marginal distribution of the oracle optimal policy (“$d^{\pi^*}$ MSE”). **Figure 3** in the new PDF shows the results, which suggests that the $d^{\pi^*}$ MSE is also correlated with performance (especially compared to the training/validation MSEs), but the correlation is slightly weaker than the evaluation MSE metric. We believe this is because the evaluation MSE directly measures the accuracy of the current policy under the *current* policy distribution.
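For readers unfamiliar with the interquartile mean (IQM) aggregation metric used above, a minimal sketch of how it is computed (the run values below are made up for illustration; the actual results aggregate the full data-scaling matrices over 8 seeds):

```python
def interquartile_mean(scores):
    """IQM (Agarwal et al., 2021): mean of the middle 50% of runs.

    Trimming the bottom and top 25% makes the aggregate far less
    sensitive to outlier seeds than a plain mean, while discarding
    less information than the median.
    """
    s = sorted(scores)
    q = len(s) // 4              # number of runs trimmed from each tail
    middle = s[q:len(s) - q]
    return sum(middle) / len(middle)

# Normalized returns from 8 seeds (hypothetical values):
runs = [0.12, 0.55, 0.58, 0.60, 0.61, 0.63, 0.65, 0.99]
print(interquartile_mean(runs))  # averages only the middle 4 runs
```

Note how the two outlier seeds (0.12 and 0.99) are excluded from the aggregate entirely.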
---
Additionally, below we provide additional comments about test-time generalization to **Reviewer C9VZ**. We provide them here due to the character limit of each response.
* **Additional comments to Reviewer C9VZ about the point “It is not clear why there is such focus on the generalization of the policies and not the values.”**
As we discussed in the main response, our main focus in the second analysis is *not* to argue that policy generalization is necessarily more important than value generalization, but to highlight the significance of the effect of test-time generalization (regardless of policy or value) on performance, which is an important but often overlooked bottleneck in offline RL. That being said, in the paper, we do have some empirical results that allow us to compare policy generalization and value generalization, and we discuss them here in case it is of interest to the reviewer.
First, as the reviewer pointed out, our experiments with OPEX/TTT imply that *values often generalize better than policies*. OPEX/TTT only updates the policy (or actions) from the *fixed* offline value function on test-time states, and the fact that this often improves performance indicates that the information in the value function has often not been fully transferred into the policy. Hence, in this case, we would be able to say that *policy* generalization is the “bottleneck” in performance.
Second, we also found that policy generalization *alone* can affect performance significantly. The results in Appendix C suggest this, where we show that just changing the representation of the BC policy can lead to a very significant difference in performance (this in fact leads to near-SOTA performance on gc-antmaze-large, even without using any RL!) and we show that this is due to nothing other than the difference in policy generalizability (note that this is a BC setting, so there’s no value function).
---
Below are the references that we use throughout our response:
[1] Tajwar et al., Preference Fine-Tuning of LLMs Should Leverage Suboptimal, On-Policy Data, ICML 2024. \
[2] Cobbe et al., Training Verifiers to Solve Math Word Problems, 2021. \
[3] Fujimoto et al., Why Should I Trust You, Bellman? The Bellman Error is a Poor Replacement for Value Error, ICML 2022. \
[4] Springenberg et al., Offline Actor-Critic Reinforcement Learning Scales to Large Models, ICML 2024. \
[5] Agarwal et al., Deep Reinforcement Learning at the Edge of the Statistical Precipice, NeurIPS 2021.
Pdf: /pdf/3da2f3eb0eb37b20ce1825ce195f1ad0f4ae308b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
If You Want to Be Robust, Be Wary of Initialization | Accept (poster) | Summary: This paper provides a theoretical study of the impact of the number of training epochs and of initialisation on adversarial robustness for GNNs, potentially generalising to DNNs. The theoretical evidence is supported by some empirical results.
Strengths: The study of how the number of training epochs and initialisation affect GNN robustness is novel and interesting.
The presentation is clear.
Both theoretical and empirical evidence are provided.
Weaknesses: The study presents novel insights into Graph Neural Networks (GNNs); however, similar questions have already been somewhat discussed for Deep Neural Networks (DNNs), regarding both the number of training epochs [1, 2, 5] and initialisation methods [3, 4].
While the study is interesting, its contribution is somewhat limited due to the lack of proposed methods based on the findings.
From my understanding, the empirical validation is based on a single architecture, which may introduce limitations and bias. Given that the findings focus on initialisation, the empirical validation should cover a wider range of architectures.
The additional study on DNN and GIN is appreciated. However, the empirical evidence primarily focuses on outdated adversarial setups. The evidence would be more convincing if the authors provided empirical results using current and more practical adversarial setups, such as large-scale datasets (e.g., ImageNet-1K), modern architectures (e.g., ResNet, ConvNeXt, ViT), and recent adversarial attacks (e.g., AutoAttack, APGD) [5].
I believe that, in the absence of a proposed method, validating the theoretical findings on a wider range of adversarial setups, as suggested, would help improve the contribution of this paper.
[1] Mo, Yichuan, et al. "When adversarial training meets vision transformers: Recipes from training to architecture." NeurIPS 2022.
[2] Pang, Tianyu, et al. "Bag of tricks for adversarial training." ICLR 2021.
[3] Hua, Andong, et al. "Initialization matters for adversarial transfer learning." CVPR 2024.
[4] Vaishnavi, Pratik, Kevin Eykholt, and Amir Rahmati. "A Study of the Effects of Transfer Learning on Adversarial Robustness." Transactions on Machine Learning Research.
[5] Singh, Naman Deep, Francesco Croce, and Matthias Hein. "Revisiting adversarial training for imagenet: Architectures, training and generalization across threat models." NeurIPS 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weakness.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive feedback and we would like to answer their main concerns and questions as follows:
**[W1 - Regarding the provided references]** We thank the reviewer for these references that we weren’t aware of, and we will include a complete analysis and discussion in our “Related work” section. It seems that the majority of these works are interested in the specific effect of initialization and other dynamics in the case of adversarial training. Our framework is more focused on classical training and how different parameters (such as the number of layers, the number of epochs and the learning rate) can affect the final underlying robustness. The majority of these papers also focus on the empirical side; hence our theoretical analysis can be relevant to close the gap. In fact, we believe that our theoretical analysis can be adapted for the case of adversarial training (where we can consider that the “real” inputs have a gradient towards the downstream task and the “generated adversarial” inputs have a rather negative one).
**[W2 - On the considered benchmarks]** We would like to thank the reviewer for seeing the value of the generalization section of our theoretical framework (which we note is not our main focus). We have drafted a more complete answer in our “General Rebuttal” as it was a common point between the reviewers. Typically, we wanted to underline that our main interest revolves around GNNs, in which we have enough knowledge to make assumptions about the architectures and the internal dynamics of the message-passing framework. For instance, in the specific case of GCN (Theorem 2), the upper bound depends on the “normalized walks” within the input graph (meaning that denser graphs shall result in more adversarial effect). With deeper knowledge of each domain, similar theoretical results can be derived for other domains such as NLP or images, resulting in more useful insights. Consequently, we believe that the introduced generalization part can be seen as pointers aimed at helping close a gap in the literature, and can be a first direction for other researchers to expand (such as the previously discussed adversarial training setting). Upon your suggestion (for which we are deeply grateful), we have initiated some new experiments to show the validity of our results on the ResNet family as well, and to extend from MNIST (which was used for the DNN) to the CIFAR-10 dataset. Table 5 below reports the effect of the initialization on the ResNet model with both PGD and FGSM attacks. We will add these results (and expand them to other attacks) in our revised manuscript.
Table 5: Effect of Initialization on ResNet when subject to FGSM and PGD attacks using CIFAR-10 Dataset.
| Initialization | Clean Accuracy | FGSM($\epsilon=0.03$) | FGSM($\epsilon=0.07$) | PGD($\epsilon=0.03$) | PGD($\epsilon=0.07$) |
|----------------|:--------------:|:---------------------:|:---------------------:|:--------------------:|:--------------------:|
| Orthogonal | 91.6 $\pm$ 0.2 | 44.1 $\pm$ 0.6 | 39.2 $\pm$ 0.5 | 21.9 $\pm$ 0.6 | 11.8 $\pm$ 0.5 |
| Uniform | 91.2 $\pm$ 0.3 | 46.8 $\pm$ 0.4 | 41.8 $\pm$ 0.3 | 24.3 $\pm$ 0.3 | 13.6 $\pm$ 0.4 |
| Kaiming | 92.3 $\pm$ 0.1 | 42.3 $\pm$ 0.2 | 36.9 $\pm$ 0.2 | 20.7 $\pm$ 0.4 | 10.1 $\pm$ 0.3 |
| Xavier | 92.1 $\pm$ 0.2 | 42.9 $\pm$ 0.3 | 37.6 $\pm$ 0.4 | 21.2 $\pm$ 0.5 | 10.6 $\pm$ 0.1 |
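As a point of reference for the attack setup in Table 5, single-step FGSM can be sketched in a few lines. The toy logistic classifier and all weights and inputs below are hypothetical stand-ins for the actual ResNet pipeline:

```python
import math

def fgsm(w, b, x, y, eps):
    """One-step FGSM on a logistic classifier p = sigmoid(w.x + b):
    perturb x by eps in the sign direction of the loss gradient."""
    logit = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-logit))
    # d(BCE)/dx_i = (p - y) * w_i for the logistic loss
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy point with true label 1; the attack pushes the logit down,
# and each coordinate moves by exactly eps.
w, b = [2.0, -1.0], 0.0
x_adv = fgsm(w, b, x=[0.5, 0.2], y=1, eps=0.1)
```

PGD simply iterates this step several times while projecting back into the eps-ball around the clean input, which is why it is the stronger of the two attacks in the table.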
We have also conducted a number of experiments on the interaction of our theoretical results with possible other defense methods (such as Adversarial training and TRADES [1]) which are in the same direction as the references you have provided.
Table 6: Effect of Initialization on ResNet on the clean and attacked accuracy when subject to PGD adversarial training (AT-PGD) and TRADES with $\epsilon=8/255$.
| Initialization | Clean Accuracy (AT-PGD) | Attacked Accuracy (AT-PGD) | Clean Accuracy (TRADES) | Attacked Accuracy (TRADES) |
|----------------|:-----------------------:|:------------------------:|:-----------------------:|:------------------------:|
| Orthogonal | 83.8 $\pm$ 0.31 | 51.0 $\pm$ 0.37 | 82.6 $\pm$ 0.47 | 55.6 $\pm$ 0.32 |
| Uniform | 82.9 $\pm$ 0.17 | 54.1 $\pm$ 0.20 | 82.1 $\pm$ 0.23 | 57.9 $\pm$ 0.28 |
| Kaiming | 83.8 $\pm$ 0.22 | 46.5 $\pm$ 0.27 | 82.8 $\pm$ 0.27 | 52.4 $\pm$ 0.35 |
| Xavier | 83.2 $\pm$ 0.15 | 46.9 $\pm$ 0.19 | 82.7 $\pm$ 0.19 | 52.2 $\pm$ 0.23 |
—
[1] Zhang, Hongyang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. "Theoretically principled trade-off between robustness and accuracy." In International conference on machine learning, pp. 7472-7482. PMLR, 2019.
---
Rebuttal Comment 1.1:
Title: Official Comment by Reviewer A9tB
Comment: I thank the authors for the rebuttal, which addresses my major concerns. These are not further questions but rather a few suggestions that might improve the paper.
- I understand the results for ImageNet-1K is not feasible for the rebuttal, but I strongly encourage the authors to include them in the future.
- If the authors could compare the impact of initialisation on adversarial attacks with other defence mechanisms, it would underscore the significance of exploring initialisation for adversarial robustness. For example, varying the initialisations could result in a ~5% difference in attacked accuracy, while other defences might similarly reduce attacked accuracy by ~5%. This comparison would highlight the potential and importance of further investigating initialisation strategies for improving adversarial robustness in the future.
Consequently, I increase my score to 5 (borderline accept), as my concern about the contribution remains.
---
Reply to Comment 1.1.1:
Comment: We sincerely thank the reviewer for their thorough feedback and for taking the time to review our rebuttal. We agree that the comparison of the impact of different initialization schemes to the effectiveness of different existing defense methodologies is an exciting future research direction. You provide an interesting perspective on our work, which we will gladly attempt to add to the revised manuscript. | Summary: This paper studies the impact of weight initialization (and training epochs) on the adversarial robustness of a GNN model, both theoretically and empirically. The analysis is also extended to DNNs in general, although this is not the focus of this paper. The theoretical and empirical analyses both suggest that an increase in the norm of the initialization weights and an increase in training epochs both have a negative influence on adversarial robustness. The paper then compares different initialization strategies and shows that they lead to differences in robustness in their experiments.
Strengths: This paper studies the influence of model initialization and training epochs on the adversarial robustness of the model, with both theoretical and empirical analysis. The paper studies both GNNs and DNNs in general, striking a good balance between staying focused and being comprehensive. The presentation of the paper is great and easy to follow.
Weaknesses: Although there are many theories and claims presented in the first half of the paper, they can all simply be interpreted as "a smaller norm of weights leads to a more tightly bounded distance in output space given the same perturbation, and thus better robustness", which is not that interesting. In particular, it seems that the analysis did not consider the softmax function, or other types of normalization that may be applied.
Given that these theories give very loose bounds anyway, I am more interested to see how different initializations make a difference in practice. One big question about Fig. 2 is whether the models were all trained to convergence and what their test accuracies look like. I hope to see more details about the experimental setting as well.
Section 6.4 is the most interesting and important section in my view, where it compares different initialization strategies and their influence on robustness. It is unfortunately too short, and it could be made great if the authors could expand the experiments, draw some conclusions, or reveal deeper insights on the choice of initialization. To prove that different initializations do lead to different robustness, I think it needs to cover more models and attacks. And importantly, as there is randomness in the initialization and training (if using dropout, for example), I don't know if the results shown in Fig. 4 are from a single run of the experiment or averaged over multiple runs. They should be the average of multiple runs to avoid randomness, which can lead to different conclusions.
I am happy to be convinced to the value of this paper if these major concerns are addressed.
Technical Quality: 3
Clarity: 4
Questions for Authors: 1. Regarding Fig. 2, were the models all trained to convergence, and what do their test accuracies look like?
2. Regarding Fig. 4, is it a single run of the experiment or an average of multiple runs?
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Presentation of limitations of this paper is decent.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their insightful feedback. We would like to answer their main concerns and questions as follows:
**[W1 - General comment]** We feel slightly misunderstood by this comment and therefore want to apologize for any confusion that may have arisen from a possible lack of clarity in our explanations. In what follows, we try to address some of the perspectives that we feel weren’t fully clear:
- Starting from our risk definition, we can fairly consider that if, for an input point $x$, the output representations of the clean classifier and the attacked classifier are close in the output space, then we can expect that the attack has failed (since the softmax function is just a monotone transformation of the final output representation). The motivation for not taking the softmax function into account in our setting was the desire to be general, which is important in the case of GNNs since different downstream tasks are considered (node classification, node regression, graph classification/regression and link prediction).
- “Smaller norm weights” was actually already investigated by other papers (such as Parseval regularization[1]). In our case, we rather consider how different training dynamics (such as number of epochs, the considered initialization and the learning rate) can affect reaching robustness. Hence, the novelty of our approach is focused on the training dynamics instead of making the statement that “smaller weight norm” will help in robustness (which is already well investigated and well known in the adversarial literature).
- Regarding the tightness of the bounds, the idea was to show the link between the previously discussed dynamics and adversarial robustness. The provided bounds were actually the best we could come up with without additional assumptions (which would make the study less relevant in practice). For instance, the bound can be tightened as stated after Theorem 2: " the dependence of $\gamma$ on $t$ can be sharpened by having $(1+\eta L)^t$ instead of $2^t$. With small $\eta$ (which is the case usually in practice), $(1+\eta L)^t \approx 1+ t\eta L$ resulting in a bound which depends linearly in $t$”.
**[W2 - Regarding the generalization aspect]** We are grateful that this sub-result is of interest to you and the other reviewers. We have drafted a more detailed clarification to this point in our “General Rebuttal”. The main idea is that our interest revolves around GNNs in which we have enough knowledge to make assumptions about the architectures and the internal dynamics of the Message-passing framework. For instance, in the GCN’s case (Theorem 2), the upper-bound is dependent on the “normalized walks” within the input graph (meaning that denser graphs shall result in more adversarial effect). We believe that similar insights can be derived for other architectures such as those in the image domain. Consequently, the proposed generalization serves as a theoretical pointer to other researchers from the images or NLP domain to extend these results and eventually find more relevant and concise upper-bounds. We finally would like to refer the reviewer to the additional results on ResNet using the CIFAR 10 dataset in our general rebuttal (Table 1) and also the effect of adversarial training and other defenses such as TRACES on the same architecture (Table 2 - Response to Reviewer 1) which will be added to our revised manuscript.
**[Q1 - On the convergence of the models]** All the considered models in Figure 2 were trained until convergence. Of course, as pointed out, the initialization can have an effect on the clean accuracy. We have therefore chosen the distributions’ parameter ranges (the value of $\sigma$ of the Gaussian distribution and the scale value in the orthogonal and uniform distributions) so as to be close to the state-of-the-art performance of a GCN (within a 2% margin). In Table 4 below, we present the clean accuracy for the extreme cases of the initial distributions:
Table 4: GCN’s clean accuracy when subject to different initialization distribution using different parameters.
| | Gaussian ($\sigma=0.1$) | Gaussian ($\sigma=0.9$) | Uniform ($\text{scale}=1$) | Uniform ($\text{scale}=4$) | Orthogonal ($\text{scale}=1$) | Orthogonal ($\text{scale}=4$) |
|----------|:-----------------------:|:-----------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:-----------------------------:|
| Cora | 83.5 $\pm$ 0.4 | 82.8 $\pm$ 0.9 | 83.8 $\pm$ 0.6 | 82.5 $\pm$ 0.8 | 84.1 $\pm$ 0.31 | 83.2 $\pm$ 0.5 |
| CiteSeer | 73.1 $\pm$ 0.3 | 71.6 $\pm$ 0.7 | 72.8 $\pm$ 0.7 | 70.5 $\pm$ 0.5 | 72.4 $\pm$ 0.4 | 71.6 $\pm$ 0.6 |
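To make concrete how $\sigma$ and the scale parameter control the initial weight norm that enters our bounds, a toy sketch (the layer sizes, parameter values, and the Glorot-style uniform range below are illustrative only, not the paper's actual GCN initialization):

```python
import math
import random

def init_weight_norm(scheme, fan_in, fan_out, param, seed=0):
    """Frobenius norm of one fan_in x fan_out weight matrix.

    `param` is sigma for the Gaussian family and the multiplicative
    scale for the (Glorot-style) uniform family.
    """
    rng = random.Random(seed)
    n = fan_in * fan_out
    if scheme == "gaussian":
        w = [rng.gauss(0.0, param) for _ in range(n)]
    elif scheme == "uniform":
        bound = param * math.sqrt(6.0 / (fan_in + fan_out))
        w = [rng.uniform(-bound, bound) for _ in range(n)]
    else:
        raise ValueError(scheme)
    return math.sqrt(sum(v * v for v in w))

# Larger sigma / scale -> larger initial norm -> looser robustness
# upper bound, which is why the parameter ranges above were capped.
small = init_weight_norm("gaussian", 128, 64, 0.1)
large = init_weight_norm("gaussian", 128, 64, 0.9)
assert large > small
```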
**[Q2 - On the number of trials]** We apologize if this wasn’t highlighted enough in our manuscript. As explained in our experimental setting (Section 6.1 and in details in Appendix H): “To mitigate the impact of randomness during training, each experiment was repeated 10 times, using the train/validation/test splits provided with the datasets.” We will add this also to the figure caption as well in the revised manuscript.
—
[1] Cisse, Moustapha, Piotr Bojanowski, Edouard Grave, Yann Dauphin, and Nicolas Usunier. "Parseval networks: Improving robustness to adversarial examples." In International conference on machine learning, pp. 854-863. PMLR, 2017.
---
Rebuttal Comment 1.1:
Comment: I appreciated the prompt response from the authors.
[Q1] Thanks for the additional results. It looks like an increase in the norm of the initialization has a negative impact on clean accuracy, which is as expected. So there is essentially a trade-off between accuracy and robustness (as most defensive methods would have). I think this should be highlighted in the paper. It would be most interesting if we could get more general insights regarding this trade-off, e.g., how does this trade-off compare to other defense methods; which initialization method achieves the best trade-off. Not to say the authors should address these now, but rather some extensive exploration directions to consider in the future.
[Q2] Thanks for response. My question is addressed.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate the reviewer’s thorough feedback and the time taken to review our rebuttal. We are pleased to have addressed their questions effectively. We will expand and add these insights to our revised manuscript.
We also agree that the mentioned areas are very interesting future research directions. | Summary: This work investigates the relationship between weight initialization and adversarial robustness, specifically in Graph Neural Networks (GNNs). In this setting, a defender wants to train a GNN for which, given an input graph X, an adversary cannot find a similar graph which induces a different output from the GNN than X. The authors prove an upper bound on the adversarial risk of a given GNN that corresponds to the norm of the initial weights of the model. Bounds of this type are proved for both node feature robustness and structural robustness, with initialization shown to be more important in the structural case. Similar bounds are derived for other GNN architectures, like the GIN, and for DNNs more generally. Through experiments, it is shown that, as indicated by the theory, training models for more epochs can lead to higher robustness and lower-variance parameter initialization leads to higher robustness. These trends are shown to hold for different architectures, datasets, and weight initialization techniques.
Strengths: This paper seemingly has high originality, being the first to study the impact of weight initialization on model robustness. It may have significant implications for the field of adversarial robustness, and may lead to improved adversarial training techniques. Clarity is one of this paper's main strengths. It is clear how the claims made in the introduction relate to the theoretical results, and the proofs provided in the appendix are easy to follow. Overall, I think this is a well-written paper with interesting results.
Weaknesses: While the experimental section does validate many of the main results of the paper, there are other experiments that I would have liked to see to back up some claims made in earlier sections. Specifically, it would have been nice to see empirical results mirroring figure 2 but showing how different initializations impacted clean accuracy to show what the trade-off between accuracy and robustness looks like in a realistic setting.
Also, I think the framing and ordering of this paper may limit its reach. Based on the abstract, this paper seems to be targeted towards those who are interested in graph neural networks, despite the fact that the generalized results would be compelling even absent the discussion of GNNs. I feel this paper would be more effective if it put the generalized results (section 5) first, and then followed that up with a case study in GNNs to give an example of how this theory can be applied in specific settings.
Figures 1-3 are difficult to parse without closely reading their descriptions in the text. I would appreciate if the legends were made more descriptive and the captions were more detailed as to what is being shown.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How tight are the bounds presented in sections 4 and 5? Do you think future work might be able to tighten these bounds?
- In section G.1, it looks like the definition of adversarial risk is average-case (taking the expectation over all perturbations in a neighborhood), rather than worst-case. Was this intentional? It seems in conflict with the definition offered in equation 2.
- It seems like these results may also suggest that regularization (to target final weights with a smaller norm) is good for robustness. There have been recent papers that study the relationship between regularization and robustness [1,2], do you think your paper relates to this line of research?
- What are the implications of this for fine-tuning? Is this framework still useful when starting from a pretrained model?
[1] Nern, Laura F., et al. "On transfer of adversarial robustness from pretraining to downstream tasks." Advances in Neural Information Processing Systems 36 (2024).
[2] Dai, Sihui, Saeed Mahloujifar, and Prateek Mittal. "Formulating robustness against unforeseen attacks." Advances in Neural Information Processing Systems 35 (2022): 8647-8661.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations are addressed in the problem setup and conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank the reviewer for their feedback and their comments.
We are grateful for your comment on the novelty of our theoretical analysis and direction. Regarding the generalization of the results, we have drafted a complete response in the “General Rebuttal” as all the reviewers evoked this point. While this interest is certainly encouraging, we note that the framing of the paper is in line with our main interest, which revolves around GNNs, in which we have enough knowledge to make assumptions about the architectures and the internal dynamics. For instance, in the GCN’s case (Theorem 2), the upper bound depends on the “normalized walks” within the input graph (meaning that denser graphs shall result in more adversarial effect). We believe that similar insights can be derived for other architectures such as those in the image domain. Hence, this part of our manuscript can be seen as pointers to more advanced specific studies in each domain that can eventually result in interesting upper bounds. Finally, we would like to refer the reviewer to the additional results on ResNet using the CIFAR-10 dataset in our general rebuttal (Table 1) and also the effect of adversarial training and other defenses such as TRADES on the same architecture (Table 2 - Response to Reviewer 1).
In what follows, we try to address the additional questions/concerns raised by the reviewer:
**[W1 - Regarding the clean accuracy of Figure 2]** In our experimental results, we have chosen to report the success of the attack, since we thought it is a better experimental representation of the theoretical results (our adversarial risk definition consists of the gap between the clean/attacked model). Nonetheless, ensuring a reasonable initial/clean accuracy (within 2% of the state of the art) was important, hence the choice of the distributions’ parameter ranges (the value of $\sigma$ of the Gaussian distribution and the scale value in the orthogonal and uniform distributions). Table 3 below reports the clean accuracy in the extreme cases of the initial distributions used in the analysis in Figure 2. We will add the complete figure of clean accuracies in the Appendix.
Table 3: GCN’s clean accuracy when subject to different initialization distribution using different parameters.
| | Gaussian ($\sigma=0.1$) | Gaussian ($\sigma=0.9$) | Uniform ($\text{scale}=1$) | Uniform ($\text{scale}=4$) | Orthogonal ($\text{scale}=1$) | Orthogonal ($\text{scale}=4$) |
|----------|:-----------------------:|:-----------------------:|:--------------------------:|:--------------------------:|:-----------------------------:|:-----------------------------:|
| Cora | 83.5 $\pm$ 0.4 | 82.8 $\pm$ 0.9 | 83.8 $\pm$ 0.6 | 82.5 $\pm$ 0.8 | 84.1 $\pm$ 0.31 | 83.2 $\pm$ 0.5 |
| CiteSeer | 73.1 $\pm$ 0.3 | 71.6 $\pm$ 0.7 | 72.8 $\pm$ 0.7 | 70.5 $\pm$ 0.5 | 72.4 $\pm$ 0.4 | 71.6 $\pm$ 0.6 |
**[Q1 - Regarding the tightness of the upper-bounds]** Indeed, these bounds can be tightened depending on the used optimization algorithm and the considered assumptions. In the case of Gradient Descent (GD) that we considered in the paper, we did not find better bounds for the iterates than those presented in the appendix (Eqs. (3) and (4) under non convex assumption and Eqs. (12) and (13) under strong convexity assumption). Regarding the final bounds we gave in sections 4, they can be tightened as stated after Theorem 2: " the dependence of $\gamma$ on $t$ can be sharpened by having $(1+\eta L)^t$ instead of $2^t$. With small $\eta$ (which is the case usually in practice), $(1+\eta L)^t \approx 1+ t\eta L$ resulting in a bound which depends linearly in $t$”.
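The tightening remark quoted above can be checked numerically; the values of $\eta$, $L$, and $t$ below are illustrative only:

```python
eta, L, t = 1e-3, 2.0, 100   # hypothetical learning rate, smoothness, epochs

exact = (1 + eta * L) ** t   # sharpened factor (1 + eta*L)^t
linear = 1 + t * eta * L     # first-order approximation, linear in t
naive = 2.0 ** t             # the looser 2^t factor in the stated bound

# For small eta the sharpened factor stays within a few percent of its
# linear approximation, while 2^t is astronomically larger.
assert abs(exact - linear) / linear < 0.05
assert naive / exact > 1e25
```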
**[Q2 - On the definition of adversarial risk]** This is a small typo in the writing; note that we have used the correct formulation in the Theorem’s proof (Appendix E - Eq. 10). We thank the reviewer for spotting this, and we will correct it in the revised manuscript.
**[Q3 - Linking to regularization techniques]** The idea of adversarial defense through regularization techniques is indeed very related to our work. The main difference is that, in addition to the final weights’ norm, we also consider the initial weights, the learning rate and the number of epochs. Hence, our analysis can be seen as a generalization of the study of regularization techniques, in which we also focus on the training dynamics.
**[Q4 - Extending to fine-tuning]** This is indeed a very interesting direction that could be seen as a sub-result of our theoretical analysis. For now, there isn’t much work on pre-trained GNNs, but we expect this field to expand in the coming years, which makes this question all the more relevant. In the case of other domains (such as NLP), there are two directions that could be investigated from this perspective. The first direction is when we consider that we have no control over the pre-trained model; hence the only control is over the classification/regression head that would be trained. In this case, our theoretical result applies to the choice of the initial distribution of the weights and the number of fine-tuning epochs, where a balance between accuracy and robustness should be found. In the second direction, where we have control over the pre-trained model, we don’t think our theoretical result can be applied, since the pre-training consists of a number of “self-supervised tasks” (such as token masking or contrastive learning), and we believe that our adversarial risk definition doesn’t necessarily extend from the pre-training embedding space to the downstream tasks. This latter point could indeed be an excellent research direction, in which the goal would be to investigate the best pre-training dynamics capable of ensuring the downstream adversarial robustness of a pre-trained model.
---
Rebuttal Comment 1.1:
Comment: Thank you for your thorough response to my questions. I find the clean accuracies presented in Table 2 reassuring, and I appreciate the additional results validating the extension of the theory to different architectures. I will be raising my score to a 7.
---
Reply to Comment 1.1.1:
Comment: We are deeply grateful to the reviewer for taking the time to review our rebuttal. We are glad that we have been able to respond and address their concerns. | Summary: The paper investigates the under-explored impact of weight initialization on the robustness of Graph Neural Networks (GNNs) against adversarial perturbations. The authors present a theoretical framework linking weight initialization strategies and training epochs to the model's resilience to adversarial attacks. They derive an upper bound that connects the model's robustness to initial weight norms and training epochs, showing that appropriate initialization strategies enhance robustness without degrading performance on clean datasets. The findings are validated through extensive experiments on various GNN models and real-world datasets subjected to different adversarial attacks, extending the theoretical insights to Deep Neural Networks (DNNs).
Strengths: 1. Originality:
* The paper addresses a novel dimension of adversarial robustness in GNNs by focusing on weight initialization strategies, an area largely unexplored in existing literature.
* Extends the theoretical framework to apply broadly to DNNs, showcasing its versatility and broader applicability.
2. Quality:
* The theoretical analysis is rigorous, providing a clear connection between initialization strategies, training epochs, and model robustness.
* Extensive experiments with diverse models and real-world datasets validate the theoretical findings, demonstrating the practical significance of the proposed framework.
3. Clarity:
* The paper is well-structured, with a logical flow from theoretical analysis to experimental validation.
4. Significance:
* The insights on the impact of weight initialization on adversarial robustness can influence future research directions and the development of more robust GNNs and DNNs.
Weaknesses: 1. Hyperparameter Tuning:
* The paper does not discuss the sensitivity of the proposed initialization strategies to other hyperparameters, such as learning rate, batch size, or the number of layers. This omission could lead to challenges in replicating and generalizing the findings, as the effectiveness of the initialization strategies might vary with different hyperparameter settings.
2. Practical Implementation:
* The practical implementation details of the proposed initialization strategies in real-world scenarios are not deeply explored. Practitioners might struggle to adopt these strategies without clear guidance on integrating them into existing workflows. Including a discussion on how to balance the trade-off between choosing the right number of epochs to achieve optimal clean accuracy and the most robust model would strengthen the paper.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Interaction with Other Defense Mechanisms: how do the proposed initialization strategies interact with existing adversarial defense mechanisms, such as adversarial training or regularization techniques? For example, does combining your initialization methods with adversarial training lead to improved robustness, or are there any diminishing returns?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The theoretical framework is primarily validated on GNNs, leaving uncertainty about its applicability to other types of neural networks, such as recurrent neural networks (RNNs) or transformer models. This limitation reduces the perceived impact and versatility of the findings. Extending the theoretical analysis and experimental validation to include these other neural network architectures would enhance the generalizability of the results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to sincerely thank the reviewer for their feedback and comments.
As detailed in our "General Rebuttal", the main focus of our research is GNNs, which is why the experimental setting was focused on this side. The extension to other models (such as the DNNs in the paper or the additional ResNet on Cifar-10 that we provide in the general rebuttal - Table 1) was intended to show the extensibility of the theoretical results, which we found to be missing from the corresponding literature. We will point this out further in our limitations section. In what follows, we address the raised concerns and questions one by one.
**[W1 - Regarding the hyper-parameters]** Note that the provided theoretical results (Theorems 2 & 3) do indeed show the possible effect of the number of layers and the learning rate (Equation 3, Section A of the Appendix). Hence, the reviewer's intuition regarding the possible effect of hyper-parameters is correct. Since the goal of our experimental setting (described in Appendix H) was to validate the theoretical results, we followed the hyperparameters usually used in the literature to ensure the classifier's and the attacker's convergence. In particular, we focused on a 2-layer GCN, since it can reach state-of-the-art accuracy and because adding many more layers degrades accuracy, a phenomenon known as over-smoothing in the GNN literature.
**[W2 - Regarding the practical usage of the results]** This is indeed a fair and important point. The main result of our theoretical analysis is that the training dynamics (number of epochs, learning rate and number of layers) also affect the final model's adversarial robustness. Consequently, one possible future direction is a robustness metric to track the evolution of a model's robustness during training (similar to the validation accuracy used to select the top model). One possible metric would be to approximate the proposed adversarial risk quantity (Equation 2) with an estimator, for instance via randomised or stratified sampling from the input's neighbourhood and evaluating the distance to the input's prediction.
To summarise, in addition to taking into account our proposed results (such as finding the right trade-off in the number of epochs to reach both convergence and adversarial robustness, and choosing the right initialization), a model-validation approach is needed to keep track of the adversarial aspect. We will incorporate these practical guidelines into the revised manuscript, making our theoretical and experimental insights more useful in practice.
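As a concrete illustration of the sampling-based estimator described above, the following minimal NumPy sketch (the function name and the toy linear classifier are ours, purely for illustration, not the paper's implementation) measures local prediction stability by uniform sampling in an $\epsilon$-ball around an input:

```python
import numpy as np

def empirical_robustness(predict, x, eps=0.1, n_samples=200, rng=None):
    """Fraction of uniformly sampled perturbations in the eps-ball around x
    whose predicted class matches the clean prediction (1.0 = locally robust)."""
    rng = np.random.default_rng(rng)
    clean = predict(x)
    deltas = rng.uniform(-eps, eps, size=(n_samples,) + x.shape)
    same = sum(predict(x + d) == clean for d in deltas)
    return same / n_samples

# Toy linear classifier, for illustration only.
W = np.array([[2.0, -1.0], [-1.0, 2.0]])
predict = lambda x: int(np.argmax(W @ x))

# Far from the decision boundary x1 = x2: fully stable.
score_far = empirical_robustness(predict, np.array([1.0, 0.0]), eps=0.05, rng=0)
# Close to the boundary: some sampled perturbations flip the prediction.
score_near = empirical_robustness(predict, np.array([0.51, 0.49]), eps=0.05, rng=0)
```

Such a score could be tracked across epochs alongside validation accuracy, in the spirit of the model-selection procedure we suggest.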
**[Q1 - On the interaction with adversarial defenses]** We have indeed studied the possible combination of our proposed insights with some "benchmark" graph adversarial defenses in Appendix G.4, where we studied RGCN (Figure 8) and GNN-Jaccard (Figure 9). For the sake of generalization, and we thank the reviewer for suggesting the idea, we have also computed results with adversarial training (based on PGD) and with TRADES [1] on the ResNet model using the Cifar-10 dataset. Table 2 reports the clean and attacked accuracy, therefore showcasing the application of our theoretical insights both to adversarial defenses and to other architectures (besides GNNs and DNNs).
Table 2: Effect of Initialization on ResNet on the clean and attacked accuracy when subject to PGD adversarial training (AT-PGD) and TRADES with $\epsilon=8/255$.
| Initialization | Clean Accuracy (AT-PGD) | Attacked Accuracy (AT-PGD) | Clean Accuracy (TRADES) | Attacked Accuracy (TRADES) |
|----------------|:-----------------------:|:------------------------:|:-----------------------:|:------------------------:|
| Orthogonal | 83.8 $\pm$ 0.31 | 51.0 $\pm$ 0.37 | 82.6 $\pm$ 0.47 | 55.6 $\pm$ 0.32 |
| Uniform | 82.9 $\pm$ 0.17 | 54.1 $\pm$ 0.20 | 82.1 $\pm$ 0.23 | 57.9 $\pm$ 0.28 |
| Kaiming | 83.8 $\pm$ 0.22 | 46.5 $\pm$ 0.27 | 82.8 $\pm$ 0.27 | 52.4 $\pm$ 0.35 |
| Xavier | 83.2 $\pm$ 0.15 | 46.9 $\pm$ 0.19 | 82.7 $\pm$ 0.19 | 52.2 $\pm$ 0.23 |
---
[1] Zhang, Hongyang, Yaodong Yu, Jiantao Jiao, Eric Xing, Laurent El Ghaoui, and Michael Jordan. "Theoretically principled trade-off between robustness and accuracy." In International conference on machine learning, pp. 7472-7482. PMLR, 2019.
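For completeness, the four initialization schemes compared in Tables 1 and 2 follow their standard definitions; the NumPy sketch below (our own illustrative helper, not the code used in the experiments) shows one common formulation of each:

```python
import numpy as np

def init_weights(shape, scheme, rng=None):
    """Draw an initial weight matrix of `shape` = (fan_out, fan_in)
    under one of the four schemes compared in the tables."""
    rng = np.random.default_rng(rng)
    fan_out, fan_in = shape
    if scheme == "xavier":      # Glorot uniform: U(-a, a), a = sqrt(6 / (fan_in + fan_out))
        a = np.sqrt(6.0 / (fan_in + fan_out))
        return rng.uniform(-a, a, size=shape)
    if scheme == "kaiming":     # He normal: N(0, 2 / fan_in)
        return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=shape)
    if scheme == "uniform":     # plain uniform: U(-1/sqrt(fan_in), 1/sqrt(fan_in))
        b = 1.0 / np.sqrt(fan_in)
        return rng.uniform(-b, b, size=shape)
    if scheme == "orthogonal":  # orthonormal rows via QR decomposition
        q, _ = np.linalg.qr(rng.normal(size=(max(shape), max(shape))))
        return q[:fan_out, :fan_in]
    raise ValueError(f"unknown scheme: {scheme}")

W = init_weights((64, 128), "orthogonal", rng=0)
```

Deep-learning frameworks expose equivalent routines (e.g. in PyTorch's `torch.nn.init`), which is what we rely on in practice.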
---
Rebuttal Comment 1.1:
Comment: I thank the authors for the detailed response. My concerns have been addressed, and I would like to maintain my score.
---
Reply to Comment 1.1.1:
Comment: We are deeply grateful to the reviewer for taking the time to review our rebuttal. We are glad that we have been able to respond and address their concerns. | Rebuttal 1:
Rebuttal: **General Comment to all the Reviewers:**
We are grateful to all the reviewers for their comments on the potential novelty of our theoretical insights and direction, and we are also happy that our "generalization to other models" section has caught their attention. We would like to point out that our main research area revolves around GNNs (which in itself represents a vast research area from which different applications have emerged). In this perspective, our deep knowledge of the different architectures and adversarial attacks/defenses (which differ from those in the image domain, for instance, given the discrete nature of graphs) lies within this area. The generalization part (Section 5) was added because we found that our theoretical framework can easily be adapted to different architectures and that similar theoretical insights are missing from other domains (such as images and NLP). In this direction, the adaptation to DNNs (Theorem 3) aims to give pointers and allow other researchers interested in these domains, who are more knowledgeable about architectures such as CNNs or RNNs, to unlock a new dimension in adversarial robustness.
Nonetheless, upon specific request of some reviewers, we have provided additional results using ResNets in Table 1 on the Cifar-10 Dataset with both PGD and FGSM attacks in order to show the extension of the DNN’s previously provided results to this type of architecture. We will add the corresponding figures (exploring a range of $\epsilon$) to our manuscript.
We will additionally try to expand our results to the ImageNet dataset (which was difficult to manage before the rebuttal deadline, given that each experiment requires training 10 models per initialization and evaluating the attacks).
Table 1: Effect of Initialization on ResNet when subject to FGSM and PGD attacks using Cifar-10 Dataset.
| Initialization | Clean Accuracy | FGSM($\epsilon=0.03$) | FGSM($\epsilon=0.07$) | PGD($\epsilon=0.03$) | PGD($\epsilon=0.07$) |
|----------------|:--------------:|:---------------------:|:---------------------:|:--------------------:|:--------------------:|
| Orthogonal | 91.6 $\pm$ 0.2 | 44.1 $\pm$ 0.6 | 39.2 $\pm$ 0.5 | 21.9 $\pm$ 0.6 | 11.8 $\pm$ 0.5 |
| Uniform | 91.2 $\pm$ 0.3 | 46.8 $\pm$ 0.4 | 41.8 $\pm$ 0.3 | 24.3 $\pm$ 0.3 | 13.6 $\pm$ 0.4 |
| Kaiming | 92.3 $\pm$ 0.1 | 42.3 $\pm$ 0.2 | 36.9 $\pm$ 0.2 | 20.7 $\pm$ 0.4 | 10.1 $\pm$ 0.3 |
| Xavier | 92.1 $\pm$ 0.2 | 42.9 $\pm$ 0.3 | 37.6 $\pm$ 0.4 | 21.2 $\pm$ 0.5 | 10.6 $\pm$ 0.1 | | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Model LEGO: Creating Models Like Disassembling and Assembling Building Blocks | Accept (poster) | Summary: This paper proposes a novel framework for model disassembling and assembling. A component locating technique is introduced to disassemble task-aware components from the models. And an alignment padding strategy and a parameter scaling strategy are also designed to assemble these useful components.
This work is meaningful for the interpretability of deep neural network models, as it draws on biology and brain science theory to design human-interpretable models, which can help uncover the black-box nature of CNNs.
Strengths: The paper is clearly written and easy to follow.
The proposed task is very novel; the whole pipeline is well presented, easy to understand, and makes sense.
The locating-then-assembling paradigm is well supported by the informative components identification and alignment padding strategy.
The authors tested their proposed method on various popular CNN models and obtained convincing results.
Weaknesses: In Section 3.1.1, the insight and rationale behind the proposed metric are not clear. Why can this metric measure the contribution of features?
Also, in Section 4.2, why can the Parameter Scaling Strategy balance the effects of different components?
Technical Quality: 4
Clarity: 3
Questions for Authors: refer to the weakness part.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: This paper only designs CNN-based methods, ignoring the popular ViT model.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments. We are pleased that you find our work to be novel and that you appreciate the clarity of our proposed pipeline and the convincing nature of the results. Below are our responses to each of your comments (your comments are highlighted in italics).
> Q1: *In section 3.1.1, the insight and rationale of the proposed metric is not clear. Why can this metric measure the contribution of features?*
>
We apologize for the confusion. The "contribution of features" refers to the extent to which features impact the prediction results (as described in lines 97 to 99 of the paper). This contribution is dynamic and continuous throughout the network layers during inference (lines 100 to 104). Based on this concept, we introduced two mechanisms for contribution flow: contribution aggregation and contribution allocation.
It is important to note that contributions are relative, meaning that a larger feature value has a greater relative impact on the model's final prediction (lines 129 to 131). In Sections 3.1.1 and 3.1.2, the contribution of features during aggregation and allocation is defined by their relative magnitude (Equations 5 and 8). Additionally, to account for nonlinear functions such as ReLU, we only use features with positive values to compute this contribution (Equations 4 and 7). Our experiments demonstrate that this method of calculating contributions effectively identifies key parameters related to subtasks.
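To illustrate this relative-magnitude computation, here is a simplified NumPy sketch (our own, capturing the idea behind Equations 4 and 5 rather than the exact per-kernel formulation): only positive feature values are kept, mirroring ReLU, and each channel's share of the total positive mass is its relative contribution:

```python
import numpy as np

def relative_contribution(features):
    """Relative contribution of each channel: keep only positive values,
    sum them per channel, and normalize so the contributions sum to 1.
    Larger positive activations thus contribute more to the prediction."""
    pos = np.maximum(features, 0.0)
    per_channel = pos.reshape(pos.shape[0], -1).sum(axis=1)
    total = per_channel.sum()
    return per_channel / total if total > 0 else per_channel

# Three channels of a tiny feature map: positive sums are 3, 1 and 0.
feats = np.array([[[3.0, -1.0]], [[1.0, 0.0]], [[-2.0, -5.0]]])
c = relative_contribution(feats)  # [0.75, 0.25, 0.0]
```

Channels whose relative contribution stays below a threshold for a given subtask are then the candidates for removal during disassembly.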
> Q2: *Also, in the 4.2, why the Parameter Scaling Strategy can balance the effects of different components?*
>
We apologize for any confusion. The term "effects of different components" refers to the varying magnitudes of parameters from different source models, which can influence the output disproportionately. Even if an input does not belong to a particular task-aware component, if the parameters of this component are of a significantly larger magnitude, it can produce a disproportionately large response, thereby skewing the prediction (lines 228 to 232 of the paper).
The Parameter Scaling Strategy addresses this issue by scaling the parameters of each task-aware component before assembly. This scaling is based on the response of each component to the same input, ensuring that the magnitudes of the parameters are kept at a consistent level. This prevents any single component from disproportionately affecting the prediction. Ablation experiments presented on lines 316 to 320 of the paper demonstrate the effectiveness of this strategy. We hope this explanation clarifies your concern.
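A simplified sketch of this idea (illustrative only; the exact scaling rule is given in the paper) rescales each task-aware component so that its response to a shared probe input sits at a common magnitude:

```python
import numpy as np

def scale_components(components, probe, target=1.0):
    """Rescale each component's weight matrix so its mean absolute response
    to the shared probe equals `target`, preventing a large-magnitude
    component from dominating the assembled model's prediction."""
    scaled = []
    for W in components:
        response = np.abs(W @ probe).mean()
        scaled.append(W * (target / response))
    return scaled

rng = np.random.default_rng(0)
probe = rng.normal(size=8)
big = 10.0 * rng.normal(size=(4, 8))   # component with large-magnitude weights
small = 0.1 * rng.normal(size=(4, 8))  # component with small-magnitude weights
a, b = scale_components([big, small], probe)
```

After scaling, both components respond at a comparable level, so neither skews the assembled prediction purely through parameter magnitude.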
> Q3: Limitations: This paper only designs CNN-based methods, ignoring the popular ViT model.
>
We understand your concern. The proposed MDA method, including the underlying concepts of contribution aggregation and allocation, is indeed applicable to Transformer models. The method is not tightly coupled with CNNs and can be applied to linear layers and other structures beyond convolutional layers (Appendix C), making it a general paradigm for model disassembly and reassembly tasks. This is demonstrated in the experiments presented in Section 5.3 of the paper. Additionally, we conducted a preliminary model disassembly experiment using ViT-Tiny on ImageNet-10, and the results are shown in Table 1.
Table 1: Disassembly performance of ViT-Tiny on ImageNet-10.
| Disassembled Task | Base (%) | Disa (%) |
| --- | --- | --- |
| 0-1 | 87.70 | 89.25 (+1.55) |
| 1-3 | 75.10 | 78.10 (+3.00) |
| 3-9 | 77.51 | 79.29 (+1.78) |
As shown in Table 1, the disassembled model can still be used for inference, with accuracy exceeding that of the original model.
Additionally, we further tested the assembly performance of ViT-Tiny on CIFAR-10 and CIFAR-100. However, the results were not satisfactory. For instance, when assembling disassembled CIFAR-10(0-3) and CIFAR-100(11-17) models, the accuracy without fine-tuning was only 13.27%. The primary reason for this is the task interference that occurs when submodels from different source models are assembled, a phenomenon that is particularly severe due to the self-attention mechanism inherent in Transformers. We believe that adjusting parameter distribution from different submodel parameter spaces before assembly could be a potential solution to this issue, which will be a focus of our future work.
---
Rebuttal Comment 1.1:
Title: Good response
Comment: This response successfully addressed my concerns. I maintain the accept score
---
Reply to Comment 1.1.1:
Comment: We are pleased that our responses have addressed your concerns. Thank you for your constructive feedback and support for our work. | Summary: This paper draws inspiration from the information subsystem pathways in biological vision systems and proposes a Model Disassembling and Assembling (MDA) approach.
- For model disassembling, the authors introduce the concepts of contribution aggregation and contribution allocation within convolutional filters and their kernels. The relative contribution of features is calculated and linked to the corresponding parameters for subsequent structure pruning.
- For model assembling, a new model is established by combining the disassembled, task-aware components derived from different source models without necessitating retraining.
To validate the effectiveness of MDA, the authors conduct extensive experimental evaluations across diverse architectures, including CNNs and GCNs. The results show that MDA can work effectively on these architectures. In addition to model creation, the authors also explore potential applications of MDA, such as model decision route analysis and model compression.
Strengths: 1. The concept of Model Disassembling and Assembling (MDA) is novel, offering a new perspective on model architecture. It suggests that each part of a model related to specific tasks or categories can be considered an independent functional component. These components can be flexibly reused and assembled to achieve various combinations for multi-classification tasks.
2. The concept of MDA, along with the proposed methods and formula definitions, is clearly presented. The experiments are well designed, and the comprehensive results validate that MDA can work effectively for model reuse or creation, and other task scenarios. Moreover, the extensive experiments provide deeper insights into MDA, enhancing the overall quality of the paper.
3. The paper is written in a clear and accessible manner. The illustrations, methods, and formulas are easy to understand, and the language used is straightforward. The provided source codes are also well-structured and easy to read.
4. The proposed method demonstrates significant performance benefits. Besides creating models by reusing model components, the basic idea of MDA has the potential to inspire future research directions, such as model interpretability and model architecture design.
Weaknesses: 1. As the number of categories increases, the performance of disassembling decreases. This means that to achieve the best disassembling effect, it is necessary to decompose each category separately, which is time-consuming and may result in a heavier assembled model.
2. Manually setting thresholds (Eqns. 9 and 10) to obtain relative contributions may not guarantee consistently good results across tasks.
3. While the authors claim that model assembling does not necessitate retraining or incur performance loss (Line 204), the assembled models are often inferior to the baseline without retraining or fine-tuning (see Table 1).
4. In the assembled model, different categories fail to follow their own pathways extracted from the model disassembling. For example, a disassembled conv-filter of CIFAR-10-cat0 might generate a high response on CIFAR-100-cat0 because the conv-filter has never seen CIFAR-100-cat0, which can be regarded as inter-class interference. This is why the assembled model initially shows degraded performance. Ideally, model assembling should ensure that pathways between different categories are mutually exclusive.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. The results in Table 3 have shown the effectiveness of MDA when applied to GCN models. However, could the authors provide more details on the design of MDA for GCN models?
2. Does model disassembling require additional training? If so, please present the details on how the numerical results in Tables 1 and 3 were obtained.
3. The fine-tuning of the assembled model will inevitably cause a shift in the internal pattern of the model. Do the pathways of the fine-tuned model remain the same as the original ones?
4. Have the authors considered the interference between categories when assembling the disassembled components? Are there any potential solutions to this issue?
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The limitations have been discussed in the paper. Additionally, there is no societal impact from the work performed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments. We are pleased that you find our work on model disassembly and reassembly to be novel and consider it to be inspiring for model interpretability and architectural design. Below are our responses to each of your comments (your comments are highlighted in italics).
> Q1: As the number of categories increases, the performance of disassembling decreases. This means that to achieve the best disassembling effect, it is necessary to decompose each category separately, which is time-consuming and may result in a heavier assembled model.
>
Thank you for your comment. We respectfully disagree with this viewpoint. Firstly, there is no evidence suggesting that the performance of disassembling decreases with an increasing number of categories. In fact, as shown in Table 1, regardless of the number of categories into which the model is disassembled, the performance of the disassembled models generally exceeds that of the original model, though the degree of improvement may vary. This is because model disassembly extracts only the parameters relevant to the target categories, reducing interference from unrelated parameters and thereby enhancing prediction accuracy. More importantly, the time required for model disassembly is minimal compared to training a new model, as detailed in our response to the first comment by Reviewer FhUd.
> Q2: *Manually setting thresholds (Eqns. 9 and 10) to obtain relative contributions may not guarantee consistently good results across tasks.*
>
Thank you for your feedback. We agree that manually setting thresholds (Equations 9 and 10) may not consistently yield optimal results across different tasks. The choice of thresholds represents a trade-off between the performance of the disassembled model and the number of model parameters. Consequently, the optimal thresholds for Equations 9 and 10 may vary across different tasks. To address the impact of manually chosen thresholds on the results, we are currently exploring a more automated approach. This involves developing adaptive algorithms that dynamically adjust the thresholds to meet the specific requirements of different tasks. Thank you again for your valuable suggestion.
> Q3: *While the authors claim that model assembling does not necessitate retraining or incur performance loss (Line 204), the assembled models are often inferior to the baseline without retraining or fine-tuning (see Table 1).*
>
Thank you for your comment. The table you are referring to should be Table 2. We acknowledge that in some scenarios, the performance of assembled models without fine-tuning can be inferior to that of the original models. As discussed on lines 270-272 of the paper, the assembled submodels originate from different source models, and even the data used to train these original models may differ. Consequently, during prediction, the numerous parameters of these submodels can interfere with each other, affecting the accuracy of the assembled model. This is indeed a limitation of our current approach and an area for future work, as mentioned on lines 341-343 of the paper.
However, this method still holds significant value. Currently, to address this limitation, we fine-tune the assembled models to mitigate interference among the submodels. It is important to note that this fine-tuning requires only a few iterations (e.g., the fine-tuned results in Table 2 are based on 10 iterations), which is substantially less computationally expensive than training a model from scratch. In some cases, the performance of the final fine-tuned model even exceeds that of the original model. Notably, fine-tuning is not required during the model disassembly phase.
> Q4: *In the assembled model, different categories fail to follow their own pathways extracted from the model disassembling. For example, a disassembled conv-filter of CIFAR-10-cat0 might generate a high response on CIFAR-100-cat0 because the conv-filter has never seen CIFAR-100-cat0, which can be regarded as inter-class interference. This is why the assembled model initially shows degraded performance. Ideally, model assembling should ensure that pathways between different categories are mutually exclusive.*
>
Thank you for your detailed comment. We understand your concerns and agree with your observations. Indeed, the failure of certain categories to adhere to their own pathways extracted from the model disassembly can result in degraded performance due to what you described as "inter-class interference." To address this issue, we have implemented the following measures during model assembly:
1. Parameter Scaling Strategy: During model assembly, we apply a parameter scaling strategy to mitigate significant differences in parameter magnitudes between disassembled submodels. This helps prevent the assembled model's predictions from being disproportionately influenced by submodels with larger parameter magnitudes. The results of our ablation studies on this strategy are presented in Table 4.
2. Minor Fine-Tuning Mechanism: As mentioned in our response to your previous comment, we consider performing a small number of fine-tuning iterations after model assembly. This helps the model adapt to the new data distribution and can enhance overall performance.
Additionally, we are exploring more generalized and stable methods for assembling submodels to prevent the occurrence of "inter-class interference." Thank you once again for your valuable feedback.
---
Rebuttal 2:
Title: Rebuttal by Authors [Q5-Q8]
Comment: > Q5: *The results in Table 3 have shown the effectiveness of MDA when applied to GCN models. However, could the authors provide more details on the design of MDA for GCN models?*
>
We apologize for any confusion. We are pleased that Table 3 demonstrates the effectiveness of MDA on GCN models. In fact, the core computation of GCN models is given by $\mathbb{H}^{l+1} = \delta(\tilde{\mathbb{A}}\mathbb{H}^{l}\mathbb{W}^{l})$, where $\tilde{\mathbb{A}}$ is the degree-normalized adjacency matrix, $\mathbb{H}^{l}$ and $\mathbb{W}^{l}$ represent the features and weights at layer $l$, respectively, and $\delta$ denotes an activation function.
This process can be simplified to linear operations involving $\mathbb{H}^{l}\mathbb{W}^{l}$. The MDA method proposed in our paper, which is designed for convolutional operations (discussed in Sections 3 and 4 of the paper), is also applicable to linear operations (as detailed in Appendix C). Therefore, no special modifications are required to apply MDA to GCN models; the same MDA method used for CNNs can be directly employed. This underscores both the versatility and effectiveness of the MDA method.
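To make this simplification concrete, a minimal NumPy sketch of the standard GCN layer above (function names are ours, for illustration) shows that the trainable part is exactly the linear map $\mathbb{H}\mathbb{W}$ that MDA already handles for fully-connected layers:

```python
import numpy as np

def normalize_adjacency(A):
    """Degree-normalized adjacency with self-loops:
    A_tilde = D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def gcn_layer(A_tilde, H, W):
    """One GCN layer: H' = ReLU(A_tilde @ H @ W).  A_tilde is fixed by the
    graph, so the learnable computation reduces to the linear map H @ W."""
    return np.maximum(A_tilde @ H @ W, 0.0)

# Tiny 3-node path graph with 4-dim features and 2 output channels.
A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(3, 4))
W = np.random.default_rng(1).normal(size=(4, 2))
H_next = gcn_layer(normalize_adjacency(A), H, W)
```

Since $\tilde{\mathbb{A}}$ carries no trainable parameters, disassembling a GCN amounts to disassembling the per-layer $\mathbb{W}^{l}$, exactly as in the linear-layer case of Appendix C.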
> Q6: *Does model disassembling require additional training? If so, please present the details on how the numerical results in Tables 1 and 3 were obtained.*
>
Thank you for your question. Model disassembling does not require any additional training. The performance results presented in Tables 1 and 3 are derived directly from the disassembled models. As observed, the performance of these disassembled models generally exceeds that of the original models. The detailed reasons for this performance improvement can be found in our response to your first comment.
> Q7: *The fine-tuning of the assembled model will inevitably cause a shift in the internal pattern of the model. Do the pathways of the fine-tuned model remain the same as the original ones?*
>
This is an interesting question. We have disassembled both the pre-fine-tuned and fine-tuned assembled models to examine the internal decision pathways. Our analysis reveals that fine-tuning does indeed result in changes to the internal decision pathways of the model. These changes allow the fine-tuned model to better adapt to the new data distribution, thereby mitigating the interference between parameters from different categories during inference.
> Q8: *Have the authors considered the interference between categories when assembling the disassembled components? Are there any potential solutions to this issue?*
>
Thank you for your question. We have indeed considered the issue of category interference when assembling disassembled components. Details regarding this concern are addressed in our responses to your third and fourth comments.
In the paper, we propose two approaches to mitigate this interference: the "parameter scaling strategy" and "minimal fine-tuning mechanism." These methods can alleviate inter-category interference to some extent without incurring significant computational overhead. Furthermore, in our future work, we plan to explore adaptive parameter adjustment strategies from the perspective of the parameter space of submodels. For example, we are considering leveraging diffusion models to adjust parameter distributions. This approach would involve adapting the parameters of the submodels based on the characteristics of each category, enabling the assembled model to rapidly adapt to new data distributions and potentially eliminating the need for fine-tuning.
However, it is important to note that the current methods for model disassembly and assembly still hold significant value.
---
Rebuttal Comment 2.1:
Comment: Thank you for your detailed response, which effectively addressed my concerns. I have also reviewed the comments and responses provided to the other reviewers, and I find that they align with my understanding of this work.
I believe that the model disassembly and assembly presented in this work are both insightful and significant for the deep learning community. I also look forward to seeing its potential application to a broader range of models and tasks.
Based on the above considerations, I will maintain a positive recommendation for the acceptance of this work.
---
Reply to Comment 2.1.1:
Comment: We are very pleased that these responses address your concerns. Thank you for your valuable feedback and recognition of our work. | Summary: This paper proposes the Model Disassembling and Assembling (MDA) task for CNN classifiers, introducing techniques for extracting and reassembling task-aware components. Experiments show reassembled models perform comparably or better than original models. The approach offers new applications in decision route analysis, model compression, and knowledge distillation, with future extensions planned for Transformers and multi-modal models.
Strengths: - The idea presented by this paper is novel and the technique adopted for solving the problem is new.
Weaknesses: 1. The definition of sub-task is strongly tied to category, severely limiting it to classification tasks. Additionally, the entire method seems complicated, appearing to be overfitted to classification tasks and the CNN architecture. Its inability to handle other types of tasks, especially self-supervised learning, which is widely recognized as the future, raises concerns about the value of this work.
2. There is a lack of systematic comparisons with other works, such as deep model reassembly[1], and the study only includes self-constructed specific tasks, making it hard to evaluate the technical soundness.
3. Given the widespread adoption of transformers in the field, the absence of experiments involving transformers is unfortunate.
[1] Yang, Xingyi, et al. "Deep model reassembly." Advances in neural information processing systems 35 (2022): 25739-25753.
Technical Quality: 2
Clarity: 2
Questions for Authors: Could you provide an intuitive explanation for why your method sometimes outperforms the baseline, given that the baseline is end-to-end trained for the classification task?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: As stated in the weakness part, the absence of the results on widely-adopted transformer architectures and its inapplicability to models trained with tasks other than classification (e.g., masked image modeling) may limit the value of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and comments. We are pleased that you found our work to be novel and our techniques for solving the problem to be innovative. Below are our responses to each of your comments (your comments are highlighted in italics).
> Q1: *The definition of sub-task is strongly tied to category, severely limiting it to classification tasks. Additionally, the entire method seems complicated, appearing to be overfitted to classification tasks and the CNN architecture. Its inability to handle other types of tasks, especially self-supervised learning, which is widely recognized as the future, raises concerns about the value of this work.*
>
Thank you for your constructive comments. We highly value the points you raised and provide our responses below.
**On the Definition of Sub-tasks**: The definition of sub-tasks in our model disassembly and reassembly is not tightly bound to categories. Beyond classification tasks, sub-tasks can be defined in various ways, including:
1. Image-related Sub-tasks: Besides classification, sub-tasks can extend to other image-related tasks, such as object detection and image segmentation. For instance, sub-tasks could include detecting/segmenting "pedestrians" or "vehicles." In image generation, sub-tasks could involve generating "eyes" or "noses." Therefore, the concept of model disassembly and reassembly is not strictly limited to classification tasks but has potential applications in detection, generation, and more.
2. Natural Language-related Sub-tasks: Additionally, sub-tasks can be defined for natural language processing tasks. In tasks like natural language generation or question answering, factual knowledge such as "The Eiffel Tower is located in Paris" can be defined as a sub-task. Queries like "Where is the Eiffel Tower?" or "Is the Eiffel Tower in Paris?" fall under this sub-task. Other factual knowledge, such as "The capital of the UK is London," can also be defined as sub-tasks. Hence, the ideas of model disassembly and reassembly have potential applications in NLP models, including large language models, and can be used for learning or knowledge editing.
**On the Generalization of the Method**: The entire method is not overfitted to classification tasks and CNN architectures. Below are core analyses and explanations of our proposed MDA method:
1. Model Disassembly: Our model disassembly method is based on the proposed contribution aggregation and allocation. Contribution refers to the impact of features or parameters on the model's final prediction (lines 97-99 of the paper). Contribution aggregation refers to how contributions flow when multiple input neurons point to a single output neuron, and contribution allocation refers to how contributions flow when a single input neuron points to multiple output neurons (Figure 1 in Section 3.1 and Figure 5 in Appendix C). Based on contribution aggregation and allocation, we can continuously locate features most relevant to the prediction and thus identify the parameters most closely connected to these features (lines 157-162).
2. Model Reassembly: For disassembled parameters, through padding strategies to match parameter dimensions (lines 208-211) and parameter scaling strategies to align parameter magnitudes (lines 228-232), sub-models from different source models can be assembled and directly used for inference.
As seen, our proposed model disassembly and assembly method is not strictly bound to CNNs. It can serve as a general paradigm for solving model disassembly and reassembly tasks, with the potential to generalize to other tasks and deep neural networks. Specifically, besides convolutional layers, we also provide methods for contribution aggregation and allocation in linear layers, which are essential for networks like graph neural networks (Appendix C). Furthermore, in Section 5.3, we demonstrate the effectiveness of our method on graph convolutional networks, indicating that the MDA method remains effective. Additionally, in response to your third comment (Q3), we have included experimental results demonstrating the applicability of our proposed method to Transformer models.
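To make the mechanism concrete, here is a minimal, hypothetical sketch of contribution-based disassembly for a single linear layer. The scoring rule `|W[i, j] * x[j]|` is an illustrative proxy for a contribution measure, not the paper's exact algorithm:

```python
import numpy as np

def disassemble_linear(W, x, keep_ratio=0.5):
    """Hypothetical sketch of contribution-based disassembly for one
    linear layer (an illustrative proxy, not the paper's exact rule).

    The contribution of input neuron j to output neuron i is scored as
    |W[i, j] * x[j]|; only the highest-contributing weights are kept,
    and the rest are zeroed out, yielding a sparse sub-model layer.
    """
    contrib = np.abs(W * x[None, :])          # per-weight contribution scores
    k = int(W.size * keep_ratio)              # number of weights to keep
    thresh = np.sort(contrib, axis=None)[-k]  # k-th largest score
    mask = contrib >= thresh
    return W * mask, mask

rng = np.random.default_rng(1)
W = rng.normal(size=(4, 6))
x = rng.normal(size=6)
W_sub, mask = disassemble_linear(W, x, keep_ratio=0.5)
assert W_sub.shape == W.shape
```

The same masking idea extends, in principle, to stacks of layers by propagating the retained neurons backward through the network.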
**On Applicability to Self-supervised Learning**: We acknowledge the importance of self-supervised learning but believe it does not limit the applicability of our model disassembly and reassembly method.
1. Self-supervised learning generally involves using pretext tasks such as context prediction or contrastive learning to obtain pre-trained models, which are then applied to downstream tasks like classification, detection, segmentation, or question answering in NLP. Given our above discussion on sub-task definitions and method generalization, applying model disassembly and assembly to models based on self-supervised learning is feasible and practical.
2. Regarding pre-trained models obtained through self-supervised learning, we believe directly applying model disassembly and reassembly to pre-trained models might not be particularly practical, since these models are typically used for broad feature learning and cannot be directly applied to specific downstream tasks. However, this does not preclude the applicability of model disassembly and reassembly. For instance, pre-trained models can be decomposed into parameters related to the pretext task and unrelated ones, facilitating model compression by removing parameters unrelated to the pretext task. As shown in Appendix J.2, this approach to model compression has potential benefits.
Thank you again for your valuable feedback. Applying model disassembly and reassembly to broader areas, such as NLP, is a key focus of our ongoing and future research efforts.
---
Rebuttal 2:
Title: Rebuttal by Authors [Q2]
Comment: > Q2: *There is a lack of systematic comparisons with other works, such as deep model reassembly[1], and the study only includes self-constructed specific tasks, making it hard to evaluate the technical soundness.*
>
Thank you for your valuable feedback. Here is our response:
**Systematic Comparisons with Other Methods**: We apologize for any confusion regarding this matter. In lines 44 to 47 of the main text and lines 503 to 504 of Appendix B, we highlight the key differences between our work and other related studies. To provide a systematic comparison, we have specifically compared our method, MDA, with the Deep Model Reassembly (DeRy) [1], as you pointed out. This comparison covers aspects such as problem definition, methodology, and effectiveness. It is evident that MDA differs significantly from DeRy and offers clear advantages. For instance, MDA disassembles models into task-aware components that handle different subtasks vertically. These disassembled submodels can be directly used for inference, are interpretable, and can be assembled to perform various tasks.
Moreover, compared to other related work, MDA has several distinctive features. For example, there is no need for predefined subcomponents during the training of the original model, and no additional learning modules are required during disassembly and reassembly. In summary, this work represents the first approach to model disassembly and reassembly related to subtasks, allowing for the flexible creation of new models in a manner akin to assembling with building blocks.
| | DeRy [1] | MDA [This Paper] |
| --- | --- | --- |
| Problem Definition | DeRy aims to partition different layers of models into equivalent sets and then reassemble layers from these sets. | MDA is the first to aim at disassembling a model into different task-aware components, with each component corresponding to one or more specific sub-tasks, which can be assembled to solve a larger task comprising these sub-tasks. |
| Model Disassembly Method | DeRy uses covering set optimization to partition different layers of models into equivalent sets. To maintain these sets, each time a new model is partitioned, it requires calculating the "functional similarity" for relevant layers across all models, increasing unnecessary computational overhead. | MDA disassembles the model based on proposed contribution aggregation and allocation, independent of other models and solely related to the sub-tasks themselves. This direct approach significantly enhances disassembly efficiency and practicality. |
| Effectiveness of Disassembled Models | The parameters generated by DeRy's method, which partitions models by layers, lack practical meaning, resembling a black box that cannot be understood or used directly for inference. | MDA disassembles the model according to the inference paths of different sub-tasks, making the model more transparent and interpretable. More importantly, the task-aware components disassembled can be used directly for inference without any training. |
| Model Reassembly Method | DeRy reassembles models by solving integer programming to stack layers from different models horizontally. If the dimensions of two consecutive layers do not match, additional parameters for concatenation layers are introduced, necessitating retraining. | MDA reassembles models by vertically assembling different decision paths and uses a parameter scaling strategy that requires no training to mitigate sub-task interference. This approach usually yields excellent results and maintains interpretability during the assembly phase. |
| Effectiveness of Reassembled Models | DeRy can only partition and reassemble models for the same task. For example, different models for ImageNet classification can be partitioned by layers and reassembled, but the new model can still only be used for ImageNet classification. | MDA can assemble models used for different sub-tasks. The reassembled model can solve new, previously unseen larger tasks comprising these sub-tasks. |
**Self-constructed Specific Tasks**: Here is our response to your concern:
1. This work is the first to propose sub-task-based deep model disassembly and assembly (Section 2). Therefore, there are currently no standard datasets or tasks, nor comparable settings from previous works.
2. The datasets used to evaluate the final model performance are publicly recognized, and the evaluation process is standard. In addition to the commonly used model accuracy, we provide more detailed experimental results, such as parameter counts and FLOPs of disassembled models (Appendix G) and more granular baseline effects before model assembly (Appendix H).
We understand your concerns and hope that the above response addresses your questions. Designing a comprehensive benchmark for model disassembly and reassembly will be one of our priorities in future work.
---
Rebuttal 3:
Title: Rebuttal by Authors [Q3&Q4]
Comment: > Q3: *Given the widespread adoption of transformers in the field, the absence of experiments involving transformers is unfortunate.*
>
We understand your concern. In fact, the proposed MDA, such as contribution aggregation and allocation, can be applied to transformers, as discussed in our response to your first comment (Q1). We conducted a preliminary model disassembly experiment using ViT-Tiny on ImageNet-10, and the results are shown in Table 1.
Table 1: Disassembly Performance of ViT-Tiny on ImageNet-10
| Disassembled Task | Base(%) | Disa(%) |
| --- | --- | --- |
| 0-1 | 87.70 | 89.25 (+1.55) |
| 1-3 | 75.10 | 78.10 (+3.00) |
| 3-9 | 77.51 | 79.29 (+1.78) |
As shown in Table 1, the disassembled model can still be used for inference, and its accuracy is higher than that of the original model.
Furthermore, we tested the model assembly performance of ViT-Tiny on CIFAR-10 and CIFAR-100. However, we must acknowledge that the results were not satisfactory. For example, when assembling the disassembled CIFAR-10 (0-3) and CIFAR-100 (11-17) models, the accuracy without fine-tuning was only 13.27%. As Reviewer gfoc mentioned in their final comment, task interference occurs when sub-models from different source models are assembled together, and this issue is particularly severe due to the self-attention mechanism unique to transformers. We believe that adjusting parameter distribution from different sub-model parameter spaces before assembly is a potential solution to this problem, which will be a focus of our future research.
> Q4: *Could you provide an intuitive explanation for why your method sometimes outperforms the baseline, given that the baseline is end-to-end trained for the classification task?*
>
That's an excellent question. The superior performance of the proposed MDA method over the base model can be attributed to two main factors:
1. The source model, from which the disassembled components are derived, can be considered as trained on additional data relative to the target task. This means it has learned more features and possesses better feature extraction capabilities.
2. More critically, during model disassembly, our proposed method extracts only the parameters relevant to the subtasks, discarding those unrelated. This means that during inference, there is no interference from parameters associated with other subtasks, leading to higher inference accuracy.
The performance improvement demonstrates that disassembling and identifying subtask-related model parameters is both feasible and effective. Besides the accuracy improvement, the experiments in our paper also show that our method accurately identifies subtask-related parameters and effectively removes irrelevant ones (as detailed in Table 5 of Appendix G).
---
Rebuttal 4:
Title: Willing to Provide Further Clarifications
Comment: Dear Reviewer,
Thank you for your valuable comments and feedback. We have thoroughly addressed the questions you raised in our detailed responses. If you have any further questions or concerns, we would be more than happy to provide additional clarifications. We believe this would be greatly beneficial in refining and improving our work.
Sincerely,
The Authors of Model Lego
---
Rebuttal Comment 4.1:
Comment: Thank you for your detailed response.
Although NeurIPS rules state that "Reviewers are not required to take comments into consideration," I reviewed all your comments. However, I found them not satisfactory.
Specifically, the main issues with your response are as follows:
The response was *overwhelmingly long*, exceeding the NeurIPS rebuttal limit of 6000 characters. This length made it difficult for me to grasp the key points and conclusions in your response.
The overall content seemed somewhat vague, with extensive explanations of what your method *might* achieve. However, an experimental result is worth more than a thousand words. For instance, you spent considerable effort discussing whether your method could be applied to other tasks like generation or self-supervised learning, but the explanations remained highly theoretical and philosophical, without supporting results. Moreover, I was hoping for more systematic comparisons with other methods. I expected to see how your method outperforms others in a fair setting, demonstrating clear advantages or unique capabilities, but such results were absent.
Additionally, your experiments applying the method to transformers further demonstrated that the generalizability of the approach is quite limited (even in a very toy setting, from my perspective). This reinforces the idea that philosophical explanations are far from sufficient to validate your claims. Practical experiments reveal that applying this method to other tasks or models is not as straightforward as suggested. So I would also expect similar difficulties in applying your method to other tasks (for example, masked image modeling or next-token prediction in self-supervised learning).
Given the above points, my concerns are not addressed with the author's response, and I will be maintaining my original rating.
---
Rebuttal 5:
Comment: Dear Reviewer,
Thank you for your constructive suggestions.
We apologize for exceeding the word limit in our previous response. Here, we summarize our replies to your four comments on weaknesses and questions (denoted as Q1 to Q4) in the most concise manner:
- **Limitation of Subtask Definitions (Q1)**: This work introduces model disassembly and reassembly related to subtasks (Section 2). Besides classification, it is evident that subtasks can be defined in image detection, segmentation, generation, and subtasks related to factual knowledge in natural language.
- **Complexity and Generalization of the Method (Q1)**: The proposed approach is not coupled with the CNN architecture. This paradigm can be applied to a broader range of deep neural networks, such as graph neural networks (Section 5.3 in the paper).
- **Applicability of Self-Supervised Learning (Q1)**: Regarding self-supervised learning, such as masked image modeling, the pre-trained model obtained is used for feature learning and is not tied to a specific task. Thus, only subtasks related to feature learning can be defined and utilized for model compression (as discussed in Appendix J.2). However, for downstream task models based on self-supervised learning, our method is equally applicable as it is to standard models.
- **Systematic Comparison with Other Methods (Q2)**: This work is the first to propose model disassembly and assembly related to subtasks, significantly differing from existing work. For example, previous work does not even produce disassembled sub-models that can be used directly for inference. Therefore, fair quantitative comparisons are not feasible. Nevertheless, we provide detailed comparisons across problem definition, methods, and effects, highlighting the advantages of our approach.
- **Specific Tasks of Self-Constructed Models (Q2)**: As the first work on model disassembly and assembly, there are no established experimental settings to follow. However, our experimental environment is fair, using publicly available datasets, with detailed experimental setups provided. In addition to model accuracy, we have also provided experimental data on model parameters and FLOPs (Appendix G), as well as more fine-grained baselines (Appendix H).
- **Experimental Results on Transformers (Q3)**: We have supplemented experimental results on Transformers, demonstrating that disassembled sub-models perform well on different subtasks. However, due to the unique nature of self-attention mechanisms, there can be interference between subtasks when assembling these sub-models, leading to unsatisfactory performance without further fine-tuning. We have provided explanations and potential solutions for this issue. Along with model disassembly and reassembly on large language models, this will be a direction for our future work.
- **Reasons for Performance Exceeding Baselines (Q4)**: The primary reason is that the original model, from which the components are disassembled, has been trained on more extensive data compared to the baseline models for the target tasks. Furthermore, only parameters relevant to subtasks are included in the disassembled model, reducing interference from other task parameters and thus leading to more accurate predictions.
The detailed responses to these points can be found in the rebuttal above. Once again, we appreciate your thorough review and valuable suggestions.
Best regards,
The Model LEGO Authors | Summary: The paper introduces Model Disassembling and Assembling (MDA), a novel method inspired by the biological visual system to create new deep learning models without retraining. By disassembling pretrained CNNs into task-aware components and reassembling them, the approach maintains performance while enabling efficient model reuse. Experiments show that the reassembled models match or exceed the original models' performance, highlighting MDA's potential for applications like model compression and knowledge distillation.
Strengths: 1. The proposed method allows for arbitrary assembly of new models from task-aware components, similar to building with LEGO blocks, which is novel and interesting.
2. MDA creates new models without requiring extensive retraining, saving significant computational resources.
3. The experimental results are impressive. The reassembled models can match or even surpass the performance of the original models.
Weaknesses: 1. The computation of contributions and component locating may introduce additional time and computational cost, especially when the number of classes or tasks is large. It is recommended to add an analysis of the complexity of this method.
2. Is this method applicable to transformer models and large models?
3. In some cases, the performance of MDA can even outperform the base model; why?
4. Some typos in writing should be corrected. For example, in Section 5.2.1, the quotes `’sth’` are all right quotes.
Technical Quality: 4
Clarity: 3
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: The authors have adequately discussed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed review and comments on our work. We are pleased that you found our work to be novel and interesting, and that you found the experimental results impressive. We are also delighted that you pointed out how our method allows for the arbitrary creation of new models, similar to playing with Lego bricks, significantly saving computational resources. Below are our responses to each of your comments (your comments are highlighted in italics).
> Q1: *The computation of contributions and component locating may introduce additional time and computational cost, especially when the number of classes or tasks is large. It is recommended to add an analysis of the complexity of this method.*
>
We greatly appreciate your concern regarding this issue. The proposed method indeed introduces additional computational costs. Specifically, let $K$ represent the number of classes or tasks, and $d$ represent the total dimension of features in the network. The complexity of our method consists of the following components:
- Contribution Aggregation and Allocation: The computational complexity for this part is $O(K \cdot d)$.
- Component Locating: The computational complexity for this part is $O(K \cdot d)$.
Although the proposed method introduces the aforementioned computational costs, these steps are essential for accurately identifying the parameters relevant to the subtasks. Moreover, compared to training a model, the time overhead required for these steps is almost negligible.
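As a rough illustration of why component locating scales in proportion to $K \cdot d$, here is a hypothetical sketch of per-class top-$k$ selection over a precomputed $(K, d)$ contribution matrix (illustrative code, not the paper's implementation):

```python
import numpy as np

def locate_components(contrib, keep_ratio=0.3):
    """Given a (K, d) contribution matrix, pick for each class the
    top-scoring feature indices. Using np.argpartition (linear-time
    selection per row) keeps the overall cost proportional to K * d.
    """
    K, d = contrib.shape
    k = max(1, int(d * keep_ratio))
    return [np.argpartition(contrib[c], -k)[-k:] for c in range(K)]

rng = np.random.default_rng(2)
contrib = rng.random((3, 10))   # K = 3 classes, d = 10 features
selected = locate_components(contrib)
assert len(selected) == 3
```

Each class requires only a single pass over its $d$ contribution scores, so the total work grows linearly in both the number of classes and the feature dimension.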
> Q2: *Is this method applicable to transformer models and large models?*
>
Thank you for your question. Here is our response:
**Applicability to Transformer Models**: The proposed MDA method, including the underlying concepts of contribution aggregation and allocation, is indeed applicable to Transformer models. The method is not tightly coupled with CNNs and can be applied to linear layers and other structures beyond convolutional layers (Appendix C), making it a general paradigm for model disassembly and reassembly tasks. This is demonstrated in the experiments presented in Section 5.3 of the paper. Additionally, we conducted a preliminary model disassembly experiment using ViT-Tiny on ImageNet-10, and the results are shown in Table 1.
Table 1: Disassembly performance of ViT-Tiny on ImageNet-10.
| Disassembled Task | Base(%) | Disa(%) |
| --- | --- | --- |
| 0-1 | 87.70 | 89.25 (+1.55) |
| 1-3 | 75.10 | 78.10 (+3.00) |
| 3-9 | 77.51 | 79.29 (+1.78) |
As shown in Table 1, the disassembled model remains functional for inference, with accuracy surpassing that of the original model.
Additionally, we tested the assembly performance of ViT-Tiny on CIFAR-10 and CIFAR-100. However, the results were not satisfactory. For example, when assembling disassembled CIFAR-10(0-3) and CIFAR-100(11-17) models, the accuracy without fine-tuning was only 13.27%. The primary reason for this is the task interference that occurs when submodels from different source models are assembled, a phenomenon that is particularly severe due to the self-attention mechanism inherent in Transformers. We believe that adjusting parameter distribution from different submodel parameter spaces before assembly could be a potential solution to this issue, which will be a focus of our future work.
**Applicability to Large Models**: Applying MDA to large models, such as GPT models based on decoder-only Transformers, is feasible and is a current and future focus of our research. For large models in tasks like natural language generation or question answering, it is crucial to clearly define subtasks. Our current approach involves defining different factual knowledge (e.g., "The Eiffel Tower is located in Paris" and "The first Olympic Games were held in Athens") as subtasks. Architecturally, the primary parameter layers in Transformers are linear layers, and our MDA method's contribution aggregation and allocation are applicable to these layers. The main challenge lies in addressing the task interference issue mentioned earlier.
In conclusion, we fully agree on the importance of the question you raised. We believe that the proposed MDA method can serve as a general paradigm applicable to various neural networks, and advancing the disassembly and reassembly of models on Transformers and large Transformer-based models is a key focus of our ongoing and future research efforts.
> Q3: *In some cases, the performance of MDA can even outperform the base model; why?*
>
This is an excellent question. The superior performance of the proposed MDA method over the base model can be attributed to two main factors:
1. The source model, from which the disassembled components are derived, can be considered as trained on additional data relative to the target task. This means it has learned more features and possesses better feature extraction capabilities.
2. More critically, during model disassembly, our proposed method extracts only the parameters relevant to the subtasks, discarding those unrelated. This means that during inference, there is no interference from parameters associated with other subtasks, leading to higher inference accuracy.
The performance improvement demonstrates that disassembling and identifying subtask-related model parameters is both feasible and effective. Besides the accuracy improvement, the experiments in our paper also show that our method accurately identifies subtask-related parameters and effectively removes irrelevant ones (as detailed in Table 5 of Appendix G).
> Q4: *Some typos in writing should be corrected. For example, in Section 5.2.1, the quotes `’sth’` are all right quotes.*
>
Thanks for pointing it out. We have corrected the typos in the paper and thoroughly reviewed the entire document. | Rebuttal 1:
Rebuttal: Dear Reviewers FhUd, UTXG, gfoc, and 17d9,
Thank you for the time and effort you have invested in reviewing our paper. We are particularly grateful for your recognition of the novelty and insight of our work and are pleased that our proposed model disassembly and reassembly approach has been deemed inspiring for model interpretability and model design.
Through our detailed responses to each of your questions and comments, we believe that the paper has been significantly improved. We have compiled the common concerns raised by the reviewers and have addressed each comment individually. If you find that our responses satisfactorily address your concerns, we would be grateful if you could consider increasing the score. If you have any further questions, we would be more than happy to engage in additional discussions.
> Reviewers FhUd, UTXG, gfoc, and 17d9 all raised questions about why the performance of MDA can even exceed that of the base model.
>
We apologize for any confusion caused. We have addressed this question in detail in the responses to each reviewer’s comments. In summary, the performance improvement of MDA can be attributed to two main factors: 1.The original models from which the components are disassembled can be considered as having been trained on additional data compared to the baseline model for the target tasks. 2.The disassembled submodels contain only the parameters relevant to the corresponding subtasks, which helps avoid interference from parameters related to other subtasks.
> Reviewers FhUd, UTXG, and 17d9 all raised questions about the applicability of MDA beyond convolutional neural networks.
>
We have addressed this question in detail in our responses to each reviewer. In summary, we have explained the potential applications of MDA methods to other architectures and tasks, such as classification, segmentation, and large language models. Additionally, we have provided supplementary experiments in the paper and responses demonstrating the effectiveness of MDA in architectures like graph convolutional networks and Transformers.
While model disassembly is applicable to networks such as Transformers, we acknowledge that task interference can occur during model assembly in Transformer architectures. We have provided explanations for this phenomenon and potential solutions. We will continue to explore the applications of model disassembly and assembly in deep learning, with a focus on Transformer models and Transformer-based large language models as a key area of ongoing and future research.
> Reviewer UTXG has focused on issues related to the definition of subtasks in model disassembly and reassembly, the generalizability of the method, the applicability of self-supervised learning, and a systematic comparison with related methods.
>
Thank you for your comments. We have provided detailed responses to each of these issues under the respective comments. These include explanations on the definition of subtasks related to images and non-images, an overview of the proposed method and its generalizability, how model disassembly and reassembly can be applied to self-supervised learning, and a systematic comparison with other methods. We believe these detailed explanations will help reduce any misunderstandings about our paper.
> Reviewer gfoc emphasized the issue of category or subtask interference during model assembly in several comments.
>
Thank you for your suggestions. We have provided detailed explanations under the respective comments regarding the causes and consequences of interference between classes or subtasks, the specific measures we have taken to mitigate this interference, and potential future approaches to further address it. We believe these responses will effectively address your concerns.
Beyond the issues mentioned above, more detailed responses to each reviewer’s comments can be found below their respective sections. We sincerely thank the reviewers for their time and valuable feedback, which have greatly contributed to the improvement of the paper. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Nonlinear dynamics of localization in neural receptive fields | Accept (spotlight) | Summary: This paper investigates when localized receptive fields arise in supervised neural networks. Extending a recent work of Ingrosso and Goldt, the authors propose that simple single-neuron models learn localized receptive fields when trained on data with sufficiently negative excess kurtosis, while if the excess kurtosis is sufficiently positive they learn delocalized receptive fields.
Strengths: The topic of this paper is of broad interest in both machine learning and neuroscience, and on the whole I think this manuscript makes a worthy contribution on top of the work of Ingrosso and Goldt. There are some weaknesses which dampen my enthusiasm (see below), but on the whole I favor acceptance.
Weaknesses: I have two primary concerns about the results presented:
- First, Lemma 3.1's treatment of time is not sufficiently precise. Can you provide a more precise answer than simply "early in training" or "before A.3 is violated"? Indeed, the logic in the paragraph beginning on Line 174 is not clear. What you show is that, in some cases (see concern below), eq. (5) generates localized RFs in a similar location to those observed in actual training. This does not necessarily mean that "the gradient flow in Eq. (5) holds sufficiently long to detect the emergence of localization in the weights," as you write in Lines 177-178. Moreover, you have not in fact defined what you mean by "detect the emergence of localization;" this must be reified. Can you see a change in participation ratio, even if only numerically, before the approximation breaks down?
- Second, the experiments are rather limited, and rely largely on exemplars rather than systematic statistical investigation. This is important given the gap in Lemma 3.1: the authors rely on experiments to justify their claim that this approximation provides meaningful information about when localization will occur, but all they actually show is that there is resemblance in a few cases. The paper would be much stronger if the authors could also show that their claims hold statistically over many realizations of the data generation process and training procedure.
Technical Quality: 2
Clarity: 3
Questions for Authors: - A neuroscientific quibble: in the first sentence of the Introduction (Line 19), one need not restrict attention to the "mammalian nervous system." There are numerous examples of localized receptive fields in non-mammalian species; see for instance the beautiful works of Knudsen & Konishi on audition in owls.
- The authors might consider citing Sengupta et al., "Manifold-tiling Localized Receptive Fields are Optimal in Similarity-preserving Neural Networks" (NeurIPS 2018) in their discussion of unsupervised learning algorithms that give localized receptive fields.
- Another potentially relevant reference is Shinn, "Phantom oscillations in principal component analysis" (PNAS 2023).
- The reference in Footnote 3 is wrong; it should be to Appendix C.2 not C.3.
- In Line 187, there is a small typo: "termdepends" -> "term depends"
- Using a perceptually uniform non-grayscale colormap to represent time might improve the legibility of the plots relative to the grayscale used in the submitted manuscript.
- The final paragraph of the conclusion is not tied to anything that came before; if you care about orientation selectivity you should mention & measure it in earlier portions of the paper. Otherwise, this should be omitted.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: The authors do a largely adequate job of discussing the limitations of their work, up to the technical weaknesses noted above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their extremely helpful feedback, in
particular for making us aware of relevant work and clarifying minor
misconceptions. We also appreciate your insistence that we more
comprehensively quantify and replicate our analyses.
> **Lemma 3.1's treatment of time is not sufficiently precise.**
We thank the reviewer for this criticism. Indeed, lemma 3.1 does not
provide a precise criterion for *when* the approximation breaks down.
This is because the only assumption that is violated as a result of
localization, A3, does not break down at any predictable $t$. To make
this claim clearer, we have included Fig. 2 & 3 in
the Global Response. We see across examples that the analytical model
closely tracks the empirical one until a strongly localized bump emerges
in the weights; when no localization emerges, the analytical model
maintains its precision through training. Please see the figure and its
caption for more detailed information.
We would happily incorporate suggestions from the reviewer on how best to represent or quantify this notion of time.
> **the experiments are rather limited, and rely largely on exemplars
rather than systematic statistical investigation**
We thank the reviewer especially for stating this limitation. To address
this, as well as other reviewers' concerns about the precision of our
model and analysis, we have included Fig. 1 in the Global Response. This figure shows
the IPR against the excess kurtosis for various values of $g$ and $k$,
respectively, for $\texttt{NLGP}$ and $\texttt{Kur}$.
> **one need not restrict attention to the "mammalian nervous
system"**
Thank you for the reference; Knudsen & Konishi (1978) is indeed
relevant. We will broaden our terminology from "the mammalian nervous
system" to "animal nervous systems".
Knudsen & Konishi (1978). Center-surround organization of auditory
receptive fields in the owl. DOI:10.1126/science.
> **consider citing Sengupta et al.**
Thank you for the reference; we will add Sengupta et al. (2018) to the
list of unsupervised learning algorithms producing localized receptive
fields. As far as we understand, this algorithm does not optimize an
explicit or implicit sparsity criterion as in sparse coding, so it is
also an example of an alternative model for the emergence of
localization. [Har+20], cited in the submission, is another such
example; we will incorporate both of these references via a discussion
of their modelling assumptions relative to those we employ.
> **The final paragraph of the conclusion [on orientation
selectivity] is not tied to anything that came before ...**
We are asked by NeurIPS to address the limitations of our work, and
since we display oriented receptive fields in Figure 1 (left, center) we
would like to call attention to the fact that we cannot yet capture this
feature in our analytical model, and neither do the simulated receptive
fields appear oriented. To improve on this point, we will introduce the
terminology of oriented receptive fields when referring to Figure 1
(left, center) in the introduction so that this terminology does not
spring out of nowhere in the conclusion, as the reviewer has pointed
out. Our analytical and simulated receptive fields are not oriented to
any degree, so we do not think it is necessary to explicitly measure
this property.
> **typo, corrections**
> **us[e] a perceptually uniform non-grayscale colormap**
We particularly appreciate the reviewer's thorough examination of our
work and their identification of typographical errors. A compiled list
of the typographical corrections we plan to implement is provided in the
global response. We also intend to reformat our plots to use a
non-grayscale colormap to improve legibility, which another reviewer
encouraged as well. Please see Fig. 4 in the attached PDF for an example.
---
Rebuttal Comment 1.1:
Title: Thanks for the reply!
Comment: Thank you for your thorough reply to my comment. I think you've adequately addressed my comments and those of the other referees, so I will raise my score as I think this paper should be accepted. | Summary: SETTING: Olshausen & Field famously showed that requiring natural image patches to be constructed from a small number of independent elements from an overcomplete dictionary populates that dictionary with spatially localized feature detectors. But localization also appears in DNNs trained on discriminative-learning tasks. It is also ubiquitous in the cortex. This raises the question of what fundamentally drives the formation of localized receptive fields.
APPROACH: The MS attempts to explain the emergence of localized RFs by reference to the statistics of the stimuli (inputs). In particular, the authors examine the time evolution of network weights trained with gradient descent to discriminate "natural-image-like" stimuli--analytically for a single (ReLU) unit of a neural network, and in simulation for a weighted sum of multiple such units (a two-layer NN with one output unit).
More precisely, the authors consider stimuli/inputs with circulant covariance matrices, a 1D idealization of natural images (the covariance matrices of natural images are block-circulant with circulant blocks). Under some additional simplifying assumptions (see below), they derive an ODE for the time evolution of the weights when the network is trained to discriminate stimuli according to whether their spatial correlations are "long" or "short." They show that this ODE depends only on the marginal distribution of a single "pixel" rather than the joint (and is valid until localization begins). Investigation of this ODE (Appendix C.1) reveals that localization will occur (w large for some entries, small for the rest) when the inputs have (sufficiently) negative excess kurtosis. The authors also demonstrate this in simulation with data distributions with varying levels of excess kurtosis, both by examining the learned receptive fields and by numerically integrating the ODE. Finally, they extend their simulations to two-layer NNs.
Strengths: The MS persuasively presents a new route to localized receptive fields. The proofs look correct (but I did not verify every line) and the simulations confirm that they are not undermined by the various approximations employed along the way. The problem is an important one and of long-standing interest to the field (computational neuroscience).
Weaknesses: This reviewer struggles to see how these results can be generalized beyond this very simplified setting of, essentially, a "network" Y = ReLU(<w, X>) (or its cousin, the SCM). Introducing just one more learnable layer breaks (or at least vitiates) the connection between stimulus kurtosis and localized receptive fields (as the authors show in Fig. 6).
This makes it hard to find this explanation of RF localization more compelling than the classical one in terms of efficient codes.
Technical Quality: 4
Clarity: 3
Questions for Authors: --One of the chief motivations for the study is that even DNNs trained on supervised-learning tasks acquire localized RFs. In support of this claim they cite the AlexNet paper and some papers on visualizations of convnets. Obviously the convolutions in these networks enforce spatial localization. Do the authors have in mind here localized (bandpass) *frequency* RFs?
--Is *negative* excess kurtosis consistent with the statistics of natural scenes?
--Can the authors provide more intuition about assumptions A1 and A2, and how restrictive they are?
--In the proof, l. 634, shouldn't it be P(<w,X> > 0) = 1/2? (There is no > 0 in the expression in the MS. Am I misunderstanding this?)
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their excellent feedback. We especially
appreciated your precise and astute questions with regard to the
validity and generalizability of our model, and also for reading closely
enough to identify a typo in our proof.
> **how these results can be generalized beyond this very simplified
setting ... one more learnable layer breaks (or at least vitiates) the
connection between stimulus kurtosis and localized receptive fields**
The reviewer correctly points out a key limitation of this work:
extensions to more complicated models quickly lead to difficulties in
extending our analytical approach and its interpretation. Indeed, we
observe that experiments with two-layer networks are noisier (as the
reviewer pointed out in referencing Figure 6), as did [IG22].
However, we consider it worth mentioning that some aspects of our
analytical approach seem to extend to the two-learnable-layer regime,
though we reserve this as an avenue for future work. As an example,
using the approach we have developed, consider the steady-state equation
for a single-neuron model with two learnable layers, $\mathbf{w}_1$ and
$w_2$, after plugging in for $w_2$:
$$\varphi\left( \frac{\Sigma_1 \mathbf{w}_1}{\sqrt{\langle \Sigma_1 \mathbf{w}_1, \mathbf{w}_1 \rangle}} \right) = \frac{1}{\sqrt{2 \pi}} \frac{(\Sigma_0 + \Sigma_1) \mathbf{w}_1}{\sqrt{\langle (\Sigma_0 + \Sigma_1) \mathbf{w}_1, \mathbf{w}_1 \rangle}}$$
This steady state is very similar to the one implied by Eq. 5 in Lemma
3.1, the only difference being an additional scaling term in the
denominator of the right side. We have found that manipulating this term
does not substantially change whether a given $\varphi$ yields localized
receptive fields, but this observation is still preliminary. However,
dynamics play a very important role in the precise structure obtained at
convergence, something we are exploring for potential future work. As
the reviewer points out, adding another learnable layer obfuscates the
relation between stimulus kurtosis and localization because the
second-layer weight can be pulled into ReLU term and viewed as a
rescaling of the stimulus. A thorough analysis will likely not be as
simple as just studying $\varphi$ and will likely require a careful
analysis of initializations, but we expect that general strategies and
ideas from this work will carry over into future ones.
> **hard to find this explanation of RF localization more compelling
than the classical one in terms of efficient codes**
We would like to emphasize that we don't wish to claim that our approach
is much more compelling than the sparse coding work of
[OF96]. Instead, we hope to present it as an
alternative bottom-up, learning model that should be investigated
further because of its ability to reproduce key qualitative phenomena of
visual receptive fields with an alternative set of assumptions that some
may view as more minimal (due to lack of sparsity regularization).
> **the convolutions in these networks [such as AlexNet] enforce
spatial localization**
The kernel size of the first-layer convolutional kernels is indeed
usually much smaller than the size of the input image, thus enforcing a
degree of spatial localization. AlexNet, for example, receives an input
image of size $224 \times 224$ and uses an $11 \times 11$ kernel in the
first layer. Here, we mean to refer to
the further localization that emerges within these convolutional kernels
within that kernel bandwidth; i.e., the oriented receptive fields
visible in Figure 1 have width in the edge direction much smaller than
11.
> **Is negative excess kurtosis consistent with the statistics of
natural scenes?**
While we focus on analyzing the model of emergent localized receptive
fields of [IG22], this is a great question about broader
relevance. We can provide some context in the case of simple cells in
primary visual cortex (V1), of which sparse coding is classically taken
as a model [OF97]. Retinal ganglion cells prior to V1
are understood to perform edge detection. Edge detection naturally
induces concentrated marginals, corresponding to positive and negative
edges, with a large amount of mass at zero, corresponding to no edge. Such
distributions typically have negative excess kurtosis (unless a very
large amount of mass is at zero, corresponding to a nearly uniform
input).
> **Can the authors provide more intuition about assumptions A1 and A2,
and how restrictive they are?**
We thank the reviewer for this question. We attempted to address this in
lines 160-163 of the submission, which we expand on here.
A1 and A2 are motivated by the limiting cases of the $\texttt{NLGP}(g)$
data model from [IG22]: $g \to 0$ (no localization) and
$g \to \infty$ (localization). A1 is implied by the weaker assumption
$\mathbb{E}[X_j \mid X_i = x_i, Y = y] \propto x_i$ after applying S3.
A2 captures that when conditioning on $X_i$, nearby entries in
$\mathbf{X}$ are almost deterministic, while distant ones are
unaffected. The $i$-th row and column of
$\operatorname{Cov}[X_j \mid X_i = x_i, Y = y]$ are zero. Local
dependence implies entries $(k,j)$ of the conditional covariance for
$k,j$ near $i$ should be small, while distant ones remain unchanged.
Weak dependence (S1) implies $\sigma_i^y$ is largest near $i$ and zero
elsewhere. Subtracting $\sigma_i^y {\sigma_i^y}^\top$ from $\Sigma_y$
expresses these intuitions, supported by $\texttt{NLGP}$ limiting cases.
As such we consider these assumptions to be relatively loose. Fig. 1 of
the Global Response, which shows a sharp uptick in IPR as excess
kurtosis becomes negative, supports this.
> **... l. 634 shouldn't it be P(<w,X> > 0) = 1/2?**
Yes; thank you for the correction.
---
Rebuttal Comment 1.1:
Comment: Thanks for answering my questions. I stand by my high rating. | Summary: The authors analytically derive the learning dynamics of (extremely) simple neural network models and characterize conditions under which units learn localized receptive fields. This builds on celebrated work in computational neuroscience on sparse coding and independent components analysis, offering a new perspective for biological findings.
Strengths: The paper is beautifully written (but see comments about figures below). It is well-motivated and accessible to a broad audience. The target audience for this kind of work may be somewhat niche, but it fits within the scope of the NeurIPS conference. The authors summarize prior work concisely and clearly. The assumptions of the analysis are stated precisely. The proofs in the appendix are well written, and the main claims of the theory are tested experimentally (though see below for some areas I am a bit unclear on).
Weaknesses: To enable analytic tractability, the paper relies on very strong and simplifying assumptions. Figures 4, 5, 6, and the right hand side of Figure 3 are hard to see. It could be helpful to add color and show fewer lines. Also, the description of these figures is the one part of the text I had trouble following. Quantifying the outcomes somehow would be helpful for me to understand exactly what I'm supposed to be looking at in these examples. For instance, in Figure 4A, the claim is that the model learns an oscillatory filter that resembles a sinusoid. But the left panel in Fig 4A, though oscillatory, doesn't look sinusoidal?
Ultimately I believe these weaknesses can be addressed during the rebuttal phase and the strengths outweigh the weaknesses.
Technical Quality: 4
Clarity: 3
Questions for Authors: - In addition to showing examples in Figures 4, 5, and 6, can you somehow quantify localization?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Limiting assumptions are clearly stated and discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their well-formulated concerns and
suggestions, especially with regard to improving the interpretability
and readability of our work.
> **Figures 4, 5, 6, and the right hand side of Figure 3 are hard to
see. It could be helpful to add color and show fewer lines.**
We thank the reviewer for the suggestion to improve legibility of the
receptive field evolution. In the Global Response, we have plotted
receptive fields with a non-grayscale (blue-red) colormap to improve
legibility, which another reviewer suggested as well.
> **the description of these figures is the one part of the text I had
trouble following.**
> **Quantifying the outcomes somehow would be helpful for me to
understand exactly what I'm supposed to be looking at in these
examples.**
> **in addition to ... Figures 4, 5, and 6, can you
somehow quantify localization?**
We thank the reviewer for the suggestion to quantify the localization
phenomenon. In Fig. 1--3 of the Global Response, we quantify
localization with the inverse participation ratio (IPR), defined as
$\operatorname{IPR}(\mathbf{w})=\left(\sum_{i=1}^D w_i^4\right)/\left(\sum_{i=1}^D w_i^2\right)^2$,
where $w_i$ is the magnitude of dimension $i$ of weight $\mathbf{w}$.
This measure, also used by [IG22], is large when
proportionally few weight dimensions "participate" (have large
magnitude), and small when weight dimension magnitudes are more uniform.
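For concreteness, a minimal numpy sketch of the IPR computation (the example weight vectors below are illustrative, not from our experiments):

```python
import numpy as np

def ipr(w):
    """Inverse participation ratio: (sum_i w_i^4) / (sum_i w_i^2)^2.
    Large when few dimensions carry the weight (localized), small when
    magnitudes are spread uniformly (delocalized)."""
    w = np.asarray(w, dtype=float)
    return np.sum(w**4) / np.sum(w**2) ** 2

D = 100
localized = np.zeros(D)
localized[40:44] = 1.0        # a narrow "bump" of 4 active dimensions
delocalized = np.ones(D)      # all dimensions participate equally

print(ipr(localized))    # 0.25  (= 1/4: four active dimensions)
print(ipr(delocalized))  # 0.01  (= 1/D)
```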
> **For instance, in Figure 4A, the claim is that the model learns an
oscillatory filter that resembles a sinusoid. But the left panel in Fig
4A, though oscillatory, doesn't look sinusoidal?**
In Fig. 4 of the Global Response, we quantify this claim by fitting a
sinusoid to the resulting receptive field, and find that we can do so
very well. We also rescale the receptive field to make the sinusoidal
structure of the steady state more apparent. Please see the figure
caption for more details.
---
Rebuttal Comment 1.1:
Title: Reviewer Response
Comment: Thanks for the clarifications. The sinusoids are still a bit hard to see, but the new figure is an improvement. Perhaps an "approximate" sinusoid. Overall I think this is minor; I retain my positive score. | null | null | Rebuttal 1:
Rebuttal: We thank the reviewers for their excellent feedback. In particular, we
thank each of the reviewers for providing specific, actionable questions
and concerns. We have done our best to address each of these in our
responses.
### Minor corrections to errata
We also thank the reviewers for reading our manuscript very closely and
identifying typos. Those corrections, along with additional ones we
identified upon our own re-reading, are listed below:
1. Line 163: "does and not does" $\to$ "does and does not"
2. Line 171: "$o(1)$" $\to$ "$o_N(1)$", to clarify dependence
   w.r.t. dimension $N$ and not time $t$
3. Line 187: "termdepends" $\to$ "term depends"
4. Line 238: "steady" $\to$ "steady state"
5. Line 634:
   $\mathbb{P}(\langle \mathbf{w}, \mathbf{X} \rangle) = \frac{1}{2}$
   $\to$
   $\mathbb{P}(\langle \mathbf{w}, \mathbf{X} \rangle > 0) = \frac{1}{2}$.
6. Line 649: "blow up" $\to$ "dominate"
### On extension to 2D
We would also like to clarify a potential misconception about our work. Our
analysis is presented in terms of $N$-dimensional inputs (i.e.,
$\mathbf{X} \in \mathbb{R}^N$), which are most naturally interpreted as
1D images. However, our analysis extends naturally to 2D images as well
through (un)vectorization, just as is done experimentally in
[IG22]. To do this, we construct $N^2$-dimensional signals
with a special covariance given by
$\Sigma_y = \tilde{\Sigma}^{a}_y \otimes \tilde{\Sigma}^{b}_y \in \mathbb{R}^{N^2 \times N^2}$,
where
$\tilde{\Sigma}^{a}_y, \tilde{\Sigma}^{b}_y \in \mathbb{R}^{N \times N}$
are covariances along the two axes of the 2D image. $N^2$-dimensional
data sampled from such a distribution can be turned into a 2D image by
inverting the vectorization operation:
$\tilde{\mathbf{X}} \equiv \operatorname{vec}^{-1}(\mathbf{X}) \in \mathbb{R}^{N \times N}$,
which puts the first $N$ entries of $\mathbf{X}$ in the first column of
$\tilde{\mathbf{X}}$, the next $N$ entries of $\mathbf{X}$ in the second
column of $\tilde{\mathbf{X}}$, and so on. This is the procedure we used
to generate 2D receptive fields in Figure 1 (Right) in the manuscript for the SCM.
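A minimal numpy sketch of this (un)vectorization construction; the circulant kernel, its scales, and the image size here are our own illustrative choices, not the paper's actual data model:

```python
import numpy as np

N = 8  # illustrative image side length
rng = np.random.default_rng(0)

def circulant_cov(N, scale):
    """Toy translation-invariant (circulant) 1D covariance matrix."""
    d = np.minimum(np.arange(N), N - np.arange(N))   # circular distance
    row = np.exp(-d / scale)
    idx = (np.arange(N)[:, None] - np.arange(N)) % N
    return row[idx]

# Covariances along the two axes of the 2D image.
Sigma_a = circulant_cov(N, scale=1.5)
Sigma_b = circulant_cov(N, scale=3.0)
Sigma = np.kron(Sigma_a, Sigma_b)  # N^2 x N^2 covariance of the vectorized image

# Sample an N^2-dimensional signal, then invert the vectorization:
# the first N entries become the first column, and so on (column-major order).
x = rng.multivariate_normal(np.zeros(N * N), Sigma)
img = x.reshape(N, N, order="F")
```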
We focused on the 1D case because it is, following the argument above,
the most general case, and also the most natural to consider with a
feedforward network that does not make any assumptions on intra-layer
connectivity.
Pdf: /pdf/acbc61ccc5e463e9160e9e5defe520aab3ac4349.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
3D Equivariant Pose Regression via Direct Wigner-D Harmonics Prediction | Accept (poster) | Summary: This paper proposes a method for object orientation estimation from images. The method predicts object orientation in the frequency domain of SO(3), by using SO(3) equivariant layers that operate on coefficients of Wigner-D matrices. The network is trained with a MSE loss between the predicted Wigner-D coefficients and the Wigner-D coefficients associated with a ground truth rotation label. The network is evaluated on two datasets, PASCAL3D+ and ModelNet10-SO(3), and outperforms all baselines in terms of rotation precision and accuracy. Additionally, an ablation study is performed which demonstrates that the proposed output formulation is better than traditional rotation formulations, like quaternions, euler angles and rotation matrices, and that the method is robust to the sampling strategy used at inference time.
Strengths: - The paper introduces a new way to train Wigner-D coefficients for accurate object pose prediction. They apply a weighted MSE loss between the predicted coefficients and ground truth coefficients.
- The proposed approach achieves SOTA performance on the PASCAL3D+ and ModelNet10-SO(3) datasets.
- The experimental section is thorough. Competitive baselines are included and statistical data is reported for the proposed method. The authors include a new variation on the ModelNet10-SO(3) dataset, which evaluates the sample efficiency of the method and baselines.
Weaknesses: - The contribution is interesting, but minimal compared to existing work. The neural network model and evaluation scheme used in this paper was introduced in [1]. However, this is not entirely clear the way the paper is written. For instance, Sections 4.2, 4.3, 4.4, and 4.6 are all describing specific techniques from [1]. It would be easier to understand the contributions specific to this paper's network if these sections were moved to Background.
- The second contribution of the paper, as stated, is confusing. What is meant by the "intrinsic properties of SO(3) equivariant representations"? Also, the claim "structurally guarantee 3D rotational equivariance" is not correct. It is not possible to guarantee 3D rotational equivariance of the network when the input is an image.
- The core contribution of the paper is the use of a regression loss applied to learned Wigner-D coefficients. However, this aspect of the method is not examined closely. There is no information of how the values of $w_l$ are defined in Eqn. 5, nor any experiment justifying why MSE is preferred. The contribution would be strengthened by a comparison of other sensible loss functions, an experiment that showed the sensitivity of the method to the values of $w_l$. Additionally, it is not clear how the proposed loss is affected by object symmetries. Could the authors explain why regression on Wigner-D coefficients does not suffer from inaccurate predictions when the orientation is ambiguous (e.g. tables have 180 degree symmetry in ModelNet10-SO(3))?
- The ablation experiment in Section 5.5 needs to be described in more detail. How exactly was the method modified to produce euler, quaternion, etc. outputs? Were SO(3) equivariant layers still used?
- Some of the writing could be improved for clarity. In Section 1, "Accurate determination of 3D orientation presents a more complex problem than translation due to the intricacies of rotation symmetries and the high dimensionality of pose space" is confusing: how is 3D orientation higher dimensional than 3D translation? Also, "SO(3)-equivariance is crucial for accurate pose estimation ..." is not entirely true, as evidenced by the competitiveness of non-equivariant baselines in Tables 1&2.
- Figure 1, some of the images are taken from other sources without attribution (e.g. the healpix grid is from https://healpix.sourceforge.io)
[1] Klee, David M., et al. "Image to sphere: Learning equivariant features for efficient pose prediction." arXiv preprint arXiv:2302.13926 (2023).
Technical Quality: 3
Clarity: 2
Questions for Authors: - Did you explore evaluating the model with a higher resolution grid or with gradient ascent? Given that the regression loss can be precise, the model may be able to achieve even lower rotation error with more grid points.
- The regression loss is very effective at achieving high precision, but seems to lose the ability to model object symmetries or uncertainty (according to qualitative results in Figure A2). Is there a way to combine the benefits of high precision and distribution modeling? Other pose works have used a classification-regression framework [1], and it would be interesting to see if this idea could be applied here. For instance, apply the classification loss over a HEALPix grid first, then apply the proposed regression loss if the prediction is close.
- In Appendix C.6, it seems like the frequency level is set for both the S2 and SO(3) convolutions in the network. Did you try only modifying the frequency for the output of the network? The issue with higher frequencies reducing performance may be due to the non-linearity (ReLU in spatial domain), which could be a bit unstable with high frequency signals. Some other works have used gating functions for non-linearities on Wigner D coefficients [2].
- From the introduction, "in the context of spherical CNNs, ..., the 3D rotation parametrization in the spatial domain is inadequate because these SO(3) equivariant networks operate in the frequency domain." What do the authors mean by inadequate? Clearly, conversion from frequency to spatial domain is possible (the proposed method does so in its non-linearity layers).
[1] Liao, Shuai, Efstratios Gavves, and Cees GM Snoek. "Spherical regression: Learning viewpoints, surface normals and 3d rotations on n-spheres." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2019.
[2] Liao, Yi-Lun, and Tess Smidt. "Equiformer: Equivariant graph attention transformer for 3d atomistic graphs." arXiv preprint arXiv:2206.11990 (2022).
Confidence: 5
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: The limitations are discussed in Appendix F.
What is meant by, "our method can suffer from significant errors due to the loss of spatial information when projecting 3D data onto spherical harmonics"? What 3D data? Where are these significant errors described in the experimental results or discussion?
The authors mention that, "computational cost associated with Wigner-D coefficients and SO(3) equivariant networks should be improved...";
this is a bit vague and contradicts Table A6, which shows the proposed method has the fastest frame rate during evaluation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer f5v8 for constructive comments and suggestions.
**[W1. Clarifying contributions]**
Please see our general response above
**[W2. Claims of structural guarantee of 3D rotational equivariance]**
The statement "structurally guarantee 3D rotational equivariance" is indeed misleading. While the network can use components like spherical convolutions and Wigner-D matrix predictions to maintain equivariance, perfect 3D rotational equivariance cannot be guaranteed with 2D image inputs, as they inherently lack full 3D structural information. The network approximates 3D equivariance using the spherical mapper to handle 3D rotations effectively, but absolute equivariance is not achievable with 2D data. We will clarify that the network aims to approximate 3D rotational equivariance as closely as possible given these constraints.
**[W3. symmetry, other losses, $w_l$]**
Please see our response above.
**[W4. Detailed configurations of Table 3 ablation studies]**
Table 3 in Section 5.5 demonstrates the effectiveness of our proposed Wigner-D matrix compared to other rotation representations in the spatial domain (Euler angles, Quaternion, axis-angle, rotation matrix). For all cases, we retained the backbone networks and SO(3) equivariant layers. The only modifications were the output prediction dimension size and the ground-truth rotation representation.
**[W5-1. Clarification on the complexity of 3D orientation and translation]**
We acknowledge that the original statement might be confusing. To clarify, 3D translation and orientation can be represented as follows:
* 3D translation: movement along the x, y, and z axes (3 dims).
* 3D orientation: rotations around these axes, represented by Euler angles (3 dims), quaternions (4 dims), or rotation matrices (9 dims).
We will update our manuscript to improve clarity: 3D orientation is more complex due to rotational symmetries and the non-linear nature of rotations. Unlike translation, rotations present challenges such as gimbal lock and the need for continuous representation without singularities.
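To make the singularity point concrete, here is a minimal numpy sketch (our illustration, not code from the paper) showing that in the ZYZ parametrization the angles $\alpha$ and $\gamma$ become degenerate at $\beta = 0$: only their sum is identifiable, which is the gimbal-lock behavior mentioned above.

```python
import numpy as np

def rot_z(t):
    """Rotation matrix about the z-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def rot_y(t):
    """Rotation matrix about the y-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def euler_zyz(alpha, beta, gamma):
    """ZYZ Euler-angle parametrization: R = Rz(alpha) @ Ry(beta) @ Rz(gamma)."""
    return rot_z(alpha) @ rot_y(beta) @ rot_z(gamma)

# At the singularity beta = 0, only alpha + gamma is identifiable:
R1 = euler_zyz(0.3, 0.0, 0.5)
R2 = euler_zyz(0.7, 0.0, 0.1)  # different angles, same sum
assert np.allclose(R1, R2)
```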
**[W5-2. Overstatement of SO(3) equivariance]**
The claim "SO(3)-equivariance is crucial for accurate pose estimation" might be overstated. While it can enhance performance and robustness, it is not the only way to achieve competitive results. As shown by the non-equivariant baselines in Tables 1 and 2, alternative approaches also deliver strong performance. We will revise the manuscript to reflect this nuanced perspective.
**[W6. Attribution of images in figure 1]**
We apologize for the oversight in attributing images. We will revise the figure to clearly credit all original sources.
**[Q1. Evaluation of model with higher resolution grid and gradient ascent]**
Table R2 shows the impact of changing the grid resolution $Q$ on performance. A higher-resolution SO(3) grid (18.87M points) improves performance at finer error thresholds at inference time.
Table R5 shows the evaluation results using gradient ascent on the pose distribution. While gradient ascent does provide some performance improvement, the increase in inference time outweighs these gains, so argmax is our preferred method for simplicity and fast evaluation.
**[Q2. Combining high precision and distribution modeling]**
Our method currently struggles to model object symmetries and uncertainty. However, Table R1 demonstrates that combining our approach with a distribution loss based on cross-entropy [48, 34] is effective for symmetric object modeling, and a classification-regression framework [38] can be an alternative solution for distribution modeling.
**[Q3. Adjusting frequency levels and mitigating high-frequency instability]**
Increasing the frequency level at the network's output is challenging due to the need to define the maximum frequency level of spherical harmonics at the model initialization. Adjusting the final output frequency can help mitigate performance drops at high frequencies caused by non-linearity. Using gating functions on Wigner-D coefficients in Equiformer (Liao and Smidt, ICLR 2023) could address instability issues.
**[Q4. Clarification on spatial domain parametrization in spherical CNNs]**
The term "inadequate" refers to the practical challenges and inefficiencies of using spatial domain parametrizations directly within spherical CNNs. These challenges arise because spherical CNNs naturally operate in the frequency domain, where rotations are represented and manipulated more efficiently using spherical harmonics and Wigner-D matrices.
Although conversion between the frequency and spatial domains is possible (by inverse Fourier transform), maintaining consistent parametrization in the frequency domain is more straightforward for the SO(3)-equivariance with spherical CNNs and proves to be more effective, as demonstrated in Table 3 of the main paper.
**[L1. Clarifying the loss of spatial Information in spherical harmonics projection]**
In Section F, the statement about significant errors refers to the loss of spatial information due to truncating the frequency level of spherical harmonics for efficiency. "3D data" refers to data points on the 2-sphere. These errors, not explicitly detailed in the experimental results, arise from truncating higher frequency components, leading to a loss of spatial detail. We will update the manuscript to clarify this point and provide a more precise explanation.
**[L2. Clarifying computational costs in the limitation section]**
In Section F, we note that while our method achieves the best inference time (Table A6), there is room for improvement in space complexity. The computational cost, due to the complexity of Wigner-D coefficients and convolutions in SO(3)-equivariant networks, remains high. These operations require handling high-dimensional representations and specialized mathematical processes. Therefore, reducing memory usage and computational load is a target for future enhancement.
---
Rebuttal 2:
Comment: Thank you for the response. I appreciate all the new results presented in the PDF, and the decision to revise the background/related work sections.
It is exciting to see the proposed MSE loss can be combined with the Cross-Entropy loss from I2S to achieve both high precision and accurate distributions.
Regarding R3, what exactly is the takeaway from this plot? That the model puts more weight on higher-frequencies early on in training? Is this something that could be addressed with a weighted MSE loss to boost performance in the very low data regime?
Regarding R4, it would be nice to understand the pitfalls of the MSE loss over Wigner-D coefficients. For instance, could the gradient of the MSE loss push the model away from the ground truth rotation in some cases? This doesn't need to be addressed here but it would be interesting to study in the future.
Regarding R5 and R2, it is interesting that there is a sweet spot in terms of how precise the model can be. Do you think this is due to the limited precision of the truncated Fourier series?
Given the proposed revisions and the new results, I updated my rating.
---
Rebuttal Comment 2.1:
Title: Response of the Official Comment by Reviewer f5v8
Comment: Thank you for updating your rating by reviewing our rebuttal and revision proposal of the background/related work sections, and thank you for your thoughtful feedback.
* * *
**Takeaway from the plot in Table R3**: The plot demonstrates that the model consistently assigns greater weight to higher frequencies ($y$-axis and value), which correspond to more complex rotational components in the Wigner-D matrix. This suggests that the model effectively captures and prioritizes these complex aspects, regardless of the number of training views ($x$-axis).
Additionally, the plot indicates that the weighted MSE loss plays a key role in enabling the model to focus on more complex rotations early in the training process, ensuring that these components are adequately emphasized even when data is limited (very low data regime).
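As an illustration of the idea, a minimal sketch of a frequency-weighted MSE over per-level Wigner-D coefficient blocks (the function name and the per-level weighting layout are our assumptions, not the paper's exact implementation):

```python
import numpy as np

def weighted_wigner_mse(pred, gt, weights):
    """Frequency-weighted MSE over Wigner-D coefficients (illustrative sketch).

    pred, gt : dicts mapping frequency level l -> (2l+1, 2l+1) coefficient block.
    weights  : dict mapping l -> scalar w_l; a larger w_l emphasizes the
               higher-frequency (more complex) rotational components.
    """
    return sum(weights[l] * np.mean((pred[l] - gt[l]) ** 2) for l in pred)

# Toy usage: two frequency levels, with the higher level weighted more.
gt   = {0: np.zeros((1, 1)), 1: np.eye(3)}
pred = {0: np.ones((1, 1)),  1: np.eye(3)}
loss = weighted_wigner_mse(pred, gt, weights={0: 1.0, 1: 2.0})
assert abs(loss - 1.0) < 1e-12  # only the l=0 block contributes here
```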
* * *
**Understanding the pitfalls of the MSE Loss in Table R4**: Yes, exploring whether the MSE loss could potentially push the model away from the ground truth rotation is indeed a valuable direction for future research. However, at this point, we have not observed or verified instances where the model moves away from the ground truth rotation due to the MSE loss.
* * *
**The “sweet spot” in terms of the model precision in Tables R2 and R5**:
The "sweet spot" observed in Tables R2 and R5 is indeed an interesting phenomenon. However, we do not attribute it to the limited precision of the truncated Fourier series. Instead, it appears to be related to the trade-off between computational cost and performance.
In Table R2, as the SO(3) grid precision increases, we observe that model accuracy improves and then saturates. This suggests that beyond a certain point, further increasing the precision (e.g., the number of points $Q$) yields diminishing returns in accuracy relative to the additional computational cost. For example, we identified a good trade-off at $Q$=2.36M points (6th row), where further increasing to $Q$=18.87M points (7th row) improves accuracy at finer error thresholds (Acc@3°) but also significantly increases space complexity.
Similarly, in Table R5, employing gradient ascent does enhance model accuracy and reduces error, particularly for more precise pose estimations. However, this also results in a significant increase in inference time. Thus, the observed "sweet spot" in precision is more likely due to the inherent trade-offs in computational resources and the model's ability to effectively utilize them, rather than limitations imposed by the truncated Fourier series. | Summary: The paper tackles 3D pose estimation from a single image. It builds on Image2Sphere which maps CNN features to the sphere which are then processed by spherical CNNs to produce a distribution over SO(3). The submission proposes using an MSE loss function in the spectral domain instead of cross-entropy in the spatial domain. It outperforms the baselines on pose estimation on ModelNet10-SO(3) and Pascal3D+.
Strengths: S1) There could be value in studying losses and other operations in the spectral domain instead of spatial.
S2) The method seems to outperform the baselines in the attempted benchmarks.
Weaknesses: W1) The paper seems to be a lightly modified version of Image2Sphere [1], but this is not properly acknowledged in the writing. Following Figure 2 and Section 4, the feature extraction, spherical mapper, Fourier transforms and pose predictor seem to be the same, the only difference being that in the submission, the last IFT is skipped and the loss is computed in the spectral domain during training (at test time it still needs the IFT so it is the same as Image2Sphere as far as I understand). Although Image2Sphere [1] is cited, the method section mostly describes the same things, which implies they are new contributions. Please rewrite it focusing on the new contributions and being explicit about what comes from prior work.
W2) The main difference with respect to Image2Sphere is the loss being computed between Wigner-D coefficients instead of spatially. However how this is actually done is poorly explained.
a) How does one L245 "convert the GT 3D rotations from Euler angles (...) to the Wigner-D matrices"? The text seems to imply that the Wigner-Ds are evaluated at the ground truth rotation, but then the loss in Eq (5) does not seem to make sense since the output of the model are the coefficients associated to each D matrix. I think what should be done is inverting a function on SO(3) that is an impulse on the ground truth pose but there is no mention of it in the paper.
b) What is the "similarity between Wigner-D coefficients and an SO(3) grid" in L263? It seems that this somehow outputs a distribution on SO(3) that is queried spatially, so it seems like it should be an inverse SO(3) Fourier transform?
W3) Experiments on SYMSOL are missing. The dataset is designed to assess performance on symmetric objects, so methods that predict distributions over SO(3) are necessary; the baseline Image2Sphere handles it well. I believe the proposed method might perform poorly on it because of the way the ground truth is computed or the MSE loss, which are the differences with respect to Image2Sphere. I believe a comparison and discussion of this possible limitation is warranted.
W4) The paragraph L113 "Rotation representation in frequency domain." is confusing. It talks generally of 3D rotations but actually describes the specific case of rotations of spherical functions. I think the U in eq (1) should be the same as the D in eq (2) but they are described as different things.
## References
[1] Klee et al, "Image to Sphere: Learning Equivariant Features for Efficient Pose Prediction", ICLR'23.
Technical Quality: 2
Clarity: 2
Questions for Authors: In L212, it looks like S is a function on the sphere defined in the spatial domain, but it is said to be C'xN where N is the number of spherical harmonics. Should N be the number of points on the sphere instead?
Typos:
L44: leverages -> leverage
L70: faciltate -> facilitate
L87: Foruier -> Fourier
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Possible limitation on symmetric objects should be discussed (see W3).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer HCET for constructive comments and suggestions.
**[W1. Clarifying our contributions compared to I2S [34]]**
Please see our general response above.
**[W2-a. Conversion from Euler angles to Wigner-D during training (L245)]**
The conversion from Euler angles to Wigner-D matrices involves a mathematical transformation using the $ZYZ$ sequence of rotations and the corresponding Wigner-D matrix elements. The model outputs are indeed the coefficients associated with the Wigner-D matrices. The loss function in Eq. (5) compares these predicted coefficients directly with the ground-truth coefficients, facilitating the direct regression of 3D rotations in the frequency domain.
Additionally, unlike Image2Sphere [34], we do not perform the inverse Fourier transformation (iFT) of the output of the networks during training. Instead, we train on the output Wigner-D coefficients in the frequency domain, and we do not apply the iFT even at test time.
We hope this explanation clarifies the process and resolves any confusion. Thank you again for your valuable feedback.
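For readers who want to see the conversion spelled out, the following numpy sketch (our illustration, restricted to frequency level $l = 1$ where the small Wigner-d matrix has a simple closed form; sign and ordering conventions for $m$ vary across references) builds $D^1(\alpha, \beta, \gamma)$ from ZYZ Euler angles:

```python
import numpy as np

def wigner_d_l1(beta):
    """Small Wigner-d matrix d^1(beta), closed form at frequency level l = 1."""
    c, s = np.cos(beta), np.sin(beta)
    r2 = np.sqrt(2.0)
    return np.array([
        [(1 + c) / 2, -s / r2, (1 - c) / 2],
        [s / r2,       c,      -s / r2],
        [(1 - c) / 2,  s / r2, (1 + c) / 2],
    ])

def wigner_D_l1(alpha, beta, gamma):
    """Wigner-D matrix at l = 1 for ZYZ Euler angles:
    D^1_{m'm}(a, b, g) = exp(-i m' a) * d^1_{m'm}(b) * exp(-i m g),
    with rows/columns indexed by m', m in (-1, 0, 1)."""
    m = np.array([-1.0, 0.0, 1.0])
    phase_rows = np.exp(-1j * m * alpha)[:, None]
    phase_cols = np.exp(-1j * m * gamma)[None, :]
    return phase_rows * wigner_d_l1(beta) * phase_cols

# Sanity checks: identity at zero rotation, unitarity in general.
assert np.allclose(wigner_D_l1(0.0, 0.0, 0.0), np.eye(3))
D = wigner_D_l1(0.4, 1.1, -0.7)
assert np.allclose(D @ D.conj().T, np.eye(3))
```

In practice, higher frequency levels use recursive or library-provided formulas for $d^l(\beta)$ rather than closed forms.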
**[W2-b. Computing the similarity with the SO(3) grid during inference (L263)]**
The similarity calculation in L263 is different from an inverse SO(3) Fourier transform. It is a process to find candidate Euler angles from the predicted Wigner-D coefficients. The predefined SO(3) HEALPix grid includes a set of predefined Wigner-D coefficients corresponding 1:1 to Euler angles. We calculate the similarity between this grid and the predicted Wigner-D coefficients to obtain a discretized distribution. This probability distribution provides multiple hypotheses for the pose (i.e., Euler angles) predictions of the input image. We then use argmax or gradient ascent to determine the final predicted Euler angles at inference, as shown in the results of Table R5. This method is similar to the approaches used in [48, 34].
While an inverse SO(3) Fourier transform could convert the predicted Wigner-D coefficients to Euler angles, it has high time complexity, as it requires separate calculations for each frequency level. The SO(3) iFT method produces a single value, making distribution modeling difficult.
In contrast, our design choice of querying the predefined HEALPix SO(3) grid enables distribution modeling via a simple vector-matrix multiplication. This also enables effective symmetric-object modeling when combined with the existing cross-entropy loss [48, 34], as demonstrated in Table A1 and Figure A1.
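A toy numpy sketch of this decoding step (our illustration, using a 1-D grid of z-axis rotations; at $l = 1$ the real Wigner-D block coincides with the rotation matrix, so flattened rotation matrices stand in for coefficient vectors):

```python
import numpy as np

def rot_z(t):
    """Rotation matrix about the z-axis by angle t (radians)."""
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Precomputed "grid": one coefficient vector per candidate rotation.
angles = np.linspace(0, 2 * np.pi, 72, endpoint=False)
grid = np.stack([rot_z(t).ravel() for t in angles])   # shape (72, 9)

# Decoding = one vector-matrix product + argmax over the grid.
pred = rot_z(1.0).ravel()        # "predicted" coefficients
scores = grid @ pred             # unnormalized distribution over the grid
best = angles[np.argmax(scores)]
assert abs(best - 1.0) < 2 * np.pi / 72   # nearest grid cell is recovered
```

Here each score equals tr(R_grid^T R_pred) = 1 + 2cos(Δθ), so the argmax recovers the nearest grid rotation; normalizing the score vector (e.g., with a softmax) would yield the discretized distribution used for multiple hypotheses.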
**[W3, L1. Evaluation on SYMSOL]**
Please see our general response above.
**[W4. Clarification on paragraph L113 "Rotation representation in frequency domain"]**
The paragraph "Rotation representation in frequency domain" describes how 3D rotations are represented in the frequency domain using spherical harmonics and the Wigner-D matrix.
The Wigner-D rotation representation in L113-125 is not limited to a specific case of 3D rotations but can be converted from any 3D rotation representation, such as Euler angles, quaternions, and 3D rotation matrices. Our SO(3) equivariant network predicts the Wigner-D representation in the frequency domain instead of predicting rotations in the spatial domain (Euler angles, quaternions, etc.).
To address the confusion, we clarify that the $U$ in Equation (1) and the $D$ in Equation (2) are indeed the same. We will update the manuscript to use consistent notation in Equations (1) and (2). We apologize for any confusion caused by this inconsistency.
**[Q1. Clarification of the spherical notation in L212]**
In Section 4.3, $\mathcal{S}$ is a spherical harmonics function defined in the frequency domain.
$N$ denotes the total number of spherical harmonics in the frequency domain.
$p$ denotes the number of points on the sphere in the spatial domain.
We will update the manuscript to clearly indicate that $\mathcal{S}$ is defined in the frequency domain and ensure the distinction between $N$ (number of spherical harmonics) and $p$ (number of points on the sphere) is clear.
**[Q2. Typos]**
We will revise our manuscript according to your findings of typos. Thank you for your feedback.
---
Rebuttal Comment 1.1:
Comment: W1: Thank you, I really hope this is updated in the main text since multiple reviewers pointed out that the method section might be seen as claiming contributions from prior work.
W2-a: I am interested precisely in the "mathematical transformation" mentioned in the rebuttal which does not seem to be described anywhere. Specifically, the Euler angles represent an element of SO(3), while each Wigner-D matrix element is a fixed map from SO(3) to $\mathbb{C}$. So I think the map is from an element of SO(3) to a set of coefficients corresponding to the Wigner-D matrix elements; one way to obtain this map is by the inverse SO(3) Fourier transform of the impulse function on SO(3) centered on the ground truth rotation, but this procedure is not described anywhere so I am not sure if that's what is being done or I am missing something.
W2-b: "The predefined SO(3) HEALPix grid includes a set of predefined Wigner-D coefficients corresponding 1:1 to Euler angles" -> sounds like this is exactly what the inverse SO(3) Fourier transform of function that is one at the cell corresponding to some Euler angles and zero elsewhere would give. So my understanding is that instead of computing the IFT of the predicted coefficients, the IFTs are precomputed for each discrete delta function of the grid, and the similarity between predicted coefficients and the precomputed is used as a distribution on SO(3). So I believe that for functions of bandwidth $B$ the approach stores $O(B^{6})$ coefficients (the grid has $O(B^{3})$ points and each delta function is represented by $O(B^{3})$ coefficients). The procedure to compute the distribution would also be $O(B^{6})$, which is the same as the naive algorithms for the SO(3) Fourier transform (SOFT). Since faster algorithms can reduce that to $O(B^{4})$ and even $O(B^{3}\log^{2}(B))$, I think the proposed method is actually slower and uses significantly more memory than just computing the IFT of the predicted coefficients. Please clarify.
"The SO(3) iFT method produces a single value, making distribution modeling difficult." -> This seems incorrect; Fourier transforms and inverses are maps from function to function, not single values.
W3: Thank you for adding SYMSOL experiments. They show that the proposed MSE and the previously used likelihood losses are complementary and combining both may help -- I think this is a much more convincing result than the current ones in the submission. It should be noted however, that it still doesn't surpass IPDF results on SYMSOL and I believe there are follow-ups that outperform it.
---
Rebuttal 2:
Title: Response of the Official Comment by Reviewer HCET
Comment: Thank you for your thorough comment and insightful feedback.
**[Reply to W1]**
We will ensure that the main text is updated to reflect the clarifications provided in the rebuttal, as multiple reviewers have raised concerns about the clarity of the method section.
* * *
**[Reply to W2-a]**
We apologize for any lack of clarity in our original submission regarding the "mathematical transformation". Specifically, we map the Euler angles, which represent an element of $SO(3)$, to a set of coefficients corresponding to the Wigner-D matrix elements. These coefficients are critical in capturing the rotational properties in the frequency domain.
**Mapping from SO(3) to Wigner-D Coefficients:**
As you correctly pointed out, each Wigner-D matrix element provides a fixed map from $ SO(3) $ to $ \mathbb{C} $. Our method involves predicting these Wigner-D coefficients directly, bypassing the complexities and potential pitfalls of spatial domain parametrizations.
To achieve this, we utilize an equivariant network to predict the Wigner-D matrix coefficients in the frequency domain directly from the input image features. The prediction process does not explicitly involve an inverse $ SO(3) $ Fourier transform of an impulse function centered on the ground truth rotation. Instead, our network learns to map the input features to the Wigner-D coefficients that represent the corresponding $ SO(3) $ rotation, leveraging the equivariant properties of the network to ensure rotational consistency.
The output of our network is a vector of Wigner-D coefficients that directly encode the rotation in the frequency domain. These coefficients are then converted back to a rotation matrix or another suitable representation (e.g., Euler angles) as needed for evaluation or further processing.
**Clarification of the GT Transformation:**
We would like to gently remind you that Appendix A (and B.2) contains the detailed conversion equations between the Euler angles and the Wigner-D matrix, including the small Wigner-d matrix. We will further revise the transformation process in our final manuscript, elaborating on how the conversion between Euler angles and Wigner-D matrices is handled in our approach.
* * *
**[Reply to W2-b]**
Thank you for your detailed feedback on the computational efficiency of our SO(3) grid generation and similarity computation for SO(3) distributions. We would like to clarify and justify our approach.
**Clarification on the Inference Procedure and Similarity Calculation**:
The similarity calculation described in L263 is designed to identify candidate Euler angles from the predicted Wigner-D coefficients by comparing them against a predefined SO(3) HEALPix grid at inference.
You are correct that our approach involves precomputing the inverse SO(3) Fourier transforms to generate the set of predefined Wigner-D coefficients. This requires significant space complexity, with a naive time complexity of $O(B^6)$ to initialize the predefined SO(3) grid. Consequently, our inference method is indeed slower and uses significantly more memory than simply computing the IFT of the predicted coefficients.
However, after the predefined SO(3) grid is initialized, both the training and testing phases benefit from this precomputation, leading to reduced computation times during these phases. Additionally, for repeated runs, the grid can be stored and reloaded as needed, further optimizing execution time.
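To make the space-complexity point concrete, a small counting sketch (our illustration; the `B ** 3` grid size is only indicative of the $O(B^3)$ cell count, not the exact HEALPix layout):

```python
def n_wigner_coeffs(B):
    """Number of Wigner-D coefficients up to bandwidth B:
    sum_{l < B} (2l+1)^2 = B(2B-1)(2B+1)/3, i.e. O(B^3)."""
    return sum((2 * l + 1) ** 2 for l in range(B))

# Each of O(B^3) grid cells stores O(B^3) coefficients -> an O(B^6) table,
# and one similarity pass (table @ prediction) is likewise O(B^6) work.
B = 8
grid_cells = B ** 3                                  # illustrative cell count
table_entries = grid_cells * n_wigner_coeffs(B)      # stored numbers
assert n_wigner_coeffs(B) == B * (2 * B - 1) * (2 * B + 1) // 3
```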
We apologize for the inaccurate expression: "The SO(3) iFT method produces a single value, making distribution modeling difficult". What we intended to convey was that the inverse SO(3) Fourier transform results in a specific mapping, not a single value.
We hope this explanation addresses your concerns and clarifies our design choices. Our approach aims to balance computational feasibility with the ability to accurately model complex pose distributions in SO(3) space.
* * *
**[Reply to W3]**
Thank you for acknowledging the addition of the SYMSOL experiments. As you noted, the proposed MSE loss and the previously used likelihood loss work complementarily. However, our model does not surpass the performance of I-PDF [48] on SYMSOL, and we agree that future research should focus on further improving symmetry modeling to address these challenges.
---
Rebuttal Comment 2.1:
Comment: ## Mapping from SO(3) to Wigner-D Coefficients:
Unfortunately it is still not clear to me how the map from Euler angles to Wigner-D coefficients happen. The appendices A and B mostly repeat textbook information that is not helpful:
- definition of spherical harmonics (6)
- relation of wigner D and d (7)
- rotating spherical harmonics with Wigner-D (8)
- rotation in 3D from Euler angles (A.2.1)
- expression for Wigner-D (9)
- different ways to represent rotation (B.1)
- rotation of spherical harmonics again (10) -- same as (8).
- relation of wigner D and d again (11) -- same as (7).
- expansion into spherical harmonics (12).
- rotation of spherical harmonics coefficients (13).
- decomposition into spherical harmonics (14).
Another suggestion is to clean up the repetitive writing. For example equations (1), (8) and (10) are the same and show the rotation of the spherical harmonics using Wigner-Ds which doesn't even seem relevant to the approach. Equations (2), (7) and (11) are also the same.
Another guess at what might be happening is that the Wigner-Ds might be *evaluated* at the ground truth Euler angles to produce the supervision signal so the model is trained to predict a set of *values* of Wigner-D functions and not the Wigner-D coefficients that come out of a SO(3) FT. This would look a lot like "positional encoding" that maps coordinates to their evaluation in Fourier basis (sin/cos for the Euclidean spaces). But then I don't see how the output Wigner-D coefficients of the spherical CNN are mapped to the evaluation of the same coefficients at the ground truth. This would also contradict most of the text that refers to "Wigner-D coefficients" and not Wigner-D evaluations.
## Clarification on the Inference Procedure and Similarity Calculation:
I think computing the similarity against $O(B^{3})$ vectors of $O(B^{3})$ coefficients each is also $O(B^{6})$ even if all vectors are precomputed so doing a fast SOFT would still be faster?
## Conclusion
I think significant rewrite is needed, both to clarify technical details and also the contributions wrt Image2Sphere [see W1] so I do not recommend acceptance at this time. | Summary: The paper proposes a method for regressing the rotation of an object from an image, where several (~10s) of training views of the object with known rotations are available. In particular, the approach proposes a rotation-equivariant network that predicts continuous Wigner-D matrix coefficients in the frequency domain, which can be converted into a discretised heatmap in SO(3) to extract the predicted rotation(s). The contributions are (1) a method for predicting in the SO(3) frequency domain directly and (2) an approach for decoding this to a rotation at inference time. The approach is evaluated on two datasets with respect to rotation error and demonstrates good performance. Ablations and sensitivity analyses in the paper and appendix validate many of the design choices.
Strengths: S1. Originality. The idea of directly predicting Wigner-D matrix coefficients is original and is a sound design decision. It avoids some of the discretisation limitations of prior work and is intuitive.
S2. Quality. On the whole, the paper is well-presented, with useful figures and a clear structure. The writing is also quite decent, and the results are clearly displayed. In particular, the related work is well-written, clear, concise and sufficiently complete to position the work in its research context. In addition, the experiments validated the overall efficacy of the approach well, except for a couple of points below, especially the graphs of performance w.r.t. training data cardinality.
S3. Clarity. The paper is clear overall and does a good job of directing the reader's attention appropriately. One caveat is that it is not self-contained (with respect to the main paper and also the main+appendix), with some components left out of the main treatment that reduced the overall clarity of the treatment. Nonetheless, the introduction motivated the problem and approach very well, and each section was itself well-motivated, making the paper easy to follow.
S4. Significance. The paper's contributions of predicting the frequency-domain coefficients and decoding them to a heatmap are likely to be of positive but mild significance to the NeurIPS community. The task itself is of high significance, since rotation estimation from an image (for known objects) is a challenging task that is important for many applications, including factory robotics, augmented reality, automotive, etc.
Weaknesses: W1. Contributions/claims.
W1.1 The method in Sections 4.1–4.4 is presented as part of the contribution, but appears to be the same as previous work, including [34]. If this is not the case, the design differences and the rationale should be better explained. If it is the case, this limits the contribution to Sections 4.5–4.6, which is (a) directly predict and optimise the Wigner-D matrix coefficients, and (b) convert to a discretised heatmap and take the argmax. While (a) is likely key to the success of the method, it would appear to be a small variation on an existing approach; (b) on the other hand is very minor and indeed not claimed as a contribution. What this reviewer would like to see here is an expansion of these sections with some interrogation (and testing, in section 5) of the design choices. For example, is MSE a theoretically-grounded choice here, what are the weights w, why argmax instead of clustering for multi-hypothesis outputs?
W1.2 Relatedly, the list of contributions has 3 items, but 2-3 do not seem to be contributions of this paper. That is, the contribution of leveraging SO(3)-equivariant representations to guarantee equivariance has clearly been done before (e.g., [27,34,40,33], among others); and achieving SOTA performance is not a contribution.
W1.3 The claim that the proposed method is continuous and therefore better than the discretised methods (e.g., line 97) is dubious, when the predicted rotations are indeed discretised (section 4.6). The claim of loss of precision due to discretisation (for other methods) is also not validated in the experiment section; and one would assume that this method would also lose precision for the same reason (as Q decreases).
W1.4 The claim that parametric models are insufficiently expressive for this task (L82) is not validated. It seems likely, but would depend on the model and would ideally be directly tested to back up the claim.
W2. Design choices. (Partially overlaps with above)
W2.1 Spherical mapper. While this is taken from existing work, it is not explained why it's a reasonable design decision to warp a 2D planar signal to the sphere, or given any motivation. It's clear that it's necessary for the method, to allow S2- and SO(3)-equivariant operations, but geometrically it seems quite dubious. Could the feature warping be made more geometrically meaningful by first running the image through DepthAnythingv2 and projecting out from a point on the crop's centroid ray?
W2.2 The text mentions other approaches for non-linearities that avoid the extra (i)FFT steps (L233) but does not test or validate this design choice.
W2.3 MSE (5). This does not seem to be a theoretically- or empirically-justified choice but is key to the contribution. Lines 254-258 loosely refer to this, but this section would seem to be a good opportunity to provide a justification for the design.
W2.4 Discretised distribution on SO(3). The design choices around this part are not interrogated or justified - it reads as rushed or incomplete. For example, it states that the inference scheme "models objects with ambiguous orientations or symmetries by employing multiple hypotheses", but this is never done, as far as I can see. Instead, the argmax or gradient ascent is used, returning a single mode. Nowhere does the paper demonstrate how to use the approach to extract multiple hypotheses or test efficacy on extracting equivalent rotations for symmetric objects (see experiment section below). This is a missed opportunity and leads to the paper arguing against itself.
W3. Clarity.
W3.1 As alluded to earlier, the treatment is not self-contained. In particular, the Wigner-D matrix representation is central to the method and, for completeness and ease of understanding, should be included in the main paper. The reader would want to see the formulation that directly connects the task to the representation, instead of leaving that jump a bit vague (L121) and referring to the appendix. The functions f in (4) and (12) are presumably not the same, but are not specifically defined for this task. Essentially, it reads as being incomplete, missing some of the material that connects the theory to the specific instantiation here.
W3.2 L248. Please provide/derive the weights w for completeness.
W4. Experiments.
W4.1 The comparisons (especially in Table 1) would be more meaningfully grouped by the backbone extractor. For example, we should actually be comparing the similar method [34] with "ours (ResNet-50)", where a 0.6deg improvement is achieved. Similarly, it would be useful to compare the ResNet101 version on [34] in this table (like the version in Tab 2) since it is the closest existing method.
W4.2 Missing dataset. SYMSOL, used in the closest related work [34,40], is not included in this paper, despite being a good opportunity to evaluate the performance with complex symmetries. Leaving it out gives the impression that the method cannot handle such cases, even though it seems that it ought to be able to.
W4.3 Multi-hypotheses untested (despite L271). This claim ought to be experimentally tested, since it is such a relevant and interesting potential feature of the approach.
W4.4 Ablations/sensitivity analyses. A sensitivity analysis on the influence of discretisation on precision (Q) is missing (despite the claim in L96); the choice of MSE is not tested.
W5. Minor.
W5.1 The paper needs a proofread - there are many typos, syntactical and grammatical errors.
W5.2 Confusing formatting error with footnote 2 appearing the page before it appears in the caption (L262.5).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Is 4.1-4 pre-existing work? If so, please make this clear in the paper.
2. Recommendation to rework the list of contributions to focus on the new aspects of the proposed approach and their significance.
3. Recommendation to validate the claim that the method does not lose precision due to discretisation.
4. Recommendation to expand on the design decisions W2.1-4.
5. Recommendation to test objects with complex symmetries via the SYMSOL dataset; and test the ability to predict multiple hypotheses as claimed.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: Yes, the authors adequately address the limitations and broader impact in the appendix, with an appropriate level of self-reflection and consideration.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer HnJh for constructive comments and suggestions.
**[W1.1, W1.2, Q1, Q2 clarifying the contributions]**
Please see our general response above.
**[W1.3, W4.4, Q3. Continuity of rotations, Sensitivity analysis of the SO(3) HEALPix discretization]**
We would like to clarify that our learning method focuses on continuous rotations. We directly learn the Wigner-D coefficients, which are derived from 3D rotations (Euler angles), without any discretization during the training phase. The use of the SO(3) HEALPix grid during inference serves two purposes:
1. To convert SO(3) rotations from the frequency domain to the spatial domain, and
2. To address pose ambiguity by providing multiple solutions.
As a result, we obtain a distribution with a very sharp mode. By taking the argmax of this distribution, we achieve sufficient precision in 3D orientation estimation, specifically around 1.5 degrees.
Table R2 shows the effect of changing $Q$ on performance. For the ModelNet10 benchmark, we achieve similar results on the common target metrics such as Acc@15 and Acc@30 even with a smaller SO(3) discretization grid ($Q$=4.6K).
With our model's choice of $Q$=2.36M, we obtain high scores even on low-threshold evaluation metrics like Acc@3. Comparative experiments at lower thresholds are provided in appendix Table A1.
Therefore, the continuity of rotations in our method ensures that we can predict an accurate, more precise pose. This is a significant advantage over other methods that suffer from precision loss due to discretization, as our approach maintains high accuracy across varying levels of discretization.
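The inference scheme above can be illustrated with a minimal sketch (our own simplification with hypothetical names — `grid_features` stands in for the flattened Wigner-D basis evaluated at each SO(3) grid rotation; this is not the paper's code):

```python
import numpy as np

# Hypothetical sketch: score every rotation on a discretized SO(3) grid against
# the predicted Wigner-D coefficient vector, then take the argmax as the single
# predicted rotation. Random features stand in for the actual Wigner-D basis.
rng = np.random.default_rng(0)
n_grid, dim = 1000, 455            # e.g. Q grid points, 455 coefficients (l_max = 6)
grid_features = rng.normal(size=(n_grid, dim))
pred_coeffs = grid_features[123] + 0.01 * rng.normal(size=dim)  # near grid point 123

scores = grid_features @ pred_coeffs  # unnormalized scores over the grid
best = int(np.argmax(scores))         # single-mode prediction
print(best)                           # 123 -- the grid point nearest the prediction
```

The precision of this argmax step is bounded by the grid resolution $Q$, which is why the authors report the sensitivity analysis in Table R2.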
**[W1.4 The claims of parametric model]**
Our statement about the insufficient expressivity of parametric models (L82) refers to the potential lack of flexibility due to the dependency on predefined prior models, compared to non-parametric models. We acknowledge that this claim was not directly validated in our work, and we will revise the claim accordingly to provide a more accurate representation.
**[W2.1. Justification of the spherical mapper]**
The spherical mapper maintains the geometric structure of the image when projecting onto the S2 sphere, as detailed in [34]. This method involves lifting the 2D image onto the sphere and converting spherical points using spherical harmonics. Table R6 shows that the spherical mapper outperforms simple Fourier transforms on 2D feature maps.
Using depth information from methods like DepthAnythingv2 for 3D lifting is a good idea and can enhance geometric accuracy. Additionally, centroid ray regression has been explored in research such as [70]. However, incorporating external depth modules increases computational costs and broadens our research scope, so we consider this for future work.
**[W2.2 Non-linearities for equivariant layers]**
The FFT-based approximate non-linearity [19] and equivariant non-linearity for tensor field networks [51,65] mentioned in the text provide non-linearity based on Fourier kernels, avoiding the need for FFT and iFFT steps. However, these methods are not included in our quantitative evaluation as their code is not publicly available. Testing these approaches could potentially improve performance or efficiency when combined with our Fourier-based SO(3) equivariant network.
**[W2.3, W.4.3, Q4 Design choice of MSE]**
Please see general response above.
**[W.2.4, W4.2, W4.3, Q5. Evaluation on SYMSOL]**
Please see general response above.
**[W2.4 Discretised distribution on SO(3). (argmax vs. clustering)]**
Table R5 shows the evaluation results using gradient ascent on the pose distribution. While gradient ascent does provide some performance improvement, the increase in inference time outweighs these gains, so argmax is our preferred method for simplicity and fast evaluation.
**[W3.1 Clarification of Wigner-D representation]**
We will clarify the Wigner-D matrix representation in the main paper. While detailed equations are provided in Sections A.2, A.2.2, and B.2 of the appendix, we will include a concise explanation in the main text.
Specifically, the Wigner-D representation is implemented in a flattened form across frequency levels. For example, the matrix coefficients at frequency level $l$ are represented as a flattened vector of size $(2l+1)\times(2l+1)$. We use a maximum frequency level of 6, resulting in a vector of size 455 (i.e., $1\cdot1+3\cdot3+5\cdot5+\dots+13\cdot13=455$) for the Wigner-D coefficients. This will be clearly explained to directly connect the task to the representation in our final manuscript.
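As a quick illustration of the arithmetic above (a hypothetical helper, not the paper's code), the flattened vector length follows from summing the squared block sizes $(2l+1)^2$ up to the maximum frequency level:

```python
# Hypothetical helper: length of the flattened Wigner-D representation.
# At frequency level l the Wigner-D matrix has (2l+1) x (2l+1) coefficients;
# truncating at a maximum level l_max gives the total vector size.
def wigner_d_vector_size(l_max):
    return sum((2 * l + 1) ** 2 for l in range(l_max + 1))

print(wigner_d_vector_size(6))  # 1 + 9 + 25 + 49 + 81 + 121 + 169 = 455
```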
**[W3.1 Notation of $f$ in eq (4) and (12)]**
We will clarify the notation to differentiate the two $f$ functions in Equations (4) and (12). These functions are indeed different.
Equation (4) represents the general form of a group equivariant network.
Equation (12) describes the rotation of spherical harmonics using the rotated coefficients of the Wigner-D matrix.
We will update the notation to clearly distinguish between these two functions and ensure the connection between the theory and its specific instantiation is clearly defined.
**[W3-2. Derive the weights $w$]**
Please see our response above.
**[W4.1. Fair comparison of the backbones in Table 1]**
Thank you for the suggestion. We include a comparison in the appendix (Table A1) that aligns the sizes of the backbone networks. Some networks, like Inception-v3, are not directly comparable, so we group representative methods for fair comparison based on similar backbones. These results show that our method outperforms existing methods, even on finer metrics like Acc@3 and Acc@5, demonstrating its effectiveness.
**[W5. Typos and formatting errors]**
We will conduct a thorough proofreading to correct typos, syntactical, and grammatical errors. Additionally, we will adjust the positioning of footnote 2 to ensure it appears at the correct location where it is cited in the text. Thank you for your helpful suggestions.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response, and in particular the results included in the PDF. My primary concerns---regarding (W1) the contribution overlap with [34], (W2) the design choice of the MSE, and (W4) the lack of evaluation on SYMSOL and its symmetric objects---have largely been addressed in the rebuttal.
Specifically, the authors have committed to moving the material from existing work into a background section, making it substantially clearer what contributions have been made. While the contribution is relatively small in my view, it is sufficient and the material in the rebuttal has strengthened it to an extent (Table R1 especially). The authors have also interrogated the choice of MSE quite convincingly, addressing its strengths and shortcomings, and ablating it in Table R4. Finally, and most importantly for my decision, the authors have evaluated on the SYMSOL dataset, showing that the method performs poorly for highly-symmetric objects, but interestingly can be combined with the loss from [34] to achieve greater performance than either, on this dataset.
A small note in relation to the rebuttal: I maintain that the claim that the proposed method is continuous and therefore better than the discretised methods (e.g., line 97) should be removed. I understand that it is continuous in the training loop, but it most certainly suffers from discretisation at inference time. This claim should be more carefully worded, especially now that the authors have good evidence regarding this effect (in Table R2).
Having carefully read the rebuttal and the responses from and to the other reviewers, I am inclined to increase my rating to a WA.
---
Reply to Comment 1.1.1:
Title: Response of the Official Comment by Reviewer HnJh
Comment: Thank you for increasing the score to WA and acknowledging that our rebuttal successfully addressed concerns W1, W2, and W4.
We will remove the claim that the proposed method is continuous and therefore better than discretized methods, as in L97, following your suggestion. We recognize that while the method is continuous during training, it does undergo discretization during inference. Consequently, we will carefully rephrase this point to emphasize the continuity of our method during training. | Summary: In this work, the authors predict SO(3) poses for objects by predicting Wigner-D coefficients in frequency space. Similar to other work [1], the method first lifts 2D features to the 2-sphere using a pre-defined grid and an orthographic projection onto the sphere, converts them to the frequency domain, and applies an SO(3)-equivariant spherical convolution on top to predict a vector of Wigner-D coefficients. At inference time, these Wigner-D coefficients are mapped back to a spatial rotation R.
The authors show experiments on multiple datasets demonstrating better accuracy compared to other work. They also report accuracy when training with few views and an ablation study of the network's different design components.
References
[1] Klee, David M., et al. "Image to sphere: Learning equivariant features for efficient pose prediction." arXiv preprint arXiv:2302.13926 (2023).
Strengths: Overall, the whole paper is well organized, with sufficient figures to aid understanding. Most of the required math is included in the paper, or the right resources are cited. Experiments show that the method outperforms other pose-estimation methods that work in the spatial/frequency domain. Ablation studies confirm the authors' claims about the different network components.
The authors also provide sufficient detail on the model architecture, spherical convolution design, and training hyper-parameters, suggesting reproducibility.
Appendix has lots of nice extra experiments along with inference time.
Weaknesses: Some pose visualizations on objects would be nice, and should be compared to other methods.
Technical Quality: 3
Clarity: 2
Questions for Authors: How would this method work for in-the-wild datasets?
Would this method be able to handle occlusions?
If we have to use "transformers" instead of convolution, what would change?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Authors discuss limitations of their approach in the last section. Societal impact is discussed in appendix in detail.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank reviewer iMp8 for constructive comments and suggestions.
**[W1. Comparison of pose visualization]**
We present a comparison of pose visualizations in Figure R2. This visualization method is the same as that used in Figures A2 and A3 in the appendix.
We compare to the I2S [34] baseline. The left side of Figure R2 shows results on ModelNet10-SO(3), while the right side presents results on PASCAL3D+. The numbers next to "Err" above the input images represent the error in degrees between the model's predicted pose and the ground truth (GT) pose.
These results demonstrate that our model provides more accurate and precise pose estimations, even in cases where the I2S baseline fails. Additionally, on the PASCAL3D+ benchmark, which includes objects captured in real-world scenarios, our model consistently shows correct pose estimations, particularly in challenging situations where the I2S baseline struggles.
**[Q1. Evaluation on in-the-wild datasets]**
The PASCAL3D+ dataset is an in-the-wild dataset captured in typical real-world environments, which we evaluated in Table 2 and Figure R2. As far as we know, PASCAL3D+ is the primary in-the-wild dataset used for 3D orientation estimation. We have not found any other in-the-wild datasets that are commonly used for this task.
**[Q2. Handling occlusion]**
Yes, our method is capable of handling occlusions.
While our SO(3) equivariant estimator can offer some robustness to occlusions due to its rotational invariance and localized feature extraction, handling occlusions effectively often requires additional techniques and strategies. Our network can extract local features in a manner that is robust to 3D rotations, allowing it to identify visible parts of objects even when other parts are occluded. Additionally, the network's ability to recognize objects regardless of their orientation helps in identifying partially visible objects.
The results in Table R1 for the SYMSOL II dataset, which includes self-occlusion scenarios in symmetric solids, demonstrate that our model jointly trained with $\mathcal{L}_{\text{dist}}$ [48, 34] handles occlusions better than the I2S [34] baseline. Our model improves accuracy by directly predicting a single clear pose, unlike the baseline model.
Figure R1 shows the SYMSOL II visualization results, specifically in examples 4, 5, and 6. When provided with occluded marks of symmetric objects, our model effectively uses these cues to estimate a single pose.
**[Q3. Transformer instead of convolution]**
We chose a convolution-based design because using transformers instead of convolutions results in a slight performance drop.
Table R3 presents the results when transformers are used as the backbone network instead of convolutions. We trained the model using a Vision Transformer (ViT) backbone pre-trained on the geometric task of cross-view masked image modeling [A].
Although the ViT is heavier and requires longer training time (1.4x), its performance actually declines. This suggests that convolutional backbones may still be more effective for 3D orientation estimation tasks.
[A] CroCo: Self-Supervised Pre-training for 3D Vision Tasks by Cross-View Completion (Weinzaepfel et al., NeurIPS 2022)
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: Reply W1: I was rather asking to fit bounding boxes onto the object to compare with the ground truth. Errors on the sphere don't show exactly how well the predicted pose fits the object.
Reply Q1: I meant the performance on a dataset outside the training distribution. Does the model generalize to a different dataset than the one it was trained on?
---
Rebuttal 2:
Title: Response of the Official Comment by Reviewer iMp8
Comment: First, thank you for taking the time to read and consider our rebuttal.
* * *
**Reply to R1 regarding pose visualization comparison:**
We would like to clarify that the visualization of pose distribution using the surface of the 2-sphere with the Mollweide projection [A] conveys the same information as visualizations that use bounding boxes or the $x$, $y$, $z$ axes. For example, the pose distributions shown on the spheres in Figures R2, A2, and A3 can be converted into single pose values that can be directly plotted onto the object, as demonstrated in Figure 6 of [B].
[A] Implicit-PDF: Non-Parametric Representation of Probability Distributions on the Rotation Manifold (Murphy et al., ICML 2021)
[B] A Laplace-inspired Distribution on SO(3) for Probabilistic Rotation Estimation (Yin et al., ICLR 2023)
However, we believe that the visualization method on the 2-sphere using the Mollweide projection [A], as used in Figures R2, A2, and A3, offers the additional benefit of enabling multi-hypothesis plotting in 3D pose space. This allows for deeper insights into symmetry and pose ambiguity modeling.
* * *
**Reply to Q1 regarding in-the-wild datasets (OOD scenarios):**
Evaluating out-of-distribution (OOD) performance is generally not the primary focus in the context of this 3D orientation estimation task. However, we have conducted OOD generalization experiments with our proposed method by training and evaluating across the ModelNet10-SO(3) and PASCAL3D+ datasets. The results are presented below:
| Training Dataset | Evaluation Dataset | Acc@15 | Acc@30 | Rot. Err. |
|------------------|--------------------|---------|---------|-----------|
| ModelNet-SO(3) | ModelNet-SO(3) | 0.7590 | 0.7668 | 15.08° |
| ModelNet-SO(3) | PASCAL3D+ | 0.0004 | 0.0019 | 112.98° |
| PASCAL3D+ | ModelNet-SO(3) | 0.0015 | 0.0086 | 130.44° |
| PASCAL3D+ | PASCAL3D+ | 0.7495 | 0.8965 | 8.92° |
**Table:** Cross-dataset evaluation for validating out-of-distribution generalization on ModelNet10-SO(3) and PASCAL3D+ datasets.
As the results indicate, the model does not perform well when evaluated on an out-of-distribution dataset. Nevertheless, we recognize this as an important area for future research, and we appreciate your suggestion to explore this further.
---
Rebuttal Comment 2.1:
Comment: I thank authors for the clarifications. I acknowledge the response. | Rebuttal 1:
Rebuttal: We appreciate the reviewers for their constructive comments and recognition of the strengths of our paper:
* Reviewer iMp8: We appreciate your positive feedback on the paper's structure, figures, equations, detailed method section, and ablation studies.
* Reviewer HnJh: Thank you for highlighting the strengths of our Wigner-D prediction in overcoming discretization method limitations. We value your compliments on the clarity and structure of our manuscript.
* Reviewer HCET: We appreciate your recognition of our use of a loss function in the spectral domain and the superior performance on benchmarks.
* Reviewer f5v8: Thank you for acknowledging our novel approach to object pose estimation and the thoroughness of our experimental section.
We address common questions in below:
## 1. Contributions
**R: HnJh [W1.1, W1.2, Q1, Q2. Clarifying the contributions, List of contributions]**
**R: HCET [W1. Clarifying our contributions compared to I2S [34]]**
**R: f5v8 [W1. Clarifying contributions]**
We would like to clarify that while our method shares the same backbone structure as [34], the key innovation lies in how we optimize using the Wigner-D matrix directly. This is a significant departure from [34], which does not utilize the Wigner-D matrix in this manner. Specifically, we propose a continuous rotation learning method through direct Wigner-D prediction.
We clarify that Sections 4.1-4 describe foundational concepts and methodologies based on pre-existing work, which may give the impression that these are new contributions. To address this, we will revise the paper to make it clear that these sections are based on prior work and move them to the background section. This will better highlight our novel contributions in Sections 4.5-6.
To address the reviewer's concerns, we clarify and emphasize the novel aspects of our contributions:
1. *Frequency-Domain Prediction*: Our approach uniquely predicts Wigner-D coefficients directly in the frequency domain, avoiding issues like discontinuities and singularities in traditional spatial domain methods, ensuring precise and continuous pose estimation.
2. *Tailored MSE Loss*: We introduce a frequency-domain specific Mean Squared Error (MSE) loss. This tailored loss function supports continuous training for SO(3) pose estimation and has the potential to integrate cross-entropy loss for distribution modeling, effectively addressing object symmetry challenges.
3. *Superior Performance and Data Efficiency*: Our SO(3)-equivariant network consistently outperforms existing methods on standard pose estimation benchmarks, demonstrating data sampling efficiency in data-limited scenarios.
These contributions provide substantial improvements in accuracy and robustness over the baseline methods, as demonstrated in our experimental results. We believe that our approach represents a meaningful advancement in the field of 3D orientation estimation.
## 2. Results on SYMSOL
**R: HnJh [W.2.4, W4.2, W4.3, Q5. Evaluation on SYMSOL, Test of the distribution on SO(3).]**
**R: HCET [W3, L1. Evaluation on SYMSOL]**
**R: f5v8 [W3-3. Effects on Wigner-D regression loss in symmetric objects]**
Table R1 shows symmetric object modeling on the SYMSOL datasets. Compared to the first and second rows [34], our model with only the Wigner-D regression loss produces sharp modes, which can be less effective than [34] for symmetric objects in SYMSOL I.
For clearly defined pose cases (e.g., SphereX in SYMSOL II), our Wigner-D loss alone performs well. However, in other SYMSOL II scenarios, the sharp distributions produced by our model can lead to low average log-likelihood scores. This metric is particularly harsh on models with sharp peaks, making them vulnerable to very low scores in some failure cases.
In the third row, joint training of our method with the distribution loss [48, 34] achieves better performance than the baseline [34], demonstrating its ability to model symmetric objects. These results highlight our method's potential in handling complex symmetries and predicting multiple hypotheses. Figure R1 shows the visualization of pose distribution.
Most real-world objects have unique, unambiguous poses, validating our single-pose regression method (e.g., on ModelNet10-SO(3) and PASCAL3D+). If the task needs to cover symmetric cases, our model can be combined with the distribution loss [48, 34].
## 3. Design choice of MSE
**R: HnJh [W1.1, W2.3, W4.3, Q4. Design choice of MSE]**
**R: f5v8 [W3-2. Comparison with other loss functions]**
We chose MSE as the regression loss for its simplicity and effectiveness. Cosine and angular distances measure only the angles between vectors, ignoring magnitude, which is crucial in frequency-domain applications. Chordal and geodesic distances often require normalization to the unit sphere, making them less intuitive and more computationally intensive, which can be a drawback in practical terms.
Table R4 compares various loss functions. While Huber and L1 losses are alternatives, they do not perform as well as MSE in our context. Geodesic loss in the frequency domain is ineffective because it requires separate calculations for each frequency level of the Wigner-D matrix, potentially losing the precision of the original 3D rotation, as we truncate the Fourier basis at a frequency level of 6.
**R: HnJh [W3-2. Derive the weights $w$ (L248)]**
**R: f5v8 [W3-1. The value of $w_l$]**
Figure R3 shows a visualization of the weights $w$. The figure shows a consistent learning pattern, even as the number of training views changes. Notably, the weights increase with higher frequency levels, indicating that more complex rotations are effectively modeled with greater emphasis on higher frequencies. This demonstrates the effectiveness of weighting more complex rotations more heavily during training.
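Combining the MSE choice with the per-frequency weights $w_l$, the loss could be sketched as follows (our own assumed form with hypothetical names — the paper's exact loss may differ):

```python
import numpy as np

# Hedged sketch (assumed form, hypothetical names): MSE between predicted and
# target flattened Wigner-D coefficient vectors, with a per-frequency-level
# weight w_l applied to each (2l+1) x (2l+1) block (l = 0..6, total length 455).
def weighted_wigner_mse(pred, target, weights):
    loss, start = 0.0, 0
    for l, w in enumerate(weights):
        size = (2 * l + 1) ** 2
        diff = pred[start:start + size] - target[start:start + size]
        loss += w * np.mean(diff ** 2)
        start += size
    return loss / len(weights)

rng = np.random.default_rng(1)
target = rng.normal(size=455)
pred = target + 0.1 * rng.normal(size=455)
weights = np.linspace(0.5, 2.0, 7)  # e.g. weights growing with frequency level
print(weighted_wigner_mse(pred, target, weights))
```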
We have expanded these sections and included the relevant experiments in Section 5 to provide a comprehensive understanding of our design choices.
Pdf: /pdf/563e17da49b53fc12ebae9c5006505819c1a9133.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Probabilistic Federated Prompt-Tuning with Non-IID and Imbalanced Data | Accept (poster) | Summary: Mainly in the setting of imbalanced data, this paper proposes probabilistic federated prompt-tuning of a pre-trained model from two aspects: local prompt generation and global prompt aggregation. In local prompt generation, each local set is assumed to be a random set of prompts drawn from a hierarchical generative model. In global prompt aggregation, a non-parametric algorithm aligns the local prompts of similar clients. On public computer vision datasets, experimental results demonstrate the effectiveness of the proposed method.
Strengths: 1. The method is interpretable: local prompts are obtained via Bernoulli and Gaussian distribution sampling, and an EM algorithm is used to optimize the iterative process.
2. The quantitative results on imbalanced data and the t-SNE prompt-embedding results fully demonstrate the effectiveness of the proposed method.
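The Bernoulli/Gaussian prompt sampling mentioned in point 1 might be illustrated roughly as follows (our own simplification with hypothetical names, not the paper's generative model):

```python
import numpy as np

# Rough illustration (hypothetical names): a Bernoulli mask selects which
# shared prompts a client uses, and Gaussian noise perturbs the selected
# prompt embeddings to form that client's local prompt set.
rng = np.random.default_rng(42)
n_prompts, dim = 10, 16
global_prompts = rng.normal(size=(n_prompts, dim))   # shared prompt means
mask = rng.random(n_prompts) < 0.5                   # Bernoulli selection
local_prompts = global_prompts[mask] + 0.1 * rng.normal(size=(int(mask.sum()), dim))
print(local_prompts.shape)                           # a client holds a subset
```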
Weaknesses: 1. The overall framework of the method is not clear enough. Although there is an overall framework diagram in the appendix, it omits a lot of details. It is confusing that the two DNNs are on the server side; are the two DNNs on different clients consistent?
2.The quantitative comparison results of this method show GMM-PT, but the advantages of the proposed method and the theoretical distinction can be further explained.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. The specific implementation details of the model, such as the DNN employed and the specific configuration of the pre-trained model, seem unclear.
2. The lack of discussion of federated prompt-tuning pre-trained models in related work seems to be very relevant to this paper.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper lacks a discussion of limitations and deficiencies, which is an important omission.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for recognizing our contribution.
Your questions are addressed below:
**Q1. Workflow Diagram.** We apologize for the confusion. Our workflow diagram illustrates two different phases of our algorithm as annotated in the caption of Fig. 4. These two phases are iterative and thematically separated by the purple arrow:
On the left, the diagram highlights the local phase where each client (1) pulls a subset of prompts from the global set of the previous iteration and then (2) fine-tunes them.
On the right, the diagram highlights the global phase where local clients send their prompt sets to the server which then aggregates them.
**Q2. Implementation Detail.** Our work is based on a pre-trained Vision Transformer (ViT_B_32, pre-trained on ImageNet, from PyTorch [a]). Each local client fine-tunes the pre-trained model using a local set of learnable prompts. At the end of each iteration, the local prompt sets are shared with the server, which then aggregates them into a global prompt set. Different baselines use different aggregation algorithms (see Appendix G).
[a] https://pytorch.org/vision/main/models/generated/torchvision.models.vit_b_32.html.
**Q3. Discussion of federated prompt-tuning pre-trained models.** We have provided this discussion in the introduction at lines 42-50. As the existing literature on federated prompt-tuning is relatively sparse [21-23], with most existing work utilizing FedAvg to aggregate local prompt sets and ignoring the prompt alignment issue, we decided to merge this discussion with our positioning in the introduction. If this work is accepted, we can use the 10th page to expand this discussion.
We hope our response has addressed your questions. Please let us know if you have any follow-up questions for us.
---
Rebuttal Comment 1.1:
Comment: Thanks for the response. The authors have addressed my concerns, and I will raise my score to accept.
---
Reply to Comment 1.1.1:
Title: Thank you for raising the score to accept
Comment: Dear Reviewer HV8G,
Thank you very much for agreeing to raise the score from weak accept to accept. We find this very encouraging!
We would appreciate it if you could also update the score in the original review.
Best regards,
Authors | Summary: This paper studies the problem of prompt tuning in FL to address the data imbalance problem. The topic is interesting and broad enough for the community. The motivation and problem setting are good and promising. Experiments are sufficient to support the effectiveness of the proposed method.
Strengths: 1. The problem of heterogeneous and imbalanced FL is interesting.
2. The overall writing and structure of this paper is good.
3. The experiments are sufficient to support the effectiveness of the proposed method.
Weaknesses: 1. The local imbalance setting is very extreme, as described in section 4.1. There’s no real-world application to support this setting.
2. This paper lacks discussion of existing FL methods that address the class imbalance problem like:
[1] Xinyi Shang, Yang Lu, Gang Huang, and Hanzi Wang, “Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features,” in IJCAI, 2022.
[2] Xinyi Shang, Yang Lu, Yiu-ming Cheung, and Hanzi Wang, “FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated Distillation,” in ICME, 2022.
[3] Wenke Huang, Yuxia Liu, Mang Ye, Jun Chen, Bo Du, “Federated Learning with Long-Tailed Data via Representation Unification and Classifier Rectification” in TIFS, 2024.
Although the addressing problem is different in that the class imbalance is global, there are still connections in terms of problem settings.
3. The proposed prompt aggregation method seems not to directly address the problem of data imbalance; it is a more general solution for heterogeneous FL. From the experiments, it can also be observed that PFPT achieves better performance in both non-IID and imbalanced scenarios.
4. In the experiment, there should be more imbalance settings to support the robustness of PFPT. Currently, there is only one setting.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. What are the real-world applications to support the scenario of extreme local imbalance in FL?
2. How does PFPT explicitly solve the problem of data imbalance?
3. Global imbalance seems more common in real-world applications. Can PFPT be effective in addressing the global imbalance problem?
Confidence: 5
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for the detailed questions which we have addressed below.
**Q1. Real-world applications to support this setting.** We would like to emphasize that our work first showcased its improved performance on standard non-IID settings simulated with a Dirichlet prior over the local class distributions -- see the first 2 rows of Tables 1-4 -- which is the common practice in prior heterogeneous FL work (including the references [1-3] suggested by the reviewer). This already implies a certain degree of imbalance in the local data distributions. The extreme imbalance setting is considered a limit test for the worst case. Previous works have also studied such settings [a-f]. Extreme class imbalance often arises in practical scenarios, e.g.,
(a) rare disease detection with extremely low prevalence rate (less than 1/10000) [e]; and
(b) mobile keyboard prediction where on-device datasets are often imbalanced, with a few classes (e.g., commonly typed words or phrases) dominating [f].
[a] FedIIC: Towards Robust FL for Class-Imbalanced Medical Image Classification (https://arxiv.org/abs/2206.13803)
[b] FL with Non-IID Data (https://arxiv.org/abs/1806.00582)
[c] Addressing Class Imbalance in FL (https://arxiv.org/abs/2008.06217)
[d] FL with Matched Averaging (https://www.arxiv.org/abs/2002.06440)
[e] Complementary Pattern Augmentation for Rare Disease Detection (https://arxiv.org/abs/1911.13232)
[f] FL for Mobile Keyboard Prediction (https://arxiv.org/abs/1811.03604)
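To make the Dirichlet-based non-IID partitioning mentioned in Q1 concrete, here is a minimal sketch (our own illustration, not the authors' code; the function name, seed, and defaults are assumptions). Each class's samples are split across clients in proportions drawn from a Dirichlet distribution with concentration alpha; smaller alpha yields more skewed, imbalanced local class distributions.

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients, drawing per-class client
    proportions from Dirichlet(alpha); smaller alpha means more skew."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        props = rng.dirichlet(alpha * np.ones(n_clients))
        # Cut points so client k receives roughly props[k] of class c.
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices

# 1000 samples over 10 classes, split across 4 clients with alpha = 0.1.
parts = dirichlet_partition(np.arange(1000) % 10, n_clients=4, alpha=0.1)
```

Every sample is assigned to exactly one client regardless of alpha; only the per-client class proportions change.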
**Q2.**
**A. Discussion of existing FL methods addressing class imbalance**
We thank the reviewer for suggesting these additional references [1-3]. We note that these works focus on full fine-tuning approaches in settings where the centralized dataset is imbalanced. Instead, we focus on lightweight prompt-tuning scenarios in which the local datasets are imbalanced but their centralization would be reasonably balanced. As such, our setting and the settings adopted in these additional references are complementary. We will cite them and include the above discussion in our revision.
**B. Explain how PFPT solves the problem of (local) data imbalance.**
Local data imbalance will cause diversity across local prompt sets (i.e., clients). We refer the reviewer to Fig. 2, which shows that the local prompts indeed diverge to capture different data patterns. This means the same prompt position across different clients might encode different context information of the fine-tuning task. A simple aggregation that simply combines prompts at the same position across different clients from different contexts might therefore collapse them into less informative prompts.
To avoid this, we must learn a prompt alignment so that we only aggregate prompts which encode the same aspect of context information. PFPT models this prompt alignment as a parameter of our generative model of the local prompts. Maximizing the observation likelihood of the local prompts will allow us to learn the most plausible alignment, hence solving the local data imbalance issue.
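The alignment-then-aggregate idea described above can be sketched as follows (our own pure-Python illustration, not the paper's probabilistic generative model: we use exact min-cost bipartite matching by brute force over permutations, which is only feasible for small prompt sets, and all names are hypothetical).

```python
from itertools import permutations

def match_prompts(local_prompts, global_prompts):
    """Exact min-cost matching of local prompts to global (summarizing)
    prompts via brute force; feasible only for small prompt sets."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    k = len(global_prompts)
    best_cost, best_perm = float("inf"), None
    for perm in permutations(range(k)):
        cost = sum(sqdist(local_prompts[i], global_prompts[perm[i]])
                   for i in range(k))
        if cost < best_cost:
            best_cost, best_perm = cost, perm
    return best_perm  # best_perm[i] = global slot assigned to local prompt i

def align_and_aggregate(global_prompts, local_prompt_sets):
    """Average local prompts slot-wise after alignment, instead of naively
    averaging prompts that share the same raw position across clients."""
    k, d = len(global_prompts), len(global_prompts[0])
    totals = [[0.0] * d for _ in range(k)]
    counts = [0] * k
    for prompts in local_prompt_sets:
        perm = match_prompts(prompts, global_prompts)
        for i, slot in enumerate(perm):
            counts[slot] += 1
            for j in range(d):
                totals[slot][j] += prompts[i][j]
    # Slots with no associated local prompt keep their current global value.
    return [[t / c for t in row] if c else list(g)
            for row, c, g in zip(totals, counts, global_prompts)]
```

If two clients hold the same prompts in different orders, slot-wise averaging after matching recovers each prompt intact, whereas position-wise averaging would blend distinct prompts together.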
**Re: the concern that the proposed prompt aggregation method is a more general solution to address the problem of heterogeneous FL and achieves better performance on both non-iid and imbalance scenarios:**
We are not sure if this is considered a weakness. However, we agree that the title of our paper might be a bit too specific on imbalanced data settings. Our main point is that existing prompt aggregation performs worse as the local data become more diverse and imbalanced. It can be observed (from all tables) that the performance improvement over baselines increases as the data becomes more imbalanced. In the synthetic 4-dataset benchmark, with more diverse data collected from 4 different datasets, the performance gain over the baselines is much more pronounced.
**Q3. More imbalanced settings & how effective PFPT is in global imbalance**
Our work focuses on the common setting in existing heterogeneous FL work where the centralized dataset would be reasonably balanced but each local dataset can be imbalanced due to the client heterogeneity. This is the main cause of the solution drift issue.
However, to our understanding, we believe the reviewer wants us to explore the settings under which the centralized dataset is imbalanced (i.e., global imbalance). This presents an orthogonal challenge to the solution drift challenge considered in existing heterogeneous FL literature (including ours) which is beyond the current scope and deserves a separate treatment.
Nonetheless, we have run additional experiments comparing with the methods in the references [1-3] that the reviewer suggested. The experiments also use the same long-tailed datasets & FL scenarios detailed in those references. The results show that our method achieves substantially higher accuracy than prior works across all global imbalance settings with different imbalance factors (IF):
| CIFAR100-LT ($\alpha=0.1$) | IF = 100 | IF = 50 | IF = 10 |
|-------------------------|----------|---------|---------|
| FEDIC [2] | 33.67 | 34.74 | 41.93 |
| PFPT (ours) | 60.74 | 65.54 | 71.66 |
| CIFAR100-LT ($\alpha=0.5$) | IF = 100 | IF = 50 | IF = 10 |
|-------------------------|----------------|----------------|----------------|
| CReFF [1] | 34.67 | 37.64 | 47.08 |
| RUCR [3] | 36.83 | 40.80 | 50.90 |
| PFPT (ours) | 60.69 | 65.41 | 73.68 |
| ImageNet-LT ($\alpha=0.1$) | Accuracy |
|-----------------------------------|:---------------:|
| CReFF [1] | 26.31 |
| FEDIC [2] | 28.93 |
| PFPT (ours) | 75.54 |
We hope the reviewer will consider increasing the rating if our response has addressed the questions sufficiently. We will be happy to answer any follow-up questions that the reviewer might have for us.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. Most of my concerns are solved. I'm willing to increase my score to 5. However, I still suggest the authors reconsider the title because "data imbalance settings" are too broad to be covered by the proposed method.
---
Reply to Comment 1.1.1:
Title: Thank you for increasing the rating
Comment: Dear Reviewer h6E3,
Thank you for the fast response. We appreciate the rating increase!
We will update the title in our revision.
Best regards,
Authors
---
Rebuttal 2:
Title: Re: Additional detail regarding the extra experiments we run in response to the reviewer's Q3
Comment: Please note that not all methods in those references are tested on all those FL scenarios. Hence, for each scenario (reported in a separate table -- see Q3 in the main rebuttal), we only compare with the (best) reported performance of the methods which were tested on that scenario.
--
All local data distributions and FL settings are configured to be the same for fair comparison:
Batch size=32 for $\alpha=0.5$, 128 for $\alpha=0.1$ and 32 for ImageNet-LT.
No. total clients=20.
No. online clients in each communication round=8.
No. epochs in local training=10.
--
We used the following released implementation of those references:
FEDIC: https://github.com/shangxinyi/FEDIC/blob/main/options.py
CReFF-FL: https://github.com/shangxinyi/CReFF-FL
RUCR: https://github.com/liuyuxia211/RUCR/blob/main/options.py
--
[1] Xinyi Shang, Yang Lu, Gang Huang, and Hanzi Wang, “Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features,” in IJCAI, 2022.
[2] Xinyi Shang, Yang Lu, Yiu-ming Cheung, and Hanzi Wang, “FEDIC: Federated Learning on Non-IID and Long-Tailed Data via Calibrated Distillation,” in ICME, 2022.
[3] Wenke Huang, Yuxia Liu, Mang Ye, Jun Chen, Bo Du, “Federated Learning with Long-Tailed Data via Representation Unification and Classifier Rectification” in TIFS, 2024 | Summary: This paper introduces a novel approach to prompt-tuning within the federated learning (FL) framework, focusing on enhancing the adaptability of pre-trained models across diverse clients using a probabilistic method. The authors propose a hierarchical approach to model the generation and aggregation of local prompts. They develop a way to associate local (client-side) prompts with summarizing (server-side) prompts, utilizing a weighted bipartite matching task that interacts linearly with the model's loss function to optimize prompt association. Their experimental results show that the proposed method is effective in data imbalance settings.
Strengths: - The authors introduce an innovative probabilistic approach to prompt tuning within the context of Federated Learning.
- The logic of this paper is clear and easy to follow.
- The insights provided in the paper have the potential to inspire further research within the community, suggesting directions for integrating probabilistic approaches into federated tuning.
Weaknesses: - The paper primarily focuses on prompt tuning; a comparative analysis with other efficient tuning methods would strengthen its persuasiveness.
- In Sec. 2, the related work mainly focuses on the solution drift issue, yet it lacks a comprehensive discussion on various efficient tuning methods under scenarios of data scarcity. The authors need to provide additional motivation for focusing on prompt tuning and expand on related work concerning prompt tuning.
- The paper addresses data imbalance in Federated Learning but provides limited evidence supporting the proposed method. More substantial theoretical or experimental evidence is needed to demonstrate its effectiveness in imbalance settings.
Technical Quality: 2
Clarity: 3
Questions for Authors: 1. In Sec. 3.1's remarks, an alternative tuning strategy, adapter networks, is mentioned. Could the authors clarify the advantages of their proposed method over these adapter network approaches?
2. In Sec. 3.2.3, I think the initialization of summarizing prompts will affect the performance. Could the authors specify how these prompts were initialized in the experiments?
3. In lines 178-181, it is hypothesized that each prompt captures a specific pattern or concept. Could the authors provide evidence or further explanation to support this assumption? This clarification would strengthen the theoretical foundation of the study.
4. In lines 223-225, could the authors provide more evidence for why within a single client, at most only one local prompt would be associated with the i-th summarizing prompt?
5. In Appendix G, "10 learnable prompts" are mentioned. Could the authors explore whether increasing this number could enhance the performance of the proposed method? More prompts could potentially allow for the learning of finer-grained concepts.
6. In line 201, could the authors provide more details on the method each client uses to select a subset of summarizing prompts?
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing our contribution and for the constructive feedback, which are addressed below.
**Q1. Discussing other fine-tuning approaches.**
We agree with the reviewer that a more comprehensive discussion around PFPT will make the position of our contribution clearer. However, we want to note that our main contribution is about addressing a technical issue (i.e., prompt alignment) of an existing federated prompt-tuning approach, which is essential in settings with heterogeneous and/or imbalanced local data:
--
Existing federated prompt-tuning approaches [21-23] have ignored the important issue of prompt alignment across different clients. Aggregating misaligned prompts could result in suboptimal performance under imbalanced and/or heterogeneous local data (see our experiments). Our work aims to fix this problem. Therefore, our focus is necessarily specific to prompt-tuning.
--
Generalizing our proposed solution to other fine-tuning techniques and finding which one is most effective is indeed interesting but orthogonal to our contribution. We will investigate this in a separate follow-up work.
Nonetheless, we have provided a comparative analysis with adapter tuning in Appendix F (Table 7), which shows that prompt-tuning outperforms adapter-tuning (FEDAVG-Adapter, FEDPROX-Adapter) in most FL scenarios on 4 benchmark datasets. The results in Table 7 are quoted below and also supplemented with additional comparison with two other variants of federated adapter-tuning (FEDOPT-Adapter, SCAFFOLD-Adapter). We note that these new baselines were created during the rebuttal week for a more thorough comparison but they have not been investigated in prior literature.
| $\alpha=0.5$ | CIFAR10 | CIFAR100 | TinyImageNet | synthetic 4-dataset |
|------------------|----------------|----------------|----------------|---------------------|
| FEDAVG-Adapter   | 93.86±0.17 | 75.95±0.40 | 78.88±0.23 | 55.55±0.79 |
| FEDPROX-Adapter | 93.69±0.20 | 75.75±0.16 | 79.01±0.56 | 58.39±1.15 |
| FEDOPT-Adapter | 89.34±0.77 | 35.96±1.32 | 23.51±0.65 | 31.85±0.85 |
| SCAFFOLD-Adapter | 78.10±0.39 | 18.64±1.54 | 22.41±0.65 | 29.72±0.71 |
| PFPT (Ours) | **94.39±0.51** | **80.24±0.24** | **86.91±0.14** | **76.89±0.17** |
| $\alpha=0.1$ | CIFAR10 | CIFAR100 | TinyImageNet | synthetic 4-dataset |
|------------------|----------------|----------------|----------------|---------------------|
| FEDAVG-Adapter   | 92.66±0.26 | 65.04±0.68 | 57.62±0.80 | 30.58±4.67 |
| FEDPROX-Adapter | 93.04±0.33 | 64.59±0.82 | 58.62±0.56 | 32.92±1.34 |
| FEDOPT-Adapter | 85.32±1.40 | 22.07±3.03 | 14.76±1.06 | 20.01±1.82 |
| SCAFFOLD-Adapter | 79.06±0.81 | 12.27±1.76 | 17.87±1.19 | 18.68±2.44 |
| PFPT (Ours) | **93.39±0.22** | **75.08±0.51** | **82.31±0.26** | **70.29±0.32** |
| Imbalance | CIFAR10 | CIFAR100 | TinyImageNet | synthetic 4-dataset |
|------------------|----------------|----------------|----------------|---------------------|
| FEDAVG-Adapter   | **92.33±0.26** | 49.8±0.79 | 40.90±1.34 | 10.86±8.94 |
| FEDPROX-Adapter | 92.13±0.13 | 50.75±1.71 | 37.65±2.19 | 13.19±7.92 |
| FEDOPT-Adapter | 74.22±1.11 | 11.75±3.64 | 10.16±1.00 | 21.13±5.20 |
| SCAFFOLD-Adapter | 80.56±1.08 | 20.56±2.12 | 22.49±1.79 | 4.33±1.91 |
| PFPT (Ours) | 91.45±0.08 | **72.05±0.93** | **78.21±1.25** | **62.23±1.02** |
In our revision, we will also include a broader discussion of existing fine-tuning techniques in Section 2, which is summarized in an additional comment below (due to limited rebuttal space).
**Q2. Initialization of summarizing prompts.**
The initialization of prompts would not affect the performance as it follows the standard initialization in [a] which is commonly used in deep learning. The reported std in our experiments is also relatively small, indicating that prompt initialization has little effect on performance variation.
[a] Understanding the difficulty of training deep feedforward neural networks. AISTATS (2010).
**Q3. Each prompt captures a specific pattern of concept.**
We refer the reviewer to Fig. 2 in our paper, which shows the t-SNE plots of the (learned) summarizing prompts on CIFAR-100 over 120 communication iterations. Yellow triangles denote the centroids of the t-SNE embeddings of the prompts. The dashed red line visualizes their trajectories. The plots show that each prompt follows a different trajectory, suggesting that each prompt does capture a specific pattern or concept.
**Q4. For a single client, at most only one local prompt would be associated with the i-th summarizing prompt?** This is due to the generative design in Section 3.2.1 (lines 183-186). Per client, each summarizing prompt will flip a Bernoulli coin with learnable probability to decide whether to sample one local prompt from its vicinity. If the decision is positive, the sampled local prompt is said to be associated with the summarizing prompt. Hence, per client, for each summarizing prompt, there is at most one associated local prompt. This setup is similar to the Indian Buffet Process (“The Indian Buffet Process: An Introduction and Review,” JMLR, 2011).
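As a toy illustration of this generative step (our own sketch; the function and parameter names are hypothetical, not from the paper), each summarizing prompt independently flips a Bernoulli coin and, on success, emits a single noisy local prompt from its vicinity, so each client receives at most one local prompt per summarizing prompt.

```python
import random

def sample_local_prompts(global_prompts, probs, noise=0.1, rng=None):
    """Per client: summarizing prompt i emits at most one local prompt,
    decided by an independent Bernoulli(probs[i]) coin flip."""
    rng = rng or random.Random(0)
    local_prompts, associations = [], []
    for i, (g, p) in enumerate(zip(global_prompts, probs)):
        if rng.random() < p:  # Bernoulli decision with learnable probability
            # Sample one local prompt from the vicinity of slot i.
            local_prompts.append([x + rng.gauss(0.0, noise) for x in g])
            associations.append(i)  # this local prompt belongs to slot i
    return local_prompts, associations
```

With probs all 1 every slot emits exactly one local prompt; with probs all 0 the client generates none, mirroring the at-most-one association per slot.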
**Q5. More prompts improve performance.** We have run additional experiments showing the increasing performance of our model with an increasing number of prompts -- see the figure in the attached PDF in our summary response.
**Q6. Method that each client uses to select a subset of summarizing prompts?**
We adopt the prompt selection mechanism in “Learning to Prompt for Continual Learning” (CVPR, 2022)
We hope the reviewer will consider increasing the rating if our response has addressed the questions sufficiently. We will be happy to answer any follow-up questions that the reviewer might have for us.
---
Rebuttal Comment 1.1:
Title: Follow-up
Comment: Dear Reviewer tCnK,
Thank you again for the detailed feedback. We hope our responses have addressed your questions sufficiently.
We are happy to discuss further if you have follow-up questions for us.
Best regards,
Authors
---
Reply to Comment 1.1.1:
Title: Re: Follow-up
Comment: Dear Reviewer tCnK,
May we know if our response has addressed your questions sufficiently?
We really appreciate your detailed feedback, which (as addressed above) will be incorporated into our revision.
Best regards,
Authors
---
Rebuttal 2:
Title: Extended discussion on fine-tuning approaches (supplement content to our main rebuttal)
Comment: Due to the limited rebuttal space, we defer this editorial discussion content to this separate comment:
--
Existing fine-tuning approaches either use prompts to adapt the input [a] or adapter networks [b] to adapt the pre-trained weights, both of which help modify the pre-trained model to fit the context of a downstream task.
Prompt-tuning methods focus on engineering cues, such as extra tokens that are appended as prefixes to the sequence of input embeddings of a multi-head self-attention unit. Such tokens, or prompts, provide beneficial context for performing the computational task, similar to how hints can assist puzzle solving.
Adapter tuning is a parameter-efficient method for fine-tuning large foundation models. Instead of updating all the parameters of a model, adapter tuning involves adding small, trainable modules (adapters) to the model. During training, only these adapters are updated, while the rest of the model’s parameters remain fixed.
The fundamental difference between these two approaches is that prompt-tuning only modifies the contextual information in the query, whereas adapter tuning alters how the pre-trained model behaves. In centralized learning, prompt-tuning is more memory efficient, as the sizes of the prompts at the input embedding level depend only on the input sizes, while the sizes of adapter updates still depend on the model size. Thus, in the context of FL with resource-limited devices (e.g., wearable devices for health monitoring), prompt-tuning is more affordable in terms of memory usage. On the other hand, adapter networks can perform intricate updates to the model, and are thus suitable for more complex tasks.
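The contrast between the two approaches can be sketched numerically (a hedged illustration with made-up dimensions, not any specific model): prompt-tuning adds trainable vectors whose size depends only on the embedding dimension and prompt count, while adapter tuning adds a residual bottleneck module per layer of the frozen model.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, seq_len, n_prompts, d_bottleneck = 64, 16, 4, 8

# Prompt-tuning: prepend learnable prompt vectors to the input embeddings.
prompts = rng.normal(size=(n_prompts, d_model))
tokens = rng.normal(size=(seq_len, d_model))
x_prompted = np.concatenate([prompts, tokens], axis=0)  # [prompts; tokens]

# Adapter tuning: a residual bottleneck module inserted into each layer.
W_down = rng.normal(size=(d_bottleneck, d_model))
W_up = rng.normal(size=(d_model, d_bottleneck))
def adapter(h):
    # Frozen activations plus a small trainable update.
    return h + W_up @ np.maximum(W_down @ h, 0.0)

prompt_params = prompts.size              # 4 * 64 = 256, depth-independent
adapter_params = W_down.size + W_up.size  # 2 * 8 * 64 = 1024, *per layer*
```

Here the prompt parameters are a one-off cost, whereas the adapter cost repeats for every adapted layer, which is the memory argument made above.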
Both fine-tuning approaches are less investigated in federated settings with heterogeneous and imbalanced data. A few recent works [21-23] (as cited in our main text) have investigated a potential integration between FedAvg and prompt-tuning but have not addressed the prompt alignment issue (as elaborated in lines 51-59 of our main text) and have also not considered the heterogeneous and imbalanced data setting.
[a] "The power of scale for parameter-efficient prompt tuning." EMNLP (2021).
[b] “Learning multiple visual domains with residual adapters”. NeurIPS (2017). | Summary: This paper addresses the challenges of prompt-tuning pre-trained models in federated learning scenarios with diverse local data distributions. Specifically, it formulates the prompt summarizing procedure as a probabilistic set modeling task, treating each local set as an independent sample of a random point process and aligning similar prompts across different sets as part of the modeling parameterization. The research compares the proposed method's performance against various federated prompt-tuning baselines, demonstrating its effectiveness in combating data imbalance in extremely heterogeneous scenarios through a series of experiments and evaluations.
Strengths: 1. The paper introduces a novel probabilistic set modeling approach for prompt summarization, which enhances the understanding and processing of local sets as independent samples of a random point process.
2. The method is good in addressing data imbalance in extreme heterogeneous scenarios, which is a significant challenge in federated learning environments.
3. The research includes comprehensive experiments and comparisons against existing federated prompt-tuning baselines, providing robust evidence of the method’s effectiveness.
4. The use of classical weighted bipartite matching within the generative model’s framework adds a layer of theoretical rigor to the research, grounding the practical contributions in solid mathematical foundations.
Weaknesses: 1. “Papers to be submitted to NeurIPS 2024 must be prepared according to the instructions presented here. Papers may only be up to nine pages long, including figures. Additional pages containing only acknowledgments and references are allowed.” There is a minor formatting violation.
Technical Quality: 3
Clarity: 3
Questions for Authors: -
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: -
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for recognizing the strength of our paper and for helping us catch a format issue.
We thought the broader impact section was counted as part of the checklist content, which does not count towards the page limit. It is possible that we misunderstood the policy regarding the broader impact statement. We will move this section to the appendix in the revision to make sure it is not considered part of the main text.
We hope the reviewer will consider increasing the rating of our paper as this minor format issue is the only concern and it does not impact the technical contribution of our work.
Thank you for your consideration.
---
Rebuttal Comment 1.1:
Title: Rating
Comment: Given the authors’ response and the other reviewers' comments, I will maintain my rating. | Rebuttal 1:
Rebuttal: We thank all reviewers for their constructive comments. We summarize below our responses to the reviewers’ questions and concerns, as well as additional results to support our method.
**Reviewer HV8G** requested several clarifications and minor adjustments of our manuscript, which we have thoroughly addressed.
**Reviewers tCnK and h6E3** requested additional discussion regarding other fine-tuning techniques and FL techniques that deal with class imbalance, which we have provided in the respective responses. We have also conducted extra experiments to compare our method with these techniques. We refer the reviewers to our response to **Q1 of reviewer tCnK** for an empirical comparison with adapter tuning techniques, and to **Q3 of reviewer h6E3** for another empirical comparison with FL techniques handling data imbalance (in the global settings suggested by the references provided by **reviewer h6E3**). These additional results show that our method performs robustly against other baselines across different settings.
**Reviewer tCnK** requested an additional experiment showing that model performance increases with a larger number of prompts (Q5). We have provided that experiment in the attached PDF.
**Reviewer h6E3** also requested evidence that our experiment setting is realistic. We would like to highlight that all of our Dirichlet partitioning scenarios are standard in many previous FL works. Our extreme imbalance partitioning scenario has also been investigated in several well-cited works, for which we have provided references in our response to **Q1 of Reviewer h6E3**.
**Reviewer DxxX** raised a format issue with the broader impact statement which will be fixed in our revised draft.
We thank all reviewers again for the constructive reviews and we hope our response has sufficiently addressed all questions. We will be happy to answer any follow-up questions that the reviewers might have during the Reviewer-Author discussion week.
Pdf: /pdf/95d89250a333ae9ca059326e63dc01a41735cf3b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Discovering plasticity rules that organize and maintain neural circuits | Accept (poster) | Summary: This paper introduces meta-learned plasticity rules for sequence-generating neural circuits. While existing works demonstrated that the dynamics in HVC are generated by both excitatory and inhibitory synaptic updates, their approaches are based on a guessed rule. Motivated by this, the authors experimentally demonstrate that synapses with meta-learned plasticity rules confer enhanced stability on the network dynamics in noisy environments, which is also biologically plausible (e.g., homeostasis).
Strengths: - The paper is overall well written and key question the authors are trying to validate (line 33-34) is presented clearly in the introduction.
- The motivation of this paper to understand the self-organizing networks with meta-learned plastic rules is reasonable and biologically plausible.
- In the field of modeling sequence-generating circuits in HVC, the authors' investigation is novel, considering that others are mostly based on relatively simple, guessed rules.
Weaknesses: - Though I appreciate the novel problem statement using meta-learned plasticity rules, I could not find technical novelties in the paper. In other words, my primary concern is whether this paper presents any novel insights from a machine learning perspective.
- For example, meta-learning of plasticity rules has been widely explored in various tasks [1,2,3]. Some of these papers have already employed similar optimization techniques, such as CMA-ES, to quickly optimize the meta-learning parameters. Hence, I think the authors should have discussed these methods in the paper.
[1] Miconi, Thomas, Kenneth Stanley, and Jeff Clune. "Differentiable plasticity: training plastic neural networks with backpropagation." International Conference on Machine Learning. PMLR, 2018.
[2] Najarro, Elias, and Sebastian Risi. "Meta-learning through hebbian plasticity in random networks." Advances in Neural Information Processing Systems 33 (2020): 20719-20731.
[3] Rodriguez, Hector Garcia, Qinghai Guo, and Timoleon Moraitis. "Short-term plasticity neurons learning to learn and forget." International Conference on Machine Learning. PMLR, 2022.
- Also, the idea of homeostasis using inhibitory and excitatory synapses is not new from a machine learning perspective [1,2]. I believe the main contribution of this paper is to understand meta-learned plasticity rules in modeling neuronal sequences. If so, it seems like applying existing methods to understand another problem. Please correct me if I’m wrong.
[1] Yoshida, Naoto, et al. "Embodiment perspective of reward definition for behavioural homeostasis." Deep RL Workshop NeurIPS 2021. 2021.
[2] Kang, Beomseok, Biswadeep Chakraborty, and Saibal Mukhopadhyay. "Unsupervised 3D Object Learning through Neuron Activity aware Plasticity." arXiv preprint arXiv:2302.11622 (2023).
- It would be nice to have a background section for readers who are not familiar with neuronal sequence modeling and zebra finch HVC.
- The current reference (citation) style looks different to NeurIPS format. I think the references should be written in [x] or (x).
Technical Quality: 3
Clarity: 2
Questions for Authors: N/A
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their comments and for the additional references.
We agree with the reviewer that the novelty of this work primarily lies in the application of an existing technique to a specific question within neuroscience: how might the organization of useful circuit dynamics that accelerate learning be established and maintained without feedback? We argue, however, that the application of this technique to this question is nontrivial. As we argue in the common response, prior studies that employ meta-learning largely operate on feed-forward networks, whose outputs can be rapidly computed, have known solutions within the basis, or operate on a limited basis set (see common response for details and citations). In this work, we show solutions to meta-learning problems can be dense within a provided basis and that coefficient magnitude does not necessarily communicate the contribution of a term to shaping of synapses. We argue this work presents an important case study for other authors seeking to apply meta-learning to questions within neuroscience.
We thank the reviewer for pointing out important work from Miconi et al., Najarro et al., and Rodriguez, et al. We have added these references.
We have additionally added a background section on zebra finch physiology, including a review of areas HVC and RA. This section now reads:
“In the zebra finch, premotor nucleus HVC contains excitatory neurons that fire sparsely (typically in one burst of spikes) during song and are purportedly arranged in a feed-forward structure (Fig. 1c). A subset of these cells, known as $\mathrm{HVC}\_{\rm (RA)}$ neurons, project to downstream nucleus RA (robust nucleus of the archistriatum), which in turn projects to vocal neurons of the syrinx and to the brainstem, which regulates respiration \cite{Hahnloser_2002}. HVC additionally receives excitatory projections from nucleus Uva, which controls the onset of song syllables \cite{Moll2023} and provides input for the duration of song \cite{Danish2017}. $\mathrm{HVC}\_{\rm (RA)}$ neurons inhibit each other disynaptically via a population of inhibitory interneurons within HVC \cite{Kosche_2015}.”
We agree with the reviewer that the idea of homeostasis via modification of excitatory and inhibitory synapses is not new. However, our work suggests exactly how homeostasis might be maintained via these two distinct forms of input in the sequence generation context. We argue this is valuable because our proposal for homeostatic control can maintain not just the shape of an E cell’s firing rate response but also its timing relative to its input. We clarify this point in Sec. 2.3.3. We additionally thank the reviewer for suggesting additional references we missed.
We have also fixed the formatting of the references as the reviewer has suggested.
---
Rebuttal Comment 1.1:
Comment: Thank you for addressing my concerns and clarifying the contribution of this work. I understand that the primary contribution is from the perspective of neuroscience. To be honest, I am not very familiar with neuroscience and am still not sure how significant a contribution this paper makes. Considering the answers and the evaluations from other reviewers, I would like to increase my final score to 5 but decrease my confidence to 2. | Summary: The authors investigate the development and persistence of intrinsic connectivity structures within a neural region without external input. They propose that a local plasticity rule can lead to self-organising connectivity motifs, creating inductive biases beneficial for subsequent information processing. The study focuses on sparse and sequential dynamics, as observed in the HVC region of zebra finches. The paper explores, using meta-learning, which plasticity rules can organise and maintain these sequential dynamics. The space of plasticity rules is parameterised using terms based on activations and synapse size. The coefficients and their time constants are adjusted to minimise a loss comprising three components: encouraging sparsity in activation, encouraging sparsity in synaptic change, and improving the accuracy of time decoding. Initially, excitatory-excitatory rules are considered, followed by inhibitory-excitatory and excitatory-inhibitory rules. The study evaluates the efficiency of these rules under synaptic turnover and compares them to established Hebbian plasticity rules. Evolutionary algorithms identified a rule resembling a temporally generalised Oja’s rule, which outperformed a multiplicative asymmetric Hebbian rule in the context of synaptic turnover.
Strengths: This paper is conceptually significant, presenting the novel idea that local synaptic plasticity
could be crucial for maintaining inductive biases in the form of intrinsically-maintained
computational motifs. These internally generated dynamics must interact with synaptic
changes induced by supervised learning—they need to support this form of learning without
being overwhelmed by it. This perspective introduces a fresh angle to the
understanding of the function and origin of local plasticity rules.
The authors primarily focus on identifying synaptic rules that can create stable inductive
motifs resilient to perturbations. The analyses presented are thorough and convincingly
support this point. The use of meta-learning to estimate these plasticity rules is an innovative
approach. Additionally, the detailed examination of the estimated rule offers an interpretable
outcome from a complex machine learning technique - a rare occurrence in the field.
A particular strength of the paper is its exploration of not only E-E but also E-I and I-E rules.
Furthermore, the authors again analyse the estimated rule in a manner that provides interpretable
outcomes.
Weaknesses: The manuscript occasionally feels unfinished and rushed, particularly in section 2.3.1.
The references to Figure 3 (blue circles, green stars?) and the lack of references to Figure 4
in the text were often frustrating. Additionally, I was keen to understand why the second term
estimated without noise lost its significance when noise was introduced, but the authors did
not address this point.
Furthermore, it was somewhat anticlimactic that the authors did not explicitly explore the
interaction of the estimated plasticity rule with subsequent supervised learning, as this was
strongly hinted at in the introduction. For instance, are networks with the estimated rule better
at reproducing songs than models with a traditional Hebbian rule?
Lastly, I found it odd that the decoding of time was not reported throughout the paper. Since
the reduction of loss can be attributed to three factors, this omission is noteworthy.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why does the second term, which was estimated without noise, lose its importance when noise
is introduced? The authors did not address this point.
- Why did the authors not explicitly explore the interaction between the estimated plasticity rule and
a subsequent supervised learning procedure, despite strongly hinting at this in the introduction?
- Are networks using the estimated rule more effective at reproducing songs than those employing a
traditional Hebbian rule?
- Why was the decoding of time not reported throughout the paper, given that the reduction of loss
could be influenced by three different factors?
- Could you clarify why the estimated rule is considered a temporally generalised Oja’s rule?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors were aware of the limitations of the methods used in the paper and addressed
them accordingly.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful evaluation of our manuscript. We have worked to address all comments and rewrite our paper accordingly.
We now provide a rationale for why the second term in Eq. 4 loses its significance in the perturbed context. We have added the following to Sec. 2.2:
“We additionally note that Eq. 6 does not contain $-x_i \tilde{x}_j$, which appeared in the reduced rule learned in the absence of synaptic turnover (Eq. 4). We expect this is because the roles of $-x_i \tilde{x}_j$ and $-\tilde{x}_j x_j$ are partially redundant: both terms can suppress synapses that run counter to the sequential dynamics in the network. When synapses are turned over, the latter term is preferable as it offsets constant potentiation of all synapses. In the unperturbed context, constant synaptic growth is unnecessary, and therefore $-\tilde{x}_j x_j$ is problematic in that it can set all synapses to a driven neuron to zero (note $-w_{ij} \tilde{x}_j x_j$ cannot do this). Thus, $-x_i \tilde{x}_j$ becomes preferable.”
As we note in the common response to all reviewers, we chose to focus on the self-organization of the timing scaffold as the utility of a timing representation to support motor learning has already been well studied (e.g. Duffy et al. PNAS 2019). See common response for full details.
Regarding comparison between the estimated rule and a traditional Hebbian rule, Hebbian plasticity that only potentiates coincident activity (as found in a Hopfield network) cannot be responsible for the organization of sequential activity since asymmetry in the connectivity is required; traditional Hebbian plasticity tends to foster symmetric connectivity (see Sompolinsky and Kanter, 1986). We therefore chose to compare the estimated rule to an existing rule previously used in the literature in the context of this system (Fiete et al., 2010), which employs a temporally asymmetric form of Hebbian learning that is similar to that of our estimated rule, but imposes a constraint on the summed synaptic strength onto and out of individual neurons. We compare the estimated rule to this pre-existing rule in Fig. 4d.
We have now reported the accuracy of time decoding in Fig. 2. For all rules, the dominant contribution to the loss is the accuracy of time decoding, reported as $1000(1 - R^2)$.
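To make the reported quantity concrete, the sketch below computes a linear time decoder's $R^2$ and the corresponding $1000(1-R^2)$ term on synthetic activity. The least-squares decoder and all shapes and values here are illustrative assumptions, not the paper's actual pipeline; only the form of the loss comes from the rebuttal.

```python
import numpy as np

# Illustrative computation of the time-decoding loss term, 1000 * (1 - R^2).
# A least-squares linear decoder maps network rates to elapsed time.
rng = np.random.default_rng(2)
T, n = 110, 50                                # 110 ms activation, 50 units (placeholders)
t = np.arange(T, dtype=float)                 # elapsed-time targets
rates = rng.normal(0, 1, (T, n))              # stand-in activity
rates[:, 0] = t + rng.normal(0, 1, T)         # one unit that codes time noisily
X = np.column_stack([rates, np.ones(T)])      # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, t, rcond=None)  # fit the linear decoder
t_hat = X @ coef
r2 = 1 - np.sum((t - t_hat) ** 2) / np.sum((t - t.mean()) ** 2)
loss = 1000 * (1 - r2)                        # decoding term of the loss
```

A rule that yields a reliable time representation drives $R^2$ toward 1 and this term toward 0.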
We have clarified the sense in which the rule estimated in the unperturbed context on E$\rightarrow$E synapses is a generalization of Oja’s rule. We have added the following to Sec. 2.1.3:
“The 3 most important terms can be interpreted as a temporally asymmetric generalization of Oja's rule in that the Hebbian learning term, $x_i x_j$, is replaced by the first two terms in Eq. 4, which depend on the relative timing of $x_i$ and $x_j$.”
We note specifically that if $c_2$ in Eq. 4 is set to zero, the constants on the first and second terms are made very small, and their coefficients are identical, this rule is exactly Oja's.
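For reference, the classic Oja update that the learned rule generalizes can be sketched numerically. The point of comparison is that the learned rule replaces the Hebbian term $x_i x_j$ with temporally filtered versions; the demo below only shows the standard weight-normalization property of Oja's rule, not the learned rule itself, and all parameter values are illustrative.

```python
import numpy as np

# A minimal numeric sketch of the classic Oja update,
#   dw = eta * (y * x - y^2 * w),  with  y = w . x,
# whose Hebbian term y*x the learned rule generalizes with
# temporally asymmetric (filtered) activity.
rng = np.random.default_rng(1)
eta, n = 0.01, 4
w = rng.normal(0, 1, n)           # random initial weight vector
for _ in range(5000):
    x = rng.normal(0, 1, n)       # presynaptic activity sample
    y = float(w @ x)              # postsynaptic activity
    w += eta * (y * x - y**2 * w) # Hebbian growth + normalizing decay
# For isotropic inputs, Oja's rule drives ||w|| toward 1
```

This self-normalization is what the rebuttal's decay terms inherit in the temporally asymmetric setting.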
We apologize for the lack of clarity in section 2.3.1. We have rewritten it to emphasize the following key points:
- Plasticity rules learned on all synapses outperform rules that only operate on E$\rightarrow$E synapses, particularly when the rate of synaptic turnover is high.
- Perturbing the learned plasticity rules revealed that the E$\rightarrow$E rule was largely the same as in the prior sections.
- We found E$\rightarrow$I plasticity heavily depended on a term that was second order in the presynaptic activity, $\tilde{x}_i x_i$; it always had a positive coefficient.
- Dependence on this term implied that the synaptic strength of a neuron's self-inhibition might depend on its activity level, suggesting a type of homeostasis.
The last paragraph of 2.3.1 now reads:
“To investigate how rules acting on all synapses generated improved time representations, we repeated the dropout analysis introduced in Section 2.1.3. We found that E$\rightarrow$E plasticity within these learned triples was largely similar to the rules previously learned on E$\rightarrow$E synapses alone (Eqs. 4 and 6): solutions were consistently sensitive to the removal of $\tilde{x}_i x_j$ and $w_{ij} \tilde{x}_j x_j$, which appeared consistently with positive and negative coefficients, respectively. Further, dependence on these terms persisted when we trained networks with turnover on E$\rightarrow$E synapses or I$\rightarrow$E synapses (Supplement, Fig. 2). As expected, we also found that solutions depended heavily upon terms that acted upon E$\rightarrow$I and I$\rightarrow$E synapses. In general, we found that this plasticity was more difficult to interpret than the E$\rightarrow$E plasticity due to increased trial-to-trial variability. We did find, however, that the E$\rightarrow$I plasticity rule consistently depended on the second-order presynaptic term $\tilde{x}_i x_i$, which always appeared with a positive coefficient, suggesting that E cells project to inhibitory counterparts with a strength that increases (perhaps nonlinearly) with their level of activity. An implication of dependence on this term is that the strength of an E neuron's recurrent inhibition, defined as $|W_i^{\rm rec}| = |\sum_k w_{i, k}^{\rm E\rightarrow I} w_{k, i}^{\rm I\rightarrow E}|$, where $i$ is the index of the E cell and $k$ indexes the I cells to which it projects, might depend on its level of activity; thus, ablation of excitatory inputs to an E cell might cause its recurrent inhibition to lower in a manner that homeostatically restores its firing.”
We have additionally fixed references to Figs. 3 and 4.
---
Rebuttal Comment 1.1:
Title: Reply
Comment: Thank you for addressing all points raised. We will keep our score. | Summary: This work applies a meta-learning procedure for plasticity rule discovery to a neural network model of sequence generation. Plasticity rules are constructed from parameterized basis functions, and the parameters are found through evolutionary search based on a fitness function quantifying how accurately the plasticity rules train the network to represent elapsed time within episodic rollouts. The work analyzes the plasticity rules discovered in a variety of conditions: with noise, with synaptic turnover, and with plasticity enabled on excitatory and/or inhibitory neurons.
Strengths: **Significance**: Hand-tuning plasticity rules can be tedious and suboptimal, so automated methods are important for the more efficient and effective design of models. By applying this method to a neural network model of sequence generation (specifically, a model of HVC), this work tests the procedure in a new setting and also develops a more robust plasticity rule for sequence-generating dynamics.
**Novelty**: It seems the novelty is in the application of this method to a HVC model and the evaluation of the learned plasticity rules under a variety of conditions. If there are other contributions, it would be helpful to be clearer about contextualizing this in the related work.
**Technical quality**: This work provided an extensive evaluation of the model in a variety of conditions. The analysis of the discovered plasticity rules was particularly interesting.
**Presentation clarity**: The methods and results are clearly explained.
Weaknesses: **Wiring vs plasticity**: It seems like an assumption of this work is that the sequence generating structure of HVC neural circuits emerges largely due to activity-dependent plasticity in a network with initially random connectivity. What is the role of activity-independent wiring that takes place during development? And how much of an effect does more structured initialization have on the discovered plasticity rules?
**HVC predictions**: As a main contribution of this work is improving a model of HVC, what experimental predictions will result from this computational analysis?
**Figures improvements**: The figures in general have pretty small text, inconsistency in formatting, and subpanels that are cramped. Fig 1 especially has a lot going on, uses a variety of fonts, and adopts a colorblind-unfriendly palette. Fig 2c, I didn't see why the 3 trials were needed; could it just show one and put other trials into the Supplementary? Fig 4d, use 45deg or 90deg xlabel angles instead?
**Text improvements** (minor): (1) The neuron model notation on Line 77 took me a while to parse and seems non-standard according to colleagues; ultimately, it is a simple exponential decay and such neural dynamics are usually expressed in dv/dt notation. (2) The spacing on Page 11 is messed up due to the type of latex linebreak used.
Technical Quality: 3
Clarity: 3
Questions for Authors: See above.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors adequately address the limitations in the Discussion section.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their helpful comments. We have worked to address their suggestions and make changes to the manuscript accordingly.
We agree with the reviewer that activity-independent wiring plays an important role in determining the capacity of a network to self-organize (see Lakshminarasimhan et al., 2024; Litwin-Kumar, 2017). Tyulmankov et al., 2022, also learned Hebbian-like rules via a supervised learning strategy, but first optimized connectivity in their networks via backpropagation. Our study makes a few assumptions about initial connectivity, e.g. we do not interconnect interneurons as this connectivity is very rare in HVC, but we did not explore, for instance, how the sparsity of different types of connectivity might affect the learning rules. We now note this in the discussion section, saying:
“... the activity-independent initial connectivity of the circuit likely has a strong bearing on the sort of plasticity that successfully can leverage it \cite{litwin_kumar, lakshminarasimhan_2024}. For instance, sparsity is known to affect the efficacy of supervised versus unsupervised plasticity. We did not explore this important facet while learning plasticity rules, though in principle it could be included as part of the optimization.”
We have attempted to clarify our contributions beyond the HVC model and evaluation of learned plasticity under synaptic turnover, as noted in the overall response.
We have also clarified the experimental predictions that stem from our improved model of HVC. The work makes an explicit prediction for the nature of the learning rule in HVC that could be tested using methods to infer learning rules from data (Pereira and Brunel, 2018). Further, the hypothesis that plasticity on inhibitory and excitatory synapses might maintain different aspects of HVC(RA) cell firing leads to several predictions: (1) inhibition plays an essential role in maintaining the firing profile of HVC(RA) neurons, (2) manipulating the firing rate envelope of an HVC(RA) cell should cause the neuron to change both its excitatory and inhibitory inputs.
We have simplified Fig. 1 in accordance with the reviewer’s comments: we removed the right portion of Fig. 1C and provided a more useful caption for the network diagram. We have additionally modified Fig. 2c to only include one trial. We have also standardized fonts and font sizes in all figures. We have further improved figure clarity by ensuring all symbols and colors are properly defined.
We have rewritten the equation on L77 as suggested. It now reads:
“Each neuron fired according to $x_{j}(t) = \left[ V_j(t) - b \right]^+$, where $V_j(t)$ evolves via $\tau_m \dot{V}_j(t) = -V_j(t) + \sum_i w_{ij} x_i(t)$.”
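As a sanity check of the rewritten dynamics, a minimal Euler-integration sketch is below. Network size, time step, threshold, and weight statistics are illustrative placeholders, not the paper's values.

```python
import numpy as np

# Euler integration of the threshold-linear dynamics:
#   tau_m * dV_j/dt = -V_j + sum_i w_ij * x_i,   x_j = [V_j - b]^+
# All parameter values are placeholders for illustration.
rng = np.random.default_rng(0)
n, tau_m, b, dt = 50, 10.0, 0.1, 1.0  # units of ms; illustrative
W = rng.normal(0, 0.1, (n, n))        # W[i, j]: synapse from cell i to cell j
V = np.zeros(n)
V[:5] = 1.0                           # initial "kick" of excitation

rates = []
for _ in range(110):                  # one 110 ms activation
    x = np.maximum(V - b, 0.0)        # threshold-linear firing rate
    V += (dt / tau_m) * (-V + W.T @ x)
    rates.append(x.copy())
rates = np.array(rates)               # (time, neuron) rate trajectory
```

The resulting `rates` array is the kind of trajectory the time decoder would be fit to.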
We thank the reviewer for catching the spacing issue on page 11.
---
Rebuttal 2:
Comment: Thank you for answering my questions and addressing the suggested manuscript improvements. I am happy to maintain my original high score. | Summary: This paper proposes a method for discovering plasticity rules in spiking neural networks to achieve sequence generation using both excitatory and inhibitory dynamics. Rules are parametrized with basis functions. The model's biological approach also involves considerations of homeostasis and robustness to perturbation. The HVC region of the zebra finch was used as inspiration. The study shows interesting findings, including a self-discovered generalisation of the Oja rule, robustness to perturbation, and enhanced stability, including homeostatic mechanisms.
Strengths: - The method uses interesting biological concepts, including both excitatory and inhibitory signals, and local Hebbian-like rules.
- The study analyses various properties of the evolved neural circuits, including robustness to perturbation and robustness to synaptic turnover.
- The model includes the use of noise, an important factor that affects both biological and artificial systems.
- The computational study supports the idea, not new in biology, that both excitatory and inhibitory synapses are important in maintaining equilibrium.
- The method allows for testing of particular hypotheses, e.g., the utility of dense feedforward connectivity.
Weaknesses: - Unclear novelty: there have been many studies that evolved plasticity rules using some form of evolutionary computation and biological plausible rules. In fact, evolution of Hebbian-like rules has been a popular area of research for decades. While this study advances many previous studies, I feel that the novelty and contributions in this area are not well highlighted with respect to existing studies.
- It is unclear to me how the findings can be used more broadly. I agree that it is interesting that CMA-ES and meta-learning can optimize plasticity rules to have interesting biological properties, but how can this finding be used? This is the most important concern I have: what impact can this paper and its discoveries have in the field?
- Figures' clarity: there are a lot of assumptions about the meaning of symbols, colours, etc., in the figures. E.g., Fig 1c: unless one is very familiar with the zebra finch, I don't think these graphs are very useful. What is the big blue dot on top of the red ones? What do the colours in the cell-index vs. time plot mean? What does the graph mean?
- The manuscript describes synaptic turnover starting from the abstract, but a search of the term to find out what that is and how it is implemented did not provide satisfactory results.
- There may be an issue with the submission, as the Supplementary information is not included, resulting also in broken references (e.g. line 197, page 7)
Technical Quality: 2
Clarity: 2
Questions for Authors: - How many fitness evaluations through CMA-ES were performed? How much time did one evaluation take? How did the fitness evolve over time?
- Following up from a point in the weakness. How can these findings be used? Is there a recommendation that we should use evolution to implement and tune artificial spiking neural networks?
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper addresses the limitations, in particular in relation to computational demands.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful consideration of our manuscript.
We address the reviewer’s concerns regarding novelty in the common response. Briefly, our unique contribution is the exploration of unsupervised and unrewarded plasticity via meta-learning that organizes and maintains a specific and biologically relevant computational motif, a sequence. We have clarified this in the introduction. While prior work on sequence self-generation used hand-built rules, we learn the optimal plasticity rule by maximizing its ability to represent time within the network dynamics. Further, we believe the inclusion of biological noise, such as instability in synaptic connectivity, in a learning procedure is novel, particularly in the sequence generation context.
Regarding broader impact, our work suggests a specific role for unsupervised plasticity: the establishment and maintenance of simple computational motifs. We hypothesize that the plasticity that organizes these computational motifs might be extremely specific to the structure being organized and may altogether lack feedback modulation. This suggests we should look for plasticity rules that might underlie the self-organization of other common computational motifs, such as line and ring attractors. From the standpoint of experimental predictions, this work provides a hypothesis for why behaviors like path integration may be innate to or rapidly learned by some species. It additionally suggests that such simple computation motifs in the brain may lack dopaminergic or other reward-related inputs.
We apologize for the unclear definition of synaptic turnover: by this we mean the disappearance of existing synaptic connections and emergence of nascent ones over time. We now explicitly define it, and include an equation in Sec. 2.2. Our addition reads:
“We next asked whether ongoing disruptions to network structure alter which plasticity rules are meta-learned. To explore this, we introduced synaptic turnover, a stochastic process by which existing synapses disappeared and new, small synapses emerged, to the simulation phase of the meta-learning loop. Prior to each network activation, all connections were updated according to
$$ w_{ij} \leftarrow \begin{cases}
0 & \text{if } |w_{ij}| > 0 \text{ and } x_{\rm ST} < p_{\rm ST}, \\
w_{ij} & \text{if } |w_{ij}| > 0 \text{ and } x_{\rm ST} \geq p_{\rm ST}, \\
\epsilon & \text{if } |w_{ij}| = 0 \text{ and } x_{\rm ST} < p_{\rm ST}, \\
0 & \text{if } |w_{ij}| = 0 \text{ and } x_{\rm ST} \geq p_{\rm ST},
\end{cases} $$
where $x_{\rm ST} \sim U[0, 1]$, $p_{\rm ST}$ dictates the rate of synaptic turnover, and $\epsilon$ is a small positive (negative) constant if the presynaptic cell is excitatory (inhibitory). Per Eq. 1, synaptic turnover determines the set of synapses available to the plasticity rule, since the rule is unable to act on connections of size 0. To ensure learned rules were robust to a spectrum of rates of synaptic turnover, only half the networks used to evaluate the batch loss underwent this process (Fig. 1f, bottom)."
Note, OpenReview does not seem to render \begin{cases} within equations correctly.
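Rendering aside, the turnover step can be sketched as follows. The function name, array layout (rows indexing presynaptic cells), and the vectorized draw of one $x_{\rm ST}$ per synapse are assumptions of this sketch.

```python
import numpy as np

# Illustrative sketch of the synaptic turnover step: with probability
# p_ST each existing synapse is removed, and each absent connection
# appears at a small value eps whose sign follows the presynaptic cell
# type (+ for excitatory, - for inhibitory).
def synaptic_turnover(W, is_excitatory, p_st, eps=1e-3, rng=None):
    rng = rng or np.random.default_rng()
    flip = rng.uniform(size=W.shape) < p_st   # x_ST < p_ST, per synapse
    existing = np.abs(W) > 0
    sign = np.where(np.broadcast_to(is_excitatory[:, None], W.shape),
                    eps, -eps)                # nascent synapse value
    W_new = W.copy()
    W_new[existing & flip] = 0.0              # existing synapse removed
    W_new[~existing & flip] = sign[~existing & flip]  # nascent synapse
    return W_new
```

Because the plasticity rule cannot act on zero-valued connections, applying this step before each activation determines which synapses the learned rule can subsequently modify.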
We have added the following description of the computational resources used for this study to the discussion:
“Training E$\rightarrow$E plasticity across 10 networks typically required 24 hours of compute on 30 Cascade Lake or Ice Lake Intel CPU cores to yield reasonable solutions. Training plasticity across all sets of synapses as in Sec. 2.3 required up to 72 hours across the same number of cores.”
Following the reviewer’s suggestion, we have added a section with background on zebra finch physiology. This section reads:
“In the zebra finch, premotor nucleus HVC contains excitatory neurons that fire sparsely (typically in one burst of spikes) during song and are purportedly arranged in a feed-forward structure (Fig. 1c). A subset of these cells, known as $\mathrm{HVC}_{\rm (RA)}$ neurons, project to downstream nucleus RA (robust nucleus of the archistriatum), which in turn projects to vocal neurons of the syrinx and to the brainstem, which regulates respiration \cite{Hahnloser_2002}. HVC additionally receives excitatory projections from nucleus Uva, which controls the onset of song syllables \cite{Moll2023} and provides input for the duration of song \cite{Danish2017}. $\mathrm{HVC}_{\rm (RA)}$ neurons inhibit each other disynaptically via a population of inhibitory interneurons within HVC \cite{Kosche_2015}.”
---
Rebuttal Comment 1.1:
Comment: The authors made significant efforts to address my comments, and have also improved the paper according to the other reviewers' comment. I'm therefore happy to increase my evaluation. | Rebuttal 1:
Rebuttal: We thank all reviewers for their thoughtful feedback. We appreciate their overall support of the manuscript and their constructive criticism. Below, we address their major concerns.
R1 and R4 noted that the paper lacked a demonstration that the learned plasticity rule accelerated the ability of the self-organized sequential dynamics to support supervised learning of a downstream output pattern. We chose to focus on the self-organization of the timing scaffold as the utility of a timing representation to support motor learning has already been well studied (e.g. Duffy et al. PNAS 2019). Nicola and Clopath (Nat. Commun. 2017) demonstrated that providing a stable representation of time (i.e. a sequential input) accelerates a network’s ability to perform a birdsong-like motor task (see section “High dimensional temporal signals improve FORCE training”). Fiete et al. (J. Neurophys. 2004) argued that temporal sparseness in the premotor drive of HVC vastly simplifies the task of producing some sequential motor output by decomposing it into the approximately independent subtasks of producing the correct motor output at each moment in time; this was also demonstrated in a biologically plausible implementation of RL learning of the same task (Farries and Fairhall J. Neurophys. 2007). Lastly, providing a time input to agents has become popular in the reinforcement learning literature as it permits an agent to adjust its policy in accordance with elapsed time in the task (Wang et al. arXiv preprint arXiv:1611.05763 2016), suggesting another downstream benefit of time representation that deserves further exploration.
Regarding concerns of unclear novelty raised by R2, R3, and R5, we acknowledge this paper builds upon a rich body of work that attempts to evolve Hebbian-like rules via a supervised procedure. We now clarify in the introduction that our unique contribution is the exploration of unsupervised and unrewarded plasticity via meta-learning that organizes and maintains a specific and biologically relevant computational motif, a sequence. While prior work on sequence self-generation used hand-built rules, we learn the optimal plasticity rule by maximizing its ability to represent time within the network dynamics. Further, we believe the inclusion of biological noise, such as instability in synaptic connectivity, in a learning procedure is novel, particularly in the sequence generation context. This question is motivated by recent neuroscience findings about representational drift. Our work illustrates how the inclusion of such noise can shift the optimal learning rule, and demonstrates how plasticity responsible for self-organization can coexist with maintenance mechanisms in a single rule, which to our knowledge is also novel in the context of sequences.
We also point out that this work is novel in its application of meta-learning of plasticity rules to a problem in which the model network is highly recurrent, the rule basis is large, and the optimal rule is not known a priori. Prior work has largely focused on feedforward structures whose outputs can be computed quickly, streamlining the meta-learning process (Confavreux et al. NeurIPS 2020, Shervani-Tabar and Rosenbaum Nat. Commun. 2023, Lindsey and Litwin-Kumar NeurIPS 2020), rule bases that were relatively small (Shervani-Tabar and Rosenbaum Nat. Commun. 2023, Tyulmankov Neuron 2022), or problems where the optimal plasticity rule was already known (Confavreux et al. NeurIPS 2020). Here we show that meta-learning can yield successful rules that are dense in the given basis, requiring us to develop additional techniques to understand their function. Such an example is important to include in the literature for future authors intending to use meta-learning. We also believe the inclusion of an L1-like penalty on synaptic change is novel and represents a more biologically realistic constraint on a plasticity rule than penalizing term coefficients, as other studies have done.
R1 and R4 also noted section 2.3, “Including inhibitory plasticity,” was difficult to parse. We have rewritten this section to clarify our findings from the rule perturbation analysis and provide more detail regarding the single neuron manipulations we performed (see individual responses).
At the request of R2 and R5, we have added background on zebra finch physiology (see response to R5) that describes the constituent cell types and connectivity within sequence-generating nucleus HVC and the role the nucleus is thought to play in song production.
Regarding broader impact, our work suggests a specific role for plasticity that is unmodulated by feedback: the setup of simple computational motifs. We hypothesize that the plasticity that organizes these computational motifs might be extremely specific to the dynamics being organized and may not require supervision. We feel that this work will inspire future work to seek plasticity rules that underlie the self-organization of other common computational motifs, such as line and ring attractors. From the standpoint of experimental predictions, this work provides a hypothesis for why behaviors like path integration may be innate to some systems. It additionally suggests that the establishment of such simple computational motifs in the brain may not require dopaminergic or other reward-related inputs. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper proposes a conceptually simple set-up to learn plasticity rules that result in neural networks that perform a specific task. The set-up of the paper is elegant and made up of simple but effective elements, such as linear-threshold neurons and evolutionary optimisation. The authors obtain some intriguing results, e.g. that their method discovers plasticity rules well-known in experimental neuroscience.
Strengths: The paper tackles an interesting problem with a relatively straightforward set-up, which makes it easily understandable. It has a very high density of results, which are technically sophisticated as well as relevant for many questions in neuroscience.
Weaknesses: One of the main limitations of the paper is that this very promising and flexible set-up is only applied to one very idiosyncratic task of elapsed time decoding. This is a very specific task, and not one that brains do spontaneously (see next paragraph), so although it yields interesting results it seems a bit lacking. It would be tremendously useful to see the results in a broad range of tasks, to see e.g. under what conditions other common plasticity rules (Hebbian, anti-Hebbian, perhaps some dopamine-like reward-modulated plasticity) emerge.
The motivation of the system set-up is somewhat dubious. The authors point out the importance of "intrinsic, self-organised dynamics" but then include as the primary component of their loss function the performance of an external decoder that predicts simulation time. Neither the decoder nor the ground truth time values are in any way intrinsic to neural activity, so I would argue this is not representative of self-organisation.
On a separate note, the paper sorely lacks any demonstration of the "downstream" effects of these plasticity rules. For example, the author's motivation is that "self-organized computations provide scaffolds for solving difficult tasks". However, the authors don't actually show that the networks evolved with their discovered plasticity rules can actually solve any difficult task.
The paper gets really hard to follow towards the end, particularly Sections 2.3.2 and 2.3.3. The intervention to increase or decrease E afferents in Sec. 2.3.2, as well as the analysis metrics in Fig. 6a-c, aren't very clear. It is also surprising to see Sec. 2.3.3, the final results subsection of the paper, introducing a whole new model and a different plasticity rule, which are only partially related to the model and plasticity rules studied elsewhere. Overall, I could not understand well the set-up or results in either of these subsections.
The details of the meta-learning procedure should be explained more thoroughly in Sec. 2.1.1. Figures are quite busy and in general hard to parse.
Technical Quality: 3
Clarity: 3
Questions for Authors: The p-values in Sec. 2.2.1 are somewhat meaningless, because one can always run more simulations to decrease the p-value. I would recommend quantifying effect size (with e.g. Cohen's $d$) instead.
Please define important terms at first use (e.g. HVC).
What is $\mathcal{P}(\textbf{C})$ in L. 103?
References 4, 5, 11 and 28 are misformatted.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The "Discussion and limitations" section is very short and should be expanded.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful reading of our paper and their comments.
We agree with the reviewer that this promising methodology should be used to study the self-organization of other computational motifs, such as line and ring attractors, and we hope that this paper provides a path for doing so. We have added a line in the introduction highlighting this potential future direction. It reads:
“Though the results we present in this work are specific to sequence generation, our methodology might be applied to other basic circuit motifs such as line and ring attractors.”
We also agree that the approach should be expanded in the future to include different types of plasticity, including feedback-based rules, though this direction is not entirely new: Shervani-Tabar and Rosenbaum (Nat. Commun., 2023) learn feedback-based, biologically plausible rules for deep network training. We view our work as a proof of principle for studying the self-organization of robust computational motifs.
We agree that neurons in general have no knowledge of ground truth simulation time. Time in our simulation is locked to the time of an input. We have clarified in our methods that we divide the simulation of each plasticity rule into 400 activations of 110 ms. At t = 10 ms of each activation, the initial kick of excitation is presented to the network. Since this stimulus is time-locked to the simulation clock, attempting to decode time elapsed since the beginning of the activation is the same as decoding the time elapsed since stimulus presentation. Thus, our networks track the time since the inciting pulse initiates the dynamics, and the decoder is merely a means by which to evaluate the success of the plasticity rule at accomplishing this.
We have rewritten section 2.1.1 to better describe the meta-learning procedure, as requested. It now reads:
“We next define a loss function that evaluates the quality of the sequential dynamics organized under a chosen $\{c_k, \tau_k\}$, and attempt to minimize it using an evolutionary strategy, Covariance Matrix Adaptation (CMA-ES) \cite{auger_and_hansen}. CMA-ES places a multivariate Gaussian on the space of the parameters to be optimized, then samples from that Gaussian and evaluates the fitness at each point. The fitness is then used to adjust the mean and covariance matrix of the distribution in an attempt to iteratively maximize fitness. We use CMA-ES to sample from the space of possible $\{c_k, \tau_k\}$. We evaluate the loss for each choice of values $\{c_k, \tau_k\}$ defining the plasticity rule by simulating 10 randomly initialized networks under this rule and evaluating the resultant dynamics at the end of the simulation. Each simulation consists of 400 activations of 110 ms. At $t = 10$ ms of each activation, an arbitrary neuron is driven by a strong kick of excitation \cite{tupikov_2021_addition}. Following this, all other neurons in the network receive Poisson-distributed input for a period of $100$ ms. We generate the Poisson inputs as a sum of a component which is generated randomly trial by trial, and a frozen component, held fixed from trial to trial, which we take to model song-locked input to HVC from the nucleus Uva \cite{Murray_2017}. The total loss for a given rule is simply the sum of losses across each of the 10 networks.”
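The sample-evaluate-adapt loop described in the quoted passage can be sketched in a toy form. This is not the authors' code: it uses a simplified (mu, lambda) evolutionary strategy that adapts only the mean and a scalar step size, whereas CMA-ES also adapts a full covariance matrix; the sphere loss merely stands in for the plasticity-rule loss over $\{c_k, \tau_k\}$.

```python
import random

def toy_es(loss, x0, sigma=1.0, n_samples=20, n_keep=5, n_iters=60, seed=0):
    """Minimal (mu, lambda) evolutionary strategy: sample candidates from a
    Gaussian around the current mean, keep the best, and re-center the mean.
    Full CMA-ES additionally adapts the covariance matrix of the sampling
    distribution; here we only adapt the mean and decay a scalar step size."""
    rng = random.Random(seed)
    mean = list(x0)
    for _ in range(n_iters):
        candidates = [[m + rng.gauss(0.0, sigma) for m in mean]
                      for _ in range(n_samples)]
        candidates.sort(key=loss)   # lower loss = higher fitness
        elite = candidates[:n_keep]
        mean = [sum(c[i] for c in elite) / n_keep for i in range(len(mean))]
        sigma *= 0.95               # crude stand-in for step-size adaptation
    return mean

# Sphere loss as a hypothetical stand-in for the plasticity-rule loss.
sphere = lambda x: sum(v * v for v in x)
best = toy_es(sphere, [2.0, -2.0])
```

In the paper's setting, `loss` would instead run the 10 network simulations under the rule defined by the candidate parameters and sum their losses.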
We have attempted to declutter figures and enlarge the size of figure labels and text, where possible.
We have also clarified sections 2.3.2 and 2.3.3 to more clearly describe our procedure and results in perturbing networks organized under joint excitatory and inhibitory plasticity.
As requested, we have added a quantification of Cohen’s $d$ to Sec. 2.2.1. We found that the learned rules have a moderate to large effect on the loss when compared against the rule from the literature. We report this in the text as follows:
“We used the Kruskal-Wallis $H$ test to test for equality of medians. We found meta-learned rules trained with and without synaptic turnover outperformed the rule based on rigid synapse constraints ($p=2.5 \times 10^{-8}$, Cohen's $d = -0.45$ and $p=7.2 \times 10^{-7}$, Cohen's $d = -0.44$, respectively), while in the absence of perturbation, medians were not distinct after 4-fold Bonferroni correction ($p=0.016$, Cohen's $d = -0.29$ and $p = 0.029$, Cohen's $d = -0.30$, respectively).”
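For reference, Cohen's $d$ with the pooled sample standard deviation can be computed as below; the loss values shown are hypothetical illustrations, not the paper's data.

```python
import math

def cohens_d(a, b):
    """Cohen's d using the pooled sample standard deviation."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    pooled = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled

# Hypothetical per-network losses: meta-learned rule vs. literature rule.
learned = [0.9, 1.1, 1.0, 0.8, 1.2]
baseline = [1.4, 1.6, 1.5, 1.3, 1.7]
d = cohens_d(learned, baseline)  # negative: learned rule attains lower loss
```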
We have added a definition of HVC, which historically was called the High Vocal Center, but is now simply used as a proper noun. We include a description of HVC physiology in the response to R2.
We have also amended $\mathcal{P}(\textbf{c})$ on L. 103 to $\mathcal{P}(\textbf{c}, \tau)$ (note: $\tau$ should be bolded as well). $\mathcal{P}(\textbf{c})$ has no definition; this was a typo.
We thank the reviewer for pointing out our misformatted references.
---
Rebuttal Comment 1.1:
Comment: Many thanks to the authors for their rebuttal. I remain unconvinced that the method is in any way intrinsic (time-since-input is functionally the same as time-since-simulation-start), but I still think it's an interesting set of experiments. | null | null | null | null | null | null |
Diff-eRank: A Novel Rank-Based Metric for Evaluating Large Language Models | Accept (poster) | Summary: This paper explores the potential of using the rank of model hidden states for evaluating model capabilities. Specifically, the authors propose calculating the rank difference between trained and untrained models as a measure of model performance. The core idea is that the rank difference can reflect the model's extent of "noise reduction". The authors compared the consistency of rank difference with other metrics (loss, accuracy) under multiple model series settings. The results showed the potential of rank difference in assessing model capabilities within certain model series and measuring the cross-modal alignment of MLLMs.
Strengths: - To the best of my knowledge, this paper is the first to explore and discuss the idea of using the rank of hidden states to evaluate model capabilities, which sounds promising.
- The paper provides an intuitive understanding and theoretical significance of using rank difference as an evaluation metric.
- The experiments in this paper are relatively comprehensive, and the writing is quite clear.
Weaknesses: - My main concern lies in the fact that for models within a series, the dimensionality of their hidden states is often positively correlated with size, which also implies a positive correlation with performance. A larger dimensionality of hidden states usually means a larger rank difference when the effective rank proportion remains the same. In other words, although larger models have a higher rank difference, the proportion of noise reduction might be smaller compared to smaller models. Does this suggest that rank difference may not accurately measure the extent of noise reduction? Could rank ratio be a more reasonable metric (erank_M1/erank_M0 in eq.3)?
- Although the authors observed a positive correlation between rank difference and model size within a single model series, this conclusion no longer holds when comparing models from different series simultaneously, as shown in Figure 3. Is this due to the varying dimensionality of hidden states across different models (where rank ratio might be effective)? Or is it caused by the different training methods employed for various models? If rank difference cannot be used to compare models from different series, what advantages does it offer compared to traditional metrics such as accuracy?
- When the number of test set samples lies between the hidden-state dimensionalities of the large and small models being evaluated, Q in Equation 1 will take the values N and d, respectively. Could this lead to non-robust results? I believe experiments should be conducted to investigate this scenario.
Technical Quality: 3
Clarity: 3
Questions for Authors: See Weaknesses.
I am looking forward to the authors' responses and am open to raising my score.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The main limitation of this paper is that it fails to specifically discuss the advantages of rank difference compared to other metrics, especially considering that the rank differences of models from different series are not on the same scale.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: A larger dimensionality of hidden states usually means a larger rank difference when the effective rank proportion remains the same. In other words, although larger models have a higher rank difference, the proportion of noise reduction might be smaller compared to smaller models. Does this suggest that rank difference may not accurately measure the extent of noise reduction? Could rank ratio be a more reasonable metric (erank_M1/erank_M0 in eq.3)?
A1: Thank you for your thoughtful feedback and for suggesting the rank ratio as an alternative metric. Below, we'd like to address this concern and explain our rationale for using rank difference over rank ratio.
Firstly, we also considered this metric initially, and we agree that both rank difference and rank ratio can provide valuable insights. Each metric captures a different aspect of the change in effective rank between untrained and trained models. **The rank difference metric is designed to capture the absolute reduction in uncertainty or noise, which reflects the model's ability to compress and structure information.** This can be interpreted as a greater capacity for the model to distill relevant information and discard noise, even if the proportional change (as measured by rank ratio) might be smaller.
Secondly, a decrease in rank ratio ($\frac{erank_{M_1}}{erank_{M_0}} = 1 - \frac{erank_{M_0} - erank_{M_1}}{erank_{M_0}}$) as model size grows ($erank_{M_0}$ will also grow) typically corresponds to an increase in rank difference ($erank_{M_0} - erank_{M_1}$). However, the converse is not necessarily true. Our additional experiments in Table A below reveal that the rank ratio exhibits an oscillating downward trend as model size increases, indicating a degree of instability. In contrast, we found that rank difference demonstrates a more consistent and stable pattern of increase with model size. **This inconsistency suggests that the rank ratio may not reliably capture the extent of noise reduction or the model's ability to structure information.**
| Datasets/Models | OPT-125m | OPT-1.3B | OPT-2.7B | OPT-6.7B | OPT-13B |
| --------------- | -------- | -------- | -------- | -------- | ------- |
| Dolly | 0.5398 | 0.4719 | 0.4548 | 0.4485 | 0.4593 |
| Wiki | 0.4761 | 0.3870 | 0.3717 | 0.3667 | 0.3822 |
| OpenWebText2 | 0.4984 | 0.3936 | 0.3761 | 0.3443 | 0.3555 |
| hh-rlhf | 0.5180 | 0.4396 | 0.4218 | 0.4062 | 0.4242 |
Table A: Using rank ratio as a metric for evaluation
In summary, while the rank ratio provides an interesting perspective, our empirical experiments support the use of rank difference as a more reliable metric for evaluating noise reduction in LLMs.
---
Q2: Although the authors observed a positive correlation between rank difference and model size within a single model series, this conclusion no longer holds when comparing models from different series simultaneously, as shown in Figure 3. Is this due to the varying dimensionality of hidden states across different models (where rank ratio might be effective)? Or is it caused by the different training methods employed for various models? If rank difference cannot be used to compare models from different series, what advantages does it offer compared to traditional metrics such as accuracy?
A2: Thank you for your question. We would like to clarify this point. It is indeed normal that rank difference cannot be directly compared across different model families. **This is primarily due to the varying architectures, training methodologies, and objectives used in different model series.**
**Besides, we would like to emphasize that during the LLM's pretraining phase, the focus is primarily on the validation loss rather than accuracy.** In addition, we have conducted further experiments that demonstrate that lower loss across different model families does not necessarily correlate with better performance. For instance, in our tests on the Wiki dataset, we observe that the test loss of LLaMA-7b is 1.72, while the LLaMA2-7b is 1.66 and the LLaMA3-8b is 1.85. While LLaMA3-8b has the highest loss, it actually demonstrates the best overall performance. **This example clearly illustrates that loss values cannot be reliably used to compare performance across different model families.**
Given these observations, the proposed rank difference, while not suitable for cross-family comparisons, still offers valuable insights within a single model series. **It shifts the evaluation towards model representations and provides new insights into the theoretical understanding of LLM's behavior, which traditional metrics like accuracy or loss alone may not capture.**
---
Q3: When the number of test set samples lies between the dimensionality of hidden states for the large and small models being evaluated, Q in Equation 1 will take values from N and d, respectively. Could this lead to non-robust results? I believe experiments should be conducted to investigate this scenario.
A3: Thanks for your question. In fact, this situation won't lead to non-robust results in our evaluations. Specifically, in our experiments, we did not encounter a scenario where d is consistently smaller than N, primarily because N would exceed the maximum input context length of the models. However, we have conducted experiments where d is consistently larger than N. Specifically, in Section 4.3, we select datasets where the sequence length N for each sample is less than 100, which is smaller than the dimensionality d of the models, as even the smallest model has a dimensionality of 768. Our findings in Section 4.3 indicate that the rank difference still increases as the model scales up. Thus, the results are robust even when d is consistently larger than N. **This consistency suggests that though d is sometimes larger than N and N is sometimes larger than d, it does not lead to non-robust results.**
---
Rebuttal Comment 1.1:
Comment: Sorry for the late reply. Thank you very much for the detailed responses addressing my concerns. However, I still have some questions:
Considering that rank difference = erank_M1 - erank_M0, both erank_M1 and erank_M0 will increase as the hidden state dimensions grow. Rank difference can only remain unchanged if erank_M1 and erank_M0 increase by the same scale, which implies that erank_M0 needs to increase by a larger proportion (given erank_M0 < erank_M1), which is evidently challenging. Therefore, an increase in rank difference with larger hidden state dimensions is a natural outcome. Given that model size is positively correlated with the model's hidden state dimensions, it is also quite natural for there to be a positive correlation between model size and rank difference.
I believe a potential way to address my concerns is to fix the model size and compare the rank differences of different checkpoints. Ideally, the checkpoints should include cases of under-training, fully training (achieving the best accuracy), and overfitting. If a trend of rank difference first increasing and then decreasing is observed (with the best performance at the fully trained checkpoint), then the conclusions of the paper would be sufficiently reliable. Otherwise, if accuracy shows a trend of first increasing and then decreasing, while rank difference does not, then rank difference may not be a suitable substitute for accuracy.
---
Rebuttal 2:
Title: Response to Reviewer LQXh
Comment: Thank you for your detailed and insightful feedback. We believe that the rank difference metric provides valuable insights into the internal representations and redundancy within models, which are not directly captured by traditional metrics like accuracy or loss. This makes rank difference a useful complementary metric for understanding model behavior.
We also understand your concern regarding the natural correlation between model size and rank difference, and we agree that your suggested experiment, comparing rank differences across different training checkpoints for a fixed model size, would provide a more robust validation of our metric.
Following your suggestions, **we have conducted additional experiments to observe the behavior of rank difference across different training stages for a fixed model size**. In particular, we fix the model size by using the pre-trained OPT-1.3B model and continually train it on a cleaned Wikipedia dataset. We evaluate checkpoints at various stages of training, including randomly initialized (untrained), initialized from OPT-1.3B (pre-trained), fully trained (achieving the best accuracy on the OpenBookQA benchmark), and overfitted. The results for rank difference, loss, and accuracy are presented in the table below.
| Metrics/Training Stages | Random Initialized | Initialized from OPT-1.3B | Fully Trained | Overfitting |
| ------- | ---- | ---- | ---- | ---- |
| Rank Difference | 0 | 2.140 | 2.161 | 2.156 |
| Loss | 10.830 | 4.692 | 4.654 | 4.663 |
| Accuracy | 0.250 | 0.332 | 0.340 | 0.336 |
According to the experimental results, **we observe that the trend of rank difference, first increasing before fully trained and then slightly decreasing when overfitting, aligns well with the trend of benchmark accuracy and the opposite trend of loss**. This suggests that rank difference can serve as a complementary metric that helps understand the model's behavior, and monitor the training progress.
We hope this additional analysis addresses your concerns. Thank you again for your constructive and thoughtful suggestion. We believe our findings strengthen the reliability of our conclusions.
---
Rebuttal Comment 2.1:
Comment: We thank the authors for their efforts to address my concerns, and my concerns have mostly been resolved. I have updated my rating accordingly.
---
Rebuttal 3:
Title: Thanks for raising your score.
Comment: Thank you for taking the time to reassess our paper and for raising the score from 5 to 6. We are grateful for your thoughtful and constructive suggestions. Your insightful feedback, along with that of the other reviewers, will be incorporated into our revised version.
Thank you again for your time and effort. | Summary: This paper introduces “rank difference” that measures the reduction in the rank of LLM’s representations. It evaluates the quality of LLMs, which could be used in addition to the reduction in the cross-entropy loss. The idea is based on the assumption that LLM’s representations (e.g., the hidden states of each token before the classification head) encodes the semantic and syntactic information about the input sentence. Before training, these representations are expected to be chaotic (hence high rank), but after training and alignment, the representations are expected to be more structured.
Based on this idea, “rank difference” is defined as the difference between $erank_{m0}(\Sigma_S)$ and $erank_{m1}(\Sigma_S)$ where $\Sigma_S$ is the covariance matrix of representations and $erank$ is the effective rank. The difference is computed between the representation before and after training. This measures the extend to which the model can compress the information, while other metrics such as cross-entropy measure the quality of model’s prediction.
Experiments are conducted on OPT models of sizes from 125m to 13B parameters. The results show that there is an upward trend between “rank difference” and model size (which reflects model performance) on a range of datasets, similar to the reduction in cross-entropy. In addition, rank difference is applied to measure the alignment between visual modality and text modality in Llava-1.5 and MiniGPT-v2. An ablation study on different model families is shown.
Strengths: This paper presents a new evaluation metric, grounded on information theory, and demonstrates that it correlates well with model performance (as measured by model size and downstream task performance). It also allows measuring the alignment between different modalities.
Weaknesses: 1. Although the idea is novel, its practical usability is limited. To evaluate the quality of systems (e.g., LLM, ASR, etc.), downstream metrics such as accuracy, ROUGE, BLEU, and WER are used as they better align with real use cases, while a metric like cross-entropy is used because it is differentiable, allowing it to serve as a training loss. I'm not sure if "rank difference" will be adopted, as it is neither related to a real use case nor applicable as a training objective.
2. Regarding experiments, current results (both text-only and visual-text) only show the trend post-training, so it would be more indicative of demonstrating how “rank difference” changes during training (e.g., similar to cross-entropy loss which usually decreases monotonically during training).
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How do you think this method could be adopted?
2. What is the computational cost of computing rank difference?
3. Would it be comparable across models of different representation sizes?
4. Does rank difference change during the alignment stage (e.g., SFT or PPO/DPO), and if so, how?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations section is provided, and it has covered the main points
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Although the idea is novel, its practical usability is limited. To evaluate the quality of systems (e.g., LLM, ASR, etc), downstream metrics such as accuracy, ROUGE, BLEU, WER, etc. are used as they better align with real use cases. While, a metric like cross-entropy is used as it is differentiable, allowing it to be a training loss. I’m not sure if “rank difference” will be adopted as it’s neither related to real use case nor applicable as a training objective. And, How do you think this method could be adopted?
A1: Thanks for your thoughtful feedback regarding the practical usability of our proposed "rank difference" metric. Your points about the importance of downstream metrics and differentiable training losses are well-taken. We would like to address your concerns and highlight the potential value of our approach:
Complementary to Existing Metrics: We agree that downstream metrics like accuracy, ROUGE, BLEU, and WER are crucial for evaluating system performance in real-world scenarios. **Our rank difference metric is not intended to replace these measures but to complement them by providing additional insights into model behavior during training and evaluation.**
Insights into Model Representation: Rank difference can measure the "noise reduction" ability of LLM based on data representation and reflects the extent to which a pre-trained LLM eliminates the redundant dimension in the information-theoretic sense. **The rank difference metric provides unique insights into the internal representations of language models, which may not be directly captured by task-specific metrics.** This can be valuable for understanding model behavior from a theoretical perspective.
Future Research Directions: Rank difference may open up avenues for future research to explore how internal representation metrics can be integrated into different cases. Techniques such as pruning, quantization, and distillation may benefit from metrics that reveal internal redundancies. The rank difference metric may aid in identifying which parts of the model can be compressed without significant loss of information.
The adoption of new metrics in the field may take time and require extensive validation. Our work aims to contribute to the ongoing discussion about how to evaluate and understand LLMs from a new perspective. **We believe that a diverse set of evaluation tools, including both task-specific metrics and intrinsic measures like rank difference, can provide a more comprehensive view of model quality and behavior.**
---
Q2: Regarding experiments, current results (both text-only and visual-text) only show the trend post-training, so it would be more indicative of demonstrating how “rank difference” changes during training (e.g., similar to cross-entropy loss which usually decreases monotonically during training). Does/how rank difference during the alignment stage (e.g., SFT or PPO/DPO)?
A2:
Thanks for your suggestion. We had not conducted experiments observing the change of rank difference during training, initially due to limited computational resources. To address your concern and further investigate how "rank difference" changes during training, we conducted additional experiments by continually training OPT-1.3B on a cleaned Wikipedia dataset. We present the results in the table below.
|Metrics/Training Stages|1|2|3|4|5|6|
|-|-|-|-|-|-|-|
|Rank difference ($\uparrow$)|2.148|2.154|2.158|2.160|**2.161**|2.156|
|Loss ($\downarrow$)|4.655|4.654|4.653|**4.653**|4.654|4.663|
|Accuracy ($\uparrow$)|0.332|0.334|0.336|0.332|**0.340**|0.336|
The additional results indicate that the rank difference gradually increases during the model's training. The rank difference converges together with accuracy, later than the loss reaches its minimum. This suggests that rank difference may be a useful metric for monitoring training progress, as it correlates better with the trend of accuracy.
---
Q3: What is the computational cost of computing rank difference?
A3: Thank you for your question. Below, we provide a detailed breakdown of the steps to reveal the computational cost.
Normalization of Representations: For a set of representations, the complexity is $O(Nd)$ for mean subtraction and $O(Nd)$ for normalization, resulting in a total complexity of $O(Nd)$.
Construction of Covariance Matrix: The covariance matrix computation involves matrix multiplication, which has a complexity of $O(Nd^2)$.
Obtaining Singular Values: For a $d \times d$ matrix, the complexity of SVD is $O(d^3)$ to obtain singular values.
Calculation of Effective Rank: The effective rank calculation involves summing over $d$ singular values, which has a complexity of $O(d)$.
Combining the complexities of the individual steps, the total computational complexity for computing the rank difference is: $O(Nd) + O(Nd^2) + O(d^3) + O(d) = O(Nd^2 + d^3)$. The computational cost of computing the rank difference is primarily influenced by the dimensionality of the representations ($d$) and the number of tokens ($N$) in the dataset. Note that the relationship between $N$ and $d$ can vary depending on the model size. In smaller models, $N$ may be larger than $d$. In larger models, $N$ may be smaller than $d$.
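A minimal sketch of the steps just enumerated, assuming NumPy and the standard effective-rank definition (the exponential of the Shannon entropy of the normalized singular-value spectrum); the function names are illustrative, and the exact normalization in the paper's Eq. 1 may differ in detail.

```python
import numpy as np

def effective_rank(R):
    """Effective rank of the covariance of token representations R (N x d)."""
    Z = R - R.mean(axis=0)                             # O(Nd): mean subtraction
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # O(Nd): row normalization
    cov = Z.T @ Z / Z.shape[0]                         # O(N d^2): covariance matrix
    s = np.linalg.svd(cov, compute_uv=False)           # O(d^3): singular values
    p = s / s.sum()                                    # O(d): normalized spectrum
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))       # exp of spectral entropy

def rank_difference(R_untrained, R_trained):
    """Diff-eRank: erank of untrained-model features minus trained-model features."""
    return effective_rank(R_untrained) - effective_rank(R_trained)

# Isotropic features (noise-like) vs. features concentrated on few directions.
rng = np.random.default_rng(0)
R0 = rng.normal(size=(200, 4))                 # "untrained": near-full effective rank
R1 = R0 * np.array([10.0, 1.0, 0.1, 0.01])     # "trained": compressed spectrum
```

On synthetic data like this, the compressed features yield a lower effective rank, so the rank difference is positive, mirroring the noise-reduction interpretation discussed above.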
---
Q4: Would it be comparable across models of different representation sizes?
A4: Thanks for your question. We'd like to clarify that our experimental design specifically addresses this concern. **Our study is actually designed to measure models within the same family of different (representation) sizes in Section 4.** By focusing on the same model family, we ensure that the fundamental architecture and training paradigm remain consistent, while the model sizes and representation sizes vary. We believe this approach provides a fair and meaningful comparison, as it isolates the effect of model size and representation dimensionality while controlling for other variables. | Summary: The article presents a measure known as "rank difference" to assess the effectiveness of Language Models (LLMs) by analyzing their internal representations. This metric is based on information theory and geometric principles aiming to quantify how LLMs eliminate unnecessary information post training. The authors illustrate the utility of rank difference, in both modal (language) and multi modal contexts.
In terms of language models the study reveals that the rank difference grows with model size indicating noise reduction capability. This pattern aligns with metrics such as entropy loss and accuracy. Regarding modal models the authors suggest an assessment approach utilizing rank difference to evaluate alignment quality across modalities showing that contemporary multi modal LLMs demonstrate strong alignment performance.
Key contributions of the paper include;
Introducing rank difference as a metric for gauging the "noise reduction" capability of trained language models.
Demonstrating the correlation between rank difference, loss metrics and downstream task accuracy underscoring its potential as an evaluation criterion.
Defining alignment measures, between linguistic modalities showcasing that modern multi modal LLMs excel in achieving alignment.
The study provides real world data that backs the idea of utilizing rank variance, in datasets and model scales indicating its reliability and effectiveness in assessing language learning models. In general the research introduces a viewpoint on evaluating language learning models moving away from metrics based on predictions, to focusing on model representations. This shift brings forth perspectives on comprehending how language learning models behave.
Strengths: Novelty:
The introduction of the rank difference metric offers a new approach to assessing LLMs by focusing on their representations rather than just their outputs.
The metric is rooted in information theory and geometric principles, providing a principled perspective for understanding LLM behavior.
The metric's scope is extended to multimodal setups, evaluating alignment coherence across different modalities.
Quality:
Assessments span multiple datasets and model sizes.
Convincing results demonstrate the relationship between rank difference and traditional metrics like cross-entropy loss and accuracy.
The analysis elucidates the mathematical underpinnings and implications of the rank difference metric.
Clarity:
The presentation of ideas is well organized and easy to follow.
The proposed metric, its basis, and its practical implications are explained thoroughly.
The paper follows a logical sequence from problem statement to methodology to results.
Significance:
The paper tackles key questions regarding LLM evaluation, underscoring the need for metrics that go beyond assessing model outputs alone.
It has the potential to shape how researchers and practitioners evaluate and interpret LLMs, offering insights into model behavior and effectiveness.
The metric applies to both unimodal and multimodal models, making it versatile across settings.
Weaknesses: The experimental scope is limited:
Training dynamics: the paper does not examine how the rank difference evolves throughout training, missing insights into its behavior as training progresses.
Model diversity: broadening the evaluation to a wider range of model families and architectures would strengthen validation of the rank difference metric.
Presentation:
Clarity: some sections require clearer explanations to improve readability.
Visual support: incorporating diagrams or visual aids could improve understanding and reader engagement.
Depth of analysis:
Layer-wise insights: analyzing rank differences across model layers would offer a more holistic view.
Comparative examination: providing in-depth comparisons with established metrics and recent studies would spotlight both strengths and limitations of the metric.
Real-world application:
Practical contexts: integrating real-world application scenarios or case studies would enhance the paper's relevance.
Computational considerations:
Efficiency: addressing the efficiency of computing rank difference for large-scale models and suggesting optimizations could be advantageous.
Technical Quality: 2
Clarity: 2
Questions for Authors: Do you have any insights or early findings regarding variations in the rank difference metric as LLMs undergo training? Examining this aspect could offer insights into how the metric behaves and its usefulness throughout the model development process.
Have you considered assessing the rank difference metric on model families or architectures other than the OPT family? Incorporating a wider range of models could help validate how broadly applicable the metric is.
Could you delve deeper into comparing the rank difference metric with established evaluation metrics and recent research findings? This in-depth analysis would shed light on both the strengths and potential limitations of using the rank difference metric.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors acknowledge that, due to constraints, they could not conduct experiments to observe changes in rank difference during training. They emphasize the importance of further experiments to assess the metric's applicability across model layers and its computational efficiency.
Suggestions for enhancement:
Training progress: conduct a small-scale study or simulation to observe how the rank difference evolves during training, using smaller models or subsets of data.
Model variation: assess the effectiveness of the rank difference metric across a range of model families and architectures; provide initial findings if a comprehensive evaluation is not feasible.
Computational efficiency: explore optimizations for calculating the rank difference, such as algorithmic improvements or parallelization, and briefly touch upon societal implications and ethical considerations to demonstrate awareness of broader impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The paper does not examine how the rank difference evolves throughout training, missing insights into its behavior as the model progresses.
A1: Thanks for your question. To address your concern and further investigate how "rank difference" changes during training, we continually train OPT-1.3B on a cleaned Wikipedia dataset. We present the results in the table below.
|Metrics/Training Stages|1|2|3|4|5|6|
|-|-|-|-|-|-|-|
|rank difference ($\uparrow$)|2.148|2.154|2.158|2.160|**2.161**|2.156|
|loss ($\downarrow$)|4.655|4.654|4.653|**4.653**|4.654|4.663|
|acc ($\uparrow$)|0.332|0.334|0.336|0.332|**0.340**|0.336|
The additional results indicate that the rank difference gradually increases during the model's training. The rank difference converges together with accuracy, later than the loss, which begins to fluctuate earlier. This suggests that the rank difference may be a useful metric for monitoring training progress, as it correlates better with the trend of accuracy.
---
Q2: Broadening the evaluation to include a range of model families and architectures would enhance the validation of the rank difference metric.
A2: Thank you for your comments. **We would like to clarify that we have indeed addressed this aspect in our study, specifically in Section 6.1 of our paper.**
**In addition to the OPT family, we have extended our experiments to include two other model families: Cerebras-GPT and OpenELM.** These families represent a diverse range of well-trained LLMs of various sizes. **By including these different model families, we inherently incorporated a variety of architectures into our evaluation.** Each family has its unique architecture, allowing us to test the robustness of our rank difference metric across different model designs. **As illustrated in Figure 5 of our paper, we observe a consistent increase in rank difference as models scale up across all three families.** This trend correlates with reduced loss and improved benchmark accuracy, supporting the validity of rank difference as an evaluation metric across diverse model architectures.
---
Q3: Some sections require further explanation to improve readability. Incorporating diagrams or visual aids could boost understanding and reader engagement.
A3: Thanks for your suggestion. We will add more discussions shown in our rebuttal to our paper.
---
Q4: Analyzing rank differences across model layers would offer a holistic view. Providing in-depth comparisons with established metrics and recent studies would spotlight both the strengths and limitations of the metric.
A4: Thanks for your suggestions. We would like to clarify some aspects of our study to address your points:
Insights by Layer: Our findings, as shown in Table 4, reveal that only the last layer demonstrates a consistent increasing trend in rank difference across model sizes. This may be interpreted as LLM is an integrated system where information processing occurs across the entire architecture. **If we rely on early layers for analyzing the rank difference, this could lead to a loss of important information and we may miss crucial information processing that occurs in subsequent layers. The last layer, on the other hand, integrates this information, providing a more complete representation of the input data.** Our experimental results indeed show that early layers do not exhibit clear patterns in terms of the rank difference. This observation supports our focus on the last layer and provides insights into how information is processed through the model.
Comparative Examination: **In our study, we have indeed conducted comprehensive comparisons with the primary metrics used in LLM's evaluation, including loss and downstream task performance.** In particular, we demonstrate the correlation among our rank difference, the reduced loss and performance on downstream tasks in Section 4.2 and 4.3. This comparison shows how our metric aligns with the primary metrics. **To the best of our knowledge, loss and downstream task performance are the primary quantitative measures used for assessing LLM.** If you are aware of other specific metrics for comparison, we would greatly appreciate if you could provide references.
---
Q5: Integrating real-world application scenarios or case studies would enhance the relevance of the paper.
A5: Thanks for your suggestion. We aim to integrate real-world applications into our future work. Rank difference may open up avenues for future research to explore how internal representation metrics can be integrated into different use cases. Techniques such as pruning, quantization, and distillation may benefit from this metric, which reveals internal redundancies. **The rank difference metric may also aid in monitoring the LLM's training process, as discussed in A1.**
---
Q6: Addressing the efficiency of calculating rank difference for large scale models and suggesting optimizations could be advantageous.
A6: Thank you for your suggestion. Our rank difference calculation method is designed with efficiency in mind, particularly for large-scale models. Here are some key points regarding the efficiency of our approach:
1. The efficiency of calculating rank difference for large scale models:
- We use PyTorch's efficient tensor operations, which are optimized for GPU computation. **The core operations (normalization, covariance calculation, and effective rank computation) are matrix operations** that scale well with increasing model sizes.
- **The rank difference can be calculated using a relatively small dataset**, further reducing computational requirements.
2. Potential Optimizations:
For extremely large models, we could implement batch processing of the representations to reduce memory requirements.
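The core operations mentioned above (normalization, covariance calculation, and effective rank computation) can be sketched as follows. This is a hedged illustration in NumPy rather than PyTorch; the per-row normalization and centering steps are assumptions about the preprocessing, and the effective rank follows the standard entropy-of-eigenvalue-spectrum definition, which may differ in detail from the paper's exact formulation.

```python
import numpy as np

def effective_rank(reps: np.ndarray) -> float:
    """Effective rank of the covariance matrix of hidden representations.

    reps: array of shape (n_samples, hidden_dim).
    Uses exp(Shannon entropy of the normalized eigenvalue spectrum).
    """
    # Normalize each representation to unit norm (assumed preprocessing).
    X = reps / np.linalg.norm(reps, axis=1, keepdims=True)
    # Covariance matrix of the centered representations.
    X = X - X.mean(axis=0, keepdims=True)
    cov = X.T @ X / (X.shape[0] - 1)
    # Eigenvalue spectrum, clipped at zero for numerical safety.
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    p = eigvals / eigvals.sum()
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

def rank_difference(untrained_reps: np.ndarray, trained_reps: np.ndarray) -> float:
    # Reduction in effective rank from the untrained to the trained model.
    return effective_rank(untrained_reps) - effective_rank(trained_reps)
```

Because the spectrum is normalized, the result lies between 1 and `hidden_dim`: near-isotropic representations push it toward `hidden_dim`, while highly redundant representations drive it toward 1, consistent with the interpretation of reduced uncertainty.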
We are actively exploring additional techniques to enhance the efficiency of our method in the future. We appreciate your attention to this important aspect of our work. | Summary: The paper proposes a rank-based evaluation metric that quantifies the amount of redundant information in the hidden representations of a model and applies it to both text-only and multi-modal models. The effective rank is obtained by the rank of its covariance matrix and is interpreted using information theory. This rank represents the degree of variations among the principal components in data. The rank difference between two models is used to measure how much redundant information is reduced in a model relative to another. The proposed metric aligns well with commonly used evaluation metrics (loss and accuracy).
Strengths: 1. The paper presents a new evaluation metric through the view of the geometric property of representation matrices and information theory and shows that it aligns well with commonly used metrics like loss and accuracy for LLMs.
2. Experiments show how the rank difference varies by length, choice of the layer from which representations are extracted, type of models (text vs multi-modal), and algorithm design and can potentially guide future work on model compression.
Weaknesses: 1. It is unclear what additional information the absolute rank or the rank difference brings to the table apart from a new interpretation. The rank differences are hard to interpret (L261: both models align well with a high alignment score) given a lack of detail on how they are computed for models of varying sizes.
2. For the multi-modal models, the authors again propose two metrics: image reduction ratio and image-text alignment. While the reduction ratio is fairly intuitive, it is unclear why image-text alignment is defined as it is and why the different ranks (erank3, erank4, and erank5) are needed. In Table 2, the reduction and alignment follow opposite trends; is this informative?
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. L224-225: can you provide any explanation of why these occasional outliers appear? This is especially vivid in Table 4, where no other layer follows this upward trend.
2. Was there any experiment on how the rank difference evolves over the training of a model, and whether that signal is informative about how long a model can be trained?
3. Is there any explanation of why the different layers are not informative of the representation redundancy (Table 4)?
Confidence: 2
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: It is unclear what additional information the absolute rank or the rank difference brings to the table apart from a new interpretation. The rank differences are hard to interpret given a lack of detail on how they are computed for models of varying sizes.
A1:
Thanks for your comments. We would like to provide further explanations as follows.
Firstly, as we have discussed in Section 1, **the effective rank of the hidden representations extracted by a model from a dataset can be considered as the uncertainty or randomness from (quantum) information theory. Meanwhile, the rank difference quantifies the reduction in uncertainty or noise in the representations before and after training for the model.**
Secondly, we interpret that **as training progresses, a model's representations transition from being random and high-dimensional to more structured and lower-dimensional. The rank difference reflects this transition by measuring the extent to which redundant information has been reduced**, which is also discussed in Section 1 and 3.
Thirdly, the rank difference is computed based on the effective rank of the covariance matrix of the data representations (shown in Section 3), and it does not rely on model sizes. **Rank difference can be directly calculated and compared among models of varying sizes within the same family.**
Q2: For the multi-modal models, it is unclear why image-text alignment is defined as is and why we need the different ranks. The reduction and alignment follow opposite trends. Is it something informative?
A2:
Thanks for your questions. We would like to highlight these key aspects:
Firstly, we want to design a metric to evaluate the degree of alignment between the image and text modalities, ranging from 0 to 1. When the MLLM reaches the perfect alignment, the metric is close to 1. As the alignment worsens, the metric will decrease. **In particular, the absolute rank can be seen as the amount of absolute uncertainty or randomness.** The ranks for individual images (erank3), text (erank4), and image-text pairs (erank5) show how much the model integrates and represents each modality. **If these three ranks from different modalities are close to each other, it means that they align well from the perspective of information theory.** Thus, we design this metric to reflect the degree of closeness (i.e., alignment) among different modalities.
Secondly, regarding the observation where Image Reduction Ratio and Image-Text Alignment follow opposite trends, **it is important to note that these two metrics measure different aspects of the MLLMs and should be considered independently.** In particular, Image Reduction Ratio focuses on the model’s ability to condense visual information, while Image-Text Alignment measures the quality of alignment between different modalities. **The trends observed in Table 2 do not imply a direct relationship between the two metrics. Instead, they highlight different dimensions of MLLM's performance.**
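The rebuttal describes the desired properties of the alignment metric (in (0, 1], equal to 1 when the three effective ranks coincide, decreasing as they diverge) without reproducing the formula. The sketch below is a purely hypothetical instantiation of those properties, not the paper's actual definition; the ratio-of-extremes form is an assumption chosen only because it satisfies the stated behavior.

```python
def image_text_alignment(erank_img: float, erank_txt: float, erank_pair: float) -> float:
    """Hypothetical closeness score over three effective ranks (all > 0).

    Returns a value in (0, 1]: exactly 1 when the image (erank3),
    text (erank4), and image-text pair (erank5) ranks coincide,
    shrinking as they diverge. Illustrative stand-in only.
    """
    ranks = (erank_img, erank_txt, erank_pair)
    return min(ranks) / max(ranks)
```

Any monotone function of the spread among the three ranks would satisfy the same description; the point is that closeness of the per-modality ranks, not their absolute values, is what the alignment score measures.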
Q3: Can you provide any explanation on why these occasional outliers appear?
A3: Thanks for your question. Actually, the observed deviations are within an acceptable range and do not affect the robustness of our findings. In particular, several factors may contribute to the presence of these outliers:
Training randomness: The training process of large language models inherently involves randomness. **Even within the same model family, slight differences in training dynamics can affect the performance in our evaluations.**
Outliers are not unexpected: It is worth noting that outliers are not unique to the rank difference. **Other commonly used metrics, such as loss and accuracy, also exhibit occasional outliers in their trends across different model sizes. For instance, OPT-6.7B gains lower performance on openbookqa than OPT-2.7B in Table 1.** Thus, outliers are not unexpected in the evaluation of LLMs with a single metric.
Despite these occasional outliers, the overall trend remains consistent and supports our conclusion that rank difference generally correlates with loss reduction and accuracy improvement as model size increases.
Q4: Was there any experiment on how the rank difference evolves over the training of a model, and whether that signal is informative about how long a model can be trained?
A4:
Thanks for your question. To address your concern and further investigate how "rank difference" changes during training, we continually train OPT-1.3B on a cleaned Wikipedia dataset. We present the results in the table below.
|Metrics/Training Stages|1|2|3|4|5|6|
|-|-|-|-|-|-|-|
|rank difference ($\uparrow$)|2.148|2.154|2.158|2.160|**2.161**|2.156|
|loss ($\downarrow$)|4.655|4.654|4.653|**4.653**|4.654|4.663|
|acc ($\uparrow$)|0.332|0.334|0.336|0.332|**0.340**|0.336|
The additional results indicate that the rank difference gradually increases during the model's training. The rank difference converges together with accuracy, later than the loss, which begins to fluctuate earlier. This suggests that the rank difference may be a useful metric for monitoring training progress, as it correlates better with the trend of accuracy.
Q5: Is there any explanation on why the different layers are not informative of the representation redundancy?
A5: Thanks for your comment. We would like to offer several potential explanations: LLM is an integrated system where information processing occurs across the entire architecture. **If we rely on early layers for analyzing the rank difference, this could lead to a loss of important information and we may miss crucial information processing that occurs in subsequent layers.** The last layer, on the other hand, integrates this information, providing a more complete representation of the input data. We indeed observe that early layers do not exhibit clear patterns in terms of the rank difference in our experiments. This underscores the importance of considering the model as a whole when analyzing the representation.
---
Rebuttal Comment 1.1:
Comment: Thank you for the response. I have read the other reviews and the authors' rebuttal, and updated my scores accordingly. It would be very useful to integrate the explanations (including the outliers, the training evolution, and the layer-wise explanation) in the next version of the paper.
---
Rebuttal 2:
Title: Thanks for raising your score.
Comment: Thank you for your reconsideration of our paper and the adjustment of the score. We are very grateful for your acknowledgment of the empirical contribution our work provides to the field. We assure you that the valuable suggestions and insights from you and other reviewers, as well as our explanations, will certainly be integrated into our revised version.
We sincerely appreciate the time and effort you've dedicated to this. Thanks again for your review and comments. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
VisionLLM v2: An End-to-End Generalist Multimodal Large Language Model for Hundreds of Vision-Language Tasks | Accept (poster) | Summary: The paper proposes a model that supports a wide range of multimodal tasks, beyond text generation. The approach leverages an LLM, modality specific encoders and task specific decoders to effectively handle different tasks. The LLM communicates with task decoders via “super link”, which consists of special tokens and soft prompts for each task.
Strengths: - The work is extensively evaluated on many benchmarks
- The model competes with other more specialized approaches
- The paper is well written and easy to follow
- The paper addresses an important problem, building efficiently a generalist models is still an open-research question
Weaknesses: 1. The paper title is a bit misleading. It gives the impression that the paper proposes a single model that can handle many tasks, while it is basically an agglomeration of many powerful pretrained models (CLIP, LLM, Stable Diffusion, UniPose…).
2. The contribution is limited. The super link is basically soft prompts (learnable tokens) and special tokens per task. Soft prompt is widely used in other approaches (shared learnable query as in InstructBLIP) or (learnable query per task as in eP-ALM/MAPL). For the special tokens, FROMAGe uses special tokens to handle multimodal outputs. The main contribution of the paper is in scaling this approach to many multimodal tasks. It is also important for the paper to discuss these similar approaches.
3. The paper lacks details about the model training. In particular which training objectives are used on top of the task-specific decoders? And how the different losses are weighted?
4. Did the authors experiment with one-stage training instead of the 3-stages training? I didn’t find any experiment to support this design choice
5. The paper claims "Multimodal In-Context Ability. our model … exhibits superiority over the previous in-context models …" However, there is no quantitative comparison to support the superiority of the proposed model in a few-shot ICL setting.
InstructBLIP: Dai, Wenliang, et al. "Instructblip: Towards general-purpose vision-language models with instruction tuning." Advances in Neural Information Processing Systems 36 (2024).
eP-ALM: Shukor, Mustafa, Corentin Dancette, and Matthieu Cord. "ep-alm: Efficient perceptual augmentation of language models." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
MAPL: Mañas, Oscar, et al. "Mapl: Parameter-efficient adaptation of unimodal pre-trained models for vision-language few-shot prompting." arXiv preprint arXiv:2210.07179 (2022).
FROMAGe: Koh, Jing Yu, Ruslan Salakhutdinov, and Daniel Fried. "Grounding language models to images for multimodal inputs and outputs." International Conference on Machine Learning. PMLR, 2023.
Technical Quality: 3
Clarity: 3
Questions for Authors: Check weaknesses section (e.g. 3-4)
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors discuss the limitations and the broader impact in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer nisa,
Thanks very much for your valuable comments. We hope our responses can address your concerns and clarify our contribution.
**Q1: The paper title is a bit misleading because the model is an agglomeration of many powerful pretrained models instead of a single model.**
**A1:** We consider a network that can be trained end-to-end to be a single model. In the deep learning community, it is common practice to link different pretrained models in one network, as in Flamingo, LISA, and GLaMM. We extend one-to-one linking to one-to-many linking, which should still be regarded as a single model, NOT an agglomeration of many powerful pretrained models.
**Q2: Limited contribution regarding the soft prompt and special tokens.**
**A2:** (1) InstructBLIP, ep-ALM, and MAPL use learnable queries (i.e., soft prompts) to connect the modality encoders and LLM. FROMAGe uses a special token for image-text retrieval so as to handle multimodal outputs, where the images are not generated from the network end-to-end. These works still remain constrained to **text-based outputs**. However, our work is significantly different from theirs in that we support hundreds of tasks by largely extending the output formats for MLLM, e.g., **box, mask, keypoint, text, and image**.
(2) When scaling to various tasks across a broad range of domains, we must address several challenges: (i) **precise decoder invocation**, (ii) **mitigating task conflicts**, and (iii) **efficient message transmission** in an end-to-end manner. Moreover, as the number of tasks increases, the difficulty of addressing these challenges also rises significantly. Our proposed super link is a simple but non-trivial solution. It provides a new approach for connecting multiple decoders and coordinating hundreds of tasks, which previous works have not explored. Specifically, **first**, the super link contains a routing token that focuses on scheduling decoders. It can precisely determine which decoder should be used after being trained on a large-scale corpus. **Second**, we use un-shared super-link queries for different decoders, which mitigates the issue of task conflicts. **Last**, super-link queries can be easily accessed by their corresponding decoder and act as conditions for different decoders in solving various tasks.
(3) Despite the simplicity of our super link, it is able to extend MLLMs to handle hundreds of tasks across various domains. Without redundant design, a simple yet effective method might provide clearer insights into this research topic. Therefore, we believe that our contributions to the study of generalists are clear and should not be overlooked. We will include the above discussion in the revised version.
**Q3: Details about model training.**
**A3:** (1) We have provided the model training details, e.g., training strategy and hyper-parameters, in Appendix D.
Regarding the training objectives for task-specific decoders, we have given brief descriptions in Appendix B.2. Specifically, we keep the original losses for the decoders. Grounding-DINO optimizes the combination of classification loss, box losses (including l1 loss and gIoU loss), and mask losses (including binary mask loss and DICE loss). The training objective for UniPose is the combination of classification loss, box losses (including l1 loss and gIoU loss), and keypoint losses (including l1 loss and OKS loss). For Stable-Diffusion and Instruct-Pix2Pix, we employ two MSE losses: one between the CLIP text features and mapping features, and the other one between the ground-truth noise and predicted noise.
(2) We simply sum up all the losses from the LLM and the task-specific decoders without reweighting each component, i.e., $L_{\text{total}} = L_{\text{llm}} + L_{\text{gdino}} + L_{\text{unipose}} + L_{\text{sd}} + L_{\text{ip2p}}$.
Thanks to the reviewer for the suggestion. We will add more detailed descriptions of training losses in the revised version.
**Q4: Experiment with one-stage training instead of three-stage training.**
**A4:** Please refer to common questions Q2 for the experiments and explanations. We will add this part in the revised version.
**Q5: Quantitative comparison for the in-context learning setting.**
**A5:** (1) In-context segmentation
To fairly compare with state-of-the-art methods, e.g., Painter [a1] and SegGPT [a2], we construct a benchmark based on the validation set of COCO2017, where the number of in-context examples used during inference ranges from 1 to 5. The results are shown in the following table.
| Method | mIoU |
| :----: | :----: |
| Painter [a1] | 44.26 |
| SegGPT [a2] | 54.25 |
| VisionLLM v2| 68.15 |
(2) In-context image captioning
We follow the same evaluation protocol as OpenFlamingo [a3] and use 4-shot to assess the performance between different methods. The validation set is built upon COCO2017. We present the results in the following table.
| Method | METEOR | CIDEr |
| :----: | :----: | :----: |
| OpenFlamingo [a3] | 13.80 | 104.61 |
| VisionLLM v2| 18.56 | 152.74 |
Generally, VisionLLM v2 exhibits clear performance advantages compared with state-of-the-art methods in both in-context learning settings, which demonstrates the superior in-context capacities of our method. These results will be updated in our final paper.
References
[a1] Wang, Xinlong, et al. "Images speak in images: A generalist painter for in-context visual learning." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
[a2] Wang, Xinlong, et al. "Seggpt: Segmenting everything in context." arXiv preprint arXiv:2304.03284 (2023).
[a3] Awadalla, Anas, et al. "Openflamingo: An open-source framework for training large autoregressive vision-language models." arXiv preprint arXiv:2308.01390 (2023).
---
Rebuttal Comment 1.1:
Comment: Thanks for detailed feedback. The reviewers addressed most of my concerns. I am still not convinced about the originality of the work. However, the work contains some important messages that could benefit the community. Thus, I will raise my score.
---
Rebuttal 2:
Title: Official Comment by Authors
Comment: Thank you so much for taking the time to re-evaluate our paper, thoughtfully considering our responses, and agreeing to raise the score. We will carefully take your valuable suggestions and continuously improve our paper. | Summary: This paper introduces an advanced multimodal large model (MLLM) that integrates visual perception, understanding, and generation in a unified framework. Unlike traditional models limited to text outputs, it expands its capabilities to tasks like object localization, pose estimation, and image generation and editing via a "super link" mechanism to connect the MLLM with task-specific decoders, facilitating flexible information and gradient feedback transmission while resolving multi-tasking training conflicts. The model is trained on data from hundreds of vision and vision-language tasks, allowing it to generalize across these tasks with shared parameters and achieve performance comparable to task-specific models. VisionLLM v2 aims to enhance the generalization of MLLMs.
Strengths: 1. The starting point of the paper is very good: enabling LLMs to use a variety of tools so that the model can handle various tasks.
2. Model's performance is excellent, reaching the state-of-the-art for general models across various tasks.
3. The experiments are extensive, involving a significant amount of engineering work, and integrating various tasks and datasets.
Weaknesses: 1. The training process is relatively complex, consisting of three stages, making it difficult to promote and use in industry. The method described in the paper seems like it could also be done in a single end-to-end training stage, similar to LLMs. [1] is a general vision model trained in one stage. Discussing the general models [1][2] trained in a one-stage manner with a pure language interface would be better. :)
2. In Table 3, the detection performance is basically on par with the single-task decoder performance later on. Does this imply that the performance mainly depends on the task-specific decoder? Is the method proposed in the paper primarily aimed at facilitating the integration and scheduling of various decoders? Will the computation of MLLM enhance the performance of the subsequent decoder?
3. The advantage of LLMs is not only their compatibility with various tasks but also their ability to mutually enhance different tasks, which is the so-called multi-task capability. In the method described in the paper, different tasks share the same MLLM and encoder. So, can joint training of different tasks, like GiT[1], lead to mutual enhancement?
Points 2-3 are just areas where I think there might be room for improvement, but they won't significantly affect my decision on whether to accept this paper. It's just for discussion. :)
[1] GiT: Towards Generalist Vision Transformer through Universal Language Interface. (ECCV2024)
[2] Fuyu-8B (blog)
Technical Quality: 3
Clarity: 3
Questions for Authors: This question is just to hear the authors' perspectives: Should a general model use a bridging approach (LLaVa or VisionLLM) or a simple multi-layer transformer (Fuyu8B, GiT, Gemini)? The former might be easier to implement but has complex training and scaling issues, while the latter has greater advantages in scaling and simpler training, similar to GPT. (Authors can answer freely without considering that the paper uses a bridging framework. This will not affect the final score.) :)
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is purely academic research and does not involve any potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer W77r,
Thanks a lot for your insightful reviews and support for our work! We hope our responses can address your questions.
**Q1: Complex training process.**
**A1:** We kindly invite the reviewer to refer to the common question Q2 for the details.
GiT [1] mainly focuses on visual tasks and uses a smaller scale of datasets (i.e., COCO, the RefCOCO series, COCO Caption, ADE20K) for training, and it is basically evaluated on these datasets without assessing its conversation ability. Fuyu-8B [2] is a decoder-only transformer trained for image-level visual question answering. These two works focus on a single task or closely related visual tasks, allowing their training to be completed in one stage.
**Q2: Does the performance mainly depend on the task-specific decoder? Does the proposed method aim at facilitating the integration and scheduling of various decoders? Will MLLM enhance the performance of decoders?**
**A2:** (1) The performance largely depends on the task-specific decoders. We propose the end-to-end framework to increase the openness of decoders. For example, we can extend the capacity of Grounding-DINO by adapting to more profound categories, more diverse domains, and more flexible textual conditions.
(2) The answer to the second question, "Does the proposed method aim at facilitating the integration and scheduling of various decoders?", is yes. We integrate several task-specific decoders in one network, and use MLLM to select the task-specific decoder properly.
(3) The advantage of the MLLM is its strong language-processing ability for interpreting and reasoning over the user's prompt. This endows the model with greater textual perception and sentence-reasoning capabilities, thereby enhancing the performance of tasks that rely on openness. We conducted ablation experiments in which we replaced the Vicuna-7B LLM with the stronger InternLM-20B. For the grounding task on RefCOCO, the model obtains a significant +2.8 points on P\@0.5, which meets our expectations. For object detection on COCO, however, we do not see a performance improvement; this is because a traditional language model, e.g., BERT, is sufficient for encoding the category information.
**Q3: Can joint training of different tasks lead to mutual enhancement?**
**A3:** Please refer to common question Q1 for the details. We use different decoders to complete various tasks, and we do not observe obvious mutual enhancement among tasks in this work.
**Q4: Should a general model use a bridging approach (LLaVA or VisionLLM) or a simple multi-layer transformer?**
**A4:** Very good question. From our perspective, the bridging approaches are easier to implement as a way to extend the LLM with capabilities beyond text output.
We agree with the reviewer that the simple multi-layer transformer architecture is more unified and concise and has the advantage of simpler training. However, at the current technology level, many tasks and modalities in the real world are hard to model with next-token prediction. For example, some research questions remain open: Do image patches follow a left-to-right order? How can structured outputs such as segmentation masks and protein structures be represented effectively?
In general, we believe it is necessary to integrate specialized models into general models to advance the field of generalist models in the short term.
---
Rebuttal Comment 1.1:
Comment: Thank you for the author's response. Regarding Q1, I'm sorry to say that I am not yet fully convinced. A single-stage training framework is essential for industrial applications; otherwise, the final paper might remain confined to academia. The frameworks that can be widely adopted in the industry are those that are simple and effective. Even if a single-stage training approach may reduce performance, the industry is likely to prefer it. We all hope to create work that is truly useful for the industry, not just a paper. Therefore, I hope the author will discuss this limitation in the camera-ready version, comparing it with single-stage works like GiT and Fuyu-8B. Looking forward to seeing the author's future work on refining and optimizing the training strategy.
---
Reply to Comment 1.1.1:
Title: Official Comment by Authors
Comment: Thank you for your thoughtful feedback. We design the three-stage training strategy in this work to effectively train the model from an initial vision encoder and a large language model, with the primary goal of maximizing performance.
We agree that training simplicity and efficiency are meaningful. As mentioned in common question Q2, our model has the potential to be trained in a single stage by adjusting the dataset sample ratios. We also believe the performance gap between the one-stage and three-stage strategies could be significantly reduced by starting from pretrained, strong vision-language models (e.g., LLaVA, InternVL). In a recent exploration, we directly load pretrained InternVL and augment it with Grounding-DINO. We carefully adjust the sample ratios of all combined datasets to simulate 1 epoch of training on chat data and 12 epochs of training on detection data, then train the model in a single stage to simplify the training pipeline. With this careful configuration, we observe much smaller performance decreases: -0.5 / -2.0 points on MMB EN/CN compared with the original InternVL, and -1.3 points for object detection on COCO compared with the original Grounding-DINO.
In the final version of our paper, we will include a detailed discussion on the limitations of our approach compared to single-stage works like GIT and Fuyu-8B, highlighting the trade-offs between performance and training simplicity.
Thank you once again for your valuable feedback. We will incorporate these considerations into our future research to explore simpler training strategies, with the hope that our work will ultimately bridge the gap between academic research and industrial application.
Strengths: The paper is well-written, and the idea is well-motivated. The evaluation results are generally sufficient.
Weaknesses: The paper lacks a comprehensive review of related work. Relevant works such as AnyGPT [r1], Chameleon [r2], the CM3 series, and other earlier publications that unify image generation in MLLM with image quantization and decoding should be reviewed and acknowledged.
Clarification is needed on how super-link queries are generated. For [DET] and [SEG], these can be boilerplate templates, but what about text-to-image generation? Are the super-link queries generated on-the-fly based on the context? Examples for [GEN], similar to the one at the bottom of Page 5, would be appreciated.
Regarding the joint multi-task training stage elaborated in Sec. 3.3, ablation studies on how each task benefits the others should be conducted. Does performing fine-grained vision tasks enhance the model's image generation capability, or do multimodal understanding and generation remain conflicting as shown in the literature? Do detection/segmentation tasks lead to better controllability or spatial reasoning for image generation? These questions need to be studied.
[r1] Zhan et al., AnyGPT: Unified Multimodal LLM with Discrete Sequence Modeling.
[r2] Chameleon Team, Chameleon: Mixed-Modal Early-Fusion Foundation Models.
Technical Quality: 3
Clarity: 3
Questions for Authors: see previous sections
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: appear to be sufficient
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear reviewer x7n8,
Thanks so much for your constructive comments and support for acceptance. We hope our responses can address your concerns.
**Q1: More related works.**
**A1:** Thanks for pointing out the missing related works. AnyGPT builds a multimodal, text-centric dataset for any-to-any multimodal generation (text, image, speech, music) with sequence modeling. Chameleon uses fully token-based representations for both text and images and is capable of understanding and generating interleaved image-text sequences. The CM3 series comprises autoregressive models for text-to-image and image-to-text tasks. All of these works unify image understanding and generation in one network, while our model supports a broader range of vision and vision-language tasks.
We will cite these works and add a discussion of them to the related work section of our paper.
**Q2: Clarification on how super-link queries are generated. Template example for text-to-image generation.**
**A2:** (1) Here is a template example for the text-to-image generation:
```
USER: Please generate an image with caption: a dog near the sea.
ASSISTANT: Sure, here it is [GEN].
```
We will add this example in our revised version.
(2) In the following, we would like to give clearer clarification about the generation of super-link queries.
The super-link queries are initialized as learnable weights (`nn.Embedding`). During inference, whenever the LLM predicts a special token such as [DET] or [GEN], the super-link queries are automatically appended after it. This means that, at the current generation step, the size of the input embeddings for the LLM is expanded from ``[1, c]`` to ``[1 + num_embeds, c]``, where `num_embeds` is the number of super-link queries and `c` is the hidden size of the LLM. So, it is true that the super-link queries are generated on-the-fly based on the context.
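A minimal sketch of this mechanism is below (a NumPy stand-in for the learnable `nn.Embedding` weights; the vocabulary size, hidden size, number of queries, and special-token id are all illustrative assumptions, not the model's actual configuration):

```python
import numpy as np

# Sketch of the on-the-fly expansion described above. In the real model the
# super-link queries are learnable `nn.Embedding` weights; here they are a
# fixed random matrix, and all sizes and SPECIAL_TOKEN_ID are assumptions.
rng = np.random.default_rng(0)
num_embeds, c, vocab = 4, 32, 128
SPECIAL_TOKEN_ID = 100                      # stands in for [DET] / [GEN]

superlink_queries = rng.standard_normal((num_embeds, c))
token_embedding = rng.standard_normal((vocab, c))

def step_inputs(predicted_token_id):
    """Input embeddings for the current generation step: if the LLM just
    predicted a special token, the super-link queries are appended after it,
    expanding the step input from [1, c] to [1 + num_embeds, c]."""
    tok = token_embedding[predicted_token_id][None, :]      # shape [1, c]
    if predicted_token_id == SPECIAL_TOKEN_ID:
        return np.concatenate([tok, superlink_queries], axis=0)
    return tok
```

For an ordinary token, `step_inputs` returns a `[1, c]` array; for the special token it returns `[1 + num_embeds, c]`, matching the expansion described above.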
**Q3: Study on how each task influences the others.**
**A3:** We have provided detailed responses in the common questions Q1. Based on the current experimental results, we have not yet observed the positive impact of vision tasks on image generation.
---
Rebuttal 2:
Title: Concerns addressed?
Comment: Dear reviewer, the authors have responded to your questions - are you satisfied by their response? If so, would you like to update your rating? Is there any more information you need from the authors to make that decision?
---
Rebuttal Comment 2.1:
Title: Questions have been addressed in the rebuttal
Comment: The rebuttal has addressed the questions from the initial review and the authors indicated corresponding revisions in the final version. Therefore, I maintain my original positive review.
---
Reply to Comment 2.1.1:
Title: Thanks for your positive feedback
Comment: We are glad we could address your questions. We sincerely appreciate the time and effort you put into reviewing our paper and providing valuable comments. We will ensure the revisions are incorporated into the final version. | null | null | Rebuttal 1:
Rebuttal: Dear all reviewers and ACs,
We sincerely thank you for all the time and effort in reviewing our paper and giving valuable comments. We are really encouraged that all the reviewers appreciate the good motivation, extensive experiments, strong performance, and clear presentation. We will first answer the common questions and then respond to each reviewer separately. We hope our responses can address your concerns.
We are happy to further discuss with you if there are still other concerns. Thanks for helping improve our paper.
### Common questions
**Q1: Multi-task benefiting.**
**A1:** (1) As indicated by previous works [a1, a2], different tasks with shared parameters may cause conflict with each other. This is mainly due to inconsistent optimization in multi-task learning. To validate this, we start from the same checkpoint and train the model on a single task (image VQA, instance segmentation, or image generation) for 1000 iterations. Then, we record the loss change for all three tasks. The results are presented in the following table, where the first column represents the training tasks, and the first row represents the testing tasks.
| Train \ Test | Image VQA | Inst Seg. | Image Gen. |
| :----: | :----: | :----: | :----: |
| **Image VQA** | - 0.01 | - 0.11 | - 0.04 |
| **Inst Seg.** | + 0.04 | - 0.12 | + 0.19 |
| **Image Gen.** | + 0.03 | + 0.02 | - 0.04 |
In the table, a decrease in the loss value indicates that training is beneficial for the task, while an increase indicates that it is detrimental. We observe that training on image visual question answering (VQA) is advantageous for all three tasks, which is reasonable since it enhances the conversation ability of the MLLM. In contrast, training exclusively on instance segmentation or image generation conflicts with the other tasks. Uni-Perceiver-MoE [a1] concluded that task conflicts are more significant closer to the output layer, which aligns with our findings.
(2) In our model, using different decoders did not reveal complementary effects during multi-task learning. Therefore, we employ unshared super-link queries to address task conflicts. Using shared super-link queries, on the other hand, can even lead to a decrease in performance, as illustrated in Figure 4 of the paper.
(3) Although integrating different decoders in one network may lead to conflicts, we would like to emphasize the importance of building an end-to-end generalist MLLM: (i) it extends the capabilities of the MLLM to complete different tasks beyond text outputs, and (ii) it increases the openness of the decoders to adapt to more diverse domains and more flexible text instructions.
References
[a1] Zhu, Jinguo, et al. "Uni-perceiver-moe: Learning sparse generalist models with conditional moes." Advances in Neural Information Processing Systems 35 (2022): 2664-2678.
[a2] Yu, Tianhe, et al. "Gradient surgery for multi-task learning." Advances in Neural Information Processing Systems 33 (2020): 5824-5836.
**Q2: One-stage v.s. three-stage training.**
**A2:** (1) The intrinsic reason for the three-stage training design is to preserve the conversation ability of the multimodal large language model (MLLM) while extending its other capabilities. However, this introduces a training conflict: the MLLM requires only 1 epoch of training on chat data to prevent overfitting, whereas the decoders need longer training (e.g., Grounding-DINO needs 12 epochs of training on visual data) to achieve convergence. Thus, we designed the three-stage training strategy: Stage 1 obtains an MLLM with strong conversation ability, Stage 2 finetunes the MLLM to acquire basic additional capabilities, and Stage 3 trains only the decoders until convergence.
(2) One possible solution for one-stage training is to assign a higher sample ratio to the visual data. In the following, we conduct an ablation study on the effect of one-stage vs. three-stage training. We use image-level chat data, COCO, and COCO-Pose for image understanding, instance segmentation, and pose estimation, respectively. For one-stage training, we repeat the COCO and COCO-Pose datasets 12 times. The three-stage training follows the same process as specified in the paper.
| | TextVQA | MME | MMB EN/CN | COCO | COCO-Pose |
| :----: | :----: | :----: | :----: | :----: | :----: |
| one-stage | 53.2 | 1284.4 | 61.9 / 51.4 | 54.9 / 44.6 | 74.1 |
| three-stage | 66.2 | 1507.1 | 77.8 / 68.5 | 56.3 / 47.6 | 74.2 |
As can be seen from the table, the conversation ability of the model is significantly decreased due to the extreme data imbalance, and the performance on instance segmentation and pose estimation is also slightly reduced. These results demonstrate the effectiveness of the three-stage training.
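The rebalancing described above (repeating the visual datasets 12 times within a single pass) can be sketched as follows; the dataset names and sizes are illustrative assumptions, not the actual training configuration:

```python
import random

# Sketch of a one-stage sampling schedule: visual datasets are repeated
# 12 times so that, within one pass, the decoders effectively see ~12
# "epochs" of visual data while chat data is seen only once.
# Dataset sizes below are made-up for illustration.
REPEATS = 12
datasets = {
    "chat": [f"chat_{i}" for i in range(1000)],
    "coco": [f"seg_{i}" for i in range(100)],
    "coco_pose": [f"pose_{i}" for i in range(100)],
}

def build_one_stage_schedule(datasets, repeats, seed=0):
    """One shuffled schedule mixing all datasets, with the visual
    datasets repeated `repeats` times."""
    pool = [("chat", s) for s in datasets["chat"]]
    for name in ("coco", "coco_pose"):
        pool += [(name, s) for s in datasets[name]] * repeats
    random.Random(seed).shuffle(pool)
    return pool

schedule = build_one_stage_schedule(datasets, REPEATS)
```

Each chat sample appears once in the schedule, while each visual sample appears 12 times, approximating the epoch imbalance discussed in the ablation.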
(3) We would like to emphasize that training a generalist model to support various tasks (multimodal conversation, object segmentation, pose estimation, image generation and editing, etc.) without performance degradation is a significant challenge in our work, especially considering that we leverage large-scale datasets from various domains. It is extremely difficult to achieve optimal performance for all tasks across a broad range of domains at the same training point. We dedicated substantial effort to addressing this issue, ultimately developing the three-stage training strategy. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Precise asymptotics of reweighted least-squares algorithms for linear diagonal networks | Accept (poster) | Summary: This paper studies the trajectory of a certain iterative scheme in the high-dimensional limit.
In each step, there are two updates on a fresh batch: 1) a regularized least-squares solve; 2) an element-wise nonlinear transformation.
This type of iteration is well motivated and encompasses several interesting schemes such as IRLS, AGOP, etc.
The result is a precise characterization of the empirical distribution of the iterates at any finite time in the proportional limit.
This allows one to compute many interesting summary statistics.
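A toy version of the generic two-step iteration described in this summary can be sketched as follows. This is an illustrative sketch only: the reweighting choice `v = |theta|` (roughly in the spirit of alternating minimization), the problem sizes, noise level, and ridge penalty are all assumptions, not the paper's exact equation (4):

```python
import numpy as np

# Toy fresh-batch reweighted least-squares iteration (sketch, not the
# paper's exact scheme): at each step, (1) solve a regularized least
# squares in the reparameterized variable, (2) apply an element-wise
# nonlinear reweighting. A fresh batch is drawn every iteration,
# mirroring the sample-splitting assumption.
rng = np.random.default_rng(0)
d, n, T, lam = 60, 40, 5, 1e-2
theta_star = np.zeros(d)
theta_star[:5] = 1.0                                  # sparse ground truth

v = np.ones(d)                                        # reweighting vector
for t in range(T):
    X = rng.standard_normal((n, d)) / np.sqrt(d)      # fresh batch
    y = X @ theta_star + 0.01 * rng.standard_normal(n)
    Xv = X * v                                        # scale columns by v
    # step 1: ridge-regularized least squares in the variable u
    u = np.linalg.solve(Xv.T @ Xv + lam * np.eye(d), Xv.T @ y)
    theta = v * u                                     # current estimate
    # step 2: element-wise nonlinear reweighting (assumed form)
    v = np.abs(theta)
```

Swapping the last line for other element-wise maps would recover the different special cases (IRLS-style weights, lin-RFM, etc.) that the paper collects in its Table 1.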
Strengths: Though the iteration studied in this paper may look exotic at the first glance, it is well motivated (see the first two pages).
The proof uses, unsurprisingly, CGMT since the problem is given in a form that's ready for it.
That said, working out the details is not a trivial task.
I went through the whole argument in the appendix and got the impression that everything is treated in a fairly rigorous way.
In particular, all technicalities associated with CGMT such as justifying the exchange of min / max and proving uniform convergence have been properly addressed.
In doing so, the authors have invoked various tricks / auxiliary results commonly used in this literature.
Weaknesses: As the authors have already discussed, one obvious drawback of CGMT is that it needs sample splitting (which may make the iteration under consideration incompatible with its motivating special cases).
I actually think that CGMT has the potential to handle repeated batches, or even full batch, though the resulting state evolution is likely to be much less interpretable.
So I don't really object to the sample splitting assumption.
Also, CGMT is capable of offering non-asymptotic guarantees.
I wonder if the authors are interested in doing so (no pressure).
Technical Quality: 4
Clarity: 4
Questions for Authors: Minor comments:
1. In the equation between line 66/67, the fresh dataset $\boldsymbol{y}^{(t)}, \boldsymbol{X}^{(t)}$ hasn't been defined (correct me otherwise).
1. Line 49, typo: is --> are
1. Line 164, typo: $t = 0, 1, \dots, T$
1. Equation between line 165/166, why not add superscript $(t)$ to $\boldsymbol{\epsilon}$?
After all, the noise is also fresh and is not shared by other batches.
1. Regarding boundedness of $\boldsymbol{\theta}^*$, as mentioned, it's only used in Lemma 3.
Over there, I wonder if it's possible to remove this by truncating $\boldsymbol{\theta}^*$ entry-wise at a large constant $K$, doing the proof (in particular the argument in line 682-692), then sending $K\to\infty$.
(To clarify, this is a minor assumption and removing it requires technical work that may not worth it.
So I don't mind if the authors leave it as is.)
1. Line 246-249, the authors meant to discuss a second difference from prior works.
What's done in this paper is discussed but what's done in prior work is not.
So the comparison is unclear.
1. There are two notation sections that are almost identical. I suggest remove one of them.
1. Line 462, what is a "proper function"? Do you mean a "proper convex function"?
1. Line 466, "restate": please put a reference for CGMT.
1. Equation between line 577/578, $V$ should be $V_t$.
1. Equation between line 579/580, $v_i$ should be $v_i^{(t)}$, $V$ should be $V_t$.
1. Line 580, typo: "satisfies".
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the thorough review of our technical results and constructive feedback. We agree that obtaining non-asymptotic guarantees for this problem would be a very interesting technical challenge for future work. Here, we rely explicitly on a distributional characterization of the iterates, rather than obtaining a state evolution over deterministic scalars. This makes obtaining non-asymptotic guarantees more challenging (although likely possible) after the first 1-2 iterations.
We address your individual questions below. Thanks for catching some typos and the missing CGMT reference - we will fix all of these in our revision. Regarding a few individual questions:
- Boundedness of $\theta^*$: Thanks for this suggestion! Yes, we do believe this kind of truncation argument should allow us to weaken the boundedness requirement on $\theta^*$ to some weaker regularity condition on the distribution. However, we seem to run into some issues when trying to apply a truncation result recursively for each iteration (since we end up needing convergence of the empirical distribution of $(v, \theta^*)$ in a higher order Wasserstein metric). We will continue working on this and update the manuscript if we find a way around it!
- Line 246-249: Thanks for the catch. The prior work [CLOT21] assumes convergence of the initialization in $W_4$ and proves convergence of the estimator in $W_3$, which would not work for a recursive application of the result. We will add this in the revision.
References:
[CLOT21] Chang et al., “Provable benefits of overparameterization in model compression: From double descent to pruning neural networks,” AAAI 2021.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their comments (including the global one).
I took a brief look at them and keep my evaluation unchanged. | Summary: The paper considers a general algorithm of the form shown in Lines 66-67, as it includes several interesting methods as special cases.
The main result is Theorem 1, which proves asymptotic convergence in probability of any function $g\in PL(2)$ to the corresponding expectation. One important special case of this result is the convergence of the test errors of the iterates.
Synthetic experiments show that the errors predicted by Theorem 1 align well with the actual performance of the algorithms.
Strengths: - The paper is well-written, easy to follow, and well-motivated.
- One interesting contribution is that the paper understands a few different lines of research and is able to unify them using a general algorithm, where the reweighting function can be chosen to implement a specific type of method.
- Theorem 1 is a powerful result as it holds for any function in the specific class $PL(2)$ or for any continuous and bounded function.
- The proofs do not seem to be trivial to me. While I am not familiar with the proof techniques, the paper illustrates the technical points in the paragraph of Line 240, which makes the high-level proof idea clear and provides cues for interested readers to go deeper into the technical details.
Weaknesses: - The first weakness is the assumption that each batch of data is independent. Personally, I feel that this is a very strict assumption, as I cannot imagine a situation where it is satisfied. For example, almost all optimization algorithms (say, SGD) reuse previously seen data. Perhaps it is difficult to remove this assumption, but more discussion of it could be beneficial. Specifically:
- Could the authors please provide experiments similar to Figure 1 and allow the algorithm to reuse the data? (it is not very clear to me whether Figure 1 takes fresh batches or not.)
- Could authors please comment on the technical hurdles if the independence assumption is not available?
- Prior analysis on IRLS does not have this assumption, see for example: Theorem 2 of "On the Convergence of IRLS and Its Variants in Outlier-Robust Estimation" and in particular Figure 2a shows the error prediction; Theorem 1 of "Globally-convergent Iteratively Reweighted Least Squares for Robust Regression Problems"; Theorem 3.2 of "Iteratively Reweighted Least Squares for Basis Pursuit with Global Linear Convergence Rate". Would their analysis be useful in any way to remove the independence assumption?
- The second weakness is that the experimental evaluations in their current form are not very deep. For example:
- Is the error bound prediction accurate for $\ell_2$ squared test loss? The paper commented that $\ell_2$ squared loss does not belong to $PL(2)$, but having this experiment is important as $\ell_2$ squared losses are more commonly seen in my opinion and this experiment can verify whether the assumption of $PL(2)$ is too strong.
- It seems a little bit weird that alternating minimization exhibits non-monotonic errors in Figure 1b (purple). Is that because these are test errors, or because the $\ell_1$ loss is a bit different from the training loss (objective function)? How about $\ell_2$ losses or the prediction (residual) errors on test (training) data? All these can be easily verified by experiments, and having them would give a better understanding of the behavior of alternating minimization, and it would also make the analysis more complete and convincing in my humble opinion.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Is it necessary to define pseudo-Lipschitz of order $k$? It seems that for most of the cases, $k$ is simply chosen to be $2$.
- Lines 185-186: What is the precise relation between $\alpha$ and $p$? It is not very clear why we need two different symbols.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: See above.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and constructive feedback. We address your individual questions and concerns below.
*(1) “Could the authors please provide experiments similar to Figure 1 and allow the algorithm to reuse the data? (it is not very clear to me whether Figure 1 takes fresh batches or not.)”*
We obtain all figures in the paper by performing the iteration in equation 4, which has a fresh batch at each step. We will be sure to specify this explicitly in each figure caption. In general, we do not expect the same exact behavior in the “full-batch” setting (indeed, the total amount of data observed is smaller by a factor of T), but a similar behavior holds when samples are randomly chosen at each iteration (with repetition allowed). Please see part (a) of our global response for more discussion and supplemental simulations for this point.
*(2) “Could authors please comment on the technical hurdles if the independence assumption is not available?”*
Please see part (a) of our global response for discussion of this point.
*(3) “Prior analysis on IRLS does not have this assumption…”*
We agree that prior non-asymptotic convergence results for IRLS do not have this assumption. However, we emphasize that the type of result we achieve is substantially different in flavor and relies on a very different set of technical tools. Unlike these works, we aim to provide an exact asymptotic characterization of the distribution of the iterates in the high-dimensional limit. While the tools required for this necessitate stronger assumptions on the data (Gaussian) and algorithm (online/sample-split setting), the resultant guarantees are much stronger and can apply to a wider range of problem settings and algorithms.
*(4) “Is the error bound prediction accurate for squared test loss?...”*
We have included simulations for the squared loss in Appendix D. Even though the squared loss is not PL(2), we find that the error prediction from Theorem 1 still appears to be accurate in this case. So, even though our result does not formally hold for the squared loss, we believe our theorem can still be useful for trajectory predictions in this setting.
*(5) “It seems a little bit weird that alternating minimization exhibits non-monotonic errors in Figure 1b…”*
We were also surprised by this behavior! We find that, rather than depending on the loss used ($\ell_1$ or $\ell_2$), this seems to depend on the noise level, with the non-monotonicity appearing more prominently in the low-noise regime (Fig. 1b). We agree that this deserves more mention in the main paper, so we will be sure to mention this in Section 3.1.
*(6) “Is it necessary to define pseudo-Lipschitz of order k?”*
It is true that we only use the choice k=2 in our analysis – we will simplify the definition accordingly.
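For reference, the standard definition of pseudo-Lipschitz functions from this literature (e.g., the AMP analysis of Bayati and Montanari), which we believe matches the paper's usage, is:

```latex
% f : \mathbb{R}^m \to \mathbb{R} is pseudo-Lipschitz of order k,
% written f \in \mathrm{PL}(k), if there exists L > 0 such that
|f(x) - f(y)| \le L\left(1 + \|x\|^{k-1} + \|y\|^{k-1}\right)\|x - y\|
\quad \text{for all } x, y \in \mathbb{R}^m.
```

Setting $k = 2$ reduces the growth factor to $1 + \|x\| + \|y\|$, which is the only case used in the analysis.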
*(7) “Lines 185-186: What is the precise relation between $\alpha$ and $p$?*
Thanks for the question – this reweighting function is equivalent to the one used by the IRLS-p algorithm from [MF12] with the choice $p=2-4\alpha$. The analysis in [DDFG10] also makes an explicit connection between IRLS updates of this form and $\ell_p$-minimization for $0<p\leq1$.
References:
[MF12] Mohan and Fazel, "Iterative reweighted algorithms for matrix rank minimization," JMLR 2012.
[DDFG10] Daubechies et al., "Iteratively reweighted least squares minimization for sparse recovery," *Communications on Pure and Applied Mathematics,* 2010.
---
Rebuttal 2:
Title: Reply
Comment: Dear authors, thank you for your reply to my comments.
I have taken a look at the rebuttal and also the comments from other reviewers.
I would like to keep my current, positive score, given the quality of the paper. | Summary: In this paper, the authors propose theoretical analysis of the high-dimensional dynamics of the reweighted least-squares methods in the context of "linear diagonal networks".
The general algorithm is given in Equation (4) and includes alternating minimization (AM), reparameterized IRLS, and linear recursive feature machines (lin-RFM) as special cases with different choices of reweighting function, see Table 1.
In the high-dimensional regime where the observation dimension $d$ and sample size $n$ are both large and comparable, the authors provide, in Theorem 1, a distributional characterization of the algorithm's behavior for a fixed number of iterations $T$ as $n,d \to \infty$ at the same pace.
The result is then extended to the multivariate setting in Theorem 2.
Simulation results are provided to support the proposed theoretical analysis.
Strengths: The paper is in general in good shape.
It is well written and easy to follow.
The proposed analysis is based on the Convex Gaussian Min-Max Theorem and improves on previous efforts.
Weaknesses: It seems that the problem under study is of interest, but it would be great if the author would better motivate the study and setting, e.g., by providing some take-home messages.
I have some detailed comments below
Technical Quality: 4
Clarity: 3
Questions for Authors: * line 77 contribution: I suggest that the authors could consider adding pointers to precise theoretical/simulation results when talking about contributions. For example, referring to Theorem 1 for the first contribution.
* line 91, perhaps add a paragraph on the related technical tool of the Convex Gaussian Min-Max Theorem beyond [7]: what is the technical challenge here, and are the obtained technical results of more general interest?
* line 172-175: is the assumption of $nT > d$ an artifact of the proof, or intrinsic to the obtained theoretical results? I somehow feel this is a bit unrealistic, or at least a regime that is not of great interest to study. Could the authors comment on this?
Also, this should be stated explicitly in the theorem.
* the authors mention the keyword "feature learning" a few times, but after reading the paper I did not find any in-depth discussion of it. Perhaps an additional remark could make the connection?
minor:
* line 197: of the algorithm in (4) or in Equation (4).
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: I do not see any potential negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the detailed review and constructive feedback. We address your individual questions and concerns below.
*(1) “It seems that the problem under study is of interest, but it would be great if the author would better motivate the study and setting, e.g., by providing some take-home messages.”*
Thanks for the feedback - please see part (b) of our global response for the main take-home messages we hope to convey. In particular, we would like to emphasize the flexibility of our framework in characterizing the test error of several previously proposed algorithms which have been shown to perform well for sparse recovery tasks or to connect with feature learning in linear neural nets. Our results also provide theoretical support for the use of “weight sharing” to learn signals with additional group-sparse structure. We plan to emphasize these points more explicitly in the final discussion section.
*(2) “line 77 contribution: I suggest that the authors could consider adding pointers to precise theoretical/simulation results when talking about contributions. For example, referring to Theorem 1 for the first contribution.”*
Thanks for the suggestion - in the revision, we will include the appropriate references.
*(3) “line 91, perhaps add a paragraph on the related technique tool of Convex Gaussian Min-Max Theorem beyond [7], what is the technical challenge here, and whether the obtained technical results are of more general interest?”*
While the CGMT has been used extensively in recent literature to analyze estimation problems, a few new technical challenges arise when applying the result to an entire optimization trajectory, since each step relies on the previous iterate. The proof in [CPT23] relies on the fact that the “auxiliary optimization” obtained via the CGMT can be written in terms of a scalar functional of the previous iterate. However, the LDNN parameterization does not fall into the class of problems covered by this analysis technique. To deal with this, we instead obtain a distributional characterization of the iterates at each step. We do believe the proof technique for obtaining Wasserstein convergence guarantees could extend to studying other iterative algorithms in the sample-split setting. We state a few of these points in the last paragraph of the related work (Lines 141-147), but we plan to add a short description of the proof aspects we use that may be of more general interest.
*(4) “line 172-175: the assumption of nT > d is an artifact of the proof, or very intrinsic to the obtained theoretical results? I somehow feel this is a bit unrealistic, or at least, a regime that is not of great interest to study. Could the authors comment on this? Also, this should be stated explicitly in the theorem.”*
The setting that we emphasize here is actually the scenario where $nT < d$, so the total amount of observed data is smaller than the dimensionality. In such settings, effective algorithms must be able to take advantage of the signal sparsity to achieve good performance. While we focus on this more interesting regime, our technical results do not depend on this assumption and in fact hold for any constant value of $\kappa = d/n$.
*(5) “the authors mentioned a few times the keyword "feature learning", but after reading the paper I did not find any in-depth discussion on this. Perhaps having an additional remark to make the connection?”*
Thanks for the feedback - please see part (b) of our global response about this point. We will be sure to add an additional remark about this in revision.
*(6) “line 197: of the algorithm in (4) or in Equation (4).”*
Thanks - we will fix this in the revision.
References:
[CPT23] Chandrasekher et al. “Sharp global convergence guarantees for iterative nonconvex optimization with random data,” The Annals of Statistics, 2023. | null | null | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and helpful feedback. We will carefully consider and incorporate all the suggestions into the next revision of this paper.
We address here some of the main comments:
**(a) Sample-splitting assumption:** We agree that the sample-splitting assumption deviates from the existing literature on IRLS-style algorithms and linear diagonal neural networks. The primary technical reason for this assumption lies in the application of the Convex Gaussian Min-Max Theorem after the first step of the optimization algorithm. In particular, the CGMT (cf. Theorem 3) requires an objective function in which the only dependence on the Gaussian random matrix is through a bilinear term. If the previous iterate also depends on the same data matrix, this condition is violated. However, we believe that this setting is still of practical interest, e.g., in online/streaming data settings where data is only available batch by batch.
Moreover, this assumption allows us to obtain much more precise results than previous works which only obtain upper bounds on the error. We emphasize also that this assumption is somewhat standard when trying to obtain rigorous asymptotic characterization of optimization trajectories using similar techniques (e.g., reference [CPT23]). We leave relaxations of this assumption to future work, since the resulting state equations would likely be much more complicated (as mentioned by Reviewer kkyZ), and the proofs may require substantial new machinery. We will be sure to include some of these remarks as discussion points in the revised paper.
To further assess the strength of this assumption, we have attached additional simulations which compare three types of batch selection (under the same hyperparameter choices as in Figure 1a):
- Fresh batches at each iteration: this is the setting we consider in our paper and for which our theoretical results are derived
- Randomly sampled batches (with possible repeats): this corresponds to choosing $n$ samples at each iteration from a global pool of $nT$ samples, with possible repetition of data across iterations.
- Same batch: using the same $n$ samples for each iteration
We find that the theoretical results derived for Setting 1 seem to remain accurate for Setting 2, even though the batches are no longer independent (formalizing this would be an interesting technical problem for future work). However, as expected, the predictions are too optimistic for Setting 3 (which only uses $n$ total samples during training, rather than $nT$).
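The three batch-selection schemes above can be sketched as follows. This is an illustrative toy only: a simple gradient-step-plus-hard-thresholding sparse recovery loop stands in for the actual algorithms analyzed in the paper, and all dimensions, step sizes, and function names are our own placeholders rather than the paper's setup.

```python
import numpy as np

def make_batches(n, T, rng, mode):
    """Index sets into a pool of n*T samples under the three batch-selection schemes."""
    if mode == "fresh":      # Setting 1: a new disjoint batch of n samples per iteration
        return [np.arange(t * n, (t + 1) * n) for t in range(T)]
    if mode == "resample":   # Setting 2: n samples drawn from the n*T pool, repeats allowed
        return [rng.choice(n * T, size=n, replace=True) for _ in range(T)]
    if mode == "same":       # Setting 3: the same first n samples reused at every iteration
        return [np.arange(n)] * T
    raise ValueError(mode)

def run(mode, n=100, T=8, d=200, k=5, seed=0):
    """Toy iterative sparse recovery (gradient step + hard thresholding) over T batches."""
    rng = np.random.default_rng(seed)
    X = rng.standard_normal((n * T, d)) / np.sqrt(n)   # rows roughly isometric per batch
    theta_star = np.zeros(d)
    theta_star[:k] = 1.0                               # k-sparse ground-truth signal
    y = X @ theta_star
    theta = np.zeros(d)
    for idx in make_batches(n, T, rng, mode):
        theta = theta - X[idx].T @ (X[idx] @ theta - y[idx])  # gradient step on the batch
        keep = np.argsort(np.abs(theta))[-k:]                 # keep the k largest entries
        pruned = np.zeros(d)
        pruned[keep] = theta[keep]
        theta = pruned
    return np.linalg.norm(theta - theta_star)          # distance to the true signal
```

Comparing `run("fresh")`, `run("resample")`, and `run("same")` mirrors the three settings, though only as a qualitative stand-in for the simulations attached to this rebuttal.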
**(b) Motivation and connections to feature learning:** We also plan to incorporate the helpful suggestions of Reviewer yJHL to improve the motivation and contextualization of our results. Some important take-home messages we hope to highlight for our results include (i) a rigorous framework for comparing several existing (and new) algorithms from the perspective of generalization error, (ii) favorable guarantees on the test error within a constant number of iterations (which is more optimistic than the convergence results common in the literature), and (iii) provable benefits of weight-sharing in LDNNs in the presence of structured sparsity.
As a further clarification, our use of the phrase “feature learning” refers to the ability of these algorithms to learn LDNN parameters which effectively recover sparse or group-sparse signals. Note that in the case of linear models, learning “good” features amounts to learning which subset of coordinates in the input are important, so feature learning in linear diagonal networks is equivalent to learning the low-dimensional structure of the true signal (e.g., sparsity or group-sparsity structure). See [RMD24] for further discussion of this connection and how it was used to construct the lin-RFM algorithm (which we analyze). We will be sure to add this connection more explicitly after lines 39-40.
References:
[CPT23] Chandrasekher et al. “Sharp global convergence guarantees for iterative nonconvex optimization with random data,” The Annals of Statistics, 2023.
[RMD24] Radhakrishnan et al. “Linear Recursive Feature Machines provably recover low-rank matrices,” arXiv preprint, 2024.
Pdf: /pdf/c9b5af25c91c059cd3cfba7b44f7118751d8e1b6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Membership Inference Attacks against Large Vision-Language Models | Accept (poster) | Summary: This paper focuses on the membership inference attack (MIA) of large vision language models. It creates multiple MIA datasets and proposes a new method to identify whether an image or a text belongs to the training dataset.
Strengths: 1. The authors have proposed an intriguing research problem, which focuses on the membership attack of large vision language models.
2. The paper proposes a creative method to verify if the input is a member of the training set. It also creates a new benchmark for evaluation.
3. The overall writing is clear and easy to follow.
Weaknesses: The setting for detecting a description sentence is a little confusing to me. Why do you feed the model a black image rather than other images, like those generated by DALL-E? Also, why do you skip the instructions here? It is unlikely that the target VLM will produce the target description based only on a black image and an empty instruction. Why not skip the input image?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. As illustrated in the weakness section, could you please explain the setting of MIA in the description sentence?
2. How to set the threshold value \lambda in the MIA process?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper has analyzed its limitations in the Appendix, which I believe is reasonable.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer rMMx for the insightful feedback. We address the concerns below.
---
> **Q1:** [The setting for detecting a description sentence is a little confusing to me. Why do you feed the model a black image rather than other images, like those generated by DALL-E? Also, why do you skip the instructions here? It is unlikely that the target VLM will produce the target description based only on a black image and an empty instruction. Why not skip the input image? Could you please explain the setting of MIA in the description sentence?]
**A1:** Note that our MIA framework detects text or images individually rather than image-description pairs. This setting is more practical: for example, we may want to know whether a private photo or address was used for commercial training. Naturally, when detecting text, we assume the attacker has no information about the paired image, so we cannot use images generated by DALL-E.
In line 109, we assume a grey-box setting on the target model, where the attacker can query the model with a custom prompt (including an image and text) and has access to the tokenizer, the output logits, and the generated text. Therefore, we are not allowed to extract the underlying language model from the VLLM. Additionally, if we pass None or an empty input in place of the image to a VLLM such as LLaMA Adapter v2, an error is raised. We also note that in the open-source code of LLaMA Adapter, when text-only data is used for finetuning, the model is supplied with an all-zero image. For consistency with our experimental setting, in which both image and text modalities are provided as inputs, we supply an all-zero (black) image to all models.
The reason we adopt an empty instruction is that, given an all-zero (black) image and a common instruction such as "Describe this image in detail", VLLMs tend to respond with "This image is black". This would introduce additional bias into our description-text MIA task, so we use the empty instruction.
---
> **Q2:** [How to set the threshold value \lambda in the MIA process?]
**A2:** For evaluation, we use the AUC score and the TPR@low-FPR score as metrics. Reporting these scores is standard in prior MIA work, such as [1]. Note that both are threshold-independent: the AUC is the area under the ROC curve, which plots the true positive rate against the false positive rate across all threshold settings. So we do not need to select a specific $\lambda$ when comparing different MIA methods (classifiers). If the task is instead to perform MIA with a specific method rather than to compare methods, we can perform a hyperparameter search over a small validation set to select $\lambda$. We will add this explanation of the metrics in the revised version.
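As a concrete illustration of why these metrics are threshold-free, here is a minimal sketch with synthetic Gaussian scores standing in for actual MIA scores; the function names and the 5% FPR budget are our own illustrative choices, not from the paper's code:

```python
import numpy as np

def auc_and_tpr(member_scores, nonmember_scores, fpr_budget=0.05):
    """Threshold-free MIA evaluation: AUC integrates over all thresholds lambda,
    and TPR@FPR fixes a false-positive budget instead of a single lambda."""
    m = np.asarray(member_scores, dtype=float)
    n = np.asarray(nonmember_scores, dtype=float)
    diff = m[:, None] - n[None, :]
    auc = (diff > 0).mean() + 0.5 * (diff == 0).mean()  # P(member score > non-member score)
    lam = np.quantile(n, 1.0 - fpr_budget)              # threshold admitting ~5% false positives
    tpr = (m > lam).mean()
    return auc, tpr

rng = np.random.default_rng(0)
members = rng.normal(1.0, 1.0, 500)     # hypothetical scores: members score higher on average
nonmembers = rng.normal(0.0, 1.0, 500)
auc, tpr = auc_and_tpr(members, nonmembers)
```

The AUC here sweeps implicitly over every possible $\lambda$, which is why no single threshold needs to be chosen for method comparison.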
---
If the reviewer rMMx has any remaining concerns, we are happy to clarify further.
---
**Refs**
[1] Shi, Weijia, et al. "Detecting Pretraining Data from Large Language Models." ICLR 2024.
---
Rebuttal Comment 1.1:
Title: Response to authors
Comment: Thanks for the detailed response. Although some presentations can still be revised, it is an interesting paper on its threat model design. Therefore, I will maintain my current score. | Summary: This paper presents an interesting idea of detecting training data in large vision-language models (VLLMs) through membership inference attacks (MIAs). The authors introduce the following interesting points :
- An MIA benchmark, specifically designed for VLLMs, called Vision Language MIA (VL-MIA). This benchmark includes tasks for both image and text detection across various VLLM architectures.
- A pipeline for MIAs on VLLMs. This approach does not require both side information as in the prior work. This includes a new detection metric called MaxRényi-K%.
- Extensive evaluation results. The authors demonstrate the effectiveness of their approach to diverse VLLMs.
Strengths: This paper is well-written and proposes an interesting approach that can be applicable to one side of information for MIA for VLLMs, which might not be well-studied in prior works.
The authors also provide a benchmark for evaluating MIA on VLLMs and present extensive results based on the proposed benchmark.
Weaknesses: This paper requires further clarification regarding the proposed methods and evaluation results. Please see the following questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: Regarding dataset generation:
If the member data is drawn from LAION_CCS and the non-member data is drawn from the generated dataset, is there any underlying bias caused by the inherent differences between natural and synthetic data (in VL-MIA/DALL-E)? Moreover, why did we choose instruction text for members and generated text for non-members in VL-MIA/Text?
Regarding the evaluation:
Based on Table 2, it seems that different datasets show different behaviors. For example, for VL-MIA/Flickr, Max_100% shows the best performance in most cases, and the performance gap is quite large. On the other hand, we cannot see a similar observation from VL-MIA/DALL-E. I wonder whether the authors have some insight into why it shows a clear difference between datasets.
Moreover, it seems that we need to vary a lot of parameters to find better performance (e.g., different alpha, and different K). How did the authors choose the hyperparameters from other baselines? Since the authors present a new benchmark, it might not be a good choice to use optimal hyperparameters from other sources.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss the limitations and broader impacts in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer e87W for the insightful feedback. We address the concerns below.
---
> **Q1:** Is there any underlying bias caused by the nature of differences between natural and synthetic data (in VL-MIA/DALL-E)?
**A1:** This is an interesting question and thanks for pointing it out. It is hard to ensure that member and non-member data come from exactly the same distribution. We considered several possible biases when constructing our datasets.
- One bias is that the member and non-member data might contain different identities, making the separation trivial. We call this the identity bias. To resolve it, we use DALL-E to generate non-member images based on the descriptions of the member images, ensuring that the member and non-member images in the same pair contain the same identity, such as a basketball.
- Admittedly, as you pointed out, there might be other unresolved biases, such as the fact that the member data are real (not AI-generated) images while the non-member data are fake (AI-generated) images. We call this the reality bias. To resolve it, we provide "VL-MIA/Flickr", which divides member and non-member images based on release time, following the prior work [1], which uses the latest wiki text as non-members.
Despite the efforts above to reduce bias, we additionally produced two new datasets designed to eliminate bias by using i.i.d. samples from the same distribution. We synthesize two new MIA datasets: **the Geometry dataset** and **the Password dataset**. An image in the Geometry dataset consists of a random 4x4 arrangement of geometric shapes, and an image in the Password dataset consists of a random 6x6 arrangement of characters and digits from (E)MNIST. The associated text is the corresponding content (e.g., characters, colors, shapes). We select half of each dataset, which serves as the member set, to finetune the VLLM, while the remaining half is the non-member set. This approach ensures that members and non-members are **i.i.d. sampled**, eliminating potential bias, and it can be applied to **any VLLM** through simple finetuning. We provide some examples of this dataset in [Figure 2 of the PDF](https://openreview.net/forum?id=nv2Qt5cj1a&noteId=4ygbDFCzeE).
We conduct the image MIA by finetuning LLaMA Adapter v2.1 following the finetuning instructions provided by the authors. The results are presented in [Table 4 of the PDF](https://openreview.net/forum?id=nv2Qt5cj1a&noteId=4ygbDFCzeE). We observe that our Modified Rényi and Max Rényi still outperform the previous baselines.
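The i.i.d. member/non-member splitting logic behind such a synthetic dataset can be sketched as below. Only the label generation and splitting are shown; rendering the character grids into images (e.g., from EMNIST glyphs) is omitted, and all names are illustrative rather than from the released code.

```python
import random
import string

def make_password_grid(rng):
    """One sample: a 6x6 grid of random uppercase characters/digits and its text label."""
    grid = [[rng.choice(string.ascii_uppercase + string.digits) for _ in range(6)]
            for _ in range(6)]
    label = " ".join("".join(row) for row in grid)   # the text content paired with the image
    return grid, label

def build_split(n_total=1000, seed=0):
    """Draw i.i.d. samples from one distribution, then split half/half into
    a member set (used for finetuning) and a held-out non-member set."""
    rng = random.Random(seed)
    samples = [make_password_grid(rng) for _ in range(n_total)]
    rng.shuffle(samples)
    return samples[: n_total // 2], samples[n_total // 2 :]
```

Because every sample is drawn from the same generator before the split, any separation an MIA achieves must come from the finetuning itself rather than from a distributional gap.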
---
> **Q2:** [Why did we choose instruction text for members and generated text for non-members in VL-MIA/Text?]
**A2:** Sorry for the confusion; we will clarify this in the final release. Recall that in the instruction-tuning stage of a VLLM, every data entry contains an image, a question, and the corresponding answer about the image. We use the answer text as member data, and GPT-4-generated answers to the same question and image as non-member data. Specifically, for LLaVA v1.5 and LLaMA Adapter v2, we use the answers in LLaVA v1.5's instruction tuning as member data. (The word "instruction" in Table 1 of the main body is a typo and will be corrected to "instruction-tuning text".) For MiniGPT-4, we use the answers in MiniGPT-4's instruction tuning as member data. We will revise the confusing content in Table 1 in the final release.
---
> **Q3:** Insight on the parameters $\alpha$ and $K$. How to select these parameters?
**A3:** In short, $\alpha$ controls the aggregation from the next-token distribution at some token to a single entropy score, and $K$ controls the aggregation from a sequence of entropy scores to the final score for this sequence.
- The parameter $\alpha$ controls how the entropy will represent the next token probability distribution at the current token. For example, $\alpha=0$ treats all non-zero next-token probabilities equally and $\alpha=\infty$ only involves the largest next-token probability.
In Table 2 of the main body, we find that $\alpha = 0.5$ is the best choice for image MIA, while in Table 3, $\alpha = \infty$ is the best choice for text MIA.
- The parameter $K$ determines the percentage of tokens in a sequence with the highest entropy scores that are selected to compute the final score. In the experiment section of the paper, we find that different sequences have different representative parts, and therefore different $K$ may be applied.
In all, the optimal $K$ and $\alpha$ are largely determined by different data modalities and distributions. We propose this family of criteria to accommodate different possible data distributions. Therefore, similar to prior work [1], we suggest using a validation set and performing a small sweep over different $K$ and $\alpha$ to select the optimal parameters.
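This two-level aggregation can be sketched as follows, assuming access to the per-token next-token probability distributions from the target model; the function names are ours, and exact normalization details may differ from the paper:

```python
import numpy as np

def renyi_entropy(p, alpha, eps=1e-12):
    """Rényi entropy of order alpha for one next-token probability distribution p."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0)
    if alpha == 1.0:                 # Shannon entropy as the alpha -> 1 limit
        return float(-np.sum(p * np.log(p)))
    if np.isinf(alpha):              # min-entropy: only the largest probability matters
        return float(-np.log(p.max()))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

def max_renyi_k(probs, alpha, k_percent):
    """MaxRényi-K%: mean of the K% largest per-token Rényi entropies of a sequence.
    `probs` has shape (seq_len, vocab): one next-token distribution per position."""
    ents = np.array([renyi_entropy(p, alpha) for p in probs])
    k = max(1, int(round(len(ents) * k_percent / 100.0)))
    return float(np.sort(ents)[-k:].mean())
```

For a uniform next-token distribution over a vocabulary of size $V$, every order $\alpha$ gives $\log V$, while confident (peaked) predictions drive the per-token entropies, and hence the score, toward 0.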
---
If the reviewer e87W has any remaining concerns, we are happy to clarify further.
---
**Refs**
[1] "Detecting Pretraining Data from Large Language Models." ICLR 2024.
---
Rebuttal Comment 1.1:
Title: response to authors
Comment: Thank the authors for your efforts in responding to questions and conducting additional experiments. I still have some questions as follows:
In the Text MIA experiments along with DALL-E, the authors compare generated data and real-world data, which might already contain underlying biases that could affect performance. I understand that the authors also used Flickr by splitting the data around the target model's release date, and it has even shown better performance with the authors' proposed method (e.g., Flickr has consistently outperformed results from Rényi) than others. Also, as shown in the additional results from the PDF, the performance difference on synthetic data seems marginal.
In this case, which dataset should we rely on more if the two datasets show different trends?
For example, if we can find one real-world text pair (before and after the target models' release date), and it also shows better performance with the proposed method, as presented in image experiments, why is there a difference? Which one should we trust more if the authors want to present a benchmark?
---
Reply to Comment 1.1.1:
Title: response to reviewer e87W
Comment: We appreciate the insightful feedback from reviewer e87W. Note that our results across all datasets exhibit a consistent trend. Specifically, in Table 4 of the rebuttal PDF, our proposed Rényi metric demonstrates a considerable improvement in AUC on the newly constructed synthetic dataset: from 0.65 to 0.69 with the description slice, and from 0.55 to 0.65 with the instruction+description slice, compared to the previous baselines. From Table 2 of our paper, we also note that "Rényi ($\alpha = 0.5$)" generally outperforms previous baselines in image MIA on both the Flickr and DALL-E datasets.
Regarding the choice of datasets, we believe **both** the synthetic dataset and the Flickr dataset **should be considered**. Previous literature in machine learning [1,2,3] suggests that evaluating across multiple datasets, such as MNIST, CIFAR-10, ImageNet, and synthetic (Gaussian) data, demonstrates a method's effectiveness in various scenarios. In our benchmark, the synthetic and real-world datasets each have their own benefits. The advantage of our synthetic Geometry and Password datasets is that they adhere to the i.i.d. assumption on member and non-member data, and membership can be fully determined. Meanwhile, the Flickr dataset aligns more closely with real-world data distributions. We verify our method's performance on both synthetic and real-world datasets.
---
Reference:
[1] "Membership inference attacks against machine learning models." 2017.
[2] "Descending through a crowded valley-benchmarking deep learning optimizers." 2021.
[3] "ViLLA: Fine-Grained Vision-Language Representation Learning from Real-World Data." 2023. | Summary: The paper introduced a benchmark for membership inference attack on VLMs, proposed a pipeline for token-level image detection, and proposed a target-free metric for image MIA detections. The pipeline relies on the fixed sequence of the VLM output to obtain the output image, instruction, and description segments of logits and use them for evaluation metrics.
Strengths: 1. Novelty: There are no existing MIA benchmark datasets for VLMs. The authors also proposed new metrics for detecting membership in a single modality, especially for images.
2. The paper conducts extensive experiments with their proposed methods.
Weaknesses: 1. The explanation of some concepts is not clear. For example, when proposing target-free MIA metrics, the authors mention that this is because we only have access to the image embeddings but not the image tokens. How are these two terms defined here? Why do we not have access to the image tokens? Additionally, image tokens are defined again in line 95. If this is not available, why does the paper need to define the concept again? I think the authors should clarify these assumptions properly in the main text.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. For the cross-modal pipeline for detecting images, how do you determine which logits are for image, instructions, or description text, during the attack phase?
2. The proposed metrics can have different K and $\alpha$ values. Are there any ablation studies on when should what K and $\alpha$ values being used?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors clarified the limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer FZ3m for the insightful feedback. We address the concerns below.
---
> **Q1:** Clarification on image embeddings but not image tokens. Why do we not have access to the image tokens?
**A1:** As shown in the illustrative [Figure 1 of the PDF](https://openreview.net/forum?id=nv2Qt5cj1a&noteId=4ygbDFCzeE), the VLLMs consist of a vision encoder, a text tokenizer, and a language model. The output of the vision encoder (the image embedding) has the same embedding dimension $d$ as the text token embeddings.
When feeding the VLLMs with an image and the instruction, a vision encoder transforms this image to $L_1$ hidden embeddings of dimension $d$, denoted by $E_{\rm image} \in \mathbb{R}^{d \times L_1}$. The text tokenizer first tokenizes the instruction into $L_2$ tokens and then looks up the embedding matrix to get its $d$-dimensional embedding $E_{\rm ins} \in \mathbb{R}^{d \times L_2}$. The image embedding and the instruction embedding are concatenated as $E_{\rm img-ins} = (E_{\rm image}, E_{\rm ins}) \in \mathbb{R}^{d \times (L_1+L_2)}$, which are then fed into a language model to perform next token prediction. The cross-entropy loss (CE loss) is calculated based on the predicted token id and the ground truth token id on the text tokens.
We can see that in this process, image embeddings are directly obtained from the vision encoder and there are no image tokens. There are no causal relations between consecutive image embeddings as well. Therefore, as we stated in Section 1, target-based MIA that requires token ids cannot be directly applied.
In the final version, we will add the detailed description as well as the illustrative figure in the appendix.
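The shapes involved can be illustrated with a toy numpy sketch; the dimensions below are arbitrary placeholders, not the actual model sizes:

```python
import numpy as np

d, L1, L2 = 8, 4, 3                         # toy sizes: embedding dim, image/instruction lengths
E_image = np.random.randn(d, L1)            # vision-encoder output: embeddings, not token ids
E_ins = np.random.randn(d, L2)              # instruction token embeddings from the tokenizer
E_img_ins = np.concatenate([E_image, E_ins], axis=1)  # fed jointly to the language model
```

Since `E_image` is produced directly by the vision encoder, there are no discrete image token ids anywhere in this pipeline, which is why target-based MIA cannot be applied to the image positions.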
---
> **Q2:** For detecting images, how do you determine which logits are for image, instructions, or description text, during the attack phase?
**A2:** Similarly to the figure in Q1, there is a one-to-one correspondence between the logit and input. For example, given image embedding with token length $L_1$, instructions with length $L_2$, and description text with length $L_3$, the language model will output logits of the shape $(L_1+L_2+L_3)\times |\mathcal{V}|$, where $\mathcal{V}$ is the vocabulary set, we can access the logits of the image by the slice $[0:L_1]$, the logits of instruction by the slice $[L_1:L_1+L_2]$, and the logits of description by the slice $[L_1+L_2: L_1+L_2+L_3]$.
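The slicing described above, as a toy numpy sketch (segment lengths and vocabulary size are placeholders):

```python
import numpy as np

L1, L2, L3, V = 4, 3, 5, 32                 # toy segment lengths and vocabulary size
logits = np.random.randn(L1 + L2 + L3, V)   # one row of next-token logits per input position

image_logits = logits[0:L1]                 # positions corresponding to image embeddings
instruction_logits = logits[L1:L1 + L2]     # positions corresponding to instruction tokens
description_logits = logits[L1 + L2:]       # positions corresponding to description tokens
```

Because the input order image → instruction → description is fixed and the lengths are known at attack time, the segment boundaries are determined exactly.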
---
> **Q3:** How to choose $K$ and $\alpha$ for the MaxRényi-K% metric?
**A3:** In short, $\alpha$ controls the aggregation from the next-token distribution at some token to a single entropy score, and $K$ controls the aggregation from a sequence of entropy scores to the final score for this sequence.
- The parameter $\alpha$ controls how the entropy will represent the next token probability distribution at the current token. For example, $\alpha=0$ treats all non-zero next-token probabilities equally and $\alpha=\infty$ only involves the largest next-token probability.
In Table 2 of the main body, we find that $\alpha = 0.5$ is the best choice for image MIA, while in Table 3, $\alpha = \infty$ is the best choice for text MIA.
- The parameter $K$ determines the percentage of tokens in a sequence with the highest entropy scores that are selected to compute the final score. In the experiment section of the paper, we find that different sequences have different representative parts, and therefore different $K$ may be applied.
In all, the optimal $K$ and $\alpha$ are largely determined by different data modalities and distributions. We propose this family of criteria to accommodate different possible data distributions. Therefore, similar to prior work [1], we suggest using a validation set and performing a small sweep over different $K$ and $\alpha$ to select the optimal parameters.
---
If the reviewer FZ3m has any remaining concerns, we are happy to clarify further.
---
**Refs**
[1] "Detecting Pretraining Data from Large Language Models." ICLR 2024. | Summary: The rise of large vision-language models (VLLMs) has significantly advanced multi-modal tasks but also brought forth concerns about data security and privacy. This paper introduces a novel membership inference attack (MIA) benchmark specifically designed for VLLMs to detect training data, addressing the lack of standardized datasets and methodologies in this domain. The authors propose a new MIA pipeline for token-level image detection and introduce the MaxRényi-K% metric for improved detection. The key contributions include the development of the first VLLM-specific MIA benchmark, a cross-modal MIA pipeline for individual image or description detection, and the new MaxRényi-K% and ModRényi metrics, which show effectiveness in experiments.
Strengths: - The paper introduces the first benchmark specifically tailored for VLLMs in the context of MIAs.
- The MaxRényi-K% and ModRényi metrics demonstrate significant effectiveness in detecting training data.
Weaknesses: - **Small Evaluation Dataset**: The MIA evaluation dataset consists of around 600 images for each evaluation, which is relatively small. This limited dataset size can lead to less statistically robust results and may not fully capture the variability and challenges present in real-world scenarios. It would strengthen the paper to include evaluations on larger and more diverse datasets.
- **Lack of Standard Metrics**: The paper does not consider the _TPR at low FPR_ (True Positive Rate at low False Positive Rate) metric, which is a standard for evaluating worst-case membership privacy. Including this metric would provide a more comprehensive assessment of the model's privacy risks, especially in high-stakes applications where false positives must be minimized.
- **Generalizability**: While the proposed metrics and methods show effectiveness, it's important to discuss their generalizability to other types of vision-language models and datasets. Providing insights or experiments on different architectures or domains could enhance the applicability of the findings.
Technical Quality: 3
Clarity: 3
Questions for Authors: - How to choose $K$ for the MaxRényi-K% metric?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The size and the diversity of the benchmark is limited.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer M6Nm for the insightful feedback. We address the concerns below.
---
> **Q1:** Small Evaluation Dataset.
**A1:** We extend both VL-MIA/Flickr and VL-MIA/Text to 2000 samples. The results on the extended datasets can be found in [Table 2 and Table 3 of the PDF](https://openreview.net/forum?id=nv2Qt5cj1a&noteId=4ygbDFCzeE), where we observe the same trend as in our original results. We also introduce two new datasets, as shown in A3 below. We will release these complete datasets in the final version.
> **Q2:** Does not consider the TPR at low FPR.
**A2:** Please see the TPR at 5% FPR results in [Table 1 of the PDF](https://openreview.net/forum?id=nv2Qt5cj1a&noteId=4ygbDFCzeE). We will add these results in our final version. They are aligned with the AUC results in Table 1 of the paper, and $\alpha=0.5$ is the optimal choice for image MIA.
> **Q3:** Generalizability to other types of vision-language models and datasets.
**A3:** We thank the reviewer for pointing out this interesting question. To make our benchmark more comprehensive, we synthesize two new MIA datasets; see the [general response](https://openreview.net/forum?id=nv2Qt5cj1a&noteId=4ygbDFCzeE) for details.
---
> **Q4:** How to choose $K$ for the MaxRényi-K% metric?
**A4:** In short, $K$ controls the aggregation from a sequence of entropy scores to the final score for this sequence.
The parameter $K$ determines the percentage of tokens in a sequence with the highest entropy scores that are selected to compute the final score. In the experiment section of the paper, we find that different sequences have different representative parts, and therefore different $K$ may be applied.
In all, the optimal $K$ is largely determined by different data modalities and distributions. We propose this family of criteria to accommodate different possible data distributions. Therefore, similar to prior work [1], we suggest using a validation set and performing a small sweep over different $K$ and $\alpha$ to select the optimal parameters.
---
If the reviewer M6Nm has any remaining concerns, we are happy to clarify further.
---
**Refs**
[1] "Detecting Pretraining Data from Large Language Models." ICLR 2024.
---
Rebuttal Comment 1.1:
Title: Thank you for the response
Comment: Thanks for the response and the updated results.
Using a validation dataset to tune hyperparameters is a reasonable approach. However, this method assumes that the attacker has collected some member and non-member instances in advance. I’m curious about how practical this assumption is in real-world scenarios. I would appreciate it if the authors could provide further clarification on this point.
---
Rebuttal 2:
Comment: We thank reviewer M6Nm for the meaningful feedback. Regarding the concern that our method assumes the attacker has collected some member and non-member data in advance, we explain that this assumption is practical in real-world scenarios. For open-source models trained on open-source data, we can confidently obtain both member and non-member data, as demonstrated in Section 4 of this paper. For closed-source models, such as GPT-4, we speculate that commonly used datasets, such as MS COCO, and a wide collection of copyrighted materials [1] are used as training data.
Note that all of the baseline methods (e.g., PPL, min-k) also require a validation set to estimate a threshold ($\lambda$ in Equation 1) to determine membership. The above choice of member and non-member dataset can be universally used for all MIA methods.
We will add these clarifications after line 305 in the revised version to enhance understanding of member and non-member data selection.
Refs:
[1] "Speak, memory: An archaeology of books known to chatgpt/gpt-4." arXiv 2023.
---
Remark: We have further **extended our benchmark size to 10k** and observed similar results, which will be released in the final version. In addition, our heuristic findings in the paper suggest that for text MIA one can select a smaller $K$, e.g., 0 or 10, while for image MIA one can select $K=100$.
Title: Clarification on the assumption
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: Thank you for the reply. It appears that the threshold-based MIAs rely on somewhat strong assumptions, particularly the need for a validation dataset, which may limit their broader applicability. However, I still believe this paper offers valuable new insights into the privacy issue of vision-language models.
I will maintain my current score. | Rebuttal 1:
Rebuttal: Dear reviewers,
We appreciate your insightful comments. The attached PDF contains the necessary figures and tables corresponding to each individual response below.
During the rebuttal period, we expand our benchmark by incorporating new diverse datasets, as motivated by reviewers **e87W** and **M6Nm**. Specifically, to make our benchmark more comprehensive, we synthesize two new MIA datasets: **the Geometry dataset** and **the Password dataset**. Each image in the Geometry dataset is a random 4x4 arrangement of geometrical shapes, and each image in the Password dataset is a random 6x6 arrangement of characters and digits from EMNIST [1] and MNIST. The associated text is the corresponding content (e.g., characters, colors, shapes). We select half of each dataset as the member set, used to finetune the VLLM, while the remaining half is the non-member set. This approach ensures that members and non-members are **i.i.d. sampled**, eliminating potential bias, and it can be applied to **any VLLM** with diversity and generalization via simple finetuning. We provide some examples of this dataset in Figure 2 of the PDF.
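The member/non-member split described above can be sketched as follows; the function name and protocol details are our illustration of the procedure, not the authors' code:

```python
import random

def iid_member_split(samples, seed=0):
    """Shuffle a synthetic dataset and split it in half: the first half
    finetunes the VLLM (member set), the second half is held out
    (non-member set). Because both halves come from the same generator,
    members and non-members are i.i.d., avoiding distribution-shift bias
    in the MIA evaluation."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]
```

The i.i.d. property is the key design choice: any score gap between the two halves can then be attributed to finetuning alone.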
We conduct the image MIA by finetuning LLaMA Adapter v2.1 following the finetuning instructions provided by the authors. The results are presented in Table 4 of the PDF. We observe that our Modified Rényi and Max Rényi still outperform the previous baselines.
---
Refs
[1] "EMNIST: Extending MNIST to handwritten letters." IJCNN, 2017.
Pdf: /pdf/0b0eef27c12e71c050026e30c40f5b408a0d37a1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Non-Euclidean Mixture Model for Social Network Embedding | Accept (poster) | Summary: This paper proposes NMM-GNN, a non-Euclidean mixture model that captures both homophily and hierarchies in social networks for embedding.
Strengths: 1.The paper is well-structured, clearly written, and easy to follow.
2.In the experiments section, the author compared baselines from different categories on multiple tasks (classification and link prediction). The results show that the NMM-GNN proposed in the paper consistently achieves the best performance.
Weaknesses: 1. The related work section does not cover most existing social embedding works. For example, many works, in addition to RaRE, also consider both the similarities and the social impact of nodes.
2. There is no clear proof or at least empirical experiments to demonstrate that it is more reasonable to embed both node similarity and social impact in spherical space instead of Euclidean space.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. My main concern is that social network embedding algorithms considering social impact are not novel. For example, when performing edge prediction, one can directly assume the probability of the edge is proportional to the degree of both nodes. I hope the author could further explain why the proposed method is novel enough in terms of encoding both similarity and impact.
2. Despite the experimental results and ablation study in the main article showing the good performance of the proposed method, I want to know how the speed of the method compares with existing baselines.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The author discussed the potential limitations of the paper in detail in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your insightful comments and feedback. Please find our response below to your questions.
Q1: The related work section does not cover most existing social embedding works. For example, many works, in addition to RaRE, also consider both the similarities and the social impact of nodes.
A1: Please refer to Table 9 ("Category and Description of Baseline Models") in our main paper, where we discuss and empirically evaluate against all the latest state-of-the-art social network embedding models. We cover 15 baseline models (up to 2024) across all categories of learning models, including (1) structural embedding models, (2) GNN embedding models, (3) homophily-based embedding models, and (4) mixture models. Our model consistently outperforms all of them for both classification and link prediction on four comprehensive metrics: Jaccard Index (JI), Hamming Loss (HL), F1 Score (F1), and AUC. Please refer to Tables 10, 12 (Appendix), and 14 (Appendix).
Q2: There is no clear proof or at least empirical experiments to demonstrate that it is more reasonable to embed both node similarity and social impact in the spherical space instead of Euclidean space.
A2: We have conducted extensive ablation studies, as shown in Table 10, which could be used as empirical experiments to justify our approach. In these studies, we evaluated several variations of the NMM model, using different non-Euclidean geometric spaces for homophily and social influence. Specifically, we compared the performance of models where E, S, and H denote Euclidean, Spherical, and Hyperbolic spaces, respectively. Our results demonstrate that modeling homophily in spherical space and social rank in hyperbolic space significantly outperforms other configurations, including those using Euclidean space.
Q3: My main concern is that social network embedding algorithms considering social impact are not novel. For example, when performing edge prediction, one can directly assume the probability of the edge is proportional to the degree of both nodes. I hope the author could further explain why the proposed method is novel enough in terms of encoding both similarity and impact.
A3: We would like to clarify and highlight that the novelty of our work lies in representing the factors in network science that explain how links are generated in the social network: homophily and social influence. As we observe the resulting topological patterns formed by homophily (cycles) and social influence (hierarchy), we are motivated to utilize the non-Euclidean geometric spaces of spherical/hyperbolic to model these topologies. Specifically, we are among the first to not only jointly model homophily/social influence, but also to do so with the novel integration of multiple non-Euclidean geometric spaces used together (with a novel space unification architecture) through an efficient Non-Euclidean Mixture Model (NMM). This addresses the dual aspects of homophily and social influence, which we model in a personalized way for each link of the social network in a unified framework, which, to the best of our knowledge, has not been previously explored.
Q4: Despite the experimental results and ablation study in the main article showing the good performance of the proposed method, I want to know how the speed of the method compares with existing baselines.
A4: In the main paper, we will be sure to include formal quantitative comparisons of computational efficiency. Regarding our model, NMM is highly efficient, with time complexity O(n * d), where n is the number of nodes and d is the embedding dimension. In comparison, here are the time complexities of the remaining baseline models: the mixture model RaRE is O(n * d), comparable to the NMM mixture model; among the GNN embedding models, GCN is O(n^2 * d + n * d^2) and GAT is O(n * d^2); among the non-Euclidean GNN embedding models, k-GCN is O(k * [n^2 * d + n * d^2]) and HGCN is O(2d^2 + a * n * d^2) = O(a * n * d^2), where 'a' is the filter length; and NMM-GNN is O(n^2 * d + n * d^2), comparable to the GNN embedding models. Moreover, we would like to point out that GraphVAE (of NMM-GNN) training is designed to be highly parallelizable, which allows for scalability. Our model is capable of learning on real-world, highly large-scale graphs on the order of millions of nodes and billions of edges, e.g., Friendster, while achieving SOTA performance, which attests to its practical value to the network science community.
---
Rebuttal Comment 1.1:
Comment: Thanks a lot for your response. I appreciate and agree with your response, and I will change my rating from 6 to 7.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and the increase in the score. We are glad that we were able to address your questions. | Summary: The authors understand why links are generated through node-embedded representations of social networks. Specifically, spherical space is utilized to represent the homogeneity of nodes and hyperbolic space is utilized to represent the hierarchy and influence of nodes. By mixing these two spaces together, the corresponding link representation is finally obtained.
Strengths: 1.The writing is clear and easy to understand.
2.The experiments were adequate and demonstrated the power of the models
Weaknesses: 1.The idea of the article is not novel. Both hyperbolic and spherical spaces are geometric models that are very commonly used in the field of social networking.[1]
2.Although the authors are trying to explain linking relationships in terms of homogenization and hierarchy, however the learning of the model embedding is still unknown, which does not explain the emergence of social networks.
3.There has been much better work on embedding methods in hybrid spaces. [2]The authors don't contribute as much to this as their article suggests.
[1]Network geometry. Nature physics
[2]Motif-aware Riemannian Graph Neural Network with Generative-Contrastive Learning AAAI 2024
[3]Product Manifold Learning. AISTATS 2021
Technical Quality: 4
Clarity: 4
Questions for Authors: 1.How to analyze the reasonable causes of social network link generation from complex feature representations?
2.Can this model explain small-world networks in complex networks?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: 1.The model does not really give an adequate explanation of links in complex networks, but only from the perspectives of homogeneity and hierarchy.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Idea is not novel. Hyperbolic/spherical spaces are commonly used in social networking. [1] Network geometry. Nature physics
A1: We would like to clarify and highlight that the novelty of our work lies in representing the factors in network science that explain how links are generated in the social network: homophily and social influence. We are among the first works to jointly model both factors in the network, as confirmed by our comprehensive survey of the latest literature covering 15 baseline models (up to 2024) across all categories of learning models, including (1) structural embedding models, (2) GNN embedding models, (3) homophily-based embedding models, and (4) mixture models. As we observe the resulting topological patterns formed by homophily (cycles) and social influence (hierarchy), we are motivated to utilize the non-Euclidean geometric spaces of spherical/hyperbolic to model these topologies. Specifically, we are among the first to not only jointly model homophily/social influence, but also to do so with the novel integration of multiple non-Euclidean geometric spaces used together (with a novel space unification architecture) through an efficient Non-Euclidean Mixture Model (NMM).
Q2: Although authors are trying to explain linking relationships in terms of homogenization and hierarchy, however learning of the model embedding is still unknown, which does not explain the emergence of social networks.
A2: We utilize feature learning through latent embeddings to represent nodes in terms of the homophily and social influence factors. Homophily is based on feature similarity, which can be captured through the cosine similarity of embeddings, and social rank can be captured via the embedding norm. Furthermore, these learned embeddings are highly interpretable. Specifically, in our visualizations of the network embeddings, we see that celebrity nodes are embedded towards the center of the Poincare disk, while nodes with lower social rank are embedded towards the boundary. Nodes of high homophily can be embedded closer to each other in the spherical space than the hyperbolic or Euclidean spaces even allow for (better capturing cyclic influence). We will add this interpretable visualization to the paper.
Q3: There has been much better work on embedding methods in hybrid spaces. [2] Motif-aware Riemannian Graph Neural Network with Generative-Contrastive Learning AAAI 2024. The authors don’t contribute as much to this as their article suggests.
A3: The novelty of our work lies in representing the factors in network science that explain how links are generated in the social network: homophily and social influence, which form the resulting topological patterns of cycles (homophily) and hierarchy (social influence), motivating our use of the non-Euclidean geometric spaces of spherical/hyperbolic. Specifically, we are among the first to not only jointly model homophily/social influence, but also to do so with the novel integration of multiple non-Euclidean geometric spaces used together (with a novel space unification architecture) through an efficient Non-Euclidean Mixture Model (NMM).
The paper you reference does not model social networks (critical for our problem of investigation) but rather general-purpose graphs such as Cora: (1) its model does not capture the laws of how links are formed in social networks (homophily/social influence), and (2) such graphs do not exhibit the types of topologies resulting from those factors. Also, while distinct/individual non-Euclidean geometric spaces are considered there, that work does not explore how to jointly integrate non-Euclidean geometric spaces of different curvature (to capture the joint influence of the social network factors).
Q4: How to analyze the reasonable causes of social network link generation from complex feature representations?
A4: Please refer to sections 3.2.1 “Link prediction using homophily based distribution” and 3.2.2 “Link prediction using social influence based distribution” where we explain the probability distributions for our mixture model NMM by considering the impact of high/low homophily (+ noise/sparsity factor) in addition to high/low social influence (+ noise/sparsity factor). To empirically validate our claims and quality of our NMM mixture model’s learned embeddings, in our experiment results on Table 10, we rigorously evaluate the results of social network classification and link prediction against 15 other SOTA baseline models.
Q5: Can model explain small-world networks in complex networks?
A5: Definitely, and this is substantiated through rigorous empirical evidence. We evaluate on numerous datasets, including social network datasets (the domain focus of this research), citation networks (Appendix), and attributed networks (Appendix). For example, the social network datasets include BlogCatalog, LiveJournal, and Friendster, which form a balanced mix of small-scale and the largest-scale networks (node counts of 10K, 5M, and 66M, respectively), both directed and undirected. These numerous datasets show the generalizability and scalability of our approach, as they span all sizes of vertices (4K to 66M), edges, edge types, attributes, classes (6 to 500), etc. Please see Tables 8, 11, 13 of the main paper for details.
Q6: The model doesn't really give an adequate explanation of links in complex networks, but only from perspectives of homogeneity and hierarchy.
A6: It is largely agreed in the network science community, through numerous years of research, that social network links are formed due to either homophily or social influence. In fact, most existing embedding models are designed based on the homophily aspect of social networks [4, 5]. However, RaRE [6] and the work of [7] show that homophily is insufficient and that social influence is also critical in forming connections, since popular nodes have direct influence in forming links [8]. Please refer to our paper for references [4]-[8].
---
Rebuttal Comment 1.1:
Title: Response to Rebuttal
Comment: Thanks for your response! I now understand the contribution and novelty of your work. At last, I have raised my score.
---
Reply to Comment 1.1.1:
Comment: We sincerely appreciate your positive feedback and the increase in the score. We are glad that the discussion on the contribution and novelty of our work helped to address your questions. | Summary: This paper proposes a new Graph-based non-Euclidean mixture model for social networks.
Under the assumptions that social network links are formed due to either homophily or social influence,
the homophily factor is modeled in spherical space and the social influence factor is in hyperbolic space.
The homophily-regulated nodes lie on the surface of the spherical ball, and the social influence-regulated nodes lie on the open Poincare ball.
The projection to align these two spaces is also proposed.
The non-Euclidean GraphVAE is also integrated into the model.
Experiments show the proposed model outperforms other baselines.
Strengths: - The proposed framework models both the homophily and social influence factors for social network generation.
- It is integrated into the non-Euclidean graph-based VAE to further improve performance.
- Experiments show the proposed model outperforms other SOTA baselines.
Weaknesses: - The motivation of the space unification is unclear (See the questions below).
Technical Quality: 3
Clarity: 3
Questions for Authors: - Why is the space unification necessary? Each node has two coordinates, in the spherical and the hyperbolic space. The link between nodes i and j is generated either from the spherical or hyperbolic proximity (it is the "explaining away" situation as explained in the RaRE paper [6] ). The homophily and social influence relationships may be unrelated or may even contradict to each other (you may hate your boss). Therefore the two coordinates may not necessarily be aligned. It would be appreciated if the authors would elaborate on this.
- The $\kappa$-GCN proposed in [39] also deals with spherical and hyperbolic spaces. I would like to know the similarities and/or differences between the two approaches as the two coordinates in the proposed model can also be regarded as a coordinate in the product space. The authors do not discuss them. They just compare the experimental performance.
- What is $i$ in the definition of $\log_0^H$ and $\exp_o^H$ ?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: As for the computational efficiencies, more formal and quantitative comparisons would be necessary in the main texts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your insightful comments and feedback. Please find our response below to your questions.
Q1: The motivation of the space unification is unclear: Why is the space unification necessary? Each node has two coordinates, in the spherical and the hyperbolic space. The link between nodes i and j is generated either from the spherical or hyperbolic proximity (it is the “explaining away” situation as explained in the RaRE paper [6]). The homophily and social influence relationships may be unrelated or may even contradict each other (you may hate your boss). Therefore, the two coordinates may not necessarily be aligned. It would be appreciated if the authors would elaborate on this.
A1: We would like to clarify and highlight that the link between nodes i and j follows a mixture model: each link is a weighted combination of influence from both the spherical and hyperbolic spaces (not one or the other), as evidenced in Equation 6 of the paper. Hence, as shown in Figure 1b, the same node has two representations, one in the spherical space and one in the hyperbolic space, and because they represent the same underlying node, they need to be aligned. In the case of "you may hate your boss": if both you and your boss are not highly popular nodes (e.g., celebrity nodes), then the social ranks are likely similar due to low social influence. At the same time, regardless of liking each other, you and your boss may share many similar characteristics, such as working at the same company, having studied the same field (e.g., computer science), living in the same location/country, and working on the same project problems, so the homophily is still high by its very definition. Therefore, these two network factors do not contradict each other, but rather work together to explain how links are formed between users. We also address scenarios of noise, such as nodes that remain unconnected despite exhibiting high homophily, or despite low social influence (or similar social rank). Please refer to Sections 3.2.1 "Link prediction using homophily based distribution" and 3.2.2 "Link prediction using social influence based distribution", where we explain this and how our distribution models also include factors to control the sparsity of the network.
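A minimal sketch of such a weighted combination is below. The sigmoid link functions and all parameter names are our own illustration of the mixture idea, not the paper's exact Equation 6:

```python
import math

def link_probability(z_i_s, z_j_s, z_i_h, z_j_h, w):
    """Illustrative mixture link probability: a per-link weight w in [0, 1]
    blends a homophily term (cosine proximity of the spherical embeddings)
    with a social-influence term (gap between hyperbolic embedding norms,
    i.e. the social-rank difference)."""
    norm = lambda v: math.sqrt(sum(x * x for x in v))
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    # homophily: nodes close on the sphere have high cosine similarity
    cos_sim = sum(a * b for a, b in zip(z_i_s, z_j_s)) / (norm(z_i_s) * norm(z_j_s))
    # social influence: rank gap measured by the difference of Poincare norms
    rank_gap = abs(norm(z_i_h) - norm(z_j_h))
    return w * sigmoid(cos_sim) + (1.0 - w) * sigmoid(rank_gap)
```

With w learned per link, the same framework covers homophily-dominated links, influence-dominated links, and everything in between.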
Q2: The K-GCN proposed in [39] also deals with spherical and hyperbolic spaces. I would like to know the similarities and/or differences between the two approaches as the two coordinates in the proposed model can also be regarded as a coordinate in the product space. The authors do not discuss them. They just compare the experimental performance.
A2: First, our model is not in the product space (e.g., where the entire model belongs to a Cartesian product of non-Euclidean geometric spaces by default). Rather, our work is in a category called mixed-space models, which use a multi-geometric-space framework where different portions of the graph may belong to different spaces (based on the amount of impact each of homophily and social influence has for that personalized pair of nodes). In the extreme case (Case 1) where only social influence is at play (e.g., the weight of the homophily representation is learned close to 0), the hyperbolic space will be used. On the other hand, if only homophily is at play (Case 2), e.g., the weight of the social influence representation is learned close to 0, the spherical space will be used. In the normal case where both factors are at play (Case 3), both spaces will be used and can be jointly aligned with our space alignment mechanism. When using a product space, Cases 1/2/3 would not be distinguished from each other (though distinguishing them yields a better representation), as all cases would be modeled by one complex non-Euclidean geometric space formed as the Cartesian product of the spherical and hyperbolic spaces.
Q3: What is ‘i’ in the definition of log_0^H and exp_0^H?
A3: The upright ‘i’ in those definitions is the imaginary unit, sqrt(-1). The italic $i$ is an index notation, e.g., $z_i$ refers to the $i$-th node.
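For reference, on the Poincaré ball of curvature -1 the origin exponential and logarithmic maps are commonly written as follows (these are the standard textbook forms, which may differ from the paper's exact normalization):

```latex
\exp_0^{\mathbb{H}}(\mathbf{v}) = \tanh\!\left(\lVert\mathbf{v}\rVert\right)\frac{\mathbf{v}}{\lVert\mathbf{v}\rVert},
\qquad
\log_0^{\mathbb{H}}(\mathbf{y}) = \operatorname{artanh}\!\left(\lVert\mathbf{y}\rVert\right)\frac{\mathbf{y}}{\lVert\mathbf{y}\rVert}.
```

Each map is the inverse of the other along rays through the origin, which is what allows embeddings to be moved between the tangent space and the ball.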
Q4: As for the computational efficiencies, more formal quantitative comparisons would be necessary in the main texts.
A4: Thank you for your comment. In the main paper, we will be sure to include formal quantitative comparisons of computational efficiency. Regarding our model, NMM is highly efficient, with time complexity O(n * d), where n is the number of nodes and d is the embedding dimension. In comparison, here are the time complexities of the remaining baseline models: the mixture model RaRE is O(n * d), comparable to the NMM mixture model; among the GNN embedding models, GCN is O(n^2 * d + n * d^2) and GAT is O(n * d^2); among the non-Euclidean GNN embedding models, k-GCN is O(k * [n^2 * d + n * d^2]) and HGCN is O(2d^2 + a * n * d^2) = O(a * n * d^2), where 'a' is the filter length; and NMM-GNN is O(n^2 * d + n * d^2), comparable to the GNN embedding models. Moreover, we would like to point out that GraphVAE (of NMM-GNN) training is designed to be highly parallelizable, which allows for scalability. Our model is capable of learning on real-world, highly large-scale graphs on the order of millions of nodes and billions of edges, e.g., Friendster, while achieving SOTA performance, which attests to its practical value to the network science community.
---
Rebuttal Comment 1.1:
Comment: Thanks for the detailed explanation.
As for Q1, I am aware that the proposed model is a mixture model.
What I wanted to know is the reason why the space unification is necessary (the motivation of the space unification loss). The space unification regularization term ensures that two geometric spaces are aligned together to make sure the two embeddings of the same node correspond to each other. Why do the two embeddings of the same node have to be close to each other?
If $\boldsymbol{z}_i^H \approx \boldsymbol{z}_i^S$ and $\boldsymbol{z}_j^H \approx \boldsymbol{z}_j^S$, I wonder whether $\mathrm{dist}_h(\boldsymbol{z}_i^H,\boldsymbol{z}_j^H) \approx \mathrm{dist}_s(\boldsymbol{z}_i^S,\boldsymbol{z}_j^S)$ (I know this is oversimplified, but one may still say that if the former distance is small then the latter distance is also small), and then whether $p(e_{ij}=1\vert \mathrm{dist}_h(\boldsymbol{z}_i^H,\boldsymbol{z}_j^H))$ may tend to be equivalent to $p(e_{ij}=1\vert \mathrm{dist}_s(\boldsymbol{z}_i^S,\boldsymbol{z}_j^S))$.
---
Reply to Comment 1.1.1:
Title: Response to Follow-Up Question from Reviewer 3fkq
Comment: Thanks for your follow-up question. Please find our response below:
We would like to address a misunderstanding in your interpretation of the paper: the alignment does not enforce that the two embeddings are identical across the two spaces. Rather, we require that the projection of $z_i^H$ be close to node $i$'s embedding $z_i^S$ in the spherical space. Note that it is impossible to equate the two directly, as they live in different geometric spaces.
In this case, the two distances (in the spherical and hyperbolic spaces) also differ from each other. Note that in our paper, $\mathrm{dist}_h(z_i^H, z_j^H) = |\mathrm{norm}(z_i^H) - \mathrm{norm}(z_j^H)|$ is not the geodesic distance of the hyperbolic metric space; it is defined on the norms to reflect the rank difference between nodes.
Without the alignment, $z_i^H$ would have too many degrees of freedom: it could move freely as long as its norm is kept the same.
We also have an ablation study on this part. Please refer to Figure 2(c) in the Appendix, where we evaluate the model both with and without the space unification component and observe that performance improves consistently across all datasets when it is included.
Strengths: The idea is novel. Overall things are good. The idea of bridging both is novel, and according to experiments performance better than purely hyperbolic embedding. Writing is overall clear and, although there are many details, it’s generally feasible to follow. Extensive comparison with a wide range of baselines from different categories. Good discussion of limitations.
Weaknesses: The major issue is on limited datasets in experiments. I would suggest the authors consider other datasets used in previous works, such as synthetic datasets, citation networks (PubMed, wikipedia citation, DBLP, Microsoft Academic Graph ) and other social networks (e.g. Twitter).
Also, another factor limiting the impact of the proposed method is that it does not work at the whole-graph level, and thus only applies to node classification and link prediction tasks. If, by any extension, the embeddings could be aggregated to the whole-graph level, there could be more significance (e.g., tasks like molecular classification or protein-protein interactions).
Technical Quality: 3
Clarity: 3
Questions for Authors: N/A
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Well addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your insightful comments and feedback. Please find our response below to your questions.
Q1: The major issue is on limited datasets in experiments. I would suggest the authors consider other datasets used in previous works, such as synthetic datasets, citation networks (PubMed, wikipedia citation, DBLP, Microsoft Academic Graph) and other social networks (e.g., Twitter)
A1: We would like to clarify that the primary goal of our work is to explain how links are generated in social networks, e.g., from one user to other users. As such, in the main paper we specifically showcase results on social network datasets, as that is the domain focus of this research. However, in the Appendix (Tables 12 and 14), we also include experiments on the citation networks of the Wikipedia datasets, to show that our model can also benefit other networks, as well as on attributed network datasets. In total, we evaluate on seven datasets. The social network datasets include BlogCatalog, LiveJournal, and Friendster, which are the most prominent and recent large-scale networks in the latest literature and form a balanced mix of small-scale and the largest-scale networks (node counts of 10K, 5M, and 66M, respectively), both directed and undirected. For citation networks, we evaluate on Wikipedia Clickstream and Wikipedia Hyperlink. For attributed networks, we evaluate on Facebook and Google+ (containing ~16K attributes). These numerous datasets show the generalizability and scalability of our approach.
Q2: Another factor limiting the impact of the proposed method is that it does not work at the whole-graph level, and thus only applies to node classification and link prediction tasks. If by any extension the embeddings could be aggregated to the whole-graph level, there could be more significance (e.g., tasks like molecular classification, protein-protein interactions)
A2:
Our work can be generalized to the graph level because our method learns to represent the social science network factors based on topologies formed by clusters of nodes and edges. Thus, if the cluster of nodes and edges comprised the entire graph, we could apply graph pooling to the node embeddings we currently learn, reducing the representation from the node level to the graph level. In this way, homophily/social influence can be modeled at the graph level. That said, it is unclear whether graph-level modeling would be specifically useful or interpretable for social network embedding models (as compared to node-level modeling). In the social network setting, links are generated per node (not at the graph level), since a user is recommended to another specific user; one node represents one user. This differs from other network domains like molecular classification, where the entire graph represents one molecule, e.g., atoms form individual nodes and chemical bonds form the edges. We would also like to clarify that, for this reason, node-level learning is in fact consistent with the recent state-of-the-art NN methods in the network science community, though our model can still be generalized to learning at the graph level.
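To make the graph-pooling step sketched in this response concrete, here is a minimal, hypothetical illustration (not code from the paper; the function names and the mean/max pooling choices are illustrative assumptions) of reducing node-level embeddings to a single graph-level embedding:

```python
import numpy as np

def graph_pool(node_embeddings, method="mean"):
    """Aggregate per-node embeddings of shape (n, d) into one
    graph-level embedding of shape (d,). Mean and max pooling are
    common, simple choices for this reduction."""
    if method == "mean":
        return node_embeddings.mean(axis=0)
    if method == "max":
        return node_embeddings.max(axis=0)
    raise ValueError(f"unknown pooling method: {method}")

# Toy example: 4 nodes, each with a 3-dimensional embedding.
Z = np.array([[1.0, 0.0, 2.0],
              [3.0, 4.0, 0.0],
              [0.0, 2.0, 2.0],
              [4.0, 2.0, 4.0]])
g_mean = graph_pool(Z, "mean")  # -> [2. 2. 2.]
g_max = graph_pool(Z, "max")    # -> [4. 4. 4.]
```

Any downstream graph-level task (e.g., molecular classification) would then operate on the pooled vector instead of the per-node embeddings.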
Rebuttal: Dear Reviewers: Thank you for your time in reading our paper and for your useful comments/questions/suggestions on our paper. We have responded individually to each reviewer, however, we are also including a general summary answer to some common questions:
Datasets: In the main paper we specifically showcase results on social network datasets, as that is the domain focus of this research. However, in the Appendix (Tables 12 and 14), we also include experiments on the Wikipedia citation networks, to show that our model can also benefit other networks, as well as on attributed network datasets. In total, we evaluate on seven diverse datasets ranging from small to large scale in nodes, edges, edge types, attributes, and classes. These datasets demonstrate the generalizability and scalability of our approach (which we rigorously evaluate against 15 SOTA baseline models from four different categories of learning methods, on both social network classification and link prediction, using 4 metrics).
Computational complexity: In the main paper, we will be sure to include formal quantitative comparisons of computational efficiency. Regarding our model, NMM is highly efficient, with time complexity O(n * d), where n is the number of nodes and d is the embedding dimension. In comparison, the time complexities of the baseline models are as follows: the mixture model RaRE is O(n * d), comparable to the NMM mixture model; among the GNN embedding models, GCN is O(n^2 * d + n * d^2) and GAT is O(n * d^2); among the non-Euclidean GNN embedding models, k-GCN is O(k * [n^2 * d + n * d^2]) and HGCN is O(2d^2 + a * n * d^2) = O(a * n * d^2), where 'a' is the filter length; and NMM-GNN is O(n^2 * d + n * d^2), comparable to the GNN embedding models. Moreover, we would like to point out that the GraphVAE training (of NMM-GNN) is designed to be highly parallelizable, which allows for scalability. Our model is also capable of learning on real-world, highly large-scale graphs on the order of millions of nodes and billions of edges, e.g., Friendster, while achieving SOTA performance, which attests to its practical value to the network science community.
Novelty: We would like to clarify and highlight that the novelty of our work lies in representing the factors in network science that explain how links are generated in the social network: homophily and social influence. We are in fact one of the first works to jointly model both factors in the network, based on a comprehensive survey of the latest literature (up until 2024) covering 15 baseline models in all categories of learning models, including (1) structural embedding models, (2) GNN embedding models, (3) homophily-based embedding models, and (4) mixture models. As we observe through the topological patterns formed by homophily (cycles) and social influence (hierarchy), we are motivated to utilize the non-Euclidean geometric spaces of spherical/hyperbolic geometry to model the resulting topologies. Specifically, we are among the first not only to jointly model homophily and social influence, but also to do so with the novel integration of multiple non-Euclidean geometric spaces used together (with a novel space-unification architecture) through an efficient Non-Euclidean Mixture Model (NMM). This addresses the dual aspects of homophily and social influence, which we model in a personalized way for each link of the social network in a unified framework; to the best of our knowledge, this has not been previously explored.
Analyzing Causes of Social Network Link Generation: As mentioned in detail in the paper, it is widely agreed from the network science community that two factors (homophily and social influence) affect how links are generated in the social network. Please refer to sections 3.2.1 “Link prediction using homophily based distribution” and 3.2.2 “Link prediction using social influence based distribution” where we explain the probability distributions for our mixture model NMM by considering the impact of high/low homophily (+ noise/sparsity factor) in addition to high/low social influence (+ noise/sparsity factor). To empirically validate our claims and quality of our NMM mixture model’s learned embeddings, in our experiment results on Table 10, we evaluate the results of social network classification and link prediction for Jaccard Index (JI), Hamming Loss (HL), F1 Score (F1), and AUC. Our model is comprehensively evaluated against 15 other state-of-the-art baseline models belonging to four different categories of learning representations. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Compute-Optimal Solutions for Acoustic Wave Equation Using Hard-Constraint PINNs | Reject | Summary: The paper attempts to train PINNs that solve acoustic wave equations. The authors do so by using hard-constraint PINNs, which can enforce ICs and BCs, and propose a collocation point sampling method (DAFS) based on the amplitude of the solution in different regions.
Strengths: The paper considers an interesting problem in acoustics and attempts to apply techniques from PINNs to solve it.
Weaknesses: The paper itself feels less coherent, and seems like just an application of many existing PINN training techniques (e.g., hard constraint PINNs, collocation point sampling) into solving a certain problem, rather than providing a novel method or a coherent framework into solving a domain-specific problem.
The experimental section feels incomplete. Different point selection algorithms have not been extensively compared against, e.g., those in Wu et al. (2023). Furthermore, it would be interesting to see how the method can scale to more realistic acoustic problems (i.e., outside of 1D settings).
The paper itself also seems incomplete. The Appendix and the NeurIPS checklist are partially filled and have half-finished sentences.
The labels within the graphs can also be enlarged slightly to make them more readable.
Technical Quality: 2
Clarity: 1
Questions for Authors: 1. Can the optimal \alpha be selected without having to test out different values of \alpha? Is there some recommended value that can be used for different acoustics problems?
2. Is there any relation between selecting points in high-amplitude regions and selecting points in high-residual regions, such as the methods benchmarked in Wu et al. (2023)? In the sense that these two are indirectly doing the same thing, or they end up selecting very similar points anyway.
3. In Figure 5, is there any intuition for why the L1 loss peaks at larger \alpha but L2 peaks at smaller \alpha?
4. How is the computation time of the methods proposed?
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: The authors have provided limitations with selection of \tau.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer \textbf{AwP6} for their constructive comments and appreciation of our strengths, such as "The paper considers an interesting problem in acoustics".
\issue{Weaknesses}
% The paper itself feels less coherent, and seems like just an application of many existing PINN training techniques (e.g., hard constraint PINNs, collocation point sampling) into solving a certain problem, rather than providing a novel method or a coherent framework into solving a domain-specific problem.
Our intention is to propose a general framework for imposing hard constraints in PINNs, particularly for acoustics problems with non-negligible first time derivative terms. Very few studies have thoroughly discussed the hard-constraint embedding of this first time derivative term.
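As an illustration of the kind of hard-constraint ansatz being described, here is a minimal sketch, assuming an initial displacement u0(x), an initial velocity v0(x), and a gating function $\tau(t)$ with $\tau(0) = \tau'(0) = 0$ ($\tau(t) = t^2$ is one of the candidates discussed in this exchange). The function names are illustrative, `net` stands in for an arbitrary neural network, and spatial boundary conditions are omitted for brevity:

```python
import math

def tau(t):
    # Gate with tau(0) = 0 and tau'(0) = 0, so the ansatz below
    # satisfies both the initial condition and the initial first
    # time derivative exactly, regardless of the network output.
    return t ** 2

def hard_constrained_u(x, t, u0, v0, net):
    # Ansatz u(x, t) = u0(x) + t * v0(x) + tau(t) * N(x, t):
    # at t = 0, u = u0(x) and du/dt = v0(x) for any network N.
    return u0(x) + t * v0(x) + tau(t) * net(x, t)

# Toy check with simple stand-ins for u0, v0, and the network.
u0 = lambda x: math.sin(x)
v0 = lambda x: 0.5 * math.cos(x)
net = lambda x, t: 123.0  # arbitrary output; must not affect the ICs
assert hard_constrained_u(0.3, 0.0, u0, v0, net) == u0(0.3)
```

Only the network inside the ansatz is trained against the PDE residual; the initial conditions hold by construction rather than through penalty terms.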
%The experimental section feels incomplete. Different point selection algorithms have not been extensively compared with, e.g., from that in Wu et. al. (2023). Furthermore, it would be interesting to see how the method can scale to more realistic acoustic problems (i.e., outside of 1D settings).
We agree that the experimental section would benefit from a more extensive comparison with other point selection algorithms. In the revised manuscript, we will include a comparative analysis with methods such as those benchmarked in Wu et al. (2023). Additionally, we have expanded the scope of our experiments to include the application of our method to the 2D wave equation, demonstrating its scalability to more realistic acoustic problems. The results are detailed in the attached PDF.
%The paper itself also seems incomplete. The Appendix and the NeurIPS checklist are partially filled and have half-finished sentences.
We apologize for the incomplete sections in the original rushed submission. We will ensure that the Appendix and the NeurIPS checklist are fully completed in the revised version, with all sentences properly finished.
%The labels within the graphs can also be enlarged slightly to make them more readable.
We will enlarge the labels within the graphs in the revised version to enhance readability.
\issue{Questions (1)}
% Can the optimal \alpha be selected without having to test out different values of \alpha? Is there some recommended value that can be used for different acoustics problems?
We can choose a medium $\alpha$ around $0.5$ based on our preliminary experiments. Alternatively, conducting a few pre-experiments can help in selecting the optimal $\alpha$ for different acoustic problems.
\issue{Questions (2)}
% Are there any relation between selecting points in high-amplitude regions with selecting points in high-residual regions, such as those methods benchmarked in Wu et. al. (2023)? In the sense that these two are indirectly doing the same thing, or they end up selecting very similar points anyway.
This is an insightful question. Selecting points in high-amplitude regions relies on assumptions about the PDE solutions. However, selecting high-residual regions typically requires trial and error during training, whereas high-amplitude regions can be identified more directly based on the physical characteristics of the solution.
\issue{Questions (3)}
% In Figure 5, are there any intuition to why L1 loss peaks at larger \alpha but L2 peaks at smaller \alpha?
This is an interesting observation. The difference in the behavior of L1 and L2 losses with respect to $\alpha$ might be due to their sensitivity to outliers and the distribution of error.
\issue{Questions (4)}
%How is the computation time of the methods proposed?
The computation time of our methods is comparable to standard PINN training techniques. However, the inclusion of hard constraints and Dynamic Amplitude-Focused Sampling (DAFS) can slightly increase the computation time due to additional calculations. We will provide a detailed analysis of computation time in the revised manuscript.
\issue{Limitations}
% The authors have provided limitations with selection of \tau.
We have included the limitations with selection of $\tau(t)$ in the original submission.
---
Rebuttal Comment 1.1:
Comment: I thank the author for their response. I still believe that the paper requires some amount of revision and therefore will keep the current score. | Summary: The manuscript treats the one dimensional wave equation with a PINN approach and discusses the imposition of boundary and initial conditions directly into the network, as common practice in PINNs. The authors then propose a quadrature scheme based on a coarse finite difference discretization of the wave equation.
Strengths: The imposition of the time derivative seems to be a novel construction. Furthermore, the construction seems not to be limited to the wave equation.
Weaknesses: The main weakness of the manuscript is the focus on the very special and simple toy problem of the one dimensional wave equation. Solving the one-dimensional wave equation with PINNs is only of academic interest and insights obtained from it for the training of PINNs might not generalize. More specifically:
- The exact imposition of the time derivative should also work for general time dependent equations. The authors should comment on this.
- The sampling strategy employing a finite difference simulation to determine regions of high sampling density is not a generalizable approach. If a finite difference solver for the equation at hand is available, a PINN solver is typically not required.
- The authors determine an optimal function $\tau$ via considering six concrete examples. There is no guarantee that this approach will generalize to different equation types and is therefore of limited practical use.
- The authors might want to discuss the theoretical literature that proves the theoretical advantage of exactly imposed boundary conditions [1, 2, 3] and more elaborate constructions of distance functions.
[1] https://proceedings.mlr.press/v190/muller22b/muller22b.pdf
[2] https://arxiv.org/abs/2311.00529
[3] https://www.sciencedirect.com/science/article/abs/pii/S0045782521006186
Technical Quality: 2
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: The scope of the paper is too narrow.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer \textbf{cEZM} for pointing this out. We acknowledge the concern about the focus on the one-dimensional wave equation. While the 1D wave equation serves as an initial validation, our intention is to propose a general framework for imposing hard constraints in PINNs, including for the first time derivative. To address this, we have added the application of our method to the 2D wave equation in the attached PDF, demonstrating its broader applicability.
\issue{Weakness (1)}
% The exact imposition of the time derivative should also work for general time dependent equations. The authors should comment on this.
We thank Reviewer \textbf{cEZM} for pointing out the applicability of our method to general time-dependent equations. Indeed, the exact imposition of the time derivative is designed to be general and can be applied to a wide range of time-dependent PDEs. Compared to commonly benchmarked elliptic and parabolic partial differential equations, wave equations are more challenging to solve numerically.
\issue{Weakness (2)}
% The sampling strategy employing a finite difference simulation to determine regions of high sampling density is not a generalizable approach. If a finite difference solver for the equation at hand is available, a PINN solver is typically not required.
We agree that if a finite difference solver is available, it might reduce the necessity of using PINNs. However, considering the future incorporation of real, noisy data, PINNs have the advantage of unifying forward simulation and inverse problems.
\issue{Weakness (3)}
% The authors determine an optimal function via considering six concrete examples. There is no guarantee that this approach will generalize to different equation types and is therefore of limited practical use.
We appreciate this observation. While our initial study explored six candidate functions to determine an optimal approach, we acknowledge the need for broader validation. In the revised version, we will discuss the limitations of this approach and suggest that future work should explore a more extensive set of functions and equation types to enhance generalizability.
\issue{Weakness (4)}
% The authors might want to discuss the theoretical literature that proves the theoretical advantage of exactly imposed boundary conditions [1, 2, 3] and more elaborate constructions of distance functions.
We appreciate the papers shared by Reviewer \textbf{cEZM}. These papers will help us make a better discussion of the advantages of exactly imposed boundary conditions in our revised version.
\issue{Questions}
% See weaknesses.
We have answered the questions in the weaknesses section above.
\issue{Limitations}
% The scope of the paper is too narrow.
We have added the application of this method to the 2D wave equation. The results are in the attached PDF.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. I think the pre-print still needs a decent amount of work, I am still critical about relying on a finite difference solver within a PINN workflow. I would like to keep my score. | Summary: This paper explores to solve the acoustic wave equation in the context of PINNs. Hard boundary and initial conditions are enforced by employing continuous functions within the PINN ansatz to ensure that these conditions are satisfied. A Dynamic Amplitude-Focused Sampling (DAFS) method is introduced to improve the efficiency of hard-constraint PINNs under a fixed number of sampling points.
Strengths: 1. Propose a general hard constraint imposition formula which correctly imposes all boundary conditions and initial conditions as required.
Weaknesses: 1. Only the wave equation is discussed.
2. The proposed Dynamic Amplitude-Focused Sampling (DAFS) method is trivial.
3. There are no comparisons with other methods in the experiments.
4. In the experiments, the relative errors between exact solutions and predictions are not given.
5. In the context of PINNs, it is better to give explicitly the formulation of training loss. Training details are also lacking.
6. Instead of tuning \tau(t) manually, it is better to train \tilde{u}(x,t) and \tau(t) simultaneously.
7. Many typos and grammar errors, such as "both and \alpha" in line 149, "x \in {\partial \Omega}_i" in line 125, "computational" in line 46.
8. The quality of Fig.7 should be improved.
Technical Quality: 2
Clarity: 2
Questions for Authors: In line 41, what does the "basic function" mean? Is it the function \tau (t)?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 1
Limitations: Only the wave equation is discussed. There are no comparisons with other methods in the experiments.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 2
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer \textbf{U3hW} for their constructive comments and appreciation of our strengths, such as "The hard constraint imposition formula are general."
\issue{Weaknesses (1)}
% Only the wave equation is discussed.
The wave equation is the focus of our study. We are proposing a general framework to impose hard constraints into PINNs, including the hard constraint for the first time derivative, and the wave equation is particularly suitable for this study. Additionally, compared to commonly benchmarked elliptic and parabolic partial differential equations, wave equations are more challenging to solve numerically.
\issue{Weaknesses (2)}
% The proposed Dynamic Amplitude-Focused Sampling (DAFS) method is trivial.
The DAFS method effectively distributes samples to achieve the same level of accuracy with fewer samples compared to the vanilla uniform distribution. Furthermore, DAFS incurs minimal computational cost. However, more studies are needed to determine the optimal use cases for DAFS and how to choose the high- and low-amplitude regions effectively.
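As a rough illustration of what distributing a fixed collocation budget by amplitude could look like, here is a hypothetical sketch based only on the description in this rebuttal (not the authors' code; the amplitude field, the median split into high/low-amplitude regions, and the masking rate $\alpha$ are illustrative assumptions):

```python
import numpy as np

def amplitude_focused_indices(amplitude, n_pde, alpha=0.5, rng=None):
    # Split candidate points into low/high-amplitude halves by rank,
    # then draw a fraction alpha of the fixed budget n_pde from the
    # high-amplitude half and the rest from the low-amplitude half.
    rng = np.random.default_rng(rng)
    order = np.argsort(amplitude)
    low, high = order[: len(order) // 2], order[len(order) // 2:]
    n_high = int(round(alpha * n_pde))
    return np.concatenate([
        rng.choice(high, size=n_high, replace=False),
        rng.choice(low, size=n_pde - n_high, replace=False),
    ])

# Toy 1D example: 200 candidates on [0, 1], amplitude |sin(2*pi*x)|.
x = np.linspace(0.0, 1.0, 200)
amp = np.abs(np.sin(2.0 * np.pi * x))
idx = amplitude_focused_indices(amp, n_pde=40, alpha=0.5, rng=0)
collocation_points = x[idx]  # fixed budget, amplitude-weighted split
```

The total number of points stays fixed at n_pde; only their distribution between high- and low-amplitude regions changes with $\alpha$.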
\issue{Weaknesses (3)}
% There are no comparisons with other methods in the experiments.
We agree that comparing our approach with other methods would strengthen our study. In the original submission, we only showed the comparison of our sampling method with vanilla uniform random sampling. In the revised manuscript, we will include a comparative analysis with methods such as importance sampling and adaptive sampling to highlight the strengths and limitations of our approach.
\issue{Weaknesses (4)}
% In the experiments, the relative errors between exact solutions and predictions are not given.
The relative errors between exact solutions and predictions are shown in Figures 10 and 11 in the Appendix. We apologize for not mentioning this in the main text.
\issue{Weaknesses (5)}
% In the context of PINNs, it is better to give explicitly the formulation of training loss. Training details are also lacking.
We will add training details in the revised version. For our updated results of the 2D wave equation, we include the training details in the caption in the attached PDF.
\issue{Weaknesses (6)}
% Instead of tuning \tau (t) manually, it is better to train \tilde{u}(x,t) and \tau (t) simultaneously.
This is an interesting direction. We will explore this in future work. In the original manuscript, we selected $\tau(t)$ from a set of candidate functions that can enforce the initial conditions.
\issue{Weaknesses (7)}
% Many typos and grammar errors, such as "both and \alpha" in line 149, "x \in {\partial \Omega}_i" in line 125, "computational" in line 46.
Thank you for pointing that out. We apologize for these typos due to a rushed submission. We will correct all these typos in our revised version.
\issue{Weaknesses (8)}
% The quality of Fig.7 should be improved.
We will redraw Fig. 7 in the revised version.
\issue{Questions}
% In line 41, what does the "basic function" mean? Is it the function \tau (t)?
Yes. We will correct line 41 to ``(...) the basic function $\tau(t)$ (...)''.
\issue{Limitations}
% Only the wave equation is discussed. There are no comparisons with other methods in the experiments.
In the original submission, we only showed the comparison of our sampling method with vanilla uniform random sampling. In the revised manuscript, we will include a comparative analysis with methods such as importance sampling and adaptive sampling to highlight the strengths and limitations of our approach.
---
Rebuttal Comment 1.1:
Comment: Thank you for your answer. Since some concerns have not been solved, I would like to keep my score. | Summary: This paper improves the training efficiency of original physics-informed neural networks to solve the 1D wave equation threefold: first by extending ansatz to also take the first derivative into account, second by a sampling method that focuses on high-amplitude regions, and third by a framework for domain decomposition.
Strengths: + The related work is well presented.
+ The evaluation of the six candidate functions for \tau in section 4.2 provides interesting insights. The authors explore an advanced selection method for \tau based on the task at hand which might be an interesting research direction.
Weaknesses: [Originality] While considering the first derivative for the ansatz is a good addition, the contribution is only minor.
Sampling more collocation points in regions that might be more difficult to solve is a practical approach; however, the comparison and distinction to other sampling methods is missing.
Lastly if I understand the domain decomposition framework correctly, the contribution is to wrap the entire training into a loop and, based on the training process's results, increase or decrease the subdomain size.
Evaluation results are only provided for the 1D wave equation. Further results for other differential equations are necessary to demonstrate the benefits of the proposed method.
[Clarity]
The framework for domain decomposition is not presented clearly. While the flow chart in Figure 7 provides an overview of the method additional textual explanations in Section 4.4 are needed.
There were few to no remarks about the training regime (#training points, optimizer, learning rate…, etc.), making it more difficult to reproduce results.
Minor remarks:
- N_pde is not introduced. It is probably the number of collocation points?
- Most of the Figures (e.g. Fig. 1, Fig 6.) are hard to read.
- Line 46: (…) optimal size of the computational [domain?] given (…)
- Line 149: Both [N_pde?] and alpha (…)
- Line 178: (…) In general, (...) performs better in general
Technical Quality: 3
Clarity: 2
Questions for Authors: Q1: In the abstract the DAFS method is described as a method "that optimizes the efficiency (…) under a FIXED number of sampling points." However, in Section 3.2: "This strategy optimally selects the number of points, N_pde, used in training." I assumed that DAFS only distributes the collocation points into high and low-amplitude regions. Is that correct?
Q2: How does DAFS compare to other state-of-the-art collocation point sampling methods? When should one use DAFS, and in which cases a different method?
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: While the authors clearly state that they are interested in the 1D wave equation it would have been interesting to see their proposed methods applied to the 2D wave equation of any other differential equations what are typically used in PINN benchmarks.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer \textbf{QJWv} for their constructive comments and appreciation of our strengths, such as "The related work is well presented" and "The evaluation of the six candidate functions for $\tau$ in section 4.2 provides interesting insights. The authors explore an advanced selection method for $\tau$ based on the task at hand which might be an interesting research direction".
\issue{Weaknesses (Originality)}
We thank Reviewer \textbf{QJWv} for the acknowledgment of our addition. While we recognize that considering the first derivative provides a modest improvement, this enhancement is crucial for wave equation systems with non-negligible first time derivative terms. Very few studies have thoroughly discussed the hard-constraint embedding of this first time derivative term.
Thank you for the suggestion regarding the sampling method. We agree that comparing our approach with other sampling methods would strengthen our study. In the original submission, we only showed the combination of our sampling method with vanilla uniform random sampling. In the revised manuscript, we will include a comparative analysis with methods such as importance sampling and adaptive sampling to highlight the strengths and limitations of our approach.
Your understanding of our domain decomposition framework is correct. This framework adapts the subdomain size dynamically based on the training process's results, which helps in efficiently handling complex regions. We apologize for not clarifying this process in the manuscript due to the rush. We hope we will have the opportunity to provide more detailed examples and results in the revised version to illustrate the benefits of this adaptive approach.
\issue{Weaknesses (Clarity)}
We thank Reviewer \textbf{QJWv} for clearly pointing out the clarity issues in our paper. The initial submission was rushed, resulting in insufficient explanations in Section 4.4, typos, and less polished figures.
Thank you for kindly highlighting these minor clarity problems. We will address the \textbf{Minor remarks} as follows:
1. Yes, N\_pde refers to the number of collocation points used to calculate the PDE loss in PINNs.
2. We have updated Fig. 1 and Fig. 6. Fig. 1 shows the ground truth results of the benchmark used in this paper, where the x-axis represents the spatial variable $x$ and the y-axis represents the time variable $t$. Fig. 6 illustrates the different masking rates of the sampling strategy, and the results in Fig. 5 indicate that the best masking rate $\alpha$ is around 0.5.
3. Line 46: We will correct this to ``optimal size of the computational domain given (...)''.
4. Line 149: We will correct this to ``N\_pde and alpha (…) ''.
5. Line 178: This is a typo; we mean that $\tau = t^2$ and $\tau = 2t^2/(1+t^2)$ generally perform better.
\issue{Questions (Q1)}
Thank you for this question. We apologize for the confusion. You are correct that DAFS distributes the collocation points into high and low-amplitude regions in the example. In Section 3.2, we intended to convey that the redistribution of samples has the potential to achieve the same level of accuracy with fewer samples compared to the vanilla uniform distribution. However, we do not directly select the number of points, N\_pde, used in training. We will correct the sentence to "This strategy optimally distributes collocation points in training."
\issue{Questions (Q2)}
Thank you for this question. In our original submission, we only compared DAFS with the vanilla random sampling method. We will work on providing a more comprehensive comparison in our revised version.
From the observation of our numerical experiments on the wave equation, DAFS outperforms the random sampling method for traveling waves but underperforms for standing waves. This can be seen in Figure 2 of the attached PDF.
\issue{Limitations}
In Figure 1 of the attached PDF file, we have added the results of the 2D wave equation with $\tau(t) = t^2$. The comparison of ground truth and PINN-predicted wave fields of standing 2D waves shows that the hard-constraint embedding guarantees accuracy at the boundaries and initial steps but struggles to scale to large time domains when the frequency is high. | Rebuttal 1:
Rebuttal: We thank Reviewers \textbf{QJWv}, \textbf{U3hW}, \textbf{cEZM}, and \textbf{AwP6} for their constructive comments and appreciation of our strengths, such as "The related work is well presented", "The evaluation of the six candidate functions for $\tau$ in section 4.2 provides interesting insights. The authors explore an advanced selection method for $\tau$ based on the task at hand which might be an interesting research direction", "The hard constraint imposition formula are general", "The imposition of the time derivative seems to be a novel construction", "The construction seems not to be limited to the wave equation", and "The paper considers an interesting problem in acoustics".
We answer each reviewer's questions separately under their reviews. Thank you all again for reviewing our abstract! There are so many insightful questions and suggestions that can help us improve our abstract.
Pdf: /pdf/13f37835282a02c9107e05f7bc6c56fd3e28da07.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
ActAnywhere: Subject-Aware Video Background Generation | Accept (poster) | Summary: The task addressed by this paper is, given the appearance and segmentation mask of a foreground subject in a video, for example a human running, to synthesize video backgrounds that are plausible and realistic, both in content and motion. For example, for a person running, if the ground is wet there should be splashes corresponding to footfalls, and the background scene should move along with the runner's movement.
The authors address this problem as a generative video outpainting task. They model it using a latent video diffusion model. The foreground subject appearances (encoded) and masks are concatenated with the latent feature vector, while the conditioning image (what the background should look like) is encoded via CLIP and is provided via cross-attention to the U-net.
A variety of qualitative results are included, demonstrating the ability to out-paint plausible scenes, including background motion, effects (splashing water) and even objects that the subject interacts with. Quantitative evaluations include ablations validating the main components of the model, as well as human rater studies comparing the quality of the output to competing methods.
Strengths: Originality & Significance.
The paper addresses a minor new variant on the outpainting/inpainting problem, which introduces some new challenges such as picking up on foreground contextual cues and using that to inform the background.
Quality & Clarity:
The paper is clearly written, seems correct, and demonstrates visually appealing results with a variety of different foreground subjects (person, car, duck, etc.). The supplemental videos help illustrate the model's capabilities well. The empirical results also support the quality of the generative model.
Weaknesses: The main weakness of this paper is very limited novelty, which compromises the claimed contributions of the paper.
Video inpainting/outpainting is a fairly well-known challenge. The main novelty here is that the masks are simply inverted: instead of deleting an unwanted person and generating the missing background pixels, here we retain only the desired foreground person and generate the missing pixels. I am not convinced that this difference is substantively novel.
Similarly, the modeling approach bears considerable similarity to [13] "Structure and Content-Guided Video Synthesis with Diffusion Models". In particular, compare Fig.2 from the two papers. The primary difference between the two is that this work conditions on foreground appearances and masks, while [13] conditions on estimated depth images. Again, the novelty is minor and unsurprising.
Technical Quality: 3
Clarity: 4
Questions for Authors: Have you tried unconditional generation, i.e. not providing any signal as to what the desired background should be? This could be quite interesting, since it would challenge the model to understand exactly what the foreground subject is doing and to invent plausible backgrounds. For example, this child seems to be interacting with an object, what could that object be? This woman is riding something… what fits?
Would you like to discuss the appearance of the Shutterstock watermarks in the generated backgrounds, particularly in figure 3? This could be an interesting example of the model using foreground context informing the background generation, as per my previous question.
Confidence: 5
Soundness: 3
Presentation: 4
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their thoughtful feedback and for acknowledging that the introduced problem possesses originality, the high quality of the results demonstrates the effectiveness of the proposed model, and that the paper is clearly written. We next address the reviewer’s questions and comments.
> Video inpainting / outpainting is a fairly well-known challenge…
As discussed in L40-L44 and L100-L112 of the submission, the key challenge of our task compared to general video inpainting / outpainting lies in **generating dynamic interactions** and **large background regions** that **follow the image condition**. This is acknowledged by reviewers Pq5G and gAcX. It is nontrivial how to extend the video inpainting / outpainting frameworks, which generally tackle a pixel harmonization problem, to solve these challenges. Also, in Appendix 6.1, we showed that our model, once trained, exhibits general video inpainting / outpainting capabilities, while general inpainting / outpainting methods are not able to solve our proposed task.
> The modeling approach is very similar to Gen1 [13]
Our key contributions include 1) introducing the novel problem of automated subject-aware video background generation with a background image as condition, and 2) proposing a specific video diffusion-based pipeline to solve this problem. Prior works such as Gen1 [13] are not able to solve the introduced problem, as demonstrated qualitatively and quantitatively in Sections 4.2 and 4.3 of the main manuscript. We found empirically that our proposed framework can effectively tackle the problem.
> Unconditional generation with the model
Thank you for the question. We have performed the requested experiment and included the results in Fig. 2 of the attached PDF. Specifically, we set the condition to zeros, same as the random condition dropping described in L192-L194 of the submission. We sampled two subjects from the HiC+ dataset, and for each we ran our model with three different seeds. From each of the three generated videos, we selected one frame to show its input segmentation along with the generation. We observed that the model can generate reasonable random backgrounds that fit the foreground.
> Watermark in the generated backgrounds
Webvid* videos are a major source of the HiC+ dataset, which all contain the “shutterstock” watermark. Training on them provides a dataset bias such that when conditioned on segmentations or a condition frame with the watermark, the generated results will also contain and complete (if the watermark appears partially in the input) the watermark. As the reviewer pointed out, this is an example of the model using foreground context to inform the background generation.
Although this issue is orthogonal to the focus of this work, we are aware of methods that can alleviate the appearance of watermarks in generated outputs, e.g. the “Domain Adapter” in AnimateDiff [17]. We leave the integration of such methods into our framework to future work.
*Frozen in Time: A Joint Video and Image Encoder for End-to-End Retrieval. Bain et al. ICCV 2021.
---
Rebuttal Comment 1.1:
Comment: Thank you for exploring unconditional background generation. I think this is an intriguing variant of the problem to showcase. I like how the model attempts to correct for the overly-tight cropping of the foreground person, by generating the missing shoe.
Given the general agreement from other reviewers that this problem is novel and interesting, I'll retract that part of my assessment, and boost the score.
---
Reply to Comment 1.1.1:
Comment: We are glad to hear that we addressed your concerns. We will include the unconditional generation results in the final version of the paper. Thank you for raising the score! And thanks again for your efforts spent reviewing our paper! | Summary: This paper introduces ActAnywhere, a video diffusion model designed to generate video backgrounds that adapt to the foreground subject's motion. By utilizing a sequence of foreground subject segmentation and a background image, the model produces realistic videos with coherent foreground-background interactions. Experiments on a large-scale dataset demonstrate the model's effectiveness, outperforming existing methods in generating realistic and dynamic backgrounds.
Strengths: S1: The paper introduces a novel problem of automated subject-aware video background generation.
S2: The methodology shows improvements in generating coherent videos with realistic subject-background interactions.
S3: The contributions are significant, particularly for applications in the movie industry and visual effects.
S4: The paper is comprehensive and well-written.
Weaknesses: W1: The paper lacks sufficient comparison with a broader range of existing methods, particularly those leveraging recent advancements in video generation and editing (though they are not designed for background generation, they could be applied to it).
W2: It seems that the model relies heavily on the quality of the foreground video segmentation masks.
Technical Quality: 4
Clarity: 4
Questions for Authors: Q1: I wonder about the impact of the quality of the foreground segmentation. It seems that the model relies heavily on the quality of the foreground segmentation.
Q2: How scalable is your model for generating longer video sequences? Have you tested its performance in generating videos of varying lengths, and what are the results?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: I think there is no potential negative societal impact and the authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback from the reviewer and thank them for acknowledging that our introduced problem is novel, the proposed method is effective and makes a significant contribution to the movie and VFX industries, and that our paper is well-written. We address the reviewer's individual questions and comments below.
> Lacks sufficient comparison with recent works on video generation and editing
To the best of our knowledge, the work with the setting closest to ours is AVID*, which can perform a task termed “Environment Swap” by editing the background region with a text condition. However, AVID does not take image conditioning, and hence cannot make the generated video follow the strict guidance specified by an image as we do. Moreover, the results the authors show on their website do not exhibit realistic foreground-background interactions as ours do (e.g. the sand does not deform according to the woman / tiger’s movement), and they fail to generate correct shadows (the shadows are simply carried over from the input). We also note that this work was published recently at CVPR, after the NeurIPS submission deadline, and is not yet open-sourced, which makes it hard to compare with.
Apart from this work, we believe others study different settings from ours and are generally non-trivial to extend to work under our setting. We are happy to compare any specific methods that the reviewer may suggest.
*AVID: Any-Length Video Inpainting with Diffusion Model. Zhang et al. CVPR 2024.
> It seems that the model relies heavily on the quality of the foreground segmentation
As referenced in L204 of the main manuscript and discussed in Appendix 6.1, our model is in fact quite robust to inaccurate masks (Fig. 8), thanks to our designed data augmentation and processing procedures (i.e. random cut-outs and image erosion to the segmentation and mask) as noted in Sec. 3.4 of the main manuscript and Appendix 6.2.
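A schematic sketch of this kind of mask augmentation (a random rectangular cut-out followed by binary erosion); the function name, parameters, and erosion scheme are hypothetical illustrations, not the authors' exact procedure:

```python
import numpy as np

def augment_mask(mask: np.ndarray, rng: np.random.Generator,
                 cutout_frac: float = 0.25, erode_iters: int = 1) -> np.ndarray:
    """Apply a random rectangular cut-out, then binary erosion, to a 0/1 mask."""
    h, w = mask.shape
    out = mask.copy()
    # Random rectangular cut-out: zero out a sub-rectangle of the mask.
    ch, cw = int(h * cutout_frac), int(w * cutout_frac)
    y0 = rng.integers(0, h - ch + 1)
    x0 = rng.integers(0, w - cw + 1)
    out[y0:y0 + ch, x0:x0 + cw] = 0
    # Simple 4-neighbour binary erosion: a pixel survives only if it and
    # all four of its neighbours are foreground.
    for _ in range(erode_iters):
        padded = np.pad(out, 1, mode="constant")
        out = (padded[1:-1, 1:-1] & padded[:-2, 1:-1] & padded[2:, 1:-1]
               & padded[1:-1, :-2] & padded[1:-1, 2:])
    return out

rng = np.random.default_rng(0)
mask = np.ones((64, 64), dtype=np.uint8)
aug = augment_mask(mask, rng)
# Augmentation can only remove foreground pixels, never add them.
assert aug.sum() < mask.sum()
```

Training on deliberately degraded masks like this is a common way to make a model tolerant of imperfect segmentation at inference time.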
> How scalable is the model for generating longer / variable-length video sequences?
Due to the modular nature of our framework, we can easily scale up to generating longer videos by swapping in a different base model. The choice of generating 16-frame videos aligns with many previous works (e.g. [9, 17] for 16-frame generations, and [24] for 8-frame generations) and is primarily limited by the compute resources we had. Generating longer videos is out of scope for this work, and we will explore it in future work with the latest DiT backbone.
---
Rebuttal Comment 1.1:
Title: Has the rebuttal addressed your concerns?
Comment: Dear Reviewer gAcX,
Thank you again for your time to review this paper. Could you please check if the authors' rebuttal has addressed your concerns at your earliest convenience? The deadline of the discussion period will end in about 24 hours. Thank you!
Best regards,
AC | Summary: This paper studies a new topic: automatic background generation of moving foreground subject. Different from video inpainting/outpainting and other video editing methods, the method in this paper can maintain the consistency of foreground moving subject, and maintain reasonable and realistic interactions, camera motion, lighting and shadows.
It also generalizes to diverse scenarios including non-human objects, gaming and animation, as well as videos with multiple moving subjects. The method takes a foreground segmentation image sequence and a foreground mask sequence as input and, under the guidance of a background reference image, generates a motion video of the interaction between the foreground and the background. Specifically, the spatial-layer parameters of the denoising 3D U-Net are loaded from a pre-trained SD-inpainting model, and the temporal layers (motion module) are loaded from the pre-trained AnimateDiff V2 model. The input is a nine-channel feature map that concatenates, along the channel dimension, the noise latent, the VAE encoder features of the foreground segmentation map, and the downsampled foreground mask.
The CLIP features of the background reference image are injected into the U-Net through cross-attention. The model is trained on the HiC+ dataset (2.4M videos of human-scene interactions). To account for errors in the foreground segmentation, random rectangular cut-out augmentation is applied to the foreground segmentation sequence and the foreground mask sequence.
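The nine-channel input assembly described in this summary can be sketched as follows; the latent shapes are illustrative assumptions, not the paper's exact sizes:

```python
import numpy as np

# Hypothetical latent-space shapes: a 16-frame video whose frames map to a
# 32x32 latent grid (e.g. 256x256 pixels downsampled 8x by the VAE).
T, H, W = 16, 32, 32
noise_latent = np.random.randn(T, 4, H, W)   # 4-channel diffusion noise latent
fg_latent    = np.random.randn(T, 4, H, W)   # VAE encoding of the segmented foreground
fg_mask      = np.random.rand(T, 1, H, W)    # downsampled foreground mask

# Concatenate along the channel dimension to form the 9-channel U-Net input.
unet_input = np.concatenate([noise_latent, fg_latent, fg_mask], axis=1)
assert unet_input.shape == (T, 9, H, W)
```

The background reference image would be injected separately (e.g. as CLIP features via cross-attention) rather than concatenated into this tensor.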
Strengths: A new task was studied: automatic background generation of moving foreground, and reasonable and realistic foreground/background interaction, camera motion, lighting and shadows were required.
Although it is a new task, the results of similar methods were compared under conditions as fair as possible, which indeed demonstrates the superiority of the method in this paper.
Weaknesses: The training resolution is low (256x256), and it is unclear whether the method in this paper still has advantages and remains reliable at higher resolutions.
Ablation experiments: The choice of how to use the background reference image for guidance does not seem to be fully explored; for example, what about an approach similar to IP-Adapter?
The training details are not clear: are the spatial layers and the motion module fine-tuned at the same time? Or only the motion module? Or is co-training still required?
This article spends a certain amount of space explaining the difference in results between the proposed method and the SD-inpainting method, but in terms of the algorithm pipeline, apart from using the CLIP features of the background reference image for guidance, the rest is almost identical to the SD-inpainting model; even the spatial layers of the U-Net directly load the pre-trained SD-inpainting model.
The article does not seem to explain the difference in principle between the proposed method and the SD-inpainting method.
Technical Quality: 3
Clarity: 2
Questions for Authors: Please refer to the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their positive feedback and for acknowledging that our introduced task is novel, and that the experimental comparisons are under a fair setup. We address the reviewer’s particular questions and comments below.
> Training resolution is low… The method of using reference image does not seem fully considered, e.g. the method similar to IP-Adapter?
There is no fundamental limitation preventing our method from working at higher resolutions, given a higher-resolution base model and more compute resources.
Compared to the condition-encoding method in IP-Adapter, we only support image conditioning and thus do not need to map image and text to a joint space; hence we do not need the Decoupled Cross-Attention module from their framework. Similar to the image encoding in IP-Adapter, though, we also use a linear layer to project the CLIP image features to a lower-dimensional feature space, as noted in L179-L182 of the main manuscript.
> Are we finetuning only the spatial layers, only the motion module, or both?
Thank you for the question. We finetune both the spatial layers and the motion module layers at the same time. We will clarify this in the final version of the paper.
> Technical framework is very similar to SD-inpainting
Our key contributions include 1) the introduction of the novel problem of automated subject-aware video background generation with a background image as condition, and 2) the proposal of a specific video diffusion-based pipeline that is carefully designed and tailored to solve this problem.
SD-inpainting works only in the image setting, and it is unclear how to extend it to videos. Our proposed framework, along with the designed self-supervised training pipeline and data augmentation strategies, together constitute our solution.
---
Rebuttal Comment 1.1:
Title: Has the rebuttal addressed your concerns?
Comment: Dear Reviewer Pq5G,
Thank you again for your time to review this paper. Could you please check if the authors' rebuttal has addressed your concerns at your earliest convenience? The deadline of the discussion period will end in about 24 hours. Thank you!
Best regards,
AC | Summary: This paper study to automatically generate video background that tailors to foreground subject motion. It proposes ActAnywhere, a video diffusion model that takes as input a sequence of foreground subject segmentation and an image of a novel background and generates a video of the subject interacting in this background. Both quantitative and qualitative comparisons demonstrate that the model significantly outperforms existing methods, which fail to accomplish the studied task.
Strengths: 1. The task is interesting and the result seems to be very competitive.
2. The paper is written clear.
Weaknesses: 1. Encoding the condition frame with only a CLIP encoder cannot capture detailed information, especially for high-resolution input images. Please refer to AnimateAnyone for human image animation, and please compare with such a ReferenceNet-attention approach.
2. Can you provide some video demos? I suppose there may exist results with blurry boundaries.
3. Since the task seems to be closely related to the human animation task, could you please show some results on the TikTok dataset? Please see the AnimateAnyone paper for comparison details.
Technical Quality: 4
Clarity: 3
Questions for Authors: See weakness
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 2
Limitations: See weakness
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate the positive feedback from the reviewer and thank them for acknowledging that our introduced task is interesting and that our proposed model achieved competitive results. We next address the reviewer's individual questions and comments.
> CLIP encoder alone cannot contain detailed information… Compare to the approach of AnimateAnyone using ReferenceNet.
We thank the reviewer for the suggestion. We would like to note that detail preservation of the condition is still an open question. Because our framework is fairly general and agnostic to the image encoder, we can easily swap the CLIP encoder for any other encoder, e.g., DINO* if object-centric features are more desirable. ReferenceNet is an interesting alternative, but AnimateAnyone is not open-sourced, which makes direct comparison hard. Nonetheless, we are happy to compare if the reviewer insists.
*Emerging Properties in Self-Supervised Vision Transformers. Caron et al. ICCV 2021.
*DINOv2: Learning Robust Visual Features without Supervision. Oquab et al. TMLR, 2024.
> Provide video demos. Blurry boundaries may exist in the videos.
We already included extensive video results in the easily accessible supplementary webpage, as mentioned in L206-L208 of the main manuscript. We did not observe any particular cases with blurry boundaries.
> Task relevant to human animation. Show results on the TikTok dataset.
We note a key difference from the human animation task: the goal is in fact the opposite, as we are given the foreground motion and try to generate the moving background. The TikTok dataset instead focuses on generating the foreground motion. Hence, the background in that dataset is often very simple and does not move much, and very little, if any, interaction happens between the foreground and the background, so the dataset is not a good fit for our task. Since AnimateAnyone is not open-sourced, it is also hard to compare experimentally. Nonetheless, as requested by the reviewer, we tested our model on two samples from the TikTok dataset and included the results in Fig. 1 of the attached PDF. Our model demonstrates good generalization on the tested data.
---
Rebuttal Comment 1.1:
Title: Has the rebuttal addressed your concerns?
Comment: Dear Reviewer Aquh,
Thank you again for your time to review this paper. Could you please check if the authors' rebuttal has addressed your concerns at your earliest convenience? The deadline of the discussion period will end in about 24 hours. Thank you!
Best regards,
AC | Rebuttal 1:
Rebuttal: We would like to sincerely thank the AC and the reviewers for their hard work and time in reviewing our submission. We appreciate the positive feedback and recognition of the novelty and significance of the introduced problem, the high quality of the results and the effectiveness of the proposed method, as well as the clear paper writing. We also appreciate the insightful suggestions for further improving our work.
We also want to emphasize that the key technical contributions of this paper are 1) the introduction of the novel problem of automated subject-aware video background generation, and 2) the proposal of a video diffusion-based pipeline that is carefully designed and tailored to solve this problem. As we have shown in Sec. 4 of the main manuscript, no baseline was able to effectively solve the introduced problem (even with reasonable adaptations), and our proposed method achieved significantly better performance than the baselines both qualitatively and quantitatively. This demonstrates the challenge of the introduced problem and the effectiveness of the proposed solution.
In the comments below, we have addressed all specific issues / concerns / questions raised by the four reviewers by replying to each individually. We also attached a PDF which contains the additional results requested by Reviewers **Aquh** and **ghM9**.
Thank you again for your time and efforts throughout the review process -- they are greatly appreciated!
Pdf: /pdf/6af6d6c832cf33d062fe80083d32cd95485c36f3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Constrained Binary Decision Making | Accept (poster) | Summary: The authors proposed an optimization problem which has specific form of solutions that can be leveraged to solve various types of binary statistical decision making problems.
Strengths: The formulation of the optimization is general and there are many applications in binary statistical decision making problem.
Weaknesses: The motivation for characterizing the optimal solution is, to quote the authors: "This example underscores the advantages of understanding the structure of the optimal solution to underlying BDM problems". So it seems reasonable to me to add some experiments to validate the claim.
Technical Quality: 3
Clarity: 3
Questions for Authors: As above.
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: adequately addressed
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: R: *The motivation for characterizing the optimal solution is, to quote the authors: "This example underscores the advantages of understanding the structure of the optimal solution to underlying BDM problems". So it seems reasonable to me to add some experiments to validate the claim.*
A: Recent papers [7] and [16] independently provided empirical evidence that the optimal strategy for SCOD outperforms the heuristic strategy SIRC [23].
---
Rebuttal Comment 1.1:
Title: Reviewer response?
Comment: Reviewer CrjX, could you please review the authors' response and see whether it addresses your questions? Please acknowledge having done so in a comment. Thanks.
---
Rebuttal Comment 1.2:
Title: Reply to rebuttal
Comment: Sorry for the late reply. I thank the authors for their clarification. Their reply addresses my concern, and I would like to keep my score. | Summary: The paper titled "Constrained Binary Decision Making" presents a comprehensive framework for binary statistical decision making (BDM), a critical area in both classical statistics and modern machine learning. The authors formulate BDM problems as constrained optimization tasks and provide a detailed characterization of the optimal solutions. The paper covers well-known and recent BDM problems, demonstrating the applicability of their generic approach to derive optimal decision strategies.
Strengths: - This paper is very well-written and easy to follow.
- The authors proposed an interesting framework that encompasses several popular BDM problems and presented solid proof.
Weaknesses: - The authors claimed "Conversely, skipping the optimal strategy derivation and using heuristic rules, such as the SIRC strategy from the original SCOD paper [23], can lead to sub-optimal performance". I think it would be interesting (but not required) to conduct experiments to compare the proposed approach against the one in [23].
Technical Quality: 3
Clarity: 4
Questions for Authors: - In Lemma 1, I do not know what $\pi(x)$ and $\rho(x)$ stand for. Should they be $p(x)$ and $q(x)$, respectively?
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 4
Limitations: The authors claimed that "The paper presents a theoretical result on an optimization problem, hence, in its essence, it has no limitations." Regarding the **Extensions** section, I'm wondering whether the proposed approach can be adapted to handle the case where more than one **$\geq$** constraint exists; could this be a limitation of the current framework?
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: R: *The authors claimed "Conversely, skipping the optimal strategy derivation and using heuristic rules, such as the SIRC strategy from the original SCOD paper [23], can lead to sub-optimal performance". I think it would be interesting (but not required) to conduct experiments to compare the proposed approach against the one in [23].*
A: Recent papers [7] and [16] independently provided empirical evidence that the optimal strategy for SCOD outperforms the heuristic strategy SIRC [23].
R: *The authors claimed that "The paper presents a theoretical result on an optimization problem, hence, in its essence, it has no limitations." I think in the Extensions section, I'm wondering that whether the proposed approach can be adapted to handle the case where more than one constraints exist, could this be a limitation of the current framework?*
A: We agree with the comment. Since the proposed extension is an unproven hypothesis, we will list it among the paper's limitations.
R: *In Lemma 1, I do not know what $\pi(x)$ and $\rho(x)$ stand for. Should they be $p(x)$ and $q(x)$, respectively?*
A: Yes, it is a typo, these functions should be $p(x)$ and $q(x)$.
---
Rebuttal Comment 1.1:
Comment: Thanks for addressing my comments. I will keep my score unchanged. | Summary: This paper studies a class of optimality criteria for what it calls "binary decision making" which is basically binary classification but with a randomized classifier. It characterizes the solutions for thes criteria, recovering some known results and also establishing new ones. The paper is entirely theoretical.
Strengths: The main strength of the paper is that it provides a very general treatment of optimality criteria for binary classification, where the criteria include an objective to be minimized, and also one or two constraints to be satisfied. It subsumes prior work on statistical optimality in "selective classification." The proof appears to be original, nontrivial, and, for the most part, clearly written.
Weaknesses: The main weakness concerns the impact. Of course impact on practitioners is desired, but even the theoretical impact was unclear to me at times. I think this stems from how the paper is motivated. The motivation uses the notion of "selective classification," where you have a classifier and a selection function, and your optimality criterion concerns both of them. But the actual results in the paper only discuss the selection function, so there is a disconnect. So if the authors could clarify that issue, and also make a sincere effort to explain realistic settings where the novel criteria are potentially useful, that would go a long way toward resolving the main weakness.
The other notable weakness is that the motivation involves a classifier h and a selection function c, but then h goes away at some point.
Below I include a section by section list of comments that I made while reading the paper.
Introduction: Writing could be improved. For example, the Neyman-Pearson lemma is introduced in both the first and second paragraphs. Selective classification is not so well known, and it would be helpful to define it here, and explain why it is an important class of criteria. Most importantly, BDM is never precisely defined.
*** What is the impact of the contribution? Does it help us train selective classifiers? Some optimal solutions seem to require knowledge of the underlying distributions. In line 165 it is stated that the results are “potentially useful for specific applications”, but no concrete examples are given. Thus the impact could be more convincingly argued. This is probably the main weakness.
The sentence “Therefore, learning a detector from examples involves effectively approximating the likelihood ratio and then tuning the decision threshold” is seemingly false, as you just need the decision boundary
Reference [17]: “Person” should be “Pearson”
** Sec 2: What is the learning problem? Do we just get data from the “in distribution”?
The statement “We assume the classifier h was designed to minimize the prediction loss … ” is unclear – usually one chooses h to minimize an expected loss. But it is not clear what that expectation (joint distribution) is at this point in the presentation.
Eqn. (7): The notation p(x,y), to me, assumes x and y are jointly discrete. Better to just use E for expectation and eliminate the integral and sum.
Eqn. (10): missing “)”
*** The review of BDM problems discussed the optimal c, but the optimal h is not mentioned.
Selective risk: notation varies, e.g., R^S vs R_S
** The final example assumes that pi is known or can be estimated. Well, it will not be known in practice, and if estimated, the estimate won’t equal the true pi. Thus, it would be important to understand how having an estimate of pi impacts the usefulness of this case.
There are several papers that cover classes of optimal prediction functions in different setups, and I think it would be appropriate to mention this work and discuss connections. For example,
Consistent Binary Classification with Generalized Performance Metrics, Oluwasanmi O. Koyejo, Nagarajan Natarajan, Pradeep K. Ravikumar, Inderjit S. Dhillon
Harikrishna Narasimhan, Rohit Vaish, and Shivani Agarwal. On the statistical consistency of plugin classifiers for non-decomposable performance measures.
Krzysztof Dembczyński, Wojciech Kotłowski, Oluwasanmi Koyejo, and Nagarajan Natarajan. Consistency analysis for binary classification revisited.
Clayton Scott, A Generalized Neyman-Pearson Criterion for Optimal Domain Adaptation
One common theme is that the optimal decision rule is always a likelihood ratio test. If it’s not, it means the criterion being optimized is not sensible. So you could use that to frame your contribution.
I would also recommend looking at the simple idea presented in the section on “Birdsall’s Insight” in Statistical Signal Processing by Louis Scharf.
Another potentially related paper is Classification with confidence by Jing Lei
Sec 3
In the setup, the functions are assumed to be Lebesgue measurable. This means \mathcal{X} should be a Euclidean space, which has not been indicated.
In the statement of Lemma 1, the notations rho(x) and pi(x) appear out of nowhere.
*** The function R depends on c, whereas before it depended on c and h. An explanation is needed. The whole motivation involved h, but now h has been lost.
Sec 4
Line 235: “useful variants” -> “special cases”
Line 235: “whose solutions can be derived” – the full solution would require specification of tau(x) and lambda, so instead you could say that you are characterizing the form of the solution.
Problem 1: mention that this does not fully recover the NP lemma because in the NP lemma, tau is a constant.
In the discussion of problems 4 and 5, I *think* you mean to say that these can be transformed to (25), not (24).
** Can you give an example of where it does not suffice for tau(x) to be a constant? In all of the examples, tau(x) is neglected.
Proofs
Line 337: what notion of dimension is meant? Do you mean it has positive Lebesgue measure on R^2?
Line 348: why is C’ a compact set?
Line 353: Need to discuss why pi is measurable, and presumably here A is a Borel set.
Line 357-358: It’s clear that such x do not impact the first constraint, but it’s not clear that it doesn’t impact the objective function or second constraint.
Line 360: contains pi(x)
Technical Quality: 3
Clarity: 3
Questions for Authors: In the previous box, a ** or *** next to a comment means I would be interested in hearing the authors' response, with *** indicating higher priority. Responses to other comments are also welcome.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: There are no anticipated negative societal impacts.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Priority questions**
R: *What is the impact of the contribution? Does it help us train selective classifiers? Some optimal solutions seem to require knowledge of the underlying distributions. In line 165 it is stated that the results are “potentially useful for specific applications”, but no concrete examples are given. Thus the impact could be more convincingly argued. This is probably the main weakness.*
A: Our paper characterizes optimal strategies for various BDM decision problems, such as those used to define optimal selective classifiers. Although the formulations and derived optimal strategies rely on known data distributions, our results have at least two immediate practical impacts. First, learning involves finding a decision strategy within a predefined hypothesis space based on data. Knowing the optimal strategy's form allows one to define a hypothesis space that includes the optimal strategy, which is essential for a statistically consistent algorithm. For reference, see the introduction, where we discuss prior SCOD works that used heuristically chosen hypothesis spaces that did not contain the optimal strategy, and a follow-up paper that improved results by modifying the heuristic to match the optimal rule. Second, our results allow the construction of plug-in optimal strategies for various problems.
R: *Sec 2: What is the learning problem? Do we just get data from the “in distribution”?*
A: Section 2 does not address any learning problems. Its goal is to provide examples of BDM problems and their optimal solutions (strategies). These formulations and strategies assume the data distribution is known. The significance of knowing the optimal strategy's form for designing learning algorithms is discussed in the previous answer.
R: *The review of BDM problems discussed the optimal c, but the optimal h is not mentioned.*
A: Our paper focuses on finding the optimal $c$ by solving various instances of the BDM problem. Except for the Neyman-Pearson problem, the example problems also involve finding the predictor $h$. However, in all the examples, determining the optimal strategy for $h$ is straightforward. In all cases, the Bayes predictor $h^*(x)= {\rm argmin_{y'}} \sum_{y}p(y\mid x)\ell(y,y')$ is optimal due to the additivity of the risk $R^S$, which allows $h$ to be optimized independently for each instance $x$.
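As a concrete (hypothetical) illustration of the Bayes-predictor rule $h^*(x)= {\rm argmin_{y'}} \sum_{y}p(y\mid x)\ell(y,y')$ quoted above, a plug-in predictor given posterior probabilities can be sketched as:

```python
def bayes_predictor(posterior, labels, loss):
    """Return argmin_{y'} sum_y p(y|x) * loss(y, y'),
    where `posterior` maps each label y to p(y|x) for a fixed x."""
    def expected_loss(y_pred):
        return sum(posterior[y] * loss(y, y_pred) for y in labels)
    return min(labels, key=expected_loss)

# With 0/1 loss the rule reduces to MAP: pick the most probable label.
zero_one = lambda y, y_pred: 0.0 if y == y_pred else 1.0
posterior = {0: 0.3, 1: 0.7}
print(bayes_predictor(posterior, [0, 1], zero_one))  # 1
```

The minimization is per-instance, which is exactly the independence enabled by the additivity of $R^S$ that the answer describes.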
R: *The final example assumes that pi is known or can be estimated. Well, it will not be known in practice, and if estimated, the estimate won’t equal the true pi. Thus, it would be important to understand how having an estimate of pi impacts the usefulness of this case.*
A: Yes, the formulation in Sec 2.6 is applicable only if $\pi$ is known or can be estimated. If $\pi$ is unknown, the formulations in Sec 2.4 and 2.5 should be used. While analyzing the sensitivity of the solution to the OOD ratio $\pi$ in Sec 2.6 could be an interesting topic for future research, it is not the focus of our current paper.
R: *The function R depends on c, whereas before it depended on c and h. An explanation is needed. The whole motivation involved h, but now h has been lost.*
A: Thanks for pointing out this inconsistency. We will include $h$ and emphasize that finding the optimal $h$ is not an issue (see the answer above).
R: *Can you give an example of where it does not suffice for tau(x) to be a constant? In all of the examples, tau(x) is neglected.*
A: There is a degenerate case when the score $s(x)$ equals the threshold $\lambda$ for all $x\in {\cal X}$, and $\tau(x)$ acts as a selection function. In this scenario, the points $(R(x)/p(x),q(x)/p(x))$ lie on a line $L$. Consequently, $\tau(x)$ equals 1 for a line segment subset of $L$, and 0 otherwise. In practice, the degenerate cases occur when the instance space ${\cal X}$ is finite.
R: *In the statement of Lemma 1, the notations rho(x) and pi(x) appear out of nowhere.*
A: Yes, it is a typo, these functions should be $p(x)$ and $q(x)$.
**Proofs**
R: *Line 337: what notion of dimension is meant? Do you mean it has positive Lebesgue measure on $\mathbb{R}^2$?*
A: It is the dimension of $\rm{span}(A)$ in the vector space $\mathbb{R}^2$.
R: *Line 348: why is $C'$ a compact set?*
A: We will provide details to show that $C'$ is both complete and totally bounded, i.e. compact. Essentially, a Cauchy sequence of feasible $c$'s cannot converge to a function that violates any of the constraints by some $\varepsilon > 0$.
R: *Line 353: Need to discuss why $\pi$ is measurable, and presumably here $A$ is a Borel set.*
A: Agreed, a discussion is required. We need to define $p(A)$, $c(A)$ only for those $A$ that are epsilon-balls in $\mathbb{R}^2$. Then, we need to show that $\pi^{-1}(A)$ is a measurable subset of ${\cal X}$. This holds since the subset is determined by measurable functions derived from the functions $p(x), q(x), R(x)$.
R: *Line 357-358: It’s clear that such $x$ do not impact the first constraint, but it’s not clear that it doesn’t impact the objective function or second constraint.*
A: For such $x$ we can set $c^*(x)=0$ since this does not impact the first constraint and does not worsen the criterion or the second constraint. We will give a more detailed explanation.
R: *Line 360: contains $\pi(x)$*
A: You are right.
____
Thank you for highlighting these issues. We will revise and enhance the proof accordingly.
We will also apply the other suggested corrections to the main text.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for responding to my review. I will maintain my score of weak accept. I have no doubt the authors can improve the paper, but there are enough changes that I would want to re-review the entire paper before making a stronger recommendation. | Summary: This paper characterizes optimal solutions to a class of binary statistical decision-making (BDM) problems. The problem recovers as special cases the likelihood-ratio problem and variants of classification with rejection problems. The optimal solutions characterized in this paper coincide with the known solutions to the aforementioned problems, and thanks to this paper, new solutions are found to some new variants of the BDM problem.
Strengths: I like the paper. It is written very cleanly. The motivation is clear. I also enjoy how the paper starts with a slow pace, gives a "tutorial" on BDMs, and then uses this to connect with their results -- since the Neyman-Pearson lemma is very old and there exist many variants of the notation, this way of presentation was particularly useful. I like Theorem 1 and Lemma 1, and the proofs use smart techniques (e.g., even Lemma 1 looks simple, but the construction of c* from an optimal c is very clear in my view).
Weaknesses: In my view, the following points are the weaknesses of the paper and I kindly invite the authors to address them:
- In general, problem (17) is an infinite-dimensional linear program. There is a huge literature, many books, etc., on this. These resources devote significant time to characterizing optimal solutions and the existence of feasibility. Could the authors add more discussion on why the existing literature on infinite linear programs (ILPs) cannot be used here? One can look at "Linear programming in infinite-dimensional spaces: theory and applications" by Nash and Anderson, or related papers.
- Section 2.5: It is said optimal solutions to the given problem are unknown. But this problem variant is also defined in this paper. Can the authors also motivate "it is not known, and can *not* be derived by the existing machinery"? Currently, the flow is ad-hoc in the sense that the authors define a new problem, discuss there is no known solution, and derive one that resembles the solutions to similar problems from the earlier subsections.
- Section 2.5: In line 145, it is said SCOD is using different units in the objective function. However, there are many multi-objective problems with different units. For example, portfolio optimization involves balancing between returns and covariances, which have different units. This problem needs further motivation in my view.
- Please make the notation consistent: (14) is using $R^\mathrm{S}(h,c)$ but earlier it was $R^\mathrm{S}(c)$. I think the former is better, but please be consistent across the paper. Furthermore, Lemma 1 has $\rho$ and $\pi$, which I believe should be $q$ and $p$.
- After Theorem 1, before immediately presenting Lemma 1, can the authors discuss Theorem 1? I like the idea of having a single function to use in the optimal solution characterization via piece-wise functions (as this resembles existing results), however, equations (20) and (21) do not give much intuition currently.
- Some of the computational challenges are presented as it is easy. For example line 215: "Once these ratios are known, determining the score $s$ involves finding the multiplier $\mu$". Can the authors discuss how to find $\mu$? Similarly "one **only** needs to find the scalars $\mu$ and $\lambda$". Same for line 224, "we obtain a similar problem but with only one constraint.": how do you solve this problem, there is no mention.
- The paper would greatly benefit from some numerical experiments. Currently, it is highly theoretical and there are strong assumptions like the knowledge of underlying distributions. There are mentions of cross-validation (or estimation from held-out data), which is more of a computational task than statistical in my view, and I would like to see some mini-experiments on this.
Some minor typos and issues:
- Line 70: "is" generated from the in-distribution
- Line 91: "to distinguishes" has a typo
- Line 107, "reflects the probability of accepting an input sample": I don't think this is about an input sample. Isn't $\phi$ returning the measure of the set that we "reject"? The input of $\phi$ is $c$, not $x$.
- Equation (10): Missing comma ","
- Some references to (15) should be (14) instead. Examples are line 149, and 172.
- Line 157: "solvers" should be "solves"? That said "the strategy" should be "a strategy" if there can be alternative optima.
Technical Quality: 3
Clarity: 3
Questions for Authors: - At the beginning of the paper, it is said that the BDM problem involves Lebesgue measurable functions. It is not clear to me whether all the literature is specifically focused on basically density functions. Can the authors confirm this and add a more in-depth discussion, please?
- Why is there a focus on "robust" keyword in this work? Especially because we assume data-generating distributions like $p_I$ are known, there is no robustness. I am a little confused. Some of the work on the "classification with rejection" literature has motivation for robustness, but I don't see how this is generalized for BDMs in general.
- Are defining the distributions as $p: \mathcal{X} \mapsto \mathbb{R}_+$ the formal way? Is this how the literature defines them?
- Line 77, "only when prediction uncertainty is minimal": I don't understand this sentence. Also, there are uses of $p(x, y)$ and $p(y \mid x)$. Please formally define them.
- Can you discuss the measurability of the functions, such as terms in (7)?
- Equation (12): Aren't there work in the literature that also constrains $\phi(c)$?
- Line 232 says "the combinatorial complexity of the proof" will be increased if we have more constraints. How do the authors know the proof will still go through? If the authors are convinced, I would at least propose a high-level proof sketch in the appendix.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are addressed to some extent, and the checklist is complete. However, further focus on the computational issues about the proposed model should be highlighted.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Weaknesses**
R: *In general, problem (17) is an infinite-dimensional linear program (ILP). There is a huge literature on this.*
A: We agree that BDM is an instance of ILP, enabling the use of tools like duality and KKT conditions. However, it is unclear if these tools can provide the same characterization of the optimal decision strategy for BDM or simplify our proof. We do not see an immediate, straightforward application. Furthermore, existing proofs related to the Neyman-Pearson problem, to our knowledge, do not consider ILP. We will include a discussion on this topic.
R: *Section 2.5: It is said optimal solutions to the given problem are unknown. But this problem variant is also defined in this paper. Can the authors also motivate "it is not known, and can not be derived by the existing machinery"?*
A: Our goal is to demonstrate the generic nature of the proposed framework. Therefore, we introduced novel modifications to the existing SCOD problem (12). These modifications, which include additional constraints and an optimized function in the denominator when defining precision, are more complex but can be readily resolved. We will clarify this in the text.
R: *Section 2.5: It is said SCOD is using different units in the objective function. However, there are many multi-objective problems with different units. This problem needs further motivation.*
A: We do not intend to criticize multi-objective problems. Instead, we aim to demonstrate that alternative formulations, which may be more appropriate in certain cases, exist and can be readily solved.
R: *Please make the notation consistent in (14), Lemma 1.*
A: Thanks for pointing this out. We will make the notation consistent. There is a typo; these functions should be $p(x)$ and $q(x)$.
R: *Can the authors discuss Theorem 1?*
A: We will add a discussion on equations (18)-(21). The sets ${\cal X}^{<}$, ${\cal X}^{>}$ are subsets of ${\cal X}$ separated by the score function; they ignore insignificant $x\in {\cal X}$ for which $p(x)=0$.
R: *Some of the computational challenges are presented as it is easy. Can the authors discuss how to find $\mu$ and $\lambda$"?*
A: The method for finding unknown multipliers varies by problem. In some cases, such as the optimal strategy for the SCOD problem (13), the multiplier can be calculated from the problem's input parameters. If no explicit formula exists, the standard approach is to tune the parameters using calibration data. Notably, tuning one or two parameters, while challenging, is significantly easier than tuning an unknown function, which would be required without the optimal strategy characterization.
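As a hypothetical sketch of the calibration-based tuning described above (the grid, the toy risk function, and all names are illustrative, not taken from the paper), discretized exhaustive search over two multipliers might look like:

```python
import itertools

def tune_multipliers(candidates_mu, candidates_lam, calib_risk):
    """Exhaustively search a discretized grid for the (mu, lambda) pair
    minimizing a user-supplied risk estimated on calibration data."""
    return min(itertools.product(candidates_mu, candidates_lam),
               key=lambda pair: calib_risk(*pair))

# Toy calibration risk with a unique minimum at mu=0.5, lam=1.0.
risk = lambda mu, lam: (mu - 0.5) ** 2 + (lam - 1.0) ** 2
grid = [i / 10 for i in range(21)]  # 0.0, 0.1, ..., 2.0
print(tune_multipliers(grid, grid, risk))  # (0.5, 1.0)
```

Tuning two scalars this way costs a grid-sized number of risk evaluations, in contrast to searching over an unconstrained function space.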
**Questions**
R: *At the beginning of the paper, it is said that the BDM problem involves Lebesgue measurable functions. It is not clear to me whether all the literature is specifically focused on basically density functions. Can the authors confirm this and add a more in-depth discussion, please?*
A: At least in all the example problems from Sec 2, the functions involved are either probability density functions (p.d.f.s) or p.d.f.s multiplied by loss functions, all of which are Lebesgue measurable.
R: *Why is there a focus on "robust" keyword?*
A: We use the term "robust" to indicate that the tools proposed in the paper i) require very weak assumptions for application and ii) are applicable to a broad range of BDM problems.
R: *Are defining the distributions as $p\colon{\cal X}\rightarrow\mathbb{R}_+$ the formal way? Is this how the literature defines them?*
A: In defining the functions used in the problem formulations, we specify only their domains and co-domains, such as $p\colon{\cal X}\rightarrow\mathbb{R}_+$. Probability density functions (p.d.f.s) must also be Lebesgue integrable with an integral equal to one. We assume this well-known standard definition, therefore, we have not included it.
R: *Line 77, "only when prediction uncertainty is minimal": I don't understand this sentence. Also, there are uses of $p(x,y)$ and $p(y\mid x)$. Please formally define them.*
A: We agree that the sentence is confusing. A clearer explanation would be: "We will explore six different BDM instances to develop a selector $c$ that admits an input sample $x$ for classification with $h$ only when the prediction uncertainty is below an acceptable threshold."
We use $p(x,y)$ to denote the joint probability distribution of random variables $X$ and $Y$. In our setup, these variables may be either discrete or continuous; however, in the example problems, $Y$ is always discrete. We use $p(y \mid x)$ to represent the conditional probability of a discrete random variable $Y$ given another random variable $X$.
R: *Can you discuss the measurability of the functions, such as terms in (7)?*
A: Yes, this is a valid point. We should state that functions such as $c$ and $\ell$ are assumed to be measurable.
R: *Equation (12): Aren't there work in the literature that also constrains $\phi(c)$?*
A: The constraint in (12) imposes a lower bound on the true positive rate, ${\rm tpr}(c)$, which is defined identically to the coverage $\phi(c)$. In the context of SCOD, this is referred to as the "true positive rate," whereas in selective classification, it is called "coverage."
R: *Line 232 says "the combinatorial complexity of the proof" will be increased if we have more constraints. How do the authors know the proof will still go through? If the authors are convinced, I would at least propose a high-level proof sketch in the appendix.*
A: To be cautious, we will present the extension as a hypothesis for future exploration. The Appendix will include a remark suggesting potential avenues for generalization.
R: *The limitations are addressed to some extent, and the checklist is complete. However, further focus on the computational issues about the proposed model should be highlighted.*
A: We will extend the discussion by explaining that tuning the multipliers needs to be addressed for each problem separately.
---
Rebuttal Comment 1.1:
Title: Thank you & I am concerned
Comment: I would like to thank the authors for their rebuttal.
Although I appreciate the authors going through each of my comments, I would like to kindly state that this rebuttal is not satisfactory to me. None of my comments are addressed in any depth. To give some examples:
- For multi-objective discussion, the authors say "We do not intend to criticize multi-objective problems" but regardless of what the authors intend, it's just an informal and unusual motivation to revise multi-objective programs just because of differences in units.
- More importantly, to my understanding, the known proofs of Neyman-Pearson lemma do not use any infinite-dimensional LP field. But this paper does take that approach. It is still ok if such a result is known out there, but I don't think "We do not see an immediate, straightforward application." is a convincing argument. I am not even sure KKT conditions are common in ILP literature. In general, I was hoping to see more than "We will include a discussion on this topic."
Finally, I would like to strongly recommend not using statements like "mu and lambda tuning is significantly easier". It can still be intractable in many settings. Just because we have less variables to tune should not imply this paper presents a tractable approach.
---
Reply to Comment 1.1.1:
Title: Detailed answers.
Comment: R: Although I appreciate the authors going through each of my comments, I would like to kindly state that this rebuttal is not satisfactory to me. None of my comments are addressed in any depth.
A: Since there was no priority list, we addressed all the reviewer's questions. However, due to the 6,000-character limit, we had to keep our responses concise.
R: For multi-objective discussion, the authors ...
A: The inability to assign the same physical units to individual decisions (and hidden states) often prevents straightforward weighting of multiple errors with different meanings. As a result, constraints on specific errors are commonly used, especially when certain errors need explicit control in specific applications. This approach is standard in multi-criteria decision-making and not unique to the SCOD problem. For example, similar strategies are applied in selective classification, as seen in Sections 2.2 and 2.3, which offer alternatives to the traditional cost-based reject-option classifier [3]. We applied this same strategy to the SCOD problem.
However, please note that our paper's primary focus is not on revising existing SCOD formulations. We use them to demonstrate the versatility and usefulness of our framework.
R: More importantly, to my understanding, the known ...
A: We initially considered using the principle of Lagrange duality to address our ILP problem, as it offers a way to establish optimality conditions that could potentially characterize the solutions. We applied duality to finite domains ${\cal X}$, transforming the ILP into an LP, which gave us some insight into the general solution form. However, extending this approach to an arbitrary domain ${\cal X}$ and functions $R$, $p$, $q$ (with finite integrals) would require a more general duality theorem, and even then, deducing the desired result would be far from straightforward due to the increased complexity of the optimality conditions. Given these challenges and inspired by techniques used in related work (e.g., [9, 16, 17] and others on the Neyman-Pearson problem), we decided to pursue a direct proof instead. This approach provided us with solid insights, leaving only the technical details to be completed. While we do not dismiss the possibility that ILP techniques could yield the same result, potentially with a simpler proof, we believe that this path is not as straightforward as it might seem.
R: Finally, I would like to strongly recommend not using ...
A: We respectfully disagree with the suggestion that optimizing two or three scalars is not "significantly easier" than optimizing an entire function. In all existing examples, tuning these parameters by discretizing their values and performing an exhaustive search, though not necessarily optimal, has proven effective in practice. | Rebuttal 1:
Rebuttal: We thank all reviewers for their efforts and valuable comments. Our responses are submitted separately for each review. | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: The paper presents a framework for binary statistical decision-making (BDM), where decisions are made between two states based on statistical evidence. The authors introduce a constrained optimization problem formulation for BDM, involving integrals of Lebesgue measurable functions, and provide a detailed characterization of the optimal decision strategies. The paper encompasses a wide range of existing and newly proposed BDM problems as specific instances of the presented framework, demonstrating how to derive optimal strategies for these problems. This framework aims to simplify the process of solving both established and novel BDM problems, which are fundamental to many machine learning algorithms.
Strengths: Originality:
The paper introduces a novel and general framework that unifies various binary decision-making problems under a single constrained optimization formulation. This generalization is a significant contribution, as it provides a robust mathematical tool that can be applied to a wide range of existing and newly proposed BDM problems.
Quality:
The paper is rigorous in its theoretical treatment, providing detailed proofs and a clear characterization of the optimal strategies for BDM problems. The use of Lebesgue measurable functions and the generality of the framework allow it to cover both discrete and continuous instances without requiring differentiability of decision and loss functions.
Clarity:
I like it a lot that the authors provided lots of examples in section 2. The paper is well-organized, with clear explanations of the problem formulations and the derivation of optimal strategies. The inclusion of examples from classical statistics and machine learning applications enhances the clarity and relevance of the work. The theoretical results are presented in a step-by-step manner, making the paper accessible to readers with a background in optimization and decision theory.
Significance:
The significance of the paper lies in its potential to impact a broad range of applications in machine learning and statistical decision-making. By providing a unified framework for BDM problems, the paper simplifies the process of deriving optimal strategies, which can be crucial for designing efficient algorithms in areas such as selective classification, out-of-distribution detection, and hypothesis testing.
Weaknesses: Limited Empirical Validation:
The paper primarily focuses on the theoretical aspects of BDM and does not provide empirical validation of the proposed framework. While the theoretical results are strong, it would be beneficial to see experimental evaluations that demonstrate the practical effectiveness of the framework in real-world scenarios.
Computational Complexity:
The proposed optimization problems involve integrals of Lebesgue measurable functions, which may be computationally expensive to solve, especially for high-dimensional data. The paper does not fully address the computational challenges associated with solving these problems, which could be a barrier to practical implementation.
Technical Quality: 3
Clarity: 3
Questions for Authors: Have the authors considered conducting empirical experiments to validate the proposed framework? Demonstrating the practical effectiveness of the optimal strategies in real-world BDM problems could significantly strengthen the paper's contributions.
Handling Unknown Distributions:
How would the framework handle cases where the underlying data distributions are unknown or difficult to estimate? Are there potential extensions or modifications that could make the framework more robust to distributional uncertainty?
Computational Efficiency:
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Title: Authors answers to reviewer's questions
Comment: R: Have the authors considered conducting empirical experiments to validate the proposed framework? ...
A: Our paper focuses solely on providing a mathematical tool for deriving optimal strategies in various BDM problems. Our theorem implies these optimal strategies cannot perform worse than any alternatives, and in the worst case, the improvement might be small. However, there are instances where BDM problems initially addressed with heuristic strategies were later shown to perform significantly better when optimal strategies were identified and empirically validated.
A recent example from the machine learning field is the BDM in the SCOD problem (please, see paragraph 3 of the introduction). Initially solved with the SIRC selection strategy [23], it was later shown to be suboptimal in [16], with empirical evidence supporting this. Similarly, the Neyman-Pearson problem in statistics, one of the earliest BDM examples, offers ample empirical proof that the likelihood ratio outperforms alternative methods for separating two distributions.
R: How would the framework handle cases where the underlying data distributions are unknown or difficult to estimate? Are there potential extensions or modifications that could make the framework more robust to distributional uncertainty?
A: Yes, distribution uncertainty can be addressed by formulating the BDM problem. The SCOD problem is a good example of this. By definition, there is no clear sample of Out-Of-Distribution (OOD) data. However, the optimal strategy shows that modeling the ratio of OOD to In-Distribution (ID) samples is sufficient for making optimal decisions (see Equation 13). Methods for estimating this OOD/ID ratio, such as using an unlabeled mixture of ID and OOD data, have been discussed in [16] and other papers. | null | null | null | null | null | null |
Estimating Generalization Performance Along the Trajectory of Proximal SGD in Robust Regression | Accept (poster) | Summary: The paper explores linear regression with Gaussian design corrupted by noise that has bounded first moments, focusing on the 'high-dimensional' regime where the feature dimensionality $p$ scales proportionally with the number of training data $n$. The authors propose two estimators that, aside from an additive noise term, approximate the out-of-sample error (measured with the square loss) achieved by a general iterative scheme. This scheme includes commonly used solvers like GD, SGD, proximal GD, and proximal SGD, which are used to compute the regularized empirical risk minimizer with a convex, differentiable, and smooth loss function (with Lipschitz-continuous gradients) and a potentially non-smooth regularizer. This framework encompasses robust linear regression with heavy-tailed noise using the Huber loss function. The authors provide non-asymptotic guarantees in probability for finite $n$ (and $p$), demonstrating the asymptotic consistency of their estimators (Theorems 3.6 and 3.7). They also present examples and numerical experiments. Their primary tool is a generalized probabilistic approximation of the generalization error previously discussed in specific settings in the references [3] and [5].
Strengths: The authors have developed a procedure that reliably determines the best stopping time for solvers in contexts where no consistent estimators were previously available, particularly in robust regression with heavy-tailed noise, Huber-like loss functions, and non-smooth penalty terms. This methodology offers an alternative to cross-validation techniques, which are generally inconsistent (e.g., V-fold) or computationally expensive (e.g., leave-one-out). The paper is well-written and includes clear proofs.
Weaknesses: The main results of the paper can be seen as an extension of the findings in [5], applied to a broader context that encompasses the stochastic case and extends beyond the square loss. While the generality of the weaker conditions is appreciated and introduces a different structure, the work is somewhat marginal in its contribution. Although the new methodology can track the generalization performance over training time for various iterative solver schemes of interest, it remains uncertain how this methodology can be used to (provably) "optimally" tune parameters other than the stopping time. This limitation seems notable, as the increased flexibility of the algorithmic schemes—allowing for additional tuning parameters such as mini-batch size and loss-specific parameters—is not accompanied by an enhanced analysis of these tuning parameters.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1) In the case of the square loss and full-batch gradient descent, do the authors recover exactly the same theoretical results in [5]? If not, what are the differences in this case?
2) The authors claim that their procedure can be used to choose the "tuning parameters that achieves the smallest out-of-sample error" (Page 2). While this is clear for the stopping time, can the authors give examples of how their methodology can be used to tune other tuning parameters?
3) While the authors clearly highlight the relevance of their contributions to robust statistics and their goal of tracking solver performance over iteration time, a specific comparison of their methodology and consistency results to other classical methods in some fundamental models, such as ridge regression and lasso regression, would be beneficial. Although Section 1.1 provides a discussion on this topic, it lacks quantitative details, making it unclear how the proposed methods compare in terms of non-asymptotic statistical rates (related to consistency results) and computational efficiency. For example, the computational complexity of the data-dependent estimator the authors consider ($\tilde{r}_t$ on page 6) is not clearly compared to other estimators in basic settings. Can the authors address this?
4) The paper does not seem to make any assumptions on the penalty term, i.e. the function g in (2). This seems surprising. Can the authors elaborate on this?
5) Could the authors comment on the challenges of extending their analysis beyond the proportional regime they consider, where the data dimension is of the same order as the sample size?
****
I have increased my score to 'Accept' post rebuttal.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The paper discusses limitations of the current approach in the conclusions, as a way to motivate follow-up research.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the comments, we provide our response below.
> **Q1:**
In the case of the square loss and full-batch gradient descent, do the authors recover exactly the same theoretical results in [5]? If not, what are the differences in this case?
**A1:**
If the squared loss and full-batch gradient descent are used, our proposed $\tilde{r}_t$ has the same formula as $\hat{r}_t$ in [5]. However, our work allows the noise vector to be heavy-tailed (e.g., with infinite variance), which is significantly different from the Gaussian noise condition in [5].
To handle heavy-tailed noise, we consider a data-fitting loss whose derivative is Lipschitz continuous and bounded (Assumption 3.3). Hence, the squared loss is excluded by Assumption 3.3 of our current paper, so the result of [5] is not formally implied by the theorems of the submission.
> **Q2:**
...can the authors give examples of how their methodology can be used to tune other tuning parameters?
**A2:**
Let's take the L1 regularization parameter $\lambda$ as an example. In proximal SGD, $\lambda$ appears in the soft-thresholding parameter of Example 2.4. The algorithm's iterates, $\hat{b}^t$, depend on the choice of $\lambda$, so we can write them as $\hat{b}^t(\lambda)$. For each candidate $\lambda_k \in \{\lambda_1, \ldots, \lambda_K\}$ (a finite grid of tuning parameters), we compute the estimated risk $\tilde{r}_t(\lambda_k)$. This estimate serves as a criterion to choose the pair $(t^*, k^*)$ that minimizes $\tilde{r}_t(\lambda_k)$ over $t$ and $k$. As long as the grid of $\lambda$ is finite, the risk estimate for $\hat{b}^t(\lambda_k)$ is consistent simultaneously across all parameters in the grid, so the selected $(t^*, k^*)$ leads to the smallest generalization error over the grid, up to a vanishing error term.
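A minimal sketch of this grid selection (illustrative only: ISTA on a toy Lasso problem, with the oracle risk $||\hat b^t(\lambda) - b^*||^2$ standing in for the estimator $\tilde r_t(\lambda)$, which approximates it without knowing $b^*$; all names and parameter values are hypothetical):

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1 (the soft-thresholding of Example 2.4)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def ista_path(X, y, lam, eta, T):
    # proximal gradient descent (ISTA) for the Lasso; returns iterates b^1, ..., b^T
    n, p = X.shape
    b = np.zeros(p)
    path = []
    for _ in range(T):
        b = soft_threshold(b + eta * X.T @ (y - X @ b) / n, eta * lam)
        path.append(b.copy())
    return path

# toy selection of (t*, k*) over a finite grid of lambdas
rng = np.random.default_rng(0)
n, p, T = 200, 50, 30
b_star = np.zeros(p); b_star[:5] = 1.0
X = rng.standard_normal((n, p))
y = X @ b_star + rng.standard_normal(n)

grid = [0.01, 0.1, 1.0]
# stand-in for tilde r_t(lambda_k): the oracle risk ||b^t - b*||^2, which the
# paper's estimator tracks without knowledge of b*
risk = np.array([[np.sum((b - b_star) ** 2) for b in ista_path(X, y, lam, 0.4, T)]
                 for lam in grid])
k_star, t_star = np.unravel_index(np.argmin(risk), risk.shape)
print(f"selected lambda={grid[k_star]}, stopping time t={t_star + 1}")
```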
> **Q3:**
... a specific comparison of their methodology and consistency results to other classical methods ... would be beneficial.
**A3:**
We are not entirely sure we understand the question, but we try to provide some useful pointers below. If they do not answer the question, we would be happy to provide more.
1. Comparison of Methodology.
Methods for estimating the generalization performance of ridge and lasso regression focus on the minimizer (denoted as $\hat{b}$ in eq. (2)) of the corresponding optimization problem. Such estimates, say for the Lasso, are studied in [1, 3, 9, 22] (citations from the submission PDF). These risk estimates are only valid for the final minimizer $\hat b$ in eq. (2).
In particular, such estimators are not applicable to estimate the risk of intermediate iterates of algorithms.
For Lasso and Ridge, the risk estimates take the form of $\frac{||y - X \hat{b}||^2/n}{(1-df/n)^2}$, which can be viewed as an adjusted training error, where the degrees of freedom $df$ is, for the Lasso, the number of nonzero coefficients of $\hat b$.
In contrast, our proposed risk estimator provides a consistent estimate of the risk of $\hat{b}^t$ at each $t$.
The proposed risk estimate takes a very different form (weighted average of residual and previous gradients in eq. (11)) than the simpler ones available for Lasso/Ridge that can be simply described as an adjusted training error.
2. Computational comparison.
As we mentioned in our response to Question 1 from Reviewer 3EXM, the computational complexity of calculating our estimate $\tilde{r}_t$ for all $t\in[T]$ is $O(npT^6)$. The computational complexity of the Lasso estimate is $O(np^2 + p^3)$ (see, for example, [1]). Thus, for a fixed $T$, our risk estimate has a lower complexity than that of Lasso.
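For concreteness, the df-adjusted Lasso risk estimate described in point 1 can be sketched as follows (illustrative code with hypothetical parameter choices, not from the submission; the Lasso minimizer is approximated by running ISTA to convergence, and the "true" out-of-sample error uses identity covariance):

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(0)
n, p, sigma = 500, 100, 1.0
b_star = np.zeros(p); b_star[:5] = 1.0
X = rng.standard_normal((n, p))           # identity covariance for simplicity
y = X @ b_star + sigma * rng.standard_normal(n)

# Lasso via ISTA, run long enough to approximate the minimizer in eq. (2)
lam, eta = 0.15, 0.4
b = np.zeros(p)
for _ in range(500):
    b = soft_threshold(b + eta * X.T @ (y - X @ b) / n, eta * lam)

df = np.count_nonzero(b)                  # Lasso degrees of freedom: # nonzeros
# adjusted training error (||y - Xb||^2 / n) / (1 - df/n)^2
risk_est = (np.sum((y - X @ b) ** 2) / n) / (1.0 - df / n) ** 2
oracle = sigma ** 2 + np.sum((b - b_star) ** 2)   # true out-of-sample error
print(df, risk_est, oracle)
```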
> (...) in terms of non-asymptotic statistical rates (related to consistency results)
The rate of convergence in Theorem 3.6 depends on the tails of the entries of the noise vector.
If the variance of the noise is finite then the right-hand side of (12) in Theorem 3.6 reduces to $C/(\varepsilon\sqrt n)$ (where $C$ has the same dependence as the constant in the numerator of (12)). Thus, if the variance is finite, we recover an error term of order $O_P(1/\sqrt n)$, same as the best known rate for the difference between the risk and its estimate ([3] from the submission PDF).
Now if the noise is heavy-tailed with infinite variance, the rate of convergence of $\min\{1,||\varepsilon||/n\}$ in the right-hand side of (12) will depend on which moments of the noise are finite (with a finite moment of order one, (12) converges to 0; if the moment of order $1+\delta$ is finite, explicit rates can be obtained).
> **Q4:**
The paper does not seem to make any assumptions on the penalty term, i.e. the function g in (2). Can the authors elaborate on this?
**A4:**
Yes, we did not make explicit assumptions about the penalty term. Instead, we focus on the algorithm iterates that have the form of Eq. (4). The only requirement on the functions $\phi_t,\psi$ are given in Assumption 3.3. As long as the regression problem can be solved by algorithms similar to Eq. (4), our proposed risk estimate can be used to track the generalization error of these algorithms.
For instance, for any convex penalty function $g$, consider the corresponding proximal SGD algorithm in Example 2.4, with the soft-thresholding
replaced with the proximal of the penalty $g$. Since the proximal of $g$ is Lipschitz for any convex $g$, Assumption 3.3 is satisfied.
No further assumption is needed on the penalty $g$ except convexity.
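As an illustrative numerical check (not from the submission), the 1-Lipschitz property of the soft-thresholding map, i.e. the proximal operator of the L1 penalty, can be verified empirically:

```python
import numpy as np

def soft_threshold(v, tau):
    # proximal operator of tau * ||.||_1: the map used in Example 2.4
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(1)
tau = 0.7
ratios = []
for _ in range(1000):
    u, v = rng.standard_normal(20), rng.standard_normal(20)
    num = np.linalg.norm(soft_threshold(u, tau) - soft_threshold(v, tau))
    den = np.linalg.norm(u - v)
    ratios.append(num / den)
print(max(ratios))  # never exceeds 1: the map is 1-Lipschitz (Assumption 3.3)
```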
> **Q5:**
Could the authors comment on the challenges of extending their analysis beyond the proportional regime they consider, where the data dimension is of the same order as the sample size?
**A5:**
The consistency of the proposed estimate $\hat r_t$ requires the regime where the ratio $p/n \le \gamma$, because in the proof (e.g., the proof of Lemma D.5) we need to bound the finite moments of $||X||_{op}/\sqrt{n}$ by a constant. This is not possible if $p \gg n$, as $||X||_{op}$ is known to be of order $\sqrt n + \sqrt p$ [11].
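As an illustrative numerical check of this scaling (standard Gaussian design; not from the submission):

```python
import numpy as np

# the largest singular value of an n x p standard Gaussian matrix concentrates
# around sqrt(n) + sqrt(p), so ||X||_op / sqrt(n) stays bounded only if p = O(n)
rng = np.random.default_rng(0)
ratios = {}
for n, p in [(400, 100), (400, 400), (100, 1600)]:
    X = rng.standard_normal((n, p))
    ratios[(n, p)] = np.linalg.norm(X, 2) / (np.sqrt(n) + np.sqrt(p))
print(ratios)  # each ratio is close to 1
```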
---
[1] Efron, Bradley, et al. "Least angle regression." (2004): 407-499.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' response and maintain a positive outlook about this work. As a result of the authors' reply, I am raising my evaluation to Accept.
---
Reply to Comment 1.1.1:
Comment: Thanks for your final comments and your work throughout the process. | Summary: This paper focuses on deriving consistent estimators of the generalization error for robust regression with Gaussian design and heavy tailed noise along the SGD trajectory. They propose two estimators, $\hat r$ (which requires knowledge of the covariance) and $\tilde r$ (which does not) and prove consistency of both estimators when the number of samples is at least linear in the dimension. They support this with experiments which demonstrate that both $\hat r$ and $\tilde r$ accurately capture the generalization error in various settings.
Strengths: The paper is overall very well written, including the overall motivation, the related work, and the technical ideas behind Theorems 3.6 and 3.7.
Weaknesses: - The numerators in Theorems 3.6 and 3.7 include unspecified constants $C(T,\ldots)$. From the appendix, it appears that $C(T,\ldots)$ scales like $T^T$, so that you can only run for $T \approx \log(n)/\log\log(n)$ steps before the bound is vacuous, which is severely restricting. I'm concerned it may not be possible for SGD to significantly decrease the loss in this number of steps, rendering the generalization estimate vacuous.
- While some intuition for $\boldsymbol{W}$ (eq 8) is given in Section 3.1, none is given for $\widehat{\mathbf{A}}$ (eq 9) or $\widehat{\mathbf{K}}$ except that they are used to construct the weights for Theorem 3.7. This makes the notation in sections 3.2 and 3.3 somewhat difficult to follow.
Minor Points:
- line 29: starts -> start
- line 118: maybe provide some intuition here for $\phi_t, \psi$ rather than deferring it to sections 2.1/2.2 (e.g. just $\phi_t$ is a proximal operator)
- line 170: all -> both
- line 207: it is strange to define $\hat r$ in terms of $\mathbf{W}$ and $\tilde r$ in terms of $\widehat{\mathbf{W}}$. Perhaps use $\widetilde{\mathbf{W}}$ instead of $\widehat{\mathbf{W}}$?
- line 269: the three ... reveal
Technical Quality: 3
Clarity: 3
Questions for Authors: - What are the dependencies on $T,\eta_{max}$ in the numerators of Theorems 3.6 and 3.7?
- Is the reason that $W$ must be computed recursively simply to invert $\mathcal{M}$ block by block? I don't see how to connect the unrolling in section 3.1 to the Kronecker expressions in 3.2.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately addressed the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: > **Q1:**
(...) dependencies on $T, \eta_{max}$ in numerators of Theorems 3.6, 3.7?
**A1:**
The dependence on $T$ is $T^T$ and the dependence on $\eta_{\text{max}}$ is $\eta_{\text{max}}^{T}$, as can be seen in Lemma D.4. We do not expect this bound to be tight. Simulation results confirm that the proposed risk estimate is still accurate for all iterations, even when $T > n$, at least in the simulation settings that we tried.
> **Q2:**
Is the reason that $\hat W$ must be computed recursively simply to invert $\mathcal{M}$ ? (...) how to connect the unrolling in section 3.1 to the Kronecker expressions in 3.2.
**A2:**
Yes, the recursive computation of $\hat W$ is related to the inversion of the triangular matrix $\mathcal{M}$.
To expand on this, as intuited in Section 3.1, unrolling the derivatives by the chain rule brings a matrix product of previous
Jacobian matrices from the current iteration to the first. The connection to $\mathcal{M}^{-1}$ is easier to see
from the identity
$
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\\\
-p_1 & 1 & 0 & \cdots & 0 & 0 \\\\
0 & -p_2 & 1 & \cdots & 0 & 0 \\\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\\\
0 & 0 & 0 & \cdots & 1 & 0 \\\\
0 & 0 & 0 & \cdots & -p_{T-1} & 1
\end{pmatrix}^{-1}
$
$=
\begin{pmatrix}
1 & 0 & 0 & \cdots & 0 & 0 \\\\
p_1 & 1 & 0 & \cdots & 0 & 0 \\\\
p_1 p_2 & p_2 & 1 & \cdots & 0 & 0 \\\\
p_1 p_2 p_3 & p_2 p_3 & p_3 & \ddots & \vdots & 0 \\\\
\vdots & \vdots & \vdots & \ddots & 1 & 0 \\\\
p_1 p_2 \cdots p_{T-1} & p_2 p_3 \cdots p_{T-1} & p_3 p_4 \cdots p_{T-1} & \cdots & p_{T-1} & 1
\end{pmatrix}.
$
On the left, the matrix has ones on the diagonal and $-p_1,\ldots,-p_{T-1}$ just below the diagonal, like our $\mathcal M$
but with blocks of size 1. On the right, the matrix is similar to the product of Jacobian matrices brought by unrolling the derivatives
by the chain rule. With this in mind, the matrix $W$ in equation (8) is obtained by
continuing to unroll the derivatives by the chain rule for $t=3,4,...$ as for $t=2,3$ in Section 3.1 (see also equation (19)).
Using $\mathcal M^{-1}$ is useful to manipulate a compact notation with no product over $T$ matrices.
However, for the implementation the matrices are computed recursively by leveraging the above structure
(for instance, once the $(t-1)$-th row of the inverse has been computed, most of the $t$-th row is obtained by multiplying the $(t-1)$-th row by $p_{t-1}$).
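The recursive computation can be sketched as follows (illustrative code with scalar blocks in place of the Jacobian blocks of $\mathcal M$; names are hypothetical):

```python
import numpy as np

def bidiagonal_inverse(p):
    # Inverse of the T x T lower bidiagonal matrix with ones on the diagonal
    # and -p_1, ..., -p_{T-1} just below it. Each new row of the inverse is the
    # previous row scaled by p_{t-1}, followed by a 1 on the diagonal, mirroring
    # the recursive computation described above.
    T = len(p) + 1
    inv = np.eye(T)
    for t in range(1, T):
        inv[t, :t] = p[t - 1] * inv[t - 1, :t]
    return inv

rng = np.random.default_rng(2)
p = rng.uniform(0.5, 1.5, size=5)        # p_1, ..., p_{T-1}
M = np.eye(6) - np.diag(p, k=-1)         # ones on the diagonal, -p_t below it
inv = bidiagonal_inverse(p)
print(np.allclose(inv, np.linalg.inv(M)))  # True
```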
> **Weakness 1:**
Numerators in Theorems 3.6-3.7 include unspecified $C(T,...)$. From the appendix, it appears that $C(T,...)$ scales like $T^T$, so that you can only run for
$T \approx \log(n)/\log\log(n)$ steps before the bound is vacuous (...). I'm concerned it may not be possible for SGD to significantly decrease the loss in this number of steps, rendering the generalization estimate vacuous.
**Response:**
Yes, our current analysis yields constants of order $T^T$ in the bounds. We expect this dependence on $T$ to be an artefact of the proof. As illustrated in Figures 1-2 and further supported by additional simulations with $T > n$, the simulations suggest that the generalization error estimate is accurate for all iterations, even as $T$ diverges.
Note that even though the theory of the submission incurs these constants of order $T^T$, this still allows us to pinpoint
what the risk estimate should be for all $T$.
While the theory only guarantees consistency of the risk estimate for $T \le \log(n)/\log\log(n)$, revealing the form of the estimator is still useful. Once its definition has been identified:
- simulation studies such as those provided in the submission allow us to check the empirical validity of the estimate beyond $T \le \log(n)/\log\log(n)$, and
- it will be easier for future work to improve this dependence on $T$, now that the form the risk estimate should take has been pinned down.
It is common for new estimators to be proposed first, before further works refine the bounds on these estimators.
Improving the dependence on $T$ appears difficult and possibly out of reach of current tools,
even for the well-studied Approximate Message Passing (AMP) algorithms (which are also typically studied in the proportional regime of interest here). The papers [1, 2] feature, for instance, the same dependence
$T \le \log(n)/\log\log(n)$ for approximating the risk of AMP.
The 2024 preprint [3] offers the latest advances on the dependence on $T$ in the bounds satisfied by AMP.
It allows $T\asymp \text{poly}(n)$ to control certain AMP-related quantities, although for the risk [3, equations (16)-(17)] the dependence $t=O(\log n)$ is required, which is still logarithmic. This suggests that advances on this front are possible, at least for isotropic design with separable loss and regularizer such as those studied in [3]: the Lasso, or robust M-estimation with no regularizer.
Since these latest advances in [3] are obtained for specific estimates (Lasso or Robust M-estimation with no regularizer),
it may be possible to follow a similar strategy and improve our bounds for specific examples of iterative algorithms closer to AMP
or featuring only separable losses and penalty. But we believe following such a strategy for specific examples would be
out of scope for the current submission, which tackles a general framework allowing iterations of the form
(4) with little restriction on the nonlinear functions $\phi_t,\psi$ beyond being Lipschitz (with $\psi$ also bounded).
**Minor points**: We agree and will implement the suggestion regarding line 207 in the camera-ready version. Thanks!
* * *
[1]: "Finite Sample Analysis of Approximate Message Passing Algorithms".
Rush and Venkataramanan, 2016.
[2]: "An Asymptotic Rate for the LASSO Loss". Rush, 2020.
[3]: "A non-asymptotic distributional theory of approximate message passing for sparse and robust regression".
Li and Wei, 2024.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for addressing my questions and concerns, but I prefer to keep my current score. I am skeptical of the “revealing the form of the estimator” claim, as an estimator that works for small T might require additional correction terms to work for large T. I understand that fixing this issue may be challenging and can reasonably be left to future work, but it would be helpful to include a discussion of C(T) and the disconnect between the theory and the experiments somewhere below Theorem 3.6.
---
Reply to Comment 1.1.1:
Comment: Thanks for reading the rebuttal and the additional comment. We agree to include, in the camera-ready version, a discussion of C(T), the disconnect between the theory and simulations somewhere below Theorem 3.6, as well as the pointers to the literature on recent advances to lower dependence on T (including Li and Wei, 2024). | Summary: This manuscript aims at finding computational efficient measure of generalization performance for high-dimensional robust regression with regularization. In this scenario, when loss function is not quadratic, estimating the out-of-sample error $\| \Sigma^{1/2}(b_t - b^* ) \|^2$ (where $\Sigma$ is the covariance matrix, $b_t$ is the $t$-iteration variable and $b^*$ is the ground truth) for iterative methods like GD, SGD or proximal GD can be very difficult and is not well-explored in previous literature. To overcome this issue, the authors propose a new estimator for the out-of-sample error for a general class of gradient methods and show that the difference converges to zero in probability when sample size goes to infinity.
Strengths: This paper is generally quite well-written. The proofs and results appear to be reasonable. Empirical evidence confirms that the proposed estimator is indeed effective and accurate.
Weaknesses: Despite being more general, this paper seems to constitute *limited* progress for the community. When comparing it with the work by Bellec and Tan, it is not hard to notice that the estimator $\hat{r}_t$ in Theorem 3.6 is a trivial extension of Bellec and Tan's estimator from the square loss to convex losses. The exact form of the proposed estimator and its guarantee in Theorem 3.6 are very similar to the result of the mentioned paper. Also, the proof looks quite straightforward, and extending from the square loss to convex losses does not rely on technical innovations. In this regard, I believe this paper falls short of getting accepted to NeurIPS, unless the authors can provide more convincing arguments for how this result is *fundamentally* different from previous works.
Technical Quality: 4
Clarity: 4
Questions for Authors: There are no further questions.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: This work is purely theoretical and has no negative influences.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks for the comments and the opportunity to clarify our contributions.
> **Weakness:**
Despite being more general, this paper seems to constitute limited progress towards the community. When comparing it with the work by Bellec and Tan [5], it is not hard to notice the estimator in Theorem 3.6 is a trivial extension of Bellec and Tan's estimator from square loss to convex loss. The exact form of the proposed estimator and its guarantee in Theorem 3.6 are very similar to the result of the mentioned paper. (...)
## **Response:**
While some notation and techniques (in particular probabilistic lemmas based on Gaussian integration by parts) are reused compared to the previous result of [5] for the square loss (we did not try to hide this with new notation, and we did not reinvent the wheel unless necessary), let us point out some significant technical differences.
- In terms of applicability, estimators applicable to SGD (or proximal SGD) and robust errors are a significant step forward compared to the full-batch and square loss setting of [5].
- For the square loss, residuals and gradients are the same. For robust loss functions or the iterative algorithms studied here,
residuals and gradients are different; this defines a different structure which can be seen in the estimators (involving both the residuals
and the gradients), as well as in the weight matrices $\hat A, \hat K, W$. For the square loss only two weight matrices appear, and it was a surprise to us to find it necessary to introduce three matrices to analyze the problem for proximal SGD or robust losses.
- [5] studies Gaussian noise. We allow heavy-tailed noise with infinite variance. This requires different tools to control the noise,
and the resulting rate is also different, with the rate explicitly depending on the noise through the quantity $\mathbb{E}[\min\\\{1, ||\varepsilon||/n\\\}]$
appearing in Theorem 3.6.
**More importantly, directly generalizing the approach in [5] fails for SGD.**
This failure for SGD resides in the difference between the matrices $\hat K$ and $\tilde K$
in Equations (26) and (27). Generalizing the approach of [5] leads our research to the
matrix $\tilde K$ in (26). In order to build an estimate of the weight matrix $W$ of interest
(as discussed in the submission, $W$ cannot be used directly as $\Sigma$ is typically unknown),
one wishes to invert $\tilde K$ in the approximation $\tilde A \approx \tilde K W$.
This inversion fails for SGD for small (but still very realistic) batch sizes of order $0.1n$.
The matrix $\tilde K$ is lower triangular, and the reason for the lack of invertibility
of $\tilde K$ can be seen in the diagonal terms equal to $\text{Tr}[S_tD_t]$ in (26),
where $S_t \in \\\{0,1\\\}^{n \times n}$ is the diagonal matrix with 1 in position $ii$
if and only if the $i$-th observation is used in the $t$-th batch.
This diagonal element of $\tilde K$ can easily be small (or even 0) for small batches,
if the batch only contains observations such that $(D_t)_{ii}$ is 0 or small.
For SGD and proximal SGD, we solved this failure of the invertibility of $\tilde K$ by using out-of-batch
samples in the construction of $\hat K$ and $\hat A$, in order to avoid $S_t$ in the diagonal elements
of $\hat K$ in equation (27). This is the key to making these estimators work for SGD and proximal SGD,
and this use of out-of-batch samples could not be anticipated by reading or generalizing [5] (which only tackles the square loss with full-batch gradients).
This phenomenon is seen in generic SGD simulations, for instance with the Huber loss and
$n, p, T = 4000, 1000, 20$ and batch_size equal to $n/10$. From iteration 10 to 19, the diagonal elements of $\tilde K$ are close to 0, while using out-of-batch samples in $\hat K$ provides diagonal values bounded away from 0, and thus numerically stable invertibility of the triangular matrix $\hat K$:
| $t$ | $\tilde{K}_{tt}/n$ | $\hat{K}_{tt}/n$ |
|---:|----:|----:|
| 10 | 0.04 | 0.4 |
| 11 | 0.05 | 0.42 |
| 12 | 0.04 | 0.43 |
| 13 | 0.04 | 0.44 |
| 14 | 0.04 | 0.46 |
| 15 | 0.05 | 0.47 |
| 16 | 0.05 | 0.49 |
| 17 | 0.05 | 0.5 |
| 18 | 0.05 | 0.51 |
| 19 | 0.06 | 0.52 |
While using $\tilde K,\tilde A$ (without out-of-batch samples) is successful for estimating the risk for large batch sizes (above $0.3n$), it quickly deteriorates as the batch size decreases: in the same setting as above, with 100 repetitions, the true risk and the risk estimates using $\tilde K$ and $\hat K$ are
| $t$ | True risk | Estimate using $W$ | Using $\hat A \hat K^{-1}$ | Using $\tilde A \tilde K^{-1}$ |
|:---|---:|----:|----:|----:|
| 10 | 4.67 | 4.66 | 4.66 | 4.4 |
| 11 | 4.29 | 4.29 | 4.29 | 3.98 |
| 12 | 3.94 | 3.94 | 3.94 | 3.64 |
| 13 | 3.63 | 3.63 | 3.63 | 3.33 |
| 14 | 3.35 | 3.35 | 3.35 | 3.04 |
| 15 | 3.09 | 3.1 | 3.1 | 2.77 |
| 16 | 2.86 | 2.87 | 2.87 | 2.54 |
| 17 | 2.65 | 2.66 | 2.66 | 2.34 |
| 18 | 2.46 | 2.47 | 2.47 | 2.17 |
| 19 | 2.29 | 2.3 | 2.3 | 2 |
| 20 | 2.13 | 2.14 | 2.14 | 1.83 |
The direct generalization from [5], that does not leverage out-of-batch samples (right column), is inconsistent while our proposed estimate leveraging out-of-batch samples is consistent.
Note that beyond simulations, it wasn't clear at first that using out-of-batch samples would provably work. After significant trial and error we eventually found the serendipitous combination (cf. line 440) of the probabilistic identities that grants the approximation $\hat A\approx\hat K W$ in the row space of $F$ in eq (35).
**Thanks for raising this point.** Highlighting and explaining this significant departure from [5] on the use of out-of-batch samples is something that was admittedly overlooked in the main text of the initial submission, and we will use the extra page available for the camera ready version to clearly explain this.
**We kindly suggest to reconsider the review rating in light of this.** We would be happy to provide further clarifications if needed.
---
Rebuttal Comment 1.1:
Comment: I thank the reviewers for addressing my concerns. The detailed explanations regarding the technical difficulties and the empirical evidence provided are very helpful. I would like to slightly increase my score; however, I am still very dubious about the contribution and novelty of this work. I took some time to re-read the proof and the authors' response. While I acknowledge that directly generalizing [5] to SGD and non-square losses is not possible, I still question whether this technique is truly innovative. The invertibility of $\tilde K$ seems to be solved by easy tricks, and the rest of the proof still looks very similar to [5]. As a result, I change my score to 4.
---
Rebuttal 2:
Comment: Thanks for reading the rebuttal and taking the time to re-read the proof. One challenge that arose when attempting to leverage out-of-batch samples is that we are now manipulating quantities that are not involved in the iterative algorithm or its derivatives. This leads to some difficulty in obtaining bounds such as (35) around line 444, where the approximation between $\hat A$ and $\hat KW$ (involving out-of-bag samples for the invertibility of $\hat K$) holds in the row space of $F$ (the stochastic gradient matrix, which does not involve out-of-bag samples). What we would like to point out as a final remark is that it is subjective to argue about the challenges/difficulty of a proof once the final product is finished, since the final product does not showcase the challenges and difficulty necessary to produce it.
In any case, many thanks for your comments and work on this manuscript. We believe that the case of (proximal) SGD with robust losses is important for the NeurIPS readership and community. We will emphasize the role and necessity of using out-of-bag samples, and how this departs from [5], in the camera-ready version (should the paper be accepted). | Summary:
This paper examines the generalization performance of iterates produced by Gradient Descent (GD), Stochastic Gradient Descent (SGD), and their proximal variants in high-dimensional robust regression problems. The paper introduces estimators that accurately track the generalization error of the iterates along the trajectory of the iterative algorithms. These estimators are proven to be consistent under certain conditions and are illustrated through several examples, including Huber regression, pseudo-Huber regression, and their penalized variants with non-smooth regularizers.
Strengths:
1. **Innovative Methodology**: The introduction of estimators that track the generalization error of iterates is novel and provides a significant contribution to the field of robust regression.
2. **Theoretical Rigor**: The paper provides a thorough theoretical foundation for the proposed estimators, including consistency proofs and detailed mathematical derivations.
3. **Practical Relevance**: The approach is applicable to a variety of robust regression problems, including those with heavy-tailed errors and non-smooth regularizers.
4. **Empirical Validation**: Extensive simulations demonstrate the effectiveness of the proposed estimators in tracking the generalization error and determining the optimal stopping iteration.
Weaknesses:
1. **Computational Complexity**: The proposed estimators involve complex calculations that may be computationally intensive, especially for large datasets. More discussion on computational efficiency and scalability is needed.
2. **Generality**: The paper focuses on specific types of robust regression problems. Extending the methodology to a broader range of regression problems would increase its impact.
3. **Comparison with Existing Methods**: The paper provides limited empirical comparisons with existing state-of-the-art methods for estimating generalization error. More comparative analysis would strengthen the validity of the proposed approach.
4. **Practical Guidelines**: While the theoretical results are robust, practical guidelines for implementing the estimators in real-world scenarios are not sufficiently detailed.
Technical Quality: 3
Clarity: 3
Questions for Authors:
1. **Computational Complexity**:
- Could you provide more details on the computational complexity of the proposed estimators? How do they scale with increasing dataset size and dimensionality?
2. **Generality**:
- The paper focuses on specific types of robust regression problems. Are there any challenges in extending the methodology to other types of regression problems, such as those with different types of regularizers or loss functions?
3. **Comparison with Existing Methods**:
- How do the proposed estimators compare empirically with existing methods for estimating generalization error? Are there scenarios where your approach significantly outperforms others?
4. **Practical Implementation**:
- Can you provide practical guidelines or heuristics for implementing the proposed estimators in real-world scenarios? What are the key considerations practitioners should keep in mind?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: no
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the insightful and encouraging feedback. Here, we respond to your comments point by point.
> **Q1**: Could you provide more details on the computational complexity of the proposed estimators? How do they scale with increasing dataset size and dimensionality?
**A1:**
We first note that our proposed risk estimates, $ \hat{r}_t $ and $ \tilde{r}_t $, only require computing the iterates $ \hat{b}^t $ and the weight matrices $ W $ and $ \hat{W} $. Since $ W $ is a $ T \times T $ lower triangular matrix, and the computational complexity of each entry of $ W $ is $ O(npT) $ using the Hutchinson trace estimator, the total computational complexity of $ W $ is $ O(npT^3) $.
Similarly, for the computation of $ \hat{W} = \hat{K}^{-1} \hat{A} $, the computation cost of both $ \hat{A} \in \mathbb{R}^{T \times T} $ and $ \hat{K} \in \mathbb{R}^{T \times T} $ is $ O(npT^3) $. Thus, the overall computational complexity of $ \hat{W} $ is $ O(npT^6) $, with $T^3$ coming from inverting the triangular matrix $\hat{K}$. (Note that in practice, for numerical stability we do not compute the inverse directly, but instead solve the corresponding linear system).
Overall, the implementation (provided in the supplementary material) for separable loss/penalty to compute $ \hat{b}^t $ and the weight matrix avoids any operation that would be $O(\min(n,p)^3)$, such as multiplying two matrices of sizes larger than $\min(n,p)\times \min(n,p)$, for instance computing the full Gram matrix $X^TX$. The implementation only performs matrix-vector products with matrices of size smaller than $\max(n,p) \times \max(n,p)$, or multiplication of a dense matrix by a diagonal matrix, both of sizes smaller than $\max(n,p) \times \max(n,p)$. It never incurs an operation with complexity larger than $\max(n,p)^2$ (ignoring here the dependence on $T$). This makes it possible to run the implementation even on a laptop for $n$ and $p$ both equal to 10,000.
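For readers unfamiliar with the Hutchinson trace estimator mentioned above, here is a minimal NumPy sketch of the idea: the trace of a matrix is estimated using only matrix-vector products with random Rademacher probes, never forming the matrix explicitly. The function names, probe count, and sanity-check matrix are illustrative, not the paper's implementation.

```python
import numpy as np

def hutchinson_trace(matvec, dim, n_probes=2000, rng=None):
    # Estimate trace(A) from matrix-vector products only:
    # E[z^T A z] = tr(A) when z has i.i.d. Rademacher entries.
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_probes):
        z = rng.choice([-1.0, 1.0], size=dim)
        total += z @ matvec(z)
    return total / n_probes

# Sanity check on a small explicit matrix (illustrative only).
A = np.array([[2.0, 1.0], [1.0, 3.0]])
t = hutchinson_trace(lambda v: A @ v, dim=2, rng=0)
```

In the paper's setting, `matvec` would be implemented with the matrix-vector products described above, so each probe costs $O(np)$ rather than forming any large matrix.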
> **Q2**: The paper focuses on specific types of robust regression problems. Are there any challenges in extending the methodology to other types of regression problems, such as those with different types of regularizers or loss functions?
**A2:**
Our analysis focuses on the performance of the iterates produced by SGD or proximal SGD methods.
This includes any gradient-Lipschitz loss function (for instance Huber) with a non-smooth penalty as illustrated in the simulations.
If the regression problem has both a non-smooth data-fitting loss and a non-smooth penalty, for instance
$$
\hat{b} = \arg\min_{b \in \mathbb{R}^p} ||y - Xb||_1 + \lambda ||b||_1,
$$
we expect that one cannot use the analysis and algorithms studied here (proximal GD or proximal SGD) due to the non-differentiability of the data-fitting loss. For such optimization problems, other primal-dual algorithms are needed, such as the alternating direction method of multipliers (ADMM) and the Chambolle-Pock algorithm. However, these algorithms have a different structure, and we expect for these algorithms a significantly different result than our Lemma B.1 for instance. Because of the different structure, we expect these algorithms to require different analysis and risk estimates.
We leave this as an interesting direction for future work.
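To make the covered setting concrete, here is a minimal NumPy sketch of proximal gradient descent for a gradient-Lipschitz loss (Huber) with a non-smooth $\ell_1$ penalty, i.e., the smooth-loss/non-smooth-penalty case discussed above. The step size, synthetic data, and function names are illustrative assumptions, not the paper's code.

```python
import numpy as np

def huber_grad(r, delta=1.0):
    # Derivative of the Huber loss: identity near zero, clipped beyond delta.
    return np.clip(r, -delta, delta)

def soft_threshold(v, tau):
    # Proximal operator of tau * ||.||_1 (handles the non-smooth penalty).
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def prox_gd(X, y, lam, step, n_iter=300):
    # Proximal GD iterates b^t for Huber loss + L1 penalty.
    n, p = X.shape
    b = np.zeros(p)
    iterates = [b.copy()]
    for _ in range(n_iter):
        grad = X.T @ huber_grad(X @ b - y) / n
        b = soft_threshold(b - step * grad, step * lam)
        iterates.append(b.copy())
    return iterates

# Synthetic sparse regression problem (illustrative only).
rng = np.random.default_rng(0)
X = rng.standard_normal((50, 20))
beta_true = np.concatenate([np.ones(3), np.zeros(17)])
y = X @ beta_true + 0.1 * rng.standard_normal(50)
iterates = prox_gd(X, y, lam=0.05, step=0.3)
```

A problem with a non-smooth data-fitting loss (such as the $\ell_1$ loss above) would break the `huber_grad` step, which is exactly why ADMM-type algorithms would be needed there.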
> **Q3**:
How do the proposed estimators compare empirically with existing methods for estimating generalization error? Are there scenarios where your approach significantly outperforms others?
**A3:**
We are not aware of any existing methods that can estimate the generalization error of the iterates of proximal SGD algorithms in our setting of the proportional regime ($ n\asymp p $). In this regime, cross-validation with a finite number of folds is known to be inconsistent, for instance [2, Figure 1], and a diverging number of folds would be impractical computationally. Thus we did not provide a direct comparison (though we would be happy to provide empirical comparisons if reviewers suggest existing competing risk estimates that we missed). One related work is by Luo et al. (2023), which proposed an estimate for the cross-validation error of the iterates of SGD algorithms. However, their method places many restrictions on the loss function and does not work for high-dimensional regression settings with $p > n$.
> **Q4:**
Can you provide practical guidelines or heuristics for implementing the proposed estimators in real-world scenarios? What are the key considerations practitioners should keep in mind?
**A4:**
Thank you for this question.
We provide the following practical guidelines for implementing the proposed estimators:
1. If practitioners are solving a regression problem with a smooth loss function and are using SGD or proximal SGD algorithms, they can use the proposed estimator $\hat{r}_t$ to estimate the generalization error of the iterates if the covariance matrix of the features is known (either because lots of additional unlabeled data are available, making estimating $\Sigma$ possible, or because the practitioner samples the design $X$ themselves). Otherwise, if the covariance is unknown, use $\tilde{r}_t$ which uses out-of-bag samples to maintain good performance (cf. answer to Reviewer 2 for a discussion on the use of out-of-bag samples for SGD).
2. Once $\tilde{r}_t$ is computed, plot the estimated generalization error of the iterates as a function of the number of iterations. Use this plot to choose the stopping time that achieves the smallest out-of-sample error, or as an additional diagnostic tool for studying the convergence of the algorithm at the population level.
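The second guideline can be sketched in a few lines; here `risk_estimates` stands in for the values $\hat{r}_t$ or $\tilde{r}_t$ computed by the practitioner, and the U-shaped curve is a purely synthetic illustration.

```python
import numpy as np

def select_stopping_time(iterates, risk_estimates):
    # Return the iteration index minimizing the estimated risk, and its iterate.
    t_star = int(np.argmin(risk_estimates))
    return t_star, iterates[t_star]

# Synthetic U-shaped risk curve standing in for t -> r_tilde_t.
risks = [(t - 7) ** 2 / 10 + 1.0 for t in range(20)]
iterates = list(range(20))  # placeholders for the iterates b^t
t_star, b_star = select_stopping_time(iterates, risks)
```

In practice one would also plot `risks` against the iteration counter to visually diagnose convergence, as described above.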
---
[1] Luo, Yuetian, Zhimei Ren, and Rina Barber. "Iterative approximate cross-validation." International Conference on Machine Learning. PMLR, 2023.
[2] Rad, Kamiar Rahnama, and Arian Maleki. "A scalable estimate of the out-of-sample prediction error via approximate leave-one-out cross-validation." Journal of the Royal Statistical Society Series B: Statistical Methodology 82.4 (2020): 965-996. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SS1: Accelerating Inference with Fast and Expressive Sketch Structured Transform | Accept (poster) | Summary: The paper introduces a structured randomized parameter sharing scheme (SS1) for computational complexity reduction. A couple of coalescing techniques are suggested, where the chunk size does not affect the approximation error. The proposed method is easy to combine with the quantization method to achieve further gain. Experimental results demonstrate that SS1 improves the throughput of GPT-2.
Strengths: Overall, the paper suggests good additional features to Random Parameter Sharing (RPS).
- The idea of reducing the computational complexity with RPS is interesting and adequately tackles a limitation of RPS.
- The proposed method achieves actual latency improvement in some cases.
- The paper focuses also on the GPU kernel implementation. The details about Forward and Backward Kernels are discussed well.
Weaknesses: However, a few points should be addressed to improve the presentation.
- Experimental results lack an important baseline--the comparison between SS1 and the existing RPS methods should be considered since SS1 is built upon RPS.
- The latency and accuracy comparison of SS1 with other structured matrices is limited to small-sized models. Hence, the performance of SS1 compared to other methods on large models like LLM is unclear.
- Writing quality needs to be improved, especially for the mathematical expressions and algorithms. I am willing to increase the score if the presentation of the algorithms, theorems, and proofs meets the quality of a top-tier ML conference.
- Some mathematical expressions are confusing, e.g., a vector and its element are sometimes denoted by the same alphabet $x$.
- Algorithms 1 and 2 are hard to follow. Adding comments to each important line or block might help readers understand what they are for.
- The abbreviations used in Table 1 are not properly introduced in the text.
- Proofs in the Appendix contain typos in many places and should be written with more professional nuance.
- It is hard to follow how Algorithm 2 finds the solution of Equation 10. More details regarding Algorithm 2 should be provided.
Technical Quality: 2
Clarity: 1
Questions for Authors: Major questions
- How does Algorithm 2 guarantee the minimization of the Frobenius norm error in Equation 10? More details should be provided.
- How are $K$- and $L$-coalescing used in the actual SS1 implementation?
- What are the latency and accuracy of Llama-3-8b with low-rank or Monarch matrices?
- How much is SS1 better/worse than the existing RPS methods in terms of accuracy and latency?
Minor questions
- In Equation 10, I suggest denoting the Frobenius norm by $\|\cdot\|_F$, not $\|\cdot\|_2$, where the latter is usually reserved for the operator 2-norm induced by the vector 2-norm.
- Do the hyperparameters $B_K$ and $B_N$ vary from GPU to GPU? How does the inference throughput change over those parameters?
- Typo in Line 220 “[1,2])”
- I suppose “SSL” in Table 1 is a typo and should be SS1.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: SS1 requires a specialized kernel to be effectively utilized on GPUs. Different types of layers (e.g., convolution layer) may require different kernel implementations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for the suggestions to improve the paper. Incorporating the suggestions, we have made several writing changes to the paper that are listed in the common rebuttal. We hope to have addressed your concerns and incorporated your suggestions to your satisfaction. Specific concerns are addressed below,
**Comparison with RPS**:
Existing RPS methods focus on reducing the parameter memory but not the FLOPs. Thus the latency of RPS methods is generally slightly worse than the full model's latency, regardless of the amount of compression. Even so, we performed RPS experiments using ROAST for matrix compression. The results are as follows and are as expected. We will add these results to the final paper.
| | SS1- Quality | SS1-Latency | ROAST-Quality | ROAST-Latency |
|----|--------------|-------------|---------------|---------------|
| 2x | 19.45 | 154 | 19.87 | 238 |
| 4x | 19.99 | 148 | 20.2 | 237 |
| 8x | 20.68 | 145 | 20.94 | 222 |
**the performance of SS1 compared to other methods on large models like LLM is unclear.**
While we do not have the computing power to train 1B+ LLM models, our rigorous tests on different domains, model architectures, and settings encourage us to believe that the results we see are indeed general and will translate even to bigger models.
**Writing quality improvements**
We apologize for the issues with our presentation. We have fixed all the issues listed here and more. The detailed summary of major changes is presented in the common rebuttal.
**How does Algorithm 2 guarantee the minimization of the Frobenius norm error in Equation 10?**
We have improved our explanation of the projection from the full matrix to the SS1-compressed matrix, which should help avoid confusion. We provide a brief explanation here (we have also added this explanation to the appendix and have moved Algorithm 2 there).
Firstly, note that parameter sharing in SS1 happens only inside a single neuron, so there is an independent RPS instance for each neuron. Projecting the $K \times N$ weight matrix thus amounts to independently projecting each neuron. Consider a single neuron under RPS, $y = x^\top w$ with $w = S z$. Given $w$, the best $z$ minimizing $\| w - Sz\|_2$ is simply the solution of a linear regression, i.e., $z^* = (S^\top S)^{-1} S^\top w$. Since $S$ is a sparse matrix with exactly one non-zero entry in each row, $S^\top S$ is a diagonal matrix, and the hash function defined in the paper ensures that all diagonal entries are non-zero, so it is invertible. In fact, if you view $S$ as a mapping that sends each element of $w$ (range $K$) to some element of $z$ (range $K/c$), then $z^*$ is just the aggregation of all elements of $w$ that map to a given element of $z$ (computed via $S^\top w$), normalized by the number of elements of $w$ that map there (computed via $(S^\top S)^{-1}$). This is straightforward to implement and can be done in a blocked manner for the entire matrix $Z$, as presented in Algorithm 2.
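The per-neuron projection described above can be sketched in a few lines of NumPy. Here `bucket` stands in for the paper's hash function, and we assume every bucket receives at least one weight (which the paper's hash function guarantees); names and the toy sizes are illustrative.

```python
import numpy as np

def project_neuron(w, bucket):
    # z* = (S^T S)^{-1} S^T w for S with one non-zero per row:
    # each shared parameter is the average of the weights hashed to it.
    m = bucket.max() + 1
    sums = np.zeros(m)
    counts = np.zeros(m)
    np.add.at(sums, bucket, w)      # S^T w : aggregate
    np.add.at(counts, bucket, 1.0)  # diag(S^T S) : bucket sizes
    return sums / counts            # normalize

# Toy neuron: K = 6 weights, compression c = 2 -> 3 shared parameters.
w = np.array([1.0, 3.0, 2.0, 4.0, 5.0, 7.0])
bucket = np.array([0, 0, 1, 1, 2, 2])  # stand-in for the hash function
z = project_neuron(w, bucket)
```

Applying this independently to each column of the $K \times N$ weight matrix gives the blocked projection of the full matrix onto $Z$.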
**B_K and B_N parameters : implementation, do they vary with GPU, does it affect throughput**
The GEMM operation has three block parameters: $B_M$, $B_K$, $B_N$. In the implementation of SS1, our coalescing parameters $B_K$ and $B_N$ behave like standard GEMM block parameters, i.e., they determine the block sizes brought into shared memory for performing the matrix multiplication. What is interesting is that, algorithmically, they also determine how the hash function and the SS1 recipe in general should behave, which is the novelty in SS1 that ensures the algorithm is hardware-friendly.
These parameters are specific to each GPU and need GPU-specific optimization. The choice of these parameters also has a large impact on throughput, as is true for GEMM operations in general.
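As a simplified CPU analogue of the tiling described above, the following NumPy sketch shows what block parameters partitioning the K and N dimensions of a GEMM control. The real SS1 kernel stages these tiles in GPU shared memory and additionally ties the hash function to the blocks; this sketch only illustrates the role of $B_K$ and $B_N$.

```python
import numpy as np

def blocked_gemm(A, B, B_K=32, B_N=32):
    # Tiled matrix multiply: process B_K x B_N blocks of the K/N dimensions,
    # mimicking (on CPU) the tiles a GEMM kernel stages in shared memory.
    M, K = A.shape
    K2, N = B.shape
    assert K == K2
    C = np.zeros((M, N))
    for k0 in range(0, K, B_K):
        for n0 in range(0, N, B_N):
            C[:, n0:n0 + B_N] += A[:, k0:k0 + B_K] @ B[k0:k0 + B_K, n0:n0 + B_N]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 10))
B = rng.standard_normal((10, 12))
C = blocked_gemm(A, B, B_K=4, B_N=5)
```

On a GPU, the optimal block sizes depend on shared-memory capacity and occupancy, which is why they must be tuned per device.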
**Llama-3-8b experiments**
We were able to run quality experiments for low-rank. We could not find a ready implementation of Monarch projection for rectangular matrices in the original repository. We will add complete results in the final paper. The results are very similar across the different methods.
| Model | #param | MMLU | Winogrande | Latency |
|:--------:|:-------:|:-----:|:-----------:|:-------:|
| Original | 8B | 65.05 | 76.1 | 378.29 |
| SS1 | 6.1B | 61.26 | 69.93 | 341.59 |
| Monarch | 6.2B | NA | NA | 346.8 |
| LowRank | 6.1B | 62.01 | 68.98 | 339.42 |
We want to note that this is just a proof of concept showing that SS1 can be used in the era of pretrained models as well. The main experiments remain those where SS1 is trained from scratch to evaluate quality vs. latency.
We hope to have addressed your clarifications and incorporated your suggestions to your satisfaction. If you are satisfied with our changes, please consider giving us an updated score.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. The presentation changes stated in the global response addressed the majority of my concerns regarding the presentation. I am inclined to raise the score, as long as the revised manuscript is updated correspondingly so that readers do not need to deal with the ambiguous notations and presentation.
---
Reply to Comment 1.1.1:
Title: Thanks for the vote of confidence!
Comment: Dear Reviewer, we assure you that we have made the suggested changes to the manuscript in both the main paper and the appendix, and these will be reflected in the final submission. We have also requested AC, PC, and SAC to allow submitting paper PDF during review as a special case. We are awaiting their decision. | Summary: The paper introduces the Sketch Structured Transform (SS1), a novel approach designed to enhance the efficiency of tensor multiplication in deep learning models. SS1 leverages structured yet random parameter sharing to reduce computational load while maintaining the expressive power of the models. The authors provide empirical evidence demonstrating SS1's superior quality-efficiency tradeoffs compared to existing methods like Monarch and LowRank. Key findings include significant improvements in inference speed for models such as GPT2, BERT, and Llama-3-8B, even without finetuning. The combination of SS1 with quantization further enhances efficiency, and SS1 also proves effective in reducing inference latency in CPU workloads, such as the DLRM MLPerf Model.
Strengths: - SS1 introduces a unique method of structured yet random parameter sharing that effectively reduces computational requirements while preserving model expressivity.
- Extensive experiments demonstrate SS1's superior performance across a variety of models and applications, including significant improvements in inference throughput for GPT2, BERT, and Llama-3-8B.
- SS1 can be applied to pre-trained models, allowing them to be projected onto SS1 and finetuned for faster deployment, which is highly practical for real-world applications.
Weaknesses: - The details of GPU optimization, such as K- and N-coalescing, add complexity to the implementation, which might limit widespread adoption.
Technical Quality: 3
Clarity: 3
Questions for Authors: None
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the support of our paper. We have made writing changes to improve the manuscript and additional experiments as suggested by other reviewers. They are listed in common rebuttal. Specific clarifications are answered below,
**K- and N-coalescing, add complexity to the implementation**
All CUDA kernels optimize block parameters for performance. The K and N coalescing is similar to block size choosing generally required for matrix multiplication. Libraries such as CUBLAS do this optimization and cache the parameters for use, hiding the complexity of the kernels from the end user. Similarly, we do the optimizations in the code and the end user does not need to worry about these parameters.
We hope to have addressed your concerns to your satisfaction. We are happy to provide further clarifications if needed. | Summary: the authors propose the Sketch Structured Transform (SS1), a randomized parameter-sharing method that reduces FLOPs and latency while maintaining model quality. SS1 ties parameters within a single neuron weight and can be implemented to reduce input dimensions before multiplying with a compressed weight vector. This method is GPU-friendly and can be integrated with existing RPS methods, providing control over parameter memory and computation. SS1 can also be applied to pre-trained models using a projection function, preserving the benefits of existing models and enabling fast deployment.
Strengths: 1. The authors created a kernel that can work on available HW and reduce workload latency. Moreover, they open source the kernel!
2. The basic idea, as described in Section 3.1, is clear; the rest is somewhat messy.
Weaknesses: 1. Very hard to follow. The notation is abused many times ($c$ is the compression rate, the size of a supergroup, etc.).
2. The algorithms are unclear. For instance, Algorithm 2: if we wish to find the $Z$ that minimizes the Frobenius norm between the compressed weight and the original weight, we somehow need to search over the different options. If you are simply using brute force, then just say so and push the algorithm to the appendix. Note that using only $w$ and not checking the activation values when defining the compression scheme will be sub-optimal, as you might reduce the MSE $\|W - \mathrm{SS1}(Z, I(k))\|$ while $\|WX - \mathrm{SS1}(Z, I(k))X\|$ (where $X$ is the input to the linear layer) can be further reduced.
3. Generally the paper is just poorly written, which is a shame, as it seems to contain very interesting ideas.
Technical Quality: 3
Clarity: 1
Questions for Authors: 1. Can you explain Section 4 at a high level? What are you trying to prove and why? Specifically, to the best of my understanding, you compress/quantize only the weights, and the activations undergo reduction. So why should we care about the variance of the dot product of compressed unit-norm vectors?
2. The same goes for Theorem 2: can you explain in what sense you mean unbiased? Is it under permutation of the hash function? If so, I am not sure I understand why this is important if we pick one permutation and use it throughout (I mean the weights are tied once).
Confidence: 4
Soundness: 3
Presentation: 1
Contribution: 3
Limitations: The paper show promising results on small scale models. For llama3-8b the results are less promising as only 1.1x speedup was achieved by applying ss1 only on some of the layers (selected based on calibration set). The limitations are partially discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their support of our paper. We apologize for writing issues and assure the reviewer that we have taken due care to fix them both in the main paper and appendix. The list of changes made are presented in the common rebuttal. We clarify some specific concerns below
**Notation overload of $c$**: The $c$ in the text always refers to the compression factor (a factor of $c=4$ means 4x compression). In the text we said that the supergroup contains $c$ groups. Having said that, we have refactored Section 3 with standardized notation and explanations to improve readability and avoid confusion.
**Algorithm 2**: We do not need to perform brute force to find the minimizer of $\|WX - \mathrm{SS1}(Z, I(k))X\|$; the solution is shown exactly in Algorithm 2. We found an easier way to explain the projection mechanism and have updated the manuscript accordingly, which we hope alleviates the concerns regarding Algorithm 2. We also explain it here.
Firstly, note that parameter sharing in SS1 happens only inside a single neuron, so there is an independent RPS instance for each neuron. Projecting the $K \times N$ weight matrix thus amounts to independently projecting each neuron. Consider a single neuron under RPS, $y = x^\top w$ with $w = S z$. Given $w$, the best $z$ minimizing $\| w - Sz\|_2$ is simply the solution of a linear regression, i.e., $z^* = (S^\top S)^{-1} S^\top w$. Since $S$ is a sparse matrix with exactly one non-zero entry in each row, $S^\top S$ is a diagonal matrix, and the hash function defined in the paper ensures that all diagonal entries are non-zero, so it is invertible. In fact, if you view $S$ as a mapping that sends each element of $w$ (range $K$) to some element of $z$ (range $K/c$), then $z^*$ is just the aggregation of all elements of $w$ that map to a given element of $z$ (computed via $S^\top w$), normalized by the number of elements of $w$ that map there (computed via $(S^\top S)^{-1}$). This is straightforward to implement and can be done in a blocked manner for the entire matrix $Z$, as presented in Algorithm 2.
**Other presentation issues** We have refactored section 3, added detailed comments in algorithm 1, simplified the projection explanation, and made the notation clear and consistent. We hope the reviewer finds the new version friendlier.
**Section 4 explanation**
Recent literature on RPS shows that the quality of the learned linear model under compression is directly related to how well inner products are preserved under compression. Thus, inner product preservation is an important problem to analyze in this regard [1].
The goal of Theorem 1 in Section 4 is to understand whether projection (the basis for RPS) and quantization (the basis for quantization-based compression) can be combined to obtain better compression rates while preserving quality. While there is general consensus from empirical observations that compression methods can be combined, Theorem 1 is, to our knowledge, the first theoretical exposition on this topic; it also provides guidelines on how much compression to perform with each individual method.
The goal of Theorem 2 in Section 4 is to understand the impact the coalescing factors have on inner product preservation (and hence on the quality of the learned model).
The unbiasedness and variance are with respect to the randomness of the permutations (implemented via hash functions). While we fix these permutations at the start, the randomness is still important to guarantee that, with high probability, we do not choose a "bad" permutation that would compromise the quality of the learned model.
[1] Desai, A. and Shrivastava, A.. In defense of parameter sharing for model-compression. International conference on learning representations 2024
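To illustrate the kind of inner-product preservation discussed above, here is a Monte Carlo check that a count-sketch-style random hashing with random signs preserves inner products in expectation. This is a generic illustration of the principle, not the exact SS1 construction or the statement of Theorem 2; all names and sizes are illustrative.

```python
import numpy as np

def count_sketch(v, bucket, sign):
    # Hash coordinates of v into buckets with random signs (count-sketch style).
    out = np.zeros(bucket.max() + 1)
    np.add.at(out, bucket, sign * v)
    return out

rng = np.random.default_rng(0)
d, m, trials = 64, 16, 4000
x = rng.standard_normal(d)
y = rng.standard_normal(d)
est = 0.0
for _ in range(trials):
    bucket = rng.integers(0, m, size=d)  # shared hash for both vectors
    sign = rng.choice([-1.0, 1.0], size=d)
    est += count_sketch(x, bucket, sign) @ count_sketch(y, bucket, sign)
est /= trials  # averages toward <x, y>: the sketch is unbiased
```

Averaging over the random hash and sign draws, the sketched inner product concentrates around the true $\langle x, y \rangle$, which is the sense of "unbiasedness under the randomness of the hash" explained above.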
We hope to have addressed your clarifications and incorporated your suggestions to your satisfaction. We are happy to provide further clarifications if needed.
---
Rebuttal Comment 1.1:
Title: Answer to the authors' rebuttal
Comment: I would like to thank the reviewers for their detailed responses. The additional explanations provided in the answers to Reviewer ZODN were very helpful, and they addressed all of my questions. I trust that the authors will revise the manuscript accordingly, and I am raising my score to 6. | Summary: This paper introduces Sketch Structured Transform (SS1), a hardware-efficient structured matrix for improving the latency of linear layers in neural networks. The paper theoretically analyzes the effectiveness of SS1 in random projections, optionally combined with quantization. Experiments show favorable performance compared to alternatives such as Monarch and low-rank structures, as the compression ratio is varied.
Strengths: 1. Due to its use of the sketch matrix, SS1 is conceptually novel and distinct compared to other popular and performant structures such as Monarch and Low Rank, which only use batched matrix multiplies.
2. SS1 show promising performance relative to Monarch and low rank.
3. SS1 is designed to be hardware efficient, so that runtime overhead is low and latency improvements can be realized even at fairly small scale like GPT-2 Small.
Weaknesses: The experiment section does not convincingly demonstrate the superiority of SS1 over Monarch and Low Rank for the following reasons:
1. The presentation of the results makes it hard to parse how SS1 compares to alternatives **as a function of latency**, which is the metric the paper aims to optimize for. For example, in Table 1, it would be much better to plot perplexity or loss as a function of latency for each structure. It would only be fair to claim SS1 outperforms alternatives if its loss vs latency curve stays below those of others.
2. It does not make sense to control for the model's hidden dimension and only vary the compression ratio, as is done in the experiments. For example, one can increase the compression ratio (number of blocks) in Monarch or decrease the rank in low rank **while scaling up the hidden dimension and achieve the same latency**. It's possible this is a favorable tradeoff for the other structures, but the experiment section does not investigate this possibility. Indeed, recent work [1] has shown Monarch with more blocks can perform better for the same number of parameters or FLOPs by simultaneously scaling up the hidden dimension. **The truly meaningful comparison would be a scaling law plot of loss vs latency with one curve for each pair (structure, compression ratio)**.
3. I strongly suspect the learning rate is not well-tuned in the GPT experiments, making it difficult to trust the results. The learning rate used was $6e-4$ for all models. This value is well-tuned for the original dense GPT-2 model, but it is suboptimal for structured models as shown in [1]: Monarch with more blocks and low rank with lower ranks would generally require higher learning rates, otherwise their performance can be significantly underestimated.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. At a conceptual level, SS1 seems similar to a banded matrix with $N/c$ bands, which also has hardware-efficient implementations. How does its performance compare to a banded matrix?
2. Have you tried either tuning the learning rates for each model in the GPT-2 experiments? Alternatively, you can use the structure-aware learning rate rescaling introduced in [1] to ensure a good value is used for each model without having to tune it.
[1] Qiu et al. 2024. Compute Better Spent: Replacing Dense Layers with Structured Matrices
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The authors discussed some limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Title: Request for sharing Banded Matrix details
Comment: Dear reviewer, we would be happy to perform comparative experiments on banded matrix while we prepare for rebuttal as time permits.
Can you please share links to a fast implementation of banded weight matrices? In our preliminary search, we have not found a good implementation. The closest we found was the genbmm package (https://blog.rush-nlp.com/visualizing-banded-sparse-matrices.html), but it seems to only multiply two banded matrices, whereas our setting requires multiplying one banded matrix (weight) by a full matrix (input).
---
Rebuttal 2:
Rebuttal: We appreciate the reviewer rating our paper well on counts of soundness and contribution. We have made writing changes to improve the manuscript and additional experiments as suggested by other reviewers. They are listed in the common rebuttal.
**Plots of latency vs. perplexity**: Thanks for the suggestion. We plot latency vs. perplexity for the GPT experiment in Figure 3 (page 15), and we can see that the SS1 curve is indeed the best, lying below the other curves. We do not use such plots in the main paper since the table there reports latency results for larger models for which we do not have quality numbers.
**About changing dimensions along with compression rates**
The direction of increasing the hidden dimension is definitely meaningful, not just for baselines such as low-rank and Monarch but also for our proposed SS1. However, it makes the experimental space of models explode. Our choice of experimental setup is primarily meant to provide a practical and fair setting for comparing the different alternatives. In fact, sticking to the base model dimensions and evaluating various compression methods is quite standard in papers that fundamentally compare different methods; some examples are [1,2,3].
[1] Tanaka, H., Kunin, D., Yamins, D.L. and Ganguli, S., 2020. Pruning neural networks without any data by iteratively conserving synaptic flow. Advances in neural information processing systems, 33, pp.6377-6389.
[2]Frankle, J., Dziugaite, G.K., Roy, D.M. and Carbin, M., 2020. Pruning neural networks at initialization: Why are we missing the mark?. arXiv preprint arXiv:2009.08576.
[3] Desai, A. and Shrivastava, A.. In defense of parameter sharing for model-compression. International conference on learning representations 2024
**Learning rate tuning**
You are correct: our learning rate is tuned for the base models and reused across the structured models. This again is a common way to compare different methods [1,2,3]. While it is not optimal for each method, it avoids hyperparameter tuning for every method x setting combination. Having said that, we understand the concern and quickly tested whether learning rate tuning would change the findings. We find that learning rate tuning improves results for both Monarch and SS1 and does not change the relative ordering.
| Method | learning rate | loss | perplexity |
|--------------|---------------|-------|------------|
| Monarch - 8x | 6e-4 | 3.119 | 22.83 |
| | 9e-4 | 3.135 | 23.23 |
| | 2e-3 | 3.017 | 20.65 |
| | 8e-3 | NA | NA |
| SS1-8x | 2e-3 | 2.99 | 20.13 |
| Lowrank-8x | 2e-3 | 3.161 | 23.67 |
The findings do not change with learning rate tuning.
**Suggestion on structure-aware tuning**: The cited work by Qiu et al. is indeed interesting and will be useful for our future research. Thanks for pointing it out.
**Banded Matrix Comparison**
We would love to compare against a banded matrix. However, as mentioned in our official comment, we did not find a good implementation of a banded matrix multiplied by a full matrix. Please let us know if you have any references; we would be happy to include a comparative study in our camera-ready.
Conceptually, SS1 and the banded matrix are very different. The banded matrix is motivated by sparsity, which drops input signals in the neuron computation, whereas SS1 is motivated by parameter sharing, which sketches the input to reduce its dimensionality. Previous literature has found sketching to be generally superior in quality to sparsity [1].
[1] Desai, A. and Shrivastava, A.. In defense of parameter sharing for model-compression. International conference on learning representations 2024
We hope to have addressed your clarifications and incorporated your suggestions to your satisfaction. we kindly ask you to consider giving us an updated score.
---
Rebuttal Comment 2.1:
Comment: I appreciate the additional experiments. I'm raising my score in light of them. That said, I believe investigating how to optimally balance dimension vs. compression rates, and adopting structure-aware learning rates (or simply tuning them), will be very important for delivering practical impact. These hyperparameter choices become extremely important for large-scale training, where a small improvement in a hyperparameter can translate to a significant saving of compute. Indeed, by adjusting the Monarch learning rate, you decreased its perplexity by ~10% and significantly shrunk the gap with SS1. I hope the authors can carefully discuss these considerations in the final paper.
Rebuttal: While the paper's scores are borderline, the consensus of reviewers on its good soundness and contribution of the paper is encouraging.
Several suggestions on improving the presentation of the paper were made, which we have incorporated in our updated manuscript. Reviewers AurA and ZoDN are willing to increase their scores if the presentation is improved. We hope they find the changes satisfactory.
Presentation Changes: (Major text changes are marked in blue in uploaded pdf)
1. We have standardized the notation used, i.e., using small-case letters for scalars, small-case boldface for vectors, and upper-case boldface for matrices throughout the paper and appendix.
2. We have removed the inner product notation and stuck to matrix notation in most cases to avoid confusion. Some inconsistencies w.r.t. the shape of W have also been corrected.
3. We have simplified the discussion on projection of matrices on SS1 and moved the algorithm to the appendix, where additional details are presented.
4. We have fixed the typos in the appendix and main text.
5. The condition in Theorem 1 on the norms of the vectors x and y has been corrected: it is ||x|| <= 1 and ||y|| <= 1.
6. Table 1 abbreviations are introduced in the preceding paragraph.
7. A latency vs. perplexity plot for GPT2-S with SS1, Monarch, and Lowrank is added to the appendix (attached as a pdf to this rebuttal).
We think the proposed changes have improved our paper and thank the reviewers for their careful review and suggestions.
We also provide additional experiments as requested by a few reviewers.
1. SS1 vs. RPS: As expected, we find that RPS does not reduce the latency of the model regardless of the compression rate. Thus SS1 has a better quality-latency tradeoff.
2. Does learning rate tuning change the results?: We perform learning rate tuning for Monarch as suggested by the reviewer and find that it does improve the quality of Monarch models. However, the same is also true for SS1, and the relative ordering of the methods does not change due to learning rate tuning.
The additional experiments support the superiority of SS1 over alternatives. We will add these results to the final manuscript.
Overall, we thank the reviewers for their time and careful consideration of our paper. We urge them to reconsider their scores if their concerns have been reasonably alleviated.
UPDATE: We just found that we do not have an option to upload the updated manuscript. Since some of the changes requested by the reviewers relate to presentation, and their scores are contingent on that (e.g., reviewer ZoDN), is it possible to share the updated manuscript?
Pdf: /pdf/f78a9d8aca6bb1505835e7ed3fe89fc2d0c756e1.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | Summary: This paper introduces the sketch structured transform (SS1) method for fast inference. SS1 is a form of randomized parameter sharing (RPS) with connections to low-rank/dimensionality-reduction methods. Theory is used to elucidate SS1’s compatibility with quantization -- it is also compatible with other RPS methods. In empirical experiments with popular NLP and CV setups, relative to competing methods, SS1 can preserve performance better as it reduces parameter count and inference latency. SS1 is also shown to have flexibility in the stage it is applied at -- it is applicable to models without any finetuning, before finetuning, and during training.
Strengths: Originality: The RPS method SS1 is introduced, and it is shown to reduce parameters, FLOPs, and latency. The method is made complementary with tiled matrix multiplication via schemes to coalesce values along multiple axes. Theorems are introduced to helpfully demonstrate SS1’s compatibility with quantization and its hyperparameter flexibility.
Quality: The overall approach is straightforward and intuitive, and it is tested in a range of scenarios that vary the model, dataset domain, and application timing (before or after training). It is compared against strong benchmark methods.
Clarity: The SS1 method is clearly explained and positioned relative to prior work. The figures, equations, and algorithms motivate design choices and illustrate implementation details. The paper is mostly well written.
Significance: This paper addresses efficient inference for matrix-multiplication-focused models (e.g., LLMs), a crucially important topic. The proposed method is applicable to pretrained models, giving it broad relevance.
Weaknesses: The paper doesn't have any major weaknesses. I address some minor weaknesses below (see Questions). These mostly deal with improving the presentation. I would be happy to raise my score if some of them are addressed.
Technical Quality: 3
Clarity: 3
Questions for Authors: Line 285: this seems incorrect. The quality with SS1 seems worse in Table 2 (right).
Would the benefits of SS1 over other approaches hold at larger model sizes (1B+)?
Line 123: I think this equivalence only holds for points with unit length.
Line 155: Can you please rewrite this sentence to better clarify why the input would have to be read multiple times?
Line 153: should this be “$N \times K$” instead of “$K \times N$”? The latter notation is used on this line, which is inconsistent with the output dimension appearing first in the weight matrix notation on line 139.
Is equation 9 missing the inner product notation?
Line 202: why is $B_KB_N$ subtracted in the indexing?
Line 263: this is a figure, not a table.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations are well articulated.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the support of our paper. We have made writing changes to improve the manuscript and additional experiments as suggested by other reviewers. They are listed in common rebuttal.
Please find the clarifications requested below:
**Line 285**: In Table 2 (right), we can see that the SS1 model with 74M parameters has better (lower) perplexity than GPT-S-6x with 76M parameters. Thus SS1 models with fewer parameters can outperform the standard model.
**Would the benefits of SS1 over other approaches hold at larger model sizes (1B+)?**
We believe so. While we do not have the computing power to train 1B+ LLM models, our rigorous tests on different domains, model architectures, and settings encourage us to believe that the results we see are indeed general and will translate even to bigger models.
**Line 123**: The equivalence is due to the relation between inner products and norms.
Inner products => norms: since ||x||^2 = <x, x>, a norm is just the inner product of a vector with itself.
Norms => inner products: since 2<x, y> = ||x||^2 + ||y||^2 - ||x - y||^2.
Thus, if norms are preserved, then inner products will also be preserved and vice versa. More discussion can be found in the book[1] (Page 12, first paragraph).
[1] Woodruff, D.P., 2014. Sketching as a tool for numerical linear algebra. Foundations and Trends® in Theoretical Computer Science, 10(1–2), pp.1-157.
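As a quick numerical sanity check (a toy NumPy snippet, not taken from our code), both directions of this correspondence can be verified directly on random vectors:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.normal(size=8), rng.normal(size=8)

# Norms from inner products: ||x||^2 = <x, x>
assert np.isclose(np.dot(x, x), np.linalg.norm(x) ** 2)

# Inner products from norms (polarization identity):
# 2<x, y> = ||x||^2 + ||y||^2 - ||x - y||^2
lhs = 2 * np.dot(x, y)
rhs = np.linalg.norm(x) ** 2 + np.linalg.norm(y) ** 2 - np.linalg.norm(x - y) ** 2
assert np.isclose(lhs, rhs)
print("norms and inner products determine each other")
```

Hence any transform that preserves norms up to distortion also preserves inner products up to the corresponding distortion, and vice versa.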
**Line 155**: Let us clarify the line here.
Claim: If we do not perform K-coalescing, then we might end up requiring to load input multiple times to compute the value of the single neuron.
Argument: Consider a neuron computed as <z, Sx>, where the dimensions of z and x are large enough that they cannot be cached or stored in shared memory. In that case, under certain hash functions that generate S, x[i] and x[i+1] (for any i) can get multiplied with z[j] and z[k] where j and k are far apart (in fact, under the randomness of the hash function, this event is highly likely). Then, if we take a single pass over z, we will have to read the cache line containing x[i] and x[i+1] at least twice.
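This argument can be mimicked with a toy cache model (illustrative only; real memory hierarchies are more complex, and all names below are hypothetical): scanning in order, a randomly hashed index mapping forces roughly one cache-line load per element, while a coalesced mapping loads each line only once.

```python
import numpy as np

rng = np.random.default_rng(2)
n, line = 64, 8                        # vector length, cache-line size (elements)

random_map = rng.permutation(n)        # random hash scatters the x indices
coalesced_map = np.arange(n)           # K-coalescing keeps the mapping local

def cache_line_loads(index_map, line_size):
    """Count loads of x's cache lines during one in-order pass, assuming
    (toy model) that only the most recently used line of x stays cached."""
    loads, cached = 0, -1
    for idx in index_map:
        needed = idx // line_size
        if needed != cached:
            loads += 1
            cached = needed
    return loads

print("random hash :", cache_line_loads(random_map, line))     # close to n loads
print("coalesced   :", cache_line_loads(coalesced_map, line))  # n / line loads
```

Under the coalesced mapping each of the n/line cache lines is read exactly once; under the random mapping, neighboring reads almost always land on different lines, so lines are re-fetched many times.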
Taking a cue from the need for clarification, along with the notation and presentation issues pointed out, we have refactored our explanation in Section 3 of the manuscript while standardizing the notation. We believe the current version is clearer and will help readers.
**Line 153**, Equation 9: Thanks for pointing out the inconsistency in handling W. We have fixed the presentation so that we always use W as a K x N matrix, and all expressions are written as vector and matrix multiplications (we have removed the inner product notation) for clarity.
**Line 202**: B_K B_N is subtracted in the indexing in accordance with the hash-function treatment of ROAST [2], a recent RPS method. It is a simple solution used to avoid overflows when reading a block of size B_K x B_N from the indexed location.
[2] Desai, A., Zhou, K. and Shrivastava, A., 2023, July. Hardware-aware compression with random operation access specific tile (ROAST) hashing. In the International Conference on Machine Learning (pp. 7732-7749). PMLR.
**Line 263**: fixed.
We hope to have addressed your clarifications and incorporated your suggestions to your satisfaction. We kindly ask you to consider giving us an updated score as per your indication in the review. We are happy to provide additional clarifications if needed.
Thanks again for the support of our paper.
---
Rebuttal Comment 1.1:
Title: Request for action on rebuttal
Comment: Dear Reviewer,
We hope to have addressed the requested clarifications and incorporated your suggestions. Please let us know if the responses are satisfactory since the discussion period deadline is fast approaching. If so, we request that you update your score as indicated in the review. We are happy to provide further details and answer any more questions.
---
Rebuttal Comment 1.2:
Comment: I have read the other reviews and the authors' response -- thank you for the clarifications. I am raising my score to 6 based on my understanding that the presentation has been improved.
Could the authors please create figures similar to the new one in the authors' general rebuttal for the larger model sizes they explored (i.e., GPT2-M and GPT2-L)? Also, there's a typo in Table 1 -- GPT2-L is not listed, and GPT2-M is listed twice.
Please make the following additional changes:
- On Line 285, make clear that you are comparing to "GPT-S-6x" and not "GPT-S-4x", as the latter is better in terms of perplexity.
- It looks like Line 123 says that inner products between points and distances between points are "equivalent", which is not correct. Could you rewrite this to avoid this confusion?
---
Reply to Comment 1.2.1:
Comment: Dear Reviewer, Thanks for the additional feedback and updated score. We will incorporate this in the final version (Table 1, Line 285, and Line 123). The scatter plots are generated for GPT2-S model as we have quality numbers for these, whereas the larger models are only tested for latency. We can add scatter plots for other datasets we have in the paper and add more baselines to them (eg. RPS). | null | null | null | null | null | null |
Partial Gromov Wasserstein Metric | Reject | Summary: This paper introduces an unbalanced Gromov-Wasserstein distance, which adapts the formulation of unbalanced optimal transport to the Gromov-Wasserstein setting with a total variation (TV) penalty instead of KL. This new distance allows comparison of measures from different spaces carrying partial amounts of mass, so they name it Partial Gromov-Wasserstein (PGW). They further prove the metric properties of this distance and propose two algorithms to solve PGW: a Frank-Wolfe algorithm and a line-search method. In the experiment section, they test PGW on different tasks including shape matching, shape retrieval, and the barycenter problem.
Strengths: This paper fills a gap in the topic of unbalanced GW problems by introducing a TV-relaxed unbalanced GW, which they call Partial Gromov-Wasserstein (PGW). The work establishes the theoretical metric properties of this distance with a solid proof, making a contribution to the topic. They propose two solvers to compute this distance and show diverse experimental applications. The experimental results show that this new variant of unbalanced GW performs effectively on data containing outliers, aligning with the anticipated performance of unbalanced Wasserstein or unbalanced Gromov-Wasserstein methods on noisy data.
Weaknesses: - It's worth noting in the literature review the similar works formulating partial Wasserstein as a metric with TV constraints [1], [2].
- The proof of the metric properties is not well displayed in the main text, even though this is the main highlight of this work.
- Further comparison with the KL version is lacking with regard to robustness against outliers. Also, further discussion of scalability (very large datasets) would be appreciated.
- In the experiment section, the selection of hyperparameters is not clear. The justification for the chosen $\lambda$ value for both the PGW and MPGW methods was not presented.
[1] Raghvendra, Sharath, Pouyan Shirzadian, and Kaiyi Zhang. "A New Robust Partial $ p $-Wasserstein-Based Metric for Comparing Distributions." arXiv preprint arXiv:2405.03664 (2024).
[2] Nietert, Sloan, Rachel Cummings, and Ziv Goldfeld. "Robust estimation under the Wasserstein distance." arXiv preprint arXiv:2302.01237 (2023).
Technical Quality: 4
Clarity: 3
Questions for Authors: - Could you add the parameter sensitivity analysis in experiment section?
- The table under line 271 doesn't have a title, and in (a) the performance of MPGW is $0$ on Dataset II, which looks abnormal to me. Is this the actual experimental result or just a typo?
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: None
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## It's worth noting in the literature review the similar works formulating...
**Answer:**
The authors thank the reviewer for their point. These two references will be added in the Introduction.
## The proof of the metric properties is not well displayed in the main text, as this is the main highlight of this work.
**Answer:**
The proof of the metric property is given in Appendix D. We apologize that we cannot present it in the main text due to the page limit. The formal statement is based on the definition of equivalence classes among mm-spaces (Remark D.1), and the main technique of the proof relies on the relation between PGW and GW (see Appendix D.3, where we prove the triangle inequality). Due to the page limit, we cannot present these concepts in the main text, but we have added comments (right before the statement of Proposition 3.4) to give the reader some insight into the main ideas behind the proof.
## Further comparison with KL version was lacking...
**Answer:**
- **Comparison with the KL version:**
In Experiment 5.1, we explicitly compare PGW and UGW (which uses the KL divergence) in outlier detection. Both UGW and PGW can detect outlier points when parameters are suitably selected. However, since UGW utilizes the Sinkhorn solver, it tends to favor a mass-splitting transportation plan. As a result, we can still observe some matching between clean data and outlier data. This phenomenon is also observed implicitly in other experiments.
- **Scalability:**
In Appendix K, we will include the time complexity of our algorithms, which is derived from our convergence analysis in the same appendix. In particular, we refer to the "computational complexity" section in the author rebuttal for details.
This can be further improved if the gradient computation step (optimal partial transport solving step) is replaced by the Sinkhorn method:
$$
\mathcal{O}\left(\frac{\max^2(2L_1, \max(|\mu|, |\nu|) \max(2|C^X|^2+ 2|C^Y|,2\lambda))n^2}{\epsilon^2} \frac{1}{\epsilon} \ln(n) n^2\right).
$$
## In the experiment section, the selection of hyper-parameters is not clear.
**Answer:**
We refer to the "parameter selection" section in the author rebuttal for a discussion of the parameter settings of the PGW method.
In the three experiments presented in the main text, we require full matching for the measure that has less mass. Thus, we set $\lambda$ to be sufficiently large (see Lemma E.2). Similarly, $\rho$ is set to $\min(|\mu|,|\nu|)$ in MPGW. Additionally, we set $\rho_1=\rho_2$ to be suitably large for UGW. In the PU learning experiment, presented in Appendix H, $\lambda$ in PGW and $\rho$ in MPGW are required to be sufficiently large. In UGW, we test different $\rho_1,\rho_2$ such that the transported mass equals the prior $\pi=0.2$.
In particular:
- In the point cloud matching experiment (Section 5.1 and Appendix M.2), we explain how $\lambda$ (for PGW), $\rho_1, \rho_2$ (in UGW), and $\rho$ (for MPGW) are selected in lines 1077-1081.
- In the point cloud interpolation experiment (Section 5.3 and Appendix M.1), we explain how $\lambda$ is selected in line 1043.
- In the shape retrieval experiment (Section 5.2 and Appendix N), we explain the setting of $\lambda$ (for PGW), $\rho_1, \rho_2$ (for UGW), and $\rho$ (for MPGW) in lines 1093-1097.
- In the PU learning experiment (Section P), we explain the settings of $\lambda$ (for PGW) and $\rho$ (for MPGW) in lines 1198-1202. The setting of $\rho_1, \rho_2$ for UGW is explained in lines 1189-1190.
## Could you add the parameter sensitivity analysis in the experiment section? The table under line 271 doesn't have a title, and in (a) the performance of MPGW is on Dataset II which looks abnormal to me. Is this the actual experiment result or just a typo?
**Answer:**
- We've added a section explaining the parameter sensitivity in the experiments to the paper. In particular, $\lambda$ in PGW, $\rho$ in MPGW, and $\rho_1,\rho_2$ in UGW each have a (sufficient) threshold; whenever the parameter is greater than the threshold, the performance demonstrated in this paper is achieved.
- We apologize for the missing title of the table below line 271. The title should be "Accuracy and wall-clock comparison in the shape retrieval problem," and it has been added.
---
Rebuttal Comment 1.1:
Comment: I appreciate your rebuttal. Your response addressed my concerns. I think this work is worthy of publication, so I am raising my score.
---
Reply to Comment 1.1.1:
Title: Comment
Comment: Thank you for your comments.
These two references will be added.
We will also make other modifications (e.g. a section about parameter selection/sensitivity) based on the reviewer's comments. | Summary: The paper considers an unbalanced version of the Gromov-Wasserstein distance, where the discrepancy terms correspond to the total variation between certain product measures for the given marginal and the marginal of the solution, respectively. Different (re)formulations of this distance are considered in both the discrete case and for general measures, existence of optimal solutions and metric properties are shown for certain cost functions, and numerical methods for computing (locally) optimal solutions are presented. Finally, a number of numerical experiments are considered.
The paper is well written and extensive. In the main paper the results are stated, with proofs in the appendix.
Strengths: The paper is strong in terms of both content and the presentation. In particular the results about the metric properties of the PGW problem. It is also a quite complete paper in terms that is contains substantial results on theory, computational methods, and numerical simulations.
Weaknesses: One weakness with the methods in the paper is that they only provide locally optimal solutions. This is a problem with many non-convex problems, but in some cases global solutions can be guaranteed. In the abstract it is stated that the methods solve the PGW problem. This is probably too strong a statement, since they are not guaranteed to converge to the global solution.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1) In the abstract it is stated that the methods solve the PGW problem. This is probably too strong a statement, since they are not guaranteed to converge to the global solution.
2) I think that some early references on the partial/unbalanced OT problem are missing. Neither of the following two papers are mentioned in the introduction. Both papers contain the problem formulation (3).
Georgiou, Tryphon T., et al. "Metrics for power spectra: an axiomatic approach." IEEE Transactions on Signal Processing 57.3 (2008): 859-867.
Piccoli, Benedetto, and Francesco Rossi. "Generalized Wasserstein distance and its application to transport equations with source." Archive for Rational Mechanics and Analysis 211 (2014): 335-358.
3) Some minor typos:
line 174: he --> the
line 175: con --> cost
line 574: spit --> split
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## One weakness with the methods in the paper...
**Answer:**
The authors apologize for the misunderstanding regarding the English term "Partial Gromov-Wasserstein solver" in the main text. As discussed in the paper, GW and its variants UGW/MPGW/PGW are non-convex problems. To the best of our knowledge, both classical algorithms (Frank-Wolfe algorithm) and the Sinkhorn algorithm converge to local minima rather than global minima. Thus, in the optimal transport community, it is generally accepted to say these methods are "solvers" or that these methods "solve the GW/UGW/MPGW problem", rather than stating "these methods are computational algorithms that achieve a local minimum" (see, e.g., [45] abstract and introduction section; [44] section 3.1). We have added a footnote in the introduction section to explain this, maintaining both readability and rigor, and ensuring consistency with previous works.
## In the abstract it is stated the methods solve the PGW problem...
**Answer:**
The claim that our method solves the PGW problem follows the convention of English explanations in previous works [44], [45]. We've added a footnote in the introduction section to maintain consistency with related works and provide a rigorous explanation. See the answer for weaknesses for details.
## I think that some early reference...
**Answer:** These two references will be added to the paper. Equation (3), optimal partial transport, is a classical unbalanced OT problem, and many works discuss its theory and applications. We apologize for initially citing only the classical references from the authors who introduced this concept (McCann and Figalli).
## Some minor typos...
**Answer:** The authors thank the reviewer, and we have fixed the typos.
---
Rebuttal Comment 1.1:
Comment: Thank you for your rebuttal and the additional clarifications.
About the comment: "We apologize for only citing the classical reference from the authors who introduced this concept (Mccann and Figalli) initially."
No need to apologize. However, note that the reference above introduced the same "optimal partial transport" formulation (3) and was published 2 years before the earliest references on this that was provided in the paper.
---
Reply to Comment 1.1.1:
Title: comment
Comment: Thank you for your comments.
These two references will be added to the paper.
We will also make other modifications based on the reviewer's comments. | Summary: This paper introduces the Partial Gromov-Wasserstein (PGW) metric as a means to handle unbalanced Gromov-Wasserstein problems between non-probability mm-spaces. The authors develop and demonstrate two computationally efficient variants of the Frank-Wolfe algorithm for solving the PGW problem. They establish that PGW is a well-defined metric, providing theoretical proofs and applications in shape matching, shape retrieval, and interpolation. The metric and algorithms are validated through numerical experiments against established baselines, showcasing improved results in handling outlier data with a robust performance.
Strengths: 1. The paper presents a novel approach to defining a metric in the context of unbalanced Gromov-Wasserstein, which has been a challenging issue in the field.
2. The quality of the research is high, evident from rigorous theoretical developments, proofs, and comprehensive experiments that validate the effectiveness of the PGW metric in practical applications.
3. The paper is well-organized, with clear explanations of complex concepts. The use of examples and experimental results helps in understanding the practical implications and advantages of the PGW metric.
Weaknesses: 1. While the paper provides a comparison with existing methods, it could benefit from a broader range of comparative baselines, especially newer techniques that might provide a stiffer benchmark.
2. The paper does not extensively discuss the sensitivity of the PGW algorithm to the choice of its parameter (e.g., the regularization coefficient - lambda), which is crucial for understanding its robustness and adaptability in diverse real-world scenarios.
3. There is limited discussion on the scalability (or the time complexity) of the proposed methods, particularly in high-dimensional or large-scale settings, which is vital for their applicability in big data applications.
Technical Quality: 4
Clarity: 3
Questions for Authors: 1. The regularization coefficient lambda is referenced in Equations (9), (10), and (14) of $\tilde{M}$. However, it appears that two of the algorithms do not explicitly incorporate lambda. Could you clarify whether these algorithms require the use of lambda?
2. If lambda is indeed utilized in these algorithms, how does the selection of lambda influence the performance of the PGW metric? Are there guidelines or methods for choosing this parameter optimally based on the dataset characteristics?
3. In practical applications, especially in high-dimensional spaces, what are the computational limitations of the proposed algorithms, and how might these be mitigated?
4. Are there potential extensions of the PGW metric that could handle non-metric spaces, given the increasing interest in such spaces in various applications?
Confidence: 3
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: Including limitations on the scalability and time complexity, such as the maximum solvable problem size within one hour, would be beneficial for its applications.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## While the paper provides a comparison with existing methods...
**Answer:** The baseline methods we selected follow the papers [44], [45] and [Beier et al., 2023](https://arxiv.org/abs/2112.11964). As the main topic of this paper is to introduce a partial GW formulation that defines a metric between two positive measures in different spaces, we only implemented our method on the classical experiments from related works and compared it with the existing baselines in those papers.
## The paper does not extensively discuss ...
**Answer:**
The authors agree that the choice of $\lambda$ is important, whether in the classical partial OT setting or the partial GW setting. We refer to the **parameter selection** section in the author rebuttal for details. This section will be added to the paper.
## There is limited discussion on the scalability...
**Answer:** In Section O, we provide a wall-clock time comparison for the PGW solvers, with the size of the tested data varying from 100 to 10,000. In Section K, we discuss the convergence rate of the proposed algorithms. We refer to the "computational complexity" section and Equation (TC) in the author rebuttal for the theoretical time complexity. We will add this conclusion as a direct result in Appendix K and the main text.
## The regularization coefficient lambda is referenced in Equations (9), (10), and (14)...
**Answer:** In Algorithm 1, $\lambda$ is incorporated in the gradient computation. See Equations (59) and (60) in Step 1. Additionally, $\lambda$ is incorporated in the line search step; see Equations (65) and (66).
Similarly, in Algorithm 2, $\lambda$ is applied in the gradient computation step (see Equation (61)) and the line search step (see lines 886-887).
## If lambda is indeed utilized in these algorithms...
**Answer:** The selection of $\lambda$ highly depends on the task to which PGW is applied. When full matching for one measure is desired, by Lemma E.2, we should set $\lambda$ to be sufficiently large (i.e., $2\lambda=\max((C^X)^2+(C^Y)^2)$). We refer to the "parameter selection" section in the author rebuttal for details.
Intuitively, $2\lambda$ plays the role of an "upper bound" on transportation cost. Suppose $(|x_1 - x_2|^2 - |y_1 - y_2|^2)^2 > 2\lambda$; in this case, we will not apply the pairing $x_1 \to y_1, x_2 \to y_2$ or $x_2 \to y_1, x_1 \to y_2$.
If we aim to transport less mass and destroy/create more mass, a smaller $\lambda$ is required. If we aim to transport more mass, we need to set a larger $\lambda$.
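As a toy illustration of this threshold rule (a hypothetical helper, not our actual PGW solver), one can check directly whether a pairing's quartic cost stays below $2\lambda$:

```python
import numpy as np

def pair_is_transported(x1, x2, y1, y2, lam):
    """Toy version of the 2*lambda rule: keeping the pairing x1->y1, x2->y2
    is only worthwhile when its quartic GW cost is at most 2*lambda;
    otherwise destroying/creating the mass is cheaper."""
    cost = (np.sum((x1 - x2) ** 2) - np.sum((y1 - y2) ** 2)) ** 2
    return cost <= 2 * lam

x1, x2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])   # |x1 - x2|^2 = 1
y1, y2 = np.array([0.0, 0.0]), np.array([3.0, 0.0])   # |y1 - y2|^2 = 9
# pairing cost: (1 - 9)^2 = 64
assert not pair_is_transported(x1, x2, y1, y2, lam=10.0)  # 64 > 2*10: drop mass
assert pair_is_transported(x1, x2, y1, y2, lam=50.0)      # 64 <= 2*50: transport
```

Raising $\lambda$ enlarges the set of pairings that are worth transporting, matching the intuition above.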
## In practical applications, especially in high-dimensional spaces...
**Answer:** To the best of our knowledge, the computation of GW/MPGW/UGW/PGW depends only on the size of the dataset and is independent of the dimension of the space. The dimension of the space may affect the computation of the cost matrices $C^X$ and $C^Y$; however, theoretically, the computational cost of this step is always $O(n^2 + m^2)$.
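A minimal sketch of this point (illustrative, not our implementation): only the construction of the cost matrix touches the ambient dimension $d$; everything downstream of it sees an $n \times n$ matrix regardless of $d$.

```python
import numpy as np

def cost_matrix(X):
    """Pairwise squared-distance matrix C^X via the Gram trick;
    it has O(n^2) entries no matter the ambient dimension d."""
    sq = np.sum(X ** 2, axis=1)
    return sq[:, None] + sq[None, :] - 2 * X @ X.T

rng = np.random.default_rng(3)
for d in (2, 100, 10_000):            # dimension only affects this one step
    X = rng.normal(size=(50, d))
    C = cost_matrix(X)
    assert C.shape == (50, 50)        # the downstream solver sees only n x n
```

The loop confirms that the solver-facing object keeps the same $n \times n$ shape as $d$ grows; only the one-off matrix construction scales with $d$.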
## Are there potential extensions of the PGW metric that could handle non-metric spaces...
**Answer:** In general, GW/UGW/MPGW/PGW can be defined in a gauged measure space, where the gauge function (symmetric, L2 function) is a generalized version of the metric function. The main idea of GW/MPGW/UGW/PGW is to adapt the similarity of each space to build a measure that describes the similarity between data in two different spaces. We require a structure to describe the similarity for each pair of points in each space. Such a structure can be a metric or a gauge mapping. If such a structure is missing, then it is beyond the scope of the GW method.
---
Rebuttal Comment 1.1:
Comment: Thank you for your detailed response. Since some of my concerns have been addressed, I will maintain my positive score.
---
Reply to Comment 1.1.1:
Title: comment
Comment: The authors thank the reviewer for the comment.
The related results in the "computational complexity" section will be added to the paper. | Summary: This paper proposes a partial Gromov-Wasserstein (PGW) formulation, which relaxes the original constraints present in the Gromov-Wasserstein (GW) formulation. In PGW, the marginal equality constraints of GW are replaced by marginal inequality constraints. Following existing works in the partial GW setting, the paper additionally imposes TV-based marginal regularization in the objective. The paper shows that the proposed PGW approach defines a metric between metric measure spaces. The PGW problem is solved via Frank-Wolfe (FW), and empirical results are shown on shape interpolation in the main paper.
Strengths: - The paper explores the partial mass transportation setting in the GW problem. As discussed around line 149, an existing work [45] has also explored similar concepts in the GW setting. [45], in particular, involves an additional hyperparameter (\rho) for the overall mass of the learned transport plan. The proposed problem (16), as well as [45], employs the FW algorithm.
- Proposition L.1 in the appendix shows that if \gamma is an optimal solution of proposed PGW problem, then \gamma is also an optimal solution of the mass constrained PGW problem (MPGW) of [45] with \rho=|\gamma|.
- It is unclear what the paper implies by "mathematically this equivalence relation is not verified" in line 955. How is the paper defining the term "equivalence", which is used multiple times in Appendix L?
- For a given \lambda=\lambda_0 in PGW, does there exist a \rho=\rho_0 for MPGW such that the sets of first-order critical points for PGW and MPGW are the same?
- The steps of the FW algorithm for proposed PGW and MPGW [45] seem quite similar.
Weaknesses: - The paper is poorly written due to the following reasons:
- The abstract and introduction state that the paper proposes two variants of the FW algorithm for solving the proposed PGW. However, the main paper does not describe two variants of the FW algorithm: only one variant is discussed in Sections 4-5, while the other is described only in Appendix G. Typically, the main paper should be self-contained, having the necessary details of the contributions claimed in the abstract and introduction. It should be noted that going through the supplementary material is at a reviewer's discretion.
- The paper provides very little discussion of its differences with MPGW [45] in the main paper (lines 148-149). While Appendix L contains this discussion, important parts of it should be discussed in the main paper. As mentioned above, going through the supplementary material is at a reviewer's discretion.
- There are multiple grammatical and spelling errors throughout the paper, e.g., "he" -> "the", "con" -> "cost", etc.
- The main paper contains experiments only on synthetic datasets. Tables 2 and 4 in the appendix discuss experiments on real-world datasets. MPGW obtains the same generalization performance as PGW in both tables. This should be discussed in the main paper as well.
Technical Quality: 2
Clarity: 1
Questions for Authors: - In (14), let \beta = minimum entry in cost tensor M. Then, what is the solution of PGW (16) when \beta > 2\lambda ?
- Why does MPGW obtain 0.0813 and 0 mean accuracy in Table (a) while other methods get > 0.89 and > 0.78 scores, respectively? It should be noted that in Tables 2 and 4 in the appendix, MPGW performs on par with PGW.
Confidence: 5
Soundness: 2
Presentation: 1
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ## It is unclear what the paper implies by "mathematically this equivalence relation is not verified." in line 955?
**Answer**:
We thank the reviewer for raising this point. The sentence "mathematically this equivalence relation is not verified" will be removed. The section "relation between PGW and MPGW" in the author rebuttal will be added to the paper to clarify the relation between PGW and MPGW.
## For a given \lambda=\lambda_0 in PGW ....
**Answer**: The answer is no. See Example 1 and the "relation between PGW and MPGW" section in the Author Rebuttal.
## the steps of the FW algorithm...
**Answer**: The PGW and MPGW formulations are related, and thus the FW iteration steps of the two problems are similar. They differ in the gradient computation step and the line search step. The algorithm for PGW requires $\lambda$ in these two steps; the solver for MPGW requires $\rho$ in the gradient computation step.
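For intuition, the shared skeleton described above can be sketched as a generic Frank-Wolfe loop. This is a hedged illustration only, not the paper's solver: `frank_wolfe`, `grad_fn`, and `lmo` are hypothetical names, and the toy objective (projecting onto the probability simplex) stands in for the PGW/MPGW objectives, whose gradient and line-search steps would carry $\lambda$ or $\rho$ respectively.

```python
import numpy as np

def frank_wolfe(grad_fn, lmo, x0, n_iter=200):
    """Generic Frank-Wolfe skeleton: the two solvers discussed above
    share this loop and differ only inside grad_fn (where PGW uses
    lambda) and the oracle/line-search steps (where MPGW uses rho)."""
    x = x0.astype(float).copy()
    for t in range(n_iter):
        g = grad_fn(x)                 # gradient computation step
        s = lmo(g)                     # linear minimization oracle
        step = 2.0 / (t + 2.0)         # classic step size; the paper uses line search
        x = (1 - step) * x + step * s
    return x

# Toy instance: minimize ||x - b||^2 over the probability simplex.
b = np.array([0.7, 0.2, 0.1])
x_star = frank_wolfe(
    grad_fn=lambda x: 2 * (x - b),
    lmo=lambda g: np.eye(len(g))[np.argmin(g)],  # best simplex vertex
    x0=np.ones(3) / 3,
)
```

In the actual PGW/MPGW solvers, the oracle would be a partial-OT linear program over feasible plans and the fixed step size would be replaced by the exact line search.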
## The paper is poorly written due to the following reasons: ...
**Answer**: The authors thank the reviewer for these points.
- We have not included both variants of the FW solver in the main text due to the page limit. However, for clarity, we have modified the text in the abstract and in the introduction:
- Abstract: "We then propose variants of the Frank-Wolfe algorithm for solving the PGW problem."
- Introduction: line 74 "Based on this relation, we propose two (mathematically equivalent) variants of the Frank-Wolfe algorithm for the discrete PGW problem. One is presented in the main text, and the other is discussed in Appendix G." We refer to the sections "relation between the two FW algorithms" and "motivation of algorithm 2" in the Author Rebuttal.
- We refer to the "relation between the two FW algorithms" section in the Author Rebuttal for details. This section will be added to the paper.
## The paper provides very less discussion...
**Answer**: The modified statement of Proposition L.1, which demonstrates the relation between PGW and MPGW, will be added to the main text. Specifically,
$$PGW_\lambda(\mu,\nu)=MPGW_\rho(\mu,\nu)+\lambda(|\mu|^2+|\nu|^2-2|\rho|^2)$$
where $\rho$ is determined by $\lambda$.
## There are multiple grammatical and spelling errors...
**Answer**: The authors thank the reviewer. We have fixed the typos and grammar mistakes.
## The main paper contained experiments...
**Answer**:
- **Datasets**: both synthetic and real datasets are used in this paper:
- In the shape retrieval experiment (Section 5.2), the dataset is given in [51].
- In the point cloud interpolation experiment (Section 5.3), the dataset is from [Github](https://github.com/gpeyre/2016-ICML-gromov-wasserstein).
- In the positive unsupervised learning experiment (Section P), the datasets include [MNIST](https://pytorch.org/vision/stable/generated/torchvision.datasets.MNIST.html), [EMNIST](https://pytorch.org/vision/main/generated/torchvision.datasets.EMNIST.html), Amazon, Webcam, and DSLR (See [60]).
- We refer to the "Positive label unsupervised learning" section in the Author Rebuttal for an explanation of why PGW and MPGW achieve the same performance in this particular task. The discussion will be added to the main text as numerical evidence of Proposition L.1.
## In (14), let \beta = minimum entry in cost tensor M.
**Answer**: Note that $M_{i,j,i',j'}=|C^X_{i,i'}-C^Y_{j,j'}|^2$ for all $i,i'\in[1:n]$, $j,j'\in[1:m]$. Thus $\min M=|C^X_{1,1}-C^Y_{1,1}|^2=|0-0|^2=0$. In this case $\lambda=0$, and the zero measure will be one optimal solution; our algorithm will return $\gamma=0_{n\times m}$. Run the code at [this link](https://anonymous.4open.science/r/PGW_metric-5DCB/example_gw.ipynb) for a numerical example.
## Why does MPGW obtains 0.0813 and 0 mean accuracy in Table (a)...
**Answer**:
- We refer to the "limitations of MPGW" section in the Author Rebuttal to see why MPGW does not perform well in the shape retrieval experiment. In one sentence: when one shape is similar to part of another shape (e.g., a "square" is similar to part of a "house"), the MPGW value will be close to 0.
- For the PU learning experiment (Tables 2 and 4), we refer to the "PU learning" section in the Author Rebuttal for details.
In this experiment, we only need transportation plans from the GW-based methods. In the experimental setting, full matching is required for PGW/MPGW, and based on Proposition L.1 and Lemma E.2, MPGW and PGW yield the same solution sets in this scenario.
---
Rebuttal 2:
Title: Response to authors
Comment: I thank the authors for their rebuttal. Please find my (further) questions below.
1. Regarding global response Statement 1 example 1: Given $\lambda=0$, solve PGW and obtain the solution $\gamma_{PGW}$. Then set $\rho=|\gamma_{PGW}|$. Now, would the first order critical points of MPGW contain $\gamma_{PGW}$?
2. Regarding the authors' response "both synthetic and real datasets are applied in this paper:"
- By the statement "In the shape retrieval experiment (Section 5.2), the dataset is given in [51].", are the authors claiming that this experiment uses a real-world dataset?
- By the statement "In the point cloud interpolation experiment (Section 5.3), the dataset is from Github (link)", are the authors claiming that this experiment uses a real-world dataset?
- Regarding experiments in Section P, I have already acknowledged that they are the only real-world experiments in the whole manuscript.
3. Regarding, "In (14), let \beta = minimum entry in cost tensor M", I agree with the authors that the optimal solution will be a zero measure if $\lambda$ is sufficiently large. However, does the author response contradict their Lemma E.2?
4. In the appendix L.1 example, does $MPGW_\rho(X_i,X_j)$ = 0 for all values of rho? What about $\rho=|\gamma_{PGW}|$?
Overall, I have found the author response underwhelming. Hence, for now, I am reducing my score by one.
---
Rebuttal 3:
Title: answers to the further questions 1,2,3
Comment: We appreciate the reviewer’s engagement with our work and the response to our rebuttal. While we respect the reviewer's perspective, we have concerns that the current evaluation may not fully reflect the scientific and technical merits of our paper. The original review did not identify any theoretical flaws, instead raising minor clarification questions that we have thoroughly addressed. We also recognize the reviewer's comments regarding the paper's organization and the request for additional experiments on real-world data. However, these concerns do not appear to undermine the fundamental theoretical contributions of our work, which we feel may have been overlooked. Furthermore, we would like to highlight that the scope of our numerical experiments is consistent with prior work on Gromov-Wasserstein (GW) and unbalanced GW. We would welcome further discussion rooted in specific scientific reasoning and detailed feedback, which would enhance the constructive nature of this review process.
1. "Regarding global response Statement 1 example 1:..."
**Answer:** Yes.
In this case, if we select the zero measure, which is an optimal solution for PGW, and set $\rho=|\gamma|=0$, the global solution coincides with the first-order critical point of MPGW. In fact, the space $\Gamma_\leq^\rho(\mu,\nu)$ contains only a single element for MPGW.
Otherwise, as explained in the author rebuttal, if we choose $\gamma=\delta_{(x_i,y_j)}$, where $i \in [1:n]$ and $j \in [1:m]$, which is also an optimal solution for $PGW_0(\mathbb{X},\mathbb{Y})$, and we set $\rho=|\gamma|=1$, then $\gamma=\delta_{(x_i,y_j)}$ would be one of the first-order critical points of $MPGW_1(\mathbb{X},\mathbb{Y})$ and a solution to the MPGW problem.
2. "Regarding the authors' response "both synthetic and real datasets are applied in this paper..."
**Answer:** Our statement regarding our use of both synthetic and real-world data remains correct. To clarify:
- In the shape retrieval experiment (Section 5.2), “Dataset I” is derived from [51] and “Dataset II” is entirely synthetic. Dataset I is also synthetic; we do not mean to claim otherwise, only to outline the sources of data in our experiments.
- The data for the point cloud interpolation experiment comes from [41]. This dataset is also synthetic, but nevertheless used in prior GW papers for method evaluation.
- MNIST and EMNIST are real-world yet small-scale problems commonly used for demonstration purposes, and the Amazon, Webcam, and DSLR are clearly real-world datasets.
3. "Regarding, "In (14), let \beta = minimum entry in cost tensor M", I agree with the authors that the optimal solution will be a zero measure if $\lambda$ is sufficiently large. However, does the author response contradict their Lemma E.2?"
**Answer:** No, it does not contradict Lemma E.2.
Here, we point out that the reviewer might have misinterpreted our response or there might have been some confusion. The optimal solution will be a zero measure if $\lambda$ is sufficiently **small** (e.g., $\lambda \leq\min(M)$). When $\lambda$ is sufficiently large, based on our Lemma E.2, $|\gamma|$ will achieve its maximum, that is, $|\gamma|=\min(|\mu|,|\nu|)$.
---
Rebuttal Comment 3.1:
Title: Response to authors #2
Comment: > Answer: Yes.
> In this case, if we select the zero measure, which is an optimal solution for PGW, and set , the global solution coincides with the first-order critical point of MPGW. In fact, the space $\Gamma_{\leq}^{\rho}(\mu,\nu)$ contains only a single element for MPGW.
Then, in the setting of Example 1 of the global response, it does seem that the solution set of PGW and MPGW can be identical - with appropriate value of $\rho$.
> Our statement regarding our use of both synthetic and real-world data remains to be correct. To clarify:
From the author response, I think we are on the same page. In the original review, I had written - "The main paper contained experiments only on synthetic dataset. Tables 2 and 4 in appendix discuss experiments on real-world datasets. MPGW obtains same generalization performance as PGW in both the tables."
> No, it does not contradict with Lemma E.2.
Please excuse my oversight. I had written incorrectly statement 3 in my previous response.
5. The response of one of my questions in my original review was a bit unclear. The question was "In (14), let \beta = minimum entry in cost tensor M. Then, what is the solution of PGW (16) when \beta > 2\lambda ?"
The solution of (16) seems to be zero when \beta > 2\lambda. If this is indeed the case, then the PGW distance between two distributions with marginals p and q, respectively, and with \lambda < \beta/2 will be \lambda(|q|^2 + |p|^2) (from 15) - which is a constant dependent on hyper-parameter and marginals. This restricts PGW's ability to distinguish the source distribution with target distributions of same marginal norm (|q|=|q_1|=|q_2|). No geometric information is being taken into account.
---
Rebuttal 4:
Title: answer to the further questions 4
Comment: 4. In the appendix L.1 example, does $MPGW_\rho(X_i,X_j)=0$ for all values of $\rho$? What about $\rho = |\gamma_{PGW}|?$
**Answer:** Yes, $MPGW_\rho(X_i,X_j)=0$ for all values of $\rho$, including $\rho = |\gamma_{PGW}|$.
Below, we provide the reasoning to clarify this point. In this example:
$$\mathbb{X}^1=(\mathbb{R}^3,\|\cdot\|,\sum_{i=1}^{1000} \alpha \delta_{x_i}),$$
$$\mathbb{X}^2=(\mathbb{R}^3,\|\cdot\|,\sum_{i=1}^{800} \alpha \delta_{x_i}),$$
$$\mathbb{X}^3=(\mathbb{R}^3,\|\cdot\|,\sum_{i=1}^{400} \alpha \delta_{x_i}),$$
where $\alpha=1e-3$, $\lambda=1e+1$, and $x_i \in [0,1]^2$ for all $i$.
Let $\gamma_{1,2}$ be the optimal solution for $PGW_\lambda(X_1,X_2)$, then we have $|\gamma_{1,2}|=0.8$. Similarly, we define $\gamma_{1,3}$ and $\gamma_{2,3}$, where $|\gamma_{1,3}|=0.4$ and $|\gamma_{2,3}|=0.4$. The numerical results are consistent with our Lemma E.2.
We can verify the following:
$$MPGW_{0.8}(\mathbb{X}^1,\mathbb{X}^2)=0, \text{ where } |\gamma_{1,2}|=0.8,$$
$$MPGW_{0.4}(\mathbb{X}^1,\mathbb{X}^3)=0, \text{ where } |\gamma_{1,3}|=0.4,$$
$$MPGW_{0.4}(\mathbb{X}^2,\mathbb{X}^3)=0, \text{ where } |\gamma_{2,3}|=0.4.$$
For the case of arbitrary $\rho > 0$, we may first consider the problem $MPGW_{\rho}(\mathbb{X}^1,\mathbb{X}^2)$. Let $\gamma^*=\sum_{i=1}^{800}\alpha\delta_{(x_i,x_i)}$ denote the transportation plan induced by the Monge mapping $x_i \mapsto x_i$ for all $i \in [1:800]$. Then, $\frac{\rho}{|\gamma^*|}\gamma^*$ will be an optimal solution for $MPGW_{\rho}(\mathbb{X}^1,\mathbb{X}^2)$. Consequently, $MPGW_{\rho}(\mathbb{X}^1,\mathbb{X}^2) = 0$.
We may apply similar reasoning to see that $MPGW_{\rho}(\mathbb{X}^1,\mathbb{X}^3),MPGW_{\rho}(\mathbb{X}^2,\mathbb{X}^3)=0$ for any $\rho\in[0,0.4]$ as well, where 0.4 is the largest value we can choose for $\rho$ in these two MPGW problems.
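The reasoning above can be checked numerically in a scaled-down form. The sketch below is illustrative only (40 of 50 random points instead of 800 of 1000, and all names are our own): since the second space reuses points of the first, the matched pairwise distances coincide, so the identity Monge plan incurs zero GW-type cost.

```python
import numpy as np

# Scaled-down stand-in for the Appendix L.1 example: the second space
# uses the first k points of the first, so their distance matrices coincide.
rng = np.random.default_rng(0)
n, k, alpha = 50, 40, 1e-3
x = rng.random((n, 3))
C = np.linalg.norm(x[:, None] - x[None, :], axis=-1)  # pairwise distances

# Monge plan x_i -> x_i on the shared k points, each pair with mass alpha.
gamma = alpha * np.eye(k)
C1 = C[:k, :k]   # distances within the first space, restricted to matched points
C2 = C[:k, :k]   # distances within the second space: identical, as it is a subset

# GW-type cost: sum_{i,j,i',j'} |C1[i,i'] - C2[j,j']|^2 gamma[i,j] gamma[i',j'].
M = (C1[:, None, :, None] - C2[None, :, None, :]) ** 2
cost = np.einsum('ijkl,ij,kl->', M, gamma, gamma)
print(cost)  # 0.0: the matched pairwise distances cancel exactly
```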
Run [this link](https://anonymous.4open.science/r/PGW_metric-5DCB/example_gw.ipynb) (cell 1 and cell 3) to see a numerical example. Note that, as we set the tolerance of PGW/MPGW algorithms to 1e-5, the values of $MPGW_{0.8}(\mathbb{X}^1,\mathbb{X}^2)$, $MPGW_{0.4}(\mathbb{X}^1,\mathbb{X}^3)$, and $MPGW_{0.4}(\mathbb{X}^2,\mathbb{X}^3)$ are of the order $\mathcal{O}(1e-5)$, rather than exactly 0.
---
Rebuttal 5:
Title: answers to new comments 2
Comment: > Then, in the setting of Example 1 of the global response, it does seem that the solution set of PGW and MPGW can be identical - with appropriate value of $\rho$.
**Answer**: There may have been some confusion here. In Example 1, the **solution sets** of PGW and MPGW are **NOT** identical for any $\rho\in [0, \min{(|\mu|,|\nu|)}]$.
- The solution set of $PGW_0(\mu, \nu)$ contains $\{\delta_{(x_i, y_j)} : i\in[1:n], j\in[1:m]\} \cup \{0\}$.
- When $\rho>0$, the zero measure is a solution for $PGW_0(\mu,\nu)$, but it is not a solution for $MPGW_\rho(\mu,\nu)$.
- When $\rho=0$, $\delta_{(x_i,y_j)}$ is a solution of $PGW_0(\mu,\nu)$, but it is not a solution for $MPGW_0(\mu,\nu)$.
> From the author response, I think we are on the same page. In the original review, I had written - "The main paper contained experiments only on synthetic dataset. Tables 2 and 4 in appendix discuss experiments on real-world datasets. MPGW obtains same generalization performance as PGW in both the tables."
**Answer**: Thank you for the clarification. We would once again like to note that the scope of our numerical experiments is consistent with existing literature on GW and unbalanced GW. If the reviewer is instead concerned with the similar performance of PGW and MPGW in Tables 2 and 4 in the appendix, this is discussed in the global response under the heading "Positive Label Unsupervised Learning", as stated earlier.
> 5. The response of one of my questions in my original review was a bit unclear. The question was "In (14), let \beta = minimum entry in cost tensor M. Then, what is the solution of PGW (16) when \beta > 2\lambda ?"
>
> The solution of (16) seems to be zero when \beta > 2\lambda. If this is indeed the case, then the PGW distance between two distributions with marginals p and q, respectively, and with \lambda < \beta/2 will be \lambda(|q|^2 + |p|^2) (from 15) - which is a constant dependent on hyper-parameter and marginals. This restricts PGW's ability to distinguish the source distribution with target distributions of same marginal norm (|q|=|q_1|=|q_2|). No geometric information is being taken into account.
**Answer**: There still seems to be some confusion here. When $\beta=\min M$ as specified in the reviewer's original comment, this results in $\beta = 0$. To be precise, since $\lambda$ cannot be a negative number, we first correct $2\lambda<\beta=0$ to $2\lambda\leq \beta=0$, i.e., $\lambda = 0$. As a result, $PGW_0(\mu, \nu)$ admits the zero measure as one solution, and the PGW distance will be $\lambda(|q|^2+|p|^2)=0$ since $\lambda=0$.
> This restricts PGW's ability to distinguish the source distribution with target distributions of same marginal norm (|q|=|q_1|=|q_2|).
As stated in Proposition 3.4, PGW is a metric that can distinguish two measures if $\lambda>0$. In this case, $\lambda=0$, and thus PGW cannot distinguish the difference between the source and target distributions. The resulting analysis given by the reviewer simply reflects the fact that $\lambda=0$ is a **bad** choice of hyperparameter when we aim to apply PGW as a metric.
---
Rebuttal Comment 5.1:
Title: Reviewer response
Comment: Regarding
> There may have been some confusion here.
Let me rephrase my question. In this example 1, will the solution of PGW be a first order critical point of MPGW with an appropriate value of $\rho$? In my earlier question "Given $\lambda=0$, solve PGW and obtain the solution $\gamma_{PGW}$. Then set $\rho=|\gamma_{PGW}|$. Now, would the first order critical points of MPGW contain $\gamma_{PGW}$?", the author response was "yes"
> If the reviewer is instead concerned with the similar performance of PGW and MPGW in Tables 2 and 4 in the appendix, this is discussed in the global response under the heading "Positive Label Unsupervised Learning", as stated earlier.
In the global response, it has been stated that PGW has the advantage of being a metric (while MPGW is not). Then, perhaps some experiments where the utility of PGW being a metric comes out - should be performed.
> There still seems to be some confusion here.
I am not sure why $\min M$ should be zero. It would be great if the authors could explain it. My understanding is as follows: $M$ is computed using (14), from the $C^X$ and $C^Y$ matrices. If none of the entries of $C^X$ is equal to an entry of $C^Y$, $\min M$ would not be zero. So $\beta = \min M$ seems to be positive, and for any $0<\lambda < \beta/2$, the PGW distance has a closed form solution $\lambda(|p|^2 + |q|^2)$. So there is no question of setting $\lambda=0$.
---
Rebuttal 6:
Title: answers to new comments 3
Comment: > Let me clarify my question. In Example 1, will the solution of PGW be a first-order critical point of MPGW for an appropriate value of $\rho$?
**Answer**:
- In Example 1, we assert that the **solution sets** of $PGW_0(\mu,\nu)$ and $MPGW_\rho$ are different. We have already explained the distinction between these **sets** in our author rebuttal and in the previous comment.
> In my earlier question, "Given $\lambda=0$, solve PGW and obtain the solution $\gamma_{PGW}$. Then set $\rho=|\gamma_{PGW}|$. Now, would the first-order critical points of MPGW contain $\gamma_{PGW}$?", the author responded "yes."
- The answer to this earlier question remains yes, as we previously explained. This particular $\gamma_{PGW}$ is also a solution for $MPGW_\rho(\mu,\nu)$ since $\rho=|\gamma_{PGW}|$. However, this does not imply that **another** solution of $PGW_0(\mu,\nu)$ will also be a solution for this specific $MPGW_\rho(\mu,\nu)$.
- Therefore, there is no contradiction to our claim "**the solution sets of the two problems are different**", although particular solutions might coincide.
> In the global response, it was stated that PGW has the advantage of being a metric (whereas MPGW is not). Perhaps some experiments should be conducted to demonstrate the utility of PGW as a metric.
**Answer**: In the shape retrieval experiment, we use GW/UGW/MPGW/PGW costs to describe the similarity between two shapes. Since PGW and GW are metrics, they induce better performance than the other two methods, as shown in the table on page 8.
> I am not sure why $\min M$ should be zero....
**Answer**: The definition of $M$ is provided in Eq. (14), and the definitions of $C^X$ and $C^Y$ are given in Eq. (11).
In particular, $(C^X)_{i,i'}=d_X(x_i,x_{i'})$, where $d_X$ is a metric.
Thus, when $i=i'$, $C^X_{i,i'}=0$; that is, the diagonal elements of the matrix $C^X$ are zero. Similarly, the diagonal elements of the matrix $C^Y$ are zero.
Based on the definition of $M$, we have
$$\min M=|C_{1,1}^X-C_{1,1}^Y|^2=|d_X(x_1,x_1)-d_Y(y_1,y_1)|^2=|0-0|^2=0.$$
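This argument is easy to verify numerically. The following sketch (with arbitrary toy point clouds; all sizes and names are our own, purely for illustration) builds $C^X$, $C^Y$, and the tensor $M$ from Eq. (14), and confirms that $\min M = 0$ because the diagonals of both distance matrices vanish:

```python
import numpy as np

rng = np.random.default_rng(1)
X, Y = rng.random((6, 2)), rng.random((4, 2))
CX = np.linalg.norm(X[:, None] - X[None, :], axis=-1)  # d_X(x_i, x_{i'})
CY = np.linalg.norm(Y[:, None] - Y[None, :], axis=-1)  # d_Y(y_j, y_{j'})

# M[i, j, i', j'] = |C^X_{i,i'} - C^Y_{j,j'}|^2, as in Eq. (14).
M = (CX[:, None, :, None] - CY[None, :, None, :]) ** 2
print(M.min())  # 0.0: diagonal entries of C^X and C^Y are both zero
```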
---
Rebuttal Comment 6.1:
Title: Reviewer response
Comment: > However, this does not imply that another solution of $PGW_{0}(\mu,\nu)$ ...
I am not sure what the phrase *another solution of $PGW_{0}(\mu,\nu)$* implies. The solution of PGW was taken to be $\gamma_{PGW}$, without any assumption.
> In the shape retrieval experiment,...
While the author response holds for the synthetic shape retrieval experiments, it would be great to see the practical utility of PGW in applications involving real-world datasets. The appendix sections contained such experiments, where PGW and MPGW had the same generalization performance.
> The definition of $M$ is provided ...
I thank the authors for clarifying my doubts for this question.
---
Rebuttal 7:
Title: answers to new comments 4
Comment: > I am not sure what the phrase another solution of ...
**Answer**:
We have explained this in Author Rebuttal example 1 and in the comment “**answers to new comments 2**”.
When $\gamma_{PGW}$ is the zero measure, we set $\rho=|\gamma_{PGW}|=0$:
- The zero measure is a solution for $PGW_0(\mu,\nu)$ and also for $MPGW_0(\mu,\nu)$.
- Choosing $i\in[1:n]$, $j\in[1:m]$, $\delta_{(x_i,y_j)}$ is **another solution** for $PGW_0(\mu,\nu)$. But $\delta_{(x_i,y_j)}$ is **not** a solution for $MPGW_0(\mu,\nu)$.
---
Rebuttal Comment 7.1:
Title: Response to answers to new comments 4
Comment: > Choose $i\in[1:n],j\in[1:m]$, $\delta_{(x_i,y_j)}$ is another solution for $PGW_0(\mu,\nu)$. But $\delta_{(x_i,y_j)}$ is not a solution for $MPGW_0(\mu,\nu)$.
In this case, we could set $\rho=|\gamma_{PGW}|$ where $\gamma_{PGW}$ is the "another solution". Then $ \gamma_{PGW}$ should be a solution for $MPGW_{|\gamma_{PGW}|}(\mu,\nu)$. Since $\lambda$ and $\rho$ are not comparable hyper-parameters, one should not expect that all PGW solutions corresponding to fixed $\lambda$ would be obtained by a MPGW with a fixed $\rho$ and vice-versa.
---
Reply to Comment 7.1.1:
Title: Answers to new comments 5
Comment: > In this case, we could set $\rho = |\gamma_{PGW}|$ where $\gamma_{PGW}$ is the "another solution". Then $\gamma_{PGW}$ should be a solution for $MPGW_{|\gamma_{PGW}|}(\mu, \nu)$.
This follows immediately from Statement 1 in the global response.
> Since $\lambda$ and $\rho$ are not comparable hyper-parameters, one should not expect that all PGW solutions corresponding to fixed $\lambda$ would be obtained by a MPGW with a fixed $\rho$ and vice-versa.
This statement contradicts the reviewer's original claim (from [**Response to authors \#2**](https://openreview.net/forum?id=nrcFNxF57E&noteId=Mf06ckbj0f)):
>> Then, in the setting of Example 1 of the global response, it does seem that the solution set of PGW and MPGW can be identical - with appropriate value of $\rho$.
As stated previously, the authors hoped to clarify exactly this point; namely, that the solution sets to the two problems are different, although particular solutions can coincide.
---
Rebuttal 8:
Title: Response to Answers to new comments 5
Comment: > This statement contradicts the reviewer's original claim (from Response to authors #2):
Previously, I was wondering if the set of solution can be shown to be same for individual values of hyper-parameters. The authors have shown with example that this is not the case.
However, please note that my previous response was more in the form of a question to the authors (albeit without a question mark) rather than a definite claim about the paper's contribution.
> As stated previously, the authors hoped to clarify exactly this point; namely, that the solution sets to the two problems are different, although particular solutions can coincide.
While this is true for specific hyper-parameter values, practically, one does not run the model on a single hyper-parameter value. A common practice is to run the model on multiple hyper-parameter values (tuning/cross-validation) and choose one (based on some other criterion). Hence, can the solutions of PGW as one tunes over $\lambda$ be obtained by MPGW as one tunes over $\rho$?
I would also increase my score to the original score. | Rebuttal 1:
Rebuttal: # Relation between PGW and MPGW
### Statement 1: The relation between PGW and MPGW can be described as follows (Proposition L.1. will be updated):
- For each $\lambda \ge 0$, there exists $\rho \in [0, \min(|\mu|, |\nu|)]$ such that:
- For each $\gamma \in \Gamma_\leq(\mu, \nu)$ with $|\gamma| = \rho$, $\gamma$ is optimal for $PGW_\lambda(\mu, \nu)$ iff $\gamma$ is optimal for $MPGW_\rho(\mu, \nu)$.
- $$PGW_\lambda(\mu, \nu) = MPGW_\rho(\mu, \nu) + \lambda(|\mu|^2 + |\nu|^2 - \rho^2).$$
- Note, in this case, the sets of solutions for $PGW_\lambda(\mu, \nu)$ and $MPGW_\rho(\mu, \nu)$ are not necessarily identical.
Example 1: Suppose $\lambda = 0$, $\mu = \sum_{i=1}^n \delta_{x_i}, \nu = \sum_{j=1}^m \delta_{y_j}$. Then the set of optimal solutions for $PGW_0(\mu, \nu)$ contains all the elements of the following form:
$$\{\delta_{(x_i, y_j)} \mid i \in [1:n], j \in [1:m]\} \cup \{0\},$$
where $0$ is the zero measure.
- If we set $\rho = 0$, the solution $\delta_{(x_i, y_j)}$ will not be an optimal solution for $MPGW_0(\mu, \nu)$. If we set $\rho = 1$, the solution $0_{m \times n}$ will not be an optimal solution for $MPGW_1(\mu, \nu)$. Thus, the sets of solutions of $PGW_0(\mu, \nu)$ and $MPGW_\rho(\mu, \nu)$ are not consistent for any $\rho \in [0, \min(|\mu|, |\nu|)]$.
### Statement 2: We claim the following:
- There exist $MPGW_\rho(\mu, \nu)$ problems where $\rho \in [0, \min(|\mu|, |\nu|)]$, such that, for each $\lambda > 0$, the solution of $MPGW_\rho(\mu, \nu)$ is not a solution of $PGW_\lambda(\mu, \nu)$.
Example 2: Suppose $\mu = \sum_{i=1}^{10} \delta_{x_i}$, $\nu = \sum_{i=1}^{5} \delta_{x_i}$, $\rho = 2$. For each $\lambda > 0$, choosing an optimal $\gamma$ for $PGW_\lambda(\mu, \nu)$, we have $|\gamma| = 5 > 2$. Thus, the solution of $MPGW_\rho(\mu, \nu)$ is not a solution for $PGW_\lambda(\mu, \nu)$, and we cannot build an equivalence relation between the two problems.
# Positive Label Unsupervised Learning:
Table 2 and Table 4 are part of the positive unsupervised learning section. In this experiment, we require transporting all the mass from positive samples to the target domain. In this case, by Proposition L.1. and Lemma E.2, PGW and MPGW admit the same set of solutions. As numerical evidence, we obtain the same performance from the two methods. We will add this discussion to the paper as a numerical verification of Proposition L.1. However, the PGW method has the extra advantage of giving rise to a metric, which is the main topic of this paper.
# Limitation of MPGW:
Example: **We refer to the example in Appendix L.1 for understanding the limitation of MPGW in this experiment:**
$$MPGW\left(\sum_{i=1}^{400} \delta_{x_i}, \sum_{i=1}^{800} \delta_{x_i}\right) = 0,$$
although these two measures (datasets) are distinct.
## Performance of MPGW in Shape Retrieval Experiment:
**About the MPGW in Tables (a) and (b) in Section 5.3:** In this experiment, we use MPGW/UGW/PGW to measure similarity between different shapes. The reason MPGW exhibits poor performance is that whenever shape 1 is similar to a part of shape 2 (e.g., a "square" is similar to part of a "house"), MPGW will return an approximate value of 0. The classifier relies on the similarity measure, leading to poor performance.
# Relation between the Two FW Algorithms in PGW:
The following will be added to the paper to clarify the equivalence relation between the proposed two algorithms:
- In the gradient computation step, the gradient of Algorithm 2 $\hat{M} \circ \hat{\gamma}$ can be written in terms of $\tilde{M} \circ \gamma$, which is the gradient of Algorithm 1, where $\tilde{M} = \hat{M}[1:n, 1:m, 1:n, 1:m]$ and $\gamma = \hat{\gamma}[1:n, 1:m]$ (See Lemma H.2 and Lemma H.3).
- The $a, b$ values in the line search step in Algorithms 1 and 2 are the same based on (68).
## Motivation of Algorithm 2:
Mathematically and numerically, the two algorithms are equivalent, as discussed in the paper (Appendix G). The reason for proposing Algorithm 2 is to provide a numerical implementation of Proposition G.1, which demonstrates that McCann's conclusion [12] can be extended to the GW setting, thus establishing an equivalence relation between PGW and GW.
# Parameter Selection for PGW
1. **Shape retrieval experiment, shape interpolation, PU learning experiment:** We require full matching for the measure that has less mass than the other. In this case, we only require $\lambda$ to be sufficiently large (see Lemma E.2).
2. **When the data contains noise/outliers (e.g., shape matching experiment):** If we assume that the distance between outliers and clean data is large, a suitable $2\lambda$ should be less than this distance and greater than the pairwise distance within the clean data.
# Computational Complexity
The time complexity to obtain an $\epsilon$-accurate solution for Algorithm 1 is
$$
\mathcal{O}\left(\frac{\max^2\left(2L_1,\; n^2\min(|\mu|,|\nu|)\cdot\max\{2(C^X)^2+2(C^Y)^2,\,2\lambda\}\right)}{\epsilon^2} \cdot n^3\right), \tag{TC}
$$
where $L_1$ depends on the initial guess $\gamma^{(1)}$.
Note that the first term is an upper bound on the number of iterations. The $n^2$ in this term comes from the upper bound on the Lipschitz constant of the gradient with respect to $\gamma$ in the PGW problem, as follows from Theorem 1 in [38]. In the Frank-Wolfe algorithms for GW/MPGW, this term also appears in the convergence rate. In practice, we generally set the number of iterations to a fixed number (the default value is 1000, following [44] and [PythonOT](https://pythonot.github.io/)), and in all of our experiments this number is not reached.
The second term $n^3$ can be improved to $1/\epsilon \ln(n)n^2$ if the Sinkhorn algorithm is applied. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
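To make the Sinkhorn remark concrete, a minimal entropic-regularization sketch (generic textbook iterations, not the solver used in the paper; the names and toy values are our own) replaces the exact linear step with alternating marginal rescalings:

```python
import numpy as np

def sinkhorn(C, p, q, reg=1.0, n_iter=200):
    """Classic entropic-OT Sinkhorn iterations: alternately rescale the
    rows and columns of K = exp(-C/reg) until the plan's marginals
    match p and q."""
    K = np.exp(-C / reg)
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)   # match column marginals
        u = p / (K @ v)     # match row marginals
    return u[:, None] * K * v[None, :]  # transport plan

# Toy 3-point problem with |i - j| ground cost and uniform marginals.
C = np.abs(np.arange(3)[:, None] - np.arange(3)[None, :]).astype(float)
p = q = np.ones(3) / 3
plan = sinkhorn(C, p, q)
```

Each iteration costs $\mathcal{O}(n^2)$ matrix-vector work, which is the source of the improved per-step complexity mentioned above.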
Wings: Learning Multimodal LLMs without Text-only Forgetting | Accept (poster) | Summary: This paper introduces the text-only forgetting phenomenon, where multimodal large language models (MLLMs) experience a significant decline in performance on text-only evaluations. The authors claim that this phenomenon is related to the attention shift of cross-layer MLLM-LAWS (Layer-level Attention Weights) before and after processing images. To address this, the authors propose WINGS, which utilizes visual and textual learners to compensate for the attention shift. Experiments across text-only, VQA, and IIT benchmarks demonstrate the effectiveness of WINGS.
Strengths: 1. The observation that text-only forgetting is related to MLLM-LAWS is intriguing.
2. WINGS shows significant improvements in text-only QA and multimodal QA tasks.
3. The paper is well-organized and easy to follow.
Weaknesses: 1. Although the paper provides a valuable observation about the correlation between text-only forgetting and MLLM-LAWS in Figure 2, it lacks a deep discussion of the underlying reasons for this phenomenon. The MLLM-LAWS correlation measures the attention states before and after the image, but in the text-only evaluation, no image is involved. Thus, it remains unclear why text-only forgetting is related to the MLLM-LAWS correlation.
2. Figure 2 shows that text-only performance is correlated with MLLM-LAWS. However, it would be interesting to know if the multimodal performance is also correlated with MLLM-LAWS. If so, the correlation score could be a valuable metric for MLLM model selection or evaluation.
3. Table 1 shows that LoRA efficiently mitigates the text-only forgetting problem but degrades multimodal performance. The authors do not provide any explanation for this observation.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What are the 100 diverse MLLMs visualized in Figure 2(c)? Are these models sampled from the same type of MLLMs? Does the correlation occur in MLLMs trained on interleaved image-text datasets?
2. Is the relationship between text-only performance and MLLM-LAWS consistent across various scales of LLM backbones?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The major limitation of this paper is the lack of a deep understanding of the phenomenon shown in Figure 2. While the paper identifies a correlation between text-only forgetting and MLLM-LAWS, it does not explore the underlying mechanisms behind this relationship.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer LUvX,
We sincerely thank Reviewer LUvX for the keen observations and suggestions, as well as the recognition of the attention shift and Wings effectiveness, and the overall flow of the writing. We will update all modifications in the final version. Thank you.
* **Q1:** "MLLM-Laws w/ image *v.s.* text-only evaluation w/o image, why text-only forgetting is related to MLLM-Laws correlation"
* **A1:** In Figure 2 of the main paper, the attention shift phenomenon guided by MLLM-Laws is shown to be strongly correlated with text-only forgetting, **following are the two main factors:**
* The attention shift in MLLM occurs because the model's LLM main branch **excessively focuses on image features**, causing attention to deviate toward the images (especially the post-image part, line 138). **This "over-focus" prevents effective information gathering in text-only scenarios**, leading to potential attention dispersion. We will discuss this in detail in the final version.
* Additionally, we have conducted supplementary experiments to support this hypothesis.
1. **In Figure 1 of the supplement PDF**, our analysis shows that a reduced attention shift results in more accurate and focused text-only attention distributions.
2. **In Table 1 of the PDF**, we propose a new metric for the variance of inter-word probabilities in text-only tasks, linking attention shift (MLLM-Laws induced) and text-only performance. **We find that attention shift can diminish MLLM's information aggregation capabilities in text-only data**; further experimental results will be published in the final version.
Furthermore, as described around line 231 in Figure 4(b) of the paper, a key step in compensating for attention shift is aligning the Learner with the main LLM branch during the initial training stage. We conducted additional experiments:
* **In Table 2 of the PDF**, we compared the Learners' attention weights after the first training stage with **the main branch's scale and the routing allocation weights after the second stage**. The Router's distribution weights indicate effective compensation for attention by the Learner structure.
* **In Table 3 of PDF**, comparing the training outcomes of Wings with baseline methods, we observed that Wings effectively mitigates attention shift.
* Finally, **In Figure 1 of PDF**, further analysis of the attention distribution in cases within Wings shows reduced shifts due to dominant segments.
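As a rough illustration of what such a correlation measurement looks like, the sketch below computes the Pearson correlation between two layer-wise attention series (a positive value would indicate aligned pre-image/post-image attention, a negative one an attention shift). The layer-wise values here are made-up numbers, and the precise MLLM-Laws definition is the one in the main paper:

```python
import numpy as np

def laws_correlation(pre_image_attn, post_image_attn):
    """Pearson correlation between two layer-wise attention-weight series.

    Each input holds one value per transformer layer, e.g. the average
    attention that tokens in that span receive at that layer.
    """
    pre = np.asarray(pre_image_attn, dtype=float)
    post = np.asarray(post_image_attn, dtype=float)
    return float(np.corrcoef(pre, post)[0, 1])

# Toy layer-wise series (hypothetical numbers, not measured from any model):
aligned = laws_correlation([0.2, 0.3, 0.4, 0.5], [0.21, 0.29, 0.41, 0.52])
shifted = laws_correlation([0.2, 0.3, 0.4, 0.5], [0.50, 0.42, 0.30, 0.19])
print(aligned > 0, shifted < 0)  # True True
```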
* **Q2:** "more analysis of attention shift *v.s.* multimodal performance"
* **A2:** **In Table 11 of the PDF**, we found only a weak correlation with multimodal performance: it appears, for instance, on MMMU-VAL and ScienceQA, but is not significant on MME.
Following LEEP and LogME's benchmark, **we developed a MLLM library for task-level model selection**, achieving improvements over random selection. Establishing a connection between attention shift and the generalized lower bound is essential for this process. Further analysis and results will be included in the final version.
* **Q3:** "more explanation for LoRA observation"
* **A3: LoRA reduces forgetting in text-only tasks but develops less multimodal capability.** Early LoRA studies often focused on simpler models and tasks, whereas multimodal tasks typically show high-rank weight variations. We demonstrate this **in Table 1 of the main paper under the same training data and network parameters.**
To verify that full parameter fine-tuning in multimodal scenarios **is not a low-rank perturbation, in Table 4 of the PDF**, we sampled weight transformations and performed singular value decomposition, **revealing significant changes in the weight matrix spectra from full fine-tuning**. It's important to note that our findings pertain to full fine-tuning, while Wings uses an independent module that interacts with cross-modal inputs. Therefore, the ranks identified do not directly indicate the rank needed in the Wings learner's LoRRA.
[1] LoRA Learns Less and Forgets Less.
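The singular-value check described above (decompose sampled weight deltas and inspect the spectrum) could be sketched roughly as follows. The matrices here are random stand-ins, not the actual fine-tuned weights, and the energy-based rank measure is one common choice of metric, not necessarily the one used in the paper:

```python
import torch

def spectral_energy_rank(delta_w: torch.Tensor, energy: float = 0.9) -> int:
    """Smallest number of singular values whose squared sum captures `energy`
    of the total spectral energy; a small value suggests a near-low-rank update."""
    s = torch.linalg.svdvals(delta_w)
    cum = torch.cumsum(s ** 2, dim=0) / torch.sum(s ** 2)
    return int(torch.searchsorted(cum, torch.tensor(energy)).item()) + 1

torch.manual_seed(0)
low_rank_delta = torch.randn(256, 8) @ torch.randn(8, 256)  # rank-8 (LoRA-like) update
full_rank_delta = torch.randn(256, 256)                     # dense (full fine-tuning-like) update
print(spectral_energy_rank(low_rank_delta))   # small: at most 8
print(spectral_energy_rank(full_rank_delta))  # large: a sizable fraction of 256
```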
* **Q4:** "details of 100 MLLMs in Figure 2"
* **A4:** In our training data, we adopted **various ratios of multimodal to text-only samples**: `25:1`, `20:1`, `10:1`, `5:1`, `2:1`, `1:1`, `1:2`, ..., `1:25`, along with an `all:0` (12 combinations, ensuring a sufficient amount of multimodal samples). We used a learning rate of 1e-3 for the first stage and 2e-6 or 1e-5 for the second. **We sampled 5 models per epoch**, excluding 12 failed models due to issues like gradient explosion, resulting in 108 models for analysis.
Although these models share the same architecture (Qwen1.5 7B + SigLIP), performance varies significantly between text-only and multimodal scenarios. **The results reflect different training epochs, hyperparameters, and data, revealing potential correlations.** Nonetheless, they are classic and general, with a sufficient sample size.
* **Q5:** "different scales of MLLMs for attention shifts"
* **A5:** Generally, scaling laws suggest that model patterns can be transferred across different scales. **We conducted experiments using the 1.8B Qwen-Chat backbone**, training models with varying ratios of **text-only instruction data to multimodal data**: `10:1`, `2:1`, `1:1`, `1:2`, and `1:10`, alongside fully multimodal data for a total of six variations.
In the first training stage, we set the learning rate to 1e-3, adjusting it to 2e-6 in the second stage. We sampled 5 models per epoch, discarding 4 due to issues like gradient explosion, **resulting in 26 distinct models**. All models started with the same initialization and were trained on 4 A100 GPUs.
The results **in Table 12 of PDF** demonstrated that MLLM-Laws and text-only correlations were also evident in smaller-scale models.
Please let us know if you have any further suggestions or questions! Thank you!
Best regards,
All Authors.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. The authors have addressed most of my concerns. I'd like to raise my rating to Weak Accept.
---
Reply to Comment 1.1.1:
Comment: Thank you for your updated rating! If you have any further questions or comments, please feel free to reach out. | Summary: Multimodal large language models (MLLMs) are initialized with a trained vision encoder and LLM, then fine-tuned on multimodal mixed inputs. In this process, the LLM catastrophically forgets text-only instructions. This paper first reveals that text-only forgetting is related to the attention shifts from pre-image to post-image texts. Based on this analysis, the authors proposed Wings, the complementary visual and textual learners. The experiment results show that Wings outperforms similar-scaled MLLMs in both text-only and visual question-answering tasks.
Strengths: - The analysis of attention shifts is novel and interesting.
- The experiment results demonstrate the effectiveness of the proposed method in both text-only and multimodal benchmarks.
Weaknesses: - A discrepancy seems to exist between the motivation/analysis and the method. There is a lack of detailed explanation of how the Wings module improves the correlation between pre-image and post-image attentions.
- The proposed Low-Rank Residual Attention (LoRRA) module appears to be a variant of LoRA, but there is no detailed motivation or comparative analysis provided.
- The ablation study is insufficient. Ablation experiments are only compared on the Interleaved Image and Text (IIT) benchmark. It would be better to include comparisons across various benchmarks used in the main results as well.
- A simple baseline to prevent forgetting is to utilize text-only data during training, yet there are no comparisons against it.
Technical Quality: 2
Clarity: 2
Questions for Authors: - The most intuitive and direct approach to increase the attention correlation can be simply adding correlation loss between the attentions. What would the results be if we applied the correlation loss directly?
- The proposed method looks somewhat similar to P-LoRA [1]. How does it compare with this?
- For ablation studies (Figure 5), which model is used between Wings-base and Wings-pro?
[1] Dong, Xiaoyi, et al. "Internlm-xcomposer2: Mastering free-form text-image composition and comprehension in vision-language large model." arXiv preprint arXiv:2401.16420 (2024).
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The paper addresses limitations in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer TqCY,
We sincerely appreciate reviewer TqCY for acknowledging our analysis of attention shift and the effectiveness of Wings. All updates will be included in the final version. Thank you!
* **Q1:** "how the Wings module improves the correlation"
* **A1:**
* **From a structural perspective:** the Wings module operates **independently of the attention block**, with its outputs combined through weighted addition with the main branch LLM's attention to compensate for attention weight distribution.
* **In terms of the forward process:** the Wings module uses image features as keys and values, while the query derives from previous hidden states. This allows it to **focus more on the attention between the image and the pre-image and post-image**, compensating for the main branch LLM's attention weights.
* **From an optimization standpoint:** A key step in the Wings module's compensation for attention shift is aligning with the main branch **in the initial stage**. As shown in Figure 4(b) of the main paper, learners **first allocate attention between the pre-image, post-image, and image in the first stage**. Following this, the main branch LLM completes alignment under the "guidance" of the Wings module in the second stage, enhancing attention relevance for pre-images and post-images.
In the experiment described **in Figure 5 (C)**, we observe that compensation with the visual wings improves performance for both text-only and multimodal cases. We also conducted **additional experiments**:
* We hypothesize that models with significant pre-image and post-image attention shifts may have **a more dispersed** attention distribution for text-only tasks, with relevant analysis provided **in Table 1 of the supplemental PDF**.
* **In Table 2 of the PDF**, we compare the allocation weights of the Wings module and the main branch of the two stages. The weights from the router indicate that the Wings module **effectively compensates for pre-image and post-image attention** in the first stage.
* Additionally, **Figure 1 in the PDF** shows that the attention distribution **over words varies** with different models exhibiting attention shifts.
* **Q2:** "detailed comparative analysis between LoRRA and LoRA" **and "compare to P-LoRA"**
* **A2:** While the Low-Rank Residual Attention (LoRRA) structure also utilizes low-rank mappings, it fundamentally differs from Low-Rank Adaptation (LoRA) in key aspects:
1. **Motivation Design**: LoRRA functions **independently as an auxiliary module** to compensate for attention shifts in the main branch, while **LoRA** is **an efficient parameter tuning method** for adapting pre-trained models to target tasks with minimal training overhead.
2. **Structural Position and Training Strategy**: LoRRA operates as a separate module **that aligns prior to** attention distribution, whereas **LoRA and P-LoRA** are integrated within the attention module, **parallel to the internal attention linear mapping**, allowing the main branch mapping to remain unchanged while learning low-rank decomposition matrices. P-LoRA processes only forward image features on the low-rank mapping.
3. **Effectiveness**: Table 1 in our paper compares LoRA and Wings **under identical training data and base model architecture**. In multimodal scenarios, LoRA retains text-only content better but demonstrates weaker multimodal capabilities. This suggests its limitations in handling the complexity of multimodal tasks, as low-rank perturbations may be insufficient [1]. Results for P-LoRA with InternLM-XComposer2 7B can be found in Table 12 of the PDF.
[1] LoRA Learns Less and Forgets Less.
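The structural distinction drawn above (an independent learner whose queries come from the hidden states and whose keys/values come from image features, all through low-rank projections, with its output later added to the main attention) might be sketched roughly as below. This is a hypothetical illustration of that design pattern, not the paper's actual LoRRA implementation; the class name, rank, and dimensions are made up:

```python
import torch
import torch.nn as nn

class LowRankAttnLearner(nn.Module):
    """Sketch of a low-rank residual attention learner: queries from the hidden
    states, keys/values from (e.g. image) features, via low-rank projections.
    Hypothetical illustration, not the paper's exact LoRRA."""
    def __init__(self, dim: int, rank: int = 16):
        super().__init__()
        # Each projection is a rank-limited factorization: dim -> rank -> dim.
        self.q = nn.Sequential(nn.Linear(dim, rank, bias=False), nn.Linear(rank, dim, bias=False))
        self.k = nn.Sequential(nn.Linear(dim, rank, bias=False), nn.Linear(rank, dim, bias=False))
        self.v = nn.Sequential(nn.Linear(dim, rank, bias=False), nn.Linear(rank, dim, bias=False))

    def forward(self, hidden: torch.Tensor, features: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(hidden), self.k(features), self.v(features)
        attn = torch.softmax(q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5, dim=-1)
        return attn @ v  # later combined (router-weighted) with the main attention output

x = torch.randn(2, 5, 64)    # hidden states [batch, seq, dim]
img = torch.randn(2, 9, 64)  # image features [batch, img_tokens, dim]
out = LowRankAttnLearner(64)(x, img)
print(out.shape)  # torch.Size([2, 5, 64])
```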
* **Q3:** "ablation comparisons across various benchmarks in the main results"
* **A3:**
* **Table 1 in the main paper** presents a comprehensive benchmark result from **16 text-only datasets** across **5 major domains**, including mathematics and coding, along with multimodal instructions (see Table 2, line 242). All methods in Table 1 used **the same training data and model architecture**, **serving as an ablation study**. The Wings architecture shows significant improvements over the baseline and LoRA with equal parameters and consistent training data.
* We added ablation experiments. **In Table 5 of the supplement PDF**, we compare ablation results across various benchmarks from the main results. Similar to IIT Bench, we found that **a lower learning rate or the inclusion of visual Wings** can maintain strong performance for text-only tasks (e.g., MMLU, WinoGrande).
* **Q4:** "the baseline simply utilizing text-only (training) data"
* **A4:**
1. Incorporating text-only training data during the continued training of MLLMs **is a common way to help prevent forgetting**, although it increases labeling and training costs. For instance, DeepSeek-VL used over 50% text-only data in its multimodal training.
2. The large parameter counts of LLMs complicate regularization and knowledge distillation, **making them less feasible**. Additionally, heightened expert supervision can be costly compared to collecting labeled text-only data.
3. While the MLLM community is evolving, some methods **lack open training data and may not align with publishers' hyperparameter settings**. Nonetheless, we've tested and compared most MLLMs in Tables 1 and 2 of the main paper.
4. In Figure 5 of the main paper, we introduced the Interleaved Image and Text (IIT) Bench for evaluations in continuous multimodal environments.
* **Q5:** "add attention correlation loss".
* **A5:** We attempted to impose KL divergence as a constraint, but this **made the training process unstable**. This is due to correlation **being a statistical phenomenon**, making it hard to control the scale of each sample. Results can be found in Table 12 (line 5) of the PDF.
* **A6:** For ablation studies (in Figure 5 of the main paper), we used **Wings_base**.
Best regards,
All Authors.
---
Rebuttal Comment 1.1:
Title: Additional questions
Comment: First of all, thank you for your thorough response. After reading the response and the paper again, some of my questions were resolved, but I have a few more that I would like to ask:
- Table 2 of PDF
- Could you provide a detailed explanation of how the values in Table 2 were calculated? The caption mentions “distribution of weights between the learner and the main LLM branch,” but what specific weights are being referred to here? According to Eq (5), the router does not seem to assign weights to the LLM and each learner. How does this relate to the explanation?
- Additionally, how do these weights in Table 2 indicate that the Wings module mitigates attention shifts?
- Role of the Router
- What is the role of the router? According to Eq (5), the router does not control the weights between the LLM and visual/textual learners but rather adjusts the internal weights of the outputs of visual/textual learners. Why is this approach necessary, and what is the motivation behind it?
- Additional discussion
- Wings introduce independent visual learners to prevent the LLM from over-relying on visual features. While this is explicit in terms of the model’s structure, it implicitly addresses the problem of over-dependence on visual features. In other words, even with this approach, there could still be an overall model tendency to depend too much on visual features. Could you discuss your thoughts on this?
- Additionally, while the paper focused on attention shift and over-focusing on visual features, there might also exist a knowledge forgetting problem in the LLM. What are your thoughts on this?
---
Rebuttal 2:
Comment: Thank you for your detailed response!
* **Part 1:** Table 2 of PDF
* How to calculate:
* In the first row of Table 2 in PDF, since the router is not introduced in the first stage, we calculate **the ratio of attention weights** between the visual learner and the LLM attention block **to reflect the importance weights.**
Specifically, we extract the above two sets of **attention weights** that image tokens receive (the shape is [sequence length x image token length]). We first average across the sequence dimension. Then, we calculate the ratio for each image token and average these ratios to obtain the results.
We sincerely apologize for the confusion. In our response to reviewer LUvX, we referred to "attention weights" instead of "weights."
* In the second row of Table 2, we show **the ratio between router weights** on the visual learner and the LLM attention block (set to 1) on image tokens.
As stated in Eq (5) in the main paper, the output of the LLM attention block (**with weights treated as 1**) is added to the output of the visual learner (with weights determined by the router), *e.g.*, `0.5681:1=0.3623:0.6377`.
* The explanations provided by Table 2 are:
1. **The first row** indicates that in the first stage, the visual learner **captures most of the attention** on image tokens due to the tuning strategy of Wings.
2. **The second row** shows that in the second stage, the router **assigns weights to the visual learner's outputs**, constraining its attention (as noted in Eq. (5) of the main paper). The weight ratio indicates the router prioritizes the main branch LLM's attention outputs, but it does not imply a mitigation of attention shift.
3. **In Table 3 of the PDF**, we calculate the number of samples where 'the correlation between MLLM-Laws Before Image and After Image' is positive, indicating **a smaller attention shift**. The Wings module mitigates attention shifts with more positive samples compared to LLaVA.
We appreciate your insights and hope those clarify the meaning of Table 2. **We sincerely apologize for any confusion.**
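The ratio computation described for the first row of Table 2 (average the attention that image tokens receive across the sequence dimension, take the per-image-token ratio between the two components, then average the ratios) could be sketched as below, with random stand-in tensors in place of the extracted attention weights:

```python
import torch

def importance_ratio(learner_attn: torch.Tensor, llm_attn: torch.Tensor) -> float:
    """Ratio of the attention that image tokens receive from the learner vs. the
    LLM attention block. Both inputs: [sequence_length x image_token_length]."""
    learner_per_token = learner_attn.mean(dim=0)  # average over the sequence dim
    llm_per_token = llm_attn.mean(dim=0)
    return float((learner_per_token / llm_per_token).mean())  # average per-token ratios

torch.manual_seed(0)
seq_len, n_img = 12, 4
llm_attn = torch.rand(seq_len, n_img) + 0.1  # stand-in attention weights
learner_attn = 0.5 * llm_attn                # learner attends half as strongly here
print(round(importance_ratio(learner_attn, llm_attn), 4))  # 0.5
```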
* **Part 2:** Role of the Router
* **The role of the router:** The router's weight $r_a$ is multiplied with the outputs of the visual/textual learners and then added to the outputs of the LLM's attention block (as the Eq. (5)). This effectively establishes a routing ratio of $r_a$ : $1$ between the two components. The router can **adjust the compensation scale** of the visual/textual learners **for each token**.
* The router's motivation stems from the observation that **different tokens** demand **different levels of visual attention compensation [1]**. For example, when inquiring about spatial relationships, more visual cues are needed. In contrast, asking about the capital of the United States does not require visual information. Therefore, we need to customize the compensation of visual and textual attention for each token.
**[1]** Are We on the Right Way for Evaluating Large Vision-Language Models?
* We test the ratio between **router weights** on the visual learner and the LLM attention block in the perception and reasoning categories of MME data. The results show that in the perception category, which involves more image descriptions, the router weight **assigned to visual learners is higher**, and the image attention compensation is greater.
| Model | MME perception | MME reasoning |
|-------|----------------|---------------|
| Wings | 0.273:1 | 0.240:1 |
In summary, the router dynamically allocates the weights for visual/textual learners to compensate for attention to each token.
The additional discussion will be presented in the following comments. Thank you!
Title: Response to the Additional Questions - 1
---
Rebuttal 3:
Comment: * **Part 3.1:** Additional discussion -- Is the overall Wings over-relying on visual feature
* The overall Wings may not rely too much on visual features because the attention of **the textual learner** is focused on the text-only part. When there is too much reliance on visual features, the router will **assign a greater weight to the textual learner**. Wings essentially processes part of the visual feature attention through an "independent" module, learning how to allocate this along with the LLM attention block.
* Wings may have some limitations; for example, due to **the scarcity of spatial relationship instructions** in our training set at that time, the overall Wings might still show over-reliance issues on rare spatial information. We are continuously improving this and will present it in the final version.
* It seems there may be an interest in the concept of modality expansion: to prevent the LLM from over-relying on visual features, we propose Wings of LLM; likewise, addressing **over-reliance on audio features** could pave the way for visual, textual, and audio Wings of LLM. That is a general and feasible direction for exploration.
* **Part 3.2:** Additional discussion -- Knowledge forgetting
* Knowledge forgetting **can be a part of** text-only forgetting. Wings also exhibits forgetting in specific areas such as knowledge forgetting, e.g., an approximately 3% decline in the `logical_fallacies` category of the MMLU dataset. This may be related to **easier-to-forget** reasoning knowledge.
* We also attempt to observe the relationship **between various metrics and forgetting rates** across different domains. For example, in the MathQA dataset, there is a correlation of approximately 0.629 (across five models) between attention shift and forgetting rates.
* Knowledge forgetting may relate to the **Incremental Learning**. Wings can add appropriate lightweight structures to achieve a better trade-off in the model's **stability-plasticity dilemma [2]**.
**[2]** DER: Dynamically Expandable Representation for Class Incremental Learning
Thank you very much for your response. We sincerely look forward to your feedback. We are committed to continually improving Wings. Thank you once again!
Title: Response to the Additional Questions - 2
---
Rebuttal Comment 3.1:
Comment: We submitted our reply about eight hours ago, but it seems the system did not send a notification. When you have a moment, could you please take a look at our response? Thank you!
---
Rebuttal 4:
Comment: Thank you for your response once again. However, I still have a few confusing points:
- First, for clarity, let the router's output weights be $\mathbf w$, then, $\mathbf w \in \mathbb R^{s \times s}$ and it multiplies with learners' output of $\mathbb R^{s \times d}$. Each row is the output of a softmax, ensuring $\sum_j \mathbf w_{ij}=1$. Is this correct?
- Apart from this question, it would be beneficial to provide a clearer description of the router operation in the paper.
- If so, it seems that the router’s role is to aggregate learner outputs on a token-wise basis. If the motivation is token-level scaling, defining $\mathbf{w}$ as a column vector ($\mathbf{w} \in \mathbb{R}^{1 \times s}$) might be more reasonable.
- Additionally, I’m unsure whether the router’s design aligns with its intended motivation. To achieve token-wise compensation scaling, the router should adjust the balance between learners (i.e., assigning different weights to each learner) rather than focusing on intra-learner aggregation.
If I have misunderstood anything, please let me know.
---
Rebuttal 5:
Comment: Sorry, there may have been some misunderstandings. Let's clarify everything comprehensively:
* **A1:** **Both the router's output and 'softmax'** are misunderstood.
$\mathbf{w}$ is **not** in $\mathbb{R}^{s \times s}$. The router only activates the first (sequence_length) columns of the single-layer MLP, meaning it maps **from the sequence dimension to one dimension**. Therefore, the router's output weights satisfy $\mathbf{w} \in \mathbb{R}^{1 \times s}$. There are **two** such single-layer MLPs, **one for the visual learner and one for the textual learner.**
We denote the maximum value of the sequence length as $S$, the weight of the MLP (router) as $\mathbf{W} \in \mathbb{R}^{1 \times S}$, and the attention weights as $\mathbf{a}$. **We have**
$$
\mathbf{w} = \mathbf{W}[:, :s] \cdot \mathbf{a}^{\top}.
$$
Thus, $\mathbf{w} \in \mathbb{R}^{1 \times s}$. We guess that the misunderstanding comes from **the 'softmax' in line 193** of the main paper.
Exactly, the softmax is applied **between the weights of the visual and textual learners**, **not** along the sequence_length dim. *E.g.*, if we **concatenate the $\mathbf{w}$ of the visual and textual learners into a matrix** $\hat{\mathbf{w}} \in \mathbb{R}^{2 \times s}$, we have $\sum_i \hat{\mathbf{w}}_{i j}=1$ for each token $j$. As the visual/textual ratio in the first stage is 1:0, in the second stage the 'softmax' ensures that the weights of both **sum to 1 for each token (like 1+0=1 in the first stage)**. We also mentioned related content in Reviewer GnKS's **A4**.
* **A2:** After clarifying the misunderstanding from the previous question, this question **aligns with what you mentioned**; the router's output weights **are actually** $\in \mathbb{R}^{1 \times s}$.
* **A3:** After clarifying the previous misunderstanding, it can be observed that the balance between the visual and textual learners is achieved through the router's 'softmax' (at the token level). This ensures that **the weights for each token sum to 1, so their gradients are interconnected**. Furthermore, the router weights differ and are correlated across the visual and textual learners, varying for each token while **preserving a balance between the visual and textual learners.**
**We sincerely apologize** for the misunderstanding. **The motivations you understand and the router structure you find reasonable are indeed what we designed in Wings.** We apologize again for our insufficient expression. **We will make sure to provide detailed explanations in the final version.** We’re truly sorry for the inconvenience this has caused you.
Thank you very much! If you have any more questions, please feel free to let us know.
**Thank you once again!**
We provide **the code for the Router as follows:**
```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from transformers.activations import ACT2FN
from typing import Optional


class Router(nn.Module):
    def __init__(self, max_length: int, act_fn: Optional[str] = 'silu'):
        """Router class.

        Args:
            max_length (int): The maximum value of the sequence length.
            act_fn (Optional[str]): The activation function.
        """
        super().__init__()
        self.linear = nn.Linear(max_length, 2)
        self.act_fn = ACT2FN[act_fn]

    def forward(self, self_attn_weights, **kwargs):
        """Forward of Router.

        Args:
            self_attn_weights (torch.Tensor): The attention weights from the LLM (main branch).
                shape: [batch_size x num_heads x sequence_len x sequence_len]

        Returns:
            list of [torch.Tensor, torch.Tensor]: Two tensors, the router weights on the
                visual and textual learners for each token, respectively.
                shape: [[batch_size x sequence_len x 1], [batch_size x sequence_len x 1]]
        """
        cur_weights = torch.sum(self_attn_weights, dim=1)  # sum over the head dim
        cur_weights = self.act_fn(F.linear(
            input=cur_weights,
            weight=self.linear.weight[:, :cur_weights.shape[-1]],  # slice to sequence_length (= cur_weights.shape[-1])
            bias=self.linear.bias,
        ))  # route with the MLP
        cur_weights = cur_weights.softmax(dim=-1)  # softmax over the two learners
        cur_weights = torch.split(cur_weights, split_size_or_sections=1, dim=-1)
        return list(cur_weights)


batch_size, num_heads, sequence_length = 2, 32, 5
print(f'Start testing, batch_size: {batch_size}, num_heads: {num_heads}, sequence_length: {sequence_length}')
router = Router(max_length=2048)
self_attn_weights = F.softmax(torch.rand((batch_size, num_heads, sequence_length, sequence_length)), dim=-1)  # attention weights
router_weights = router(self_attn_weights)  # forward
print(f"The router's output weights of the visual learner, shape: {router_weights[0].shape}")  # [2, 5, 1]
print(router_weights[0])
print(f"The router's output weights of the textual learner, shape: {router_weights[1].shape}")  # [2, 5, 1]
print(router_weights[1])
```
---
Rebuttal Comment 5.1:
Comment: Thank you for the clarification. Then, it seems that Eq (5) is incorrect. It would be beneficial to update the equation correctly.
I appreciate your proactive and sincere engagement during the discussion period. To conclude, I would like to summarize my review:
- As I noted in my initial review, I believe the text-only forgetting problem targeted by Wings is very important, and the performance is impressive.
- During rebuttal and discussion, several concerns are resolved, but major concerns remain:
- The attention shift analysis is interesting but not fully convincing in terms of generalizability.
- Although the analysis involves over 100 MLLMs, the models are trained by adjusting the ratio between multimodal and text-only data within the same dataset and sampling models at different training steps in a single run. This raises doubts about whether the findings would apply to MLLMs with entirely different data distributions, optimization strategies, or architectures.
- It is also unconvincing that Wings effectively resolves attention shifts, as it does not directly address the problem. There is no explicit regularization to ensure the suppression of attention shifts.
- Even though the Wings module operates independently from the main LLM attention blocks, its attention output is still combined with the main block, which means it can still overly rely on visual features. The authors describe Wings' role as "compensation," but if the compensation scale is too large, it essentially becomes another form of attention shift.
For these reasons, I believe this paper remains borderline. However, given the importance of the problem and the impressive performance, I am inclined to raise my score to borderline accept. If there's anything I may have missed or misunderstood regarding my remaining concerns, please feel free to correct me and share your thoughts.
---
Reply to Comment 5.1.1:
Comment: Thank you very much for your response and support!
* The study of Attention Shift in the 100 MLLMs does include some overlapping training epochs. However, their training data also contains certain differences (as adjusting the ratio of multimodal to text-only data **requires random sampling**). Recently, we have been researching how to generate better training data for MLLMs. In the final version, **we will train more MLLMs to supplement additional experiments as much as possible**. Furthermore, we will consider different optimization strategies, alignment methods, architectures, and model performances to investigate the generalizability of attention shifts further. Thank you very much for your comprehensive and detailed suggestions.
* Wings is a better structural and strategic assistant for LLMs to enhance learning from visual input. The overall intuition behind Wings is that the existing LLM structure **is prone to attention shift when new image (visual modality) inputs are introduced**. In the first stage, the visual wings **concentrate on the main visual attention** (during which the compensation scale may be too large for certain samples, as you mentioned). However, in the second phase, through the weight allocation of the router and the learning of the textual wings, attention **is balanced between the LLM branch and the visual wings**. This balancing relationship in the learning process resembles **a form of regularization**: for example, when attention toward images increases, the textual wings **also drive an increased focus on the text-only parts**. In Wings, not all inputs will pass through both sides of the learner (for instance, text-only data will not pass through the visual learner), so we prefer to view Wings as **a suite of structural regularizations** that help LLMs learn better during multimodal extension.
Given the differentiation of tokens, we cannot achieve an explicit regularization constraint here because we lack prior information that applies universally to each token; not all tokens should necessarily focus less on images.
We are attempting to extend Wings to more modalities. Thank you very much for your effective suggestions, and we will continue to focus on **improvements related to regularization constraints**.
**Thank you once again!** We've learned a great deal from our discussion with you. **We will certainly continue to work hard to develop a more generalized MLLM. We truly appreciate your support! Thank you!** | Summary: This paper addresses a significant challenge that arises when Multimodal Large Language Models (MLLMs) expand LLMs’ capabilities to include vision tasks. Specifically, it highlights the issue of "text-only forgetting," which occurs when MLLMs trained with visual inputs struggle to effectively process text-only instructions. The problem is attributed to inconsistent attention shifts between text and visual inputs before and after training. To solve this, the paper proposes Wings, a method that adds visual and textual learners to balance attention shifts. Their approach shows improved performance on both text-only and multimodal tasks, including a newly constructed Interleaved Image-Text (IIT) benchmark for mixed-modality interactions.
Strengths: 1. The writing in the article is clear.
2. Attending to the text-only ability of MLLMs is important for building a more general model, since this ability can be lost when turning LLMs into MLLMs.
3. Existing methods need a lot of text-only training samples, which limits improvements in MLLM performance. The proposed model shows good results.
Weaknesses: The paper provides a detailed analysis of the impact of image tokens on the distribution of existing tokens and introduces the Wings structure. However, to ensure that the Wings structure effectively minimizes the overhead of text-only training data while maintaining model performance, more time-based quantitative analyses are necessary to demonstrate the module's effectiveness.
It is observed that methods like DeepSeek [1], which have published their performance on text-only datasets, do not show significant performance degradation. This might be due to their extensive use of text-only training data. Thus, there is a trade-off between the overhead of text-only training data (i.e., excessive training costs) and model optimization strategies (or training methods). Please show why Wings is efficient in this trade-off. Specifically, to what extent can text-only data be reduced in the Wings training process while still maintaining text-only performance?
In Figure 2, it is shown that image tokens are inserted among text tokens. Is this insertion specific to chat models with system prompts, or does it also occur in base-version models?
Concerning the IIT benchmark, when constructing multimodal samples (e.g., (T,T,V) as shown in Figure 5), how are these samples constructed, and how is dataset balance maintained? Please provide additional implementation details.
[1] DeepSeek-VL: Towards Real-World Vision-Language Understanding.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Wings notes the gap between the text-only performance of MLLMs and the performance of the initial LLMs. This makes a compelling argument for a more general model. Could you provide examples illustrating the phenomenon of performance degradation on text-only tasks in existing models?
2. In line 131, how is the activation value for each token computed for each layer? Is the averaging done along the columns or rows of the attention matrix, i.e., which dimension is averaged?
3. There are other methods for adding additional modules to MLLMs, such as CogVLM [1]. Please clarify the similarities and differences, particularly why Wings-architecture can alleviate the forgetting problem in text-only data.
4. Could you provide more specific implementation details of the model structure mentioned in line 161?
5. Please include more ablation studies on training Wings on the base LLM, such as the effect when the initial LLM is base-version and does not have a system prompt. If the images are always positioned at the beginning, does Wings still work effectively?
6. Please provide more experimental results regarding the efficiency gains with LoRRA. Compared to LoRA, it specifically handles modalities. How does its efficiency compare to LoRA, and what are its advantages?
7. Please provide more inference details, particularly how the data flows through the Wings modules (the learners for each modality) and the routing mechanism.
8. In Figure 4, the second stage requires fine-tuning the LLM parts of the MLLM. How much resource overhead does this require? Is it significantly more than methods like LLaVA?
9. Please provide the detailed parameter counts for the 1.8B and 7B model versions and include detailed descriptions of each dataset in subsequent versions.
10. In the IIT benchmark, why does the (T, T, T) configuration not show performance improvement compared to (T, T)? Does it indicate the presence of noisy text-only few-shot samples?
Some Tips:
Please add a description of $\mathbb{1}$ and its subscript in Eq. 3.
Add details to the weighted sum description in line 189, specifically how the attention weight matrix is applied in the router module.
The ARC-Easy dataset mentioned in line 217 is missing a citation.
What is referred to by "Efficient Multimodal LLMs" in line 260?
Provide a more detailed explanation of Partitioned LR in Figure 5.
Please include the prompts used to generate data with GPT-3.5 Turbo mentioned in line 268 in subsequent versions.
[1] CogVLM: Visual Expert for Pretrained Language Models.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The Limitation section in the appendix states that a relatively small amount of text-only training data is still needed to activate the MLLM's capabilities, which is reasonable. Additionally, Wings is a multimodal framework trained from a general LLM. The generalization capabilities and other limitations of Wings can be comprehensively evaluated in the future.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 3jFi,
Thank you very much for Reviewer 3jFi's detailed feedback. We appreciate the reviewer's recognition of Wings' motivation, writing, and overall performance. All updates will be incorporated into the final version. Below are our responses to the clear comments and questions raised:
* **Q1:** "trade-off between training cost and performance"
* **A1:** Wings shows **a direct improvement in training efficiency**: when adding an equivalent amount of Text-only data to the training set, Wings achieves better Text-only performance and improved multimodal performance while maintaining a training cost nearly identical to the baseline LLaVA. To achieve comparable performance, **the baseline requires more Text-only training data.** Table 1 in the main paper uses entirely consistent training data, demonstrating that Wings can maximize overall general performance within a balanced trade-off. Additionally, **we compared the same-architecture baseline using Wings_pro training data with Wings_base** (which utilized less training data, as elaborated in line 675 of the paper). We found that Wings can deliver stronger overall Text-only and multimodal capabilities with a smaller dataset.
* **Q2:** "Wings for LLM_base"
* **A2:** Wings proves effective for LLM_base as well: the presence of image tokens in any position of the training data aids the model in understanding the context of images and text. The placement of image tokens **does not restrict** Wings' expression and **does not affect** the attention weight extraction in Wings' LLM main branch. Results from training on the Qwen1.5-7B base LLM are presented **in Table 13 of PDF**, showing sustained excellent performance.
* **Q3:** "details of Interleaved Image and Text (IIT) Bench"
* **A3:** As discussed in line 264 of the paper, we first sampled multimodal examples **from MMMU, MMBench, SEED-Bench, and AI2D, then collected semantically similar text-only samples from MMLU, CMMLU, OpenbookQA, and HellaSwag.** For some questions, **we used GPT-3.5 Turbo for polishing**. We performed clustering to sample different semantic clusters, aiming for a diverse dataset, and we will open-source the entire dataset after the paper is accepted.
* **Q4:** "case of text-only forgetting"
* **A4:** In Table 2 of the main paper, we present comparisons with other methods, such as Qwen-VL and the LLaVA series models. **These models did not specifically integrate text-only data into their training or address the issue of text-only forgetting**, which led them to underperform compared to the initial text-only LLM (as shown in Table 2 of the main paper).
* **Q5:** "router details (with attention inputs)"
* **A5:** We begin by performing linear mapping on the softmax dimension. Below, we detail how the Wings structure extracts attention weights for routing, collaborating with the learner for inference:
1. First, we obtain the LLM attention weights from the main branch: during inference, by setting `output_attentions=True`, **the attention weights are derived by multiplying the query and key, dividing by the square root of head_dim, and applying softmax** (note that dropout is not applied). Consequently, the shape of the attention weights is [`number of heads` x `sequence length` x `sequence length`], ignoring the batch size dimension, and the sum along the last dimension equals 1.
2. For the image tokens, Wings introduces a visual learner. **The router maps the attention weights to a weight ratio between the image learner and the main LLM**: first, we sum the weights along the head dimension and then apply an MLP (linear mapping followed by an activation layer) to produce a shape of [`sequence length` x `2`], assigning weights to each token. The visual learners for text tokens will be masked, but in the second stage, textual learners will also undergo training in a similar manner. **This weighted combination forms a new set of hidden states, which are then processed in the next layer.** We will provide a detailed description and pseudocode in the final submission.
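The two routing steps above can be sketched roughly in PyTorch as follows; the module name, MLP width, and the softmax over the two mixing weights are illustrative assumptions rather than the authors' implementation:

```python
import torch
import torch.nn as nn

class AttentionRouter(nn.Module):
    """Maps main-branch attention weights to per-token mixing weights
    between the LLM branch and a modality learner (illustrative sketch)."""
    def __init__(self, seq_len: int, hidden: int = 64):
        super().__init__()
        # MLP: linear mapping followed by an activation, producing 2 weights per token
        self.mlp = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.GELU(), nn.Linear(hidden, 2)
        )

    def forward(self, attn_weights: torch.Tensor) -> torch.Tensor:
        # attn_weights: [num_heads, seq_len, seq_len]; each row sums to 1 after softmax
        summed = attn_weights.sum(dim=0)        # sum along the head dimension -> [seq_len, seq_len]
        mix = self.mlp(summed).softmax(dim=-1)  # [seq_len, 2], one weight pair per token
        return mix

router = AttentionRouter(seq_len=8)
attn = torch.softmax(torch.randn(4, 8, 8), dim=-1)  # 4 heads, length-8 sequence
mix = router(attn)
```

The resulting `[sequence length x 2]` tensor would then weight the main-branch and learner outputs per token before forming the next layer's hidden states.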
* **Q6:** "learner of Wings *v.s.* CogVLM visual expert"
* **A6:**
1. **Positioning differs**: The visual expert in CogVLM is a parallel module through linear mapping, while the LoRRA in Wings operates independently from the entire attention block.
2. **The motivation is similar but slightly distinct**: Similar to Mix of Experts and P-tuning, visual experts enhance the attention module's ability to **learn cross-modal interactions and alignments**. Wings also boosts multimodal learning capabilities, specifically addressing compensatory attention for text-only forgetting.
3. **Utility varies**: Visual experts increase training overhead significantly and, without low-rank optimization, **will substantially raise network parameters** (though inference overhead remains nearly unchanged). In contrast, Wings adds only a minimal number of parameters (**refer to Table 9 of PDF**).
* **Q7:** "details on line 161"
* **A7:** Line 161 refers to a visual encoder, specifically the `vit_so400m_patch14_siglip_384` and a linear mapping with a single-layer MLP.
* **Q8:** "more studies on Wings with base-version LLM"
* **A8:** **In Table 12 of the PDF**, we evaluate the MLLM based on the Qwen-base model (the image is at the beginning of instructions and lacks a system prompt). **The results show good performance**; however, because the base model has weaker instruction capabilities, **a standard chat LLM is generally more suitable**. We will include related experiments in the final version.
Due to space constraints, **subsequent responses will be presented in the comments**.
Best regards,
All Authors.
---
Rebuttal Comment 1.1:
Comment: Thank you for the detailed response. The authors have successfully addressed nearly all of my initial concerns, and based on the clarifications provided, I am inclined to raise my score.
After considering the feedback from other reviewers, particularly the points raised by reviewers TqCY and LUvX, I am curious about how Wings compensates for attention shift. Your explanations concerning the structure, forward mechanism, optimization, and experiments are convincing.
However, I have one additional question based on your explanation of the optimization process. You mention that "learners are first trained, so allocate attention in the first stage." Given that much of the data in the first stage is caption instruction, could you please elaborate on the nature of the visual attention that compensates for the LLM branch? Understanding this aspect more clearly would help in fully appreciating the robustness of your approach.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback; it inspires our continuous improvement.
To address your question about the nature of visual attention in the first training stage, we utilize caption data to align visual information. The Wings module enhances visual attention **related to image descriptions** while improving the model's **perceptual capabilities**. Unlike other MLLM baseline methods that only consider the visual part at the input layer, the Wings module **integrates images and captions across all layers**, deepening the LLM branch's understanding of visual content. However, further reasoning abilities still require the LLM main branch **to learn from instruction training data in the second stage**.
For instance, after the first stage of training, Wings can interpret images and manage tasks like identifying "Is the word in the logo 'the beatles story liverpool?'" within the `perception OCR` category in the MME dataset. However, its reasoning abilities are less robust.
The table below compares the performance of **baseline and Wings in `perception` and `reasoning` on the MME dataset**. It shows that Wings outperforms baseline methods in perception after the first training stage, with reasoning abilities improving further in the second stage.
| Model | MME perception | MME reasoning |
|-------|----------------|---------------|
| LLaVA (after the first stage of training) | 1197.53 | 216.18 |
| Wings (after the first stage of training) | 1286.46 | 241.80 |
| Wings (after the second stage of training) | 1411.76 | 342.07 |
Thank you once again for your support and recognition!
---
Rebuttal 2:
Comment: **We add the remaining responses here. Thank you!**
* **Q9:** "more inference details"
* **A9:** Starting from Figure 4 in the paper, we observe that the Learners' structure is embedded at each layer. The visual features from the first layer are fed into the key and value, **while the mapping of query, key, and value is implemented using a residual low-rank mapping**. We modified the fully connected layers in the attention module to use residual and low-rank mapping. In the output mapping matrix, we removed the residual connection since we initialized W_a in the low-rank mapping with Random Gaussian and W_b with Zero. **This allows the structure parallel to the attention module to start training with zero additions**, facilitating quicker adaptation to multimodal scenarios.
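A minimal sketch of such a residual low-rank mapping, with `W_a` initialized from a random Gaussian and `W_b` initialized to zero so the added path starts as an exact identity (illustrative only, not the authors' code):

```python
import torch
import torch.nn as nn

class LowRankResidual(nn.Module):
    """Residual low-rank mapping: y = x + x @ W_a @ W_b (a sketch of the
    LoRRA idea; dimensions and rank are illustrative assumptions)."""
    def __init__(self, dim: int, rank: int = 8, residual: bool = True):
        super().__init__()
        self.W_a = nn.Parameter(torch.randn(dim, rank) * 0.02)  # random Gaussian init
        self.W_b = nn.Parameter(torch.zeros(rank, dim))         # zero init
        self.residual = residual

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x @ self.W_a @ self.W_b
        # with residual=True and W_b zero-initialized, the module begins as identity,
        # so training starts with zero additions to the main branch
        return x + out if self.residual else out

layer = LowRankResidual(dim=16)
x = torch.randn(2, 16)
y = layer(x)
```

Because `W_b` starts at zero, `y` equals `x` at initialization, which matches the motivation of starting training with zero additions and adapting quickly to multimodal inputs.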
* **Q10:** "details of training cost"
* **A10:** **In Table 9 of the PDF**, we present network parameters and costs (in TFLOPS), revealing only **an increase of about 0.1B in parameters and 0.2 TFLOPS in cost**. The training time on 8 A100 GPUs increased by less than 1.5 hours.
* **Q11:** "1.8B and 7B parameters count"
* **A11:** Refer to **Table 9 in the PDF** for parameter counts and forward costs. We commit to providing dataset details in the final version.
* **Q12:** "(T, T, T) weak but (T, T) strong in Figure 5"
* **A12:** We appreciate the reviewer’s keen observation. This may be due to the two text-only in-context samples **belonging to two significantly different datasets**. We re-sampled (T, T, T) **to belong to the same dataset**, and the accuracy of Wings on (T, T, T) improved from 68.4% to 69.7%. | Summary: The paper addresses the well-known problem of multimodal large language models (MLLMs), text-only forgetting, which refers to the phenomenon of MLLMs showing drastic performance drops on text-only instructions. The paper first observes, based on an analysis of over 100 MLLMs, that the performance drop is related to attention shifts in text tokens before and after visual tokens within mixed visual-and-textual inputs. To compensate for the attention shift, the paper adds separate visual and textual learners with a router to each of the LLM blocks. The visual and textual learners are implemented with a newly designed Low-Rank Residual Attention (LoRRA). The proposed method, Wings, achieves improvements over baselines on text-only benchmarks while achieving competitive or better performance on multimodal benchmarks, including a newly collected Interleaved Image-Text (IIT) benchmark.
Strengths: - The overall presentation of the paper is quite good. The analysis that explores over 100 MLLMs is really impressive. Especially, the Figure 2 effectively shows the observation that is the key to the proposed method, Wings.
- The proposed visual and textual learners make sense based on the observation.
- Wings achieves superior performance on extensive text-only and multimodal benchmarks.
Weaknesses: This work is really intriguing. However, the rationale behind how the authors designed the proposed Low-Rank Residual Attention (LoRRA) as well as the ablation studies proving some critical design choices are missing.
For example,
- why did the authors add the residual terms (the identity matrices in Equation 4)?
- please provide the results without the router, if possible.
Also, please provide computation overheads of Wings in FLOPs, and how much does Wings increase training and inference time (comparison between Wings_base and the Qwen + SigLIP model)?
Technical Quality: 3
Clarity: 2
Questions for Authors: - Please provide the details of 100 MLLMs explored for the attention shift analysis.
- How can we get a, the attention weights of shape sxs from the LLM main branch? And please elaborate more on how the router operates on a, that is, more details about computing weighted sums of visual and textual learner outputs.
- In Figure 5 (b), did the authors ablate learning rates in the second stage of training? in other words, did the authors use the same learning rate of 1e-3 in the first training stage?
- Does the authors plan to release the second-stage training data for Wings_pro, and the new IIT benchmark?
Minor comment: If the indicator variable becomes zero for any i in equation (3), the probability is zero. Is the equation (3) correct, or there are some typos?
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The paper addressed the limitations of the work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer GnKS,
We appreciate Reviewer GnKS's thoughtful feedback and support, especially regarding the motivation, structure, and performance of our Wings. In response to these valuable insights, we have conducted additional experiments and enriched our descriptions to reinforce our approach. All modifications will be highlighted in the final version.
* **Q1:** "Wings design rationale and ablation studies",
* **Q1.1**: the residual terms.
* **Q1.2**: w/o the router.
* **A1:** The primary rationale for Low-Rank Residual Attention (LoRRA) is to mitigate the attention shift seen in MLLM when integrating visual inputs (as noted in lines 147-149). The main features include:
* **Lightweight, Independent Attention Module:** LoRRA improves multimodal attention by adding **an independent learner module** that manages visual interactions with minimal parameters. It generates keys and values from initial visual features and queries from previous hidden states. This reduces the visual attention load on the primary branch, allowing for more focused processing of text parts.
* **Pre-alignment Process in the First Stage:** In the initial training stage, LoRRA is learned **on inter-modal adaptation using the learner's visual attention modules**, without the router's involvement.
* Regarding the ablation study of structural design: the construction of the LoRRA structure went through several iterations before reaching a reliable design. **In PDF Table 6**, we conducted **ablation studies focusing on the attention component**, as training MLP mappings is resource-intensive. Our results indicate that incorporating **linear mappings, learnable prompts, or dynamic mixed LoRA structures** can improve multimodal perception. However, these methods often suffer from limited learning, resulting in a trade-off where less forgetting leads to diminished learning outcomes.
**Responses to Q1.1 and Q1.2:**
* **A1.1: Table 7 in PDF** demonstrates that the **text-only input performs slightly worse than the LoRRA structure with residuals**, which enhances gradient propagation and promotes faster learning. The residual connections create a more coherent overall structure by simplifying the fully connected mappings to an identity mapping.
* **A1.2: In Table 8 of PDF**, the results indicate that random allocation (w/o a router) significantly disrupts attention and impairs performance, while 1:1 allocation reduces the effects of scale transformation. **As noted in lines 281-283 of the main paper**, the router weights are also crucial for multimodal capabilities.
* **Q2:** "details of computation overheads"
* **A2: Table 9 of PDF** summarizes the computational overheads of Wings in (Tera) FLOPS, **alongside parameter counts and TFLOPS costs compared to the Qwen + SigLIP baseline**, using measurements from the [`calculate-flops.pytorch`](https://github.com/sovrasov/flops-counter.pytorch) library with a batch size of 1 and a maximum sequence length of 2048.
* **Q3:** "details of 100 MLLMs for the attention shift analysis."
* **A3:** We analyzed **108 models derived from** 12 multimodal training configurations (`25:1`, `20:1`, `10:1`, `5:1`, `2:1`, `1:1`, `1:2`, etc., up to `1:25` and `all:0`) with varying multimodal:text-only data ratios and 2 learning rates (1e-3 for the first, and either 2e-6 or 1e-5 for the second stage), assessing their MMLU, 5-shot performance differences in text-only and multimodal contexts, **as illustrated in Figure 2 of the main paper**.
* **Q4:** "details of attention weights"
* **A4:** Wings leverages attention weights from transformers by **enabling access during inference through `output_attentions=True`**. The attention weights are generated by a query-key operation, where the query from the LLM's main attention module is multiplied with the key, divided by the square root of the head dimension, and **processed with a softmax function (without any dropout)**. These attention weights have **shape [`number of heads`, `sequence length`, `sequence length`], and their sum along the last dimension equals 1**. For image tokens, Wings uses a visual learner where the router maps attention weights by first summing along the head dimension and then **applying an MLP for transformation into a two-dimensional output**. Consequently, this reshapes the attention weights to [`sequence length` x `2`], allowing the router to assign specific attention weights to each image token.
In the second stage, we incorporate **textual learners alongside masked visual learners to create weighted hidden states** from both image and textual tokens, with a detailed process description and pseudocode provided in the final version.
* **Q5:** "learning rate of the second stage"
* **A5:** We used a learning rate of 1e-3 for the first training stage and 2e-6 for the second, while the visual projectors’ alignment modules were set to 1e-5. **Our comprehensive ablation**, as detailed **in Table 10 of PDF**, tested learning rates of 2e-6, 6e-6, and 1e-5, applying larger fine-tuning steps for the visual component to **adapt better to multimodal inputs** and **reduce text-only forgetting**, similar to models like Qwen-VL and DeepSeek-VL.
* **Q6:** "release of the second-stage training data and the IIT benchmark"
* **A6:** We are committed to **releasing all data, model weights, and code, along with the new Interleaved Image and Text (IIT) benchmark**.
* **Q7:** "i in equation (3)"
* **A7:** In the main paper, we adapted equation 3 from LLaVA to account for interleaved image tokens within the text, omitting loss computation for the "next image token" **since the MLLM cannot generate image tokens.** This adjustment results in a text-only indicator variable interval of `[1, v_start)` U `(v_end, s]`. We will elaborate on this point in detail in the final version.
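The indicator interval described above can be illustrated with a small sketch over 1-indexed token positions (the function name and arguments are illustrative, not the paper's code):

```python
def text_only_indicator(seq_len: int, v_start: int, v_end: int) -> list[int]:
    """Return 1 for text positions in [1, v_start) U (v_end, seq_len]
    and 0 for image-token positions, so the next-token loss skips
    image tokens (illustrative of the rebuttal's description of Eq. 3)."""
    return [1 if (p < v_start or p > v_end) else 0
            for p in range(1, seq_len + 1)]

# image tokens occupy positions 4..7 in a length-10 sequence
assert text_only_indicator(10, 4, 7) == [1, 1, 1, 0, 0, 0, 0, 1, 1, 1]
```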
Thank you once again for your constructive feedback, which will greatly enhance the clarity of our paper.
Best regards,
All Authors.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the detailed response. The authors addressed most of my concerns as well as those of other reviewers. I will keep my rating.
---
Reply to Comment 1.1.1:
Comment: Thank you very much! | Rebuttal 1:
Rebuttal: Dear Reviewers,
**Thank you for your meticulous observations and analyses.** We are thrilled that you have recognized our work, Wings. We appreciate your acknowledgment of Wings’ **effectiveness on** text-only, multimodal, and Interleaved Image and Text (IIT) benchmarks. We are particularly pleased that you found the concept of **attention shift** intriguing and that you described the paper as clear and easy to read.
We observed **the attention shift phenomenon** in MLLMs that have forgotten their text-only capabilities. To address this issue, we designed Wings with **the Low Rank Residual Attention (LoRRA) structure**, which significantly **enhances performance without** adding high costs. During this time, we've been continuously improving Wings to achieve even better results.
Once again, thanks to each reviewer and everyone involved! We have responded to all your questions in detail and **included additional tables and figures in the PDF**. **Should you have further inquiries, please feel free to reach out. We appreciate your support and will continue to work diligently to enhance Wings.**
Best regards,
All Authors.
Pdf: /pdf/4ada38c4a713fe5188b0100ce78fb7ac7c15e1d8.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes | Accept (poster) | Summary: The paper proposes a method to handle the errors when reducing the voltage of SRAM to reduce the power consumption. The method is based on preprocessing the input data into error-resilient forms. Experiments show a tradeoff with a reduction of 24% of power and 2-3% accuracy loss in CIFAR10-ResNet settings.
Strengths: 1. The reduction of SRAM power is appreciated to lower the cost.
2. The training method to recovered accuracy at 1% weight bit errors is also important.
Weaknesses: 1. Reducing the voltage of SRAM requires hardware access and will cause more errors than the ideal uniform bit-flipping errors. I think tolerating 1% error is useful and interesting, but the motivation might need to come from more than lowering SRAM voltages.
2. It is unclear to me whether the weights in the generator (the preprocessing network for the image) are subject to bit errors. Are they? If the generator network is assumed error-free, then it seems inconsistent with the overall system configuration.
3. By reducing the power by 30%, the accuracy loss on the CIFAR10 dataset is 2-3%. The worth of this tradeoff is debatable. What if I use a 30% smaller network and use the full power of SRAM? In the experiments, ResNet50 and ResNet18 clearly have the same clean accuracy, but ResNet18 is 3 times smaller. So one could easily achieve a 3x power reduction by using ResNet18 instead of ResNet50. A more rigorous experimental setup needs to be considered to validate the motivation of the paper: why would one make a tradeoff by reducing SRAM's voltage and suffering bit errors rather than using a smaller network? This is my major concern.
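For concreteness, the idealized uniform bit-flipping fault model mentioned in the weaknesses can be simulated with a short sketch; the function name and the 8-bit weight format are illustrative assumptions, not the paper's fault simulator:

```python
import numpy as np

def flip_bits(weights_int8: np.ndarray, ber: float = 0.01, seed: int = 0) -> np.ndarray:
    """Inject uniform random bit errors at a given bit-error rate (BER)
    into 8-bit weights -- the idealized fault model referred to above."""
    rng = np.random.default_rng(seed)
    bits = np.unpackbits(weights_int8.view(np.uint8))   # expand each weight to 8 bits
    flips = rng.random(bits.shape) < ber                # Bernoulli(ber) flip mask
    return np.packbits(bits ^ flips).view(np.int8).reshape(weights_int8.shape)

w = np.arange(-4, 4, dtype=np.int8)
w_faulty = flip_bits(w, ber=0.01, seed=1)
```

Real low-voltage SRAM failures are not uniformly distributed across cells, which is part of the concern raised above.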
Technical Quality: 3
Clarity: 3
Questions for Authors: The questions are listed in the "weakness"
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your effort and time in reviewing our paper. Our responses to your concerns are as follows:
**1. (More Motivation)** First of all, thank you for recognizing the value of our work in recovering accuracy under 1% bit errors. In fact, while energy savings are a beneficial aspect of our approach, the primary motivation behind NeuralFuse extends beyond merely lowering SRAM voltages. Our main goal is to provide a robust error protection mechanism that can be employed in scenarios where voltage instability is a concern.
For instance, the model runs at the minimum bit error-free voltage ($V_{min}$) most of the time. NeuralFuse can be activated as a temporary error protection mechanism only when the system encounters voltage instability or other unforeseen scenarios inducing bit errors. This ensures the reliability of the model's output during such critical periods, maintaining the system's overall performance and robustness. This consideration also addresses broader scenarios where bit errors might occur due to other factors such as environmental variations, component aging, or transient faults. By focusing on error resilience, NeuralFuse can be applied in various contexts where maintaining model accuracy under non-ideal conditions is essential.
**2. (Generator Errors)** To ensure that NeuralFuse functions correctly, in our experimental setting, we assume that NeuralFuse operates at nominal voltage, meaning it should be error-free. Previous research has demonstrated that the integration of multiple chip units can be implemented with different voltages [1]. Therefore, regarding concerns about '*inconsistency in the overall system configuration*,' it can be argued that although the voltage settings of different parts may be different, such system design and configuration is feasible and ensures that NeuralFuse is error-free during operation.
**3. (Small Network)** The reviewer's intuition is correct. Indeed, using a smaller network such as ResNet18 instead of ResNet50 can achieve significant power savings due to the reduced model complexity. However, as noted above, NeuralFuse could be much more useful in circumstances where bit errors are unexpected. In real-world applications, voltage instability or other transient conditions can introduce bit errors unpredictably. NeuralFuse provides a robust solution that can be activated dynamically in response to such errors, ensuring the reliability of model output during critical periods.
Beyond handling bit errors, our approach can also mitigate accuracy drops due to precision loss. In Section 4.6, our results demonstrate a promising use case in dealing with unseen bit-quantization errors. This capability broadens NeuralFuse's applicability, making it valuable in scenarios where precision constraints are relaxed to save power, but accuracy still needs to be preserved.
Therefore, in terms of efficiency, the system designers can use either smaller models or apply a low-voltage regime with NeuralFuse to achieve desired energy efficiency. In terms of robustness, NeuralFuse can act as an insurance mechanism, assuring model performance during critical periods when voltage instability or other factors induce bit errors. This reliability is particularly important for safety-critical applications, where maintaining model accuracy is essential despite adverse conditions.
[1] Rotaru et al. Design and development of high density fan-out wafer level package (HD-FOWLP) for deep neural network (DNN) chiplet accelerators using advanced interface bus (AIB). (ECTC 2021)
---
Rebuttal Comment 1.1:
Title: Looking Forward to Discussing with You
Comment: Dear Reviewer LgKi:
As the discussion period approaches, we want to check if our rebuttal has addressed your concerns. We greatly value the time and effort you have dedicated to reviewing our work and are eager to address any additional concerns or suggestions you may have.
To summarize our rebuttal:
1. We highlighted that **NeuralFuse’s primary motivation extends beyond just reducing SRAM voltage**. It’s designed to offer robust error protection in scenarios involving voltage instability, environmental variations, or transient faults, ensuring system reliability.
2. Regarding generator errors, we clarified that **NeuralFuse operates error-free at nominal voltage**, as supported by previous research, **ensuring consistency within the system’s design**.
3. Last, we acknowledged the possibility of using smaller networks but **emphasized that NeuralFuse provides additional robustness during unexpected bit errors, making it especially valuable for safety-critical applications**.
Please refer to our rebuttal for more detailed explanations. Feel free to let us know if there are any further questions, comments, or suggestions. We are more than happy to incorporate your feedback into the revision process.
Thank you once again for your time and consideration.
Yours Sincerely,
Authors
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer LgKi,
Thank you for your time and effort in reviewing our work, and we really appreciate your support.
There are only a few hours left before the rebuttal deadline, and we would like to know whether our responses successfully address your concerns. Please also let us know if you have other concerns!
Warm regards,
Authors | Summary: This paper presents NeuralFuse, a module that produces error-resistant data representations by learning input transformations in order to solve the accuracy loss of deep neural networks (DNNs) brought on by low-voltage-induced bit errors in SRAM, allowing DNNs to continue operating accurately even at low voltage without the need for model retraining. NeuralFuse was tested on multiple models and datasets, and it showed that it could recover accuracy by reasonable margin and save SRAM access energy. It supports two scenarios: restricted access, which trains using a white-box surrogate model, and relaxed access, which permits backpropagation. NeuralFuse exhibits robustness against low-precision quantization and is transferrable and adaptable. The authors argue that this development may be helpful for edge devices and on-chip AI.
Strengths: 1. The proposed NeuralFuse operates as an add-on module, meaning it can be integrated with existing DNNs without requiring modifications of the base models. This non-intrusive approach makes it applicable to various models and scenarios, including those with limited access to model internals, such as cloud APIs.
2. The results demonstrate high robustness to low-voltage-induced bit errors and low-precision quantization. It shows reasonable performance recovery across different datasets and architectures, as well as high transferability and adaptability to unseen models.
3. Experimentation shows that during low-voltage operation, it can recover the performance drop due to bit errors, with energy efficiency as a fringe benefit.
Weaknesses: 1. The authors assume that the NeuralFuse module itself is free of low-voltage SRAM bit errors, claiming that its function can be performed by a general-purpose core operating at nominal voltage. However, in that case, the latency of running this module will be an order of magnitude higher during inference, and the total power consumption might even exceed that of the base model. If it is instead run on SRAM, then it should itself be vulnerable to bit errors, and no such analysis is provided. From the writing, it may be assumed that the energy and latency calculations in Appendix C and D are done assuming the proposed module resides in SRAM, which would change drastically if we assume that the NeuralFuse operation is performed by a CPU.
2. It is seen from Figure 3 and Table 1 that in bigger models (e.g., ResNet-50), the standard deviation of performance is very high across different random test bit-error patterns; furthermore, the smaller the generator architecture, the worse the performance in general. So, to scale up the performance, bigger generators might be needed, which in turn will be harder to optimize and more resource-consuming.
3. The authors claim that their approach has an advantage over previous methods because it does not retrain the base model; however, the training of the generator itself has been shown to be equally hard and time-consuming, which may scale up for bigger base models or generators, so the advantage here is not very apparent.
4. If the base model is running at nominal voltage, then there should not be any bit errors, and in that case, the added NeuralFuse module will only increase latency and power consumption, with a performance decrease added on top due to the transformation of the input to the base model. So, it is only logical to adopt an adaptive approach where this module is applied only when the SRAM voltage goes below the minimum required voltage.
Technical Quality: 2
Clarity: 3
Questions for Authors: Refer to the weakness section.
Confidence: 3
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: I have raised some concerns in the weaknesses section. Those are some possible limitations of this work. The authors may work on these points to overcome the limitation of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for your effort and time in reviewing our paper. Our responses to your concerns are as follows:
**1. (Latency)** We understand the reviewer's concerns. However, we respectfully disagree that our evaluation would change drastically if we assume that the NeuralFuse operation is performed by a CPU. This is because the additional consumption of a CPU is influenced by various factors such as different CPU architectures, instruction sets, process nodes (14nm or 3nm), or even manufacturers. Therefore, in this paper, we simplify the confounding factors and merely evaluate the latency increase introduced by NeuralFuse. Our SRAM-based evaluation has already achieved notable results. Nevertheless, we acknowledge that these are important factors, and we will discuss them further in the revision.
**2. (Big/small NeuralFuse)** In practice, the choice of the base model and the NeuralFuse generator depends on the problem to be solved. In current applications, achieving better performance often requires more training duration and larger models, so we believe this is an inevitable but acceptable issue.
**3. (Training of NeuralFuse)** Regarding the issue of model retraining, we believe it is important not only to consider the time-consuming nature of the training process but also to assess the sensitivity of the retrained model to hyperparameters, which can easily make training fail. Previous papers [1] have mentioned that adversarial weight training with all vulnerable weight-bit combinations is not a feasible approach. Therefore, we believe our NeuralFuse technique still has significant advantages over the retraining-based one [2].
**4. (Applicable Scenario)** This is really a well-thought-out concern. As noted by [Reviewer aRGH](https://openreview.net/forum?id=npoHt6WV1F&noteId=13v95CnfzT), practitioners might want to *avoid the scenarios where NeuralFuse would be useful* because NeuralFuse alone can alleviate low-voltage inference challenges to some notable extent. However, in our paper, we consider NeuralFuse to be an add-on module, meaning that NeuralFuse can be activated not only in a low-voltage regime but in both nominal- & low-voltage regimes. Although activating NeuralFuse in a nominal-voltage regime may incur the additional costs of accuracy degradation and latency, it helps protect the module from an unstable power supply and mitigate the bit-error-induced accuracy drop.
Nevertheless, from a hardware perspective, it is also feasible to enable NeuralFuse only when the main module is running in low-voltage regimes and disable it in nominal-voltage settings. This ensures that, in nominal-voltage regimes, NeuralFuse does not introduce any energy consumption due to the additional latency and SRAM space requirements.
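As an illustrative aside, the enable/disable behaviour described above could be sketched as a simple runtime gate. This is a minimal sketch under stated assumptions, not the authors' implementation; the names `generator`, `base_model`, `supply_voltage`, and `v_min` are hypothetical, and both models are treated as plain callables:

```python
def fused_inference(x, base_model, generator, supply_voltage, v_min):
    """Run inference, enabling the protection module only in the
    low-voltage regime to avoid nominal-voltage latency/energy overhead.

    Hypothetical sketch: `generator` is the NeuralFuse-style input
    transformation, `v_min` the minimum bit-error-free voltage.
    """
    if supply_voltage < v_min:
        # Low-voltage regime: transform the input into an
        # error-resistant representation before the base model.
        x = generator(x)
    return base_model(x)
```

In the nominal-voltage regime (`supply_voltage >= v_min`), the input bypasses the generator entirely, so no extra latency or SRAM space is consumed, matching the hardware-side scheme described above.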
[1] He et al. Defending and harnessing the bit-flip based adversarial weight attack. (CVPR 2020)
[2] Stutz et al. Bit error robustness for energy-efficient dnn accelerators. (MLSys 2021)
---
Rebuttal Comment 1.1:
Comment: I appreciate the author's response. While I increased my rating, I would still be inclined toward rejection. The reason is the performance gap with a smaller generator and the need for retraining, which undermines the paper's claim that it does not require retraining, leading to an efficient approach. Indeed, a larger generator (which may be necessary for performance) would incur training costs and latency.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the feedback. However, there may be some misunderstandings, as our work aims to *not retrain* the base model. If the reviewer is referring to "*training of NeuralFuse generators*," we would like to point out that the NeuralFuse generators considered in our experiments are relatively small compared to the size of the deployed base model (see Table 7 in Appendix C) and would be more efficient to train. Furthermore, it is difficult to retrain the base model in all cases; this is also why we use a white-box surrogate model for the *restricted access* scenario to demonstrate that NeuralFuse is highly transferable to different base models. In other words, there is no free lunch. It is impossible to have such a protection module with zero training cost. To the best of our knowledge, our proposed NeuralFuse (a plug-and-play module) is the best practice for reducing the retraining cost.
We sincerely appreciate the reviewers' feedback and are just one post away from answering any follow-up questions you may have. We look forward to your feedback. | Summary: The paper presents NeuralFuse, a novel approach to address the accuracy degradation of deep neural networks (DNNs) in low-voltage regimes. The core idea is to learn an input transformation module that can generate error-resistant data representations, thereby protecting DNN accuracy even when bit errors occur due to voltage scaling. The proposed method is model-agnostic and doesn't require retraining of the deployed DNNs, making it suitable for access-limited scenarios like cloud-based APIs or non-configurable hardware. Experimental results demonstrate that NeuralFuse can significantly recover accuracy while achieving energy savings.
Strengths: 1. Novelty and Practicality: The paper introduces a new perspective on mitigating the impact of bit errors in low-voltage DNN inference by focusing on input transformation. This approach is model-agnostic and doesn't necessitate retraining, making it practical for real-world scenarios where model access is limited.
2. Effectiveness: The experimental results across various datasets, DNN models, and NeuralFuse implementations showcase the effectiveness of the proposed method in recovering accuracy and achieving energy savings. The paper also demonstrates the versatility of NeuralFuse in handling low-precision quantization and adversarial weight perturbation.
3. Thorough Evaluation: The paper provides a comprehensive evaluation of NeuralFuse, including ablation studies, comparisons with baselines, and qualitative analysis, which strengthens the validity of the claims.
Weaknesses: 1. Limited Evaluation on Complex Models and Tasks: The evaluation of NeuralFuse is primarily focused on image classification tasks using CNN-based models. It would be beneficial to assess its performance on more complex tasks, such as object detection or natural language processing, and with different types of neural networks, such as Transformers or RNNs, to ensure its broader applicability.
2. Lack of Comparison with Post-Training Quantization: The paper demonstrates the effectiveness of NeuralFuse in recovering accuracy loss due to quantization. However, it would be valuable to compare its performance with post-training quantization techniques that also aim to reduce model size and energy consumption without retraining.
3. Applicability to Dynamic Voltage Scaling: The paper assumes a fixed low-voltage setting during inference. It would be valuable to explore the applicability of NeuralFuse in scenarios with dynamic voltage scaling, where the voltage might change during inference based on the workload or energy constraints.
4. Impact of NeuralFuse on Interpretability: Does the input transformation introduced by NeuralFuse affect the interpretability of the base model? It would be interesting to analyze how NeuralFuse impacts the ability to explain the model's decisions.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Potential for Hardware Acceleration of NeuralFuse: The NeuralFuse module itself might introduce computational overhead. Could NeuralFuse be implemented or accelerated in hardware to minimize its impact on inference latency and energy consumption?
2. Limited Exploration of the Impact on Latency: While the paper acknowledges the latency overhead introduced by NeuralFuse, a more detailed analysis of its impact on real-time applications would be beneficial. It would be valuable to quantify the latency overhead for different NeuralFuse architectures and base models.
3. Assumption of Random Bit Errors: The paper assumes a random distribution of bit errors. However, in practice, bit errors might exhibit spatial or temporal correlations. It's worth investigating the robustness of NeuralFuse to different bit error patterns.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: 1. Potential for Adversarial Attacks on NeuralFuse: The paper doesn't discuss the potential for adversarial attacks specifically targeting the NeuralFuse module. It's worth investigating whether the input transformations introduced by NeuralFuse could be exploited to craft adversarial examples that bypass the error resistance mechanism.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for recognizing the novelty, practicality, and effectiveness of our work. We address your comments in the following:
**1. (Complex Models and Tasks)** As algorithm designers, we chose CNN-based classification models as a representative problem to validate our idea. As noted by [Reviewer aRGH](https://openreview.net/forum?id=npoHt6WV1F&noteId=13v95CnfzT), our approach tackles the model's bit errors from a different angle, and one can easily extend it to various task-specific models. Due to the limited time of the rebuttal, we might not be able to provide results on other tasks, but we will include our findings on these applications in the revision.
**2. (Post-Training Quantization)** We conducted an experiment that used a similar experimental setting in Section 4.6 and Appendix H. In this experiment, we apply post-training quantization to induce precision loss to the base model, which means that we do not apply *quantization-aware training* during base model training. The experimental results (see Table 27 in the [attached file](https://openreview.net/attachment?id=tYSoqYMyti&name=pdf)) show that our NeuralFuse generators can still recover the accuracy on the reduced-precision post-training quantization with 0.5% BER on the CIFAR-10 pre-trained model, despite the base model being more vulnerable to bit error attacks without quantization-aware training. This experiment demonstrates the robustness of our NeuralFuse, which is resistant not only to low voltage (bit errors) but also to precision loss (quantization).
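For background, the precision loss introduced by plain post-training quantization (with no quantization-aware training) can be sketched roughly as below. This is a generic symmetric uniform quantization illustration, not the paper's exact procedure:

```python
def quantize_dequantize(weights, n_bits=8):
    """Symmetric uniform post-training quantization: map each float
    weight to an n-bit signed integer grid and back, introducing the
    precision loss that the base model must then tolerate."""
    max_abs = max(abs(w) for w in weights) or 1.0
    q_max = 2 ** (n_bits - 1) - 1  # e.g. 127 for 8-bit weights
    scale = max_abs / q_max
    # Round to the nearest grid point, then rescale to float range.
    return [round(w / scale) * scale for w in weights]
```

Lowering `n_bits` coarsens the grid, so the dequantized weights drift further from the originals; this is the kind of reduced-precision degradation the experiment above probes.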
**3. (Dynamic Voltage Scaling)** The reviewer's suggestion is really interesting. However, unstable voltage can easily damage chips or DNN accelerators; for instance, [1] mentions that poor power quality can cause chips or AI accelerators to break down. Additionally, recent Intel processors have also suffered from voltage issues that cause CPU damage [2]. To avoid any such side effects, we used fixed voltages in our experiments, which allows a more accurate evaluation of our method and reflects reality. On the other hand, if there are only slight voltage changes, since NeuralFuse is trained using our proposed EOPM optimizer, it will not overfit to specific error patterns; instead, it learns the error distribution under a specific bit-error percentage. In this scenario, even if error patterns change, final performance is not significantly affected, as demonstrated by our experiments across ten different error models.
**4. (Interpretability)** These indeed are worth exploring, and in fact, we have conducted some analyses regarding the interpretability of the base model. In Appendix K, we demonstrate the output distribution at the final linear layer of the base model under three scenarios: 1) the clean base model without errors, 2) the perturbed base model with random bit errors, and 3) the perturbed base model with NeuralFuse. Based on the t-SNE visualization, we observed that the output distribution of the perturbed model is very chaotic. However, after applying NeuralFuse, the output distribution clearly groups into 10 classes. This indicates that NeuralFuse can indeed correct the outputs of the base model.
**5. (Hardware Acceleration)** The adoption of any special hardware for NeuralFuse will depend on how hardware manufacturers design the architecture. Previous literature [3] has mentioned that to accommodate the specific architectures of DNNs, IC design manufacturers can develop corresponding hardware to support these specialized computations. Therefore, we believe that NeuralFuse can reduce additional latency or energy consumption by designing/using these specialized hardware.
**6. (Latency)** In Appendix D, we have evaluated the latency overhead introduced by NeuralFuse. Although NeuralFuse brings a certain degree of extra latency, we deemed it an inevitable tradeoff for reducing energy consumption in our setting.
**7. (Random Bit Errors)** This is a great suggestion! In fact, as mentioned in our paper (line 121), bit-cell failures for a given memory array are randomly distributed and independent of each other; that is, the spatial distribution of bit-flips can be assumed to be random, as it generally differs from one array to another, within as well as between chips. Nevertheless, we ran an additional experiment with *non-uniform bit errors* to explore non-uniform bit-flipping scenarios. The table below shows the perturbed accuracy under a non-uniform / non-random attack (i.e., the first/last layers were implemented at $V_{min}$ and the others at sub-$V_{min}$ voltages). In this setting, the perturbed accuracy becomes higher than when attacking the whole model, due to fewer perturbed parameters. The experimental results also show that our NeuralFuse can still recover the perturbed accuracy.
|Base Model| Perturbed Acc|ConvL|DeConvL|UNetL|
|-|-|-|-|-|
|ResNet18|43.8%±12.4%|88.6%±0.8%|90.0%±0.4%|85.0%±0.5%|
|VGG19|41.5%±13.4%|86.0%±3.7%|85.8%±5.6%|84.3%±2.1%|
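For intuition, the i.i.d. random bit-flip model discussed above can be approximated by a short sketch. This is illustrative only, not the authors' error-injection code; it assumes weights are stored as unsigned 8-bit integers and flips each bit independently with probability `ber` (the bit error rate):

```python
import random

def inject_bit_errors(weights, ber, n_bits=8, seed=0):
    """Flip each bit of each quantized weight independently with
    probability `ber`, mimicking i.i.d. SRAM bit-cell failures."""
    rng = random.Random(seed)  # seeded for a reproducible error pattern
    perturbed = []
    for w in weights:
        for b in range(n_bits):
            if rng.random() < ber:
                w ^= 1 << b  # flip bit b
        perturbed.append(w)
    return perturbed
```

A non-uniform variant like the one in the table above would simply apply this injection to some layers (sub-$V_{min}$) while leaving the $V_{min}$ layers untouched.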
**8. (Adversarial Attacks on NeuralFuse)** This is really an interesting idea. However, we respectfully disagree that this omission represents a limitation of our work, as we believe it falls outside the immediate scope of our current work. Our primary goal is to mitigate the effects of random bit errors induced by low-voltage SRAM operation through input pre-processing, which is a distinct challenge from adversarial robustness. Nonetheless, we agree that integrating adversarial robustness with NeuralFuse would further enhance the overall reliability and security of the system.
[1] When Does Poor Power Quality Cause Electronics Failures? [[link]](https://tinyurl.com/bdfvj34f)
[2] Instability Reports on Intel Core 13th and 14th Gen Desktop Processors [[link]](https://tinyurl.com/3a5m7dsh)
[3] Zhang et al. SpArch: Efficient Architecture for Sparse Matrix Multiplication. IEEE International Symposium on High Performance Computer Architecture (HPCA 2020)
---
Rebuttal 2:
Title: Thanks for the rebuttal.
Comment: I really appreciate the authors' response, which addresses my most concerns. I update the rating from 4 to 5. Thank you.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for the encouraging response! We are glad our response could address your concerns. Thank you for the endorsement and recommendation of acceptance. | Summary: The authors train an input pre-processing module which aims to counteract the effects of random bit errors induced by low-voltage SRAM operation. They demonstrate the ability to avoid most accuracy drops on a handful of CNNs while operating in a 0.5-1% error regime.
Disclosure: I have reviewed this paper in the past. I stand by much of my previous review, as the paper has not changed substantially.
Strengths: The paper takes a rather unique approach to trying to counteract bit errors. Instead of fixing the model or the hardware (both of which have been studied extensively, as the authors admit), they pre-transform the input to avoid stepping in areas of the input space which are vulnerable to random bit errors in the model weights.
The paper is clearly scoped. The authors avoid some common pitfalls with energy cost savings (e.g., they account for the costs of their own technique), and the explanation of end-to-end considerations was probably necessary for readers not used to HW energy measurement. Overall, I felt that the authors did a solid job of balancing practicality (don't want to get too far into SRAM design tweaks) and depth (the energy simulation setup seems like a very reasonable configuration).
The authors' actual learned model is fairly simplistic. Perhaps others might see this as a weakness, but as I see it, the benefit of this paper is in following the authors' perspective flip to its logical conclusion. If further work wants to attempt something more sophisticated for $\mathcal{G}(x)$, great. Better surrogates for transfer? Sure. Plenty of room to follow-on later, if someone wants.
Ultimately, I see this work as a "perspective paper". The energy savings are not anything to write home about, but the approach is qualitatively different, and that makes it valuable. It's healthy to have a method that attacks the problem from a different angle, even if it doesn't quite stack up to state of the art. By analogy, it's worth exploring a very early automobile even if a horse can outrun it---maybe there's more down this road, maybe not. But the authors have at least shown that you can do *something* here, and the explanation and foundation they've provided is stable enough to build on.
Weaknesses: The authors' approach is not competitive with most existing HW techniques addressing low-voltage operation. This is less bad than it sounds. NeuralFuse bolts a model on to the front of an existing not-optimized-for-robustness model and tries to make the best of things. If one can add low-voltage-aware hardware modifications or low-voltage-aware models, one should. This doesn't make NeuralFuse a bad idea, but it probably does mean that HW, SW, and model builders should be trying to avoid the scenarios where NeuralFuse would be useful. This ultimately limits the practical utility of the approach.
I was a bit underwhelmed by the actual reported energy/accuracy values achieved. One of the elephants in the room that the authors skirt around is the zero accuracy degradation scenario. In order for a real-world operator to accept any kind of accuracy degradation, the power savings must be *enormous*, usually measured in factors (i.e. a 5x or 10x reduction for a 1% error loss would be reasonable). In order to get good power savings, you really want to be aggressive lowering the voltage, but BER skyrockets with even small adjustments. So this is kind of the name of the game in low-voltage fault tolerance: it's a lot easier to get energy savings by allowing accuracy drop. But the test that usually separates wheat from the chaff is when you dial that down to zero measurable accuracy drop. You can always say "well there's a trade-off a user can adjust", but in this case, the "trade-off" is dominated by one end of the spectrum, and if there's not a lot of savings in that area, then it kind of condemns the approach.
Technical Quality: 4
Clarity: 4
Questions for Authors: Feel free to address the zero accuracy degradation scenario above. (I'll note that there are plenty of other papers from the HW community that do solve this problem without accuracy loss, so while I accept that the authors' approach is different---and valuably so, I'm not willing to accept an argument that it's an unreasonably high bar. Just harder.)
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Addressed above. No societal concerns.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you so much for recognizing our work as unique and valuable and especially for pointing out its potential to inspire a number of follow-up works. We are thrilled that you enjoyed reading our paper and provided such encouraging reviews and constructive comments.
We address the answers to your concerns/questions in the following:
**1. (Limited Practical Utility)** We thank the reviewer for raising this viewpoint. This is really a well-thought-out comment! While accuracy degradation is indeed a concern that developers aim to avoid, our work primarily seeks to explore a novel perspective on handling bit errors induced by low-voltage SRAM operation. In particular, NeuralFuse serves as a robust error protection mechanism that can be employed in scenarios where voltage instability is a concern. Of course, NeuralFuse can be extended to save energy, and our experiments have demonstrated its efficiency in this aspect. That being said, our intention is, as the reviewer said, to demonstrate that pre-processing inputs can indeed mitigate accuracy drops, even if the energy savings are modest compared to state-of-the-art hardware techniques.
It is also important to recognize that our approach is *complementary* rather than *competitive* with existing hardware solutions. In scenarios where hardware modifications are not feasible or desired, our method offers a software-centric solution that can be readily applied to existing models. We believe this flexibility is a significant strength, as it provides an additional tool for developers facing stringent power constraints.
**2. (Zero Accuracy Degradation)** We agree that achieving substantial power savings without any accuracy loss is a challenging goal and a critical step for practical utility. Our current results indicate reasonable power savings with minimal accuracy loss.
At this stage, NeuralFuse serves as a proof-of-concept, and we are optimistic that with further research, more advanced models could significantly enhance the trade-off between energy savings and accuracy maintenance. One possible direction is to refine our pre-processing module to be more adaptive to the specific error characteristics of the low-voltage SRAM.
On the other hand, we are considering hybrid approaches that combine our input transformation with *lightweight* hardware modifications to further mitigate errors without compromising accuracy. This integrated approach could offer the best of both worlds—maintaining accuracy while still achieving meaningful power savings.
In summary, the value of our approach lies in its different attack angles. It serves as a foundation for future work that could potentially integrate more sophisticated pre-processing techniques with existing hardware solutions, thereby providing a more robust overall system.
---
Rebuttal 2:
Comment: Both comments sort of hit on the same topic, so I'll lump them together: that NeuralFuse is complementary rather than competitive with HW approaches (agree) and that it can be used in a hybrid approach with both (optimistic). I'd like to agree with the second part, but in practice, there's a lot of evidence to suggest that it's not that easy. There's been a fair number of HW papers that have tried lumping several techniques together (including hybrid SW/HW), and the results are sometimes that different techniques end up cannibalizing each others' gains. It's not *always* the case, so it's fine to be optimistic that NeuralFuse might dovetail perfectly with other techniques and allow zero accuracy degradation with even more aggressive voltage settings. But ultimately, that's a claim that needs experimentation and proof before it's valid. So I'm strongly with the authors when I say I also hope it's true, but we'll both need to see the evidence before knowing so.
> In summary, the value of our approach lies in its different attack angles.
I agree strongly with this statement, and it's largely the reason for my review score. I think there's a long way to go if NeuralFuse is to be proven useful in practice, but the paper demonstrates enough proof of a concept to allow the community to run with the idea if they so choose.
---
Rebuttal Comment 2.1:
Comment: We thank the reviewer for the prompt response! We are glad our response is in the same boat as the reviewer. We totally agree that "*our work allows the community to run with the idea*," and we are devoted to deploying our method into real applications.
Thank you for the endorsement and recommendation of acceptance. | Rebuttal 1:
Rebuttal: We sincerely appreciate all reviewers' valuable feedback and the efforts of the program chair and area chair. We are particularly pleased that reviewers found our paper well-written (aRGH), featuring a novel idea (aRGH, nPtF), highlighting energy efficiency benefits (nPtF, LgKi), providing thorough analysis (aRGH, nPtF), and being practical for real-world scenarios (aRGH, nPtF, e1Hd). We have addressed your specific questions and concerns. Additionally, we have included further experimental results on post-training quantization for Reviewer nPtF in the attachment. We are committed to addressing any further issues raised and improving our manuscript accordingly, so we look forward to your feedback on our response.
Pdf: /pdf/9611d485afd805ef2d775b72bed4478311bede1a.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Understanding and Minimising Outlier Features in Transformer Training | Accept (poster) | Summary: This paper is doing a lot and is a rare case of the abstract/title really underselling what the paper contains. Basically, the paper investigates outlier feature emergence (OFE) in LLMs and some potential fixes for it. The paper argues that such a study is important for both practical reasons (preventing OFE aids quantization) and theory reasons (understanding training dynamics). It clearly succeeds on both fronts, developing a transformer block that empirically reduces OFE and producing theoretical insights that touch on Signal Propagation, entropy collapse, optimization, and more.
Strengths: Originality: This work clarifies that some prior hypotheses about causes of OFE are inconsistent with experiments. It makes strides towards understanding OFE by combining ideas from Signal Prop and entropy collapse (e.g.), in ways that are insightful, novel, and excellently motivated. The related mathematical analyses are also very well done.
Quality: This is a thorough and careful study, making claims that are well supported by the experiments.
Clarity: This paper is extremely well written and contextualized relative to prior work.
Significance: This work will be appreciated by multiple communities for its wealth of insights. For example, the augmentation of norm-free Signal Prop-focused networks with QK norm to address entropy collapse (making norm-free transformer training competitive) -- which was inspired by the idea that vanilla norm layers help prevent entropy collapse -- is likely to have effects on transformer training procedures going forward.
Weaknesses: A lot of experimental details are not in the main text, which should be addressable with an extra page.
Experiments with other LLMs and at larger scales are unnecessary but would further increase the significance of this work. The authors addressed this in their limitations section.
Technical Quality: 4
Clarity: 4
Questions for Authors: Line 118 has a typo, I think “untrainable” was intended.
Line 193: isn't it normalized by the square of the second moment?
Line 314 has a typo in a figure reference.
Figure 9: make the solid and dashed lines represent the same thing for both SGD and Adam.
Consider moving more discussion of appendix F to the main text.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The limitations are well covered.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 10
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in carefully reviewing our work. We are pleased that the review demonstrates a clear understanding of our contributions, and in particular how our work builds on existing work to relate OFEs to areas such as signal Propagation and entropy collapse. We are also grateful that the reviewer has spent time digging deep into our paper’s appendix, beyond the main 9 pages.
Overall the review is extremely positive, and we are happy that our work was well received by Reviewer cUUo. Indeed, we share the belief that “this work will be appreciated by multiple communities for its wealth of insights” and that our insights will “have effects on transformer training procedures going forward.”
*Weaknesses* We will make sure to use an extra page to include more detail of our experimental setup, and thank the reviewer for spotting the typos, which we will amend. Regarding scale, we agree with the reviewer that scaling beyond 1.2B is “unnecessary but would further increase the significance of this work”. However, to address scale (which is also mentioned as a weakness by R7V85) in Figures 1* and 2* of the additional pdf page we present loss and kurtosis curves of LLaMa-style transformers trained at 7B parameters for 6B tokens. We did not have enough compute to perform hyperparameter tuning, but we still see that the OP block closely matches the performance of Pre-Norm, and is vastly better in terms of kurtosis. This mirrors our findings at smaller scale in the submission. We provide more context and details of the experimental setup in our response to R7V85.
Given the thoughtful nature of the review, we would like to use the remainder of our rebuttal to engage in further discussion with the reviewer on some topics/questions/experiments that may be of interest:
**Quantisation experiments** In Table 1* of the additional pdf page, we go back to our original motivation in studying OFs and assess how our various architecture and optimisation suggestions affect quantisation. We take the OPT-125m setting of Bondarenko et al 2023 [14] and vary architecture and optimiser. For architecture, we compare OP to Pre-LN and also the Gated Attention baseline of [14], and for optimiser choices we take those identified in Section 5 (increased LR, and increased Adam epsilon), as well as removing dropout regularisation present in [14]. After training on BookCorpus+Wikipedia in mixed FP16/32 we quantise the models to W8A8 (int8 weights and activations) and assess the quantisation error (in perplexity lost). We also present the average kurtosis across layers in Table 1*.
In Table 1* we see that our kurtosis metric for OFs is indeed highly correlated with quantisation error across different architectures and optimiser choices. For example, Pre-LN has consistently high kurtosis, and consistently catastrophic quantisation error. Moreover, the OP block has consistently low kurtosis across different optimisation hyperparameters, and also low quantisation error. Finally, the quantised Gated Attention baseline is better than Pre-LN but struggles with aggressive hyperparameters that improve FP16/32 performance but at the expense of OFs (like large LRs as identified in Section 5, and removed dropout reg). However, both kurtosis and quantisation are improved in Gated Attention when increasing Adam epsilon, as identified in section 5. We believe this experiment ties together our findings and motivation throughout the paper. We present a more detailed discussion of our quantisation results in the global response.
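The link between outlier features and quantisation error described above can be made concrete with a toy sketch. The following is our own illustration of a symmetric per-tensor int8 round-trip, not the exact W8A8 pipeline of Bondarenko et al. [14]; names like `int8_roundtrip` are hypothetical:

```python
import numpy as np

def int8_roundtrip(x: np.ndarray) -> np.ndarray:
    """Symmetric per-tensor int8 quantise-dequantise with a single scale."""
    scale = np.abs(x).max() / 127.0
    return np.clip(np.round(x / scale), -127, 127) * scale

rng = np.random.default_rng(1)
acts = rng.standard_normal(4096)
err_plain = np.mean((acts - int8_roundtrip(acts)) ** 2)

acts_of = acts.copy()
acts_of[0] = 100.0  # one outlier feature stretches the whole quantisation range
err_of = np.mean((acts_of - int8_roundtrip(acts_of)) ** 2)
```

The single outlier forces a coarse scale for every element, so `err_of` comes out orders of magnitude larger than `err_plain`, mirroring how high-kurtosis activations produce large quantisation errors.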
**Underselling in title/abstract** We thank the reviewer for their kind and positive words regarding underselling. Throughout the paper, we were very careful to avoid overclaiming in our wording. This is reflected also in our choice of title and abstract. The reviewer is likely correct that we may have undersold in the process.
**Stated limitations** We are glad that the reviewer appreciates our stated limitations, and sees the strengths of our submission based on their own merit. We chose to include a thorough limitations section in the spirit of honest research and progressing the field in terms of future directions. While we stand by this decision, we also accept that including our limitations may have come to our own detriment in the review process.
**Disparity across reviews** We note that there is an unusually high variance in the reviews for our work. In particular, many of the criticisms of Reviewer Lb1w are in direct opposition to those of Reviewer cUUo. We find such criticisms to be unfounded, and we have used several quotes from RcUUo’s review in our rebuttal to Reviewer Lb1w.
We hope reviewer cUUo will continue to support our work throughout the discussion period, and thank them once again for their careful review. We also welcome any additional feedback.
---
Rebuttal 2:
Comment: AC, I think this paper should be strongly considered for an oral presentation, and it clearly deserves at least a spotlight award.
---
All, I have read the other reviewers' comments and would like to maintain my high score. Based on the rebuttal, I increased my confidence from 3 to 4. I would also like to clarify that, to the best of my recollection, this is the first time in a couple of years that I have given a score higher than 8 to an ICML, ICLR, or NeurIPS submission.
I had a similarly high opinion of this submission before I saw the rebuttal. The rebuttal then showed that the original submission's findings hold at larger (7B) model sizes, and that the original submission's insights can (a) explain failures of quantization in a new set of experiments, and (b) provide hyperparameter/architecture tuning guidance that mitigates these failures.
---
> The reviewer is likely correct that we may have undersold in the process.
Maybe you didn't "undersell", but you actually delivered on what you described in the title and abstract much better than expected.
> In Table 1* of the additional pdf page, we go back to our original motivation in studying OFs... We believe this experiment ties together our findings and motivation throughout the paper.
I agree completely. It will make an excellent addition to the final version of the paper, tying everything together while corroborating key findings.
Title: Award quality
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: We sincerely thank the reviewer for championing our paper. We deeply appreciate it.
Best,
Authors | Summary: The paper tackles outlier features (OF), i.e. neurons with activation magnitudes that are significantly larger than average which can cause issues with quantization and low-precision training. The paper introduces several metrics for quantifying the existence of OFs and uses them to explore which design choices lead to the emergence of OFs. The authors then introduce an Outlier Protected Block (OP), similar to a standard transformer block, that works without explicit normalization and does not cause a loss in performance and reduces OFs. They then show the connection between signal propagation and OF and demonstrate that OP improves signal prop and therefore reduces OFs, as predicted by their theory. Lastly, the paper discusses how the optimization, in particular learning rate and adaptivity influence OFs.
Strengths: - The topic seems relevant and I feel like the authors made a sufficiently large contribution to the research of OFs.
- I liked the connection between signal prop and OFs in Section 4. The section was formal enough while still being easy to understand. I also think it's good that the authors included a discussion about potential issues with the theory (the off-diagonals on the left-hand side of eq 4) and added experiments to show that in practice, bad signal prop seems to lead to OFEs.
- While maybe not quite sufficient in width, the experimental design itself seems good and makes the results look plausible.
- The paper seems well written and is relatively easy to understand considering its topic and while some figures are a bit too small, I overall like their design.
Weaknesses: - The building blocks for the OP block should be explained in the appendix since the paper seems relevant for practitioners who might not be familiar with the literature. For example: formal definitions of pre- and post-norm, entropy regularization, and signal prop theory.
- My only significant concern is the width of the experiments. I do understand that training LLMs with 10s of billions of parameters is expensive, but it would clearly strengthen the message of the paper if OP could keep up in performance with standard normalized transformers at a much larger layer count, since it could be that this is only true for relatively "small" transformers. If training larger language models is impossible, maybe the authors could add experiments for ViTs. Transformers have obviously become very relevant in the vision domain and it would be very interesting to see if OFs are a problem in vision and, if so, if OP can help reduce them while offering similar performance.
- Section 5 felt more like an after-thought. While I think it is still interesting, I would personally much rather see a larger experimental discussion (as mentioned in the previous point) and have some of this content be moved to the appendix if necessary.
Technical Quality: 3
Clarity: 4
Questions for Authors: - The reference to Fig 28 in 314 seems off
Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: The authors discuss the lack of large-scale experiments and give experimental evidence for the validity of the conclusions obtained from eq (4).
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their constructive review. We are pleased the reviewer feels we “made a sufficiently large contribution to the research of OFs”, appreciates our experimental design, and finds our paper “well written and relatively easy to understand considering its topic”. We address the raised concerns in order.
**Background knowledge** Thanks for the suggestion. We assume familiarity with the terms “post-norm” and “pre-norm” as they are ubiquitous in transformer design, with relevant citations on lines 113-114. However, following the suggestion we will add more discussion to formalise these terms. For entropy regularisation we included a thorough discussion in appendix A, but will formalise the notation. For Signal Prop we believe there is sufficient relevant background and formalism in appendix A, and would be curious to know if the reviewer agrees.
**Scale** Scale is discussed by both Reviewers 7V85 and cUUo. We accept their shared point that larger scales would “strengthen the message” or “further increase the significance” of our work. However, we disagree with R7V85 that scale is a ‘significant concern’, and agree with RcUUo that scaling further is “unnecessary” for the paper. We are at an academic lab, and training at 1.2B parameters is already prohibitively expensive for most in academia, so we disagree that the scales in our paper are “relatively small”.
Moreover, the OP block itself is not the main contribution of this work. As on lines 164-171 and 346-348, as well as our title, our main focus is to understand and minimise Outlier Features, and we introduce the OP block to test the impact of normalisation layers on OFE. We believe we present an extensive study on this question, and that the scales we test at are sufficient to validate our other findings on OFs e.g. the links between OFs and signal propagation or optimisation choices.
Having said that, we managed, with much difficulty, to access compute to train 7B parameter models during the author response period. Due to short notice, we spent the weekend preparing/running this experiment in a setup based on HF nanotron, with tensor+data parallelism. As such, we did not tune hyperparameters, and our hyperparameters are from the default Pre-Norm setup (see below). Moreover, we use fixed residual gains of beta=0.05 and did not implement trainable betas in the OP, which as discussed in our response to Reviewer Yyp1 would give a small improvement in perplexity.
Despite this, in Figure 1* the OP block closely matches the Pre-Norm performance in loss over steps with a minimal gap at 7B scale, which we believe would be closed with hyperparameter tuning. The plot for kurtosis, averaged across layers, is Figure 2*. We see at 7B scale the OP block also has significantly lower kurtosis than Pre-LN, mirroring smaller scales. We implemented the OP block before the kurtosis logging, and when rerunning for logging our compute allowance unfortunately ran out. As such, Figure 2* stops after around 2B (of 6B tokens). Despite this, we clearly see in this case the OP block’s kurtosis has stabilised (around 8) whereas the Pre-Norm block’s kurtosis is still 450+ and increasing.
7B hyperparameters: These autoregressive language models have depth 32, and hidden width 4096, like the LLaMa-7B models. Standard LLaMa design choices (AdamW betas, RoPE, RMSNorm, cosine decay etc) are used apart from a standard GeLU MLP (instead of gated MLP), and we use max LR=3e-4 like in LLaMa-7B. We trained on the FineWeb-Edu dataset for 20K steps with batch size 72 and context length 4096, giving ~6B tokens.
We hope this experiment alleviates scaling concerns. As we wrote in lines 348-349, we had no reason to suspect the OP block would struggle at bigger scales because the OP block is designed with scaling principles like signal propagation in mind. We maintain this position. We will endeavour to empirically scale to further parameter and token counts, and also investigate other settings like ViTs beyond the language settings that are typically studied in the OF literature, in future work.
**Section 5** Thanks for this point. Section 5 is very important because OFs only arise during training. Thus, to understand OFs one must consider optimisation choices. This leads to several interesting findings as the Reviewer states, particularly the effect of adaptivity.
The brevity of Section 5 is due to the page limit. We point the Reviewer to Appendix F which discusses why the different optimisation suggestions we propose (smaller LR and bigger adam epsilon) result in reduced OFs, by breaking down the updates to the kurtosis into different moment statistics. Reviewer cUUo even suggested moving the discussion in Appendix F to the main paper.
Besides the results in the submission, we present in Table 1* a new experiment that studies the quantisation effect of our proposed optimisation and arch choices, as well as their effect on kurtosis. We take the setting of Bondarenko et al. [14] on OPT-125m, which trains on BookCorpus+Wikipedia and quantises post-training from mixed FP16/32 to W8A8 (weight-and-activation int8). We see that our optimisation interventions (bigger LR/Adam epsilon) have the same effect on kurtosis as found in Section 5, across different architectures, which further reinforces Section 5.
Table 1* also shows kurtosis is highly correlated to quantisation error e.g. our OP block has lowest kurtosis and also lowest quantisation error, across optimiser choices. A complete discussion of Table 1* is in the global response. We believe this experiment brings full circle our paper's narrative of studying the roles of different architecture/optimisation choices on OFs, in which Section 5 plays a crucial role.
We believe we have addressed the reviewer's concerns thoroughly in our response and would appreciate reconsideration of the score if so. We thank the reviewer again for their constructive feedback and would welcome any additional feedback.
---
Rebuttal 2:
Comment: I would like to thank the authors for the rebuttal.
**Background knowledge** Not too surprising given its title, Appendix A is written like a related work section rather than a background section. I think the paper would be more accessible if the most important concepts were properly defined in a unified mathematical notation and with the most important equations not being inlined such that they are easy to find. This does not have to include all concepts cited in Appendix A, but I think methods that are used in the paper, such as QK-Norm should be defined in formal mathematical notation. After reading Section 3 again, I also noticed that the OP Block is only described as a diagram, and via the 3 changes to the Pre-Norm block. I think the paper/appendix should include a proper mathematical definition of the forward pass of the OP block since I find it difficult to think about this only in terms of changes to a pre-norm block that is not even defined anywhere in the current version of the paper. The diagram helps but I believe mathematical notation is the best way to express this and could help people that try to reimplement the OP block on their own.
**Scale** I disagree with the authors that scale is not a concern when it comes to LARGE language models. That being said, I understand the issues with running large-scale experiments in academia and I understand that the analysis of OFE is the main contribution of this paper. I also want to thank the authors for running these additional experiments, and I think that demonstrating that the OP block works well for a 7B model and significantly reduces kurtosis is a strong contribution, since this is the standard size for small-scale LLMs that are being used in practice. That being said, I would encourage the authors to train at least one ViT for the camera-ready version, since a ViT-B or a ViT-L is much cheaper to train than LLMs and, to my knowledge, the phenomenon of OFEs has mostly been studied in the language literature.
**Section 5** I agree with the other reviewer that extending Section 5 in the main paper with some of the results from the Appendix and the rebuttal would be a good idea. I also liked the quantisation experiments here in particular.
In conclusion, I would recommend acceptance given the author's detailed rebuttal.
---
Rebuttal Comment 2.1:
Title: Thank you
Comment: We are pleased that the reviewer appreciated our rebuttal and has updated their score. We thank them once more for the (additional) constructive feedback, which we will incorporate to improve our paper. | Summary: The paper focuses on Outlier Features (OF) in neural networks, particularly transformers, where certain neurons exhibit significantly higher activation levels than others. OFs hinder model quantisation and their emergence during training is poorly understood. The study introduces quantitative metrics to measure OFs, such as kurtosis of neuron activation norms, and investigates how network architecture and optimization choices affect their occurrence. Practical insights are provided to mitigate OFs, emphasizing the importance of managing signal propagation during training. The Outlier Protected transformer block is proposed as a solution, removing standard Pre-Norm layers to reduce OFs without compromising convergence speed or stability. Overall, the paper advances our understanding of OF dynamics and offers strategies to address this challenging aspect of neural network training.
Strengths: 1. The paper introduces metrics such as kurtosis of neuron activation norms to quantify Outlier Features (OFs) in neural networks.
2. It provides insights into how architectural and optimization choices influence the emergence of OFs during transformer training.
3. The study proposes the Outlier Protected transformer block, which removes standard Pre-Norm layers to mitigate OFs without impacting training stability or convergence speed.
4. Findings are supported by empirical validation across various datasets, demonstrating the effectiveness of the proposed metrics and strategies.
Weaknesses: 1. For the OP module proposed in this paper, all the parameters $\beta$ are added before the residual connection, but not after the residual connection. Why is this done? Will this result in the inability to eliminate outliers in the input X of the first block?
2. Regarding the selection of parameter $\beta$, this paper states in the appendix's experimental details that it is selected in advance as a fixed parameter in the "CodeParrot" experiment, and as a trainable parameter in the "Languini" experiment. So how should we choose for other scenarios? In the second experiment, what are the update details of parameter $\beta$? Can $\beta=\mathcal{O}(1/\sqrt{\text{depth}})$ be guaranteed all the time?
3. In the comparative experiment of this paper on how the OP module can eliminate OFE, such as Figure 4, the OFs metric using the OP module has a trend of increasing with “Token seen”. How to explain this phenomenon? What is the effect for larger “Token seen”?
Technical Quality: 3
Clarity: 2
Questions for Authors: see Cons
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their time and effort in reviewing our work. We are pleased that the reviewer writes that our “findings are supported by empirical validation across various datasets, demonstrating the effectiveness of the proposed metrics and strategies”. The reviewer’s concerns largely centre around the beta residual scaling gain parameter, in the context of our OP block introduced in Section 3. We address the raised concerns in order:
**Beta before residual connection not after** The reason for beta before the residual is that as per Signal Propagation theory we care about reducing the relative weight of the residual branch relative to the skip branch in order to reproduce the initialisation benefits of pre-normalisation. We outline relevant citations for this fact in lines 127-129, and perhaps the single most relevant citation for this is De and Smith 2020 (which is citation [45] in the submission). Once the skip and residual branches have been summed, it is not possible to change their relative weightings.
In the inputs to the first block, e.g. the output of the embedding layers in Transformers, we do not tend to observe the most extreme OFE, as can be seen across different settings in Figures 13, 18, 21, 23, 25, 27, 30, 32 (note the lack of log scale in Figures 21, 23). Intuitively, this makes sense because the input to the first block is preceded by only a single linear matrix of parameters (the embedding matrix), whereas other layers are preceded by the trainable weights of the non-linear attention and MLP sub-blocks.
**Trainable beta or not** We are not the first to propose the residual scaling idea to enable scaling to large depths, so we based our experiments off existing practice. Many have proposed trainable beta initialised to a small value, like SkipInit (De and Smith 2020, [34]), NF-Nets (Brock et al 2021, [47]), Wortsman et al 2023 [10]. From Signal Propagation theory, Hayou et al 2021 [22] and Noci et al 2022 [25] show that $\beta=O(1/\sqrt{\text{depth}})$ is needed for a well-behaved infinite depth limit at initialisation, so the question of trainable beta is outside the current theoretical scope of the literature.
Empirically, we find that trainable betas lead to slight improvements in perplexity/loss, but stress that they are not essential for the OP block to be performant and scalable, as seen in our 7B scale experiment in Figure 1* of the additional pdf page. We provide a plot of the evolution of betas in our 1.2B Languini experiments in Figure 3* of the additional pdf page, where we see the trainable betas evolve during training but stay roughly in the range of their initialisation 0.1 for both OP and Pre-LN blocks. The slight improvement with trainable betas intuitively makes sense, because beta can be seen as scaling the learning rate for parameters on the residual branch (e.g. https://arxiv.org/abs/2205.15242 or He and Hofmann [28]), and so trainable beta amounts to block-wise adaptive LRs. We think analysing the training dynamics of beta (or proposing modifications to ensure it remains $O(1/\sqrt{\text{depth}})$) is an interesting direction for future work.
**Increasing kurtosis with tokens seen** We address the point on kurtosis increasing (to comparatively low values) with OP block in Section 4 (particularly on lines 236-245). There we show that kurtosis can be seen to closely track with signal propagation dynamics, which we attribute to the model creating structure in its hidden layer representation space to fit the task at hand. Identifying this relationship between signal propagation and OFs is one of our most significant contributions, as noted by Reviewers 7V85 and cUUo, in addition to the OP block.
For increased “tokens seen”, in our new quantisation experiment, Table 1*, we take the OPT-125m setting of Bondarenko et al. [14], which at 12B is more than double the number of tokens seen in Figure 4. Even with this increased training length, we again see that the OP-block has significantly lower kurtosis, across various training hyperparameters, than not only the Pre-LN block but also the Gated Attention method of [14], which was designed to reduce OFs. Moreover, we note that this reduced kurtosis directly translates to reduced quantisation errors with Post-training quantisation from mixed FP16/FP32 to int8 weights-and-activations, which motivated our study of OFs in the first place.
We believe we have addressed the reviewer's concerns in our response and would appreciate reconsideration of the score if so. We thank the reviewer again for their review and would welcome any additional feedback.
---
Rebuttal Comment 1.1:
Comment: Thanks for the effort in addressing my concerns. This confirms my score.
Strengths: 1. The definition of Outlier Features (OFs) is clear and quantitative with kurtosis score. It allows for a more objective and precise analysis of OFs across different neural network architectures.
2. The paper presents a non-trivial amount of empirical study analyzing the impact of various architectural and optimization choices on OFs.
Weaknesses: 1. Writing Quality: The paper is poorly written, with comments and claims listed in a piecemeal fashion rather than being unified into a structured story. Glitches: on line 59, the phrase "we matrix multiply" is unclear. The writing overall lacks fluidity and coherence across sections.
2. Insufficient Evidence for Outlier Protected Block (OP): In Section 3, the authors claim that the Outlier Protected Block (OP) reduces Outlier Features. However, the evidence provided is insufficient to support this contribution. The figures show a rough trend, but more thorough investigation is needed:
a) Is this observation consistent across all layers? (Figure 4 shows different trends in "final layers" compared to "early layers.")
b) Does the claim hold under different training schemes?
c) Are there existing baselines that achieve similar effects?
d) What are the gains of using the OP block? How much does it improve network quantization efficiency?
3. Relation Between OF and Signal Propagation: In Section 4, the authors assert that the OF phenomenon is related to Signal Propagation. However, both metrics are mathematically defined by matrix X in different ways, so it is unsurprising that they are related. This diminishes the novelty of the finding.
4. Lack of Depth in Training Design Analysis: In Section 5, while the authors provide several training designs and analyze their effects on OFE, the analysis lacks depth. The paper should offer deeper insights into why these effects occur rather than presenting rough findings and claims.
Overall, the paper feels more like a technical report than a conference paper ready for publication. It lacks a solid contribution and fails to adequately answer the research question of "why OFs emerge during training." The authors acknowledge this in Section 6: "Perhaps the most important is understanding how signal propagation evolves during training, and which factors affect this." This is the core question that the paper should address comprehensively.
Technical Quality: 2
Clarity: 1
Questions for Authors: See weakness.
Confidence: 4
Soundness: 2
Presentation: 1
Contribution: 1
Limitations: Yes, in Section 6.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank Reviewer Lb1W for their time. However, the reviewer has made several assertive yet unsubstantiated criticisms of our work, which oppose those of all the other reviewers. We are concerned by these criticisms as they lack constructive feedback and, in our view, are overly critical. All other reviewers gave scores leaning towards accept, with RcUUo giving the maximum score of 10, in contrast to RLb1W’s score of 3. We hope our rebuttal will address these points effectively and convince RLb1W of our paper’s merit, hopefully leading to an improved score.
**Writing Quality**: RLb1W claims the paper is “poorly written” and “overall lacks fluidity and coherence across sections”. This is in stark contrast to R7V85 and RcUUo who praise the paper for being “well written”, and “extremely well written”. RLb1W provides no evidence for this purported poor writing, besides line 59 where the verb form of the noun “matrix multiplication” is used. As such, this criticism is unsubstantiated. With regards to fluidity/coherence, we gave significant thought to the paper’s structure/flow, as seen via the key takeaway boxes between sections. The effect of this, quoting R7V85, is: “the paper is relatively easy to understand considering its topic”.
**Insufficient Evidence for OP Block**: We contrast RLb1W’s concern with RYyp1 and RcUUo, who write: “findings are supported by empirical validation across various datasets, demonstrating the effectiveness of the proposed metrics and strategies”, and “this is a thorough and careful study, making claims that are well supported by the experiments”.
More concretely, Fig 4 plots all different layers (the legend samples layers to save space) which addresses weakness 2a and we discuss later layers and their connection to signal propagation in lines 236 to 245 of the submission. Besides Fig 4, we show the effectiveness of the OP block for OF reduction in five other Figs (2, 13, 14, 32, and 38) which constitute extensive ablations across datasets, entropy regulation mechanism, centring in kurtosis metric, and the effect of norm in MLP. This answers weakness 2b regarding training schemes, and so we view weaknesses 2a+b to be unfounded. We respond to weakness 2c-d (and provide additional evidence for 2b) in the next point.
Also, in Figures 1* and 2* of the additional pdf page we show the effectiveness of the OP block at 7B scale.
**Quantisation Experiments** In response to weakness 2c-2d, we present new quantisation results in Table 1* of the additional pdf page, which compares OP to Pre-LN and the best baseline (Gated Attention) from Bondarenko et al [14]. To our knowledge [14] is the only existing paper that has studied architecture and OFs. We test these models across training schemes, and find the OP block has significantly lower kurtosis across different optimiser choices. This lower kurtosis directly translates to improved quantisation, as seen in the significantly smaller drops in performance when going from mixed FP16/32 to W8A8 (weight and activation in int8) with OP, compared to Pre-LN and Gated Attention. We provide a complete discussion of the results of Table 1* in the global response.
**Lack of novelty of relation between OF and Signal Propagation**: To us, this is again unfounded: according to RcUUo, we "make strides towards understanding OFE by combining ideas from Signal Prop and entropy collapse (e.g.), in ways that are insightful, novel, and excellently motivated.” We take it as a strength, not a weakness, that RLb1W finds the link between OF and Signal Prop unsurprising, as it implies the significant effort taken to make the paper accessible was successful in Section 4 (going back to the point on writing quality). This criticism also contradicts R7V85, who “liked the connection between signal prop and OFs in Section 4. The section was formal enough while still being easy to understand." We add that a result that seems unsurprising in hindsight is not necessarily unsurprising a priori.
**Lack of depth in training design analysis**: We disagree: due to the page limit, we could not fit all of our analysis of optimiser choices into Section 5, but this does not mean our analysis lacks depth. As also discussed in line 339, we refer the reviewer to Appendix F, where we dive deeper into the reasons why our proposed optimisation changes lead to reduced OFs, by breaking down the kurtosis updates into different moment statistics. Reviewer cUUo read Appendix F and suggested we include it in the main paper.
Moreover, our claims regarding the importance of large adaptive learning rates for OFs are not “rough findings”, but instead substantiated across multiple architectures and datasets. Namely, Figures 7, 24 and 25 show the effect of large learning rates, and our claims regarding adaptivity with Adam epsilon are supported by Figures 8, 27, 30. As such, we again fail to see the basis of this criticism.
In addition to the existing depth of analysis of optimiser choices in the submission, we refer to our new quantisation experiment in Table 1*, which provides further evidence of the effect of optimiser choices we identify in Section 5 in terms of OFE, on a new dataset and across multiple architectures.
**More like a technical report than a conference paper ready for publication**: We are completely surprised by this assertion. The quote from lines 353-354 is taken out of context, and is thus misleading. The missing context clearly refers to theoretical understanding; we give many empirical insights into signal propagation training dynamics in Section 4. We leave RLb1W with quotes from R7V85 and RcUUo, who write: “I feel like the authors made a sufficiently large contribution to the research of OFs” and “this work will be appreciated by multiple communities for its wealth of insights”.
We believe we have addressed the reviewer's concerns thoroughly in our response and would appreciate reconsideration of the score if so. We would also welcome any additional feedback.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. However, it appears that the revision did not adequately address the core concerns, instead referencing unhelpful comments from other reviewers. The citations from other evaluations do not substantiate your responses unless they are accompanied by supporting evidence. Consequently, my concerns remain unresolved, and I have retained my original score.
1. **Unsatisfactory Writing Quality**: The manuscript suffers from numerous stylistic and structural weaknesses, making it difficult to follow. Starting in Section 3, lines 87-103, the paragraph introduces background on normalization but fails to signal this topic in the opening sentence. Lines 104-117 abruptly begin with "In Figure 2" without a transition from the previous paragraph, leaving the reader unclear about the figure's context... Limited by time and energy, the reviewer cannot point out all the places where the writing can be improved, since there are too many. Most of the paragraphs in the document lack a coherent top-down structure.
2. **Insufficient Support for Architectural Contribution**: The "Outlier Protected transformer block" is presented as a highlighted innovation, yet its description is confined to a single paragraph (Lines 146-163). For an effective presentation of a new architecture, consider the detailed exposition found in Section 4 of Reference [14] (frequently mentioned by the author). If there are substantial results supporting the proposed architecture, they should be prominently featured, akin to how Tables 2 and 3 are presented in [14]. The lack of a unified section detailing the experimental approach, akin to Section 5 in Reference [14], makes it challenging for the readers to effectively understand the context and validity of your findings.
3. **Insignificant Observation**: My concern regarding the relationship between Outlier Features (OF) and Signal Propagation has not been addressed. You provide several opposing subjective comments but fail to engage with the core issue. Specifically, you claim a significant observation that Signal Propagation relates to OF, yet both measures you discuss (Kurtosis and the Gram matrix) are mathematical functions of the same variable, X. The inherent mathematical relationship between these functions should not be surprising, and you have not clarified why this finding is noteworthy.
4. **Insufficient Fundamental Insight**: The title and abstract of your paper promise a deep understanding of why OFs emerge during training. However, the empirical analysis of optimization choices in Section 5, while useful, does not address this core question from a fundamental perspective. An effective discussion should integrate more profound insights into the nature and implications of OFs within neural network dynamics. To enhance the paper, one possible analysis technique could be based on the neural tangent kernel or training dynamics. For guidance on conducting this type of rigorous research, I recommend referring to [A,B,C], which illustrate how scholars **deeply** uncover the underlying causes of phenomena during training.
Overall, the manuscript requires significant improvements in writing, experimental design, and technical depth to meet the publication standards.
[A] Kumar, A., Raghunathan, A., Jones, R. M., Ma, T., & Liang, P. Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. In International Conference on Learning Representations 2022.
[B] Kou, Y., Chen, Z., Chen, Y., & Gu, Q. Benign overfitting in two-layer ReLU convolutional neural networks. In International Conference on Machine Learning (pp. 17615-17659). PMLR 2023.
[C] Mahankali, A. V., Hashimoto, T., & Ma, T. One Step of Gradient Descent is Provably the Optimal In-Context Learner with One Layer of Linear Self-Attention. In The Twelfth International Conference on Learning Representations 2024.
---
Reply to Comment 1.1.1:
Title: Additional author response
Comment: We thank Reviewer Lb1w for engaging during the rebuttal period. We respond to Reviewer Lb1w with the following points:
**On the depth of the experimental analysis**
In their original review, Reviewer Lb1w stressed the importance of quantisation experiments for a more thorough analysis of the effectiveness of the OP block (with which we agree). We have since run the experiments, reported in Table 1* in the attached one-page pdf (see details and discussion in the global response). In summary, the proposed modifications do enhance low precision training, which nicely correlates with the observation of fewer OFs. We would appreciate it if the Reviewer acknowledged this and potentially reassessed their considerations on the depth of our analysis.
**Insufficient Support for Architectural Contribution**
Given what we said above, we politely disagree that there is not enough experimental evidence for the architectural modification. In particular, with regards to the four axes mentioned in the initial review, we believe that (a), (b) and (c) are resolved in the original rebuttal and references to the appendix. Finally, see above for the quantization experiment, requested in (d).
We also add that Reviewer Lb1w has raised new concerns, relating to the description of the OP block and lack of unified experimental section, in their response that were not raised in their original review. Reviewer 7V85 raised similar concerns in their response to our rebuttal, which we have already incorporated into our revised work.
Finally, we have demonstrated the efficacy of the OP block at 7B scale in Figures 1* and 2* of the additional rebuttal pdf, and would appreciate it if the Reviewer acknowledges this.
**Insignificance of Signal Propagation link**
The reviewer has raised the “insignificance/noteworthiness” of the OF and Signal Prop link as a new concern in their most recent response. The original review (weakness 3) criticised the “surprisingness/novelty” of the finding, which we addressed in our rebuttal.
The significance/noteworthiness of the relation between OF and Signal Prop is that it: (i) establishes a connection between two previously unconnected sub-branches of the literature which allows ideas from both to help drive progress in the other e.g. the identification that normalisation layers make Signal Prop worse during training (Figures 5 and 6), and (ii) helps us motivate why choices from the Signal Prop literature that improve Signal Prop (like downweighted residuals [22, 25] or shaped activations [63, 24, 23]) also help to reduce OFs, which we provide empirical evidence for in Figures 5 and 21.
**Insufficient Fundamental Insight**
As we openly state in our limitations section, there are a lot of interesting future directions for mathematically rigorous studies that would build on the results of our paper, but are outside the scope of our work. We make the distinction between “mathematically rigorous” and “fundamental” as a lot of empirical results in deep learning are fundamental in our eyes, e.g. the Edge of Stability [30].
The reviewer’s suggestion of the “NTK” or “training dynamics” as theoretical analysis techniques are insufficient for OFE because the NTK limit is incompatible with feature learning, which is necessary for OFs, as we state in lines 254-256 of the submission. We are unsure what is meant by “training dynamics” as an analysis technique here.
We do not wish to come across as overly harsh on the Reviewer Lb1w, but we find the phrasing “[..] which illustrate how scholars deeply uncover the underlying causes of phenomena …” to be somewhat inappropriate, as it seems to hierarchically separate such “scholars” and the authors of this paper.
**Unsatisfactory Writing Quality**
We have already significantly improved the manuscript, particularly with regards to a more extensive description of the OP block. The reviewer raises 2 additional concrete examples of what they perceive as poor writing: the openings of lines 87-103 and lines 104-117. We thank the reviewer for these examples but politely disagree and view these as subjective stylistic preferences rather than substantial criticisms that would justify the review’s score. We appreciate that it takes time and energy to review a paper, but strongly believe there is a burden of proof here and that the claimed criticisms of writing quality need to be backed up with more evidence to be warranted.
**Comments from Other Reviews**
Finally, we think it is somewhat inappropriate to call the comments of the other reviewers “unhelpful”: the fellow reviewers have read the same manuscript and are entitled to their own viewpoints. It is natural for there to be disagreement between reviewers, but the level of disagreement with our paper this far into the author-reviewer discussion period is unprecedented. With our rebuttal and additional response, as well as the comments of the other reviewers, we once again politely ask Reviewer Lb1w to reconsider their appraisal of our work. | Rebuttal 1:
Rebuttal: We thank the reviewers for their time in reviewing our work. We are encouraged that three out of four reviewers gave scores leaning towards accept, with an average score of 6 across all four reviews. In particular, reviewer cUUo gave the highest possible score of 10, commenting: “this work will be appreciated by multiple communities for its wealth of insights”.
There is, however, a disappointingly large variance in the scores, with Reviewer Lb1w giving a score of 3 and writing that the paper feels “more like a technical report than a conference paper ready for publication”. We believe this, and several of Reviewer Lb1w’s other criticisms, are unsubstantiated with no evidence, which we have discussed at length in our individual response to Reviewer Lb1w. Moreover, many of Reviewer Lb1w’s criticisms are in direct opposition to praise received from all three other reviews. We urge the reviewers to reach a more consistent appraisal of our work during the discussion period. Through our author response, we hope to convince Reviewers Lb1w, 7V85 and Yyp1, to raise their scores.
In the individual responses, we believe we have addressed all concerns of each individual reviewer, so in the remainder of the global response we would like to draw attention to additional results in the rebuttal that may be of shared interest to the reviewers.
*Notation for additional results* Whenever we write an asterisk after a Figure or Table, e.g. Table 1*, this means we refer to a Figure/Table from the additional pdf page. Unasterisked Figures/Tables refer to the original submission.
**Quantisation Experiment and Section 5**
In Table 1* we present an experiment testing the combined effects of our different architecture and optimisation choices in affecting quantisation errors. This goes back to our original motivation in studying OFs, and how we can minimise their emergence with different design choices.
In Table 1*, we take the 125m scale OPT setting of Bondarenko et al. [14], which trains models in standard mixed FP16/FP32 precision on BookCorpus+Wikipedia for around 12B tokens seen. Post training, we quantise to int8 weight-and-activations (W8A8). We use the same quantisation recipe and reproduce their results. We present results with standard deviations from 3 seeds in the post training quantisation step. We note that the GPTQ paper https://arxiv.org/abs/2210.17323 shows that post training quantisation is harder at smaller scales like this 125m setting than at larger scales.
Table 1* compares both the standard precision performance and also quantisation performance across architecture and optimiser choices. We compare 3 different architectures: 1) standard Pre-LN, 2) the Gated Attention baseline of [14] (which was their best baseline), and 3) our OP block, as well as 4 different optimisation setups that are applied cumulatively, one on top of another: 1) the default hyperparameters of [14], 2) removing the dropout regularisation, 3) increasing the maximum LR from 4e-4 to 1e-3, and 4) increasing Adam Epsilon from 1e-8 to 1e-5.
Optimiser choices 2) and 3) were designed to improve standard precision performance, albeit potentially at the detriment of quantisation performance due to OFs (as suggested by our findings with increased LRs in Section 5). Optimiser choice 4) was chosen to reduce OFs following our contributions in Section 5, and thereby hopefully improve quantisation errors.
As seen in Table 1*, our findings throughout the rest of our paper are validated. Firstly, the kurtosis is seen to be highly correlated with quantisation error: for example, the Pre-LN model has consistently high kurtosis, and also consistently catastrophic performance at W8A8 (>45 perplexity increase in W8A8 across all optimiser settings). Secondly, our OP block has consistently low kurtosis (around or below 10), and this directly translates to low quantisation error (the difference in perplexity between standard mixed precision and W8A8 is around or below 0.7 across all optimiser settings). This low kurtosis/quantisation error of the OP block holds true even for aggressive optimiser hyperparameters like large LRs that increase kurtosis, but also improve standard mixed precision performance. Finally, the baseline of [14] struggles when dropout is removed and a large learning rate is used, with kurtosis of ~30 and large quantisation error around 2-3 perplexity, but our suggestion of increasing Adam epsilon in Section 5 reduces the kurtosis back down to 15, which translates to a quantisation error of 0.77 perplexity.
Indeed, the best quantised W8A8 model in this case is actually the Gated Attention baseline of [14] with our optimisation choices from Section 5, which is very slightly better than our OP block (15.54 vs 15.62 perplexity). In relation to comments from reviewers 7V85 and Lb1w, this highlights that Section 5 is a fundamental and key section of the paper, and that our contributions to understanding the impact of optimisation choices on OFs are also important, in addition to our architectural contributions.
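For intuition on the metric itself, a feature-wise kurtosis of activation scales can be computed in a few lines of numpy. The sketch below is illustrative only: `activation_kurtosis` is a hypothetical stand-in, and its exact centring need not match the definition used in the paper.

```python
import numpy as np

def activation_kurtosis(x):
    """Kurtosis of per-feature activation scale: a rough proxy for
    outlier features (illustrative; not necessarily the paper's exact
    centring or definition)."""
    # x: activations of shape (tokens, features)
    s = np.mean(x ** 2, axis=0)           # per-feature mean square
    m = np.mean(s)
    var = np.mean((s - m) ** 2)
    return np.mean((s - m) ** 4) / (var ** 2 + 1e-12)
```

A handful of dominating feature channels drives this ratio far above the Gaussian baseline of ~3, which is the sense in which low kurtosis correlates with small W8A8 degradation.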
**Scaling experiment**
We agree with Reviewer cUUo that scaling further beyond 1.2B parameters is “unnecessary but would further increase the significance of this work”. To address scale, over the weekend of the response period, we were able to access resources to run limited experiments at 7B, which we present in the additional pdf page. In Figure 1* we see that the OP block closely matches standard Pre-Norm in terms of loss curves, and in Figure 2* we see that the Pre-Norm block suffers from significantly worse OFs, in the sense that its kurtosis is much larger. These results mirror our findings at smaller scales in the submission. Further details and context can be found in our response to Reviewer 7V85.
Pdf: /pdf/364f510b824033366597c9990c91dd448ec0a010.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
The Representation Landscape of Few-Shot Learning and Fine-Tuning in Large Language Models | Accept (poster) | Summary: This work focuses on understanding supervised fine-tuning (denoted SFT, where parameter weights are changed) and in-context learning (denoted ICL, where weights remain frozen and the adaptivity is done through a few shots in prompts), in the context of modern LLMs. In doing so, the authors propose to apply the Advanced Density Peaks algorithm (ADP) to layers in the transformers that are used in a question answering task. The authors presented a wide range of plots and discussion, observing that both few-shot learning and supervised fine-tuning show complementary behavior where earlier layers and late layers focus on different levels of information.
Strengths: This manuscript could gain importance due to the recent advances in modern LLMs and the widely adapted practice of supervised fine-tuning and in-context learning. The authors make great efforts showing a wide range of analysis (although in a less clearly organized manner).
Weaknesses: I have a few major concerns regarding the manuscript in its current form.
This manuscript has several issues in clarity. First, the description of the ADP algorithm, which is the cornerstone of this work, remains largely hard to follow, and I had to refer to the original ADP paper to figure out what the authors propose. Also, in the results section, a wide range of observations is discussed, but such discussions are largely scattered, and I found it hard to gain insight from them.
There are also issues with the originality and significance of this manuscript. This is partially due to the issues in clarity mentioned above and also the lack of significant contributions. Technically, this manuscript adapts the ADP algorithm to understand the representation in each layer of the transformer. This is not a significant technical contribution. This, alone, is not a concern, since many great analysis works use simple methods to reveal important insight. However, in this manuscript’s case, the insights are less clear, and a reader may find it difficult to draw inspiration from the discussions.
In an ideal form, the manuscript would contain clear, organized discussion revealing interesting insights. A non-exhaustive list of topics that could be studied includes semantics, fine-tuning dynamics, theoretical bounds, and practical rules-of-thumb, or the lack thereof in modern models. I would encourage the authors to consider such aspects.
Nit-picking:
Table A1: last columns for Llama-3-70b have formatting issues (should uniformly use percentage or decimal).
Technical Quality: 2
Clarity: 2
Questions for Authors: N/A
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time spent reading our manuscript and the concerns you raised, which will help improve our study and make its exposition clearer. We will address all your points below.
> *This manuscript has several issues in clarity. [...]*
We agree with you that the description of the ADP algorithm can be improved, and we plan to do so in the revised version of the paper. We plan to modify the ADP exposition as follows:
1. To facilitate understanding of the algorithm's logical flow, we will split the description of the ADP algorithm into four paragraphs: “intrinsic dimension estimation”, “density estimation,” “density-based clustering,” and “hierarchical clustering of the peaks with WPGMA.”
2. We will focus on the algorithm's assumptions and give an intuitive picture, adding a figure to represent the geometrical intuition behind the ADP.
3. We will include all the technical details in the appendix (we already did that in Sec. B) to avoid making the exposition in the main paper too long and difficult to follow.
>*discussions are largely scattered [...]*
The main messages of our work, as explicitly stated in the bullet points of the introduction, are the following:
1. Across the hidden layers of LLMs, the probability landscape changes in a two-phased manner, separated by a sharp transition in the middle of the network (1st bullet point, and Sec 3.1)
2. When LLMs are adapted to solve a downstream task, ICL and SFT exhibit complementary behavior with respect to these two phases: \
2a. before the transition, ICL induces an interpretable organization of the density peaks, which is consistent with the semantic information of the data (2nd bullet point, and Sec 3.2) \
2b. SFT modifies the geometry of the probability density after the transition, where a few homogeneous density peaks emerge (3rd bullet point, and Sec 3.3)
In the revised version of the manuscript, we will make this correspondence clearer by reducing the bullet points from 4 to 3 so that they can explicitly match each section of the results and assign more explicit titles to Sec. 3.2 and 3.3.
> *There are also issues with the originality [...]*
We believe that our study has a number of methodological and substantial novel contributions: \
While most of the previous works compare ICL and finetuning in terms of performance on downstream tasks (see e.g. [1] and refs. therein), we propose a very different analysis of the changes in the geometry of the hidden layers induced by these two paradigms;\
While most previous studies trained linear probes to discover semantic information encoded in hidden layers (see [2-3] among others), we propose and use a fully unsupervised nonparametric approach to study the probability landscape in LLM, finding interpretable semantic relations between the modes of distribution;\
Finally, we validate our claims on state-of-the-art models like Llama-3, released only a few months ago (April 2024), including a study of the 70B parameter models, which are often overlooked.
>*Topics that can be studied include semantics, fine-tuning dynamics, theoretical bounds, practical rule-of-thumb [...].*
We appreciate your perspective on this matter. However, we'd like to respectfully clarify that our work does indeed address the *semantics* and *fine-tuning dynamics* in modern models. In addition, these analyses allow us to give clear-cut practical rules-of-thumb.
*Semantics*:\
In Section 3.2, we explore the extent to which the contextual embeddings encode a global property of prompts, specifically the topic of the question. We show that in ICL, the nearby density peaks contain the embeddings of
sentences denoting similar topics (e.g. math, physics, ...), and their global hierarchical organization closely mirrors the semantic similarity of the subjects (see lines 182-185, 212-214, 229-236, and Figure 4).
In line 44 and bullet points 1, 2, and 4, we explicitly state that our work also addresses semantic analysis.
*Fine-tuning dynamics*:\
Figure 6 and lines 275-284 are dedicated to the analysis of the fine-tuning dynamics through the lens of the representation similarity between hidden layers before and after fine-tuning.
This analysis shows that most training steps are spent changing the last layers rather than the early ones (see lines 282-284).
In Figure 3 of the accompanying PDF, we show that as the late layers become more dissimilar to their initial state, their ARI with the letters correspondingly increases. This change results from the onset of a few homogeneous density peaks on the answer label. \
This analysis addresses another point raised by the reviewer, namely, the discussion of practical rules of thumb.
*Practical rule-of-thumb*:\
Since fine-tuning affects the second half of the network, our analysis suggests a possible strategy to adjust the ranks of the LoRA matrices. Several studies [4-6] attempt to adapt the ranks of the LoRA matrices based on different notions of matrix ‘importance.’ Our study of the dynamics of similarity (Fig. 6) gives a principled measure of importance: the layers that change the most should have higher ranks. We are currently investigating these aspects in our lab and will include a brief discussion of the practical application of our study in the revised version of the manuscript.
We thank you again for pointing out clarity issues since these will allow us to improve our manuscript. We hope our responses can alleviate your concerns regarding the clarity of our results. If so, we hope you will consider raising your score.
We’ll be happy to reply to any further questions.
[1] Mosbach et al., Few-shot Fine-tuning vs. In-context Learning: A Fair Comparison and Evaluation\
[2] Conneau et al, What you can cram into a single vector: Probing sentence embeddings for linguistic properties\
[3] Hewitt and Liang, Designing and interpreting probes with control tasks.\
[4] Zhang et al., AdaLoRA\
[5] Valipour et al., DyLoRA\
[6] Ding et al., Sparse Low-rank Adaptation of Pre-trained Language Models.
---
Rebuttal 2:
Comment: Thanks for the detailed rebuttal.
Having read the response and the conversation between the authors and other reviewers, I'm happy that the authors propose revisions to deal with some issues in clarity. I'm also glad that the extensive set of experiments has been well recognized by all reviewers. However, a few of the issues I raised still exist, especially concerning how scattered observations are grouped and discussed in a coherent and concise way -- which is important for readers in the community to gain useful insights. As a result, I revised my score to reflect the positive side of the authors' articulation in the rebuttal, with the hope that the promised revision will be realized to its full extent.
---
Rebuttal Comment 2.1:
Comment: We thank you for acknowledging the extensive set of experiments that support the main message of our contribution and for increasing your score.
We take your concerns on clarity seriously and are committed to include the promised improvements in the camera ready version of this work. | Summary: The paper explores the internal dynamics of LLMs when subjected to ICL and SFT strategies. Despite achieving comparable outcomes, the paper reveals that these methods result in distinct representation landscapes within the model. ICL fosters a hierarchical organization of representations based on semantic content in the initial layers, whereas SFT yields a more diffuse and semantically mixed landscape. In the latter stages of the model, fine-tuning leads to representations with sharper probability modes that better capture answer identities compared to the less defined peaks produced by ICL.
Strengths: The paper takes a novel approach by analyzing the geometric properties of the hidden representations in LLMs, offering new insights into the workings of ICL and SFT. The use of the Advanced Density Peaks algorithm and the systematic investigation of data transformations across hidden layers demonstrate an insightful methodological framework.
Weaknesses: See Questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. Do few-shot examples significantly impact the analysis results? If so, in what aspects are these impacts manifested?
2. Is the method applicable to studying tasks that generate longer texts?
3. What do you think the specific practical applications of the findings might be? For example, improving ICL or SFT methods, or perhaps combining the two in some way?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the time spent reading our manuscript, for your general appreciation of our work, and for your questions. We will answer point by point below.
> *1. Do few-shot examples significantly impact the analysis results? If so, in what aspects are these impacts manifested?*
While the order and identity of the shots can slightly affect few-shot accuracy, they have a smaller impact on the clusters found by the ADP algorithm and, more generally, on our results. In Table A1 in the appendix of the main paper, we show that the 5-shot accuracy can change from 66.7 to 66.2 when changing the few-shot order in Llama 3 8b.
We performed an additional experiment to test the impact of the order and identity of the shots on the ARI with the subjects. Figure 3 in the attached pdf shows the results of this test. We measured the standard deviation of the ARI over 5 runs with different shot orders, obtained by shuffling the MMLU dev set (red profile), and with different shot identities, sampled from the union of the dev and validation sets. On average, the standard deviation is about $\sim0.02$. This means that our findings are robust to changes in the prompt, since the number of clusters and their internal composition remain consistent across different runs.
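For concreteness, the ARI compares the density-peak cluster assignments against the subject labels; the standard pair-counting formula can be implemented in a few lines of stdlib Python (shown here only for illustration, not as our exact implementation):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Chance-corrected agreement between two partitions (pair-counting ARI)."""
    n = len(labels_a)
    contingency = Counter(zip(labels_a, labels_b))
    sum_cells = sum(comb(c, 2) for c in contingency.values())
    sum_a = sum(comb(c, 2) for c in Counter(labels_a).values())
    sum_b = sum(comb(c, 2) for c in Counter(labels_b).values())
    expected = sum_a * sum_b / comb(n, 2)
    max_index = (sum_a + sum_b) / 2
    return (sum_cells - expected) / (max_index - expected)
```

ARI equals 1 for identical partitions (up to label permutation) and is ~0 under random assignment, so a standard deviation of ~0.02 is small on this scale.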
> *2. Is the method applicable to studying tasks that generate longer texts?*
The method can be applied to sequences with more than one token. However, since the clustering and intrinsic dimension estimation rely on the Euclidean distances between samples, it is important to map the generated sequence to a common embedding.
One strategy is to average over the sequence tokens as done, for instance, by Valeriani et al. [1] who measure the intrinsic dimension of biological sequences of varying length by averaging over the sequence axis.
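A minimal sketch of this mean-pooling strategy (dimensions are illustrative): averaging over the sequence axis maps sequences of any length to one fixed-size vector, so the Euclidean distances used by the clustering and intrinsic-dimension estimators stay well defined.

```python
import numpy as np

def pool_sequence(hidden_states):
    # hidden_states: (seq_len, d) per-token hidden representations of one
    # generated sequence; the mean over the sequence axis gives a single
    # d-dimensional embedding regardless of seq_len.
    return hidden_states.mean(axis=0)

# Two sequences of different lengths land in the same 64-dim space.
a = pool_sequence(np.random.default_rng(0).normal(size=(12, 64)))
b = pool_sequence(np.random.default_rng(1).normal(size=(30, 64)))
dist = float(np.linalg.norm(a - b))  # Euclidean distance is now well defined
```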
> *3. What do you think the specific practical applications of the findings might be? For example, improving ICL or SFT methods, or perhaps combining the two in some way?*
Our findings can help improve strategies for LoRA finetuning that adapt the rank for LoRA matrices. Several studies [2-4] attempt to adapt the ranks of the LoRA matrices based on various notions of matrix ‘importance’ or relevance to downstream tasks.
Our analysis of the similarity between the finetuned layers and the pre-trained layers (see Fig. 6 and lines 275-284) shows that late layers are most significant for finetuning. This indicates that the layers that change the most should have higher ranks. This approach would naturally prevent the early layers from being modified by fine-tuning and could combine the benefits of ICL and FT, as you correctly suggest.
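A toy illustration of this depth-dependent rank heuristic; the cutoff and rank values below are hypothetical, not tuned settings from any of the cited adaptive-rank methods:

```python
def allocate_lora_ranks(n_layers, r_min=2, r_max=32, transition=0.5):
    # Early layers, which fine-tuning leaves mostly unchanged, get a small
    # LoRA rank; late layers, which change the most, get a large one.
    # A step profile is used for simplicity; a smooth ramp would also work.
    ranks = []
    for i in range(n_layers):
        depth = i / max(n_layers - 1, 1)   # 0.0 at first layer, 1.0 at last
        ranks.append(r_min if depth < transition else r_max)
    return ranks

# e.g. a 32-layer model: first half rank 2, second half rank 32
ranks = allocate_lora_ranks(32)
```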
We are actively working on these aspects in our lab and will include a brief discussion of the practical application of our study in the revised version of the manuscript.
[1] Valeriani et al., The geometry of hidden representation in Large Transformer Models \
[2] Zhang et al., AdaLoRA: Adaptive Budget Allocation for Parameter-Efficient Fine-Tuning.\
[3] Valipour et al., DyLoRA: Parameter Efficient Tuning of Pre-trained Models using Dynamic Search-Free Low-Rank Adaptation.\
[4] Ding et al., Sparse Low-rank Adaptation of Pre-trained Language Models
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. I am interested in the topic of your paper and recognize its contribution to the community. However, I share the concerns of other reviewers about the need for improved clarity and organization in the writing, as well as the shortcomings in the research novelty. These considerations have led me to adjust my score accordingly.
---
Reply to Comment 1.1.1:
Comment: We thank the reviewer for the response. \
Since we believe that our work has several novel contributions we take the occasion to summarize them further below:
1. we compare ICL and fine-tuning by analyzing the *geometry of the layers* rather than using performance metrics on downstream tasks;
2. we describe the geometry of the representations with the *density peaks clustering*. This fully unsupervised nonparametric algorithm harnesses the low dimensional structure of the data to interpret representations in the hidden layers. Our work is the first to effectively employ this strategy to interpret the content of hidden layers since most of the previous studies use supervised probes;
3. we apply this powerful methodological framework to *analyze SOTA LLMs* (e.g., Llama 3), we show that it *scales to 70B* parameter models, and it can extract clear, interpretable patterns from the hidden representations of such models;
4. we show that the geometry of the hidden layers changes abruptly in the middle of the models. ICL and fine-tuning modify the geometry of layers' representations in radically different regions of the LLMs: ICL before and fine-tuning after the transition observed in the middle of the LLMs. We are the first to observe and interpret this different behavior of ICL and fine-tuning inside LLMs.\
For these reasons, we believe that our contribution has elements of novelty in its perspective (1), methodology (2-3), and findings (4).
Part of the clarity issues concerns the exposition of the density peaks clustering and WPGMA linkage strategy. \
We did not develop these algorithms in this work, but for the first time we employ them to analyze LLM representations, showing that they can be powerful tools for discovering new, insightful findings.
We cited the original papers where these techniques were developed; thanks to the concerns raised by the reviewers, we are committed to further improving their exposition in the camera-ready version of this work.
Strengths: - The paper has a number of strengths, namely the extensive set of experiments that lead to interesting findings. The findings are useful to the community-- with the main takeaway being the two-phased nature of the hidden layer representations' probability landscapes.
- Additionally, I appreciate the findings obtained that demonstrate the fingerprints obtained for ICL and SFT in the early and later layers respectively. More specifically, the results imply that during ICL the representations in the earlier layers of the network are more aligned with the subject partitions and that during SFT representations of the later layers are better aligned with the answer/label distributions of questions. This finding showcases the distinctions between ICL and SFT from the perspective of the representation landscape.
- The experiments on the hierarchical organization of the cluster density peaks via linkage clustering with respect to the subject relationships show how information in the internal model layers is organized.
Weaknesses: - My main issue with this work is that the experimental analysis is only conducted on the MMLU benchmark. This detrimentally restricts the impact of the work, especially as the authors draw general conclusions. Is it possible to extend some of this experimental analysis to other benchmarks, perhaps just even to other question answering based benchmarks?
- Can the authors provide more details on the average linkage experiments that demonstrate the hierarchical organization of the cluster density peaks? For instance, how do the authors map the subjects at a granular level to each of the dendrograms/leaves? Overall, I believe the subsection "Hierarchical organization of density peaks" can be further improved in terms of writing and readability for future readers.
- There are a number of typos throughout the paper that need to be corrected. For example, the authors write "LLama" in a number of places although it should read "Llama" (e.g. line 86), "assignation" -> "assignment" on line 140, among others. These can be fixed in the revision.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses listed above as each of those is also framed as a question to the authors.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors discuss limitations sufficiently. Note that owing to the strengths and weaknesses mentioned above, the paper's contributions currently constitute a technically solid, moderate-to-high impact work, which forms the basis for my score.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are grateful for your constructive and thoughtful comments and for the time spent on our manuscript. We will address all the main concerns below:
> *Is it possible to extend some of this experimental analysis to other benchmarks?*
In our work, we analyzed MMLU because it includes a wide variety of topics (57), with clear semantic relationships between them and a rich set of samples (more than 100) for each topic. These requirements are hard to find in other datasets.
We agree with you that an additional benchmark would strengthen our claims. Therefore, following your suggestion, we performed a new experiment on a second dataset constructed from TheoremQA [1], ScibenchQA [2], Stemez [3], and RACE [4]. This dataset contains roughly 6700 examples not included in MMLU, covering 10 subjects from STEM topics [1-3] and a middle school reading comprehension task [4], with at least 300 examples per subject. We keep 4 choices for each answer. The 0-shot, 5-shot, and fine-tuned accuracies in Llama 3 8b are 55%, 57%, and 58%, respectively.
We report the analysis in Fig. 1 and 2 of the attached PDF. In Fig. 1-left, we see that the intrinsic dimension profiles have a peak around layers 15/16, the *same layers as in MMLU* (Fig. 2-left main paper). This peak in ID signals the transition between the two phases described in the paper.
Before layer 17, few-shot models encode information about the subjects better (ARI with the subjects above 0.8). Between layers 3 and 7, the density peaks of the 5-shot representations reflect the semantic similarity of the subjects (see the dendrograms for layer 5 in Fig. 2 of the attached PDF).
Fine-tuning instead changes the representations after layer 17, where the ARI with the answers for the fine-tuned model is $\sim 0.15$, higher than that of the 5-shot and 0-shot models. The absolute value is lower than that reported in the main paper (Fig. 5-left) because the fine-tuned accuracy reached on the STEM subjects of this dataset is lower.
Overall, the results are consistent with those shown in the paper for MMLU.
> *Can the authors provide more details on the average linkage experiments [...]? How do the authors map the subjects [...] to each of the dendrograms/leaves?*
The two key points to understanding our procedure are:
1. Each peak is associated with a single subject;
2. The saddle points between the density peaks (subjects) allow us to cluster them hierarchically using the Weighted Pair Group Method with Arithmetic Mean (WPGMA).
We elaborate on these points below.
1. The clusters are homogeneous in the layers where ARI is highest (layers 4 to 6, Fig. 3-left of the main manuscript). In these layers in Llama 3 8b, 51 out of 77 clusters are pure, and in 70 out of 77, 90% of the points belong to the same subject. This high purity allows us to map one or two subjects to one density peak, which becomes a leaf of the dendrogram (see also Fig. A9).
2. Average linkage is a strategy for hierarchical clustering of the density peaks. Hierarchical clustering requires a notion of **dissimilarity** between the peaks/subjects [5]. The Advanced Density Peaks (ADP) algorithm defines the dissimilarity between a pair of peaks $\alpha$ and $\beta$ as [6]: \
$S_{\alpha, \beta} = \log \rho_{max} - \log \rho^{\alpha, \beta}$ \
where $\rho_{max}$ is the density of the highest peak, and $\rho^{\alpha, \beta}$ is the saddle-point density between $\alpha$ and $\beta$.
Intuitively, the higher $\log \rho^{\alpha, \beta}$ is, the smaller $S_{\alpha, \beta}$ is. The ADP algorithm considers two peaks similar if they are connected through a high-density saddle point. \
The peaks are then linked hierarchically, starting from the pair with the lowest dissimilarity $S_{\alpha, \beta}$. Once two peaks $\alpha$ and $\beta$ have been merged into a peak $\gamma$, the saddle-point density between the merged peak and each remaining peak $\{\delta_i\}$ is updated according to the average linkage strategy (WPGMA) [7] (see line 222), namely: \
$ \log \rho^{\gamma, \delta_i}= \dfrac{\log \rho^{\alpha, \delta_i}+\log \rho^{\beta, \delta_i}}{2} $ \
After updating $S$, we repeat the procedure until we reach a single cluster, which is the root of the dendrogram.
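The merge procedure described in points 1-2 can be sketched as follows; the saddle-density matrix and the four toy peaks are illustrative values, not the actual MMLU clusters:

```python
import numpy as np

def wpgma_merge_order(log_rho_max, log_rho_saddle):
    # log_rho_saddle: symmetric matrix of log saddle-point densities between
    # density peaks (diagonal ignored). Dissimilarity
    # S_ab = log(rho_max) - log(rho^{a,b}): two peaks joined by a
    # high-density saddle point are considered similar.
    log_rho = np.array(log_rho_saddle, dtype=float)
    active = list(range(len(log_rho)))
    merges = []
    while len(active) > 1:
        # pick the active pair with the lowest dissimilarity
        pairs = [(a, b) for i, a in enumerate(active) for b in active[i + 1:]]
        a, b = min(pairs, key=lambda p: log_rho_max - log_rho[p])
        merges.append((a, b))
        # WPGMA update: the saddle density between the merged peak and any
        # other peak is the average of the two constituents' saddle densities.
        for d in active:
            if d not in (a, b):
                log_rho[a, d] = log_rho[d, a] = 0.5 * (log_rho[a, d] + log_rho[b, d])
        active.remove(b)   # index a now represents the merged cluster
    return merges

# Four toy peaks: (0,1) share a dense saddle, (2,3) a slightly weaker one,
# all cross pairs are separated by very low-density saddles.
M = np.full((4, 4), -5.0)
M[0, 1] = M[1, 0] = -1.0
M[2, 3] = M[3, 2] = -1.5
order = wpgma_merge_order(0.0, M)  # merges (0,1), then (2,3), then the rest
```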
We will include the description of the WPGMA methodology in the method section at the end of the section devoted to the ADP algorithm.
> *There are a number of typos.*
Thank you. We will fix them in the camera-ready version.
We hope that our additional experiment addresses your main request and that we have satisfactorily clarified your second concern. If so, we hope you can consider raising your score.
We'll be more than happy to give further replies to any remaining doubts.
[1] Chen et. al, Theoremqa: A theorem-driven question answering dataset \
[2] Wang et. al, Scibench: Evaluating college-level scientific problem-solving abilities of large language models. \
[3] https://stemez.com/subjects \
[4] Lai et al, RACE: Large-scale ReAding Comprehension Dataset From Examinations \
[5] Hastie et al., Elements of statistical learning. \
[6] D’Errico et al., Automatic topography of high-dimensional data sets by non-parametric density peak clustering. \
[7] Sokal, A statistical method for evaluating systematic relationships.
---
Rebuttal Comment 1.1:
Comment: I would like to thank the authors for their rebuttal. After going through it, I believe the paper's contributions still currently constitute a technically solid, moderate-to-high impact work as I had mentioned in my original review. I will hence keep my score. I would also request the authors to include the additional details on the WPGMA framework in the revision.
---
Reply to Comment 1.1.1:
Comment: Thank you for keeping your positive feedback about our work.
We will include the details of the WPGMA algorithm in the revision. | null | null | Rebuttal 1:
Rebuttal: We thank the chair and the senior area chair for their time spent reviewing our work.
*Summary of our contribution*:\
In this study, we compare fine-tuning (FT) and in-context learning (ICL) in LLMs by
analyzing the evolution of the probability density in the hidden layers of state-of-the-art LLMs (Llama-3 8B/70B, Llama-2 7B/13B/70B, Mistral 7B) as they solve a challenging question answering task (MMLU). \
We find that the evolution of the probability density in LLMs occurs in two phases, separated by a sharp modification of the representation landscape in the middle layers. In early layers, ICL promotes a remarkable organization of the probability modes that are close to each other according to the semantic similarity of the MMLU subjects. In contrast, FT affects the probability landscape of late layers, leaving that of the early layers mostly unchanged.
*Summary of reviewers’ praise and concerns*:\
Reviewer **mxnJ** appreciated the novelty of our approach, which analyzes *the geometric properties of the hidden representations in LLMs, offering new insights into the workings of ICL and SFT*. For Reviewer **P3MS** our analysis of the representation landscape *has a number of strengths, namely the extensive set of experiments that lead to interesting findings useful to the community*. **UJWa** acknowledges the *extensive set of experiments* carried out in our work.\
The reviewers' main concerns were:
1. Lack of clarity in some parts of our exposition (**UJWa** and **P3MS**).
2. The need for an additional dataset to strengthen our claims (**P3MS**).
*Summary of our reply to reviewers' concerns*:
1. We have addressed reviewer **P3MS**'s primary concern with a new experiment on a *second dataset*, which confirms and strengthens our findings (see attached pdf Figures 1 and 2).
2. To improve the clarity of our exposition, we will: \
2a. Organize the description of the Advanced Density Peaks algorithm into four distinct paragraphs (see reply to **UJWa**), including an explicit description of the average linkage algorithm we use (see reply to **P3MS**).\
2b. We will make explicit the match between bullet points and the section of the Results (see reply to **UJWa**) and polish the paragraph about the Hierarchical organization of the density peaks, as suggested by Reviewer **P3MS**.
We hope that our additional experiment and our replies to the reviewers' comments address their concerns. We are open to clarifying any remaining questions.
Pdf: /pdf/5ee8224ef1744521e079bcc75a847b665a4d6757.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Enhancing Motion in Text-to-Video Generation with Decomposed Encoding and Conditioning | Accept (poster) | Summary: This paper introduces DEMO, a text-to-video diffusion approach that aims at enhancing the movement in the videos produced by text-to-video models. In addition to the classical diffusion loss, DEMO introduces:
* __A text motion loss__: the CLIP text encoder used for the temporal cross attention is fine-tuned - the loss matches the text motion to the motion of the video as computed with the optical flow
* __A regularization__: to avoid the CLIP text encoder used for the temporal cross attention to diverge too much from the CLIP encoder used for the spatial cross attention
* __A video motion loss__: to encourage the frame latents output by model to follow the difference between frames in the actual video
The model is trained on WebVid and compared to other models trained on WebVid on both video quality and motion quality.
Strengths: Technical contribution is strong:
* The need for motion focused text encoder is well argued
* The experiments show that this addition contributes into increasing movement
Technical clarity is also good:
* The different losses are presented in a clear and detailed way.
Weaknesses: The comparison with other approaches is not 100% fair in the sense that additional parameters are being trained on the text encoder side (a CLIP encoder).
It might also be that additional parameters are being trained on the temporal cross attention side. This is actually unclear to me, because at the beginning of the method section (section 3) the authors define the 3D U-Net this way: "the temporal transformer consists of temporal self-attention and feed-forward layers", while some papers do use cross-attention in the temporal blocks as well.
Technical Quality: 3
Clarity: 3
Questions for Authors: The authors define the 3D U-Net this way: "the temporal transformer consists of temporal self-attention and feed-forward layers", while some papers do use cross-attention in the temporal blocks as well. Is that a mistake in the text, or which specific papers are you referring to as the "vanilla" 3D U-Net?
Is the temporal attention similar to Make-A-Video and is operating on a single patch ? Or do you use full attention across all patches of all other frames ? That also has an impact on the fairness of comparisons / ablations of the importance of the losses that you introduce.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback and insightful questions. Below are our responses to the concerns raised:
**Q1**: The comparison with other approaches is not 100\% fair since additional parameters (a CLIP encoder) are trained.
**Response**: We appreciate your concern about the fairness of comparisons due to additional parameters being trained on the text encoder side (a CLIP encoder). We conducted an experiment to rule out the effect of the additional parameters in the text encoder. Specifically, we trained the baseline ModelScopeT2V model with a new CLIP text encoder while keeping its original training loss. We then compared its performance to the model without the additional CLIP encoder parameters (ModelScopeT2V fine-tuned column in the table). From the experimental results, we observed that the performance of the baseline model with the additional text encoder parameters is similar to that of directly fine-tuning ModelScopeT2V without additional parameters. This indicates that without appropriate supervision, the additional parameters in the text encoder have only a marginal influence on the overall performance.
|Benchmark|Metric|ModelScopeT2V|ModelScopeT2V fine-tuned|ModelScopeT2V + text encoder|DEMO|
|---------|------|-------------|-----------------------|----------------------------|----|
|MSRVTT|FID (↓)|14.89|13.80|13.98|11.77|
||FVD (↓)|557|536|552|422|
||CLIPSIM (↑)|0.2941|0.2932|0.2935|0.2965|
|UCF-101|IS (↑)|37.55|37.21|37.66|36.35|
||FVD (↓)|628.17|612.53|601.25|547.31|
|WebVid-10M|FID (↓)|11.14|10.53|10.45|9.86|
||FVD (↓)|508|461|458|351|
||CLIPSIM (↑)|0.2986|0.2952|0.2967|0.3083|
|EvalCrafter|VQA_A (↑)|15.12|15.89|16.21|19.28|
||VQA_T (↑)|16.88|16.39|16.34|15.65|
||IS (↑)|14.60|14.92|15.02|17.57|
||Action Score (↑)|75.88|74.23|75.20|78.22|
||Motion AC-Score (↑)|44|40|46|58|
||Flow Score (↑)|2.51|2.72|2.44|4.89|
|Vbench|Motion Dynamics (↑)|62.50|63.75|63.50|68.90|
||Human Action (↑)|90.40|90.40|90.20|90.60|
||Temporal Flickering (↑)|96.02|96.35|95.45|94.63|
||Motion Smoothness (↑)|96.19|96.38|96.22|96.09|
**Q2**: The definition of the 3D U-Net in your paper states: "the temporal transformer consists of temporal self-attention and feed-forward layers." Some papers also use cross-attention in temporal self-attention. Is this a mistake, and which specific papers are you referring to as the "vanilla" 3D U-Net?
**Response**:
Thank you for pointing out the ambiguity in our definition of the 3D U-Net. In the methodology section, when we refer to the 3D U-Net, we are specifically referencing the ModelScopeT2V model. This model employs spatial transformers for spatial conditioning and temporal transformers with temporal self-attention for smoothing individual frames along the temporal dimension. We will clarify this in the revised manuscript to avoid further confusion.
**Q3**: How your temporal attention is performed? Do you perform the attention on a single patch or full attention across all patches of all other frames?
**Response**:
Thank you for raising your question about the attention mechanism in our model. The temporal cross-attention in our model operates on a single patch across the frame dimension rather than full attention across all patches of all frames.
Our approach involves decomposing the full 3D cross-attention into two separate components: a 2D spatial cross-attention (content conditioning) and a 1D temporal cross-attention (motion conditioning). This design choice enables us to: 1) Preserve the text-to-video generation capability (generate 2D content for each individual frame separately) of the base model. 2) Allow the text instructions to control the temporal dynamics of the video effectively. By separating the spatial and temporal attention mechanisms, we achieve a balance between computational efficiency and performance. This approach ensures fair comparisons and meaningful ablations of the introduced losses, demonstrating the effectiveness of our conditioning mechanism.
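A minimal sketch of the decomposition described above (single-head, keys reused as values, no learned projections or residual connections, toy dimensions; this is illustrative, not the actual model code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(q, kv):
    # q: (n_q, d) queries; kv: (n_kv, d) text tokens, used as both keys and
    # values for brevity (real blocks use separate K/V projections).
    w = softmax(q @ kv.T / np.sqrt(q.shape[-1]), axis=-1)
    return w @ kv

def decomposed_conditioning(video, content_tokens, motion_tokens):
    # video: (frames, patches, d). Spatial cross-attention mixes each frame's
    # patches with the content text; temporal cross-attention then runs on a
    # single patch across the frame axis with the motion text.
    F, P, _ = video.shape
    out = np.empty_like(video)
    for f in range(F):                       # 2D content conditioning
        out[f] = cross_attention(video[f], content_tokens)
    for p in range(P):                       # 1D motion conditioning
        out[:, p] = cross_attention(out[:, p], motion_tokens)
    return out

rng = np.random.default_rng(0)
out = decomposed_conditioning(
    rng.normal(size=(4, 6, 8)),   # 4 frames, 6 patches, dim 8
    rng.normal(size=(3, 8)),      # content (spatial) text tokens
    rng.normal(size=(5, 8)),      # motion (temporal) text tokens
)
```

Compared with full 3D attention over all frames-times-patches query/key pairs, this factorization keeps the per-frame generation path of the base model intact while giving the motion text a separate, cheap conditioning channel.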
---
Rebuttal Comment 1.1:
Title: Answer to Rebuttal
Comment: Thanks a lot to the authors for answering my concerns. I will keep my rating the same.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate your thoughtful comments, recognition of our response, and support for our work.
Sincerely,
The Authors | Summary: This paper introduces novel framework called DEcomposed MOtion (DEMO), which enhances motion synthesis in T2V generation by decomposing both text encoding and conditioning into content and motion components. Authors investigate sensitivity of CLIP text encoder to motion descriptions and propose to condition the model on content and motion separately. To obtain a motion encoder, authors propose fine-tune separate CLIP text encoder with losses that encourage its [eot] token to describe motion better. To this end, authors introduce novel text-motion and video-motion losses to encourage the motion conditioning module to generate and render motion dynamics. Experiment results show that proposed method enhances motion of the T2V model.
Strengths: 1. Importance. The proposed method successfully tackles an important problem: enhancing motion quality in videos generated by T2V models.
2. Results. Experimental results demonstrate the effectiveness of the proposed method.
3. Clarity. The text of the paper is well written and easy to follow.
Weaknesses: 1. Lack of user study. Qualitative results are supported only by examples of several videos. Such results are not statistically representative. Please see Question 1.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. I strongly recommend that the authors incorporate a user study in their research to obtain qualitative results. Additionally, conducting ablation studies with a larger sample size, consisting of a minimum of 50 distinct prompts, and involving at least 15 independent assessors, would greatly enhance the robustness and reliability of the findings.
2. On line 127, the authors mention calculating optical flow between attention maps of different frames. However, the methodology employed to perform this operation lacks clarity in the paper. It is essential for the authors to provide a more detailed explanation of the procedures used to compute optical flow between attention maps. Furthermore, considering that attention maps in diffusion models are often noisy, it is crucial to address the reliability and potential limitations associated with this computation.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1**: We strongly recommend to incorporate a user study in the research to obtain statistical qualitative results.
**Response**:
Thank you for your excellent suggestion. We conducted a user study to compare our method with other video generation models. We selected 50 prompts from EvalCrafter [1], covering diverse scenes, styles, and objects. For each model comparison, 15 annotators were asked to select their preferred video from three options: method 1, method 2, and comparable results. They evaluated the videos based on visual quality, motion quality, and text-video alignment. The results are shown as follows:
|Methods|Text-Video Alignment|Visual Quality|Motion Quality|
|-------|---------------------|--------------|--------------|
|DEMO vs ModelScopeT2V|62%|66%|74%|
|DEMO vs LaVie|56%|46%|62%|
|DEMO vs VideoCrafter2|60%|42%|52%|
|DEMO vs DEMO w/o $\mathcal{L}_{\text{video-motion}}$|58%|56%|72%|
Specifically, 74\% of user feedback indicated that DEMO has better motion quality compared to the baseline ModelScopeT2V. Additionally, 62\% and 66\% of the user feedback agreed that DEMO has better text-video alignment and visual quality, respectively. These findings align with our quantitative results, demonstrating that DEMO significantly improves motion quality without compromising visual quality and text-video alignment.
We also performed a user study to test the effect of our video-motion supervision term, $\mathcal{L}_{\text{video-motion}}$, which is designed to guide the motion conditioning module to generate better motion dynamics. We achieved win rates of 58\%, 56\%, and 72\% in terms of text-video alignment, visual quality, and motion quality, respectively, compared to DEMO without video-motion supervision. Additionally, we observed that both ModelScopeT2V and our improved version, DEMO, have lower visual quality compared to LaVie and VideoCrafter2. We attribute this to the fact that the training dataset for ModelScopeT2V and DEMO (specifically WebVid10M) is of low visual quality. In contrast, LaVie uses the high-quality video dataset Vimeo-25M [2], and VideoCrafter2 adopts the high-quality image dataset JDB [3] for better visual quality.
We appreciate the recommendation to incorporate this into the revised paper, and we will include a summary of these findings to enhance the robustness and reliability of our qualitative results.
**Q2**: How to calculate the optical flow between attention maps is not clearly described. Furthermore, considering that attention maps in diffusion models are often noisy, it is crucial to address the reliability and potential limitations associated with this computation.
**Response**: Thank you for pointing out the lack of clarity in our explanation. In our approach, we utilize RAFT [4] to calculate the optical flow, as detailed in Table 6 of the appendix. Essentially, we treat the sequence of cross-attention maps as a video, albeit in a different space from real-world videos, and directly apply RAFT to extract the optical flows. We recognize that attention maps in diffusion models can be noisy, which poses challenges for reliable optical flow computation. To address this, we tried a simple thresholding strategy that zeroes out regions with little activation. This approach resulted in slight improvements in the stability and accuracy of the optical flow calculations. Nonetheless, we acknowledge that this is a preliminary solution and that more sophisticated strategies could yield better results.
In the revised paper, we will provide a more detailed explanation of our methodology for calculating optical flow. This will include a step-by-step description of how we preprocess the attention maps, apply the RAFT model, and handle the inherent noise in the attention maps. Additionally, we will discuss the potential limitations of our current approach and outline possible directions for future work to enhance the robustness and reliability of optical flow calculations in noisy conditions.
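The simple thresholding strategy mentioned above might look like the following sketch; the per-frame quantile cutoff is an illustrative choice, not the exact preprocessing used:

```python
import numpy as np

def denoise_attention_maps(attn, quantile=0.5):
    # attn: (frames, H, W) cross-attention maps treated as a short video.
    # Zeroing weakly activated regions frame by frame gives the downstream
    # flow estimator (RAFT, in our case) cleaner structures to track.
    out = attn.copy()
    for f in range(out.shape[0]):
        threshold = np.quantile(out[f], quantile)
        out[f][out[f] < threshold] = 0.0
    return out

attn = np.random.default_rng(0).random((3, 10, 10))
clean = denoise_attention_maps(attn)   # low-activation regions zeroed out
```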
[1] Liu, Y., Cun, X., et al. (2024). EvalCrafter: Benchmarking and evaluating large video generation models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
[2] Wang, Y., Chen, X., Ma, X., Zhou, S., Huang, Z., Wang, Y., ... & Liu, Z. (2023). Lavie: High-quality video generation with cascaded latent diffusion models. arXiv preprint arXiv:2309.15103.
[3] Sun, K., Pan, J., Ge, Y., Li, H., Duan, H., Wu, X., ... & Li, H. (2024). Journeydb: A benchmark for generative image understanding. Advances in Neural Information Processing Systems, 36.
[4] Teed, Z., & Deng, J. (2020). Raft: Recurrent all-pairs field transforms for optical flow. In Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part II 16 (pp. 402-419). Springer International Publishing.
---
Rebuttal 2:
Title: Response to Rebuttal
Comment: Dear Authors,
Thank you for your clarifications and conducted user study. My major concerns were addressed, so I raise my score to **7: Accept**.
However, the fact that directly applying RAFT to extract the optical flows from attention maps works that well is surprising to me. Intuitively, it should be an out of domain input for RAFT. Looking forward for your detailed explanation of this part of pipeline in the camera-ready version of the paper.
Best regards,
Reviewer WCt2
---
Rebuttal Comment 2.1:
Comment: Dear Reviewer,
We sincerely appreciate your thoughtful comments, recognition of our response, and support for our work.
Sincerely,
The Authors | Summary: This paper proposes a method for enhancing motion synthesis in T2V generation by decomposing both text encoding and conditioning into content and motion components, called Decomposed Motion (DEMO). To address the issue of inadequate motion representation in text encoding, they decompose text encoding into content encoding and motion encoding processes, focusing on spatial information and temporal information respectively. To solve the problem of reliance on spatial-only text conditioning, they also decompose the text conditioning process into content and motion dimensions to integrate spatial and temporal conditions. Additionally, they incorporate text-motion and video-motion supervision to enhance the model’s understanding and generation of motion.
Strengths: 1. The paper is well written overall and it is easy to follow along with the presented concepts and results.
2. The paper includes experimental results that validate the authors' claims, as well as ablations that offer a better understanding of model design choices.
3. The motivation for proposing the method is well demonstrated experimentally.
4. The proposed method is efficient in training, has a small number of parameters and a low inference burden, and does not require additional reference information.
Weaknesses: 1. There is too little content in the related work section.
2. There is a lack of contrast experiments with other improved methods that rely on external references to exhibit performance differences.
3. The proposed method is claimed to be general, so why not validate it on other base models? Judging from the qualitative and quantitative experiments, the performance and various metrics of ModelScope are poor; while DEMO is useful, it is unclear whether there is any improvement when adding DEMO to better-performing models. There is a lack of relevant experimental evidence.
4. The provided videos are less impressive. The separate files for other methods make it difficult for reviewers to identify the advantages of the proposed method; it would be better to present the comparison in a single file.
Technical Quality: 3
Clarity: 2
Questions for Authors: In line 196, “showing marked improvement in individual frame quality compared to the baseline,” can you explain why the quality of a single frame is increased? As I understand, DEMO is designed to increase the model's motion generation capability.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: See weakness.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your detailed feedback. Below are our responses to the concerns raised:
**Q1**: The related work section contains too little content.
**Response**: We acknowledge that the related work section is limited due to page constraints. In the revised paper, we will expand this section and include additional relevant studies (references omitted here due to limits):
**T2V Generation**: T2V has advanced significantly, leveraging T2I success. The first T2V model, VDM, introduced a space-time factorized U-Net for temporal modeling, training on both images and videos. For high-definition videos, models like ImagenVideo, Make-A-Video, LaVie, and Show-1 use cascades of diffusion models with spatial and temporal super-resolution. MagicVideo, Video LDM, and LVDM apply latent diffusion for video, working in a compressed latent space. VideoFusion separates video noise into base and residual components. ModelScopeT2V, the first open-source T2V diffusion model, uses 1D convolutions and attention to approximate 3D operations. Stable Video Diffusion (SVD) divides the process into T2I pre-training, video pre-training, and fine-tuning.
**Q2**: There is a lack of contrast experiments with other methods that rely on external references.
**Response**: Thank you for raising your concern. Here we list the key differences between our method and other methods that rely on external references to address your concern.
1) Different Goal: Our focus is to enhance motion synthesis in general text-to-video generation without relying on external references, which are often infeasible in real-world scenarios. Methods using external references, like video customization or controllable video generation, aim to enable more control to generate videos with specific object appearances, motions, and other details, which is not our goal.
2) Different Information Sources: Our model takes only sparse textual descriptions. It cannot directly take external references without additional training and adaptation, which is beyond the scope of our paper. Methods using external references have access to more detailed visual signals. Comparing our approach with such methods is unfair due to the disparity in information density.
**Q3**: There is a lack of experiments with other base models to show the generalization of the proposed method. Additionally, the performance of the baseline model is poor.
**Response**:
We agree that demonstrating the generalization of our method on other base models would be beneficial. To this end, we have applied DEMO to ZeroScope [1] and included the experimental results in the table below. The results are consistent with those obtained using ModelScopeT2V, showing significant improvements in motion quality without loss of visual quality.
|Benchmark|Metric|ZeroScope|DEMO+ZeroScope|
|---------|------|---------|--------------|
|MSRVTT|FID(↓)|14.57|13.59|
||FVD(↓)|812|543|
||CLIPSIM(↑)|0.2833|0.2945|
|UCF-101|IS(↑)|37.22|37.01|
||FVD(↓)|744|601|
|WebVid-10M|FID(↓)|11.34|10.03|
||FVD(↓)|615|479|
||CLIPSIM(↑)|0.2846|0.2903|
|EvalCrafter|VQA_A(↑)|27.76|33.02|
||VQA_T(↑)|33.87|37.28|
||IS(↑)|14.20|15.28|
||ActionScore(↑)|67.78|72.55|
||MotionAC-Score(↑)|44|62|
||FlowScore(↑)|1.10|5.25|
|Vbench|MotionDynamics(↑)|42.72|70.28|
||HumanAction(↑)|67.36|88.34|
||TemporalFlickering(↑)|97.39|94.83|
||MotionSmoothness(↑)|97.92|95.72|
Additionally, ModelScopeT2V is the only fully open-source text-to-video model (dataset, model, and code). While its performance may be lower than some closed-source models, it performs reasonably well, especially in motion quality. Below are its performance rankings across various benchmarks against both closed-source and open-source models:
|Benchmark|Metric|Ranking|
|---|---|---|
|DEVIL [2]|Dynamics Quality|3/10|
||Dynamics Control|5/10|
|FETV [3]|Temporal Quality|1/4|
|T2V-CompBench [4]|Motion|4/20|
|VBench [5]|Dynamic Degree|6/29|
||Human Action|13/29|
||Temporal Style|10/29|
**Q4**: The provided videos are less impressive and the separate files make it difficult to compare those methods.
**Response**: Please refer to Q1 in Global Response.
**Q5**: Why DEMO can improve the individual frame quality (FID) since DEMO is designed to enhance the motion generation?
**Response**: Thank you for raising this concern. The confusion arises from the fact that individual frame quality (FID) involves both the realism of the frames themselves and the diversity of frames. FID [6] calculates the Fréchet distance between generated frames and real frames as follows:
\begin{equation}
d(P_R, P_G) = \| \mu_R - \mu_G \|^2 + \text{Tr}(\Sigma_R + \Sigma_G - 2(\Sigma_R \Sigma_G)^{\frac{1}{2}})
\end{equation}
where $\mu$ and $\Sigma$ are the mean and covariance of the features of real ($R$) and generated ($G$) frames. The mean term measures frame realism by comparing the average feature representation of generated and real frames. The covariance term assesses frame diversity by capturing how well generated frames replicate the variability and feature relationships of real frames.
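For concreteness, the distance above can be sketched numerically as follows (an illustrative numpy-only sketch, not the evaluation code used in the paper; the eigendecomposition-based matrix square root assumes symmetric PSD covariances, and function names are ours):

```python
import numpy as np

def sqrtm_psd(mat):
    # Matrix square root of a symmetric positive semi-definite matrix via
    # eigendecomposition; tiny negative eigenvalues from noise are clipped.
    vals, vecs = np.linalg.eigh(mat)
    vals = np.clip(vals, 0.0, None)
    return (vecs * np.sqrt(vals)) @ vecs.T

def frechet_distance(mu_r, sigma_r, mu_g, sigma_g):
    # d^2 = ||mu_R - mu_G||^2 + Tr(S_R + S_G - 2 (S_R S_G)^{1/2}),
    # computed via the equivalent symmetric form S_R^{1/2} S_G S_R^{1/2}.
    diff = mu_r - mu_g
    sr_half = sqrtm_psd(sigma_r)
    covmean = sqrtm_psd(sr_half @ sigma_g @ sr_half)
    return float(diff @ diff + np.trace(sigma_r) + np.trace(sigma_g)
                 - 2.0 * np.trace(covmean))

def fid_from_features(feats_real, feats_gen):
    # feats_*: (N, D) arrays of per-frame features (e.g., Inception activations).
    mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma_r = np.cov(feats_real, rowvar=False)
    sigma_g = np.cov(feats_gen, rowvar=False)
    return frechet_distance(mu_r, sigma_r, mu_g, sigma_g)
```

In this form, a distribution of generated frames that matches the real frames in both mean (realism) and covariance (diversity) yields a distance near zero, which is why greater frame diversity can lower FID even when per-frame realism is unchanged.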
Our base model uses content conditioning for each individual frame and temporal self-attention for smoothing those frames, which improves motion smoothness but limits motion dynamics and frame diversity. In contrast, DEMO introduces a separate motion conditioning module, enhancing motion dynamics and generating more diverse frames. This increased diversity leads to a better FID score, thus improving individual frame quality.
[1] Zeroscope. https://huggingface.co/cerspense/zeroscope_v1_320s. 2023.
[2] Evaluation of Text-to-Video Generation Models: A Dynamics Perspective. 2024.
[3] Fetv: A benchmark for fine-grained evaluation of open-domain text-to-video generation. 2024.
[4] T2V-CompBench: A Comprehensive Benchmark for Compositional Text-to-video Generation. 2024.
[5] Vbench: Comprehensive benchmark suite for video generative models. 2024.
[6] Gans trained by a two time-scale update rule converge to a local nash equilibrium. 2017. | Summary: This paper aims to improve the motion dynamics generation of the text-to-video generation models. It proposes a framework to decompose both text encoding and conditioning into content and motion components. For text encoding, a CLIP encoder is fine-tuned to encode the motion information in the text prompts better. For text conditioning, it introduces a motion conditioning module to incorporate motion information in the denoising process.
Strengths: 1. The general idea is interesting, i.e. decompose the text encoding and conditioning into content and motion components to capture the motion prompts and generate motion dynamics better.
2. The pilot study is interesting, which investigates that the CLIP encoder tends to be less sensitive to the motion instructions in the text prompts.
3. This paper is technically clear and easy to follow.
Weaknesses: 1. For the pilot study in Fig. 1, only one kind of sentence template is applied. Will the drawn conclusion be the same when the text prompts changes to other formats? This concern is not addressed in the paper.
2. In Fig.3, it is hard to tell which rows are generated by the base model and which rows are generated by the fine-tuned model.
3. From line 179 to line 183, it is hard to evaluate the correctness of the conclusion. Because it is quite hard to distinguish the two generated videos shown in Fig. 3. They look almost the same.
4. For qualitative evaluations. The improvements in generated motions in most shown cases are limited.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The limitations are discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We appreciate your valuable feedback. Below are our responses to the concerns raised:
**Q1**: In the pilot study, employing only one kind of sentence template may not be sufficient to draw the conclusion that the text encoder is less sensitive to the motion instructions in the text prompt.
**Response**: We thank the reviewer for highlighting this concern. Due to page limitations, we included the details of our pilot study in the appendix. As stated in the appendix (lines 527 to 530), "It is noteworthy that we did not observe significant differences when using different templates or different sets of words within each POS. The results were consistent across different setups, and we selected these prompts to try to make these prompts meaningful." We will revise the caption for our pilot study (Fig.1) to include this information and to address this concern as follows:
"We generated a set of prompts (262144 in total) following a fixed
template, grouping them according to the different parts of speech (POS). These grouped texts are
then passed into the CLIP text encoder, and we calculate the sensitivity as the average sentence
distance within each group. As shown on the left-hand side, compared to POS representing content,
CLIP is less sensitive to POS representing motion. (Results are consistent across different templates and different sets of words within each POS. Further details can be found in the appendix.)"
**Q2**: In Fig.3, it is hard to tell which rows are generated by the base model and which rows are generated by the fine-tuned model.
**Response**: Thank you for pointing out the potential confusion caused by the arrangement of video frames in Fig.3. This complexity arises because our qualitative evaluation involves multiple dimensions: we need to compare our model with other models, show the differences within each video, and evaluate multiple text inputs.
In Fig.3, we present video frames generated by four different models: LaVie, VideoCrafter2, ModelScopeT2V (base model), and DEMO (fine-tuned model), across three different text prompts. As mentioned in the caption of Fig.3, "Each video is generated with 16 frames. We display frames 1, 2, 4, 6, 8, 10, 12, 14, 15, and 16, arranged in two rows from left to right. Full videos are available in the supplementary materials."
To improve clarity, we plan to display these videos in a GIF format in our revised paper. This will help illustrate the differences more effectively. Please refer to Fig.1 in the PDF of Global Response.
**Q3**: It is hard to evaluate that our DEMO significantly outperforms the base model with the text prompt "Slow motion flower petals fall from a blossom, landing softly on the ground (the first example in Fig.3)" as the frames look almost the same.
**Response**: Thank you for pointing out the potential confusion in our presentation of video frames. The motion between different frames, especially those involving many objects such as flower petals, can be difficult to observe with static frames alone. We have provided video files in the supplementary materials, which clearly show significant differences between these two videos.
Additionally, we plan to revise Fig.3 in our paper to better highlight these differences. Please refer to Q1 and Fig.1 in the Global Response.
**Q4**: For qualitative evaluations, the improvements in generated motions in most shown cases are limited.
**Response**: Thanks for raising your concern. This is because the improvements in motion are hard to observe when only showing sequences of static frames. We have provided video files in the supplementary materials to better show these improvements. Additionally, we plan to revise Fig.3 in our paper to better highlight these differences. Please refer to Q1 and Fig.1 in the Global Response.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: The authors have addressed most of my concerns. I decide to keep my initial positive rating.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
We sincerely appreciate your thoughtful comments, recognition of our response, and support for our work.
Sincerely,
The Authors | Rebuttal 1:
Rebuttal: **Q1**: The presentation of Fig.3 (qualitative results) is not good. The improved motion is hard to observe, and it is difficult to compare DEMO with other methods.
**Response**: We thank all the reviewers for highlighting this concern. Qualitative comparison of videos generated by different models is inherently challenging, especially when focusing on the dynamic aspects of video generation. We acknowledge that displaying only sequences of static frames makes it difficult to compare and directly observe the improvements. To address this issue, we plan to revise Fig.3 by using GIFs for each individual video instead of sequences of static frames. This approach will provide a much clearer and more effective comparison. The revised version can be found in Fig.1 of the attached PDF. For the best viewing experience, please use Adobe Acrobat Reader or another PDF reader that supports animation, as browser support for this feature is limited.
Pdf: /pdf/4d725ca4c2a5c9664b41d0c51d313830e51fc84b.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Dual-Personalizing Adapter for Federated Foundation Models | Accept (poster) | Summary: This paper focuses on personalization and test-time adaptation in federated learning of foundation models. The authors propose two solutions for this setting and show its performance on two splits of FLAN dataset.
Strengths: - The paper is generally easy to read.
- The constructed experimental setup is generally reasonable.
Weaknesses: - The motivation and significance of the setting is not sufficiently strong. The training method in this paper is not new in personalized FL [1,2]. The test-time adaptation method is straightforward and does not convey new insight. Therefore, I do not clearly see the significance of considering personalization and test-time adaptation at the same time.
- Despite the fact that the paper focuses on personalization and test-time adaptation at the same time, it proposes two methods: one performs better at personalization while the other performs better at test-time adaptation. This is really confusing. If the authors cannot find one solution that fits both metrics, then why bother to consider these two issues in one paper?
- Limited performance gain. From the two main tables in the paper, the performance of FedLoRA is stable across two metrics, which always performs the second best.
[1] Exploiting shared representations for personalized federated learning
[2] Perada: Parameter-efficient and generalizable federated learning personalization with guarantees
Technical Quality: 2
Clarity: 2
Questions for Authors: - Why the convergence curve of FedIT so distinct from the others? Also, it is unclear what is the difference between FedIT and FedLoRA.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 3
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Cannot clearly see the significance of considering personalization and test-time adaptation at the same time.**
Our proposed method is **not doing test-time adaptation**. In general, test-time adaptation methods [1] usually involve fine-tuning steps to align the model parameters with new distributions at test time. In contrast, our proposed method is designed without any adaptation step, aiming instead for test-time generalization, where clients must maintain performance on their main tasks while also demonstrating generalization capabilities on new tasks during the testing stage.
Moreover, our method addresses an innovative application scenario known as federated foundation models (FFM) [2], an emerging domain that significantly diverges from traditional federated learning methods. We are the first to explore the test-time generalization on personalized FFM. Unlike the works you mentioned, we refined a new setting (test-time personalization) in FFM, proposed a new training paradigm with self-reconstructed datasets to train personalized FFM in extreme scenes like cross-tasks, and also introduced a new dynamic weighting mechanism to determine the combination of different adapters for test-time personalization. We will add a discussion about the difference between our work and your mentioned works in related work.
[1] Liang, J., He, R., & Tan, T. (2024). A comprehensive survey on test-time adaptation under distribution shifts. International Journal of Computer Vision, 1-34.
[2] Ren, C., Yu, H., Peng, H., Tang, X., Li, A., Gao, Y., ... & Yang, Q. (2024). Advances and open challenges in federated learning with foundation models. arXiv preprint arXiv:2404.15381.
**W2: Propose two methods and can not find one solution that fits both two metrics.**
Our paper aims to improve the test-time generalization without sacrificing the performance on personalization metric in federated settings. In traditional machine learning methods, an algorithm should be evaluated on both validation dataset and test dataset, which in our context correspond to the personalization metric and test-time generalization metric, respectively.
Our proposed framework (ref to Section 4), model architecture (ref to Figure 1) and loss function (ref to Equation 1) are unified design. The mentioned two methods can be considered as two hyperparameters to control the updating strategy (sequentially or iteratively) of local and global adapters within the federated learning process. Specifically, FedDPA-F adopts a sequential approach, first optimizing the global adapter and then optimizing the local adapter. In contrast, FedDPA-T alternates between optimizing the global and local adapters iteratively during each communication round. To facilitate understanding, Figure 2 illustrates the two options of the hyperparameter in the learning process. In the distributed learning scenario, it is a common way to try different updating strategies in federated learning, such as [3].
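The two updating options above can be sketched as a toy client-side routine (our editorial illustration, not the actual FedDPA implementation: adapters are simplified to vectors and `local_update` is a stand-in for a fine-tuning step):

```python
import numpy as np

def local_update(adapter, data, lr=0.1):
    # Stand-in for one fine-tuning pass: nudge the adapter toward the
    # mean of the client's data (a toy proxy for a gradient step).
    return adapter + lr * (data.mean(axis=0) - adapter)

def client_round(global_adapter, local_adapter, data, strategy):
    # One communication round on a client under the two updating strategies.
    if strategy == "sequential":      # FedDPA-F style: global first, then local
        global_adapter = local_update(global_adapter, data)
        local_adapter = local_update(local_adapter, data)
    elif strategy == "iterative":     # FedDPA-T style: alternate per mini-batch
        for batch in np.array_split(data, 2):
            global_adapter = local_update(global_adapter, batch)
            local_adapter = local_update(local_adapter, batch)
    else:
        raise ValueError(strategy)
    return global_adapter, local_adapter
```

Both strategies update the same two adapters with the same loss; only the schedule of global versus local updates within a round differs, which is why we treat the choice as a hyperparameter of one unified solution.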
[3] Tian Li, Shengyuan Hu, AhmadBeirami, and Virginia Smith, “Ditto: Fair and Robust Federated Learning Through Personalization”, ICML 2021
**W3: Limited performance gain.**
The objective of this paper is to improve the test-time generalization of federated foundation models without sacrificing personalization performance. In two main tables (Table 1 & 2), in the test-time generalization metric, our proposed method shows significant improvement over FedLoRA. For example, both FedDPA-F and FedDPA-T demonstrate average improvements of approximately 2.3% and 0.9% over FedLoRA. Notably, in the summarization task, our approaches achieve an **8.4%** higher score than FedLoRA.
Similarly, in the personalization metric, our method achieved relatively higher performance than FedLoRA. For example, both FedDPA-F and FedDPA-T demonstrate average improvements of approximately 0.5% and 2.3% over FedLoRA. Notably, in the reading comprehension task, our methods outperform FedLoRA by **5.9%** in terms of personalization.
Moreover, we will conduct an additional statistical significance analysis. Specifically, we will assume that our method improves test-time generalization performance and employ statistical tests, such as the p-value approach, to verify the significance and validate the efficacy and confidence of our method.
**Q1: Difference between FedIT and FedLoRA.**
Compared to personalized federated learning methods, FedIT focuses on training a global model applicable across all clients, without incorporating personalization techniques. In each communication round of the convergence curve, FedIT evaluates the global model on test data. Conversely, FedLoRA, a personalized federated learning method, evaluates performance using fine-tuned models in each communication round. Therefore, FedIT shows a relatively lower accuracy in the early stage of the learning process. For a more detailed discussion, please refer to Appendix A.2.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer 94he,
As the author-reviewer discussion period is approaching its end on 13 August, we respectfully ask whether we have addressed all your questions and concerns.
---
Reply to Comment 1.1.1:
Comment: Dear reviewer 94he,
We believe that we have addressed the concerns that you have raised. Specifically,
1. Our work does **not focus on test-time adaptation [1] but test-time generalization**, where clients must maintain performance on their main tasks while also demonstrating generalization capabilities on new tasks during testing without any adaptation methods. Unlike the works you mentioned, our work is the first to explore the test-time generalization of personalized federated foundation models (FFM) [2], an emerging domain that significantly diverges from traditional FL. We establish the benchmark by refining the new setting (test-time personalization), proposing the new training paradigm with self-reconstructed datasets for personalized FFM, and introducing a new dynamic weighting mechanism to determine the combination of different adapters for test-time personalization.
2. Our proposed solution is a unified design, the mentioned two methods are **under the same solution with different hyperparameters** (different optimizing strategies) as illustrated in **Section 4**, and it is a common way to try different optimizing strategies in FL such as [3]. Since our paper aims to improve the test-time generalization without sacrificing the performance on personalization, we take the personalization metric as validation results and the test-time generalization metric as test results.
3. The objective of our paper is to improve the test-time generalization of FFM without sacrificing personalization performance. In our two main tables (**Table 1 & 2**), our proposed method shows an average improvement of **2.3%** (even **8.4%** higher in the summarization task) over FedLoRA in the test-time generalization metric with relatively higher performance of personalization.
4. The difference between FedIT and FedLoRA lies in whether there is a personalization step or not. FedIT focuses on training a global model applicable across all clients, while FedLoRA adapts the global model to each client with further fine-tuning for personalization. For a more detailed discussion, please refer to **Appendix A.2**.
[1] Liang, J., He, R., & Tan, T. (2024). A comprehensive survey on test-time adaptation under distribution shifts. International Journal of Computer Vision, 1-34.
[2] Ren, C., Yu, H., Peng, H., Tang, X., Li, A., Gao, Y., ... & Yang, Q. (2024). Advances and open challenges in federated learning with foundation models. arXiv preprint arXiv:2404.15381.
[3] Tian Li, Shengyuan Hu, AhmadBeirami, and Virginia Smith, “Ditto: Fair and Robust Federated Learning Through Personalization”, ICML 2021
***
We would like to gently remind you that the **end of the discussion period is imminent**. We would appreciate it if you could let us know whether our comments addressed your concerns.
Best regards,
Authors
---
Rebuttal Comment 1.2:
Comment: Thanks for the responses. However, my concerns are not well addressed.
- W1. I still do not clearly see the motivation for considering such scenario. Are there specific and reasonable real-world application scenarios?
- W2 & W3. It seems that you are cherry-picking the results in the rebuttal. In Table 1, your two methods perform comparably with FedLoRA, and no single method consistently performs the best. Specifically, for test-time personalization, FedDPA-T performs worse than FedLoRA.
---
Reply to Comment 1.2.1:
Comment: Thanks for your responses.
For W1, consider a recommendation system as a real-world application scenario. Each user has their own areas of interest (e.g., user1 focuses on book reading), and there is sufficient historical clicking/viewing data for training and personalization. However, users may develop new interests at any time; for example, user1, who focuses on reading, may become interested in dancing. Since the user has no historical data about the new interest, and recommendations for the new interest only happen during inference/testing, the test-time generalization ability of the personalized model becomes significant in this case.
For W2 and W3, our main focus is to improve the generalization ability without sacrificing the personalization performance; therefore, as long as our method can outperform other methods on test-time personalization with comparable results on personalization, it proves the effectiveness of our method. FedDPA-T and FedDPA-F are under the same solution with different optimizing strategies, which means that either method performing better than other baselines on test-time personalization can prove the effectiveness of our proposed solution. In Tables 1 & 2, FedDPA-F consistently performs better than FedLoRA on test-time personalization and maintains comparable performance on personalization, which sufficiently demonstrates the effectiveness of our proposed solution. FedDPA-T performs worse than FedLoRA, which can only indicate that the optimizing strategy used for FedDPA-T is not beneficial for test-time generalization.
These results are from Table 1 & 2 on test-time personalization
| Dataset1 | Paraphrase | Entailment | Structure to Text | Text Formatting | Linguistic Acc| Word Dis | Coreference | Question CLS |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| FedLoRA | 75.56 | 76.55 | 75.21 | 74.94 | 76.16 | 74.64 | 74.99 | 76.97 |
| FedDPA-F | **78.10** | **77.36** | **77.18** | **76.98** | **77.11** | **76.23** | **76.84** | **77.19**|
| Dataset2 | Paraphrase | Commonsense | Entailment | Text Formatting | Summarization | Reading Com | Sentiment | Open QA |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| FedLoRA | 69.60 | 71.64 | 71.09 | 71.28 | 65.63 | 68.89 | 70.32 | 70.44 |
| FedDPA-F | **71.64** | **72.28** | **72.42** | **72.39** | **71.12** | **70.46** | **71.00** | **71.82**| | Summary: The authors use two set of adapters to personalize a model with federated learning. The idea is to use FL to learn the global adapter whereas each device has a local adapter to personalize the model for each client.
Strengths: - The paper is well written and easy to understand
- The overall approach is sound
- THe authors evaluated their approach on a number of NLP tasks
Weaknesses: Overall this is an interesting paper. However these are some areas to potentially improve:
- Firstly, the overall premise: the authors say that this approach is used to learn from distribution shifts between training and test datasets. However, this is the whole reason we use federated learning. Typically we have models trained on public datasets (e.g., imagenet, wikipedia, etc.), but these don't work in real-world applications because the domain is slightly different. FL was proposed so we can get a real-world (and real-time) signal about in-domain distributions in a privacy-preserving manner. As a result, we actually do want to learn this "test" distribution during FL. It is unclear why, in the FL scenario, the "test" distribution is not similar to the train distribution. Overall, the paper would benefit from a much stronger motivation.
- There are a lot of works that use federated learning to learn a global model and then personalize this model for each client (e.g., https://arxiv.org/abs/2103.00710). As a result, the overall approach of training a global model and then having extra client-side personalization is not entirely novel. Furthermore, there have been many works that train adapters instead of full models in order to save on communication and on-device compute.
- At section 3.1 the authors say there are two datasets belonging to two different distributions (Ps vs Pt): why does this happen within a device ? How do you detect that the distribution has shifted ? How does this happen in practice ?
- The evaluation has been done in a very artificial scenario where each of the 8 devices had a completely different NLP task (section 5.1). This is quite extreme and is likely to favour the results that take advantage of personalization. However, this extreme is unlikely to happen in real FL deployments: typically we want to build a global model from a set of non-IID users, but not to an extreme where each user has a different task. Furthermore, we attempt to train large-enough models that can tolerate some distribution shift (as long as it has been seen in the data).
Technical Quality: 2
Clarity: 4
Questions for Authors: See Above
Confidence: 4
Soundness: 2
Presentation: 4
Contribution: 3
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: The paper would benefit for a much stronger motivation.**
Unlike traditional FL (especially personalized FL) which primarily addresses heterogeneity among clients during training, our setting also considers heterogeneity within each client during testing, building on insights from previous works [1]. Since the learning process of foundation models will involve more heterogeneous datasets from different tasks compared with the traditional machine learning methods, our setting aims to consider more complex scenarios with various distribution shifts in federated foundation models. For example, in traditional personalized FL, each client's training and testing are restricted to the same distribution (e.g., client 1 trains and tests on task 1); in contrast, our setting involves clients training on one task and testing on multiple different tasks (e.g., client 1 trains on task 1 and tests on tasks 1, 2, 3...).
Moreover, imagine the on-device foundation model in the near future: the model should be lightweight and versatile enough to tackle many different tasks. With this motivation, this paper is the first step towards exploring a solution that enables lightweight on-device foundation models to achieve the desired performance on training data while also gaining better test-time generalization when the test data is from another domain or task.
[1] Tan, Y., Chen, C., Zhuang, W., Dong, X., Lyu, L., & Long, G. (2024). Is heterogeneity notorious? taming heterogeneity to handle test-time shift in federated learning. Advances in Neural Information Processing Systems, 36.
**W2: Novelty of our methods.**
As discussed in the recent literature on Federated Foundation Models (FFM) [2], there are many new challenges that need to be rethought. For example, training versatile foundation models requires incorporating more heterogeneous datasets within the federated learning framework. Moreover, communication-efficient model exchange and fine-tuning strategies also need to be considered for federated foundation models. Before the research community can go further and explore more exciting applications of federated foundation models, some foundational work must be done. Our work rethinks the problems of traditional FL (especially PFL) in the context of FFM and establishes a benchmark for personalized FFM.
We are the first to explore the test-time generalization of personalized FFM. The overall design is new, with a refined setting (test-time personalization) in FFM, and the technical framework is composed of multiple pieces, including a new training paradigm with self-reconstructed datasets to train personalized FFM and a new dynamic weighting mechanism to determine the combination of different adapters for test-time personalization.
[2] Ren, C., Yu, H., Peng, H., Tang, X., Li, A., Gao, Y., ... & Yang, Q. (2024). Advances and open challenges in federated learning with foundation models. arXiv preprint arXiv:2404.15381.
**W3: Different distributions within a device and how to detect shifts.**
In federated foundation models, each client's device will be equipped with a lightweight intelligent assistant to tackle a broad range of downstream tasks; thus, it is very likely that the client will need to handle unseen tasks with different distributions during testing.
In this paper, we make new assumptions for FFM: 1) for each device, the test distribution may differ from the train distribution, 2) different clients have different distributions as well, and 3) the test distribution of client A could be observed by another client in their training tasks. For example, in practice, a client accustomed to writing emails in English may require translation assistance when working on a new project in Chinese. Here, the client has ample historical data for training in English email writing, but none for Chinese translation, which only emerges during the testing/inference phase. Additionally, other clients may be specialized in translation tasks.
To detect distribution shifts during the testing phase, we designed an Instance-wise Dynamic Weighting mechanism. Specifically, we measure the similarity between the representations of test and training instances to adjust the importance weights of the local and global adapters. A more detailed discussion can be found in Section 4.3.
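For concreteness, here is a minimal numpy sketch of such an instance-wise weighting scheme. The function names, the max-over-training-instances aggregation, and the rescaling of cosine similarity from [-1, 1] to [0, 1] are illustrative assumptions, not the paper's exact formulation (which is in its Section 4.3).

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two representation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def dynamic_alpha(test_repr, train_reprs):
    """Turn the highest similarity between a test instance and the client's
    training instances into an importance weight in [0, 1]."""
    sims = [cosine_similarity(test_repr, t) for t in train_reprs]
    # Higher similarity to the training data -> larger weight on the local adapter.
    return (max(sims) + 1.0) / 2.0  # rescale from [-1, 1] to [0, 1]

def combine(local_out, global_out, alpha):
    """Instance-wise weighted combination of the two adapter outputs."""
    return alpha * local_out + (1.0 - alpha) * global_out
```

A test instance far from all local training data thus shifts weight toward the global adapter, matching the intuition described above.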
**W4: Extreme evaluation setting.**
The problem setting and application scenario of federated foundation models differ from those of traditional federated learning methods. Our research focuses on the challenges of FFM, wherein foundation models, pre-trained on extensive datasets, already exhibit a degree of generalization towards general non-IID problems, a topic thoroughly investigated within traditional FL. Consequently, in the realm of FFM, we aim to address more complex scenarios, such as cross-domain and cross-dataset challenges, where centralized foundation models typically underperform [3].
In future work, we could add more experiments covering traditional federated learning settings by splitting a single dataset into many pieces with mild non-IIDness. We believe our proposed method can easily be adapted to this type of non-IID setting.
[3] Wang, Y., Ivison, H., Dasigi, P., Hessel, J., Khot, T., Chandu, K., ... & Hajishirzi, H. (2023). How far can camels go? exploring the state of instruction tuning on open resources. Advances in Neural Information Processing Systems, 36, 74764-74786.
---
Rebuttal Comment 1.1:
Comment: Dear Reviewer mFUh,
As the author-reviewer discussion period is approaching its end on 13 August, we respectfully ask whether we have addressed all your questions and concerns.
---
Rebuttal Comment 1.2:
Comment: I want to thank the authors for their answers.
After reading the rebuttal and the other reviews, the authors did address some of the comments. I am still not entirely convinced about the motivation (i.e., real-world use-cases) but I will increase my score to reflect this. | Summary: This paper proposes a novel dual-personalizing adapter to tackle the test-time distribution shift for federated foundation models (FedFM). FedFM is a new research domain to enhance foundation models by leveraging many fine-tuning tasks on many protected datasets for end users. The solution essentially tackles the trade-off between personalisation and generalisation of the client-specific models in FedFM.
Strengths: 1. FedFM is a new research domain that has been regarded as an important pathway to enhance foundation models by leveraging decentralized data. Tackling the tradeoff between client-specific personalisation and generalisation is a key challenge of the FedFM research domain.
2. The proposed method is a new idea with original design. Moreover, this paper is the first work to discuss the test-time generalization challenge on FedFM.
3. The proposed method is technically sound. The design of the method is simple yet effective. It is very easy to follow.
4. In terms of clarity, the paper’s contents are well organized and clearly presented with easy-to-understand figures and well-described details.
5. The appendix with experiment details and the source codes are provided to support the reproducibility of this work.
Weaknesses: 1. The important weight of the global adapter and personalised adapter is a key factor of the proposed dual-personalising adapter mechanism in FedFM. More insightful discussion is expected to analyse the selection of the important weights.
2. The proposed instance-wise trade-off mechanism essentially relies on the similarity between the test instance and the training dataset. The current solution is straightforward and reasonable; however, it could be more elaborately designed.
3. It would be better if a large-scale experiment could be conducted to further evaluate the proposed method. However, considering the lack of computation resources, this paper's experiments are sufficient as an initial exploration and discussion of this research direction.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. In line 260, what type of similarity function will be chosen and why?
2. According to the design of alpha, if the new test instance is more similar to the client’s training instances in a representation space, the local adapter will gain a bigger importance weight. Any theoretical discussion about this?
3. Why does FedDPA-F win more often than FedDPA-T?
4. Are there any other data sources other than Flan that can be used in this type of setting?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W2 and W3: Design of trade-off mechanism and large-scale experiment.**
Thanks for your insightful advice. We will further explore better mechanisms from the perspective of generalization theory and more datasets in real applications.
**Q1: Choice of similarity function.**
We select cosine similarity because it is more robust and better normalized for high-dimensional vectors than other metrics, and its superiority has been demonstrated in many NLP/CV works. Additionally, we conducted an ablation study comparing various similarity functions in Appendix B.2, and the results in Table 9 demonstrate that cosine similarity outperforms the other similarity functions.
**W1 and Q2: Theoretical analysis of the design of alpha.**
In Section 3.2, we conducted a preliminary theoretical analysis of the discordance between personalization and test-time tasks, which motivated us to learn two adapters: a local adapter for the personalization task (training distribution) and a global adapter for test-time tasks. Therefore, when a test instance is more similar to the client's training instances, we intuitively infer that its distribution is closer to the training data; and because the local adapter, aimed at personalization, contains more knowledge of the training data distribution, we increase its weight. We will further explore the theoretical relation between them.
**Q3: Why FedDPA-F wins more often than FedDPA-T.**
This is because FedDPA-F maintains a more generalized local adapter than FedDPA-T. The difference between FedDPA-F and FedDPA-T lies in the local adapter: FedDPA-F uses the global adapter to initialize the local adapter, while FedDPA-T initializes it randomly. Given that the global adapter, aggregated over different distributions, maintains certain generalization capabilities, the local adapter of FedDPA-F generalizes better than that of FedDPA-T, which leads to better performance on most test-time tasks. We will add this discussion to our experiment analysis for clarity.
**Q4: Other data sources.**
Our method is designed with a high degree of flexibility for adaptation across various data sources. As illustrated in Appendix A.1, any public text data source (e.g., Dolly), unified into the generative task format with appropriate prompts as in Table 6, can be used in this setting. In addition, other types of data can also be applied to our method by replacing the LLM with the corresponding foundation model. For example, for image datasets (e.g., ImageNet, MS COCO), ViT can be used as the foundation model in our method for this setting. We will add a discussion about adaptation to other data sources. | Summary: Federated Foundation Models (FedFM) is an emerging research domain studying the collaborative fine-tuning of pre-trained foundation models. This paper studies a test-time distribution shift problem in FedFM by proposing a new dual-personalising adapter.
Strengths: 1) The proposed method is novel. The targeting problem is new while a new setting is created in the experimental study.
2) The paper presents high-quality content and novelty design. The claimed points are well supported by theoretical discussion and experimental analysis. The appendix and source codes provide sufficient details of the experiment.
3) The clarity of this paper is excellent.
4) The targeting problem is significant to the emerging domain of FedFM.
Weaknesses: 1) According to Eq 1, the P_all is all potential distributions. However, in real applications, the clients are usually insufficient to represent all possible distributions. A discussion is required to clarify this assumption.
2) The implementation framework relies heavily on LoRA, which reduces the contribution.
3) The experiment setting assumes each client owns one type of dataset or task. What’s the difference with multi-task learning? Should you compare it with some multi-task learning baseline methods?
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. Is it possible to apply this dual-personalizing adapter framework to other types of data, for example, image recognition, recommendation, time series, and multimodality data?
2. Is it possible to merge two Federated Datasets into one so that you can have 16 clients?
3. Line 315, how to fix the value of alpha? According to the design, the value of alpha should be decided by a dynamic weighting mechanism.
4. In future work, the author mentioned “theoretical analysis and more datasets”. Why do you think this method needs a theoretical analysis that can make a significant difference to this paper? What are the potential major challenges to applying this method to other datasets that prevent you from finishing these experiments in this paper?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **W1: Discussion of assumption.**
Yes, our main experiments are based on the assumption that all possible distributions are covered by the clients collectively. We already discussed this in the first paragraph of Section 4 and in Appendix C. To enhance clarity, we will rewrite this part and emphasize keywords in bold.
**W2 and Q1: Heavily rely on the LoRA and adapt to other data types.**
Our method is designed with a high degree of flexibility, facilitating its adaptation across various adapter-based PEFT methods and transformer-based foundation models for different data types. In this paper, we simply use an LLM and LoRA as examples to illustrate our method, which can easily be adapted to other frameworks by substituting LoRA and the LLM with alternative adapter-based PEFT methods and transformer-based foundation models. We will revise our paper to include a discussion about adaptation to other adapter-based PEFT methods (e.g., series adapters) and other foundation models (e.g., ViT for images and UNITER for multimodal data).
**W3: Difference from multi-task learning.**
Multi-task learning is centralized, and tuning a centralized foundation model on the combined multi-task data can be taken as the multi-task learning baseline. Since in the centralized foundation model all tasks are standardized into a uniform format (refer to Appendix A.1 and Table 6) and the model already acquires task-agnostic token embeddings through extensive pre-training, directly tuning on these multi-task data is the implementation of multi-task learning with foundation models [1]. We have included this baseline, referred to as "Centralized," in our experimental comparisons. In addition, our setting accounts for test-time distribution shifts, a factor not typically addressed in multi-task learning. Although our main experiments assume that all possible distributions are covered by the clients, our setting also allows for scenarios involving distributions unseen by all clients, and our method has demonstrated robustness in these scenarios, as detailed in Appendix C.1. We will add a discussion in related work to clarify the difference from multi-task learning.
[1] Yu, J., Dai, Y., Liu, X., Huang, J., Shen, Y., Zhang, K., ... & Chen, Y. (2024). Unleashing the Power of Multi-Task Learning: A Comprehensive Survey Spanning Traditional, Deep, and Pretrained Foundation Model Eras. arXiv preprint arXiv:2404.18961.
**Q2: Merge two federated datasets into one for 16 clients.**
Yes, we can merge the two datasets into one to obtain more clients. As explored in the ablation study on the impact of client number in Section 6.2, our method maintains performance when scaling up the number of clients (up to 40), as shown in Figure 4. Therefore, our method is expected to remain effective with more clients and more tasks.
**Q3: How to fix the value of alpha.**
In line 315, we aim to investigate the impact of the global adapter on the training of the local adapter in FedDPA-T. As illustrated on the left of Fig. 2(b), during the local adapter training of FedDPA-T, we employ the frozen global adapter, which contains task-related knowledge, to expedite the learning of the local adapter. It is worth noting that during the training phase there are no distribution shifts; such shifts occur solely in the inference/testing phase, and the dynamic weighting mechanism is only used at inference to determine the value of alpha for addressing test-time distribution shifts. In contrast, in the training phase, alpha only controls the contribution of the frozen global adapter to the local adapter's learning. Therefore, during the training of FedDPA-T, we can fix the value of alpha, as its function differs from that during inference. To enhance clarity, we will revise the notation and use two distinct symbols to differentiate these uses.
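For intuition, here is a minimal numpy sketch of a dual-adapter forward pass of the kind discussed above, with LoRA-style adapters and a fixed alpha scaling the frozen global adapter during local training. The shapes, zero-initialized up-projections, and variable names are illustrative assumptions rather than the paper's exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 8, 2  # hidden size and adapter rank (illustrative values)

W = rng.standard_normal((d, d))                            # frozen pretrained weight
A_g = rng.standard_normal((r, d)); B_g = np.zeros((d, r))  # frozen global adapter
A_l = rng.standard_normal((r, d)); B_l = np.zeros((d, r))  # trainable local adapter

def forward(x, alpha):
    """Base path + fixed-alpha frozen global adapter + trainable local adapter."""
    return W @ x + alpha * (B_g @ (A_g @ x)) + B_l @ (A_l @ x)

x = np.ones(d)
```

With zero-initialized up-projections (`B_g`, `B_l`), both adapter branches contribute nothing at initialization, so the model starts from the pretrained behavior; during FedDPA-T local training only `A_l`/`B_l` would receive gradients while alpha stays fixed.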
**Q4: Theoretical analysis and more datasets.**
For theoretical analysis, we believe that further work could rethink this problem within the context of existing theoretical frameworks and establish a new theoretical framework of FFM. For example, we can start from the perspective of out-of-domain generalization theory to rethink the generalization analysis and establish a new generalization and convergence analysis framework of FFM.
One main challenge in applying the method to more datasets is the significant computational resources required. Given that foundation models often consist of billions of parameters, they necessitate considerable time and storage for tuning. Additionally, the computational demand varies significantly across data types; for example, a thousand text examples in Flan occupy only about 1 MB, whereas the same number of images in ImageNet can require around 130 MB. Therefore, we believe that future work should explore more efficient methods for tuning FFM on larger datasets.
---
Rebuttal Comment 1.1:
Comment: The authors' great efforts in the rebuttal are appreciated. After carefully checking it, my queries have been well addressed and I will keep my score. | Rebuttal 1:
Rebuttal: We thank all the reviewers for their valuable reviews. We also appreciate their recognition of the key contributions of our work and the efficacy of our method.
1. **Contributions** to federated foundation models:
- "The targeting problem is significant to the emerging domain of FedFM." (Reviewer D5Ab)
- "FedFM is a new research domain that has been regarded as an important pathway to enhance foundation models by leveraging decentralized data. Tackling the tradeoff between client-specific personalisation and generalisation is a key challenge of the FedFM research domain." (Reviewer vXiN)
2. The **novelty** of our method:
- "The proposed method is novel. The targeting problem is new while a new setting is created in the experimental study." (Reviewer D5Ab)
- "The proposed method is a new idea with original design. Moreover, this paper is the first work to discuss the test-time generalization challenge on FedFM." (Reviewer vXiN)
3. Comprehensive experiments and **efficacy** of our method:
- "The paper presents high-quality content and novelty design. The claimed points are well supported by theoretical discussion and experimental analysis. The appendix and source codes provide sufficient details of the experiment." (Reviewer D5Ab)
- "The appendix with experiment details and the source codes are provided to support the reproducibility of this work." (Reviewer vXiN)
- "The authors evaluated their approach on a number of NLP tasks." (Reviewer mFUh)
4. **Excellent presentation** of our paper:
- "The clarity of this paper is excellent." (Reviewer D5Ab)
- "In terms of clarity, the paper’s contents are well organized and clearly presented with easy-to-understand figures and well-described details." (Reviewer vXiN)
- "The paper is well written and easy to understand." (Reviewer mFUh)
Detailed responses to each reviewer are provided below. We will incorporate all the feedback in the final version. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Visual Fourier Prompt Tuning | Accept (poster) | Summary: This paper introduces Visual Fourier Prompt Tuning (VFPT), a novel approach to parameter-efficient fine-tuning (PEFT) for large-scale Transformer-based vision models. VFPT integrates Fast Fourier Transform (FFT) with prompt tuning, allowing the model to adapt to new tasks effectively by utilizing both spatial and frequency domain information, especially when there is a significant disparity between the datasets used in pre-training and fine-tuning phases. This method not only retains the simplicity and intuitiveness of standard visual prompt tuning but also enhances model performance across various tasks without substantially increasing the parameter count. Empirical results show that VFPT outperforms several state-of-the-art PEFT methods on benchmark datasets like VTAB-1k and FGVC, achieving higher accuracy with fewer parameters. For instance, VFPT uses only 0.66% of the total model parameters while achieving 73.20% mean accuracy on VTAB-1k, surpassing both full fine-tuning and conventional visual prompt tuning methods.
Strengths: 1. The paper is well-structured and clearly written, with each section logically flowing into the next.
2. VFPT introduces a unique combination of Fast Fourier Transform (FFT) with visual prompt tuning in Transformer-based models, which leverages both spatial and frequency domain information.
3. The empirical results are robust, demonstrating VFPT's superiority over existing state-of-the-art parameter-efficient fine-tuning methods. The paper includes detailed comparative analyses, showing VFPT's performance advantages across multiple benchmark datasets.
4. VFPT has significant implications for the scalability and efficiency of Transformer-based models in vision tasks. The ability to maintain high performance with fewer parameters makes large-scale models more accessible and practical for a wider range of applications.
5. The paper discusses the theoretical implications of integrating FFT within visual prompt tuning, noting improvements in the optimization landscape and model interpretability. VFPT promotes a smoother loss landscape, which correlates with better generalization and lower test error rates.
Weaknesses: 1. The abstract states that the proposed method, VFPT, utilizes only 0.57% of the model parameters to achieve a performance of 73.20% mean accuracy on VTAB-1k dataset. However, Table 1 contradicts this by listing the parameter usage as 0.66%. This discrepancy raises concerns about the accuracy of the reported statistics.
2. The paper does not fully elucidate the complexity introduced by integrating FFT into the prompt tuning process. While the method is praised for its parameter efficiency, the computational overhead, especially in terms of runtime and memory consumption during the FFT operations, is not thoroughly discussed.
3. Table 3 reveals that VFPT appears to underperform relative to some established partial tuning and extra module methods when applied to models pretrained with MAE self-supervised objective. This might indicate that VFPT's reliance on Fourier transforms does not synergize as effectively with the feature representations or data distributions typical of self-supervised learning models. Understanding why VFPT performs less effectively in this context is crucial for refining the approach or identifying its optimal application scenarios.
Technical Quality: 3
Clarity: 3
Questions for Authors: Table 5(b) explores the effects of different Fourier Prompt Locations (Prepend, Append, and Random) on the performance of Visual Fourier Prompt Tuning (VFPT). The results suggest variations in performance based on the placement of these prompts within the input sequence. Since VPT mentions that different prompt locations are mathematically equivalent, I wonder why the location of Fourier prompts impacts model performance? The following sentence is from section 3.2 of the original VPT paper: “Notably for ViT, $x_N$ is invariant to the location of prompts since they are inserted after positional encoding, e.g., $[\mathbf{x_0}, \mathbf{P}, \mathbf{E_0}]$ and $[\mathbf{x_0}, \mathbf{E_0}, \mathbf{P}]$ are mathematically equivalent. This also applies to VPT-Deep.”
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The limitations have been discussed in Appendix S9.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer TrZ7,
We sincerely appreciate your time and effort in reviewing our paper and providing valuable comments. We provide explanations to your questions point-by-point in the following.
**Q1: Regarding the discrepancy of tuned parameter rate.**
**A1:** Sorry for the confusion. We want to clarify that the average percentage of 0.57% mentioned in the abstract is exclusive to the results obtained on the VTAB-1k benchmark, which comprises 19 subset datasets, while the 0.66% in Table 1 is the average percentage of tuned parameters across all 24 tasks, i.e., it encompasses both the VTAB-1k benchmark and the FGVC datasets (19 from VTAB-1k and 5 from FGVC). We will revise accordingly to make this clearer.
**Q2: Regarding complexity analysis.**
**A2:** Thank you for the excellent suggestion. We have provided a detailed comparison of our computational results below. More specifically, we experimented with different Fourier percentage settings (i.e., the alpha rate) on the CIFAR-100 benchmark and reported their maximum memory consumption, average training batch time, and average inference batch time. All settings were tested with the same batch size and prompt length. The experiments were conducted on NVIDIA A100-40GB GPUs. We’ll supplement these results in the revision.
| Configurations | Maximum Memory Consumption (GB) | Training Average Batch Time (s) | Inference Average Batch Time (s) |
| ---------------- | ------------------------------ | ------------------------------- | -------------------------------- |
| VPT (alpha=0.0) | 1.8210 | 0.1140 | 0.0499 |
| VFPT (alpha=0.3) | 1.8210 (0%) | 0.1169 (+2.54%) | 0.0505 (+1.20%) |
| VFPT (alpha=0.5) | 1.8210 (0%) | 0.1155 (+1.32%) | 0.0502 (+0.60%) |
| VFPT (alpha=0.7) | 1.8210 (0%) | 0.1150 (+0.88%) | 0.0500 (+0.20%) |
| VFPT (alpha=1.0) | 1.8210 (0%) | 0.1150 (+0.88%) | 0.0501 (+0.40%) |
**Q3: Regarding the performance in MAE tasks.**
**A3:** Thank you for the insightful question. There are two main reasons why VFPT performs less effectively in MAE objectives.
**First**, our work is built on top of VPT, which has significantly lower performance under different pretraining objectives. VPT suggests this may be because the training strategies of the two self-supervised ViTs are fundamentally different from those of the supervised ones. Although in this work we manage to significantly narrow this gap (e.g., 53.59% vs. 36.02% under MAE on VTAB-1k Natural) by introducing frequency domain information, the discrepancy in the training objectives might still cause lower performance.
**Second**, we observe an interesting phenomenon that during prompt tuning, the evaluation performance in self-supervised objective tasks is less stable compared to supervised tasks (e.g., 0.33 vs. 1.30 average standard deviation on VTAB-1k Natural). This observation further supports our claim above. In the future, we plan to explore and further mitigate the gap between different pretrained objectives.
**Q4: Regarding Fourier Prompt Locations.**
**A4:** Thank you for the question. We want to clarify that the claim in VPT holds when switching between $E_{0}$ and $P$, as it only changes the position of **all prompts taken as an entire block**. In contrast, in VFPT, the Fourier operation is introduced **within the prompts**, so its effect persists across different location settings once the Fourier percentage is determined. Our ablative study on Fourier prompt location investigates how we place Visual Fourier Prompts within the prompt representation. Different locations result in different prompt representations, thereby distinctly impacting model performance.
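To make the location dependence concrete, the toy sketch below transforms a fraction alpha of the prompt vectors, placed either at the front or the back of the prompt block. Taking the real part of a 1D FFT over the embedding dimension is an illustrative simplification of the paper's operation, and the function name is hypothetical.

```python
import numpy as np

def fourier_prompts(prompts, alpha, prepend=True):
    """Apply an FFT to a fraction `alpha` of the prompt vectors; `prepend`
    selects whether the transformed prompts sit at the front or the back."""
    n = prompts.shape[0]
    k = int(round(alpha * n))  # number of Fourier prompts
    idx = np.arange(k) if prepend else np.arange(n - k, n)
    out = prompts.copy()
    # Keep only the real part so the transformed prompts stay real-valued.
    out[idx] = np.real(np.fft.fft(prompts[idx], axis=-1))
    return out
```

Because the transform acts on individual prompts inside the block rather than on the block's position in the token sequence, `prepend=True` and `prepend=False` produce different prompt representations for the same alpha, so the location ablation in Table 5(b) is not vacuous.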
We appreciate your thoughtful comments. We hope our response addresses your concerns. Please let us know if there are any additional questions, and we will be happy to discuss further.
---
Rebuttal Comment 1.1:
Comment: I appreciate your detailed response. The authors have addressed all the concerns I raised in my initial review, and after considering the other feedback, I have raised my score to 7.
---
Reply to Comment 1.1.1:
Title: Thank you for the prompt reply
Comment: Thank you for your prompt response and the constructive feedback, which is crucial for enhancing our work. | Summary: This paper introduces Visual Fourier Prompt Tuning (VFPT), an approach for parameter-efficient fine-tuning of large vision models. VFPT integrates Fast Fourier Transform (FFT) operations into visual prompt tuning, allowing it to incorporate both spatial and frequency domain information. The method demonstrates superior performance and generalizability across various tasks and datasets, particularly when there are significant disparities between pretraining and fine-tuning data. VFPT outperforms several state-of-the-art baselines on benchmarks like VTAB-1k and FGVC, using only a small fraction of trainable parameters. The authors provide extensive empirical results, ablation studies, and visualizations to demonstrate the effectiveness of their approach. They also explore the optimization landscape and interpretability aspects of VFPT.
Strengths: Originality:
The paper presents an approach by integrating Fast Fourier Transform (FFT) operations into visual prompt tuning. This is an original idea that bridges frequency domain analysis with parameter-efficient fine-tuning of large vision models. The authors draw inspiration from human visual cognition to incorporate both spatial and frequency domain information, which is a creative angle not explored in previous visual prompt tuning methods.
Quality:
- Comprehensive experiments are conducted across multiple benchmarks (VTAB-1k, FGVC) and model architectures (ViT, Swin Transformer).
- Rigorous ablation studies examine various components of the proposed method.
- The authors provide in-depth analysis of the optimization landscape and interpretability aspects.
- Error bars and statistical significance are reported for main results.
- The paper includes thorough comparisons with state-of-the-art baselines.
Clarity:
The paper is well-structured and clearly written.
Significance:
- It addresses a key challenge in parameter-efficient fine-tuning - performance degradation when there's a large disparity between pretraining and fine-tuning datasets.
- The method achieves strong performance while using only a small fraction of trainable parameters, which is crucial for adapting large models efficiently.
- The approach is general and can potentially be applied to various vision transformer architectures.
Weaknesses: 1. The paper lacks a rigorous theoretical analysis of why integrating Fourier components improves generalization. While empirical results are strong, a deeper theoretical investigation could provide insights into:
- Why frequency domain information helps with dataset disparities
- How the balance between spatial and frequency information affects model behavior
- Potential limitations or failure cases of the approach
Suggestion: Develop a theoretical framework explaining the interplay between spatial and frequency domain information in prompt tuning. This could involve analyzing the properties of the loss landscape or examining how Fourier components affect the model's representation space.
2. The Fourier percentage (α) is a crucial hyperparameter, but its selection process is not fully automated. The paper suggests using dataset disparity as a guideline, but this may not always be easily quantifiable.
Suggestion: Develop an adaptive method to automatically determine the optimal Fourier percentage based on task characteristics. This could involve a small meta-network that predicts the optimal α, or a dynamic adjustment mechanism during training.
3. While the paper mentions a slight decrease in training speed (2.8% on VTAB-1k), a more comprehensive analysis of computational trade-offs is missing. This is important for practitioners considering adopting the method.
Suggestion: Provide a detailed analysis of computational overhead across different settings, including:
- Training time comparisons with baselines
- Memory usage
- Inference time implications
- Potential optimizations to reduce overhead
4. The paper primarily uses FFT as a black-box operation. A deeper exploration of how different frequency components contribute to performance could yield additional insights.
Suggestion: Conduct experiments analyzing the impact of different frequency bands on model performance. This could involve selectively filtering certain frequency components or visualizing which frequencies are most important for different tasks.
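A minimal sketch of the kind of band-filtering experiment suggested here: zero out selected frequency components of an embedding via an FFT/inverse-FFT round trip. The function name and the low-pass choice are illustrative assumptions, not anything proposed in the paper.

```python
import numpy as np

def band_filter(x, keep_low_frac):
    """Zero out all but the lowest `keep_low_frac` of frequency components.

    `x` is a 1-D embedding vector; `keep_low_frac` in (0, 1] sets the retained
    low-frequency band. Illustrative only -- a real experiment would sweep bands.
    """
    spectrum = np.fft.fft(x)
    freqs = np.fft.fftfreq(x.size)           # signed frequencies in [-0.5, 0.5)
    cutoff = keep_low_frac * np.abs(freqs).max()
    spectrum[np.abs(freqs) > cutoff] = 0.0   # drop the high-frequency band
    return np.fft.ifft(spectrum).real        # x is real, so keep the real part

x = np.random.default_rng(0).standard_normal(64)
low_passed = band_filter(x, keep_low_frac=0.25)
```

Running the downstream task on such selectively filtered prompts (or features) would reveal which bands carry the performance.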
5. The generalization to other architectures is not clear. While the method is tested on ViT and Swin Transformer, its applicability to other architecture types (e.g., convolutional networks, MLP-Mixers) is not explored.
Suggestion: Extend the experimentation to a broader range of architecture types to demonstrate the method's generality. If challenges arise, analyze and discuss the limitations of applying VFPT to different model families.
6. The paper doesn't deeply explore how Fourier prompts interact with regular prompts or how they influence different layers of the network.
Suggestion: Conduct a layer-wise analysis of how Fourier prompts affect feature representations throughout the network. Visualize and quantify the interactions between Fourier and regular prompts to provide insights into their complementary roles.
7. The paper doesn't thoroughly address the robustness of VFPT to different initializations or potential instabilities during training.
Suggestion: Perform a comprehensive stability analysis, including:
- Sensitivity to different random seeds
- Learning rate schedules that work well with VFPT
- Potential gradient issues or training instabilities
- Robustness to different prompt lengths and configurations
8. While benchmark performance is strong, the paper lacks discussion on how VFPT performs on real-world, large-scale applications outside standard benchmarks.
Suggestion: Include case studies or experiments on practical, large-scale vision tasks (e.g., autonomous driving, medical imaging) to demonstrate the method's effectiveness in real-world scenarios. Discuss any challenges or modifications needed for such applications.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please refer to the weaknesses section.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Limitations:
The authors do discuss limitations in Section S9 of the appendix, which is a positive point. They specifically mention the introduction of an additional hyperparameter (Fourier percentage) and acknowledge that while dataset disparity can guide its selection, an automatic search method could further improve efficiency. This shows awareness of a key limitation in their approach.
However, the limitations discussion could be more comprehensive. For example:
1. They could elaborate on potential scenarios where VFPT might not perform well.
2. Discussion of scalability limitations or computational constraints for very large models is not clearly addressed.
3. Potential limitations in applying VFPT to other domains or model architectures could be explored further.
Societal Impact:
The authors do discuss social impact in Section S9 of the appendix, which is commendable. They highlight positive aspects such as:
1. Potential for improving model accuracy without substantial computational overhead.
2. Applicability in computationally-sensitive real-world applications.
3. Advancements towards achieving generality across datasets.
However, the discussion of potential negative societal impacts is limited. The authors could have explored:
1. Potential misuse of more efficient and generalizable models in privacy-invading or surveillance applications.
2. Environmental impacts of encouraging more fine-tuning experiments.
3. Potential biases that might be introduced or amplified through this method.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer 9y4B,
We sincerely appreciate your time and effort in reviewing our paper and providing detailed comments and suggestions, which are crucial for improving our work. We provide explanations to your questions point-by-point in the following.
**Q1: Regarding theoretical analysis.**
**A1:** This is indeed an excellent suggestion. While we empirically investigate why VFPT achieves better performance and generalizes well across various tasks from an optimization perspective, we also want to highlight several potential directions for future theoretical analysis:
- The influence of the self-attention mechanism: the self-attention mechanism is the key component of transformers, so we plan to conduct a deeper exploration of how the incorporation of frequency information influences the attention module.
- The potential mitigation of low-pass filter phenomena [1] in transformers. The work in [1] indicates that as ViT scales up, excessive low-pass filtering can cause attention collapse. In future research, we aim to theoretically analyze whether the Fourier transform can reduce low-frequency passing and introduce high-frequency components to mitigate this issue.
[1] Anti-Oversmoothing in Deep Vision Transformers via the Fourier Domain Analysis: From Theory to Practice. ICLR 2022.
**Q2: Automatically determine the optimal Fourier percentage.**
**A2:** We are exploring the introduction of a small network within the VFPT framework to autonomously search for optimal combinations, as mentioned in Appendix S10 (lines 733-735). This approach might enhance training efficiency and facilitate additional performance improvements. We plan to conduct more experiments in this direction. Thank you for the suggestion.
**Q3: A more comprehensive analysis of computational trade-offs.**
**A3:** Following the suggestion, we provide a detailed computational analysis with different settings below. Specifically, we experiment with different Fourier percentages on the CIFAR-100 benchmark and report their maximum memory consumption, average training batch time, and average inference batch time. All settings are tested with the same batch size and prompt length. The experiments are conducted on NVIDIA A100-40GB GPUs.
| Settings | Maximum Memory Consumption (GB) | Training Average Batch Time (s) | Inference Average Batch Time (s) |
| ---------------- | ------------------------------ | ------------------------------- | -------------------------------- |
| VPT (alpha=0.0) | 1.8210 | 0.1140 | 0.0499 |
| VFPT (alpha=0.3) | 1.8210 (0%) | 0.1169 (+2.54%) | 0.0505 (+1.20%) |
| VFPT (alpha=0.5) | 1.8210 (0%) | 0.1155 (+1.32%) | 0.0502 (+0.60%) |
| VFPT (alpha=0.7) | 1.8210 (0%) | 0.1150 (+0.88%) | 0.0500 (+0.20%) |
| VFPT (alpha=1.0) | 1.8210 (0%) | 0.1150 (+0.88%) | 0.0501 (+0.40%) |
**Q4: Analyzing the impact of different frequency bands.**
**A4:** Thank you for the insightful suggestion. Following the suggestion, we further visualize the spectral response of an attention map in the last layer (**Figure 1 in the rebuttal PDF**) on the KITTI/distance benchmark following Anti-Oversmoothing [1]. The preliminary results show that VFPT effectively filters out specific low-frequency components while introducing high-frequency components when compared to VPT. This observation suggests that VFPT may effectively mitigate the attention collapse phenomenon (i.e., as the transformer goes deeper, the attention maps gradually become similar, and even nearly identical, after certain layers), as introduced in [1-3]. This observation is consistent with VFPT’s superior performance when compared to VPT. We’ll supplement the analysis with further discussion in the revision.
[2] Improving vision transformers by revisiting high-frequency components. ECCV 2022.
[3] DeepViT: Towards deeper vision transformer. arXiv 2021.
(Due to character limitations, we will continue to answer in another comment.)
---
Rebuttal 2:
Comment: (Continued from where Rebuttal 1 ends.)
**Q5: The generalization to other model families.**
**A5:** Thank you for the great suggestion. Following VPT, we further conduct experiments on ConvNeXt-Base pretrained on ImageNet21K. ConvNeXt-Base is a convolutional neural network with 87.6M parameters. We follow the padding strategy to incorporate learnable prompts for the input images for an intuitive setup. The results presented below show a noticeable improvement, indicating the generalization of VFPT to other architectures.
| ConvNeXt-Base | VTAB-1k Specialized (4) |
| ---- | ------------------- |
| FULL FT | 83.73% |
| VPT | 83.00% |
| VFPT | 83.69% |
In addition, we conducted a preliminary study on a vision-language model: CLIP. Specifically, we applied VFPT to the vision encoder of CLIP using the optimal settings from this work. The results on ImageNet are shown in the table below. As seen, there is a noticeable improvement compared to the standard CLIP, indicating the generalization of VFPT to different models. We’ll provide these additional results and discussions in the revision.
| ImageNet | Tuned / Total (%) | Accuracy (%) |
| -------------- | ----------------- | ------------ |
| CLIP (zero-shot) | 0.00 | 66.73 |
| CLIP + VPT | 0.22 | 70.30 |
| CLIP + VFPT | 0.21 | 71.49 |
**Q6: Regarding the interaction of Fourier prompts and regular prompts.**
**A6:** We conduct experiments analyzing the impact of prompts, as detailed in Section 4.4. We observe from the 3D attention map (Figure 4(a)) that there is a pronounced accumulation of attention scores at both Fourier and regular prompt locations, indicating that these prompts substantially impact the feature representation learning. These observations are from the raw average attention head in the last layer of VFPT, and we plan to investigate how they influence different layers of the network as suggested.
**Q7: Regarding model robustness and stability.**
**A7:** Thank you for the great suggestion. In our experiments, all results are averaged over three runs with different random seeds to mitigate potential sensitivity issues related to random seed variation. Additionally, we conduct the same grid search using the same learning rate and weight decay as VPT and E2VPT, detailed in Section 4.1. We further include an ablation study on different prompt lengths and configurations in Appendix 2.5 to evaluate the robustness across varying prompt lengths and Fourier percentages.
Following the suggestion, we conduct an additional ablation study with different initialization techniques (i.e., the Xavier uniform [4] and He normal [5] initialization schemes). As seen, both He and Xavier initialization show competitive results, validating the robustness of VFPT w.r.t. prompt initialization. We’ll supplement the results with further discussion in the revision.
| Initialization | VTAB-1k Specialized (4) |
| --------------------- | ----------------------- |
| VFPT-He. | 84.88% |
| VFPT-Xavier (default) | 84.93% |
[4] Understanding the difficulty of training deep feedforward neural networks. AISTATS 2010.
[5] Delving deep into rectifiers: Surpassing human-level performance on imagenet classification. ICCV 2015.
**Q8: How does VFPT perform on real-world, large-scale applications.**
**A8:** Thank you for the question. We want to highlight that VTAB-1k includes several insightful scenes valuable for real-world and diverse applications. For instance, the **specialized group** contains two sub-groups: remote sensing and medical. These are particularly useful for applications like satellite monitoring and medical diagnosis. Many tasks from the **structured group** are generated from simulated environments, contributing to object detection in virtual video game contexts. Our results on VTAB-1k demonstrate the potential of VFPT for various applications. In the future, we plan to employ VFPT to tackle more challenging real-world applications, such as out-of-distribution detection.
**Q9: Regarding Limitations and Social Impact.**
**A9:** Thank you for all the excellent suggestions. We’ll provide more discussion according to the suggestion in our revision.
We appreciate your thoughtful comments. We hope our response addresses your concerns. Please let us know if there are any additional questions, and we will be happy to discuss further.
---
Rebuttal Comment 2.1:
Title: Acknowledging response
Comment: The authors have addressed my comments. Considering the feedback from the other reviewers and the authors' responses, I raise my rating to a weak accept for this submission.
---
Reply to Comment 2.1.1:
Title: Thank you for your positive response
Comment: We are delighted to hear that we have successfully addressed all your concerns and received your approval of the work. | Summary: This paper proposes Visual Fourier Prompt Tuning (VFPT) to address performance degradation in parameter-efficient finetuning (PEFT) methods caused by dataset disparities. VFPT integrates Fast Fourier Transform (FFT) into prompt embeddings, enhancing performance with minimal parameter usage. This work is built upon the VPT method. The key innovation is to integrate frequency domain information using FFT to VPT. The performance is evaluated on VTAB-1k and FGVC.
Strengths: The experiments are comprehensive, and the performance on several benchmarks is impressive as well. The interpretability study shows some interesting findings that can be helpful for future research. The ablation study is complete. Compared with the VPT baseline, FFT’s effectiveness has been well demonstrated.
The presentation of this paper, including the figures and tables, is very good.
Weaknesses: Overall, this is a well-organized paper. I have several questions regarding the methods and results:
1. What is the performance of larger models when applying VFPT? Whether VFPT can work well with large models is an important question that hasn't been answered in the current version. As model size grows, efficient prompt learning becomes more critical. However, the current results are mainly on relatively small models.
2. Whether VFPT can be integrated into the CLIP model to enhance zero-shot performance is also an interesting question. I am very interested in vision and text encoders and whether VFPT can be applied to CLIP models.
3. Current results do not answer the first question in the introduction: “Can prompt tuning generalize across datasets with varying disparities?” It confuses me because it makes me think of transferable prompt designs, which are trained on one dataset and then transferred to another. This motivation, to me, needs further rephrasing or validation.
Technical Quality: 3
Clarity: 3
Questions for Authors: I have several expectations for seeing the potential of VFPT. If the author can address these questions, I will change my scores.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, the authors discussed the limitation of their methods.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Dear Reviewer PyKu,
We sincerely appreciate your time and effort in reviewing our paper and providing valuable comments. We provide explanations to your questions point-by-point in the following.
**Q1: Regarding the VFPT performance with larger models.**
**A1:** Thank you for the great suggestion. We use ViT-Base and Swin-Base for comparison as they are the most commonly adopted models in previous works. We completely agree that prompt learning is also important for large models. Based on your suggestion, we have conducted additional experiments by applying VFPT to the ViT-Huge (632M), which is significantly larger than the ViT-Base (86M) and Swin-Base (86.7M). The preliminary results in the table below show that VFPT consistently achieves better performance with larger models. We will include the full results with further discussion in the revision. Thank you again for your suggestion.
| ViT-Huge | Natural (7) | Specialized (4) | Structured (8) |
| ---- | ----------- | --------------- | -------------- |
| VPT | 77.9 | 83.3 | 52.2 |
| VFPT | 78.9 | 84.4 | 57.0 |
**Q2: Regarding the integration of VFPT into the CLIP model.**
**A2:** Thank you for the insightful question. The proposed VFPT is a general and robust solution that can be applied to different models. Based on your suggestion, we conducted a preliminary study on CLIP. Specifically, we applied VFPT to the vision encoder of CLIP using the optimal settings from this work. The results on ImageNet are shown in the table below. As seen, there is a noticeable improvement compared to the standard CLIP, indicating the flexibility of adapting VFPT to different models. We appreciate your suggestion and plan to investigate further in the direction of prompt tuning on vision-language models.
| ImageNet | Tuned / Total (%) | Accuracy (%) |
| -------------- | ----------------- | ------------ |
| CLIP (zero-shot) | 0.00 | 66.73 |
| CLIP + VPT | 0.22 | 70.30 |
| CLIP + VFPT | 0.21 | 71.49 |
**Q3: Regarding the motivation of ‘Can prompt tuning generalize across datasets with varying disparities?**
**A3:** Sorry for the confusion. We would like to provide some clarification. **First**, this question is motivated by the observation that although original prompt tuning methods (e.g., VPT) have achieved promising results, significant performance degradation occurs when there is a substantial disparity between the datasets used in the pretraining and finetuning phases. The purpose of designing VFPT is to bridge this gap by incorporating frequency domain information. **Second**, we try to answer this question in Section 4.2 (lines 240-268). The empirical results indicate that VFPT achieves noticeably better performance on datasets with large disparities compared to other baselines. We’ll rephrase the question during revision to enhance the clarity.
We appreciate your thoughtful comments. We hope our response addresses your concerns. Please let us know if there are any additional questions, and we will be happy to discuss further.
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal
Comment: The improvement over scaling up of models and CLIP is not marginal; hence, it enhances the authors' claim.
---
Reply to Comment 1.1.1:
Title: Thank you for your valuable suggestions
Comment: Thank you for your valuable suggestions. We are immensely appreciative of the discussions on the performance of large models and vision-language models, as they clearly illuminate the future direction of our work. | null | null | Rebuttal 1:
Rebuttal: To All Reviewers:
We sincerely thank all reviewers for your valuable suggestions and constructive feedback. We have revised our paper accordingly. The major changes are as follows:
- We’ve conducted additional experiments on the large model (i.e., ViT-Huge), vision-language model (i.e., CLIP), and other architectures (i.e., ConvNeXt-Base), following Reviewer PyKu and 9y4B's suggestion.
- We’ve added additional discussions with real-world, large-scale applications, as suggested by Reviewer 9y4B.
- We’ve rephrased our exploration and theoretical analysis of VFPT, as suggested by Reviewer 9y4B.
- We’ve tried to visualize the spectral response of an attention map in the last layer in Figure 1 of the rebuttal PDF, as suggested by Reviewer 9y4B.
- We’ve provided details about the robustness of VFPT and discussions of determining Fourier percentage automatically, as suggested by Reviewer 9y4B.
- We’ve supplemented runtime analysis and comparison, as suggested by Reviewer 9y4B and TrZ7.
- We’ve provided additional discussion on the less effective performance in self-supervised tasks, as pointed out by Reviewer TrZ7.
We hope our response addresses the concerns from the reviewers. Please let us know if there are any additional questions, and we will be happy to discuss further.
Best Regards,\
Authors
Pdf: /pdf/0a2f0986f953642efdcccbdb6763b9a60cd0cc54.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
DistrictNet: Decision-aware learning for geographical districting | Accept (poster) | Summary: This paper proposes an ML-based method to solve districting problems, which are known to be challenging. The core idea is to first convert the districting problem to a simpler capacitated minimum spanning tree problem and then leverage existing learning-to-optimize methods to perform imitation learning. Such a conversion is valuable because the learning-to-optimize approach requires repeatedly solving the optimization problem of interest. Extensive numerical studies are presented, leading to a rich set of insights. I found the paper inspiring.
Strengths: 1. Districting problems are common in practice, so I believe this paper is interesting to a broad community.
2. The mapping from a districting problem to a minimum spanning tree problem is elegant and well justified.
3. The numerical studies were well designed and executed. I particularly appreciate the analysis related to the model's generalizability, which we do not see in every learning-to-optimize paper.
4. The paper was well structured and presented. I found it easy to follow.
Weaknesses: - The paper focuses on a specific problem class (one that is of common interest). The paper could benefit from drawing connections to other problem classes. The idea of converting a harder problem to an easier problem to enable the application of learning-to-optimize techniques is general and has potential, and I believe there should exist many other examples, in transportation and beyond. Conducting additional experiments on other problems is probably out of the scope of this paper, but it would be great if the authors could add more discussion related to its broader applicability.
- Discussion/comparison related to some benchmarks from the OR/optimization community is missing. For example, when the demands are uniformly distributed, I believe the second-stage TSP objective value has some very nice closed-form approximations (see the literature on continuous approximation). Such approximations usually lead to tractable districting problems with good performance. It would be good to see some comparisons related to that.
- I'm not fully convinced about the generalizability from small instances to larger instances as the size range experimented in this paper is relatively narrow.
Technical Quality: 3
Clarity: 4
Questions for Authors: - It is unclear to me what the costs (objective) are in the numerical experiments.
- Related to weakness 1, what is the downside of performing such conversion?
Confidence: 4
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Discussed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your in-depth review, valuable suggestions, and overall positive appreciation of the paper.
> Some benchmarks from the OR/optimization community are missing. When the demands are uniformly distributed, I believe the second-stage TSP objective value has some very nice closed-form approximations (continuous approximation). Such approximation usually leads to tractable problems with good performance. It'd be good to see some comparisons.
The two benchmarks we call BD and FIG belong precisely to the long stream of research on continuous approximations of TSPs. They trace back to Beardwood et al. (1959): *The shortest path through many points*. Several recent works have shown that the formula of Beardwood et al. (and its extension to districting problems, i.e., BD and FIG) holds remarkably well empirically against more sophisticated regression functions for uniform distributions; see, e.g., Kou et al. (2022), *Optimal TSP tour length estimation using standard deviation as a predictor*.
We briefly discussed these results in the introduction of the paper. We will improve the presentation and recall it when introducing the benchmarks in the experiment section.
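For intuition, the continuous approximation mentioned above estimates the optimal tour length through $n$ uniform points in a region of area $A$ as roughly $\beta\sqrt{nA}$. A minimal numerical sketch, where the commonly cited constant $\beta \approx 0.7124$ and the nearest-neighbor heuristic used as a crude upper bound are both illustrative assumptions, not part of the rebuttal:

```python
import numpy as np

BETA = 0.7124  # commonly cited empirical constant for the planar Euclidean TSP

def bhh_estimate(n_points, area):
    """Closed-form approximation of the optimal TSP tour length through
    n uniformly distributed points in a region of the given area."""
    return BETA * np.sqrt(n_points * area)

def nearest_neighbor_tour(points):
    """Crude upper bound on the tour length via the nearest-neighbor heuristic."""
    unvisited = list(range(1, len(points)))
    current, length = 0, 0.0
    while unvisited:
        dists = np.linalg.norm(points[unvisited] - points[current], axis=1)
        current = unvisited.pop(int(np.argmin(dists)))
        length += dists.min()
    length += np.linalg.norm(points[current] - points[0])  # close the tour
    return length

rng = np.random.default_rng(0)
pts = rng.random((200, 2))       # 200 uniform points in the unit square
est = bhh_estimate(200, 1.0)     # ~10.1, below the heuristic tour length
nn = nearest_neighbor_tour(pts)
```

The closed-form estimate sits below the heuristic tour length, consistent with it approximating the optimal tour.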
Note also that we introduced a new benchmark method following the recommendation of Reviewer CwjD, which can be seen as a point-forecast approximation of the stochastic TSP.
> I'm not fully convinced about the generalizability from small instances to larger instances as the size range experimented in this paper is relatively narrow.
In the paper, we train on instances containing 30 basic units (BUs) and test on instances with up to 900 BUs. This is an increase of 30x from train to test, and 7.5x compared to the largest instances of Ferraz et al. This increase is not linear in terms of complexity: the complexity of districting problems increases exponentially with the number of BUs and the size of a district.
Our experiments show that we can scale to the largest and most dense European urban areas and provide good solution qualities. We intentionally train on cities with 30 BUs to show that DistrictNet can learn from small cities and generalize to very large ones. Following your suggestion, we investigate whether our approach can scale even further by considering 2 000 BUs of the Ile-de-France region. The results (see below) show that DistrictNet still performs best. This experiment will be added to the paper.
| | BD | FIG | PredGNN | AvgTSP | DistrictNet |
|---|---|---|---|---|---|
| Districting cost | 2379.0 | 2388.8 | 2295.2 | 2262.7 | **2205.7** |
| Relative to best | + 7.8\% | + 8.3\% | + 4.0\% | + 2.6\% | **0.0** |
> It is unclear to me what the costs (objective) are in the numerical experiments.
Essentially, we want to find the solution to $\min_{\lambda \in \Lambda} \sum_{d \in \mathcal{D}} C_{TSP}(d) \lambda_d$ where $C_{TSP}(d)$ is the cost of district $d$ computed as the expected cost of a stochastic TSP in district $d$. The true cost $\sum_{d \in \mathcal{D}} C_{TSP}(d) \lambda_d$ is our main performance metric.
All the benchmarks approximate the cost $C_{TSP}(d)$, which is too computationally expensive to calculate in a search algorithm. To do this, they train regression models using features of the districts. DistrictNet does not approximate the districting costs: it learns to parameterize a surrogate optimization problem.
Although all methods have a different approximation for district costs, they are all evaluated using the true performance metric $C_{TSP}(d)$ in our experiments. We will clarify this in the paper, thank you for highlighting this potential for improvement!
> The idea of converting a harder problem to an easier problem to enable the application of learning to optimize techniques is general and has potential. [...] Conducting additional experiments on other problems is probably out of the scope of this paper, but it'd be great if the author can add more discussions related to its broader applicability. [...] What is the downside of performing such conversion?
"Approximating a harder problem by an easier one" can be seen as a paradigm shift for solving difficult combinatorial optimization problems. Classic approaches in OR rely on a purely combinatorial optimization approach. They design an algorithm that works for any problem instance and they tend to focus on worst-case complexity.
On the contrary, the "approximate by an easier problem" literature relies on learning. Its key advantage is being very efficient online because it shifts most of the computing time offline to the training algorithm. Provided that the learning architecture and surrogate optimization layer are well chosen, it enables excellent performance on the test instances if they remain close to the training distribution. Being a discriminative approach (instead of a generative one), it tends to require less data to obtain good performance.
There are also downsides. First, until now, there are no known "worst-case" theoretical guarantees on the quality of the solution. Instead, guarantees are in expectation over the training set, as is usual in ML. Further, the architecture (i.e., the neural network and surrogate optimization layer) may be poorly chosen. Finally, learning may be intensive computationally, both when generating the training instances and in the training algorithm. That is why we study the strategy of training on smaller instances and evaluating on larger ones.
For a general discussion of these aspects and an analytical characterization of generalization bounds, we recommend the preprint that appeared online a few days ago: Aubin-Frankowski et al. (2024), *Generalization Bounds of Surrogate Policies for Combinatorial Optimization Problems*, arXiv:2407.17200.
While an extended discussion is beyond the scope of this paper, we will update our paper to present these advantages and limitations. Thank you for this encouraging suggestion!
---
Rebuttal Comment 1.1:
Comment: Thank you for your response. All my concerns have been addressed.
---
Rebuttal 2:
Title: Thank you for your updates
Comment: Dear authors, thank you for your responses. My concerns have been addressed, and the answers clarified my understanding. I'll update my score.
---
Rebuttal Comment 2.1:
Comment: Thank you for acknowledging our rebuttal and raising your score! We are glad that our rebuttal addressed your concerns. | Summary: This paper tackles the problem of geographical districting through decision-aware learning. The problem is challenging due to the large combinatorial number of possible district designs. By generalizing the GNN-based method (similar to [Ferraz et al.2024 arXiv]) based on decision-aware learning through the Fenchel-Young loss, the proposed method showed much better relative performance to the existing methods.
Strengths: - The connection between CMST and the districting problem is utilized for decision-aware learning.
- Compared with the recent existing work [Ferraz et al. 2024], a more powerful learning methodology through GNN and FY loss was studied and evaluated, demonstrated using real data (i.e., real districts in famous cities).
- The experimental results show better results than those of existing solvers.
Weaknesses: - Some explanations of the GNN usage are unclear compared with the existing work [Ferraz et al. 2024]; some parts just follow [Ferraz et al. 2024]. These points suggest an insufficient explanation of the technical contributions.
- The datasets are not explained semantically (i.e., geographical differences, why these cities were selected, etc.) in the main text: it is important to explain the importance of the task and the datasets used (I understand the page limit and appendix).
Technical Quality: 4
Clarity: 3
Questions for Authors: - I am not sure why we should evaluate the p-value in this setting. Where does the randomness come from?
- I understood that the average demands are required. Once the BUs and average demands are fixed, the solution seems to be (almost) unique. Can we combine some learning-based demand predictors? If the estimation is not correct, how are the districting results affected? Similar questions apply to the district costs.
~~~
After discussions, I have updated my score.
Confidence: 4
Soundness: 4
Presentation: 3
Contribution: 3
Limitations: In my opinion, the authors have addressed the issue adequately and mentioned these points.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you very much for your review and detailed evaluation of our paper.
> Some unclear explanations of using GNN, compared with the existing work [Ferraz et al. 2024]; some parts just follow [Ferraz et al. 2024]. These points raise the insufficient explanation of the technical contributions.
Thank you for this remark. We follow Ferraz et al. (2024) in the sense that we use a GNN to embed graphs in a latent space. However, there is a major distinction: we use the GNN to parameterize a surrogate optimization model rather than to estimate the cost of a district. This is not a direct application at all. It requires a suitable loss function, constructing CMST targets to learn from, and an appropriate method to backpropagate gradients. The major benefit of this approach is its ability to generalize. As shown in Table 1, it is the combination of GNN and the CMST layer that allows finding good solutions with a single model on all the cities and sizes used in our experiments.
> Datasets are not explained semantically (i.e., geographical differences, why these cities are selected, etc.) in the main text: It is important to explain the importance of the task and used datasets (I understand the page limit and appendix).
Thank you for this suggestion. We agree that the presentation of the datasets can be improved. To provide comparable results with Ferraz et al, we include the same districting tasks on cities in England. However, we expand the experimental setting by considering larger cities and by including cities in France. Thus, we consider some of the largest and most dense European urban areas. We show that we can solve real-world districting tasks and generalize over cities that are diverse in terms of geography and social characteristics.
> I am not sure why we should evaluate the p-value in this setting. Where does the randomness come from?
In Table 1, we measure the empirical performance of our method on 35 districting tasks from real-world cities. The randomness comes from the selection of the test instances from the distribution of possible districting tasks. The $p$-value tells us that, even though we have a limited set of tasks, we can confidently state that the difference in average performance is not due to random chance but to a clear advantage of the method.
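To make this concrete, here is a minimal, hypothetical sketch of one such significance computation — a one-sided sign test over per-task cost differences (the paper's actual test statistic and the `sign_test_p_value` helper name are assumptions for illustration):

```python
from math import comb

def sign_test_p_value(diffs):
    """One-sided sign test: p-value for H0 'no advantage' against
    H1 'the method beats the baseline on a typical task'.
    diffs[i] > 0 means the method beat the baseline on task i."""
    n = len(diffs)
    wins = sum(1 for d in diffs if d > 0)
    # P(X >= wins) for X ~ Binomial(n, 1/2): under H0, each task is a coin flip.
    return sum(comb(n, k) for k in range(wins, n + 1)) / 2**n
```

With 35 tasks, even this crude test yields a p-value of $2^{-35} \approx 3 \times 10^{-11}$ when the method wins every task, illustrating how a small but consistent sample can rule out random chance.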
> I understood that the average demands are required. Once BU and average demands are fixed, the solution seems to be (almost) unique. Can we combine some learning-based demand predictors? If the estimation is not correct, how can the districting results be affected? Some questions could be about district costs.
Formally, we solve the problem
$\min_{\lambda \in \Lambda}\sum_{d \in \mathcal{D}} E_{\xi}[c_{TSP}(d, \xi)] \lambda_d$,
where $c_{TSP}(d, \xi)$ is the cost of the TSP in district $d$ with requests $\xi$. The suggestion of the reviewer is to solve instead
$\min_{\lambda \in \Lambda} \sum_{d \in \mathcal{D}} c_{TSP}(d, E_\xi[\xi]) \lambda_d$,
i.e., switching the expectation and minimization operators.
We had initially not implemented this benchmark because this approximation leads to poor results in many stochastic problems (see, e.g., Chapter 4.2 of Birge and Louveaux, *Introduction to Stochastic Programming*): by replacing the random variable $\xi$ with its mean, it ignores the variance of $\xi$.
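A toy Monte Carlo illustration of this point (purely illustrative — the quadratic cost stands in for any cost that is nonlinear in the demand): replacing the random demand by its mean systematically misses the variance term.

```python
import random

random.seed(0)

# Toy cost: quadratic in the realized demand. Any nonlinear cost exhibits the gap.
def cost(demand):
    return demand ** 2

samples = [random.gauss(10.0, 3.0) for _ in range(100_000)]
expected_cost = sum(cost(x) for x in samples) / len(samples)  # E[c(xi)]
cost_of_mean = cost(sum(samples) / len(samples))              # c(E[xi])
gap = expected_cost - cost_of_mean  # equals the sample variance, ~9 here
```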
Nevertheless, we followed your suggestion and implemented this method as a baseline. In our setting, the demand distribution is known, so we do not need to forecast it. For each basic unit, the average demand request $E_\xi[\xi]$ is a single request that appears at the barycentre of the area. Evaluating a district's cost then reduces to solving a TSP over the barycentres of all its BUs.
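A minimal sketch of this approximation (a nearest-neighbour heuristic stands in for the TSP solver actually used, and the function names are illustrative):

```python
from math import hypot

def barycentre(points):
    """Barycentre of a list of (x, y) request locations."""
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def nn_tour_length(points, depot=(0.0, 0.0)):
    """Length of a nearest-neighbour TSP tour through `points`,
    starting and ending at the depot."""
    unvisited = list(points)
    pos, total = depot, 0.0
    while unvisited:
        nxt = min(unvisited, key=lambda p: hypot(p[0] - pos[0], p[1] - pos[1]))
        total += hypot(nxt[0] - pos[0], nxt[1] - pos[1])
        unvisited.remove(nxt)
        pos = nxt
    return total + hypot(pos[0] - depot[0], pos[1] - depot[1])

def avg_tsp_cost(district):
    """AvgTSP approximation: one TSP over the barycentre of each BU.
    `district` is a list of BUs, each a list of (x, y) request locations."""
    return nn_tour_length([barycentre(bu) for bu in district])
```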
We integrate this approximation in an iterated local search and evaluate all our districting tasks. The results (called AvgTSP in the table below) show that this method finds districts of reasonably high quality and outperforms the other baselines. Still, DistrictNet outperforms it by more than 4\% on average.
| Method | Avg Rel Diff | p-value |
|---------------|--------------------|----------------------|
| Benchmark 1, BD | 10.02 % | 1.3e-08 |
| Benchmark 2, FIG | 10.17 % | 1.6e-08 |
| Benchmark 3, PredGNN | 11.56 % | 1.5e-10 |
| **Benchmark 4, AvgTSP** | 4.44 % | 2.7e-04 |
| DistrictNet | **0.0 %** | - |
We will add this method to the paper with a formal description and a detailed analysis of its performance. Thank you for this great suggestion! | Summary: The paper discusses the development and evaluation of a new method called
DISTRICTNET for addressing districting and routing problems using neural networks.
The approach focuses on minimizing the Fenchel Young loss and generalizing to large
out-of-distribution instances from training on smaller instances. The method involves
a representation of the instances as a set of graph weights learned by GNN. These weights
are fed to a CMST algorithm to obtain the spatial partitioning. Comparative
benchmarks include linear regression models and graph neural networks trained
for cost estimation. Results show that DISTRICTNET can achieve lower costs with
a small number of training examples but benefits from more examples,
with diminishing returns.
Strengths: Some key strengths of the research presented in this paper include a structured learning approach to obtain high-quality solutions to large districting problems, the robustness of the approach to changes in problem parameters, and the ability to generalize to out-of-distribution instances.
Weaknesses: I am not familiar with the routing literature and I had a very hard time reading this paper. The supplementary material is essential; the authors should consider moving the mathematical treatment of several issues not relevant to the proposed algorithm to the appendix, in exchange for bringing some of the appendix into the main paper. For example, I was lost about the district demand and cost features throughout the paper; this is only explained in the appendix.
The datasets are small, with the number of areas varying between 120 and 983. They use an additional set of 27 cities, 25 of them with fewer than 100 areas and a maximum of 684.
The benchmarks did not include what the authors call the quick heuristic methods.
I also found other recent papers in important venues dealing with this districting
problem by partitioning a minimum spanning tree as in the paper and with simpler
methods.
Teixeira, L. V., Assunção, R. M., & Loschi, R. H. (2019). Bayesian space-time partitioning by sampling and pruning spanning trees. Journal of Machine Learning Research, 20(85), 1-35.
Luo, Z. T., Sang, H., & Mallick, B. (2021). A Bayesian contiguous partitioning method for learning clustered latent variables. Journal of Machine Learning Research, 22(37), 1-52.
Luo, Z. T., Sang, H., & Mallick, B. (2021). BAST: Bayesian additive regression spanning trees for complex constrained domain. Advances in Neural Information Processing Systems, 34, 90-102.
McCartan, C., & Imai, K. (2023). Sequential Monte Carlo for sampling balanced and compact redistricting plans. The Annals of Applied Statistics, 17(4), 3300-3323.
Technical Quality: 2
Clarity: 2
Questions for Authors: Add "routing" to the paper title.
Each benchmark method has a different definition of a district cost. Are the outputs of each of these algorithms a result of these different cost definitions, or are they due to their different approaches?
Confidence: 1
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: This is a theoretical paper with no immediate connection to societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you very much for your detailed review of the paper and the helpful suggestions. We respond to your concerns and questions below.
> The supplementary material is essential and the authors should consider [...] bringing some of the appendix to the main paper. For example, I was lost about the district demand and cost features along the paper.[...] Each benchmark method has a different definition of a district cost. The outputs of each of these algorithms are a result of these different cost definitions or are due to their different approaches?
Thank you for this comment. Essentially, we want to find a districting solution $\lambda$ that solves $\min_{\lambda \in \Lambda} \sum_{d \in \mathcal{D}} C_{TSP}(d) \lambda_d$, where $C_{TSP}(d)$ is the cost of district $d$. A district's cost is the expected cost of a "stochastic" TSP. The true cost $\sum_{d \in \mathcal{D}} C_{TSP}(d) \lambda_d$ is the main performance metric. This is the cost that we measure in all our experiments.
Evaluating a district's cost $C_{TSP}(d)$ is too computationally demanding to be integrated into a districting algorithm at all steps of the search. Hence, the benchmarks approximate this cost during the search. This will be clarified in the paper.
To approximate districts' costs, the benchmarks train regression models using features of the districts. In contrast, DistrictNet does not learn to approximate the districting costs: it learns to parameterize a surrogate optimization problem. In the final version, we will present the key district features in the main body of the paper. Thank you for this suggestion!
> The datasets are small, with the number of areas varying between 120 and 983.
In the paper, we train on instances containing 30 basic units (BUs) and evaluate on instances with up to 900 BUs. This is an increase of 30x from train to test, and 7.5x compared to the largest instances of Ferraz et al. (2024). This increase is not linear in terms of complexity: the complexity of districting problems increases exponentially with the number of BUs and the size of a district.
Our experiments show that we can scale to the largest and most dense European urban areas and provide good solution qualities. Following your suggestion, we investigate whether our approach can scale even further by considering 2 000 BUs of the Ile-de-France region. DistrictNet provides the best performance, showing that it generalizes to instances that are more than 60 times larger than the training ones. This experiment will be added to the paper.
| BD | FIG | PredGNN | AvgTSP | DistrictNet |
|----|----|----|----|----|
| 2379.0 | 2388.8 | 2295.2 | 2262.7 | **2205.7** |
| + 7.8 % | + 8.3 % | + 4.0 % | + 2.6 % | **0.0 %** |
> They use an additional set of 27 cities, 25 of them with less than 100 areas.
We want to clarify: the 27 small cities presented in Appendix B.2 are used to generate the training data. Our test instances are at least four times larger, containing more than 120 BUs. We intentionally train on cities with 30 BUs to show that DistrictNet can learn from small cities and generalize to much larger ones.
> The benchmarks did not include what the authors call the quick heuristic methods.
We do include two quick heuristic methods: BD and FIG, which apply an iterated local search with continuous approximations of district costs. These methods stem from the seminal work of Beardwood et al. (1959), *The shortest path through many points*. Several recent works have shown that the formula of Beardwood et al. holds remarkably well against more sophisticated regression functions for uniform distributions; see, e.g., Kou et al. (2022), *Optimal TSP tour length estimation using standard deviation as a predictor*.
Further, following the recommendation of Reviewer CwjD, we introduced a new benchmark method. It approximates a district's cost by the cost of the TSP that goes through the barycentre of each of its BUs.
> I also found other recent papers in important venues dealing with this districting problem by partitioning a minimum spanning tree as in the paper and with simpler methods.
Thank you for referring us to the rich literature on partition sampling. The suggested works provide methods to sample districts with constraints on the balance, contiguity, and compactness of the partitioning. Balance is to be understood w.r.t. features such as mortality rates in Teixeira et al. These approaches are motivated by the fact that sampling from a uniform distribution of partitions tends to have low compactness. Hence, they recommend sampling from the spanning tree distribution.
Our approach is fundamentally different. It is discriminative, in the sense that it strives to identify the single partition with minimum cost. In contrast, the suggested papers are generative: they output a distribution of partitions. Further, none of them considers districting-and-routing applications, and adapting them to handle routing applications is not straightforward.
Still, our work and the literature on partition sampling have very interesting connections. A first idea could be to solve the CMST problems approximately by sampling balanced partitions from the spanning-tree distribution. In this way, we might replace our current search algorithm (iterated local search) with a specialized type of random search over spanning trees. A second idea is to integrate considerations about balance and compactness into the problem. To this end, given examples of districts that are compact and balanced, DistrictNet can be directly trained to imitate them using a parameterized CMST.
We will include this discussion in the paper, and highlight the opportunities in future research on the integration of decision-aware partitioning and sampling from spanning trees. Thank you for this suggestion!
> Add "routing" to the paper title.
Yes, we will specify that we solve districting-and-routing problems in the paper title.
---
Rebuttal Comment 1.1:
Title: Reviewers WW6p and qHFv
Comment: The authors have provided extensive replies to the criticisms expressed in your reviews. Have these responses satisfied your concerns? If not, are there any further clarifying questions you would like to ask? The author discussion period ends tomorrow (Aug 13). Please respond.
---
Rebuttal Comment 1.2:
Comment: Thanks for addressing my questions and providing more detail. I will update my score.
---
Reply to Comment 1.2.1:
Comment: Thank you for your response and raising your score! | Summary: The authors use CMST to solve the problem of districting and routing in large scale scenarios. Finding the relationship between CMST and partitioning is quite beneficial for researchers engaged in related research.
However, it is worth noting that the authors only report the performance of the model on the 'cost' metric, and do not investigate the performance of the method on the common metrics of traditional districting task. What's more, they do not discuss the efficiency of the method. I hope that the authors can supplement this.
Another problem is that the baseline methods compared seem to be relatively old except for PREDGNN. The districting methods I know in terms of road network, such as CCH, have not been used as the baseline method. I hope the author can clarify the criteria for baseline selection.
Strengths: The authors use CMST to solve the problem of districting and routing in large scale scenarios. Finding the relationship between CMST and partitioning is quite beneficial for researchers engaged in related research.
Weaknesses: It is worth noting that the authors only report the performance of the model on the 'cost' metric, and do not investigate the performance of the method on the common metrics of traditional districting task. What's more, they do not discuss the efficiency of the method. I hope that the authors can supplement this.
Another problem is that the baseline methods compared seem to be relatively old except for PREDGNN. The districting methods I know in terms of road network, such as CCH, have not been used as the baseline method. I hope the author can clarify the criteria for baseline selection.
Technical Quality: 3
Clarity: 3
Questions for Authors: Please see the weaknesses.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes. The authors claim that though the proposed method can be applied to a wide variety of partitioning problems, they only focus on the districting and routing problem in this paper. In fact, I think it's a very valuable question to discuss because the evaluation metric of the partitioning problem and that of the districting and routing problem are different. I think it is necessary for the authors to give more explanation of why their method can be applied to a wider range of partitioning problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review and insightful comments. We answer your questions below and clarify the concerns discussed in the report.
> The authors only report the performance of the model on the 'cost' metric, and do not investigate the performance of the method on the common metrics of traditional districting task. [...] The authors claim that though the proposed method can be applied to a wide variety of partitioning problems, they only focus on the districting and routing problem in this paper. [...] I think it is necessary for the authors to give more explanation of why their method can be applied to a wider range of partitioning problems.
Thank you for your comments. In this work, we focus on the 'cost' metric because our task at hand is to solve the problem $\min_{\lambda \in \Lambda} \sum_{d \in \mathcal{D}} C_{TSP}(d) \lambda_d$, where $C_{TSP}(d)$ is the expected cost of a stochastic TSP in district $d$. In other words, we want to partition the city into efficient districts from a logistics perspective. The cost of a district corresponds to the long-term expected cost of deliveries (TSP) in each area. In contrast with other districting objectives for which a closed-form formula is available (e.g., compactness), this setting is computationally challenging. This is because closed-form formulas to approximate TSP costs are inaccurate, and sampling scenarios and solving TSPs is too time-consuming to be integrated into a districting algorithm.
Our contribution is a new decision-aware approximation and solution method. Our approach can be readily extended to other districting metrics. This is a valuable direction for future work, as also highlighted by Reviewer WW6p. To adapt DistrictNet to this setting, we only need to adapt the generation of the training instances. This is why we claim that our framework applies to a wide range of constrained partitioning problems: it can work with any general districting cost function of the form $C(d)$. Hence, it can consider other metrics such as fairness, balancing, compactness, etc. Because the task at hand remains a constrained partitioning problem, our surrogate CMST layer will still capture the structure of the problem.
We will make sure to include additional discussions on this aspect in the final version of the paper. We will suggest exploring other districting metrics as future work, and specify why DistrictNet is especially suitable in these settings.
Lastly, we have evaluated the compactness of our districting solutions using Reock's formula. The results (see below; a larger value indicates higher compactness) demonstrate that DistrictNet generally achieves the highest compactness on average. However, it is crucial to note that, in logistics, delivery costs are not always correlated with compactness. The shape and orientation of the districts relative to the depot's position can significantly impact TSP costs.
| | BD | FIG | PredGNN | AvgTSP | DistrictNet |
|---|---|---|---|---|---|
| Bristol |0.124 | 0.125 | 0.124 | 0.142 | **0.173** |
| Leeds |0.168 | 0.154 | 0.187 | 0.223 | **0.268** |
| London |0.086 | 0.089 | 0.08 | 0.096 | **0.11** |
| Lyon |0.183 | 0.177 | 0.209 | 0.213 | **0.258**|
| Manchester |0.167 | 0.165 | 0.142 | 0.258 | **0.272** |
| Marseille |0.143 | 0.143 | 0.177 | **0.195** | 0.185 |
| Paris |0.169 | 0.182 | 0.157 | **0.278** | 0.274 |
| *Average* |0.149 | 0.148 | 0.154 | 0.201 | **0.22** |
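For reference, Reock's score is the ratio of a district's area to the area of its minimum enclosing circle. The sketch below is a simplified approximation (it centres the circle at the vertex centroid rather than solving the exact minimum-enclosing-circle problem, so it lower-bounds the true score; the geometry pipeline used in the paper may differ):

```python
from math import pi, hypot

def polygon_area(poly):
    """Shoelace formula; `poly` is a list of (x, y) vertices in order."""
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1] - poly[(i + 1) % n][0] * poly[i][1]
            for i in range(n))
    return abs(s) / 2.0

def reock_approx(poly):
    """Approximate Reock compactness: area / area of an enclosing circle
    centred at the vertex centroid. The radius over-estimates that of the
    true minimum enclosing circle, so this lower-bounds the exact score."""
    cx = sum(x for x, _ in poly) / len(poly)
    cy = sum(y for _, y in poly) / len(poly)
    r = max(hypot(x - cx, y - cy) for x, y in poly)
    return polygon_area(poly) / (pi * r * r)
```

For a square, the centroid circle coincides with the minimum enclosing circle, so the approximation is exact and equals $2/\pi \approx 0.64$.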
> The authors do not discuss the efficiency of the method. I hope that the authors can supplement this.
Our algorithm is made of three main components: the data generation, the learning algorithm (setting the weights of the neural network), and the inference algorithm (using the neural network to obtain a solution on a given instance). Learning is quite fast (see Table 2 in the appendix) and inference is very fast (20 minutes). For DistrictNet, the crux of computations is in the generation of training data. Yet, we show in Figure 5 that DistrictNet provides already good results with only 20 training examples.
We will improve the discussion of the efficiency of our methods following your recommendation.
> Another problem is that the baseline methods compared seem to be relatively old except for PREDGNN. [...] I hope the author can clarify the criteria for baseline selection.
To our knowledge, the recent work of Ferraz et al. (2024) represents the state-of-the-art for districting and routing problems. The two benchmarks BD and FIG stem from the long history of analytical results on TSPs and, in particular, the formula of Beardwood et al. (1959) on approximation formulas in asymptotic regimes with many points. The recent work of Kou et al. (2022), *Optimal TSP tour length estimation using standard deviation as a predictor*, shows that more sophisticated regression methods do not significantly outperform the formula of Beardwood et al. That is why we include the two benchmarks BD and FIG even if they are relatively older.
This discussion will be added to the paper. Note also that we have included an additional benchmark method (AvgTSP) following the suggestion of Reviewer S4NW.
> The districting methods I know in terms of road network, such as CCH, have not been used as the baseline method.
Thank you for this suggestion. We understand that you refer to Customizable Contraction Hierarchies (CCH), an efficient technique to compute distances over road networks. Please correct us if our interpretation is wrong.
Since CCH is a technique designed to speed up shortest-path computations, it cannot be directly used in our districting-and-routing context to contract edges and speed up the method. This is because (1) customer positions in each district are random (they are calculated over 100 scenarios), and (2) we calculate TSPs and not shortest paths.
We thank you again for your review, which allowed us to significantly improve the discussion of the general value of DistrictNet for districting. | Rebuttal 1:
Rebuttal: ## We thank the reviewers for their constructive comments and feedback
Dear Reviewers, Dear Area Chair,
We want to express our sincere thanks for the detailed reviews of our work and the constructive feedback. The reviewers have appreciated the value of our decision-aware approach to solve large districting problems and the potential of identifying the CMST as a differentiable optimization surrogate for partitioning problems.
Several comments have been raised by multiple reviewers, addressing: (a) the definition of districting costs and the approximations used in the benchmarks, (b) what makes a districting task large, (c) what justifies the choice of benchmarks and whether others could be included, and (d) what makes DistrictNet applicable to other partitioning problems. We have answered these points in detail in each reviewer's rebuttal. We also provide additional results: a new benchmark method following the suggestion of Reviewer CwjD, and results on an even larger districting task with 2 000 basic units.
We are available to answer any remaining questions during the discussion period. | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HardCore Generation: Generating Hard UNSAT Problems for Data Augmentation | Accept (poster) | Summary: This paper presents a procedure for synthesizing hard UNSAT formulas from a given distribution of UNSAT formulas. Concretely, a formula is generated by 1) extracting a core from a seed instance; 2) adding random new clauses; and 3) refining the formula to become harder. In step 3), a GNN is trained to predict UNSAT core of a given formula and the predicted core is relaxed, potentially yielding a larger core. On two problem distributions, LEC Internal and random K-SAT, the proposed method can generate hard instances faster than previous approaches, and the generated instances exhibit similar hardness distribution.
Strengths: - The proposed method strikes a good balance between the hardness of the generated instances and the generation time. It can generate instances much harder than W2SAT, while doing so much faster than HardSATGEN.
- The combination of the two ideas, iteratively relaxing core to generate hard instances and learning to predict cores, is quite clever and elegant.
- The fact that the solver performance on the synthetic instances resembles that on the original instances is a good indication that the generated instances are similar in some way to the original instances.
Weaknesses: - It seems that the proposed core refinement technique can be viewed as a post-processing step that can be applied to any initial formulas (e.g., ones generated by W2SAT). Have the authors considered building on top of previous DL-generated formulas? Compared with formulas generated by randomly adding clauses to an original UNSAT core, conceptually, formulas generated by a method like W2SAT, which are trained to embed a formula distribution, seem to be a more justifiable starting point.
- The method can still only handle relatively small formulas.
- Given that the LEC benchmarks are proprietary, only random K-SAT will be made publicly available. However, one could have considered using dataset such as SATLIB like in previous work (e.g., SATGEN, G2SAT, W2SAT).
Minor:
- x axis labels are missing in Figure 5.
- Line 353: "syntehtic" -> synthetic
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. We have previously argued that random data differs from real data in important ways that make it unsuitable for machine learning applied to real problems. Could you elaborate?
Confidence: 5
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors adequately addressed the limitations. No potential negative societal impact.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We deeply appreciate the reviewer's generous acknowledgement of the
elegance and performance of our method. In addition, the reviewer's
comments about scalability and publicly available data have led us to
make significant additions to the depth of our work's experimental
setting and analysis. Finally, the reviewer has provided an insightful
future direction regarding the generation mechanism used in our method.
## Weaknesses
1. *Core refinement can be viewed as post-processing applied to any
generator. Have authors considered building on top of previous
DL-generators? Starting with methods like W2SAT which are trained to
embed formula distributions seems more justifiable than starting
with a random generator.*\
Viewing core refinement here as a post-processing step applicable to
any generator is valid. During the design of the proposed method, we
did consider building on top of previous DL-generators. However, a
random generator designed to emulate industrial problems was
selected for its simplicity, minimal cost, and the convenience of
not having to train it. Indeed, perhaps using a DL generator may
improve the downstream performance. Investigating such possibilities
could prove insightful and is a valuable direction for future work.
2. *The method can still only handle small formulas*\
While scalability is certainly a concern as discussed in the
Limitations section of the text, the breaking point of our method
for a 32GB GPU, for example, would be in worst-case a graph with
hundreds of thousands of variables and clauses. This is a problem
size beyond many real-world applications. In the LEC data, no
problem exceeds 100,000 clauses. This limitation was primarily
expressed in consideration of the SAT Competition data, in which
problems (almost always randomly generated problems) can sometimes
reach a million clauses or more. Our scalability limitation, as
discussed in the general response, is primarily related to memory,
and the storage of the adjacency matrix of the graph. A potential
direction for future work is to investigate the application of
existing high-efficiency GNN methods that are capable of addressing
graphs with tens of millions of nodes.
3. *Only K-SAT will be made publicly available. Why not use dataset
such as SATLIB like in previous work?*\
We decided against using SATLIB as the great majority of its
problems are either randomly generated or not UNSAT (our method is
limited to UNSAT problems). Additionally, similarly to the SAT
competition benchmarks, the data is composed of problems from
several highly distinct sources, which is undesirable for
machine-learning applications. Despite this, the global rebuttal and
the single-page PDF include results on SAT Competition data, showing
that our method is able to generate hard problems and provide data
augmentation for this benchmark. We
will make the code for our SAT Competition experiments publicly
available.
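A back-of-envelope calculation supports the memory argument in point 2 above (the byte sizes are assumptions: 32-bit floats for a dense adjacency matrix, 64-bit indices for a COO sparse layout):

```python
def dense_adj_gib(num_nodes, bytes_per_entry=4):
    """Memory (GiB) of a dense num_nodes x num_nodes adjacency matrix."""
    return num_nodes ** 2 * bytes_per_entry / 2**30

def sparse_adj_gib(num_edges, bytes_per_index=8):
    """Memory (GiB) of a COO sparse adjacency: two index arrays per edge."""
    return 2 * num_edges * bytes_per_index / 2**30
```

At 100,000 nodes a dense adjacency already needs about 37 GiB, beyond a 32 GB GPU, while a sparse representation with a few million edges stays well under 1 GiB — which is why memory layout, rather than compute, sets the scaling limit.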
## Questions
1. *It has been argued that random data differs from real data in
important ways that make it unsuitable for machine learning applied
to real problems. Could you elaborate?*\
The clearest demonstration of the difference between real data and
random data, in our view, is the notable difference in solver
performance. It is generally accepted in the literature that some
solvers are random-specialized and others are industry-specialized.
For example, the experimental results in \[1\] are heavily based on
this. One can also see such specialization by noting that the best
solver on the SAT Competition random tracks is not the same as the
best solver on the SAT Competition main track. In fact, the
existence of separate tracks itself indicates an acknowledged
distinction between the types of data.
Given this distinction between the behavior of random and industrial
data, training an industrial-facing model on random data leads to an
out-of-distribution problem. Inference must be performed for
problems that are very different from the training data. It would be
equivalent to augmenting a real-world image dataset with random
patterns. In most cases the model will perform poorly. For example,
SAT-solver selection is a machine-learning task for SAT in which,
given a problem, we aim to rapidly choose the solver which is likely
to solve the problem the fastest. This is a task which greatly
benefits industrial settings in which thousands of SAT problems must
be solved each day and computation and time costs must be minimized.
Training such a selection model on random data would result in the
model learning a wholly different runtime distribution than that of
the industrial data. A similar example is low-cost benchmarking, in
which a model is trained to predict the performance of a solver over
a benchmark. This task is of interest to solver-designers, as it can
be much cheaper than running a design-iteration of a new solver on
the whole benchmark data. If random data is used to learn to
benchmark a solver on industrial problems, the predictor would
likely predict benchmarks as if the industrial data were in fact
random. A third example is the task of hyper-parameter tuning of
solvers, as presented in the HardSATGEN paper as a downstream task.
If one were to use random data to tune the hyper-parameters, the
solvers would likely be poorly tuned for industrial data, whose
hardness distributions are very different to random data.
[1] J. Giraldez-Cru and J. Levy, "Generating SAT instances with community structure," Artificial Intelligence, 2016, pp. 119–134.
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their clarifications and new results. I would like to keep my score. | Summary: This paper proposes a novel method for generating hard UNSAT problems. The method targets the "core identification" problem and iteratively performs refinement using a GNN-based detection procedure, which preserves the key aspects of the original instances that impact solver runtimes. The experimental results show that the method can generate instances with a similar solver runtime distribution as the original instances in a short time.
Strengths: This paper introduces a novel method to generate hard UNSAT problems in a reasonable time frame, which alleviates the data scarcity problem in the SAT solving domain and improves the performance of deep learning methods in this area. Compared to previous methods, this method can generate SAT problems that are more similar to the original instances.
Weaknesses: 1. There is not enough support for the correlation between hardness and core size. For example, in random 3-SAT problems, the UNSAT core can constitute a significant portion of the total instance, even more than 80%, yet modern solvers can easily address these instances.
2. Lack of details about model training. There are no explanations on how to prepare the training dataset.
3. The experimental setting is not convincing. Authors only perform the proposed hard case generation approach on LEC and random k-SAT problems, however, there are no details about how to construct these two datasets. Moreover, it is essential to show results on other datasets to showcase the generalization ability of the proposed method, such as the SAT competition benchmarks.
4. There are some logical problems in the paper’s writing. In the “Core Refinement” part, the paper states that the addition of random new clauses is likely to create a trivial core. However, I find that in the core refinement pipeline, there is no addition of new clauses. Instead, a new literal is added. The writing here is kind of messy.
Technical Quality: 2
Clarity: 2
Questions for Authors: 1. Please provide justifications on the correlation between solving hardness and UNSAT core size.
2. Even though considering the minimal UNSAT core, one instance may contain multiple cores. How to handle this one-to-many mapping problem? This problem is significant as it is directly related to the supervision.
3. The random k-SAT and LEC cases have a very strong bias compared to the real instances. The authors should also prove the generalization ability of their proposed approach on more diverse datasets, such as SAT competition benchmarks.
4. In Section 6.2.3, you mention that you use a subset of the training set as the evaluation dataset. This is not right.
5. When you perform the core refinement step, can the problems generated in the middle be used? How does n, the number of refinement steps, affect the characteristics and performance of the generated problems?
Confidence: 5
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: Yes, the authors have stated the limitations of their work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for their careful and critical comments. These
comments have prompted several new explorations, including an
examination of the progression of hardness during refinement, and the
addition of a new dataset to the experimental setting. These improve
both the strength of our results and the depth of our analysis of the
method.
## Weaknesses
1. *Correlation between hardness and core size is not sufficiently
supported.*
We do not claim in the paper that there is a correlation between the
core size and the hardness. For our proposed
technique to work, our key assumption is that if a problem is easy
then core prediction can identify the easiest core, and then by
removing that core the problem is made harder. The experimental results
demonstrate that we generate challenging problem instances,
providing support that our assumption is correct. We agree with the
reviewer that there is value in providing more direct evidence.
First, we evaluate the hardness of the original problems in the LEC dataset. Note that this is a
real-world dataset derived during industrial circuit design. Figure
3 (left) in the attached pdf shows that there is a general trend of
the hardness increasing as the core size increases, up to a
threshold of 4000-5000 clauses. This trend is observed for absolute
core size rather than the percentage of clauses in the core; the
reviewer's observation is correct that an 80% core can be easy.
Although we observe this correlation, it
is not essential for our method. Much more important are the results
shown in Figure 3 (right) in the pdf, where we show how the hardness
changes as a result of refinement. The figure shows
boxplots of the percentage change in hardness for different bins of
initial hardness. For easy problems we increase the
hardness sometimes by a factor of over 200, indicating successful
hardening of hardness-collapsed problems.
2. *Model training details missing.*
Details on how the training data is prepared for the core prediction
model (in particular, how supervision labels are generated) are
provided at l. 233.
There are three **separate** groupings of the dataset: (i) Core
Prediction training data, (ii) generation seeding data, and (iii)
the remaining data. This split is chosen randomly. Core
Prediction training data can be small (we used 15 problems),
because we use each problem as a seed instance 5 times for generation
followed by core-refinement with a traditional core detector. Saving problem-core pairs at each step,
we obtain 15,000 training pairs for the
core-predictor model. The seeding data are used to
seed HardCore once the core predictor is trained in order to obtain
generations to evaluate. These generations are then compared against
the seed data for runtime similarity. Finally, these generations
(and their seeds) are used to train a runtime-predictor model,
which is evaluated on the remaining un-used data.
The core prediction model is trained with a binary cross-entropy
loss using the prepared data.
For more details on the training procedure, please consider the
code provided in the supplementary material. We will add a detailed
description of this training process in an appendix.
3. *only 2 datasets, no SAT Competition benchmarks. *
Please see the global rebuttal, in which we present new results on
SAT Competition data.
4. *Construction details of the used datasets are not provided.*
For details on how the K-SAT dataset was generated, please see
Appendix section A.4. The problems from the LEC dataset are
SAT problems which were solved during the design of real circuits,
and were saved for future use in analysis and research.
5. *Unclear writing: in \"Core Refinement\", the paper states that
adding new clauses is likely to create trivial cores, but clauses
are not added during core refinement.*\
In the text, the correction will be at line 176: "The addition of
random clauses during **generation** is very likely to [...]".
Clauses are not added during refinement, the reviewer is correct.
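To make the supervision in point 2 concrete, here is a minimal pure-Python sketch of the binary cross-entropy objective over per-clause core labels; the GNN itself is elided, and `probs` is a hypothetical stand-in for its sigmoid outputs:

```python
import math

def bce_loss(probs, labels):
    """Mean binary cross-entropy over per-clause core predictions.
    probs: predicted probability that each clause is in the UNSAT core
    (stand-in for the GNN's sigmoid outputs).
    labels: 1 if the clause appeared in the detected core, else 0."""
    eps = 1e-12  # guard against log(0)
    return -sum(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps)
                for p, y in zip(probs, labels)) / len(probs)

# Confident correct predictions give a small loss; confident wrong
# predictions are penalized heavily.
assert bce_loss([0.9, 0.1], [1, 0]) < bce_loss([0.1, 0.9], [1, 0])
```

This is only a sketch of the loss term; the actual training procedure is the one in the supplementary code.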
## Questions
1. *Justify core-size and hardness correlation*.\
See Weakness 1.
2. *One instance may have multiple cores. How to handle this one-to-many mapping problem.*\
The goal of the estimator is to identify only the easiest core.
The first core to be output by the detection algorithm is considered the easiest
(finding a core proves the problem UNSAT, solving the problem).
We use the outputs of the core detector as labels for
supervision, and find that the GNN is able to identify the easiest
core based on performance results shown in Table 4 of Appendix B of
the paper. Once that core is de-cored, the other core may now be the
easiest, and it will be detected in the next refinement step.
3. *k-SAT and LEC cases have very strong bias compared to real
instances.*\
We believe the LEC cases we use in the paper must be considered
real instances, as they come from real industry.
Logic Equivalence Checking (LEC) is a critical step in
logic synthesis for circuit design. The LEC data we use in the paper came from the
design pipeline of 29 industrial circuits by a prominent circuit
design company.
4. *The authors should show generalization ability on the SAT
competition data.*\
See weakness 3.
5. *In section 6.2.3 it is mentioned that a subset of training data is
used as evaluation data. This is not right.*\
We do not evaluate with training data at any point in the
experimental method. We have been unable to locate any reference to
this in Section 6.2.3. If you could kindly quote the line (or
paragraph) in which this was conveyed, we would be very happy to
clarify the issue.
6. *Can problems from halfway through core refinement be used? How does
the number of refinement steps affect performance?*\
Yes, although they will likely be easier. Problems usually get harder in
a smooth progression during refinement, as shown in Figure 1 (left).
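The detect-then-de-core loop discussed in the answers above can be sketched in a few lines of pure Python. This is purely illustrative: `predict_core` is a toy stand-in for the GNN (it just flags the shortest clauses), and clauses are DIMACS-style lists of integer literals.

```python
import random

def predict_core(clauses):
    """Toy stand-in for the GNN core predictor: flag the shortest
    clauses as the (currently easiest) core. Illustration only."""
    shortest = min(len(c) for c in clauses)
    return [i for i, c in enumerate(clauses) if len(c) == shortest]

def refine(clauses, n_vars, steps, rng=None):
    """Iterative core refinement: repeatedly detect the easiest core
    and break it by appending one fresh literal to a random core
    clause. No clauses are added, only one literal per step."""
    rng = rng or random.Random(0)
    clauses = [list(c) for c in clauses]
    for _ in range(steps):
        core = predict_core(clauses)              # step (1): detect
        n_vars += 1                               # fresh auxiliary variable
        clauses[rng.choice(core)].append(n_vars)  # step (2): de-core
    return clauses, n_vars

refined, n = refine([[1], [-1, 2], [-2]], n_vars=2, steps=3)
assert n == 5 and sum(map(len, refined)) == 4 + 3  # 3 literals added
```

Once the easiest core is broken, the next iteration's detection naturally targets whichever core has become easiest, which is how the one-to-many mapping is handled.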
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses, which clarify most concerns. I'll raise my score accordingly. However, I still have questions regarding Weaknesses 1 and 3.
1. I understand that the proposed approach adds literals to clauses in the UNSAT core rather than creating additional clauses. However, there seems to be a conflict, as mentioned in Figure 2: “As steps (1) and (2) are repeated, the core gradually becomes larger, raising the hardness of the generated instance.” This sentence should be further clarified to avoid misunderstandings.
2. I noticed the global rebuttal results that demonstrate generalization ability. However, the selected "Tseitin" family likely shares a similar distribution with LEC problems, since LEC data is also converted into CNF using the Tseitin Transformation. Could the authors clarify this assumption? I suggest the authors test the proposed approaches on non-circuit-based families and novel cases in SAT benchmarks, such as cryptography and scheduling.
---
Reply to Comment 1.1.1:
Comment: We express our gratitude to the reviewer for acknowledging our response.
1. *I understand that the proposed approach adds literals to clauses in
the UNSAT core rather than creating additional clauses. However,
there seems to be a conflict, as mentioned in Figure 2: "As
steps (1) and (2) are repeated, the core gradually becomes larger,
raising the hardness of the generated instance." This sentence
should be further clarified to avoid misunderstandings.*\
\
We agree that the sentence needs to be edited and we will revise it.
We will amend the text to clarify that the core we consider at each
step is specifically the easiest core (because traditional core
detection effectively solves the SAT problem). We also intend to
replace references to the "enlarging" of cores with their
"hardening" in order to emphasize our crucial assumption as
discussed previously. Thus, this phrase would read: "As steps (1)
and (2) are repeated, the easiest core of the problem is gradually
refined, raising the hardness of the generated instances."
2. *I noticed the global rebuttal results that demonstrate
generalization ability. However, the selected \"Tseitin\" family
likely shares a similar distribution with LEC problems, since LEC
data is also converted into CNF using the Tseitin Transformation.
Could the authors clarify this assumption? I suggest the authors
test the proposed approaches on non-circuit-based families and novel
cases in SAT benchmarks, such as cryptography and scheduling.*\
While it is true that the LEC and Tseitin data share an element of
origin, we note that the Tseitin data is made up of problems
generated in many different but random ways. For example, the
data-points contributed by Elfers in the 2016 Hand-Crafted track
\[1\] are generated by providing grid-graphs of varying dimensions
to the Tseitin transformation. These grid-graphs are very different
from the circuit-graphs seen during LEC. In consequence we see very
different runtime-distributions. Comparing Figure 2 in the rebuttal
pdf to Figure 6 in Appendix B of the paper and Figure 4 in the text,
we note significant differences. Most notably, Solvers 1 and 6
(which we have generally found to be strong for industrial data and
weak for random data) are much slower for the Tseitin data.
Therefore, we do not believe the Tseitin data shares a similar
distribution to the LEC data as the reviewer has suggested.
Additionally, while other data such as Cryptography and Scheduling
were considered, Tseitin was the family with the most UNSAT problems
available in the benchmarks. Since more data enable us to provide
more complete distributional experimentation as well as more results
on the data augmentation task, we chose the largest dataset: the
Tseitin.
With that said, we have begun experiments on a dataset of 100
scheduling problems taken from the SAT Competition data using the
filter "`family=scheduling and result=unsat`". Of the two families
suggested by the reviewer, scheduling had more problems, and so we
chose it over cryptography. If time permits for completion before
the end of the discussion period we will share results. Otherwise,
we will report the fully processed results in the appendix of the
revised paper.
\[1\] *Proceedings of SAT Competition 2016: Solver and Benchmark
Descriptions*, volume B-2016-1 of Department of Computer Science
Series of Publications B, University of Helsinki 2016. ISBN
978-951-51-2345-9. | Summary: This paper introduces HardCore, a novel method for efficiently generating hard Unsatisfiable (UNSAT) Boolean Satisfiability (SAT) problems, addressing the critical challenge of data scarcity. The approach combines a Graph Neural Network (GNN) for rapid core prediction with an iterative core refinement process, enabling the generation of thousands of hard instances in minutes or hours while preserving problem difficulty. HardCore outperforms existing methods in terms of generation speed and hardness preservation, as demonstrated through experiments on both Logic Equivalence Checking (LEC) data and K-SAT Random data. Despite limitations such as applicability only to UNSAT problems and potential scalability issues with extremely large instances, HardCore represents an advancement in SAT problem generation.
Strengths: * The work provides a meaningful contribution to the SAT community by introducing a novel method that generates SAT instances within a reasonable timeframe while preserving the hardness of the original problems. The work has potential applications in various industrial settings, though the scalability limit of the approach might pose a challenge.
* The authors conduct extensive experiments, comparing their method against multiple baselines and evaluating various aspects such as hardness preservation, generation speed, and similarity to original distributions. The paper compares against relevant and recent baselines (HardSATGEN, W2SAT, G2MILP), providing a comprehensive view of how HardCore performs relative to the state-of-the-art.
* Innovative core refinement process: The iterative core refinement process (Figure 2) is a clever approach to gradually increasing problem hardness while avoiding the creation of trivial cores.
Weaknesses: * The experiments primarily focus on two datasets (LEC Internal and K-SAT Random), with one being proprietary. This somewhat limits the generalizability of the results.
* While the paper focuses on runtime distributions and solver rankings, it lacks a deeper analysis of the structural properties of the generated instances (e.g., clause-to-variable ratios, community structure) compared to the original ones.
* The core refinement process (Section 5.1) involves adding a single literal to break easy cores. The paper doesn't explore more sophisticated strategies or justify why this simple approach is sufficient.
* The paper mentions struggles with "extremely large SAT problems" but doesn't clearly define what constitutes "extremely large" or provide empirical evidence of where the method breaks down.
Technical Quality: 3
Clarity: 3
Questions for Authors: How would one extend your technique to more structured classes of formulas?
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: It would really strengthen the paper if the authors were to study the structural properties (e.g., clause-variable ratio, tree width, hierarchical community structure,...) of the generated instances and identify why they are hard for modern SAT solvers.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We wish to express our gratitude to the reviewer for their
acknowledgement of our extensive experimentation and innovation. The
reviewer's comments have guided us to explore meaningful improvements to
the presented work: complete results on an additional public dataset, a
deeper analysis of scaling considerations and comparison of SAT-problem
statistics. The reviewer has also illuminated multiple promising avenues
for future work.
## Weaknesses
1. *The experiments focus on only 2 datasets, limiting result
generalizability*
Please see the global rebuttal for the
presentation of results on a new dataset in order to address this
issue. We introduce some data from the SAT Competition database and
present results which show that HardCore is able to generate
problems which resemble the SAT Competition (SC) data in hardness.
Results also show that HardCore is able to provide data augmentation
which improves a runtime-prediction model's performance by 4-6%.
2. *Lacks "deeper analysis" of structural properties
(clause-to-variable ratios, community structure)*
We chose not to report such results in the paper because such statistics have neither been claimed nor shown
to have causal connections to hardness or industrial structure, only correlation.
Despite this, we agree with the reviewer that the paper would
benefit by a presentation of the structural properties of the
instances generated by HardCore. In the table below, we compare the
statistics of the generated instances to those of the original
problems, replicating the experiment that was performed in the
HardSATGEN paper. We report the number of variables and clauses,
clustering on the variable-instance graph, and modularity on four
graph-representations. Note that both HardCore and HardSATGEN only
add a single auxiliary variable (to eliminate easy cores), so the
number of variables is a close match. HardCore maintains the number
of clauses by setting the number of generated clauses (an easily-set
parameter of the generation algorithm) accordingly. For other
statistics, HardCore achieves values that are relatively close to
those of the original instances, with a slightly reduced modularity
in the VIG and LIG representations, compared to HardSATGEN. This is
likely because HardSATGEN monitors VIG communities while HardCore
does not consider incidence representations.
| | HardSATGEN | HardCore | original |
|--------------|------------|-----------|----------|
| num. vars | **933.8** | **933.8** | 932.8 |
| num. clauses | 3395.16 | **3400.2** | 3400.2 |
| VIG clust. | 0.38 | **0.39** | 0.39 |
| mod. VIG | **0.74** | 0.56 | 0.74 |
| mod. LIG | **0.74** | 0.63 | 0.75 |
| mod. VCG | **0.81** | 0.78 | 0.81 |
| mod. LCG | **0.64** | **0.64** | 0.67 |
It is our view that hardness distributions --- whether per-solver,
ranking, multi-solver --- are considerably more important than the
measured structural statistics presented in previous works. Other
generators, despite being capable of preserving structural
statistics reasonably well, generate trivial problems. These clearly
differ in critical ways from the seed instances from the perspective
of the solvers. For example, W2SAT reports modularity very similar
to the original problems' modularity and yet fails to generate hard
instances. Since HardCore is directly shown to preserve hardness
behavior well, we did not strive for similar modularity.
3. *De-coring is done by adding a single literal to break cores. More
sophisticated strategies are not explored. Why is this approach
sufficient?*
Exploring more sophisticated de-coring strategies could
be an interesting direction for future work, especially for
application to more structured formula families. We
chose to use the established de-coring paradigm from the HardSATGEN
method, considering it to be interpretable due to its
simplicity. More complicated strategies, which might add or remove
existing variables from the clauses (instead of adding new ones),
could introduce unforeseen conflicts and cores which are easier than the current de-coring target. The
current strategy guarantees the removal of the current core, without
constructing a new one, which is desirable. Since the approach
proved experimentally effective, we did not explore other
strategies.\
4. *The paper mentions struggles with extremely large problems. Define
what constitutes "extremely large" or provide empirical evidence of
where it breaks down*
Please see our discussion of scalability in
the global rebuttal, in which we describe the memory-scaling in
worst-case dense problems and more realistic sparse problems and in
which we discuss worst-case break-down given our hardware.
## Questions
1. *How would one extend the work to more structured classes of
formulas?*
For highly structured problems, the generation mechanism would have
to be designed such that a formula adheres to the required
structure.
Structured problems such as the pigeonhole problem and
graph-coloring are often easy to generate. For example, a graph
coloring problem can be generated by generating a graph in any
random way, and then applying the coloring encoding. A pigeonhole
problem can be generated simply by specifying its parameters (the
numbers of pigeons and holes).
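As a hedged illustration of generating a structured family purely from its parameters, here is a sketch of the classic pigeonhole construction (DIMACS-style integer literals; the variable x(p, h) means pigeon p sits in hole h):

```python
from itertools import combinations

def pigeonhole_cnf(holes):
    """Generate the classic (unsatisfiable) pigeonhole instance with
    holes+1 pigeons and `holes` holes, directly from its parameters."""
    pigeons = holes + 1
    var = lambda p, h: p * holes + h + 1  # 1-based DIMACS variable ids
    clauses = []
    for p in range(pigeons):              # every pigeon gets some hole
        clauses.append([var(p, h) for h in range(holes)])
    for h in range(holes):                # no two pigeons share a hole
        for p, q in combinations(range(pigeons), 2):
            clauses.append([-var(p, h), -var(q, h)])
    return clauses

# 3 pigeons / 2 holes: 3 "some hole" clauses + 2 * C(3,2) exclusions
assert len(pigeonhole_cnf(2)) == 9
```

A family-specific de-coring operation would then need to preserve exactly this clause structure, which is the challenge noted above.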
Core Prediction itself would likely remain unchanged. In fact, if
the data are focused on one structured
class of formulas, we would expect high performance from the
prediction model.
Special care would be required to ensure that the
de-coring operation does not corrupt the required structure of the
problem. By adding a new variable to
de-coring target clauses, we may change the structure of the problem
in such a way that it no longer conforms to the strict
structure of the family. De-coring operations would therefore have
to be specially designed for each family type.
This represents an interesting challenge and is an intriguing
direction for future work. | Summary: This paper addresses the scarcity of practical data (industrial satisfiability problem instances) for training deep learning methods for SAT solving. The existing data augmentation methods suffer from either the limited scalability or the collapsed hardness of the generated instances. Therefore, the authors introduced a fast augmentation method while preserving the original data's hardness. The primary technical contribution is a fast UNSAT core detection procedure using graph neural networks to monitor the hardness of UNSAT formulae during data augmentation. The empirical evaluation confirmed the new method’s fast generation speed and the preserved hardness of the generated instances. In the application of solver runtime prediction, the augmented data led to a 20-50 percent reduction in mean absolute error.
Strengths: The proposed method achieved the best of both worlds. It has a similar speed to a fast generator while preserving a similar hardness of generated instances to a high-quality but slow generator.
The proposed idea is simple and easy to follow, yet effective. The paper is well-written.
The authors performed extensive evaluations to demonstrate the preserved hardness of the generated instances in various aspects, such as the preserved average hardness, the hardness distribution over instances, the runtime distribution per solver, and the best-solver distribution over instances.
The authors illustrated the effectiveness of their method in a practical application (solver runtime prediction). The augmented data successfully reduced the mean absolute error by 20-50 percent.
Weaknesses: The authors don’t present how well the core predictor generalizes to other datasets. We may need to re-train the core predictor when applied to a new family of instances if it doesn’t generalize well, which would introduce extra overhead not presented in the current evaluation.
The technique to eliminate an easy core described in lines 176-191 has been proposed in [1]. The authors should cite the work and move the content to related work.
[1] Yang Li, Xinyan Chen, Wenxuan Guo, Xijun Li, Junhua Huang, Hui-Ling Zhen, Mingxuan Yuan, and Junchi Yan. 2023. HardSATGEN: Understanding the Difficulty of Hard SAT Formula Generation and A Strong Structure-Hardness-Aware Baseline. In Proc. of ACM SIGKDD Conf. Knowledge Discovery and Data Mining
In line 190, the satisfying solution should be (A=0, B=0, C=1).
The axis label is missing in Fig. 5.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How much time did you spend on collecting data and training the core predictor? If the core predictor doesn’t generalize well to a new family of instances, is the re-train time going to be a huge overhead of your method?
2. How many instances are generated from a single seed instance by HardCore and HardSATGEN? If you could use one seed instance to generate multiple instances, why can’t you keep the same number of seed instances for HardCore and HardSATGEN in line 304?
3. Why do you use the extremely small data size (e.g., 10 and 100) in Table 2? It should be considerably insufficient, even after the three-time augmentation, to train a fair runtime predictor.
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The limitations have been well addressed. The authors acknowledged that the method can only be applied to UNSAT instance generation and is not suitable for the SAT case. They also mentioned that the results are limited to the current datasets and their method can’t scale to large SAT problems with millions of clauses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We would like to thank the reviewer for their insightful and
constructive review. In following the reviewer's comments we have
uncovered new, valuable experiments (Circuit-Split LEC) as well as
shortcomings in the description of certain details of the experimental
setting.
## Weaknesses
1. *GNN generalization to other datasets not shown. If we have to
re-train for new data, this constitutes overhead.*\
In order to measure GNN generalization to new data without
re-training, we create a new split of the LEC data. Each problem in
the LEC data can be traced back to one of 29 circuits. By randomly
splitting circuits into training circuits and test circuits (and
then building training and evaluation sets with their respective
problems), we can measure generalizability. Note that we would not
expect the model to generalize to problems derived from a completely
different application domain (although fine-tuning a previously
trained model in a domain adaptation strategy might be interesting to
explore).
In Table 2 we report the GNN performance on this experiment. In the
paper we discussed that core recovery is the priority: if we miss
true core clauses (false negatives) then we may be unable to de-core
the current core (since the necessary clause may go undetected),
whereas if we flag non-core clauses (false positives) then we will
simply de-core a non-core clause. Given enough iterations of
core-refinement, a true-positive clause will eventually be selected
for de-coring (since the clause for de-coring is randomly selected
from among the detected clauses). With this in mind, the threshold
hyper-parameter which is used on the sigmoid outputs at model
readout becomes a useful parameter in cases where classification
performance is weakened: we can boost Core Recovery (recall) by
lowering the threshold. Tuning this threshold is very low-cost:
testing a thousand problems takes 500 seconds on a GPU. We find that
by testing values $[0.1, 0.3, 0.5, 0.7, 0.9]$ --- which takes 25
minutes --- we can tune the threshold to provide similar recall to
the in-distribution model.
| | ↑ Core Recovery Ratio (Recall) | ↓ Core Size Discrepancy (TP-P)/(P+N) | ↑Accuracy |
|-------------------|--------------------------------|--------------------------------------|------------|
| Circuit-Split LEC | 0.97 | 0.05 | 0.65 |
| LEC | 0.960 | 0.009 | 0.940 |
2. *The de-coring method of adding a literal to a clause was put
forward by the HardSATGEN paper. HardSATGEN should be cited and this
should be moved to related work.*\
Thank you for this comment. We will modify the paper to acknowledge
this and cite HardSATGEN accordingly. We did not intend to imply
that this was a contribution of our work.
3. *Correct Satisfying solution at l. 190* Thank you, we will make this
correction.
4. *Fig. 5 axis label missing* Thank you, we will correct this in the
final version.
## Questions
1. *How long does it take to collect training data and train the core
predictor for new family of instances?*\
Data collection requires the execution of iterative core refinement
using a traditional core detection algorithm. This process is indeed
costly. With 200x parallelization, which is easily achievable using
a moderately-sized server, data collection for training the GNN
predictor takes approximately 26 hours. The training itself of the
GNN is relatively cheap computationally, taking only 135 minutes.
While there are thus training costs, we stress that the generation
is very low-cost, which allows us to amortize training cost over
many generations. For example, HardCore can generate 20,000 LEC
instances in 24 hours, once trained. This means that the total time
required to generate those 20,000 instances, including
training-time, is 24+26+2.25 hours, which amounts to 9 seconds per
instance. This is very low compared to HardSATGEN's generation cost
for LEC of 6441 seconds per instance, even without including its
training time-cost.
2. *How many instances are generated per seed?*\
We generate 5 instances per seed using each method. We will add this
information to the experimental setting in the text.
3. *If we can use a seed more than once, why can't we obtain many
HardSATGEN generations?*\
With HardSATGEN, the primary bottleneck is the slow iterative
core-detection step during generation. Given that this must be
performed for each generation (and not only for each seed),
generating 100 instances from only 1 seed (100 instances per seed)
with HardSATGEN has the same cost as generating 100 instances from
100 seeds (1 instance per seed). HardSATGEN's generations were
limited during our experiments by time budgets for generation, not a
scarcity of seeds (we have as many seeds for HardSATGEN as we did
for HardCore and the other baselines).
4. *Why do we use such small data in Table 2 (runtime-prediction
experiment) It should be insufficient to train a fair runtime
predictor?*\
The motivation for our work is that SAT data is often scarce. This
is especially the case for public datasets. Therefore in this
downstream task, our goal was to model a realistic case, where one
is provided with only a small number of examples in the dataset for
training the predictor.
---
Rebuttal Comment 1.1:
Comment: Thank you for your response! | Rebuttal 1:
Rebuttal: We thank the reviewers for the insightful and thoughtful reviews. All
reviewers have stated that the proposed method is novel and stands as a
meaningful and innovative contribution to the field. Several reviewers
agreed that the experimentation was extensive and demonstrative of
improvements over existing work. The primary concerns expressed in the
reviews were (1) scalability and cost overhead; (2) result
generalizability to other datasets; and (3) certain details which
required clarification. We have addressed these concerns in the response
by (1) providing a more detailed cost analysis for the pre-processing of
data; (2) presenting new results on a public dataset; and (3) providing
additional clarifications, details and analysis where required.
\(1\) Scalability: Memory limitations are the primary challenge for our
proposed method, due to the need to store the graph. For our
experimental hardware (32GB GPU) and our implementation of the graph
building/storage ($O(nm)$ for a problem with $n$ clauses and $m$
variables), a problem with 256,000 variables and 1,000,000 clauses would
require $256,000 \times 1,000,000 \times 1 = 256 \times 10^9$ bits, or
32 gigabytes. Given a GPU with 32GB of memory, this would be a breaking
point for the method. Of course this is the worst case, which only
occurs for a completely dense graph representation. In practice, clauses
in the LEC data, for example, tend to have on average 3 or 4 variables.
Since in the LCG clause nodes are only connected to the variables of
which they consist, each clause node would then only have degree of 3 or
4, meaning the graph is very sparse. Thus, in practice the primary memory cost of our model scales closer to $O(dn)$, where $d$ is the average number of variables per clause, assuming the implementation is adapted to leverage the sparse structure (using edge lists instead of a dense adjacency matrix, for example).
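The dense-versus-sparse memory estimates above can be sketched numerically (hypothetical helper names; this is an illustration of the arithmetic, not the authors' implementation):

```python
def dense_bits(n_clauses: int, n_vars: int) -> int:
    # Dense clause-variable biadjacency matrix: one bit per (clause, variable) pair.
    return n_clauses * n_vars

def sparse_edge_count(n_clauses: int, avg_clause_size: int) -> int:
    # Edge-list representation: one edge per literal occurrence, i.e. O(d*n).
    return n_clauses * avg_clause_size

# Worst case from above: 1,000,000 clauses x 256,000 variables.
gb_dense = dense_bits(1_000_000, 256_000) / 8 / 1e9   # 32.0 GB
edges = sparse_edge_count(1_000_000, 4)               # 4,000,000 edges
```

With 4,000,000 edges at, say, two 4-byte integers each, the sparse representation needs on the order of 32 MB rather than 32 GB.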
In cases where the problem is large and dense, another option might be the use of more specialized GNN methods designed to handle very large graphs, for example by sampling from neighborhoods or loading portions of the graph from storage.
For time-cost scaling, the primary point in the pipeline in which we
suffer scaling challenges is during pre-processing. We build graphs from
the text-file representation of the problems. The time-cost of this step
is linear with the problem size, measured in terms of the number of
clauses. Given this, time-cost is not a major factor in the scaling
issue.
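The linear-time pre-processing pass can be sketched as follows (a minimal illustration assuming the standard DIMACS CNF text format with one clause per line; not the authors' actual pre-processing code):

```python
def parse_dimacs(text: str):
    """Build a clause->variable edge list from DIMACS CNF in one linear pass."""
    edges = []
    clause_id = 0
    for line in text.splitlines():
        line = line.strip()
        if not line or line[0] in "cp":   # skip comments and the problem line
            continue
        lits = [int(tok) for tok in line.split()]
        assert lits[-1] == 0              # DIMACS clauses are 0-terminated
        for lit in lits[:-1]:
            edges.append((clause_id, abs(lit)))
        clause_id += 1
    return edges

cnf = "p cnf 3 2\n1 -2 0\n2 3 0\n"
print(parse_dimacs(cnf))  # [(0, 1), (0, 2), (1, 2), (1, 3)]
```

Each clause contributes only as many edges as it has literals, so both time and memory of this step grow linearly with the problem size.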
We will add text to clarify the scalability considerations outlined
above and specify the precise nature of an "extremely large" problem
that would challenge our proposed generative model.
(2) Public Dataset:
SAT Competition data is an aggregation of thousands of families of
problems and is therefore highly heterogeneous. As discussed briefly in
the introduction, this heterogeneity is highly unfavorable for machine
learning algorithms and tasks. Thus, machine-learning papers are often
required to find creative ways to provide large-scale experimental data
(and this may not conform to real-world data). In [1], for example, data is randomly generated.
In order to demonstrate that our method can generalize to other
datasets, we now provide new results on all UNSAT problems from the SAT
Competition data coming from the "Tseitin" family. The data was compiled
using the GBD database, querying for "family=Tseitin and result=unsat".
There are 135 "Tseitin"-family problems in the database.
We repeat the runtime distribution experiments on this data, generating
5 instances for each of the 135 seed instances. We see very similar
stacked histograms for the solver rankings (Fig. 2 in the attached
response pdf). Fig. 2 (right) shows per-solver boxplots comparing seed
instances and generated instances. Overall these are similar, although
solvers 1 and 6 show greater upper-range in the original data compared
to HardCore. However, the median solve times for solvers 1 and 6 are
still close.
In the table below we show data augmentation experimental results. Since
there are only 135 problems, and our focus is on data-scarce settings,
we use small datasets to train the SATZilla-based predictor. In the
table, we see that for all datasets aside from the smallest one, using
HardCore to generate an augmented dataset leads to a 4%-6% reduction in
MAE compared to training using the original data.
[1] D. Selsam et al., "Learning a SAT solver from single-bit supervision," in ICLR, 2019.
| Training Data | 10 | 20 | 30 | 40 | 50 |
|-------------------------|-----------------|---------------|-----------------|-----------------|-----------------|
| HardCore-Augmented | 3618.9 | **3410.0** | **3311.4** | **3417.7** | **3419.4** |
| Original (Un-Augmented) | **3369.6**| 3581.9 | 3576.1 | 3544.7 | 3608.5 |
Data Augmentation experiment: MAE of Runtime Prediction averaged
across 7 solvers and 15 trials. We train a runtime prediction model
according to the experimental setting in 6.4.2 of the paper. Columns
in the table indicate the number of original problems used in the
training set (we generate 4 augmented instances per original problem in the training set). Results in the "HardCore-Augmented" row are MAE for a runtime prediction model trained on HardCore-augmented data, whereas "Original (Un-Augmented)" indicates un-augmented performance.
Pdf: /pdf/12d988f76a4148e798a4a2aae1307e53671be63d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
A Layer-Wise Natural Gradient Optimizer for Training Deep Neural Networks | Accept (poster) | Summary: The proposed approach uses the block-diagonal form of the Fisher Information Matrix (FIM), which is achieved by computing the FIM for every layer in the network sequentially. In addition, the space complexity is limited by considering the diagonal form of this FIM for each layer and its respective gradient outer products.
Strengths: 1. The authors find a clever way of approximating the Fisher information matrix by only considering the block-diagonal form of the FIM. In addition, due to i.i.d. assumptions, the block diagonal reduces to a diagonal matrix for
$\Psi_l \otimes \Phi_l$
2. The adaptive learning rate for each layer further makes it an interesting way of optimizing the weights.
Weaknesses: The reviewer has actually found it difficult to find any weaknesses with the paper. It is really well written, with sufficient proofs and sufficient results to support their claims.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is $\mathbf{s}_l$ in line 136?
2. This reviewer would like to know what's the difference between the proposed approach and https://ojs.aaai.org/index.php/AAAI/article/view/16867 ?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This is not really a limitation - Given that this paper improves upon fundamental results significantly, an open source implementation of this approach would be really good to have.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: What is $s_l$ in line 136?
A1: Sorry for the unclear description; $s_l = W_l a_{l-1}$. We will correct it in the final version.
Q2: This reviewer would like to know what's the difference between the proposed approach and https://ojs.aaai.org/index.php/AAAI/article/view/16867?
A2: Thanks for your question. THOR, which is proposed in https://ojs.aaai.org/index.php/AAAI/article/view/16867, mainly considers reducing the computational cost of natural gradient descent from two aspects. On the one hand, THOR gradually increases the updating frequency of the inverse of the Fisher information matrix (FIM) and proposes a trace-based updating rule for the FIM of each layer. On the other hand, THOR further approximates the KFAC-approximated FIM by smaller matrices obtained by splitting matrix dimensions. Our proposed LNGD first gives a layer-wise sampling method to more efficiently compute each block matrix corresponding to each layer and proposes a novel approximation scheme for the FIM. Furthermore, LNGD also adopts an adaptive layer-wise learning rate to speed up training. The contributions and ideas of our proposed LNGD are different from THOR.
---
Rebuttal 2:
Title: Thank you for feedback
Comment: This reviewer thanks the authors for their feedback.
This reviewer upholds his/her decision on the paper. There is solid contribution
However, here are some minor changes the reviewer would suggest
1. Make the clear distinction between THOR and the proposed approach. At a first glance, the methods seem to have a lot of overlap, albeit the contributions might be different. It would be interesting to see a comparative performance of THOR against LNGD as well (if the code is reproducible and available to use).
2. Reiterating on reproducibility - It would be interesting to release a reproducible version of this code to the community.
3. Absolute difference between the exact FIM and the proposed approach FIM $\textbf{along the diagonal}$ - It would be interesting to see how different the diagonal elements of the FIM are from the exact version. It would be great if that information is provided in the final version of the paper as well.
---
Rebuttal Comment 2.1:
Comment: Thank you for your new valuable comments and suggestions.
We will add explanations in the final version to make a clear distinction between THOR and LNGD. We will also try to perform experiments to compare their performance.
As for the code, we are currently undergoing the approval process for the open-source workflow in compliance with the company's regulations, and it will be open-sourced shortly.
We will add figures and explanations in the final version to illustrate the difference between the exact FIM and the approximated FIM in LNGD along the diagonal. | Summary: The authors propose a novel way to approximate the Fisher information matrix. They do this by starting with the block diagonal approximation of the Fisher information matrix which they compute using a novel layer-wise sampling method without performing a complete backpropagation. Then they approximate each block as a Kronecker product of two smaller matrices, where one is diagonal. They also keep the traces equal before and after the approximation. They call this new method the Layer-wise Natural Gradient Descent (LNGD). They further propose a new adaptive layer-wise learning rate that accelerates training. The authors also provide global convergence analysis of LNGD. At the end, they show extensive experiments on image classification and machine translation tasks, demonstrating their method is competitive.
Strengths: - The idea of factoring the Fisher information matrix as in Section 3.1 seems novel and interesting.
- The authors provide a convergence result in Theorem 2.
- The authors have a great experimental section on complicated datasets. Their method outperforms all compared methods in both epoch and wallclock time.
Weaknesses: - The biggest and most glaring weakness is that the paper doesn't compare with recent algorithms, some of which they even list in their own related works sections. The most recent algorithm that was compared is KFAC, which was published in 2015 -- almost a decade ago. If the authors provide comparisons to more recent algorithms (EKFAC, TKFAC, relevant extensions to convolutional neural networks, and NLP-related architectures, etc.), that might push it over to acceptance.
- There is the main idea (Section 3.1) of factoring the Fisher information matrix which seems novel. But then there are various techniques (Section 3.2 and 3.3) that are employed to make the method more effective. But it's hard to disentangle whether the main contribution to the numerical performance is due to the novel factoring that allows using the natural gradient, or the techniques such as adaptive learning rates based on the Hessian. As asked in the Questions section of this review, can you apply the adaptive learning rate techniques to KFAC? How does it compare?
Technical Quality: 4
Clarity: 4
Questions for Authors: - I like Figure 1. But it would be interesting to also compare this with KFAC.
- In Section 3.2, doesn't having adaptive learning rates contradict the spirit of the natural gradient?
- In the equation between line 172-173, isn't this computing the Taylor expansion, and thus also the Hessian? But I thought with the natural gradient and tricks to compute it, there should be no need to compute the Hessian?
- Can you also apply your adaptive learning rate strategy to KFAC?
Confidence: 3
Soundness: 4
Presentation: 4
Contribution: 2
Limitations: The authors provide limitations for each theory.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: The first part of weaknesses.
A1: We are sincerely thankful for the valuable suggestions. Our primary objective is to propose an optimizer that achieves an optimal balance between the speed and accuracy of model training. Expanding upon established methodologies in the field of NGD, we propose a layer-wise sampling approach for efficiently computing the block matrix corresponding to each layer, as well as an adaptive layer-wise learning rate to improve training efficiency. In Appendix B of our manuscript, we provide comparisons of LNGD and KFAC as well as its recent variants (EKFAC and TKFAC). Additionally, in order to further substantiate the effectiveness of LNGD, we conducted additional experiments on CIFAR-10 and ImageNet using recent NGD methods, consistently demonstrating LNGD's superior performance in achieving comparable accuracy levels. Detailed experimental findings are available in our response to all reviewers, and we intend to include these pertinent results in the final version of our paper.
Q2: The second part of weaknesses.
A2: We appreciate the insightful suggestion. In order to provide a deeper understanding of the individual contributions of different components within the LNGD, a set of ablation studies have been conducted and are detailed in Appendix F.3 of our manuscript. These experiments are aimed at isolating the impact of adaptive learning rate and sampling optimization on the performance of the LNGD. Due to limitations in the rebuttal space, we will not elaborate further on this matter here and appreciate your understanding.
Q3: I like Figure 1. But it would be interesting to also compare this with KFAC.
A3: We have added the visualization results of KFAC in Figure 2, in which we only show partially enlarged figures due to page limitation. From Figure (b) and (e), we can see that KFAC and LNGD can all emphasize the importance of the diagonal elements in the exact Fisher information matrix (FIM). In addition, it can also be seen clearly that KFAC still retains some elements near the main diagonal, while LNGD does not, which also reflects that LNGD provides an efficient approximation of FIM with less computational cost in comparison with KFAC.
Q4: In Section 3.2, doesn't having adaptive learning rates contradict the spirit of the natural gradient?
A4: Apologies for the lack of clarity in our presentation. We extend the current natural gradient methods by introducing a layer-wise sampling technique and an adaptive layer-wise learning rate to speed up the training process. In Section 3.2, we first discuss the necessity of an adaptive layer-wise learning rate, and then derive the formula for calculating it using a second-order Taylor expansion approach (i.e., Eq. (11)). Adhering to the NG principle, based on Eq. (11), we necessitate the computation of the Hessian matrix. However, across various loss functions such as cross-entropy or mean squared loss, the Fisher information matrix and the Hessian matrix are, in fact, equivalent. Hence, we proceed to compute the Fisher information matrix and utilize Eq. (12) to obtain the adaptive learning rate in LNGD.
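As a generic illustration of how a layer-wise step size can fall out of a second-order Taylor model (a sketch only; the paper's Eq. (11)-(12) are not reproduced in this thread, so the exact formula is assumed to follow the standard quadratic-model derivation): minimizing $h(\theta + \eta d) \approx h(\theta) + \eta\, g^\top d + \tfrac{\eta^2}{2}\, d^\top F d$ over $\eta$ gives $\eta^* = -g^\top d / (d^\top F d)$, with the Hessian replaced by the FIM $F$.

```python
import numpy as np

def quadratic_model_step(g, d, F):
    """Step size minimizing h + eta * g.d + 0.5 * eta**2 * d.F.d over eta,
    with the Hessian replaced by the Fisher matrix F (the two coincide for
    cross-entropy and mean squared losses, as noted in the rebuttal)."""
    return -float(g @ d) / float(d @ F @ d)

# Toy quadratic h(theta) = 0.5 * theta.F.theta, where the FIM equals the Hessian.
F = np.diag([4.0, 1.0])
g = np.array([2.0, -1.0])          # gradient at the current point
d = -np.linalg.solve(F, g)         # natural-gradient direction -F^{-1} g
eta = quadratic_model_step(g, d, F)
print(eta)  # 1.0 -- on an exact quadratic the natural-gradient step needs no scaling
```

On a genuinely non-quadratic loss the per-layer curvature terms $d^\top F_l d$ differ across layers, which is what motivates a layer-wise rather than a global learning rate.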
Q5: Can you also apply your adaptive learning rate strategy to KFAC?
A5: Thanks for the valuable question. Our proposed adaptive learning rate strategy can also be applied to KFAC. As given in Appendix B, the main difference between KFAC and LNGD is the approximations of the block matrix $F_l$. KFAC approximates $F_l$ as the Kronecker product of $A=E[a_{l-1}a_{l-1}^\top]$ and $ B=E[g_lg_l^\top]$. Our proposed LNGD
approximates $F_l$ as the Kronecker product of a matrix ${\bf\Phi}_l$ and a diagonal matrix ${\bf\Psi}_l$, which is computed by sampling from each layer; details are given in Section 3. The adaptive layer-wise learning rate is computed by Eq. (12), which would be the same for KFAC and LNGD. Hence, it is feasible to integrate the adaptive layer-wise learning rate into KFAC with minimal costs.
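To make the comparison concrete, here is a small numerical sketch (shapes and factor values are illustrative, not taken from the paper) of why a Kronecker-factored $F_l$ is cheap to invert: for symmetric factors, $(A \otimes B)^{-1}\mathrm{vec}(G) = \mathrm{vec}(B^{-1} G A^{-1})$, and constraining one factor to be diagonal, as in LNGD, turns one of the two solves into an elementwise divide.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out = 5, 3
G = rng.standard_normal((d_out, d_in))            # per-layer gradient matrix

# KFAC-style symmetric positive-definite factors A ~ E[a a^T], B ~ E[g g^T].
A = np.eye(d_in) + 0.1 * np.ones((d_in, d_in))
B = np.eye(d_out) + 0.1 * np.ones((d_out, d_out))

# Precondition via the two small factors instead of the big Kronecker product.
step_kfac = np.linalg.solve(B, G) @ np.linalg.inv(A)

# LNGD-style: one factor is diagonal, so its inverse is an elementwise divide.
psi = np.full(d_out, 2.0)                          # diagonal factor Psi_l
step_lngd = (G / psi[:, None]) @ np.linalg.inv(A)

# Sanity check of the Kronecker identity (column-major vec convention).
full = np.linalg.solve(np.kron(A, B), G.flatten(order="F"))
assert np.allclose(full, step_kfac.flatten(order="F"))
```

The identity means neither method ever forms the full $F_l$; the diagonal factor additionally removes one of the two matrix inversions per layer.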
---
Rebuttal Comment 1.1:
Comment: Thank you for your clarifications. I had a few more questions.
Reply to Q.1 reply: For the first part of the weakness, yes I already saw Appendix B, but I was looking for performance comparisons. I see now that the authors have done this in the reply to all reviewers, thank you very much, I really appreciate it. But now can the authors expand on the experimental procedure with the new methods? For example, did the authors take the code from public code repositories and run it on the datasets? What hyperparameters were used? Also, apologies if I'm ignorant, but I just realized the oddity of using 91% and 75.9% as accuracy benchmarks. Why were these specific numbers chosen?
Reply to Q.2 reply: For the second part of the weakness in my original review, I did see Section F.3 (which wasn't referenced in the main text, but perhaps should be referenced around line 260), but perhaps I wasn't being clear, my apologies. I was hoping to see the adaptive learning rates and sampling technique also applied to KFAC if applicable, and in light of the new experiments but me being sympathetic to the large amount of work involved, perhaps just to NG+. This is so we can examine if it's the novel factoring that shows impressive performance (which seems to me the main contribution of the paper), or the adaptive learning rate/sampling, or the new damping technique. I even see throughout the paper (Line 175-178 with adaptive learning rates, and Line 225-228 with damping) that adaptive learning rates and damping techniques are required to achieve optimal performance, so I would have hoped these advantages would be conferred onto KFAC as well, and now NG+ if they were also conferred to the proposed method, LNGD. I do realize this is a lot of work in a limited time, but I also believe this would make for a fairer comparison.
Reply to Q.3 reply: From my eyes, and perhaps I'm looking at it incorrectly, but it seems KFAC is more accurate than LNGD? But are the authors arguing that LNGD achieves a "close-enough" approximation much faster?
Reply to Q.4 reply: Thank you for the answer, it was enlightening. Apologies, but just to reclarify for my own ignorance: so to compute the adaptive learning rate, the Hessian matrix must be computed?
Reply to Q.5 reply: See my answers to "Reply to Q.1" and "Reply to Q.2" above.
---
Reply to Comment 1.1.1:
Comment: Thank you for your new valuable comments and questions.
Q 1. The new methods and experiments.
A 1. Thank you for your question. We are sorry for the lack of detailed descriptions of the new methods' settings in the reply. The code for EKFAC and NG+ was obtained from GitHub (links will be added in the final version since they are not permitted in our comment), and the code for TKFAC was obtained from its authors. The hyperparameters, such as the learning rate, the curvature matrix update strategy, the damping parameter, etc., are set according to the experimental settings in their papers and code. We will add the detailed descriptions in the final version. Regarding the rationale for selecting accuracy benchmarks of 91% and 75.9%, we predominantly adhere to the settings established in the literature on natural gradient descent, specifically referencing several variants of KFAC.
Q 2. KFAC with the adaptive learning rates and sampling technique.
A 2. Thank you for your insightful question. We would like to take this opportunity to clarify the main contributions of our work. In this paper, we introduce a novel optimizer, LNGD, which builds upon existing natural gradient methodologies. Central to our approach are the novel layer-wise sampling strategy and the adaptive layer-wise learning rate strategy, which are essential components for executing the natural gradient optimization process within LNGD. As mentioned in our previous response, layer-wise sampling and adaptive layer-wise learning rates can also be integrated with KFAC and its variants, including EKFAC and TKFAC. However, once these techniques are incorporated into KFAC, EKFAC, and TKFAC, the resulting methods may be regarded as new variants of these methods rather than the original ones since some new techniques are incorporated.
Certainly. To illustrate the effects of incorporating sampling strategy and adaptive learning rate into KFAC, we conducted experiments on the CIFAR-10 dataset. We denote L-KFAC as a model variant where the sampling strategy, the adaptive learning rate and the new damping have been integrated into KFAC. On the CIFAR-10 dataset, when the top-1 testing accuracy reached 91%, both L-KFAC and LNGD utilized 36 epochs; however, L-KFAC incurred a time cost of 5.62 seconds per epoch, in contrast to LNGD, which took only 5.08 seconds. This highlights the effectiveness of our new factorization scheme. We will explore the application of the layer-wise sampling and learning rate adjustments to other natural gradient descent methods and hope to incorporate pertinent content in the final version.
Q 3. KFAC is more accurate than LNGD.
A 3. Regarding the Fisher information matrix (FIM), KFAC can indeed provide a more accurate approximation compared to LNGD. However, the purpose of proposing the LNGD optimizer is to achieve an optimal balance between the training speed and accuracy of one model, and to make every effort to reduce computational cost while preserving the main information of the FIM as much as possible. Therefore, the approximation accuracy of LNGD to FIM may not be the highest, but LNGD achieves a "close-enough" approximation much faster.
Q 4. Compute the Hessian matrix.
A 4. In the context of LNGD, computation of the Hessian matrix is not required. When calculating the adaptive learning rate within LNGD, we approximate the Hessian matrix using the Fisher Information Matrix (FIM). This is justified by the equivalence of the FIM and the Hessian matrix under various loss functions, such as cross-entropy and mean squared loss. Accordingly, we compute the FIM and employ Eq. (12) to derive the adaptive learning rate for LNGD. However, if our proposed adaptive layer-wise learning rate is applied to other methods without a FIM, the Hessian matrix or its approximation needs to be computed.
---
Rebuttal 2:
Comment: Thank you for your clarifying response, it is much appreciated. I just had a few more questions:
- For the 91% and 75.9% benchmarks, can you provide references to the literature where these numbers are utilized? I'm so sorry, but I'm having trouble finding it.
- I'm still a bit uncomfortable with the "bundling" of many methods, and not providing similar "boosts" to the other methods in the literature. It does seem like multiple new methods are being introduced in the paper. Thus, I think it would be more scientifically enlightening to have a study where these new methods also apply to previous methods in the literature, where applicable. This is so the contribution of each part can be examined, and thus the causes of the performance increases can be made more clear.
Thank you again, and I do find the research direction and the method quite interesting. And please correct me if I missed something.
---
Rebuttal 3:
Comment: Q 1. About the reference.
The related work is presented as follows:
[15] Kaixin Gao, Xiaolei Liu, Zhenghai Huang, Min Wang, Zidong Wang, Dachuan Xu, and Fan Yu. A trace-restricted Kronecker-factored approximation to natural gradient. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7519-7527, 2021.
[16] Kazuki Osawa, Yohei Tsuji, Yuichiro Ueno, Akira Naruse, Chuan-Sheng Foo, and Rio Yokota. Scalable and practical natural gradient for large-scale deep learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 44(1):404-415, 2020.
[17] Minghan Yang, Dong Xu, Qiwen Cui, Zaiwen Wen, and Pengxiang Xu. An efficient Fisher matrix approximation method for large-scale neural network optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):5391-5403, 2022.
[22] Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, and Pascal Vincent. Fast approximate natural gradient descent in a Kronecker factored eigenbasis. In Advances in Neural Information Processing Systems, pages 9550-9560, 2018.
[24] Mengyun Chen, Kaixin Gao, Xiaolei Liu, Zidong Wang, Ningxi Ni, Qian Zhang, Lei Chen, Chao Ding, Zhenghai Huang, Min Wang, et al. THOR, trace-based hardware-driven layer-oriented natural gradient descent computation. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7046-7054, 2021.
Q 2. About the integration of the newly proposed strategies into existing NGD methods.
Thank you for your response. As previously mentioned, layer-wise sampling and adaptive layer-wise learning rates can indeed be integrated with KFAC and its variants, such as EKFAC and TKFAC. However, it is important to note that the incorporation of these techniques into KFAC, EKFAC, and TKFAC results in new variants of these methodologies. Consequently, it would not be appropriate to conduct comparative experiments using these resulting new variants against LNGD. Actually, to facilitate a comprehensive understanding of the distinct contributions of various components within the LNGD framework, we have conducted a series of ablation studies, detailed in Appendix F.3 of our manuscript. These experiments aim to isolate the effects of adaptive learning rates and sampling optimization on LNGD performance.
Furthermore, as you suggested, for a more thorough analysis of the impact of layer-wise sampling and adaptive layer-wise learning rates as independent components on the performance enhancement of existing methods such as EKFAC and TKFAC, it would be beneficial to incorporate these techniques into EKFAC and TKFAC and compare their performance against the baseline.
---
Rebuttal 4:
Comment: Dear reviewer:
We would like to express our gratitude for all the feedback provided. Additionally, we hope that the additional responses above address your concerns and contribute to an enhancement in your overall rating of this manuscript. | Summary: The authors propose a computationally feasible second-order method for training neural nets, layer-wise natural GD, which includes an Adaptive Layer-Wise Learning Rate scheme. The method eliminates the backprop pass by using a layer-local sampling approach to approximate the Fisher information matrix, thus providing a novel approach to *local learning*. The authors provide a coherent and reasonably thorough theoretical analysis and motivation for their method, which is backed up by a *somewhat modest amount of experimental results*.
Strengths: The paper is generally well-written with only few grammatical errors/typos. The authors did a good job of presenting the math and theoretical foundation for the method (including proofs in the appendix). Clearly, a good amount of work was done. While the authors frame their work as a computationally feasible second order method, I find that it is indeed an interesting addition to the lit on local learning.
Weaknesses: While the authors claim (in the abstract) to provide ``extensive experiments'', I find the work lacking wrt. empirical validation. Only three results are provided, and those are for old architectures (far from SOTA), and without error bars. The proposed method could easily have been tested on a much wider variety of (even smaller) models and benchmarks --- providing the reader with a much more complete picture of how the method behaves in practice. This leaves a good amount of doubt wrt. the robustness of the approach.
Given the known issues with other local learning methods, I suspect that this method too might have a strong tendency to overfit. Three experiments are not sufficient to convince me otherwise.
Speaking of local learning, the authors did not touch on the topic at all. Thus, several relevant references were missed, such as:
* Nøkland, A. (2016). Direct Feedback Alignment Provides Learning in Deep Neural Networks. ArXiv Preprint ArXiv:1609.01596. http://arxiv.org/abs/1609.01596
* Nøkland, A., & Eidnes, L. H. (2019). Training Neural Networks with Local Error Signals. Proceedings of the 36th International Conference on Machine Learning, 4839–4850. https://proceedings.mlr.press/v97/nokland19a.html
* Belilovsky, E., Eickenberg, M., & Oyallon, E. (2019). Greedy Layerwise Learning Can Scale To ImageNet. Proceedings of the 36th International Conference on Machine Learning, 583–593. https://proceedings.mlr.press/v97/belilovsky19a.html
* Belilovsky, E., Eickenberg, M., & Oyallon, E. (2020). Decoupled Greedy Learning of CNNs. Proceedings of the 37th International Conference on Machine Learning, 736–745. https://proceedings.mlr.press/v119/belilovsky20a.html
* Ren, M., Kornblith, S., Liao, R., & Hinton, G. (2022). Scaling Forward Gradient With Local Losses. https://arxiv.org/abs/2210.03310v3
* Xiong, Y., Ren, M., & Urtasun, R. (2020). LoCo: Local Contrastive Representation Learning. Advances in Neural Information Processing Systems, 33, 11142–11153.
* Wang, Y., Ni, Z., Song, S., Yang, L., & Huang, G. (2021). Revisiting Locally Supervised Learning: an Alternative to End-to-end Training. ICLR 2021 - 9th International Conference on Learning Representations. https://arxiv.org/abs/2101.10832v1
Moreover, I really think that the authors should have included their code in the submission. Their method substantially differs from the standard SGD variants, and the code would help a lot wrt. reproducibility.
Technical Quality: 2
Clarity: 3
Questions for Authors: * Have you considered how your method might affect the well-known implicit regularization of GD training? Specifically, the implicit bias of depth? It seems to me that skipping the backward pass might significantly affect this.
* Please explicitly define the vector, **v**, of Thm 2 (near line 255). It is only defined in the appendix.
* In Algo 1 you use both T_FIM and T_Fisherinformationmatrix. Please be consistent.
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: * What is your justification for calling your three experiments ``extensive''? Am I being too grumpy here? ;-)
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Weaknesses.
A1: We would like to express our gratitude for the insightful suggestions provided. Our primary aim is to establish an optimal equilibrium between model training speed and accuracy. Building upon the existing approaches in NGD, we propose a layer-wise sampling methodology to efficiently compute the block matrix corresponding to each layer, as well as an adaptive layer-wise learning rate to enhance training efficiency. In Appendix B of our manuscript, we offer comparisons of LNGD and KFAC alongside its recent variants (EKFAC and TKFAC). To further validate the effectiveness of LNGD, we conducted additional comparative experiments on CIFAR-10 and ImageNet using recent NGD methods, consistently confirming LNGD's superior performance in achieving the same level of accuracy. Detailed experimental results are available in the response to all reviewers. Throughout the experimental process, extensive and multiple rounds of hyperparameter searches were conducted, and the best results were chosen for comparison; therefore, no error bars were included. As for the code, we are currently undergoing the approval process for the open-source workflow in compliance with the company's regulations, and it will be open-sourced shortly.
Q2: Have you considered how your method might affect the well-known implicit regularization of GD training? Specifically, the implicit bias of depth? It seems to me that skipping the backward pass might significantly affect this.
A2: Thanks for your question. In LNGD, the key in parameter updating strategy is to calculate the updating direction ${\bf d}^k_l=({\bf F}^k_l)^{-1}\nabla_{\theta}h(\theta^k_l)$. To save computational costs, we propose the layer-wise sample approximation approach to compute ${\bf F}^k_l$ without having to perform a complete back-propagation. However, the calculation of the first-order gradient $\nabla_{\theta}h(\theta^k_l)$ still requires performing a complete back-propagation. This may not have a significant impact on the implicit bias of depth. Since we have not focused on the implicit regularization in this paper, we may not provide a complete and detailed explanation. These contents will be further considered in our following work. Thank you again for providing a valuable direction for our further study.
Q3: Please explicitly define the vector, v, of Thm 2 (near line 255). It is only defined in the appendix.
In Algo 1 you use both T_FIM and T_Fisherinformationmatrix. Please be consistent.
A3: Thanks for your questions. We will add the definition of $\bf{v}$ before Theorem 2 and correct $T_{Fisherinformationmatrix}$ as $T_{FIM}$ in the revised version.
---
Rebuttal Comment 1.1:
Comment: Dear Authors,
Since Reviewer k9Mw has asked, could you please explain why the codes were not included in the submission? Also why weren't the error bars created in the plots?
Thank you,
AC
---
Reply to Comment 1.1.1:
Comment: Regarding the open-sourcing of the code, as we are affiliated with an internet company, we initiated the process for code release upon receiving the first round of review comments. Given the multiple stages of the internal workflow, we are currently at the final stage of this process. We assure the reviewers and the area chair that the code will be made publicly available immediately upon completion of this final step.
In regard to the error bars, we conducted multiple rounds of hyperparameter searches throughout the experimental process and selected the best results for final comparison; consequently, no error bars were included. If quite necessary, we can incorporate error bars in the final version.
---
Rebuttal Comment 1.2:
Comment: Dear Authors,
Thanks for your careful response.
I still *highly recommend* sharing your code. Also, please do not say "extensive experiments" in your paper; you are still far from that in my opinion ;-)
I maintain my original rating of "Accept".
---
Reply to Comment 1.2.1:
Comment: Thank you for your valuable suggestion. Regarding the open-sourcing of the code, as we are affiliated with an internet company, we initiated the process for code release upon receiving the first round of review comments. Given the multiple stages of the internal workflow, we are currently at the final stage of this process. We assure the reviewers and the area chair that the code will be made publicly available immediately upon completion of this final step.
In regard to ambiguous descriptions such as "extensive experiments," we will revise and refine these aspects in the final version. We sincerely apologize for any confusion caused.
---
Rebuttal 2:
Comment: Dear Reviewer k9Mw,
Could you please respond with how you think about the authors' response? Please at least indicate that you have read their responses.
Thank you,
Area chair | Summary: The paper introduces a new optimization algorithm called LNGD (Layer-wise Natural Gradient Descent). This optimizer aims to enhance the training efficiency of deep neural networks by approximating the Fisher information matrix in a computationally efficient manner and introducing adaptive layer-wise learning rates.
Strengths: - **Research Significance**: The paper addresses a highly valuable problem in the field of deep learning training: improving the efficiency and effectiveness of training deep neural networks with second-order methods.
- **Practical Utility**: The proposed LNGD method shows practical promise. By reducing computational costs and introducing adaptive learning rates, the method can potentially be applied to many real-world training scenarios in place of KFAC.
Weaknesses: 1. **Assumption of Gaussian Distribution**: The assumption that $P_{a_l \mid a_{l-1}}$ follows a standard Gaussian distribution lacks strong motivation and empirical evidence. Even for a single layer in a random linear neural network, this assumption is not convincingly justified. Further explanation or practical case demonstrations where $P_{a_l \mid a_{l-1}}$ follows or approximates a Gaussian distribution are necessary.
2. **Convergence Analysis**: The convergence analysis results are unusual. Typically, analyses of new optimization algorithms state the relationship between convergence speed and the number of iteration steps under certain smoothness assumptions. The presented result, which resembles an NTK-style analysis, is unconventional and may not highlight the superiority of NGD, since different optimization algorithms tend to have similar rates with a large number of hidden units.
3. **Complexity of Adaptive Learning Rate**: The method for calculating the adaptive learning rate is overly complicated. It needs to be shown whether the proposed Adaptive Layer-Wise Learning Rate has practical advantages over optimizers like Adam or LAMB. Ablation studies comparing this approach with existing LR schedules (e.g., linear warmup + cosine decay) should be conducted.
4. **Computational Overhead**: The computation of sample-wise metrics in Eq. (13) poses significant memory and computational overhead. It is necessary to verify if the final equation holds true, specifically whether the squared term's batch average is equivalent to the mean of the squared terms.
5. **Comparison with Latest Methods**: The experiments lack comparisons with some of the latest KFAC-variant methods. Additionally, current results omit several key metrics, such as peak memory usage, and selected baselines appear weak. For example, advanced training recipes for smaller models (like ResNet-50) on ImageNet typically achieve over 80% accuracy, making comparisons with sub-75% settings less meaningful.
Minor Comments
- **Layer Outputs**: Clarify the relationship between the outputs of the previous layer and the inputs of the next layer, as this affects the solution for $g_l$ in Eq. (3).
- **Notation Consistency**: The notation $F_{LNGD}$ is not defined.
- **Citations**: The citations appear to be added post hoc, with critical parts lacking references. The related work section is also absent, suggesting a possible unfamiliarity with the latest advancements in the field, as evidenced by the absence of citations from 2023 onward.
- **Typos**: There are typographical errors in the paper, such as inconsistent matrix multiplication dimensions between lines 195-197 and a missing addition sign between lines 183-184.
- **Algorithm Description**: The description of Algorithm 1 should not be placed in the appendix.
Technical Quality: 3
Clarity: 2
Questions for Authors: See Weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See Weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Q1: Assumption of Gaussian distribution.
A1: Thanks for your valuable suggestion. We have added Figure 1 to illustrate the validity of the Gaussian distribution assumption; please refer to the submitted PDF file. We collect the outputs of two layers of the ResNet-18 network on CIFAR-10. Figure 1 (a) and (b) show the distributions of the sample representation vectors' values in a given dimension. Since we use the ReLU activation function, the obtained distributions agree with a Gaussian distribution restricted to the positive half-line. Figure 1 (c) and (d) show the distributions of the sample representation vectors' Euclidean norms, from which we can see that the two distributions can also be approximated by Gaussian distributions. We will also consider this problem further and add relevant explanations in the revised version.
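To make the nature of this check concrete, here is a minimal synthetic sketch (a randomly initialized fully connected layer with ReLU applied to Gaussian inputs, not the actual ResNet-18/CIFAR-10 setting of Figure 1):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for "sample representations": random inputs pushed through
# a randomly initialized fully connected layer with ReLU
# (NOT the ResNet-18/CIFAR-10 setting of Figure 1).
X = rng.standard_normal((10_000, 128))
W = rng.standard_normal((128, 256)) / np.sqrt(128)
H = np.maximum(X @ W, 0.0)          # ReLU truncates Gaussian pre-activations at 0

# (a)/(b)-style check: one coordinate follows a Gaussian restricted to the
# positive half-line, with the negative mass collapsed onto zero.
coord = H[:, 0]
frac_zero = np.mean(coord == 0.0)   # close to 0.5 for zero-mean pre-activations

# (c)/(d)-style check: Euclidean norms concentrate around their mean,
# consistent with an approximately Gaussian norm distribution for wide layers.
norms = np.linalg.norm(H, axis=1)
ratio = norms.std() / norms.mean()  # small relative spread
```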
Q2: Convergence analysis.
A2: Thank you for your insightful feedback. The convergence analysis presented in our manuscript is predominantly grounded in the framework used for the convergence analysis of natural gradient descent in related works. As you correctly pointed out, this result does have limitations, as different methods may exhibit similar rates when applied to networks with a large number of hidden units. We have recognized this issue and have endeavored to provide a convergence analysis in terms of convergence rate and iteration steps within the framework of stochastic optimization under rigorous and appropriate assumptions. This work is currently ongoing, and we intend to present such results in future work in order to comprehensively illustrate the advantages of natural gradient descent in comparison with other methods.
Q3: Complexity of adaptive learning rate.
A3: We apologize for any ambiguity in the previous description. The adaptive learning rate is presented in Eq. (12). Furthermore, to enhance computational efficiency, we transform matrix computations into vector form as delineated in Eq. (13). Notably, both ${\bf d}^k_l$ and ${\bf F}_l^k$ are common components in Natural Gradient Descent (NGD) approaches, ensuring that our method does not introduce additional variables but instead incorporates vector computations. In Theorem 1, we have demonstrated that the adaptive layer-wise learning rate contributes to a more rapid convergence of the learnable parameters. Additionally, we conducted an ablation analysis in Section F.3 of the appendix, wherein we confirm that the adaptive layer-wise learning rate effectively accelerates the training procedure.
In addition, LNGD serves as a second-order optimization algorithm. In practical applications, it operates similarly to other second-order optimizers (e.g., KFAC, EKFAC and TKFAC) and first-order optimization algorithms (e.g., Adam), which aim to efficiently adjust the magnitude of the gradient updates of the trainable parameters and often necessitate the integration of various learning rate schedules, including the linear warmup and cosine decay strategies you mentioned. In our manuscript, we follow other second-order optimization algorithms and incorporate an exponential updating strategy for the learning rate.
Q4: Computational overhead.
A4: The aim of Eq. (13) is to expedite the computation of the adaptive layer-wise learning rate given in Eq. (12) by converting the matrix computations to vector form. It is noteworthy that the terms ${\bf d}_l^k$ and $\mathcal{D}\theta_l$ in Eq. (13) are common variables in the NGD method; therefore, we have not introduced additional parameters, but have instead incorporated additional vector multiplication calculations to yield the adaptive layer-wise learning rate. Nonetheless, as shown by Theorem 1 and the ablation analysis in Section F.3 of the appendix of our manuscript, the additional calculations introduced for computing the adaptive layer-wise learning rate are acceptable and can significantly speed up the convergence of the training procedure.
Regarding the validity of the final equation in Eq. (13), indeed, the strict equality does not hold. We are very grateful for your careful reading and correction. In reality, an approximate equality should be used here, where we utilize the batch average of the squared term to estimate the mean of the squared terms in computation. We will amend this in the revised version. Thank you once again.
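As a tiny numerical illustration of this point (with a hypothetical batch `x`, not data from our experiments), the square of a batch average differs from the batch average of squares by exactly the batch variance:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # hypothetical per-sample quantities in a batch

square_of_mean = np.mean(x) ** 2      # (2.5)^2 = 6.25
mean_of_squares = np.mean(x ** 2)     # (1+4+9+16)/4 = 7.5

# The two differ by exactly the batch variance, so the approximate equality
# is only exact when the batch entries are all identical.
gap = mean_of_squares - square_of_mean
assert np.isclose(gap, np.var(x))     # gap = Var(x) = 1.25
```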
Q5: Comparison with latest methods.
A5: We acknowledge and appreciate the valuable suggestion provided. Our objective is to achieve a satisfactory balance between model training speed and accuracy. Building upon existing NGD approaches, we propose a layer-wise sampling method to efficiently compute the block matrix corresponding to each layer and introduce an adaptive layer-wise learning rate to expedite training. In Appendix B of our manuscript, we present comparisons of LNGD with KFAC and its recent variants (EKFAC and TKFAC). To further validate the effectiveness of LNGD, we conducted additional comparative experiments, which affirm LNGD's superior performance in achieving the same level of accuracy. The detailed experimental results can be found in the response to all reviewers. As our emphasis is not on whether LNGD can ultimately achieve the highest accuracy, but rather on the performance of various optimizers in reaching an acceptable accuracy, we may have overlooked metrics such as peak memory usage. If time permits, we will evaluate the final accuracy level achievable by LNGD in the final version. Once again, we express our gratitude for the important and valuable suggestions provided for the further improvement of LNGD.
Q6: Minor comments.
A6: Thank you for your insightful suggestions. We will diligently proofread the manuscript and address the notations and typographical errors. Additionally, we will incorporate a comprehensive overview of recent pertinent literature published in 2023 and 2024 in the revised version. The details of Algorithm 1 will also be presented in subsection 3.3.
---
Rebuttal 2:
Comment: Dear Reviewer HWh2,
Could you please respond with how you think about the authors' response? Please at least indicate that you have read their responses.
Thank you, Area chair
---
Rebuttal 3:
Comment: Dear reviewer:
We sincerely appreciate your taking the time to review our response. We hope that the above clarifications address your concerns and contribute positively to your overall assessment of the paper. | Rebuttal 1:
Rebuttal: We are very grateful to the four reviewers for their constructive comments and valuable suggestions on our manuscript.
The tables and figures mentioned in the reply are given in the submitted PDF file; please see it for details.
In our manuscript, we aim to propose an optimizer that achieves a good balance between the training speed and accuracy of the model. To this end, based on existing NGD approaches, we introduce LNGD. In Appendix B of our manuscript, we provide comparisons of the primary differences among LNGD, KFAC, and its recent variants (EKFAC and TKFAC). In order to further validate the effectiveness of LNGD, we conducted additional experiments on the CIFAR-10 dataset, in which three methods, EKFAC [1], TKFAC [2], and NG+ [3], are added for comparison. The detailed statistics are presented in Table 1. From the table, we observe that LNGD achieves a testing accuracy of 91\% with the fewest epochs and the shortest total time. Furthermore, LNGD exhibits the smallest computational time per epoch. Additionally, due to the efficient FIM approximation strategy adopted by NG+, it can significantly reduce the computational time and number of epochs compared to EKFAC and TKFAC. Thus, considering the time constraints, we prioritized testing the performance of NG+ on ImageNet, and the results are presented in Table 2. It is evident that LNGD consistently demonstrates a noteworthy reduction in computational time compared to NG+.
References:
[1] Thomas George, César Laurent, Xavier Bouthillier, Nicolas Ballas, and Pascal Vincent. Fast approximate natural gradient descent in a Kronecker factored eigenbasis. In Advances in Neural Information Processing Systems, pages 9550–9560, 2018.
[2] Kaixin Gao, Xiaolei Liu, Zhenghai Huang, Min Wang, Zidong Wang, Dachuan Xu, and Fan Yu. A trace-restricted Kronecker-factored approximation to natural gradient. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 35, pages 7519–7527, 2021.
[3] Minghan Yang, Dong Xu, Qiwen Cui, Zaiwen Wen, and Pengxiang Xu. An efficient Fisher matrix approximation method for large-scale neural network optimization. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(5):5391–5403, 2023.
Pdf: /pdf/81e1cc963855de54e4afa32456ebd6df2db766e3.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory | Accept (poster) | Summary: In this paper, the authors discuss the impact of random initialization of deep neural networks in the neural tangent kernel (NTK) regime.
Precisely, the authors prove that, in the case of standard random initialization, the network function in the infinite-width limit converges to the NTK predictor uniformly.
Then, by studying the behavior of the underlying Gaussian process, the authors establish upper and lower bounds for the generalization errors of DNNs in the NTK regime, in terms of the real interpolation space of the RKHS.
Strengths: The paper studies an important problem.
The paper is in general well written and not difficult to follow.
The numerical results, at least those with artificial data, look compelling.
Weaknesses: It is a bit difficult for me to evaluate this paper.
The technical results of Theorem 3.3 and 4.1 are not very surprising and their proofs also look more or less standard.
The result in Theorem 4.2 appears novel and interesting, but it is a bit difficult to position it in the existing literature on kernels or DNNs. I believe that the authors should make more efforts on that to make the message of this paper clearer.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Is Proposition 2.2 a novel result? If yes, where is its proof?
* It appears that the notation for expectation is not consistent between Assumption 1 and line 178.
* The results of Theorem 3.3 and 4.1 are not very surprising and their proofs also look more or less standard
* The result in Theorem 4.2 looks interesting, but essentially focuses on the Gaussian process with zero mean and random feature kernel of covariance function. Also, is this the FIRST time such results are established (e.g., in terms of real interpolation space of the RKHS)? How should we compare this result to other kernels/methods/models? It may be helpful to compare this behavior to the literature.
---
I thank the authors for their detailed reply and clarification. This helps me have a better grasp of the presented results; I will increase my score accordingly.
I would like to ask the authors to carefully include the above discussions in a revised version of the paper, as also suggested by Reviewer 43yC.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: I do not see any potential negative social impact of this work.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your careful reading and for raising these questions. We will summarize our main technical contribution and address the questions:
### **Summary of our technical contribution**
*"The result in Theorem 4.2 appears novel and interesting, but it is a bit difficult to position this in the existing literature on kernel methods or deep neural networks. I believe that the authors should make more efforts to make the message of this paper clearer."*
Thank you for your review and thoughtful comments. Your intuition is correct. Our paper discusses the impact of initialization on the generalization ability of the network under NTK theory, with the most critical technical contribution being Theorem 4.2. This theorem addresses the smoothness, with respect to the NTK, of the Gaussian process that approximates the network's output function at initialization. We now discuss the challenges and contributions of Theorem 4.2:
- **Challenge of Theorem 4.2**:
- If the input is uniformly distributed on the sphere $\mathbb{S}^d$ instead of $\mathcal{X}$, both NTK and RFK are inner-product kernels and their Mercer's decomposition shares the same eigen-function (i.e., spherical harmonic polynomials). In this situation, simple calculations on the eigenvalues would show that $f^{\mathrm{GP}}\in [\mathcal{H}^{\mathrm{NT}}]^{\frac{3}{d+1}}$.
- However, when the data follows a general distribution supported on a bounded open set $\mathcal{X} \subset \mathbb{R}^d$, the NTK and RFK no longer possess this nice property, which makes the claim that $f^{\mathrm{GP}}\in [\mathcal{H}^{\mathrm{NT}}]^{\frac{3}{d+1}}$ unclear.
- **Solution and Technical Contributions**:
- Before we clarify our solution and technical contributions, we first recollect some notations. For a sub-region $S\subset \mathbb{S}^d$, let $\mathcal{H}_0^{\mathrm{NT}}(\mathbb{S}^d)$ be the RKHS associated with the homogeneous NTK defined on $\mathbb{S}^d$ and $\mathcal{H}_0^{\mathrm{NT}}(S)=\mathcal{H}_0^{\mathrm{NT}}(\mathbb{S}^d) |_S$ be its restriction on the sub-region $S\subset \mathbb{S}^d$. We define $\mathcal{H}_0^{\mathrm{RF}}(\mathbb{S}^d)$ and $\mathcal{H}_0^{\mathrm{RF}}(S)$ similarly.
- We first show that, on a general $\mathcal{X}$, $\mathcal{H}^{\mathrm{RF}} \cong [\mathcal{H}^{\mathrm{NT}}]^\frac{d+3}{d+1}$ (where $\cong$ denotes an isomorphism of Hilbert spaces, not of RKHSs).
- Let us fix a continuous and isomorphic mapping from $\mathcal{X} \subset \mathbb{R}^d$ to a sub-region $S \subset \mathbb{S}^d$. We then show the equivalence (as RKHS):
$$
\mathcal{H}^{\mathrm{\mathrm{RF}}}\cong \mathcal{H}_0^{\mathrm{\mathrm{RF}}}(S), \quad \mathcal{H}^{\mathrm{NT}}\cong \mathcal{H}_0^{\mathrm{NT}}(S).
$$
This equivalence is not as trivial as it appears.
- We then need to show that $\mathcal{H}_0^{\mathrm{RF}}(S)\cong [\mathcal{H}_0^{\mathrm{NT}}(S)]^{\frac{d+3}{d+1}}$.
Noticing that the Mercer's decompositions of the RFK and NTK share the same eigenfunctions (i.e., the spherical harmonic polynomials), we know that $\mathcal{H}_0^{\mathrm{RF}}(\mathbb{S}^d)\cong [\mathcal{H}_0^{\mathrm{NT}}(\mathbb{S}^d)]^{\frac{d+3}{d+1}}$. If the restriction $\mid_S$ and the interpolation $[\cdot]^{\frac{d+3}{d+1}}$ commuted, we would be done. Unfortunately, this commutativity is not easy to verify in general.
- Fortunately, by a recent result [haas2023mind], we have $\mathcal{H}_0^{\mathrm{RF}}(\mathbb{S}^d) \cong W^{\frac{d+3}{2},2}(\mathbb{S}^d)$ and $\mathcal{H}_0^{\mathrm{NT}}(\mathbb{S}^d) \cong W^{\frac{d+1}{2},2}(\mathbb{S}^d)$ where $W^{s,2}$ denotes the usual fractional Sobolev space. Since Sobolev space is a more tractable object, we then carefully verify that $\mathcal{H}_0^{\mathrm{RF}}(S) = [\mathcal{H}_0^{\mathrm{NT}}(S)]^{\frac{d+3}{d+1}}$ and finish the proof. We would like to emphasize that there are several different definitions for the fractional Sobolev spaces and there are no similar results explicitly stated in the literature.
- By Lemma H.5, we have $[\mathcal{H}^{\mathrm{RF}}]^{\frac{3}{d+3}} \cong [[\mathcal{H}^{\mathrm{NT}}]^\frac{d+3}{d+1}]^{\frac{3}{d+3}} (\cong [\mathcal{H}^{\mathrm{NT}}]^{\frac{3}{d+1}})$.
- If we denote $\mathcal{H}^{\mathrm{RF}} = \left\lbrace \sum_{i \in \mathbb{N}} a_i \lambda_i^{\frac{1}{2}} e_i \mid \sum_{i \in \mathbb{N}} a_i^2 < \infty \right\rbrace $, then the K-L expansion gives us that $f^{\mathrm{GP}} = \sum_{i \in \mathbb{N}} Z_i \lambda_i^{\frac{1}{2}} e_i$ where $\lbrace Z_i\rbrace_{i \in \mathbb{N}}$ are i.i.d. standard Gaussian variables. We can verify that $\mathbf{P}( f^{\mathrm{GP}}\in [\mathcal{H}^{\mathrm{RF}}]^{t}) = 1$ for $t < \frac{3}{d+3}$ and that $\mathbf{P}( f^{\mathrm{GP}}\in [\mathcal{H}^{\mathrm{RF}}]^{t}) = 0$ for $t \geq \frac{3}{d+3}$. i.e., we have
$$
\mathbf{P}( f^{\mathrm{GP}}\in [\mathcal{H}^{\mathrm{NT}}]^{t}) = 1 \text{ for } t < \frac{3}{d+1} \text{ and } \mathbf{P}( f^{\mathrm{GP}}\in [\mathcal{H}^{\mathrm{NT}}]^{t}) = 0 \text{ for }t\geq \frac{3}{d+1}.
$$
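To spell out the zero-one law used in the last step, here is a short sketch; it assumes the standard series characterization of the interpolation spaces and a polynomial eigenvalue decay $\lambda_i \asymp i^{-(d+3)/d}$ for the RFK (consistent with the Sobolev identification cited above), so it is an outline rather than the full proof:

```latex
% Series characterization (assumed): f = \sum_i a_i e_i \in [\mathcal{H}^{\mathrm{RF}}]^{t}
% if and only if \sum_i a_i^2 \lambda_i^{-t} < \infty.
% Plugging in the K-L coefficients a_i = Z_i \lambda_i^{1/2} yields the random series
\sum_{i \in \mathbb{N}} Z_i^2 \, \lambda_i^{1-t},
% which, the Z_i^2 being i.i.d. and nonnegative, converges a.s. if and only if
% \sum_i \lambda_i^{1-t} < \infty (a zero-one event by Kolmogorov's zero-one law).
% With \lambda_i \asymp i^{-(d+3)/d}:
\sum_{i \in \mathbb{N}} i^{-(1-t)\frac{d+3}{d}} < \infty
\iff (1-t)\tfrac{d+3}{d} > 1
\iff t < \tfrac{3}{d+3}.
```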
### **Response to the Questions**:
1. **Proposition 2.2**: Proposition 2.2 is a direct corollary of Theorem 1 in [zhang2023optimality]. Theorem 1 in [zhang2023optimality] provides an upper bound on the generalization error for general spectral algorithms. We directly applied this to the kernel gradient flow, a special type of spectral algorithm.
2. **Consistency in Notation**: Thank you for your correction. We will use the expectation symbol $\mathbf{E}$ consistently.
3. **Theorem Designation**: Thank you for pointing out the issues with Theorem 3.3 and Theorem 4.1. These two theorems respectively provide conclusions on uniform convergence and the impact of the initial output function in kernel gradient flow. The proofs of them are standard and we will change their designation from Theorem to Proposition.
4. **Explanation of Theorem 4.2**: The explanation of the technical contribution of Theorem 4.2 is detailed in our response above. Thank you again for your valuable feedback.
---
Rebuttal 2:
Title: References of Rebuttal
Comment: (We apologize for the rebuttal reaching the 6000-word limit. We provide the references below and appreciate your understanding of any inconvenience caused.)
### **References**
[haas2023mind] Haas, M., et al. (2023). “Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension”.
[zhang2023optimality] Zhang, H., Li, Y., and Lin, Q. (2023). “On the optimality of misspecified spectral algorithms”.
---
Rebuttal 3:
Comment: I thank the authors for their detailed reply and clarification.
This helps me have a better grasp of the presented results; I will increase my score accordingly.
I would like to ask the authors to carefully include the above discussions in a revised version of the paper, as also suggested by Reviewer 43yC.
---
Rebuttal Comment 3.1:
Comment: Thank you very much for your feedback. We are grateful for the chance to improve the presentation of the current paper. For example, we will include a brief summary of the challenge and technical contribution after presenting Theorem 4.2. We will certainly carefully incorporate other relevant discussions into the revised version of the paper.
We are especially grateful for your decision to increase the score, which is very encouraging for us.
---
Rebuttal Comment 3.2:
Comment: Thank you for your valuable suggestions and for considering an increase in the score. We appreciate your thoughtful review. (We noticed that the current score still appears as the previous one on our end; we mention this in case it is an error.)
We agree with your suggestions. After presenting Theorem 4.2, we will provide a brief introduction to the proof of Theorem 4.2 and the previous results it builds upon. This will help clarify the challenges addressed by Theorem 4.2 and highlight our technical contributions. Also, we will include the above discussions in the revised version, and we would like to re-iterate the changes we will incorporate:
1. The proofs of Theorems 3.3 and 4.1 are standard, and therefore, we propose renaming them as propositions rather than theorems.
2. We will enhance the discussion of related work on mirror initialization in the introduction, ensuring that our findings are better integrated with existing research.
3. The revised version will emphasize the generalization advantages of mirror initialization over standard initialization, as indicated by our results.
4. We will correct minor errors, including spelling and grammar mistakes, and clarify any ambiguous expressions in the text.
Thank you again for your thorough review and constructive feedback. If you have any further comments or questions, please feel free to ask. Thank you. | Summary: The paper aims to study the impact of standard random weight initialization - as opposed to mirror initialization - on NTK theory. The key observation is that, for standard initialization, the operation of the network at initialization $f_0$ acts as a bias on the regression function $f^*$. When analyzing the convergence of the excess risk in the kernel gradient flow model the convergence rate depends on the smoothness of the regression function, which is (effectively) $f_0+f^*$. The paper demonstrates that, regardless of how smooth $f^*$ is, the smoothness of $f_0+f^*$ is bounded above by the smoothness of $f_0$ (theorem 4.2). Non-zero initialization thus places upper and lower bounds on the generalization error predicted by KGF (theorem 4.3 and 4.4). This is claimed to bring into focus the shortcomings of KGF in the NTK context for analyzing the convergence of neural networks even in the infinitely wide limit.
Strengths: It is obviously important to consider the role of standard random initialization on the training and convergence of neural networks. The argument presented in the paper is clear and gives an interesting insight into the predictions of NTK and KGF for understanding generalization and convergence. Mathematically the results are strong (as best as I can tell, having skimmed the supplementary).
Weaknesses: Perhaps I am misreading the results, but it would seem to me that, far from demonstrating the downsides of NTK theory, you have in fact shown that NTK theory correctly predicts that standard initialization can degrade performance (in terms of the generalization error during training) compared to mirror initialization for the artificial data. With regard to the real data, all I can see is that you have shown that standard initialization will indeed decrease (effective) smoothness; however, your predictions are asymptotic, so without a side-by-side comparison of standard and mirror initialization training (as per the artificial data) I don't see how this demonstrates anything at all. It would certainly be interesting to see if the results for real data are the same as for artificial data, as this would (presumably) indicate that mirror initialization is to be preferred where possible.
(fwiw this is the core reason for the discrepancy between my evaluation of soundness and my recommendation. Mathematically the paper seems sound, but the interpretation seems incorrect to me: far from showing the downsides of NTK theory, the results would appear to show its power. If I'm wrong or the paper is suitably modified then I am happy to upgrade my recommendation).
Minor point: please run this paper through a spell checker.
Technical Quality: 3
Clarity: 2
Questions for Authors: My main question is whether or not it would be feasible to run mirror-initialization vs standard-initialization experiments on the real-world dataset, or even a high-dimensional artificial dataset. My guess would be that the results would replicate the Figure 1, which could have interesting practical implications.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: n/a
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thanks a lot for your valuable suggestion. It greatly helps us re-interpret our main results. More precisely, Theorem 4.3 and 4.4 actually tell us that the generalization ability depends on the smoothness of the target function $f^*$ and the initialization function. In particular, given the target function $f^* \in [\mathcal H]^{s}, (s \geq \frac{3}{d+1})$,
- The mirror initialization $f_0 = 0 \in [\mathcal H]^{\infty}$ and thus the generalization error is $n^{-\frac{s(d+1)}{s(d+1)+d}}$ ([li2023statistical]);
- The standard initialization $f_0 \in [\mathcal H]^{\frac{3}{d+1}}$ and thus the generalization error is $n^{-\frac{3}{d+3}}$ (our result).
Thus, the $L^2$ generalization error with mirror initialization is $n^{-\frac{s(d+1)}{s(d+1)+d}}$ (which is minimax optimal), while even if the target function $f^*$ is sufficiently smooth, the $L^2$ generalization error with standard initialization is $n^{-\frac{3}{d+3}}$ (which suffers from the curse of dimensionality). That is, the implicit bias introduced by initialization significantly impacts the generalization ability, and we should prefer mirror initialization in practice.
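As a quick numerical sanity check of the two exponents above (with hypothetical values of $d$ and $s$, not tied to our experiments):

```python
# Exponents of the n^{-rate} generalization error bounds quoted above.
def mirror_rate(s, d):
    """Rate s(d+1)/(s(d+1)+d) for mirror initialization (minimax optimal)."""
    return s * (d + 1) / (s * (d + 1) + d)

def standard_rate(d):
    """Rate 3/(d+3) for standard initialization (dimension-dependent)."""
    return 3 / (d + 3)

d, s = 10, 2.0                   # hypothetical dimension and smoothness
faster = mirror_rate(s, d)       # 0.6875: larger exponent = faster decay
slower = standard_rate(d)        # about 0.2308, which degrades for large d
```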
We reviewed previous literature and found that [zhang2020type] compared mirror initialization with standard initialization. In Sections 7.2.3 and 7.2.4 of [zhang2020type], Figures 3 and 4 show experiments on the Boston house price dataset and the MNIST dataset, demonstrating that mirror initialization (referred to as Anti-Symmetric Initialization in their notation) indeed outperforms standard initialization in terms of generalization ability. This confirms that NTK theory correctly predicts the impact of initialization in real applications, and we will emphasize this point in our paper.
In summary, we appreciate your highly valuable suggestion. We will revise our paper to emphasize the impact of different methods of initialization in the NTK theory. Thank you again for your insightful comments.
### References
[li2023statistical] Li, Y., et al. (2024). “On the Eigenvalue Decay Rates of a Class of Neural-Network Related Kernel Functions Defined on General Domains”.
[zhang2020type] Zhang, Y., et al. (2020). “A type of generalization error induced by initialization in deep neural networks”.
---
Rebuttal Comment 1.1:
Comment: Thank you for this this clarification, I have raised my score accordingly.
---
Reply to Comment 1.1.1:
Comment: Thank you for your feedback and for taking the time to review our rebuttal. We are pleased to hear that our clarifications helped to address your concerns, and we greatly appreciate your willingness to increase the score. In response to your suggestions, we have reconsidered the interpretation of our results to ensure that the connection between NTK theory and the observed effects on generalization performance is well presented, and we will highlight the potential of mirror initialization in the revised version.
Thank you again for your thoughtful review and for helping us to improve the quality of our paper. | Summary: This paper explores the standard random initialization of neural networks through the lens of the neural tangent kernel (NTK). This connection is made by showing that a randomly initialized neural network does indeed converge to the NTK uniformly during training, thus allowing one to analyze the generalization ability of a neural network with more relaxed assumptions on initialization. Furthermore, this paper finds that neural network performance under this regime is at odds with NTK theory, which indicates that networks should perform poorly for high-dimensional inputs.
Strengths: The paper is sufficiently original in that it explores the more conventional initialization schemes used with neural nets and shows that, despite uniform convergence to the NTK, the kernel itself contradicts the real-world performance of the networks. This is a continuation of the theme in NTK theory where the infinite-width regime fails to fully capture the finite-width network's performance.
This is presented in a clear and thorough manner with rigorous proofs and straightforward, but convincing experiments. In regards to significance, again, this further builds on top of work that has "poked holes" in the practical usage of NTK theory and as such is a relatively significant work in that direction.
Weaknesses: There are a few typos/grammar mishaps that I caught in the paper:
Proposition 2.2 - "...noise term $\epsilon$ **satisfis**..."
Lines 126-127 - "...*let us reall the* concept..."
Line 233 - "This is **an** generalization..."
Lines 266-267 - "...the network **on longer generalizes** well..." (does not make sense)
Line 314 - "...datasets *from real world* and estimate the *smoothness of function*."
Technical Quality: 4
Clarity: 4
Questions for Authors: Line 186 - Can you specify what space $C(\mathcal{X}, \mathbb{R})$ refers to?
Lines 193-194 - (Less of a question and more of a comment) I am assuming NNK is just your notation for the standard empirical NTK...
Lines 274-275 - Why is it necessary to remove the bias term for the derivation of the lower bound?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: Everything is adequately addressed.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Minors
Thank you for your thorough review and for pointing out the typos and grammatical errors in our paper. We appreciate your feedback, and we have made the necessary corrections as follows:
1. **Proposition 2.2:** The typo "satisfis" has been corrected to "satisfies".
- Original: *"...noise term $\epsilon$ satisfis..."*
- Corrected: *"...noise term $\epsilon$ satisfies..."*
2. **Lines 126-127:** The typo "reall" has been corrected to "recall".
- Original: *"...let us reall the concept..."*
- Corrected: *"...let us recall the concept..."*
3. **Line 233:** The grammatical error "an generalization" has been corrected to "a generalization".
- Original: *"This is an generalization..."*
- Corrected: *"This is a generalization..."*
4. **Lines 266-267:** The phrase "on longer" has been corrected to "no longer" to clarify the meaning.
- Original: *"...the network on longer generalizes well..."*
- Corrected: *"...the network no longer generalizes well..."*
5. **Line 314:** The phrase has been corrected for clarity and grammatical correctness. The article "the" has been added before "real world" and "function" has been pluralized to "functions".
- Original: *"...datasets from real world and estimate the smoothness of function."*
- Corrected: *"...datasets from the real world and estimate the smoothness of functions."*
We have carefully reviewed the entire manuscript to ensure that no additional errors are present. Thank you for your attention to detail, which has helped us improve the quality of our paper.
### Response to Questions
1. **Meaning of** $C(\mathcal{X},\mathbb{R})$:
$C(\mathcal{X},\mathbb{R})$ includes all continuous functions from $\mathcal{X}$ to $\mathbb{R}$ with norm defined as $\lVert f \rVert = \sup_{x \in \mathcal{X}} |f(x)|$ for $f \in C(\mathcal{X},\mathbb{R})$. In fact, Lemma 3.2 on Line 186 is Theorem 1.2 in [hanin2021random]. The definition of $C(\mathcal{X},\mathbb{R})$ is consistent with that of $C^0(T,\mathbb{R}^{n_L+1})$ in the latter. For more information on weak convergence of continuous random processes, one may refer to Section 7 of [billingsley2013convergence].
2. **NNK:** The Neural Network Kernel (NNK) we define here is indeed the standard empirical NTK defined in some other literature. Thank you for your feedback and we apologize for the confusion caused by our notation. We will specifically mention this point in the main text to avoid potential confusion caused by the name.
3. **Why Remove the Bias Term When Obtaining the Lower Bound:** This technical setting is for convenience in the derivation of our proof. After removing the bias term in the network structure and further assuming that the data is distributed on the sphere, the NTK is a dot-product kernel defined on the sphere and the corresponding RKHS meets the so-called Regular RKHS condition (refer to Assumption 2 and Example 2.2 of [li2024generalization]). In general, this technical setting helps us obtain the lower bound of the generalization error.
### References
[billingsley2013convergence] Billingsley, P. (2013). “Convergence of probability measures”.
[hanin2021random] Hanin, B. (2021). “Random neural networks in the infinite width limit as Gaussian processes”.
[li2024generalization] Li, Y., et al. (2024). “Generalization Error Curves for Analytic Spectral Algorithms under Power-law Decay”.
---
Rebuttal Comment 1.1:
Comment: Thank you for your comments and clarification. I concur with the other reviewers that the additional context you provided in your rebuttals would improve the quality of the paper. I will keep my score the same currently and await the revisions.
I also found another typo on line 115 "... absolute positive constant $c$ **abd** $C$...".
---
Reply to Comment 1.1.1:
Comment: Thank you for your thoughtful feedback. I appreciate your agreement with the other reviewers regarding the additional context provided in the rebuttal and its potential to enhance the quality of the paper. I will carefully address the points raised and incorporate the necessary revisions.
Additionally, thank you for pointing out the typo on line 115. I will correct the phrase to “absolute positive constant $c$ and $C$” in the revised version.
I look forward to making these improvements and resubmitting the revised paper. Thank you again for your careful review. | Summary: This paper studies various kernel theories of neural networks, both random feature and neural tangent kernels. The main approach is to use the decay rates of the target function and kernel eigenvalues to get generalization error rates. The paper's novel theoretical contribution is to show uniform convergence of the network's input-output function $f$ to that of a kernel regressor with the NTK kernel. The setting is that the weights are scaled to be in the NTK regime and both are trained under gradient flow. Previously known results are used to characterize the RKHS associated with the NTK and "interpolations" of this space (which is useful if the target function isn't actually in the RKHS). It is argued that, since the network initialization with random weights is not smooth, the network overall will have a slow error rate. These theoretical arguments are backed up with experiments on synthetic data comparing the standard initialization, which was studied theoretically, with a "mirrored" one. Also, the smoothness of the MNIST, CIFAR-10, and Fashion-MNIST classification problems is estimated and found to be quite different from the basic bound based on input dimensionality.
Strengths: The work here brings together a bunch of connected kernel theory and neural network results in a (relatively) readable treatment. The uniform convergence of the network's output to a Gaussian process is good to know. The overall goal of understanding the effect of initialization is certainly important, as is highlighting the weaknesses of kernel frameworks in understanding neural networks.
Weaknesses: * In the introduction and experimental sections, the standard and mirror initializations are contrasted. From the introduction, I was expecting to see theoretical results that differentiated between the mirror and standard initializations. The authors could be clearer about what their contributions are and more clearly state what is known about the mirror initialization.
* As someone who is fairly familiar with kernel theory of neural networks, I still found it hard to identify which theorems and other results presented by the authors here are actually novel and which are applications of known results to their setting. For instance, isn't the decay rate of $1/(d+3)$ classical and known from Sobolev theory? I would suggest that the authors clarify this.
* The bulk of the paper is really just setting the stage for your results. I would have liked to see more explanation of how to interpret your results and discussion of how they fit into the greater understanding of these networks/kernels. I would also have liked more detail on the experiments within the main text.
* The synthetic experiments (Fig 1) only cover less than an order-of-magnitude in $n$. They would be much more convincing if they covered more. However, I can imagine you'd run into issues with double-descent and other factors that your asymptotic theory could have trouble with.
Technical Quality: 3
Clarity: 2
Questions for Authors: * Please clarify the definition of the interpolation space (Eq 6 and around it). Your notation for the space of functions is unclear to me. Some explanation should be given. It is unusual to see a function written as a series $\sum_i a_i \lambda_i^{s/2} e_i(x)$
* Fix the formatting issue in line 104
* Prop 2.2 "satisfis" typo
* Do you think it would be more consistent to use the NTK superscript everywhere you treat it rather than f^NTK and K^NT ?
* Line 267 "network on longer" typo
* Does the mirror initialization lead to a significantly different error bound than yours, Thm 4.3?
* Can you include the method of estimating the smoothness of the various classification problems (Table 1) in the main text? Looking at the appendix, I see it depends on an empirical kernel/Gram matrix, but under what kernel is not clear. NTK?
* You report various least-square fits on log-log plots. You should be clear about how these were performed. Linear fits in log-space are known to have problems versus least-square fitting with power-laws. See the work by Clauset et al.
* In your notation section, you use standard $O, \Omega, o, \omega$ symbols but do not define the frequent $\asymp$ symbol. This should be explained.
Confidence: 3
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The work here is very focused on the NTK theory of a multi-layer perceptron/fully-connected network. Strong claims are made about the weakness of such theories. However, different architectures lead to different kernels with different spectral properties and generalization performance. For instance, convolutional architectures lead to convolutional kernels which are known to work better in image classification and other spatially structured settings. It is also known that initialization with weights that are not Gaussian white noise can lead to better performance (see the work on "structured random features" and "rainbow networks").
I did not check the appendix proofs, but I read all of the main text results.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: ### Brief Introduction of Mirror Initialization
*"In the introduction and experimental sections, the standard and mirror initializations are contrasted. ...... what is known about the mirror initialization."*
Thank you for your suggestion. Here we provide a brief introduction of the existing theoretical results on mirror initialization.
- Under mirror initialization, [lai2023generalization], [li2023statistical], and [lai2023generalization2] studied the generalization ability of single-layer fully connected, multi-layer fully connected, and multi-layer residual networks, respectively, and proved the minimax optimality of the network. For example, [li2023statistical], which studied the generalization ability of multi-layer fully connected mirror-initialized networks, concluded that for $f^* \in [\mathcal{H}^{NT}]^s, (s \geq 1)$, the $L^2$ generalization error is $n^{-\frac{s(d+1)}{s(d+1)+d}}$. This differs significantly from the $n^{-\frac{3}{d+3}}$ we obtained, reflecting the theoretical differences between the two initializations.
### More Explanation of Our Contribution
*"As someone who is fairly familiar with kernel theory of neural networks, I still found ...... I would suggest that the authors clarify this."*
Before responding, we apologize for a typo. As mentioned in Theorem 4.2, the smoothness of the Gaussian process is $\frac{3}{d+1}$. Therefore, the smoothness in lines 250, 260, and 281 of the theorems should also be $\frac{3}{d+1}$ instead of $\frac{1}{d+3}$. This typo does not affect our results.
Thanks for the reminder; we are glad to respond to your questions. Since the NTK and Gaussian processes are well-studied objects, we do need to clarify what our novel contribution is.
The major contribution appears in Theorem 4.2, where we showed that $P(f^{GP} \in [\mathcal H^{NT}]^{t}) = 1_{t < \frac{3}{d+1}}$. Consequently, the generalization error of gradient descent with standard random initialization is $n^{-\frac{3}{d+3}}$.
To the best of our knowledge, this cannot be simply concluded from classical Sobolev theory.
- **Challenge of Theorem 4.2:**
- If the input is uniformly distributed on the sphere $\mathbb{S}^d$ instead of $\mathcal{X}$, both the NTK and RFK are inner-product kernels and their Mercer decompositions share the same eigenfunctions (i.e., spherical harmonic polynomials). In this situation, simple calculations on the eigenvalues would show that $f^{GP} \in [\mathcal{H}^{NT}]^{\frac{3}{d+1}}$.
- However, when the data follows a general distribution supported in a bounded open set $\mathcal{X} \subset \mathbb{R}^d$, the NTK and RFK no longer possess this nice property, which makes the claim that $f^{GP} \in [\mathcal{H}^{NT}]^{\frac{3}{d+1}}$ unclear.
- Before we clarify our solution and technical contributions, we first recall some notation. For a sub-region $S \subset \mathbb S^d$, let $\mathcal{H}_{0}^{NT}(\mathbb{S}^d)$ be the RKHS associated with the homogeneous NTK defined on $\mathbb S^d$ and $\mathcal{H}_0^{NT}(S)=\mathcal{H}_0^{NT}(\mathbb{S}^d) |_S$ be its restriction to the sub-region $S \subset \mathbb{S}^d$. We define $\mathcal{H}_0^{RF}(\mathbb{S}^d)$ and $\mathcal{H}_0^{RF}(S)$ similarly.
- We first show that, on a general $\mathcal{X}$, $\mathcal{H}^{RF} \cong [\mathcal{H}^{NT}]^\frac{d+3}{d+1}$ (where $\cong$ denotes isomorphism as Hilbert spaces, not as RKHSs).
- Let us fix a continuous and isomorphic mapping from $\mathcal{X} \subset \mathbb{R}^d$ to a subregion $S \subset \mathbb{S}^d$. We then show the equivalence (as RKHS) $$\mathcal{H}^{RF} \cong \mathcal{H}_{0}^{RF}(S), \mathcal{H}^{NT} \cong \mathcal{H}_0^{NT}(S).$$ This equivalence is not as trivial as it appears.
- We then need to show that $\mathcal{H}_0^{RF}(S) \cong [\mathcal{H}_0^{NT}(S)]^{\frac{d+3}{d+1}}$.
- Noticing that the Mercer decompositions of the RFK and NTK share the same eigenfunctions (i.e., the spherical harmonic polynomials), we know that $\mathcal{H}_0^{RF}(\mathbb S^d) \cong [\mathcal{H}_0^{NT}(\mathbb{S}^d)]^{\frac{d+3}{d+1}}$. If the restriction $\mid_S$ and the interpolation $[\cdot]^{\frac{d+3}{d+1}}$ were commutative, we would be done. Unfortunately, this commutativity is not easy to verify in general.
- Fortunately, by a recent result [haas2023mind], we have $\mathcal{H}_0^{RF}(\mathbb{S}^d) \cong W^{\frac{d+3}{2},2}(\mathbb{S}^d)$ and $\mathcal{H}_0^{NT}(\mathbb{S}^d) \cong W^{\frac{d+1}{2},2}(\mathbb{S}^d)$ where $W^{s,2}$ denotes the usual fractional Sobolev space. Since Sobolev space is a more tractable object, we then carefully verify that $\mathcal{H}_0^{RF}(S) = [\mathcal{H}_0^{NT}(S)]^{\frac{d+3}{d+1}}$ and finish the proof. We would like to emphasize that there are several different definitions for the fractional Sobolev spaces and there are no similar results explicitly stated in the literature.
- By Lemma H.5, we have $[\mathcal{H}^{RF}]^{\frac{3}{d+3}} \cong [[\mathcal{H}^{NT}]^\frac{d+3}{d+1}]^{\frac{3}{d+3}} (\cong [\mathcal{H}^{NT}]^{\frac{3}{d+1}})$.
- If we denote $\mathcal{H}^{RF} = \left\lbrace \sum_{i \in \mathbb{N}} a_i \lambda_i^{\frac{1}{2}} e_i | \sum_{i \in \mathbb{N}} a_i^2 < \infty \right\rbrace$, then the K-L expansion gives us that $f^{GP} = \sum_{i \in \mathbb{N}} Z_i \lambda_i^{\frac{1}{2}} e_i$ where $\{Z_i\}$ are i.i.d. standard Gaussian variables. We can verify that $\mathbf{P}(f^{GP} \in [\mathcal{H}^{RF}]^{t}) = 1$ for $t < \frac{3}{d+3}$ and that $\mathbf{P}(f^{GP} \in [\mathcal{H}^{RF}]^{t}) = 0$ for $t \geq \frac{3}{d+3}$. i.e., we have
$\mathbf{P}(f^{GP} \in [\mathcal{H}^{NT}]^{t}) = 1$ for $t < \frac{3}{d+1}$ and that $\mathbf{P}(f^{GP} \in [\mathcal{H}^{NT}]^{t}) = 0$ for $t \geq \frac{3}{d+1}$.
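For intuition, the zero-one law above can be sanity-checked as follows (a sketch under the assumption of polynomial eigenvalue decay $\lambda_i \asymp i^{-\beta}$ with $\beta = \frac{d+3}{d}$, as suggested by $\mathcal{H}_0^{RF}(\mathbb{S}^d) \cong W^{\frac{d+3}{2},2}(\mathbb{S}^d)$):

```latex
% Zero-one law sketch, assuming \lambda_i \asymp i^{-\beta} with \beta = (d+3)/d.
\begin{align*}
f^{GP} = \sum_{i} Z_i \lambda_i^{1/2} e_i \in [\mathcal{H}^{RF}]^{t}
  &\iff \sum_{i} Z_i^2 \lambda_i^{1-t} < \infty \quad \text{a.s.} \\
  &\iff \sum_{i} i^{-\beta(1-t)} < \infty
   \iff \beta(1-t) > 1
   \iff t < 1 - \frac{1}{\beta} = \frac{3}{d+3}.
\end{align*}
```

Here the second equivalence uses that, for i.i.d. standard Gaussians $Z_i$, the random series converges almost surely exactly when its expectation $\sum_i \lambda_i^{1-t}$ is finite.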
(We apologize for the inconvenience, due to the 6000-word limit, the second half is provided in the comment.)
---
Rebuttal Comment 1.1:
Comment: Hi, you mentioned that you included a second half to the rebuttal in another comment, but I don't think I can see it. It would be nice to read through it if possible/relevant.
---
Reply to Comment 1.1.1:
Comment: Hi, thank you for your message. I apologize for any confusion. I have now modified the comment to ensure that the second half of the rebuttal is visible to all reviewers. Please feel free to read through it, and I would greatly appreciate any further feedback you may have.
---
Rebuttal 2:
Title: The second part of Rebuttal
Comment: (We apologize for reaching the word limit earlier, and we provide the second half of the rebuttal here.)
### Summary of the Theoretical Significance of the Paper
*"The bulk of the paper is really just setting...... the experiments within the main text."*
Thank you for your suggestion. We will summarize the theoretical significance of the paper within the background of NTK theory.
- Background of our work: For a long time, NTK theory has played an important role in the study of neural networks. Within NTK theory, a series of works tried to explain the remarkable generalization ability of networks [lai2023generalization], [li2023statistical], [lai2023generalization2]; these works applied mirror initialization and derived the minimax optimality of the network.
- Impact of standard initialization: However, we wondered what the actual results would be if the standard initialization were considered. Under standard initialization, our result shows that the generalization ability of the network under NTK theory is actually quite poor. This is a surprising result and prompts us to think further.
- Significance of our work: On one hand, this tells us that the generalization ability of the network is influenced not only by the smoothness of the target function but also by the way of initialization. Thus, we have to carefully take the initialization into consideration. On the other hand, we can ask whether the inconsistency between theory and reality is due to the overly strong assumptions of NTK theory (e.g., that the width of the network is sufficiently large). In the future, it may be worthwhile to explore how the NNK changes during training at finite width to advance the existing NTK theory.
- Experiment: In the experiment of Section 5.1, we chose a single-layer network with width $m = 20n$ to ensure that the width is sufficiently large. For each $n$, we conducted 10 experiments with a learning rate of 0.6, and each training run lasted $10n$ epochs to ensure sufficient training time. We will add these details to the main text. Thank you for your suggestion.
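As a complement, a minimal sketch of this kind of setup might look as follows (illustrative only, not the paper's experiment code: the 1-d target, data sizes, and the choice to train only the input weights are our toy simplifications):

```python
# Illustrative sketch: single-hidden-layer ReLU network in NTK parameterization
# (1/sqrt(m) output scaling), trained by full-batch gradient descent on a toy
# 1-d regression problem.  Only the input weights are trained, with the output
# signs fixed -- a common simplification; all values here are toy choices.
import math, random

random.seed(0)
n = 10
m, lr, epochs = 20 * n, 0.6, 100          # width m = 20n, lr 0.6 as in the rebuttal
xs = [random.uniform(-1.0, 1.0) for _ in range(n)]
ys = [math.sin(3.0 * x) for x in xs]

w = [random.gauss(0.0, 1.0) for _ in range(m)]       # standard random init
a = [random.choice([-1.0, 1.0]) for _ in range(m)]   # fixed output layer

def predict(x):
    # NTK scaling: 1/sqrt(m) in front of the sum
    return sum(a[j] * max(w[j] * x, 0.0) for j in range(m)) / math.sqrt(m)

def mse():
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, ys)) / n

loss0 = mse()
for _ in range(epochs):
    grads = [0.0] * m
    for x, y in zip(xs, ys):
        r = 2.0 * (predict(x) - y) / n
        for j in range(m):
            if w[j] * x > 0.0:                        # ReLU subgradient
                grads[j] += r * a[j] * x / math.sqrt(m)
    for j in range(m):
        w[j] -= lr * grads[j]
print(f"train MSE: {loss0:.4f} -> {mse():.4f}")
```

In the NTK regime the residual contracts roughly like $(I - \tfrac{2\,\mathrm{lr}}{n}\hat K)$ per step, so the training loss decreases steadily at this learning rate.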
### The Problem of Fig 1
*"The synthetic experiments (Fig 1) only cover ..... theory could have trouble with."*
Thanks for pointing this out; we appreciate your correction. Indeed, the range of our $n$ is not large enough. In kernel theory, it is common for the generalization error to decrease polynomially with respect to $n$ (e.g., Fig 1 in [li2024saturation]), and in Fig 1 we aim to demonstrate a similar phenomenon. However, larger $n$ may indeed run into the double-descent issue you mentioned. Moreover, to ensure the convergence to the NTK, the width of the network $m$ needs to increase with $n$, which results in a high computational cost. These are the reasons why we did not increase $n$. Thank you again for your correction.
### Response to Questions
1. The notation $\sum a_i \lambda_i^{s/2}e_i(x)$ is used mainly because $\lbrace\lambda_i^{s/2}e_i(x)\rbrace$ forms an orthonormal basis of the interpolation space. Thank you for pointing this out. We will explain this in more detail.
2. Under mirror initialization, for $f^*\in[\mathcal{H}^{NT}]^s$ ($s \geq 1$), the generalization error of network is $n^{-\frac{s(d+1)}{s(d+1)+d}}$, as shown in [li2023statistical]. We appreciate your reminder and we will add this point to the discussion of Theorem 4.3 for the convenience of comparison by readers.
3. Thank you for the suggestion on the notation in the experimental part. We estimated the smoothness of the real datasets with respect to the NTK of a single-layer fully connected network. We will add this point to our paper.
4. Regarding the estimation of the decay rate in the log-log plot, there may be some miscommunication. As a function of $n$, e.g., in Fig 1, we denote the experimentally measured generalization error by $e_i(n)$, where $n$ is the sample size and $i$ indexes the experiment. Then $e_i(n)$ is approximately of the form $e_i(n) \propto n^{-\alpha}Z_{n,i}$, where $Z_{n,i}$ denotes a random variable taking values around 1 and $\alpha$ is the decay rate to be estimated. In this case, a linear fit after a log transformation is not a bad choice. Clauset et al. studied estimating $a$ from i.i.d. samples $x_{i} \sim p(x) (\propto x^{-a})$, which is a slightly different problem from ours.
5. Thanks again for your careful reading. We apologize for the annoying typos. We have read through the paper carefully and will make several corrections in the revision.
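The log-log fitting procedure described in point 4 can be sketched as follows (illustrative code, not the paper's; the synthetic errors have a known $\alpha = 0.5$ so the recovered slope can be sanity-checked, and all names and values are ours):

```python
# Illustrative sketch of the log-log fitting step: ordinary least squares on
# (log n, log e(n)).  Synthetic errors e(n) = n^{-alpha} * Z_n with log-normal
# multiplicative noise Z_n around 1; all values are toy choices.
import math, random

def fit_decay_rate(ns, errs):
    xs = [math.log(v) for v in ns]
    ys = [math.log(v) for v in errs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return -slope                      # e(n) ~ n^{-alpha}  =>  slope = -alpha

random.seed(0)
alpha = 0.5
ns = [200 * k for k in range(1, 11)]
errs = [v ** (-alpha) * math.exp(random.gauss(0.0, 0.05)) for v in ns]
print(f"estimated alpha = {fit_decay_rate(ns, errs):.3f}")
```

With multiplicative noise of this form, the log transformation makes the noise additive, which is exactly the setting where a linear least-squares fit is appropriate.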
### Impact of Network Architecture on Our Results
Our results indeed focus on fully connected networks. Thanks for pointing this out. For other architectures, such as ResNets or CNNs, if we know the properties of the corresponding NTK, RFK, and initial output function, it is also reasonable to apply our framework. This requires considerable preliminary work, but in the future we are willing to explore whether our results hold under different network settings. Again, thank you for highlighting this point.
---
Rebuttal 3:
Title: References of Rebuttal
Comment: ### References
[haas2023mind] Haas, M., et al. (2023). “Mind the spikes: Benign overfitting of kernels and neural networks in fixed dimension”.
[lai2023generalization] Lai, J., et al. (2023). “Generalization ability of wide neural networks on \(\mathbb{R}\)”.
[lai2023generalization2] Lai, J., et al. (2023). “Generalization ability of wide residual networks”.
[li2024saturation] Li, Y., Zhang, H., and Lin, Q. (2024). “On the saturation effect of kernel ridge regression”.
[li2023statistical] Li, Y., et al. (2024). “On the Eigenvalue Decay Rates of a Class of Neural-Network Related Kernel Functions Defined on General Domains”.
---
Rebuttal 4:
Comment: Thanks for your responses. I think it would be good if you used this time to clarify things in the final paper. I suggest you strengthen this context in the intro when you discuss related work and how your results fit in. I hope you plan to do this. I'll keep my score the same.
---
Rebuttal Comment 4.1:
Comment: Thank you very much for your feedback and valuable suggestions on my rebuttal. I will take this opportunity to clarify the relevant issues in the final paper. Specifically, I will focus on strengthening the discussion of related work on mirror initialization in the introduction and better integrating my findings with existing research. I truly appreciate your guidance and will make the necessary improvements in my revisions. Thank you again for your review and support. | null | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
On the Power of Decision Trees in Auto-Regressive Language Modeling | Accept (poster) | Summary: This paper contributes both theoretical and empirical evidence that autoregressive decision trees (ARDTs) are an expressive machine learning model. On the theoretical side, the authors show that a family of ARDTs with concatenated inputs can, in theory, efficiently (usually polynomial in the size of the input sequence) simulate automata, Turing machines with fixed memory and runtime, and $k$-sparse circuits. The appendix additionally proves a result of simulating $k$-Juntas. Moreover, they do provide a simple example where traditional decision trees would require exponentially more decision nodes compared to ARDTs, showing the improved expressivity of ARDTs. On the empirical side, the authors apply ARDTs on two tasks: learning to generate short stories and learning to solve a set of reasoning tasks, illustrating that ARDTs can work in these settings with only limited training data and a small number of parameters.
Strengths: 1. While autoregressive decision trees are not new, the proven theoretical results are both novel and enlightening. I am also not aware of any such results holding for regular decision trees, increasing the potential impact of the provided results, if correct.
2. Most of the paper is well-written and easy to follow. The necessary preliminaries are explained without being too verbose, with only a couple of exceptions. In general, presentation is good.
3. Applying ARDTs to language modelling might make sense, but is non-trivial. Hence, I do acknowledge that getting ARDTs to work properly with language input might not have been easy. The results on the Big-Bench-Hard are particularly noteworthy, showing the power of neurosymbolic AI methods [1, 2].
[1] Garcez, A. D. A., & Lamb, L. C. (2023). Neurosymbolic AI: The 3 rd wave. Artificial Intelligence Review, 56(11), 12387-12406.
[2] Marra, G., Dumančić, S., Manhaeve, R., & De Raedt, L. (2024). From statistical relational to neurosymbolic artificial intelligence: A survey. Artificial Intelligence, 104062.
Weaknesses: 1. While the proof sketch of Theorems 3 and the full proof of Theorem 4 are very clear and well written, the remainder of the proofs are not as clear and can be hard to follow. Moreover, I have doubts about their correctness:
+ In the proof of Theorem 3 (Appendix A), I fail to see how Lemma 10 correctly applies. The assumption being that $\delta$ is a 2-Junta and a function from $\mathbb{D}^L \rightarrow \mathbb{D}$. However, how is $\delta$ a function from $\mathbb{D}^L \rightarrow \mathbb{D}$? $L$ is not equal to 2 it seems since $n$ can be larger than 2. Additionally, the function $\Psi: \mathbb{D} \rightarrow \{0, 1\}^d$ takes $\boldsymbol{x} \in \mathbb{D}^L$ as input. How is that possible? Do you apply $\Psi$ componentwise? And how does that lead to $(x_L, x_{L - n})$ in line 461?
+ In the same proof, the proof by induction starts from the base case at iteration 1 and says that
$$
\mathcal{T}_1^{AR}(\boldsymbol{x}) = \delta(x_L, PAD) = q_1
$$
but from the definition of the extended $\delta$ transitions, we have that $ \delta(x_L, PAD) = q_0$ and not $q_1$. Hence this would not prove the induction basis. I guess this can be easily solved by correctly defining $ \delta(x_L, PAD) = q_1$?
+ Proof of Theorem 6: First it is defined how a state of the TM is encoded as a string $\boldsymbol{s} \in \mathbb{D}^{M + 1}$, but then the function $g$, which takes those strings as input, is a function from $\mathbb{D}^4 \rightarrow \mathbb{D}^4$. What is supposed to be the input to $g$? Should it just be any sequence $\boldsymbol{x}$ of length 4?
+ Same proof of Theorem 6: The definition of $f$ also seems flawed. Look for example at $f_{M + 1}(\boldsymbol{s}) = g(s_{M}, s_{M + 1}, s_{M + 2}, s_{M + 3})$. The vector $\boldsymbol{s}$ only has $M + 1$ components, do you pad the non-existent ones?
2. The experiment described in Section 4.2 about generating stories seems to have some flaws:
+ It is mentioned that Table 1 contains average scores across 100 stories, yet this does not seem to be the case. How can every average be an exact integer? Is the variance of the scores over all stories equal to 0? The general lack of variability metrics such as standard deviation or quantiles should also be addressed. Moreover, if the variability is indeed 0, can you elaborate on why this can be the case? Does this mean the GPT-4 scores are not good metrics?
+ I do have severe concerns with using GPT-4 as an evaluator. It is well-known that LLMs are **not** consistent in their predictions, not only because of the inherent uncertainty (due to sampling) of the inference process, but also due to changing inputs. Did you do a control test to see if GPT-4 indeed scored the same texts consistently? Do you use sampling during inference? The fact that a previously published work used a similar evaluation protocol does not guarantee it is a good protocol.
+ Separate from the utilised evaluation protocol, I do not agree ARDTs perform on par with GPT-4 or even bigger transformers (line 306), seeing how especially their creativity and plot scores are quite a bit lower.
3. I like the experiment in Section 4.3, yet some of the choices made do not instil confidence and seem doubtful. In particular,
+ why did you only select 4 of the 23 reasoning tasks? Did you try any of the other tasks? Do the chosen tasks work particularly well with decision trees?
+ Why is SOTA lower than the provided baselines? Shouldn't the baselines be lower or equal than SOTA by definition?
+ Apart from chain-of-thought prompting of the LLM baselines, did you also provide a few examples to facilitate few-shot inference? Given that your models (GPT2 + ARDTs) are specifically trained on a small dataset, shouldn't the LLM baselines be given equal playing terms by giving them at least part of the training data as examples in their prompts? Not doing so does give an unfair advantage to your method.
The paper has promise, but there are too many uncertainties right now for me to be able to recommend any form of acceptance. However, I am open to significantly change my score if the authors can address my concerns and questions.
Technical Quality: 3
Clarity: 3
Questions for Authors: I listed my questions together with their corresponding weaknesses above. Given the questions concern both theory and experiments, any change in score from my side does require a constructive answer or explanation on all above questions.
If any of my questions are unclear, please let me know and I will gladly try to make them clearer.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors do mention that the paper is only the beginning of a larger analysis of tree-shaped models added on top of language models/neural networks. Additionally, they do not claim ARDTs are the solution to anything, rather that they can be an interesting class of models that can work rather well. In that sense, I believe the authors address the limitations of their work to a sufficient degree.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for their thorough and constructive review of our paper. We appreciate your acknowledgment of the novel and enlightening theoretical analysis, clear writing, and the non-trivial application of ARDTs to language modeling.
Q1: Confusions regarding the proof
A1: Thank you for pointing out the clarifications required in our proofs. While we assure you that the proofs are correct, they can, as you pointed out, definitely be better explained and clarified.
- As you note, $\delta$ is not a 2-Junta but rather the function that we use to define the Junta. To be more accurate, let $f : D^L \to D$ s.t. $f(x) = \delta(x_L, x_{L-n})$. Then, $f$ is a 2-Junta, and from Lemma 10 there exists a tree satisfying $T(\Psi(x)) = f(x)$. Indeed, $\Psi$ is applied componentwise.
- There is some abuse of notation in our definitions, where the state $q_i$ defines the state of the Automaton at iteration $i$, while we also use $q_0$ to define the initial state of the Automaton, so in fact it is always the case that the first state is $q_0$, i.e. $q_1 = q_0$. We realize that this notation is confusing, and will fix this by using a different notation to indicate the initial state.
- You are correct, there is a typo and the input to $g$ should be $x$ (of length 4), and not $s$. Thanks for the catch.
- Yes, values that “overflow” can be considered as <PAD> tokens. These do not affect the output of the function. We will clarify this in the paper.
Q2.1 Why is every average of the 100 different stories an exact integer? Is the variance of the scores over all stories equal to 0?
A2.1: The variance of the score is not zero. Following the Tiny-stories [1] paper, we rounded the results to integers. In the revision, we adjusted the results to two decimal places. The results are shown in the **Rebuttal Table 4**.
Q2.2: Did you do a control test to see if GPT-4 indeed scored the same texts consistently?
A2.2: We found that GPT-4's evaluation scores are not entirely consistent; however, the variance (less than 0.5) is considered acceptable (see **Rebuttal Table 5**). Actually, to enhance the consistency of GPT-4's scores, we followed [1] in setting up more precise and informative prompts (see **Lines 534-554 in Appendix**).
As shown in **Rebuttal Table 4**, we have updated the scores in **Table 1** by averaging the results of ten evaluations for each of the 100 stories. Each story was scored using the same prompt provided to GPT-4 ten times. Blue indicates scores higher than TinyStories-1M (a transformer-based model).
Q2.3: Do you use sampling during inference?
A2.3: No.
Q2.4: I do not agree ARDTs perform on par with GPT-4 or even bigger transformers (line 306)
A2.4: Good point, we will make line 306 more accurate: ARDTs (\~0.3M parameters) trained on the TinyStories dataset are on par with GPT-4 (\~1800B) or even bigger transformers trained on huge internet datasets regarding creativity and plot, but remain inferior in terms of grammar and consistency.
Q3.1: Why did you only select 4 of the 23 reasoning tasks? Did you try any of the other tasks? Do the chosen tasks work particularly well with decision trees?
A3.1: We just chose 4 representative tasks. The results of all 23 reasoning tasks are shown in **Rebuttal Table 6**. The results demonstrate that ARDTs possess good reasoning capabilities.
Q3.2: Why is SOTA lower than the provided baselines? Shouldn't the baselines be lower than or equal to SOTA by definition?
A3.2: We apologize for any confusion caused. The term ‘SOTA’ here refers to the performance benchmarks borrowed from **Table 3 in the BIG-Bench-Hard [2] paper**, which represent the state-of-the-art performance in the BIG-Bench paper [3]. We will change 'SOTA' to 'SOTA Methods in BIG-Bench' in the revision.
Q3.3: Apart from chain-of-thought prompting of the LLM baselines, did you also provide a few examples to facilitate few-shot inference? Given that your models (GPT2 + ARDTs) are specifically trained on a small dataset, shouldn't the LLM baselines be given equal playing terms by giving them at least part of the training data as examples in their prompts? Not doing so does give an unfair advantage to your method.
A3.3: We will run this experiment with a few-shot examples and report the results in the final version.
We hope that this clarifies any misunderstandings and we encourage the reviewer to increase their score if we have resolved their concerns or to let us know otherwise so we may try to clear up any remaining confusion.
[1] Eldan, Ronen, and Yuanzhi Li. "Tinystories: How small can language models be and still speak coherent english?." arXiv preprint arXiv:2305.07759 (2023).
[2] Suzgun, Mirac, et al. "Challenging big-bench tasks and whether chain-of-thought can solve them." arXiv preprint arXiv:2210.09261 (2022).
[3] Srivastava, Aarohi, et al. "Beyond the imitation game: Quantifying and extrapolating the capabilities of language models." arXiv preprint arXiv:2206.04615 (2022).
---
Rebuttal Comment 1.1:
Title: Acknowledgement of Author Rebuttal
Comment: Thank you for taking the time to clarify my concerns. In particular, my concerns about the validity of the theoretical results are now addressed, though I do hope the authors take the time to improve the writing of the proofs. I also appreciate the proposed revisions and more detailed insights when using GPT-4 as an evaluator. Please do add these insights to the main paper where possible or refer to them in the appendix of the paper as they validate your evaluation protocol. Additionally, providing the results for all 23 reasoning tasks from the BIG-Bench-Hard benchmark is greatly appreciated as well, and further eliminates some of my empirical concerns. Please do add these more complete results to the appendix of the paper as they again only strengthen your point.
I will increase my score to "5: borderline accept" for now. If the authors can provide the numbers for the BIG-Bench-HARD reasoning tasks where GPT is given a few examples and the numbers are still somewhat favourable for ARDTs, I am willing to further increase to "6: weak accept". Apart from that, I have one other question that could, together with the results of the few-shot experiment, allow me to increase my score to a full accept.
Why did you limit the number of "parameters" to 0.3M? Did you run into computational bottlenecks when using more than 10 trees (**Rebuttal Table 3**)? If so, this would be an additional limitation worth mentioning, as scaling ARDTs could be an important component of even better future results.
---
Reply to Comment 1.1.1:
Title: Response to comments by Reviewer qHp8
Comment: We would like to thank you once again for your very helpful and constructive review comments.
A1: Few-shot GPT-4 performance: we present the results of few-shot inference. Due to time constraints, we were unable to complete experiments for all 23 reasoning tasks, so we arbitrarily selected 2 tasks instead. As shown in the table, few-shot inference offers significantly less benefit for reasoning tasks compared to CoT, while our ARDTs have demonstrated clear advantages.
| | | GPT-XL | | Ours | |
|-------------|:---:|:---:|:---:|:---:|:---:|
| | 1 shot | 2 shots | 4 shots | Lin | GPT |
| Navigation | 47.1 | 48.6 | 50.5 | 55.4 | 69.2 |
| Web-of-lies | 50.0 | 52.3 | 51.8 | 53.2 | 71.1 |
A2: Limiting the parameter count: we want to emphasize that the goal of this paper is to perform an initial study demonstrating the capabilities of decision trees for certain language modeling tasks, as they remained largely unexplored in this context until now. While we did not specifically limit the parameter count of the decision trees, and do not see computational barriers in achieving a modest increase in the number of trees, we leave to future work a thorough exploration of how decision trees can scale to match transformers that are several orders of magnitude larger. Such a study may require scaling the depth, ensemble size, and input and output dimensions of the trees, and may be much more hardware intensive, requiring additional engineering and optimization novelties that are beyond the scope of this paper.
We hope this addresses any concerns the reviewer may have. We encourage the reviewer to increase their score if their concerns have been resolved, or to let us know if there are still issues we need to clarify. | Summary: This paper explores the idea of using AutoRegressive Decision Trees for language modelling. From a theoretical perspective, ARDTs are shown to model systems such as finite automata and more generally Turing machines. From an experimental point of view, ARDTs are shown to be able to generate grammatically correct sentences, with performance similar to small transformers.
Strengths: + Novelty in the idea of using ARDTs for language modelling
+ Nice compromise between performance and interpretability
+ Interesting theoretical analysis of modeling different systems with ARDTs
Weaknesses: - Experimental evaluation not aligned with the theoretical framework, as it uses tree ensembles
- Interpretability is reduced when switching from single decision trees to ensembles
Technical Quality: 3
Clarity: 3
Questions for Authors: * The difference between the theoretical framework and the experimental setting described in Comment 8, page 7, is due to the complexity of implementing the theoretical solution, or to other issues?
Confidence: 2
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: * While the paper introduces ARDTs by stressing the point that they are an interpretable model, in the experimental evaluation suddenly only tree ensembles are used. Is it because performance is otherwise much lower? Tree ensembles clearly do not have the same degree of interpretability as single decision trees.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We first want to thank the reviewer for their thorough review and positive assessment of our paper. Specifically, we appreciate your acknowledgment of the paper's novel ideas, good balance between performance and interpretability, and interesting theoretical analysis.
Q1: The difference between the theoretical framework and the experimental setting described in Comment 8, page 7, is due to the complexity of implementing the theoretical solution, or to other issues?
A1: The setting we study in the theory section is not efficient to implement in practice, as it requires maintaining a decision tree with a large input size (the sliding window) and potentially generating many output tokens. As with many theoretical works, we study a simplified setting that is simpler to analyze mathematically, and while this setting is not efficient to implement in practice, it still captures the key properties of the experimental setting.
Q2: While the paper introduces ARDTs by stressing the point that they are an interpretable model, in the experimental evaluation suddenly only tree ensembles are used. Is it because performance is otherwise much lower? Tree ensembles clearly do not have the same degree of interpretability as single decision trees.
A2: The focus of our paper is to demonstrate the computational capabilities of ARDTs through theoretical and empirical evidence; interpretability (as detailed in **Appendix C** in our paper) is not our main emphasis. As shown in **Rebuttal Table 3**, the performance of a single tree is indeed lower than that of tree ensembles. However, it can still be utilized for interpretability analysis.
We hope that this clarifies any misunderstandings and we encourage the reviewer to increase their score if we have resolved their concerns or to let us know otherwise so we may try to clear up any remaining confusion.
---
Rebuttal Comment 1.1:
Title: Rebuttal
Comment: I thank the authors for the replies and for the additional results presented in the rebuttal. I have raised my score to Weak Accept. | Summary: * This work studies theoretical and practical applications of auto-regressive decision trees (ARDT) in language generation and reasoning tasks.
* Through theoretical analysis, the authors show that ARDTs can learn more sophisticated functions than previously known, such as automata, Turing machines, and sparse circuits.
* Using ARDT with a transformer (GPT-2), they achieved performance comparable to SoTA models (InstructGPT, Codex, and PaLM 540B).
Strengths: * The problem definition, theoretical analysis, model description, and experimental setup are presented clearly, making the paper accessible and informative.
* With the interpretability of decision trees, we can better understand the language generation process.
Weaknesses: 1. No mention of the limitations of ARDT for language modeling.
2. Performance changes with the text length. What will the performance be in generating long texts?
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How does the depth of trees affect performance and inference speed?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thorough review and very positive assessment of our paper. Specifically, we appreciate your acknowledgment of the paper's clarity, informativeness, and its contribution to model interpretability.
Q1: Performance changes with the text length. What will the performance be in generating long texts?
A1: In our paper, we reported the performance of extending a story by 20 words. We have expanded this to 50 words and compared the trend in performance changes, as shown in the **Rebuttal Table 1**.
Although Auto-Regressive Decision Trees (ARDTs) were not specifically designed to handle lengthy texts—a task left for future work—their performance remains robust even when expanded to 50 words. Both the transformer-based model and our method exhibit a decreasing trend as the word count increases, with ARDTs experiencing a slightly greater decline (see **Rebuttal Table 1**).
Q2: How does the depth of trees affect performance and inference speed?
A2: In **Rebuttal Table 2**, we show the impact of tree depth on performance and inference speed. Inference speed is measured during testing by recording the start and end times of the prediction using the `time.time()` function and then calculating the difference to determine the inference time. As **Rebuttal Table 2** shows, the depth of the tree is positively correlated with performance and inference speed.
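The timing procedure described in A2 can be sketched as follows. This is only an illustration of the stated measurement method, not the authors' code; the `predict` function is a hypothetical stand-in for the actual ARDT ensemble call.

```python
import time

def predict(tree_ensemble, inputs):
    # Hypothetical stand-in for the actual ARDT ensemble prediction.
    return [sum(inputs) for _ in inputs]

def measure_inference_time(tree_ensemble, inputs):
    """Wall-clock inference time, as described in the rebuttal:
    record start and end times around the prediction call and
    take the difference."""
    start = time.time()
    predictions = predict(tree_ensemble, inputs)
    elapsed = time.time() - start
    return predictions, elapsed

preds, elapsed = measure_inference_time(None, [1.0, 2.0, 3.0])
assert elapsed >= 0.0
```

For very short predictions, `time.perf_counter()` offers higher resolution than `time.time()`, though the difference is immaterial for the comparative trends reported in the table.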
We hope that this clarifies any misunderstandings and we encourage the reviewer to increase their score if we have resolved their concerns or to let us know otherwise so we may try to clear up any remaining confusion.
---
Rebuttal Comment 1.1:
Comment: Thanks, Authors, for the detailed response to my queries. My concerns are addressed. | null | null | Rebuttal 1:
Rebuttal: We thank all reviewers for their comprehensive and constructive feedback on our submission. We are pleased that our work is recognized for its novelty (Reviewers o5To, 97Y1, and qHp8), its novel and enlightening theoretical analysis (Reviewers o5To and 97Y1), and a nice balance between performance and interpretability (Reviewers o5To and 97Y1).
We have prepared an extensive, point-by-point response for each reviewer, outlining our plans to address their concerns and suggestions for additional analyses to enhance the manuscript. **All new tables can be found in the attached one-page PDF file**. We believe that our response will address the reviewers' concerns and allow us to promptly resolve any remaining minor issues.
Once again, we would like to thank all reviewers for their comments and we are looking forward to the discussion period.
Pdf: /pdf/6c1f084b8253a882a589d4e3ac08ca68aa6ff265.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Preference Learning of Latent Decision Utilities with a Human-like Model of Preferential Choice | Accept (poster) | Summary: This paper uses a state-of-the art-models model from psychology for preference learning. To make learning tractable, the authors use variational inference to approximate unknown distributions (and expectations). The model is shown to outperform existing approaches, suggesting that it better predicts human preferences and captures contextual choice effects that are commonly observed in human behaviour.
Strengths: - Preference learning is a very relevant topic. This work borrows a state-of-the-art model cognitive model from psychology (Howes et al., 2016) and aims to make it computationally tractable to be applied in the context of preference learning.
- The paper is well-motivated and clearly written. It effectively balances the mathematical theory with intuitive explanations.
- I find this work to be a nice example of cross-disciplinary ML project: model from psychology, use of variational methods to make it computationally feasible, diverse set of experiments (e.g. policy making, hotel/property rankings)
- Thorough empirical evaluation: section 4.1 ensures that the learnt surrogate is a good approximation (when compared to MCMC); then in the following subsections, CRCS is evaluated on a diverse set of datasets (as already mentioned in the previous bullet), as well as simulated case studies. K-fold cross validation is performed, and Wilcoxon rank test is done to check for statistical significance.
Weaknesses: No major weaknesses. Some minor suggestions:
- Line 56 and the whole paragraph after: the meaning of CRCS acronym is not introduced until line 125.
- line 153: when reading this paragraph for the first time, I found it strange that the parameters $w$ are observed (known). It then became clear that they are observed only by the *user* that is being modelled.
- a graphical model describing all the variables in the model can be useful; it can help visualise the differences between the modelled user and the outside observer (which variables are observed or unobserved). Clearly indicating what is given and what is learnt - e.g. the prior $p(w)$ and the marginal $p(x)$ are both assumed given?
- line 221: typo though -> thought
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the number of parameters for each of the models? It will be good to have these reported in Table 1.
2. I might have missed it, but I didn't see the network architectures for $\hat{u}$ and $\hat{q}$?
3. Have you thought applying the model to LLMs for preference learning?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Some of the limitations of this work are discussed in Section 5. Societal impact statement is also included in Section 5.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review of our paper and thoughtful feedback. We address your main questions and feedback below:
>I find this work to be a nice example of cross-disciplinary ML project: model from psychology, use of variational methods to make it computationally feasible, diverse set of experiments.
Thank you for the positive feedback!
>What is the number of parameters for each of the models? [what are] the network architectures for $\hat{u}$ and $\hat{q}$.
We will include details regarding network architectures and sizes to the appendix. We have provided a summary of these details in the general response. In the PDF attached to the general response, you can find the number of parameters for each model in Table 3, and a visualization of the network architecture in Figure 1a.
> Have you thought [of] applying the model to LLMs for preference learning?
This is a very good point. LLMs have had a tremendous impact on preference learning research and properly aligning them to human preferences is an important societal problem. Our results suggest that modeling context effects is important when learning from human preferences, and so we do consider applying our model to LLMs important future work. Right now our model assumes that options are represented by features. Thus, given some featurization of, say, LLM text outputs for a given prompt, our model could be directly applied. However, this would ignore the reading and interpreting of these texts that human evaluators have to do. We think additionally modeling those tasks in the choice model, by extending the foundation provided by the current model, could be even more transformational. Based on what we have written here, we will add a more detailed discussion on the implications of our work to LLM fine-tuning to the related work section.
> the meaning of CRCS acronym is not introduced until line 125.
Thank you for pointing this out. We will introduce the acronym earlier.
>a graphical model describing all the variables in the model can be useful
Agreed! We have added two graphical models in the PDF attached to the global response. One shows the problem from the point of view of the user making a choice (Figure 1b in the attached PDF) and the other shows it from the point of view of an outside observer trying to infer the user’s utility function (Figure 1c in the attached PDF). We will add these figures to the appendix of the paper.
>line 153: when reading this paragraph for the first time, I found it strange that the parameters are observed (known). It then became clear that they are observed only by the user that is being modelled.
Thank you for pointing this out. As another reviewer also got confused by this, we will include the graphical models and also update the writing to clarify that these observations indeed happen in the user's head and are not observable from the outside.
---
Rebuttal 2:
Title: Please respond to the authors
Comment: Hello reviewer cVZM: The authors have responded to your comments. I would expect you to respond in kind. | Summary: The paper introduces two models for learning preferences from human choice behaviors: the Computationally Rational Choice Surrogate (CRCS) and a variant (LC-CRCS) that considers the context effect of the choice set. Both models are based on a state-of-the-art cognitive model of human decision-making. The paper replaces intractable computations in the original model with surrogates to enable feasible estimations and demonstrates the numerical performance of these models through several experiments.
Strengths: The paper is well-written and presents a new model, which is based on Howes et al. (2016), for understanding human choice behaviors. Compared to the original model, the new model is more computationally tractable. Also it uses several real-world datasets to test the models.
Andrew Howes, Paul A Warren, George Farmer, Wael El-Deredy, and Richard L Lewis. Why contextual preference reversals maximize expected value. Psychological review, 123(4):368, 2016.
Weaknesses: 1. Limited Applicability: The models require data that includes the orderings of each feature (or their generation processes) and the true utilities (or their generation processes). Such data is not commonly available. Additionally, prior knowledge of several parameters must be estimated before training the models. For instance, each feature needs a distribution for the corresponding $\tau$, which becomes challenging when the number of features is large.
2. Experimental Focus: The experimental results primarily focus on negative log-likelihoods (NLLs). Although the proposed methods outperform benchmarks in terms of NLLs, it is unclear if this improvement is practically significant. Comparing other metrics (e.g., accuracy) would better demonstrate the contribution of this work. Moreover, the paper does not provide standard deviations for the results to indicate robustness, even though the results are statistically significant.
3. Training Robustness: Line 561 mentions that "As CRCS and LC-CRCS have non-convex likelihoods, we performed gradient descent multiple times, starting from multiple starting points, and chose the parameters that achieved the best train set log-likelihood." It would be helpful to show the mean and standard deviations over several random seeds instead of just the best result to demonstrate training robustness. Training on large datasets can be time-consuming, making multiple training runs impractical.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. How can the true utility or its generation process be observed or estimated in real-world scenarios?
2. How should the prior distributions in Table 3 be selected or estimated? Are there general methods for estimating these prior distributions from a given dataset?
3. How do the models perform on other metrics (e.g., accuracy)?
4. What are the architectures of neural networks $\hat{q}$ and $\hat{u}$?
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your thoughtful review. We address specific questions below:
>Limited Applicability: The models require data that includes the orderings of each feature […] and the true utilities
This is incorrect. The data we used was straightforward choice data consisting of the options presented to the user and the choices they made. While our cognitive model does assume that choices are informed by the ordering and utility observations you mention, these observations happen "inside the choice maker's head" and are thus latent to an outside observer (see eq. (3)). We will update the paper to be more explicit about this. We have also included graphical models (Figure 1b and 1c in the rebuttal PDF document attached to the general comment) which clarify which variables are observed and which are not to both the choice maker and an outside observer.
>The experimental results primarily focus on negative log-likelihoods (NLLs). […] Comparing other metrics (e.g., accuracy) would better demonstrate the contribution of this work.
The evaluation of our model and the baselines on choice data sets follows prior work [18]. All models we consider predict a probability for each choice (they are probabilistic classifiers), rather than a single deterministic choice. This is important as human choices do not appear to be deterministic. Thus, our main concern is that these models should be well-calibrated. This is not something we can check with a measure like accuracy. Rather, it requires a proper scoring rule, like the NLL we used. In our other experiments, which were more representative of a typical preference learning setting, we did use other metrics to show significant practical improvement. For example, we measured how consistent a ranking implied by the inferred utility is with rankings collected from users in section 4.2.1.
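The distinction between NLL and accuracy for probabilistic choice models can be made concrete with a minimal sketch (the numbers are hypothetical, not from the paper): two models can achieve identical accuracy while differing sharply in NLL, because NLL rewards calibrated choice probabilities.

```python
import math

def nll(probs, choices):
    """Average negative log-likelihood of the observed choices
    under the predicted choice probabilities."""
    return -sum(math.log(p[c]) for p, c in zip(probs, choices)) / len(choices)

def accuracy(probs, choices):
    """Fraction of trials where the most probable option was the one chosen."""
    return sum(max(range(len(p)), key=p.__getitem__) == c
               for p, c in zip(probs, choices)) / len(choices)

# Two hypothetical models predicting probabilities over 2 options on 4 trials.
choices = [0, 0, 1, 1]
confident = [[0.9, 0.1], [0.9, 0.1], [0.1, 0.9], [0.1, 0.9]]
hesitant  = [[0.6, 0.4], [0.6, 0.4], [0.4, 0.6], [0.4, 0.6]]

# Identical accuracy, but the better-calibrated model has lower NLL.
assert accuracy(confident, choices) == accuracy(hesitant, choices) == 1.0
assert nll(confident, choices) < nll(hesitant, choices)
```

This is why a proper scoring rule such as NLL, rather than accuracy, is the appropriate primary metric when the quantity of interest is the predicted choice distribution itself.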
>the paper does not provide standard deviations for the results to indicate robustness, even though the results are statistically significant.
We agree that this would be a good supplement to the statistical significance tests already carried out in the paper. We will add to the appendix a table listing the mean and standard deviations of the averaged negative log-likelihoods obtained over the test folds in the choice data experiments (section 4.2). A separate set of utility and choice model parameters was inferred for each test fold, so this should give readers a good idea of the robustness of the models. We have added this table to the PDF attached to the global response (Table 1).
>[regarding L561: doing gradient descent multiple times] It would be helpful to show the mean and standard deviations over several random seeds instead of just the best result to demonstrate training robustness.
We should stress that we used gradient descent multiple times to infer the latent choice model and utility parameters from human data, not to train the CRCS model itself. Even on the largest choice dataset, each inference run took only a few minutes. As requested, we will add a table to the appendices showing mean and standard deviations of the average NLL obtained over several independent gradient descent runs. We have added this table to the PDF attached to the global response (Table 2).
>How can the true utility or its generation process be observed or estimated in real-world scenarios?
This question gets to the heart of what makes human modeling difficult. As we cannot look inside people's heads, it is impossible to observe humans' true utilities or their true choice making process. Modeling must follow the standard scientific process of proposing a new model based on some theory or assumptions of how this process works, and then evaluating that model against competing baselines. A choice model (including the underlying utility function) can be evaluated in two ways: i) we can check if the model correctly predicts people's choices (experiments in section 4.2 and 4.3) and ii) we can check if the inferred utility is consistent with other observations that are derived from that same utility function (e.g., the evaluation of the implied rankings we performed in section 4.2.1).
>How should the prior distributions in Table 3 be selected or estimated? Are there general methods for estimating these prior distributions from a given dataset?
The priors for the choice model and utility parameters have very little effect on the model itself. However, as they are used when training the model, it is important that they cover any parameter values that we may wish to use in practice. We followed standard Bayesian practice for selecting them. As we had little prior knowledge of which parameter values were most likely in practice (beyond some insights from Howes et al. [17] regarding which parameter values actually affected the model’s output), we used flat priors that were as broad as sensible (see last paragraph of appendix A.1).
>What are the architectures of neural networks $\hat{q}$ and $\hat{u}$.
We will add these details to the appendix. We have provided an overview of the architecture of both networks in the general response and a visualization in Figure 1a in the accompanying PDF.
---
Rebuttal Comment 1.1:
Comment: Thank the authors for the response. I will maintain my original score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your response. We thought that we had fully answered your questions. To help us understand your decision, could you please point us to which of our responses failed to address your original comments? Most importantly, in answer to your first point regarding limited applicability, our model does not require observed feature orderings or true utilities. In answer to your second point, choice predictions are probabilistic requiring NLLs, not accuracy, to fully assess.
---
Rebuttal 2:
Title: Please respond to the authors
Comment: Hello reviewer hXMa: The authors have responded to your comments. I would expect you to respond in kind. | Summary: In this paper, the authors present a new model which approximates an existing intractable Bayesian model for preference learning. The paper describes a generative model for choice, and shows how inference can be approximated using two different neural networks. The proposed model is then augmented using a cross-feature correction inspired by another piece of existing work. Results show that the proposed model outperforms the baselines on sensible metrics like negative log likelihood of the test set.
**Conclusion:** Overall, the paper makes a neat proposal and is fairly meticulous in developing the central idea. However, the contribution doesn’t appear substantial and might be of interest to a smaller subset of the community.
Strengths: 1. The paper provides ample background and motivating examples, clearly elucidating the relevance of the problems.
2. The development of the model is systematic, and key steps are formally defined making it easy to follow the progression of the ideas.
3. The model appears more expressive in general since it takes into account different sources of noise, and more independent variables (such as the cross-feature influence or the error tolerance of comparing features).
4. The results seem to be generally promising on both real and synthetic data.
Weaknesses: 1. While it was easy to follow the theory, I found it a little difficult to deeply understand the application section. A little more focus on methodology on at least one experiment might have been more helpful. Specifically, it was a little tricky to cleanly connect the theory to the application for me.
2. The contribution is somewhat incremental, and might be better suited for area-specific workshop. Since the Bayesian model already exists, the main contribution is estimating the two intractable quantities using NNs, if I understand correctly. It’s useful to have such models, but I am not sure if they generate any foundational insights.
3. For synthetic data, if the generative model is closer to the inference model, it’s not very surprising that the latter does better than another model unaware of the generative process. This weakens the evidence on synthetic data a little.
Technical Quality: 3
Clarity: 3
Questions for Authors: NA
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: NA
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for the review and feedback on our paper. We address specific questions below:
>While it was easy to follow the theory, I found it a little difficult to deeply understand the application section. A little more focus on methodology on at least one experiment might have been more helpful. Specifically, it was a little tricky to cleanly connect the theory to the application for me.
Thank you for this feedback. We will update the paper to more explicitly connect the data available in sections 4.2 and 4.3 to the observed and unobserved variables discussed in section 3: In 4.2 we are given a large dataset of tuples $(x^{l},y^{l})$ where each $x^{l}$ is a set of options presented to a user and each corresponding $y^{l}$ is the observed choice. Given this data we now look to infer the unobserved utility parameters $w$ and choice model parameters (e.g. $\varepsilon, \boldsymbol{\tau}, \sigma^2_{calc}$ in the case of CRCS) that best explain this data. The difference in 4.3 is that the dataset is not given all at once; rather, we have access to the option sets $x^{l}$ and can actively select which $y^{l}$ we want to observe.
>The contribution is somewhat incremental [...] Since the Bayesian model already exists, the main contribution is estimating the two intractable quantities using NNs, [...] It’s useful to have such models, but I am not sure if they generate any foundational insights.
We disagree that the contribution is incremental. The foundational insight is that a general theory of human cognition - namely computational rationality - offers a powerful and general inductive bias for machine learning. We study how to mitigate the effect context effects - a human bias widely studied in cognitive science - have on utility inference from preferences. Whereas prior work has attempted to account for these effects by learning them from data without any such inductive biases [18,35], we follow a model-based approach which leverages an existing cognitive model introduced by Howes et al. [17]. This model, which is an instantiation of computational rationality for decision making, is empirically validated to reproduce known context effects. This provides us with strong inductive biases that significantly improve utility function inference and choice prediction compared to prior work, showing that our model-based approach works for the important problem of learning from preferences. But the cognitive science literature suggests that deriving inductive biases from computational rationality can generalize to any setting in which people make decisions [A,B,C]. This is in our view a foundational contribution to both cognitive science and machine learning. We will discuss this contribution more explicitly in the introduction and discussion of the paper.
We also count a number of other tangible contributions. We have introduced a new choice model, CRCS, which improves utility function inference in preference learning compared to prior work. This required making the model introduced by Howes et al. [17] tractable using surrogates. We have also provided further empirical evidence for this specific model. This is significant because SOTA cognitive models such as [17, 38] are seldom tested on large choice datasets, mainly because inference is generally intractable. Lastly, we have shown that there are context effects that the original Howes et al. [17] model does not capture, and have introduced LC-CRCS, which adds a learnable mechanism that can fit to these additional context effects.
[A] Richard L. L., Howes A., and Singh S. "Computational rationality: Linking mechanism and behavior through bounded utility maximization." Topics in cognitive science 6.2 (2014): 279-311.
[B] Lieder F., and Griffiths TL. "Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources." Behavioral and brain sciences 43 (2020): e1.
[C] Oulasvirta A., Jokinen JPP., and Howes A. "Computational rationality as a theory of interaction." Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems. 2022.
>[The paper] might be of interest to a smaller subset of the community [...] might be better suited for area-specific workshop.
We disagree. Learning from preferences is an increasingly important approach in human-in-the-loop machine learning for inferring unknown utility functions. The modeling of those preferences (choices) is therefore relevant to a wide part of the ML community. We see this reflected in the fact that earlier work on choice modeling, which we improve on, has been published at top-level ML conferences such as ICML [35] and KDD [18]. Furthermore, our work fits within a wider body of recent work published at NeurIPS on human behavior modeling [D,E,F,G].
[D] Belousov B., et al. "Catching heuristics are optimal control policies." Advances in neural information processing systems 29 (2016).
[E] Binz M., and Schulz E. "Modeling human exploration through resource-rational reinforcement learning." Advances in neural information processing systems 35 (2022).
[F] Chandra, K., et al. "Inferring the future by imagining the past." Advances in Neural Information Processing Systems 36 (2023).
[G] Teng T., Kevin L., and Hang Z. "Bounded rationality in structured density estimation." Advances in Neural Information Processing Systems 36 (2024).
>For synthetic data, if the generative model is closer to the inference model, it’s not very surprising that the latter does better than another model unaware of the generative process. This weakens the evidence on synthetic data a little.
We do not agree that this weakens the evidence. It is clear that aligning a system to a user’s utility will make the system more useful. However, how to achieve such alignment in practice is not trivial. The simulated use cases we present point to the fact that preference learning with our model is a practical solution for this.
---
Rebuttal Comment 1.1:
Title: Update after authors' rebuttal
Comment: I appreciate the authors taking the time to address my concerns. The clarification around how the theory connects to the experiments is quite helpful. I also concede the point that this could be an apt addition to the main conference given the recent trend of publications the authors pointed out.
However, I do not quite understand the assertion, `We study how to mitigate the effect context effects - a human bias widely studied in cognitive science - have on utility inference from preferences`. My understanding is context effects are widely observed and the proposed model is learning underlying utilities that can explain their emergence. This makes me wonder if I missed something fundamental in the paper. I am raising my score from 4 to 5, but lowering my confidence from 4 to 3.
---
Reply to Comment 1.1.1:
Comment: Thank you for asking, and for the comments.
> I do not quite understand the assertion, `We study how to mitigate the effect context effects - a human bias widely studied in cognitive science - have on utility inference from preferences.` My understanding is context effects are widely observed and the proposed model is learning underlying utilities that can explain their emergence.
Your understanding is correct. Context effects are widely observed and the underlying utilities are a fundamental factor in their emergence. The point we were trying to make is that when inferring these underlying utilities from choices made by humans - where we expect context effects to have affected these choices - it is important that we do so with a model that correctly models context effects. If context effects are not properly modeled, there is a risk that we infer the wrong underlying utility function. The experiments provide evidence for this. The Bradley-Terry model, which does not model any context effects, is in all cases the worst of the choice models considered.
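The independence property behind this point can be sketched numerically. The snippet below is an illustrative sketch (using the Luce/softmax generalization of Bradley-Terry to multi-option choice sets, with toy utility values of our choosing, not the paper's data): because the choice probabilities depend only on each option's own utility, adding a decoy option rescales all probabilities uniformly and can never shift the relative preference between two existing options, so no context effect can emerge.

```python
import numpy as np

def bt_choice_probs(utilities):
    """Softmax (Luce) choice probabilities over a set of option utilities."""
    z = np.exp(utilities - np.max(utilities))  # shift for numerical stability
    return z / z.sum()

# Two options, then the same two options plus a low-utility decoy.
p_pair = bt_choice_probs(np.array([1.0, 0.5]))
p_with_decoy = bt_choice_probs(np.array([1.0, 0.5, 0.1]))

# The odds between option 0 and option 1 are identical in both cases:
# the decoy cannot induce attraction/compromise-style context effects.
ratio_pair = p_pair[0] / p_pair[1]
ratio_decoy = p_with_decoy[0] / p_with_decoy[1]
```

This independence-of-irrelevant-alternatives behaviour is exactly what a model with built-in context effects relaxes.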
We hope this helps to clarify.
---
Rebuttal 2:
Title: Please respond to reviewers
Comment: Hello reviewer rsF2: The authors have responded to your comments. I would expect you to respond in kind. | Summary: The submission proposes an approach to preference learning using a model inspired by findings in cognitive science. Specifically, it uses an amortized inference variant of a previously proposed model to enable tractable inference of preference values, and applies it to a number of case studies, where it is shown to outperform both the classical Bradley-Terry model, and additional more recent baselines.
Strengths: The paper sets up its problem well and provides a suitable review of past work. I particularly appreciated the very crisp distinction between user's and outside observer's model in section 3.2. Performance is competitive relative to the baselines chosen, which include relatively recent methods.
Weaknesses: My primary concerns with the paper are related to the actual technical implementation details, which are not articulated clearly enough, and where they are articulated give me some pause. Specifically:
* If I understand things correctly, the policy network takes in $x, w, \tilde{u}$ and $\tilde{o}$, and a sufficiently flexible surrogate should be able to learn arbitrary functions of $x$ and $w$ already. Is this correct? If so, why is the explicit feature engineering needed? The paper also discusses later "built in" context effects in CRCS and "learned" ones in LC-CRCS -- what is meant by this?
* This also makes me wonder: what's the network architecture used and other training hyperparameters (optimizer, learning rates, etc)? In general I would expect a sufficiently flexible surrogate to also achieve better than ~92% agreement with the original model. I'm also puzzled about the claim of insufficient option data to claim the utility network -- shouldn't it be possible to simulate arbitrary amounts of observations in this case such that the network fully spans the range of possible outcomes and enabling amortized inference? I can see why sampling only from the training data would create this restriction, but I don't understand why this choice was made.
Technical Quality: 2
Clarity: 3
Questions for Authors: * Reporting summed rather than averaged NLLs (with standard errors) seems like an odd choice -- why was this done?
* Can the authors comment about the choice of Wilcoxon signed-rank test for some hypothesis tests and t-tests for others?
* Why are baselines not present in the regret panels of figure 1?
Additional Notes:
* Figure 1 tick labels should be larger.
* L53 maybe "computational rationality analysis"?
* L66 it's -> its
* The notation in section 3.3 is a bit confusing, in the following way: $u$ is a function of $x$, but the notation sometimes uses $u$ and sometimes $x$ (e.g. in the expectations in the utility and policy loss, or how $\tilde{u}$ is conditioned on $u$ but $\tilde{o}$ is conditioned on $x$). I think this might be because of a distinction between the observer's and user's likelihoods, but this isn't quite spelled out here.
* I believe that the overall setup here is a bilevel optimization problem (because L_util is optimized w.r.t. w and L_pol w.r.t. x), and I'm not sure how incomplete optimization or other issues in the utility network impact the policy network (especially considering that the accuracy of the utility network seems not ideal).
* Line 278 I believe \citet should be \citep there?
Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 3
Limitations: Discussion is adequate, though as noted above I'm puzzled about the limitation on requiring sufficiently many choice sets for estimation.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: Thank you for your review of our paper and your feedback. We have addressed the main questions and concerns you have raised below.
>the policy network takes in $x, w, \tilde{u}$ and $\tilde{o}$, and a sufficiently flexible surrogate should be able to learn arbitrary functions of x and w already. Why is the explicit feature engineering needed? [what is meant by] "built in" context effects in CRCS and "learned" ones in [LC]-CRCS.
Thank you for pointing out that this was unclear. To clarify, the observations $\tilde{u}$ and $\tilde{o}$ are not engineered features, but rather a core part of our cognitive model. As explained in section 3.1, the cognitive model theorizes that context effects come from the fact that humans make choices that maximize option utilities estimated from noisy observations [17], specifically noisy observations of each option’s utility ($\tilde{u}$) and noisy feature value comparisons across options ($\tilde{o}$). It has been empirically validated that maximizing these utility estimates given these observations leads to the context effects exhibited by humans. Our tractable surrogate for this model, CRCS, inherits these same context effects, which is why we describe them as “built in”. However, we found that this model did not reproduce all context effects present in the considered choice data. LC-CRCS addresses this by introducing a learnable component that is able to learn context effects not yet captured by CRCS itself. The advantage of having a model where these context effects are “built in” is that this creates inductive biases that lead to better utility function inference and choice prediction, as we show in the experiments.
>This also makes me wonder: what's the network architecture used and other training hyperparameters (optimizer, learning rates, etc)?
Although these details are available in the code we submitted in the supplement, we do agree that it would be easier for readers if they were available as part of the appendices as well. We will add details on the network architecture and sizes to the Appendix. We have provided a summary in the general response.
>In general I would expect a sufficiently flexible surrogate to also achieve better than ~92% agreement with the original model.
We used the most stringent possible definition of agreement, namely that both models had to agree on the full ordering of the three option utilities. This is clearly a high bar. Moreover, the original model we compared to uses a Monte-Carlo estimator that estimates the expected values of the option utilities from a large but finite number of samples. Thus, the quantity we compare to is not some ground truth reference but rather another (noisy) estimate. As any error could be due to either the proposed surrogate or the original model being wrong, even a perfect surrogate would likely not reach 100% agreement.
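The agreement criterion described above can be sketched as follows. This is an illustrative sketch (function name and toy utility values are ours, not the authors' code): the surrogate and the Monte-Carlo estimate of the original model count as agreeing on a choice set only when they produce the same full ordering of the three option utilities.

```python
import numpy as np

def ordering_agreement(u_surrogate, u_mc):
    """Fraction of choice sets where both models rank all options identically."""
    same = np.all(np.argsort(u_surrogate, axis=1) == np.argsort(u_mc, axis=1), axis=1)
    return same.mean()

# Toy example: two choice sets of three options each.
u_hat = np.array([[0.1, 0.5, 0.3],   # ordering: 0 < 2 < 1
                  [0.9, 0.2, 0.4]])  # ordering: 1 < 2 < 0
u_mc = np.array([[0.2, 0.6, 0.25],   # same ordering -> agree
                 [0.3, 0.5, 0.4]])   # different ordering -> disagree
rate = ordering_agreement(u_hat, u_mc)  # 0.5
```

Because the comparison is against a noisy Monte-Carlo estimate, some disagreement is expected even from a perfect surrogate.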
>I'm also puzzled about the claim of insufficient option data to [train] the utility network -- shouldn't it be possible to simulate arbitrary amounts of observations[...]? I can see why sampling only from the training data would create this restriction, but I don't understand why this choice was made.
Unfortunately, in this specific instance, there was insufficient detail available regarding how the original user study, reported in [37], was done; no details were available on car and participant features that were used to construct the original option sets. We were therefore unable to generate option sets from a distribution that aligned well with the original experimental condition, forcing us to sample them from the training data only. This was not an issue for the other choice tasks, where the details of the user studies were more fully documented and we were able to simulate an arbitrary number of choices. Appendix A.1 discusses these issues in more depth.
> Reporting summed rather than averaged NLLs (with standard errors) seems like an odd choice -- why was this done?
We reported summed NLLs in the same way as was done in prior choice modeling work [18]. However, we agree that additionally reporting standard deviations or errors would add value to the paper, especially - as reviewer hXMa explained - to help readers understand the robustness of the models to variations in the training data. We will add to the appendix a table listing the mean and standard deviations of the averaged NLLs obtained over the test folds in the choice data experiments (section 4.2). You can find this information in table 1 in the PDF attached to the global response.
> Can the authors comment about the choice of Wilcoxon signed-rank test for some hypothesis tests and t-tests for others?
In most of our experiments we compared different model scores (e.g. NLL) on the same static data, and thus we used paired two-sample tests to compare the scores between models. As normality generally did not hold, we used Wilcoxon signed-rank tests for this. For experiment 4.3, where we used active learning, the evaluation data differed between the models as each would have selected different points to train on, thus we used an unpaired two-sample test, in this case an independent t-test.
We have noticed now that we misreported the hypothesis tests used in sections 4.2.1 and 4.3 – these should be switched. We used an independent t-test in 4.3 and a Wilcoxon signed rank test in 4.2.1. We will fix this in the paper and will explain briefly why each hypothesis test was chosen.
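The test-selection logic above can be sketched with SciPy. The scores below are synthetic placeholders, not the paper's results; the point is only the paired-vs-unpaired distinction: same-fold model scores call for a paired non-parametric Wilcoxon signed-rank test, while the active-learning comparison, where each model evaluates on different selected points, calls for an unpaired t-test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Paired setting (section 4.2.1): both models are scored on the same test
# folds, so per-fold NLLs are compared with a paired signed-rank test.
nll_model_a = rng.normal(1.0, 0.1, size=20)
nll_model_b = nll_model_a - rng.normal(0.05, 0.02, size=20)  # model B slightly better
_, p_paired = stats.wilcoxon(nll_model_a, nll_model_b)

# Active-learning setting (section 4.3): each model trains on points it
# selected itself, so the samples differ and an independent t-test is used.
regret_a = rng.normal(0.5, 0.1, size=20)
regret_b = rng.normal(0.4, 0.1, size=20)
_, p_unpaired = stats.ttest_ind(regret_a, regret_b)
```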
> Why are baselines not present in the regret panels of figure 1?
Comparing to baselines was not the focus of these experiments. These are synthetic tests where we measured parameter recovery and practicality of CRCS in real-world experiments. No human data was used, and instead choices were generated from our own CRCS model. Clearly, our own model will fit best to choices generated by it, so any comparison to the baselines seemed to us as though it would be disingenuous, in this instance.
---
Rebuttal Comment 1.1:
Title: Still confused about CRCS vs LC-CRCS
Comment: I appreciate the authors' clarifications regarding the training setup. I'm still surprised that the agreement to the brute-force approach is so low, but I will also grant the point that maybe it's not that important to match to the brute force approach considering performance of the proposed model on the actual evaluation tasks is still improved.
Regarding feature engineering, my concern is not $\tilde{u}$ and $\tilde{o}$. Rather, I mean the $g(w, x)$ learned linear mapping -- why is this not already learnable within the original CRCS? As far as I can tell, $g$ simply takes the average of the $x$'s, and transforms it linearly, both of which the $\hat{q}$ network can do already.
---
Rebuttal 2:
Title: Please respond to the authors
Comment: Hello reviewer CwtL: The authors have responded to your comments. I would expect you to respond in kind.
---
Rebuttal 3:
Comment: Thank you for your comment. We apologize for misunderstanding your original question regarding LC-CRCS.
Fundamentally, CRCS as we defined it does not learn context effects from (human) data. Instead, context effects emerge given an empirically validated theory (choices maximise expected utility given noisy observations; section 3.1). This theory can be simulated and thus be amortised into $\hat{q}$. $\hat{q}$ can therefore be trained without any access to human data. There are a number of free variables in the model (the choice model and utility parameters) which are inferred from human data in our experiments, but these determine the utility function and various noise levels within the choice process, and are thus not directly responsible for the context effects. Although not learning context effects from human data has clear advantages, it also means that the model will not generate context effects not explained by the theory. This is why we introduced the additional linear mapping $g(w,x)$ in LC-CRCS. This can be learnt from human data and can therefore fit to context effects not yet explained by the theory or generated by CRCS.
Rebuttal: We thank all reviewers for the time and effort dedicated to review of our work and for the helpful and constructive feedback.
## Motivation and focus of the paper
Our paper presents a cross-disciplinary approach to preference learning. Humans are known to exhibit a number of context effects when making choices. When these effects are not properly accounted for, learning from choices (preferences) may lead to incorrect inferences of the underlying utility. This has important implications for ML systems that learn from preferences.
The foundational contribution of our paper is the insight that computational rationality, which posits that humans are rational under bounds, can offer a step change in how ML systems learn about humans. Concretely, we propose a model-based approach to preference learning which leverages a SOTA cognitive model [17] derived from this computational rationality theory. The empirically validated computational rationality assumptions built into this model induce known context effects. Using this model, or rather a new tractable variant of it, for learning from preferences therefore introduces a strong inductive bias in our inferences. This type of inductive bias has not been used in preference learning before, and we show that it significantly improves utility inference and choice prediction compared to prior work.
The paper therefore offers a number of concrete contributions. We introduce CRCS, a tractable version of an existing cognitive model by Howes et al. [17] that supports efficient inference of the utility function and model parameters. We show that CRCS can be used in large-scale learning from preferences and that it makes better inferences than a set of recent baselines. This has practical benefits for any existing work that uses learning from preferences. To account for any context effects not yet captured by CRCS, we additionally introduce LC-CRCS, which is able to learn any additional context effects from human data.
## Clarification on neural network architecture
As requested by several reviewers, we provide here details on the architecture of the two neural networks we use in the paper, $\hat{u}$ and $\hat{q}$. These approximate key intractable computations in the paper: $\hat{u}$ takes as input a vector of observations $(\boldsymbol{\widetilde{u}}, \boldsymbol{\widetilde{o}})$ and predicts the expected utility of each option. $\hat{q}$ takes as input a set of options and predicts the likelihood that each option will be chosen. Both networks are multitask networks; they are additionally conditioned on the parameters of the utility function $w$ and the choice model parameters $(\varepsilon, \boldsymbol{\tau}, \sigma_{calc}^2)$.
The architecture of both networks is virtually identical, differing only in the dimension of their inputs, and in the fact that the output of $\hat{q}$ is transformed with a log-softmax function. Figure 1a in the attached PDF shows the architecture of $\hat{q}$. The network design consists of two modules. The first is an embedding module consisting of three layers that takes the utility and choice model parameters and embeds them into a latent embedding space. The output dimension of the layers and the dimension of the embedding space are given in Table 3 in the attached PDF. The second module, the main module, consists of four layers and takes as input the set of options $x$ (or the observations in the case of $\hat{u}$) and transforms these into likelihood predictions (expected utilities for $\hat{u}$). The output sizes of these layers are also listed in Table 3. To condition the main module on the utility and choice model parameters, we concatenated the embedding from the embedding module with the input to layers 2 and 3 of the main module (see figure). We trained $\hat{u}$ and $\hat{q}$ using the AdamW optimizer implemented in PyTorch. We used an exponentially decaying learning rate. We list the batch size, number of epochs for training and the starting and ending learning rates in Table 3 in the attached PDF.
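The two-module design described above can be sketched as follows. This is a minimal numpy sketch of the forward pass only, with placeholder layer sizes and random untrained weights (the actual dimensions, optimizer settings, and learning rates are those listed in Table 3 of the attached PDF): a three-layer embedding module maps the utility and choice-model parameters to a latent code, which is concatenated with the inputs to layers 2 and 3 of a four-layer main module, and the $\hat{q}$ output is passed through a log-softmax.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def log_softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

# Placeholder sizes (hypothetical; see Table 3 of the rebuttal PDF).
n_params, n_inputs, n_options, embed_dim, hidden = 5, 9, 3, 16, 64

# Embedding module: three layers for the utility and choice-model parameters.
W_e = [rng.normal(size=s) for s in [(n_params, hidden), (hidden, hidden), (hidden, embed_dim)]]

# Main module: four layers; the parameter embedding is concatenated with the
# inputs to layers 2 and 3, as in Figure 1a of the rebuttal PDF.
W1 = rng.normal(size=(n_inputs, hidden))
W2 = rng.normal(size=(hidden + embed_dim, hidden))
W3 = rng.normal(size=(hidden + embed_dim, hidden))
W4 = rng.normal(size=(hidden, n_options))

def q_hat(x, params):
    e = relu(relu(params @ W_e[0]) @ W_e[1]) @ W_e[2]
    h = relu(x @ W1)
    h = relu(np.concatenate([h, e], axis=-1) @ W2)
    h = relu(np.concatenate([h, e], axis=-1) @ W3)
    return log_softmax(h @ W4)  # log choice probabilities over the options

logp = q_hat(rng.normal(size=(4, n_inputs)), rng.normal(size=(4, n_params)))
```

$\hat{u}$ would differ only in its input dimension and in omitting the final log-softmax, outputting expected utilities directly.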
## Additional experimental results and clarity improvements
We have collected additional empirical results to support our response to reviewers’ comments. The new figures and tables are in the rebuttal PDF document (attached to this comment). Based on the feedback from the reviewers, we will also expand the paper and incorporate a number of clarifications. We list the most important changes below:
- We will clarify the description of the model. Specifically, we will be more explicit about which variables are observable to the user and to an outside observer. We will also more clearly delineate what part of the theory concerns the cognitive model and what part concerns learning from preferences.
- As requested by reviewer cVZM we have added to the appendix two graphical models, one showing the cognitive choice model (Figure 1b in the attached PDF) and one showing how this fits within the larger inference problem of learning from preferences (Figure 1c in the attached PDF).
- We will update the paper’s introduction and discussion sections to clearly state the foundational insights contributed by this work, based on the response we gave to reviewer rsF2.
- As requested by reviewers CwtL, hXMa and cVZM we have provided further details about the architecture and training details of the neural networks above. We will add these details to the appendix of the paper.
- As requested by reviewers hXMa and CwtL we have additionally reported averaged NLL figures with standard deviations for the choice set experiments in Table 1 in the attached PDF. These will be added to the appendix of the paper.
- As requested by reviewer hXMa we have included a table showing the mean and standard deviation of the average NLL obtained over several gradient descent runs in Table 2 in the attached PDF. We will discuss this table in the appendices.
Pdf: /pdf/b35a058af394ce7d2638e80ca05494f1424c81ab.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
HaloScope: Harnessing Unlabeled LLM Generations for Hallucination Detection | Accept (spotlight) | Summary: This paper proposes a new way of harnessing unlabeled LLM generations as the training set for fact verification. The key assumption is that "LLMs generate factually correct statements more often than hallucinated statements", which guarantees the existence of a clear subspace separating hallucinated from non-hallucinated statements. To identify this subspace, singular value decomposition is applied to find orthonormal vectors onto which the LLM's embeddings are projected; the distance from the origin along these directions is taken as the measure of "abnormality". The experiments demonstrate that the proposed idea works and outperforms several existing methods.
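The subspace-scoring idea described in the summary can be sketched as follows. This is an illustrative sketch, not the authors' implementation (the function name, the choice of `k`, and the random embeddings are ours): center the embeddings of the unlabeled generations, take the top right singular vector(s) via SVD, and score each generation by the magnitude of its projection onto that subspace.

```python
import numpy as np

def subspace_scores(embeddings, k=1):
    """Score each embedding by its projection norm onto the top-k singular subspace."""
    centered = embeddings - embeddings.mean(axis=0)
    # Right singular vectors span the dominant directions of variation.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[:k].T           # coordinates in the top-k subspace
    return np.linalg.norm(proj, axis=1)  # distance from the origin = "abnormality"

# Toy stand-in for LLM hidden-state embeddings of 100 generations.
emb = np.random.default_rng(0).normal(size=(100, 32))
scores = subspace_scores(emb)
```

Thresholding these scores would then separate likely-hallucinated generations from likely-truthful ones.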
Strengths: * The idea of using unlabeled LLM generations is interesting and shows potential over other counterparts.
* The ablation study covers the key components that can be altered.
* The paper is clearly written and tackles an important challenge - distinguishing hallucinated statements from non-hallucinated ones.
Weaknesses: There are many weaknesses and things to be considered for further improvements.
**Limited Training Set:** The main advantage of the proposed method is its use of unlabeled LLM generations. This means a much larger training set could be created by generating diverse QA pairs (spanning multiple domains, subjects, etc.). But this study is limited to using only the given training sets from a few datasets. Also, the classification performance is much lower than what can be obtained using a labeled set in Figure 5. It would strengthen the paper to show how this framework benefits from adding more LLM generations (such a large dataset could be created simply by prompting LLMs, or by adding LLM generations on other large-scale QA benchmarks not designed for fact checking). Can the fact verification performance improve by adding more LLM generations?
**Issue on Robustness Comparison:** I don't understand why the proposed method achieves higher robustness than other methods. It still relies on training on a specific dataset, so how could this increase robustness over others? The latent space extracted from a specific dataset cannot be generalized to all domains; I suspect the explanation is the similarity between the two datasets, TriviaQA and TruthfulQA. Please clearly explain what data distributions and domain differences the two datasets have, and what components help robustness against distribution shift.
**Explanation on Hyperparameter:** In Lines 150-151, I believe the method needs a threshold $T$ to split the LLM generations into the two groups $\mathcal{H}$ and $\mathcal{T}$. However, there is no discussion of this, nor any ablation study on the value of $T$.
**Visualization Study**: A key assumption is the existence of a clear subspace distinguishing hallucinated from non-hallucinated statements. The authors should provide some qualitative analysis using visualization, e.g. the projected embeddings with oracle labels in 2D space. This would clearly show how well the two classes are separated in the latent space.
Technical Quality: 3
Clarity: 3
Questions for Authors: No Questions.
Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, author stated the limitation on Page 17.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank you for recognizing our work as interesting and for studying an important challenge. We appreciate the reviewer's comments and suggestions, which we address below:
**A1. The effect of adding more unlabeled data**
Thank you for the suggestion! Our ablation on the number of unlabeled data for hallucination detection in **Appendix Table 8** indeed shows that scaling up the unlabeled data is beneficial. To further verify this, as suggested, on the TruthfulQA dataset, we have explored (1) adding LLM-generated sentences for a specific concept from the WikiBio dataset [1] into the unlabeled data and (2) adding LLM generations for a different TriviaQA dataset into the unlabeled data. The hallucination detection results with different numbers of added samples are shown as follows (the model is LLaMA-2-7b and the test dataset is TruthfulQA):
| Added WikiBio samples | AUROC | Added TriviaQA samples | AUROC |
| ------ | ----- | ----- | ----- |
| 0 | 78.64 | 0 | 78.64 |
| 400 | 79.27 | 2000 | 80.81 |
| 800 | 79.94 | 4000 | 83.37 |
| 1200 | 80.90 | 6000 | 82.96 |
| 1600 | 81.66 | 8000 | 84.19 |
We observe a similar trend as in Table 8 of our paper and will be happy to add more discussion on this in the revised version.
**A2. Discussion on robustness comparison**
Thank you for pointing this out! Firstly, we concur with the reviewer's opinion that our approach can be affected by the distribution shift between the unlabeled data and the test data, which we also discuss in the Limitation section of our paper.
*Although we did not explicitly claim the better robustness of our approach compared to other alternatives*, we are happy to clarify the method robustness and its relationship with dataset similarity. Specifically, the four datasets used in our paper have the following domain differences:
- TruthfulQA: This dataset includes questions from various domains such as health, law, fiction, conspiracies, etc. It is designed to measure the truthfulness of language models by including questions that some humans might answer falsely due to misconceptions.
- TriviaQA: This dataset includes question-answer pairs from trivia enthusiasts. The questions cover a wide range of topics and are paired with evidence documents from sources like Wikipedia and web search results.
- CoQA: This dataset includes passages from seven diverse domains: children’s stories, literature, middle and high school English exams, news, Wikipedia, Reddit, and science.
- TyDiQA: This dataset consists of questions related to Wikipedia articles.
Based on this and Figure 3(a) of our paper, we have made the following observations:
**The knowledge scope matters for method transferability**: We find that a hallucination subspace calculated within a more general knowledge scope can transfer well to data with a smaller knowledge scope, but not vice versa. For example, the subspace extracted from TruthfulQA (where the knowledge scope is more restricted compared to other datasets) demonstrates less generalization ability to other datasets. The detection AUROC drops from 76.42% to 67.81% when the TruthfulQA subspace is used for hallucination detection on the CoQA test set, while the CoQA subspace achieves an AUROC of 77.36% on the TruthfulQA test set (only a 1.28% decrease). Additionally, our approach works well when the unlabeled data and the test data have a similar knowledge scope. For instance, the TriviaQA and CoQA datasets have similar knowledge scopes (Wikipedia and other web knowledge), allowing the subspace learned from one dataset to generalize to the other, as verified by our experiments.
We believe this explains why, in certain cases, our approach is robust when transferring to other data distributions. We would be happy to add this discussion to our revised paper.
**A3. Explanation on the hyperparameter**
Great point. As explained in **line 203** of our paper, we determine the threshold $T$ on a separate validation set consisting of 100 LLM generations. For the LLaMA-2-7b model with the TruthfulQA dataset, we provide further ablation results of $T$ on the test set as follows. Here, "max" and "min" denote the maximal and minimal membership scores of the unlabeled data.
| T | AUROC (in %)|
| ------ | ----- |
|(max - min) * 10% + min | 69.58|
|(max - min) * 20% + min | 75.96|
|(max - min) * 30% + min | 73.20|
|(max - min) * 40% + min | 77.37|
|(max - min) * 50% + min | 77.43|
|(max - min) * 60% + min | 79.19|
|(max - min) * 70% + min | 78.64|
|(max - min) * 80% + min | 74.67|
|(max - min) * 90% + min | 70.31|
The final reported result (78.64%) in our main Table 1 is selected based on the AUROC on the validation set.
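To make the selection rule above concrete, the candidate thresholds could be generated and screened against a validation set roughly as follows. This is an illustrative sketch, not the authors' code: `auroc`, `select_threshold`, and the shortcut evaluation step are our own simplifications.

```python
import numpy as np

def auroc(scores, labels):
    """AUROC via pairwise ranking (Mann-Whitney), ties counted as 0.5."""
    scores, labels = np.asarray(scores, float), np.asarray(labels)
    pos, neg = scores[labels == 1], scores[labels == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def select_threshold(membership_scores, val_scores, val_labels):
    """Try T = (max - min) * p + min for p in 10%..90%, keep the best AUROC."""
    lo, hi = membership_scores.min(), membership_scores.max()
    best_T, best_auc = None, -1.0
    for p in np.linspace(0.1, 0.9, 9):
        T = (hi - lo) * p + lo
        # Shortcut evaluation: binarize validation scores at T. The real
        # pipeline would pseudo-label the unlabeled data with T, train the
        # hallucination classifier, and score it on the validation set.
        auc = auroc((val_scores > T).astype(float), val_labels)
        if auc > best_auc:
            best_T, best_auc = T, auc
    return best_T, best_auc
```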
**A4. Visualization**
Another great point. As suggested, we provide visualization results [here](https://openreview.net/attachment?id=ukMLqTlT90&name=pdf) on the embeddings of the truthful and hallucinated LLM generations for the TyDiQA-GP dataset with the LLaMA-2-7b model. These embeddings are almost linearly separable and align well with empirical observations in the literature [2].
[1] Manakul et al., SELFCHECKGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models, EMNLP 2023
[2] Zou et al., Representation engineering: A top-down approach to ai transparency. arXiv preprint arXiv:2310.01405, 2023.
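For readers who want to reproduce this kind of plot, a minimal sketch of the projection step is below (our own illustration with synthetic stand-in embeddings, not the code behind the attached PDF); the resulting 2D coordinates would then be scatter-plotted, e.g. with matplotlib, colored by oracle label.

```python
import numpy as np

def project_2d(embeddings):
    """Center the embeddings and project onto the top-2 right singular vectors."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T   # shape (n_samples, 2)

# Synthetic stand-ins for truthful / hallucinated embeddings:
rng = np.random.default_rng(0)
truthful = rng.normal(+1.0, 0.3, size=(100, 64))
hallucinated = rng.normal(-1.0, 0.3, size=(100, 64))
xy = project_2d(np.vstack([truthful, hallucinated]))
# Scatter-plotting xy[:100] vs. xy[100:] would show the two clusters; here
# the first principal component alone already separates them.
```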
---
Rebuttal Comment 1.1:
Title: Response to Authors
Comment: Thanks for the clear response to my concerns and questions. All of my concerns have been resolved, so I have increased my score to 6. Thanks!
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you so much for taking the time to read our response and increase your rating! We are glad that our rebuttal addresses your concerns.
Thanks,
Authors | Summary: The paper attempts to address the popular problem of hallucination in texts generated by today's LLMs (large language models). Hallucination refers to false or misleading text generated by LLMs. The paper attempts to address hallucination by proposing a truthfulness classifier, HaloScope, that operates on the unlabeled text generated by an LLM during operation. The paper compares Haloscope with existing algorithms and shows its superior performance.
The key idea behind Haloscope is identifying a hallucination subspace in the space of latent (Euclidean) representations of an LLM generation. Thus, when the latent representation of a generation aligns strongly with this hallucination subspace, the text is classified as potentially hallucinated.
Strengths: The paper is very well-written. The key idea behind HaloScope while being simple in retrospect is novel and a welcome contribution. The exposition in the paper is very clear and the simulations have been excellently presented with all the relevant details. The work on ablation studies (Sections 4.2-4.4) is also a welcome contribution and adds to the overall value of the work.
The reviewer believes the research community will benefit and draw upon the ideas and simulations presented in this paper.
Weaknesses: One point I would like to note is with regard to the problem setup. Currently, the way the problem is formulated (Equations 1, 2, 3, 4), it appears that whether an LLM generation is a hallucination is independent of the corresponding user prompt. This does not seem to be a good assumption to make. However, the empirical results do outweigh the weakness of such an assumption. The reviewer suggests that the authors describe their modeling choice (Eq. 4) in a little more detail, especially how it relates to the user-prompt conditioning.
Technical Quality: 4
Clarity: 4
Questions for Authors: Line 40-41: by assuming the specific mixture of two distributions, the authors are assuming that hallucinated data is generated with probability $\pi$ independently of the input. Can the authors please elaborate?
Line 158. What is the difference between lambda and capital T?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4
Limitations: The distribution shift has been identified as a potential barrier to a reliable usage of HaloScope by the authors. The reviewer agrees with the authors and appreciates their candidness. The reviewer would like to suggest mentioning this as a potential future work direction in the conclusion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 9
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are deeply encouraged that you recognize our method to be novel, welcome, and beneficial to the research community.
Your summary and comments are insightful and spot-on :)
**A1. Clarification on the problem setup**
You raise a great point! We agree with you that the truthfulness of an LLM generation should be dependent on the given user prompt. We plan to revise it as follows: define a new variable $\widehat{\mathbf{x}}$ that denotes the concatenation of the user prompt and the LLM generation; the distributions $P_\text{unlabeled}$, $P_\text{true}$, and $P_\text{hal}$ are thus defined over this new variable. Moreover, this will also address your confusion about the assumption that hallucinated data is generated with probability $\pi$ independently of the input. Thank you for catching this issue!
**A2. Clarification on the thresholds**
Thank you for the question! We are happy to clarify their differences. $T$ is the threshold when determining the membership of the unlabeled data (true or false) using the membership estimation score (Equation 7 of our paper), which can be chosen on a small amount of validation data as discussed in Section 4.1 of the paper. $\lambda$ is the threshold on the probabilistic outputs from the hallucination classifier during test time to determine whether a testing LLM generation is hallucinated or not.
**A3. Mention distribution shift in the conclusion**
Certainly! We will be happy to discuss this future work in the conclusion section. Possible extension ideas can include firstly setting up new experimental environments where the unlabeled data and the test data belong to different domains (such as between daily dialogues and medical question-answer pairs), and then developing a distributionally robust algorithm for training. Another interesting idea to explore might be out-of-sample extension for SVD [1] that specifically deals with the reconstruction and projection precision for samples that are not in the training set.
[1] Bengio et al., Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering, NIPS 2003
---
Rebuttal Comment 1.1:
Comment: Thank you for the responses.
A1. Could you write down the new equations that will result so I can get a clear picture?
A2. Acknowledged.
A3. Acknowledged.
---
Reply to Comment 1.1.1:
Title: Author response
Comment: Thank you for taking the time to read our rebuttal! We are happy to further respond to your question on A1, with revised definitions of hallucination detection and unlabeled data.
---------
We first generate the output $x$ conditioned on the prompt $x_\text{prompt}$. Both the prompt and generation will be used in subsequent hallucination detection, which is defined as follows.
**Definition 2.2 (Hallucination detection)**
We denote $P_\text{true}$ as the joint distribution over the truthful input and generation pairs, which is referred to as truthful distribution. For any given generated text $x$ and its corresponding input prompt $x_\text{prompt}$ where $(x_\text{prompt}, x) \in \mathcal{X}$, the goal of hallucination detection is to learn a binary predictor $G: \mathcal{X} \rightarrow \\{0,1\\}$ such that
\begin{equation}
G({x_\text{prompt}, x}) = \begin{cases}
1, &\text{if } {(x_\text{prompt}, x)} \sim P_\text{true} \\\\
0, &\text{otherwise}
\end{cases}
\end{equation}
**Definition 3.1 (Unlabeled data distribution)** We define the unlabeled LLM input and generation pairs to be the following mixture of distributions
\begin{equation}
P_{\text{unlabeled}} = (1-\pi) P_{\text{true}} + \pi P_{\text{hal}},
\end{equation}
where $\pi \in (0,1]$. Note that the case $\pi = 0$ is idealistic since no false information occurs. In practice, $\pi$ can be a moderately small value when most of the generations remain truthful.
**Definition 3.2 (Empirical dataset)** An empirical set $\mathcal{M} = \\{(x_{\text{prompt}}^1, x_1), ..., (x_{\text{prompt}}^N, x_N)\\}$ is sampled independently and identically distributed (i.i.d.) from this mixture distribution $P_{\text{unlabeled}}$, where $N$ is the number of samples. $x_i$ denotes the response generated with respect to some input prompt $x_{\text{prompt}}^i$.
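As a toy illustration of Definitions 3.1 and 3.2 (our own sketch, with strings standing in for prompt/generation pairs — `sample_unlabeled` is a hypothetical helper, not part of the paper), an empirical unlabeled set can be drawn from the mixture like this:

```python
import random

def sample_unlabeled(n, pi, sample_true, sample_hal, rng=None):
    """Draw n i.i.d. (prompt, generation) pairs from (1 - pi) * P_true + pi * P_hal."""
    rng = rng or random.Random()
    return [sample_hal() if rng.random() < pi else sample_true()
            for _ in range(n)]
```

With `pi = 0.2`, roughly 20% of the resulting pairs come from the hallucinated distribution, matching the "moderately small $\pi$" regime described above.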
Note that in implementation, we indeed consider passing both the prompt and the LLM-generated answer to obtain the embedding, which aligns with our problem definitions. These updates have been made accordingly in our manuscript. Thank you again for the helpful suggestion! | Summary: This paper presents a technique for detecting hallucinations by leveraging unlabeled data generation. Instead of relying on human annotation, the method automatically distinguishes between truthful and untruthful generations using network embeddings and their projection onto singular vectors with high singular values. Subsequently, this information is utilized to train a classifier. The study demonstrates enhancements across multiple benchmarks compared to several baseline methods.
Strengths: * Leveraging unlabeled generation for hallucination detection without any need for human annotation.
* The method scales well for larger models.
* The method does not require sampling multiple generations.
Weaknesses: * The method relies on BLEURT for ground truth evaluation. Can you justify the use of the BLEURT score?
Technical Quality: 3
Clarity: 3
Questions for Authors: * Have you explored the effectiveness of the method on tasks beyond QA, such as summarization?
* Can you provide more details on the result of Table 3? What data is used to perform this ablation? This is important because where to extract the embeddings has huge effect on the effectiveness of the method.
Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We are glad to see that the reviewer recognized the strengths of our work from various perspectives. We thank the reviewer for the thorough comments and suggestions. We are happy to clarify as follows:
**A1. Clarification on the BLEURT metric**
Thank you for pointing this out! BLEURT [1] is designed to evaluate the quality of text by comparing it to reference texts, similar to traditional metrics like BLEU. However, BLEURT leverages pretrained Transformer models, such as BERT, to capture deeper semantic nuances and contextual meanings. This makes it particularly effective for detecting subtle discrepancies between generated outputs and reference ground truths, which is critical in identifying hallucinations that may not be immediately obvious through surface-level evaluation.
In addition, hallucinations can vary widely in form, from subtle factual inaccuracies to more blatant falsehoods. BLEURT's embedding-based approach allows it to handle this variability better than traditional n-gram-based metrics, which might miss nuanced errors. This robustness ensures that the evaluation can effectively capture both major and minor hallucinations, providing a more comprehensive assessment of the model's output quality.
Finally, BLEURT has been shown to correlate well with human judgments in various natural language generation tasks [1, 2], which makes it a reliable proxy for assessing the factual correctness and coherence of LLM outputs, which is essential in hallucination detection. **Additionally, we show that the effectiveness of our algorithm is robust under a different similarity measure, ROUGE, in Appendix D**, which is based on substring matching.
**A2. Effectiveness of our approach on additional tasks**
Thank you for the suggestion! We evaluate our approach on two additional tasks, which are (1) text continuation and (2) text summarization tasks.
For text continuation, following [1], we use LLM-generated articles for a specific concept from the WikiBio dataset. We evaluate under the sentence-level hallucination detection task and split the entire 1,908 sentences in a 3:1 ratio for unlabeled generations and test data. (The other implementation details are the same as in our original submission.)
For text summarization, we sample 1,000 entries from the HaluEval [3] dataset (summarization track) and split them in a 3:1 ratio for unlabeled generations and test data. We prompt the LLM with "[document] \n Please summarize the above article concisely. A:" and record the generations while keeping the other implementation details the same as the text continuation task.
The comparison on LLaMA-2-7b with three representative baselines is shown below. We found that the advantage of leveraging unlabeled LLM generations for hallucination detection on Wikipedia articles still holds.
| Method | Text continuation | Text summarization|
| ------ | ----- | ----- |
|Semantic Entropy |69.88|60.15 |
|SelfCheckGPT | 73.23|69.91 |
|CCS∗ | 76.79| 71.36|
|HaloScope (Ours) | **79.37**|**75.84**|
**A3. Details of Table 3**
Absolutely! As in the first row of Table 3, we use 4 datasets (TruthfulQA, TriviaQA, CoQA, and TyDiQA-GP) to ablate on the effect of where the embedding is extracted for hallucination detection. The results are the detection AUROCs based on different locations in a typical transformer block to extract the embeddings, i.e., the output of each transformer block, the output of the self-attention module, and the output of the MLP feedforward layer. From Table 3, we observe that the LLaMA model tends to encode the hallucination information mostly in the output of the transformer block, while the most effective location for OPT models is the output of the feedforward layer.
[1] Sellam et al., BLEURT: Learning Robust Metrics for Text Generation, ACL 2020.
[2] Bubeck et al., Sparks of Artificial General Intelligence: Early experiments with GPT-4, arXiv preprint, 2303.12712.
[3] Li et al., HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models, EMNLP 2023.
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors for addressing my concerns.
---
Reply to Comment 1.1.1:
Title: Thank you!
Comment: Thank you for your appreciation. We're glad we could address your concerns! Please let us know if you have any further questions or suggestions. | Summary: The paper is very clear in the problem it is facing and the solution it proposes is also clearly described. Specifically it is looking at detecting hallucinations produced by generative language models. It does so by taking internal representations of the LLM, projecting these onto SVD factorisation which identifies important directions of variation for the models subspace(s) over some relevant data.
Membership of unlabelled data (hallucination, or not) is then obtained by projecting the centred representation for the datapoint at hand against the principle singular vector. This provides an unsupervised labelling method. This is used to label all data available, and then a binary classifier is trained on this. This is the final hallucination classifier.
Experimental results are given over a few Q+A datasets and against several related baselines for hallucination detection. The proposed method is shown to be the most accurate consistently.
It's actually rather amazing, I think, that the projection against the primary SVD vector (with scores aggregated across these from different parts of the network) works so well as an unsupervised labeller of truthfulness/hallucination. There's no specific information here at all about the problem of hallucination, and this could have captured the information of any other trait (or even meant nothing at all).
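The unsupervised labelling step described in this summary can be sketched in a few lines of numpy. This is an illustration on synthetic embeddings, not the paper's implementation: `membership_scores` and `pseudo_label` are our own names, and the actual method additionally aggregates scores across network layers and subspace dimensions.

```python
import numpy as np

def membership_scores(embeddings):
    """Center the embeddings, take the principal right singular vector, and
    score each sample by the magnitude of its projection onto it."""
    centered = embeddings - embeddings.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return np.abs(centered @ vt[0])

def pseudo_label(embeddings, T):
    """Label samples whose score exceeds threshold T as hallucinated (0)."""
    return (membership_scores(embeddings) <= T).astype(int)  # 1 = truthful
```

A binary classifier trained on these pseudo-labels would then serve as the final hallucination detector.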
Strengths: * Very interesting empirical results on the provided Q+A datasets.
* Method is clear and simple, assuming access available to the internals of the LLM.
* The ablations across all of section 4.3 are very thorough. These answered the questions I had noted down reading up to that point of the paper.
Weaknesses: * Hallucinations do vary a lot depending on the specific problem at hand, e.g. in question-answering with or without a prompt, or in data-to-text NLG. This paper only looks at question-answer based tasks. It would be interesting to see the method applied on other tasks. There is nothing about the proposed method which limits where it can be applied; it's purely an empirical question as to whether the results are similar or rather different.
* Access to the internals of proprietary, cloud hosted LLMs is not a given. Often only the predictions are obtainable for these. The method won't be applicable in this case.
Technical Quality: 3
Clarity: 3
Questions for Authors: * Why is projecting against these subspace primary vectors so informative for hallucination detection? I'm quite surprised at how well this works, given that could capture any other feature of the model/data other than hallucination. The variation of this across the different network layers is significant as shown in Figure 3 c. Based on that, the practical way to apply this would be to measure on every layer before selecting which to use right?
* Would it be possible/sensible/silly to incorporate an objective on certain subspaces into the training of the model (if not just zero-shoting the task of interest with an already capable LLM)?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 4
Limitations: * Access to the internals of the LLM is the main one, as already noted.
* More empirical results over different types of language generation tasks which each have their own nuances to what is a hallucination would be of interest.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 8
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the reviewer for the comments and suggestions. We are encouraged that you recognize our approach to be clear and with interesting experiments and thorough ablations. We address your questions below:
**A1. Effectiveness of our approach on additional tasks**
Thank you for the suggestion! We evaluate our approach on two additional tasks, which are (1) text continuation and (2) text summarization tasks.
For text continuation, following [1], we use LLM-generated articles for a specific concept from the WikiBio dataset. We evaluate under the sentence-level hallucination detection task and split the entire 1,908 sentences in a 3:1 ratio for unlabeled generations and test data. (The other implementation details are the same as in our original submission.)
For text summarization, we sample 1,000 entries from the HaluEval [2] dataset (summarization track) and split them in a 3:1 ratio for unlabeled generations and test data. We prompt the LLM with "[document] \n Please summarize the above article concisely. A:" and record the generations while keeping the other implementation details the same as the text continuation task.
The comparison on LLaMA-2-7b with three representative baselines is shown below. We found that the advantage of leveraging unlabeled LLM generations for hallucination detection still holds.
| Method | Text continuation | Text summarization|
| ------ | ----- | ----- |
|Semantic Entropy |69.88|60.15 |
|SelfCheckGPT | 73.23|69.91 |
|CCS∗ | 76.79| 71.36|
|HaloScope (Ours) | **79.37**|**75.84**|
**A2. Discussion on access to internals of proprietary, cloud hosted LLMs**
You raise a great point. We concur with your opinion that access to the internal representations is not easy for proprietary, cloud-hosted LLMs. We provide our understanding on this as follows:
- Firstly, we believe that hallucination detection is still an ongoing research topic, and the research efforts on exploring the internal representations of open-sourced models are equally, if not more, important than the research assuming black-box access to LLMs. Such access to model internals is beneficial for transparency and debugging, which can help us understand where, when, and how the hallucinated generations occur. By investigating specific layers or components where the model's reasoning deviates, researchers and developers can debug and refine the model more effectively, leading to improvements in both performance and safety.
- Secondly, internal representations capture the nuanced, multi-layered information that LLMs process as they generate responses compared to the textual outputs. By analyzing these representations, we gain access to a more detailed understanding of the model's internal decision-making process. This granularity allows us to identify potential hallucination subspaces, leading to better hallucination detection performance compared to existing black-box approaches, such as SelfCheckGPT [1], Self-evaluation [3], etc., which are already compared in our submission (**Table 1**).
- Finally, we may consider some strategies to mitigate the concerns about the proprietary nature of cloud-hosted LLMs, such as experimenting with proxy models that closely approximate the behavior of the proprietary LLMs. This can help study and detect hallucinations without infringing on intellectual property rights.
Thank you for bringing this up with us! We will include the discussion in the Limitation section of our submission.
**A3. Discussion on subspace primary vectors and the layer-wise variations**
You raise a great point!
Firstly, subspace primary vectors derived through SVD often encapsulate the dominant patterns and variations within the internal representations of a model [4]. These vectors can highlight the primary modes of variance in the unlabeled data, which are not purely random but instead capture significant structural features of the model’s processing. In our case, it could be the hallucination information. Even though these vectors could, in theory, capture various features, they are particularly informative for detecting hallucinations because hallucination and truthfulness patterns are among these primary modes of variation in the unlabeled data. This phenomenon can be verified by the empirically observed separability in both our submission (**Figure 7 in Appendix**) and literature [5]. In addition, we provide visualization results [here](https://openreview.net/attachment?id=ukMLqTlT90&name=pdf) on the embeddings of the truthful and hallucinated LLM generations for the TyDiQA-GP dataset on the LLaMA-2-7b model, which are almost linearly separable. We believe that this can help illustrate the effectiveness of projection against the top singular vectors.
Moreover, for the variation with respect to the layers, we select the layer based on a small number of validation data as described in **Section 4.1** of our submission.
**A4. Incorporating training objective on certain subspaces**
Another great point! We do anticipate the possibility of explicitly regularizing the training of LLMs for better hallucination detection. One straightforward idea is to add an objective that fine-tunes the LLMs (or their feature subspace) to clearly distinguish the true vs. false based on the representation space and the separated unlabeled data at each training step, and then keep training for a few epochs. This could be a promising future work.
[1] Manakul et al., SELFCHECKGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models, EMNLP 2023
[2] Li et al., HaluEval: A Large-Scale Hallucination Evaluation Benchmark for Large Language Models, EMNLP 2023.
[3] Kadavath et al., Language models (mostly) know what they know. arXiv preprint arXiv:2207.05221, 2022.
[4] https://en.wikipedia.org/wiki/Principal_component_analysis
[5] Zou et al., Representation engineering: A top-down approach to ai transparency. arXiv preprint arXiv:2310.01405, 2023 | Rebuttal 1:
Rebuttal: We thank all the reviewers for their time and valuable comments. We are encouraged to see that all reviewers find our approach **interesting, clear, simple, new, welcome**, and **scales well** (smZn, 6VqW, tABH, JMXE), and our results **very interesting, excellently presented**, with **thorough, welcome** ablations (smZn, tABH). Reviewers also recognize our paper presentation to be **clear, very well-written** (smZn, tABH, JMXE).
As recognized by multiple reviewers, the significance of our work can be summarized as follows:
- Our work offers a new algorithmic framework that leverages the unlabeled LLM generations to help hallucination detection, which is an important research question.
- The framework is based on the factorization of the LLM representations, where the membership of the unlabeled data is inferred and subsequently, a final hallucination classifier is learned. The approach is simple, clear, and effective.
- We provide supportive experiments to show the effectiveness of our approach, precisely highlighting how the proposed framework works in practice. Sufficient ablations are provided to help readers understand the method.
We respond to each reviewer's comments in detail below. We are happy to revise the manuscript according to the reviewers' suggestions, and we believe this will make our paper stronger.
Pdf: /pdf/89acd5b1321394c6c9c581d46ebcda2fd370c31d.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
SDEs for Adaptive Methods: The Role of Noise | Reject | Summary: This work derives SDEs for adaptive gradient methods and studies the role of gradient noise. The analysis starts from theoretically deriving the SDE for SignSGD and highlights its significant difference from SGD. The work further generalizes the SDE analysis to AdamW and RMSpropW, two popular adaptive optimizers with decoupled weight decay, and reveals key properties of weight decay. Finally, the work integrates the derived SDEs with Euler-Maruyama to confirm that the SDEs faithfully track their respective optimizers across various modern neural networks.
Strengths: -The theoretical results are novel. To my knowledge, this is the first SDE analysis for SignSGD with quantitatively accurate descriptions.
-The theoretical analysis reports some novel properties in terms of gradient noise and convergence. These properties are interesting.
-The proofs seem complete and reasonable.
-A useful theory should be quantitatively verifiable. This work definitely achieves that. The experiments showing that the SDEs fit the empirical results with various optimizers and models are informative and impressive.
Weaknesses: -It seems that the reported theoretical results and insights cannot directly lead to theory-inspired and improved methods. This raises a question about the significance of this work.
-While this work did a literature review, some important references are still missing, such as [1] on analyzing Adam using SDEs. As weight decay plays a key role in the results, it may be helpful to review recent papers analyzing novel or overlooked properties of weight decay.
Reference:
[1] Xie, Z., Wang, X., Zhang, H., Sato, I., & Sugiyama, M. (2022, June). Adaptive inertia: Disentangling the effects of adaptive learning rate and momentum. In International conference on machine learning (pp. 24430-24459). PMLR.
Technical Quality: 3
Clarity: 3
Questions for Authors: - Please see the weaknesses.
- Could you please explain more how L2 regularization and decoupled weight decay behaves differently in your results?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: This work discussed the limitations in the appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the Reviewer for the significant effort put into this review: We appreciate the acknowledgement of the value of our research. We thank you for the questions as they stimulated us to include some more references and dig deeper to showcase the explanatory power of our SDEs even more.
**Weakness 1:**
"*It seems that the reported theoretical results and insights cannot directly lead to some theory-inspired and improved methods. This raise a question on the significance of this work.*"
**Answer:**
We acknowledge that our work has limitations in terms of developing improved methods. However, we aimed to offer new insights into existing adaptive methods that are known to perform well in practice, even though the reasons for their effectiveness are not yet fully understood. We respectfully believe that, from this perspective, our work holds significant value and is of interest to the community.
**Weakness 2:**
*"While this work did literature review, some important references are still missing, such as [1] on analyzing Adam using SDEs. As weight decay plays a key role in the results, it may be helpful to review recent papers analyzing novel or overlooked properties of weight decay."*
**Answer:**
We thank the Reviewer for reminding us about this interesting paper, which we are familiar with but unfortunately forgot to cite. Rather than studying the role of noise on the dynamics of Adam, their focus is mainly on disentangling the effects of learning rate adaptivity and momentum on saddle-point escaping and flat minima selection. They use the SDE to study how Momentum helps SGD escape saddle points and minima. Analogously, they repeat the analysis for Adam and find that learning rate adaptivity helps to escape saddle points but leads to sharper minima than SGD. Inspired by their results (Thm.2 and Thm.3), they propose Adai (Adaptive Inertia Optimization), which rather than opting for adaptivity of the learning rate, it opts for adaptivity of the momentum parameters: They theoretically predict and experimentally validate that Adai is both fast at escaping saddles and successful at finding flat minima.
As highlighted by Malladi et al., the SDE presented in [1] is not derived within any formal framework and therefore does not come with formal approximation guarantees. However, this is a very insightful and valuable work that we will cite in the final version of the paper. We are of course happy to know if there is a specific point about [1] that we should pay attention to, or if other important references have been missed.
Regarding Weight Decay, we kindly request that the Reviewer provide us with any specific references they have in mind.
**Question 2:**
*"Could you please explain more how L2 regularization and decoupled weight decay behaves differently in your results?"*
**Answer:**
This is a very interesting question: Please find below the SDE induced by using Adam on an $L^2$-regularized loss, together with the equivalent of Lemma 3.13. Most importantly, we observe that $L^2$ regularization used in this way does not provide additional resilience against noise w.r.t. Adam: The asymptotic loss level scales linearly in the noise $\sigma$, exactly as it does for Adam. On the contrary, when $L^2$ regularization is used in a decoupled way as in AdamW, the asymptotic loss level is upper-bounded in $\sigma$.
When Adam is used to optimize the $L^2$-regularized loss $f(x) + \frac{\gamma\lVert x\rVert_2^2}{2}$ for $\gamma>0$, the SDE of the method is:
\begin{equation}
d X_t =-\frac{\sqrt{\gamma_2(t)}}{\gamma_1(t)} P_t^{-1} (M_t + \eta \rho_1 \left(\nabla f\left(X_t\right)-M_t\right) - \gamma X_t) d t
\end{equation}
\begin{equation}
d M_t =\rho_1\left(\nabla f\left(X_t\right)-M_t\right) d t+\sqrt{\eta} \rho_1 \Sigma^{1 / 2}\left(X_t\right) d W_t
\end{equation}
\begin{equation}
d V_t =\rho_2\left( (\nabla f(X_t))^2 + diag\left(\Sigma\left(X_t\right)\right)-V_t\right) d t,
\end{equation}
where $\beta_i = 1 - \eta \rho_i$, $\gamma_i(t) = 1 - e^{-\rho_i t}$, $\rho_1 = \mathcal{O}(\eta^{-\zeta})$ s.t. $\zeta \in (0,1)$, $\rho_2 = \mathcal{O}(1)$, and $P_t = diag(\sqrt{V_t}) + \epsilon \sqrt{\gamma_2(t)}I_d$.
Under the same assumptions of Lemma 3.5, the dynamics of Adam on a $L^2$-regularized loss implies that
\begin{equation}
\mathbb{E}[f(X_t) - f(X_*)] \overset{t \rightarrow \infty}{\leq} \frac{\eta \mathcal{L}_\tau \sigma }{2} \frac{L}{2 \mu L + \gamma (L + \mu)},
\end{equation}
meaning that the asymptotic loss level grows linearly in $\sigma$ as it already does for Adam.
Much differently, the asymptotic loss level for AdamW is
\begin{equation}
\mathbb{E}[f(X_t) - f(X_*)] \overset{t \rightarrow \infty}{\leq} \frac{\eta \mathcal{L}_\tau \sigma }{2} \frac{L}{2 \mu L + \sigma \gamma (L + \mu)},
\end{equation}
which is upper-bounded in $\sigma$.
**Please, find an empirical validation of this bound in Figure 2 of the attached .pdf file.**
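A quick numerical illustration of the qualitative gap between the two bounds (the constants below are arbitrary illustrative values, not values from the paper):

```python
def adam_l2_bound(sigma, eta=0.01, l_tau=10.0, L=5.0, mu=0.5, gamma=0.1):
    # Adam on the L2-regularized loss: the bound grows linearly in sigma.
    return (eta * l_tau * sigma / 2) * L / (2 * mu * L + gamma * (L + mu))

def adamw_bound(sigma, eta=0.01, l_tau=10.0, L=5.0, mu=0.5, gamma=0.1):
    # AdamW: sigma also appears in the denominator, so the bound saturates
    # at eta * l_tau * L / (2 * gamma * (L + mu)) as sigma grows.
    return (eta * l_tau * sigma / 2) * L / (2 * mu * L + sigma * gamma * (L + mu))
```

Doubling $\sigma$ exactly doubles the first bound, while the second approaches a finite limit.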
---
Rebuttal Comment 1.1:
Title: Thanks for the rebuttal.
Comment: Thanks for the rebuttal and addressing some of the concerns.
I will keep the rating as 6. I tend to accept this work.
---
Reply to Comment 1.1.1:
Comment: Dear Reviewer,
Thank you for your response.
We are glad to know that some of your concerns have been resolved: Could you please share any remaining issues or suggestions you might have? Your feedback is invaluable and will assist us in refining our manuscript further.
We appreciate your time and consideration.
Best regards,
The Authors | Summary: The authors derive SDEs for signSGD and Adam(W). The experiments show that the algorithms converge toward the limits the theorems indicate.
Strengths: The authors propose "accurate" SDEs for algorithms Sign-SGD and Adam(W).
Weaknesses: 1. In Remark after Lemma 3.6, the authors claim that Sign-SGD is (almost) linear in $\sigma_{max}$. However, with $\Delta$ either in Phase 2 or Phase 3, there should be $\sigma_{max}^2$ in the final bound.
2. All the stationarity results hold when the Hessian is the same from $X_0$ to $X_t$, and convergence holds for the strongly convex case. However, the Hessian changes a lot during network training.
Technical Quality: 3
Clarity: 2
Questions for Authors: 1. Notation of $W_t$ is not defined. What is $W_t$?
2. How can we extend Lemma 3.13 to convex setting (or even nonconvex case)?
Confidence: 2
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We sincerely thank the Reviewer: We appreciate the questions as they stimulated us to clarify certain aspects and dig deeper to showcase the explanatory power of our SDEs even more.
**Weakness 1:**
*"In Remark after Lemma 3.6, the authors claim that Sign-SGD is (almost) linear in $\sigma_{\text{max}}$. However, with $\Delta$ either in Phase 2 or Phase 3, there should be $\sigma_{\text{max}}^2$ in the final bound."*
**Answer:**
We fully agree with this observation, which is why we say that the dependence is "*almost linear*" in $\sigma_{\text{max}}$. We can rewrite the asymptotic loss level as:
\begin{equation}
\frac{\eta}{2} \frac{\mathcal{L}_{\tau}}{ 2 \mu } \frac{1}{\Delta},
\end{equation}
and observe that
\begin{equation}
\frac{1}{\Delta} =\frac{\pi \sigma_{\text{max}}^2 }{\sqrt{2 \pi} \sigma_{\text{max}} + \eta \mu} = \frac{\pi \sigma_{\text{max}} }{\sqrt{2 \pi} + \frac{\eta \mu}{\sigma_{\text{max}}}}.
\end{equation}
Therefore, when the noise $\sigma_{\text{max}}$ dominates over the learning rate $\eta$ and/or over the minimum eigenvalue $\mu$ of the Hessian, or, more generally, when $\frac{\eta \mu}{\sigma_{\text{max}}} \sim 0$, we can conclude that the behavior is essentially linear in $\sigma_{\text{max}}$: We will most certainly clarify this aspect better in the final version of the paper.
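This almost-linear behavior is easy to check numerically (illustrative values for $\eta$ and $\mu$):

```python
import math

def inv_delta(sigma, eta=1e-3, mu=1.0):
    # 1/Delta = pi * sigma_max^2 / (sqrt(2 * pi) * sigma_max + eta * mu)
    return math.pi * sigma ** 2 / (math.sqrt(2 * math.pi) * sigma + eta * mu)

# Once eta * mu / sigma_max is negligible, 1/Delta grows linearly in
# sigma_max with slope sqrt(pi / 2):
slopes = [inv_delta(s) / s for s in (0.01, 0.1, 1.0, 10.0)]
```

The per-$\sigma_{\text{max}}$ slope increases monotonically toward the limiting value $\sqrt{\pi/2}$.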
**Weakness 2:**
*"All the stationarity holds when Hessian is the same from $X_0$ to $X_t$ and convergence holds for strongly convex. However, the hessian changes a lot during network training."*
**Answer:**
We agree that the Hessian of the loss function can change dramatically during training. However, as we specify in Line 186, we are not studying the properties of the iterates during training, but rather characterize the stationary distribution around minima: These are the only points where the optimizer can reach stationarity and possibly stop.
With this in mind, as we specify in Lines 125 to 128, it is common in the literature to approximate the loss function with a quadratic function in a neighborhood around these points. Therefore, the Hessian is constant in this neighborhood.
These two reasons justify why we only study the stationary distribution of SignSGD in Phase 3 for a quadratic loss function. We also add that whatever happens before Phase 3 does not influence what happens at convergence, e.g. the stationary distribution.
In response to the second part of your comment, we have strengthened our convergence analysis beyond the strongly convex case. Specifically, we extended Lemma 3.5 to the general smooth non-convex case (i.e. only requiring $L$-smoothness):
Let $f$ be $L$-smooth, $\eta_t$ be a learning rate scheduler such that $\frac{\phi_t^2}{\phi^1_t} \overset{t \rightarrow \infty}{\rightarrow} 0$ and $\phi^1_t \overset{t \rightarrow \infty}{\rightarrow} \infty$, where $\phi^i_t = \int_0^t (\eta_s)^i ds$. Then, during
1. Phase 1, $\lVert \nabla f\left(X_{\tilde{t}^1}\right)\rVert_1 \leq \frac{f(X_0) - f(X_*)}{\phi_t^1} \overset{t \rightarrow \infty}{\rightarrow} 0$;
2. Phase 2, $$ \left( \frac{m}{\sqrt{2}}\mathbb{E} \lVert \nabla f\left(X_{\tilde{t}^{(1,2)}}\right)\rVert_2^2 + \hat{q} \sigma_{\text{max}} \mathbb{E} \lVert \nabla f\left(X_{\tilde{t}^{(2,2)}}\right)\rVert_1 \right) \leq \sigma_{\text{max}} \left( \frac{f(X_0) - f(X_*)}{\phi^1_t} + \frac{\eta L d}{2} \frac{\phi_t^2}{\phi^1_t} \right) \overset{t \rightarrow \infty}{\rightarrow} 0;$$
3. Phase 3, $\mathbb{E} \lVert \nabla f\left(X_{\tilde{t}^3}\right)\rVert_2^2 \leq \sqrt{\frac{\pi}{2}} \frac {\sigma_{\text{max}} \eta L d}{2} \frac{\phi_t^2}{\phi^1_t} + \sqrt{\frac{\pi}{2}} \sigma_{\text{max}} \frac{f(X_0) - f(X_*)}{\phi^1_t} \overset{t \rightarrow \infty}{\rightarrow} 0$;
where $\tilde{t}^1$, $\tilde{t}^{(1,2)}$, $\tilde{t}^{(2,2)}$, and $\tilde{t}^3$ are random times with distribution $\frac{\eta_t}{\phi^1_t}$.
Interestingly, in Phase 1, SignSGD implicitly minimizes the $L^1$-norm of the gradient, in Phase 2 it implicitly minimizes a linear combination of the $L^1$- and $L^2$-norms, and in Phase 3 it implicitly minimizes the $L^2$-norm: This result is novel as well and we thank the Reviewer for asking this great question.
**Question 1:**
*"Notation of $W_t$ is not defined. What is $W_t$?"*
**Answer:**
We apologize for not defining this symbol in the main paper.
$W_t$ is the Brownian motion and we will specify this clearly in the final version of the paper. Importantly, we highlight that we included a whole chapter on Stochastic Calculus in Appendix B.
**Question 2:**
*"How can we extend Lemma 3.13 to convex setting (or even nonconvex case)?"*
**Answer:**
Due to some technical difficulties concerning AdamW that we will address in future work, we now put forward a generalization of Lemma 3.13 for Adam where we only require $L$-smoothness:
Let $f$ be $L$-smooth, $\eta_t$ be a learning rate scheduler such that $\frac{\phi_t^2}{\phi^1_t} \overset{t \rightarrow \infty}{\rightarrow} 0$ and $\phi^1_t \overset{t \rightarrow \infty}{\rightarrow} \infty$, where $\phi^i_t = \int_0^t (\eta_s)^i ds$. Then
\begin{equation}
\mathbb{E} \lVert \nabla f \left(X_{\tilde{t}} \right) \rVert_2^2 \leq \left[ f(X_0) - f(X_*) + \mathcal{L}_{\tau} \left( \frac{ \delta B}{\rho_1^2 \sigma^2} \frac{\lVert M_0 \rVert_2^2}{2} + \frac{\phi^2_t \eta \kappa^2}{2} \right) \right] \frac{\sigma}{\kappa \sqrt{\delta B}} \frac{1}{\phi^1_t} \overset{t \rightarrow \infty}{\rightarrow} 0
\end{equation}
where $\tilde{t}$ is a random time with distribution $\frac{\eta_t}{\phi^1_t}$.
---
Rebuttal Comment 1.1:
Comment: Thanks for the authors' response. I have no further questions. Since the authors claim to add clarification in the final version, I raise my score to 6.
---
Reply to Comment 1.1.1:
Title: Thanks!
Comment: Dear Reviewer,
Thank you for your trust and the updated score: We truly appreciate it.
Best regards,
The Authors | Summary: This paper derives SDEs for SignSGD, RMSprop, and Adam.
The analysis offers insights into the convergence speed, stationary distribution, and robustness to heavy-tail noise of adaptive methods.
Strengths: - The derived SDE for SignSGD exhibits three different phases of the dynamics.
- The analysis reveals the difference between SignSGD and SGD in terms of the asymptotic expected loss, the robustness of noise variance, etc.
- The analysis of AdamW provides insights into the different roles of noise, curvature, and weight decay.
Weaknesses: Refer to Questions and Limitations.
Technical Quality: 3
Clarity: 2
Questions for Authors: - What learning rate (lr) do the experiments in Figure 4 use? Within what range of lr does this SDE align well with the original algorithm (experimentally)?
- Could the authors intuitively explain why the asymptotic expected loss of SignSGD is proportional to $\sigma_{\max}$ instead of $\sigma_{\max}^2$?
- How can the derived SDE explain the loss spike phenomenon of SignSGD/AdamW?
- Many works about SGD noise ([1][2][3]) admit the noise structure $\mathbb{E}(g_i-g)(g_i-g)^{\top}\sim \mathcal{L} H$. What conclusions (such as those related to the training phases) can be derived from the SDE if we change the noise assumption in Corollary 3.3 to $\mathbb{E}(g_i-g)(g_i-g)^{\top}\sim\mathcal{L} H$?
[1] Ziyin et al. Strength of minibatch noise in SGD.
[2] Wojtowytsch. Stochastic gradient descent with noise of machine learning type. part II: Continuous time analysis.
[3] Wu et al. The alignment property of SGD noise and how it helps select flat minima: A stability analysis.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 3
Limitations: The SDE for AdamW is limited to quadratic functions.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: We thank the Reviewer for their thorough and thoughtful review. We appreciate the questions posed, as they motivated us to delve deeper and further showcase the explanatory power of our SDEs. However, we would like to clarify that **contrary to what is mentioned under "Limitations", none of our SDEs is limited to quadratic functions: The theory applies to general smooth functions.** We conducted extensive experimental validation that our SDEs correctly model the respective algorithms on a variety of architectures and datasets (see Figures 1, 4, 8, 9, and 11 and the respective experimental details in Appendix F).
**Answers to Q1:**
1. As per Appendix F.5, the learning rates (lrs) used for AdamW are $10^{-2}$ for the Transformer and $10^{-5}$ for the ResNet. For RMSpropW, they are $10^{-3}$ and $10^{-4}$, respectively: We will add these details in the caption of the figures;
2. In our experiments, we first fine-tuned the hyperparameters to ensure the convergence of the "real" optimizers. Then, we used the same hyperparameters to simulate the SDEs.
While we did not ablate the range of the lr over which the SDEs align well with the algorithms, our experiments use a wide range of lrs across different datasets and architectures: From $10^{-3}$ to $10^{-2}$ for SignSGD, from $10^{-4}$ to $10^{-2}$ for RMSprop(W), and from $10^{-5}$ to $10^{-2}$ for Adam(W). Our SDEs match the respective algorithms well in all such cases.
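For readers who want to reproduce this kind of algorithm-vs-SDE comparison, the SDE side is typically simulated with an Euler–Maruyama discretization; a generic hedged sketch (the paper's actual simulation scheme may differ):

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, dt, steps, seed=0):
    """Generic Euler-Maruyama integrator for dX = drift(X) dt + diffusion(X) dW."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        dw = rng.standard_normal(x.shape) * np.sqrt(dt)
        x = x + drift(x) * dt + diffusion(x) @ dw
    return x

# Example: an SGD-type SDE on f(x) = ||x||^2 / 2 with isotropic noise sigma,
# dX = -X dt + sqrt(eta) * sigma dW; the iterate contracts to a small noise
# ball around the optimum.
eta, sigma = 0.01, 0.5
x_T = euler_maruyama(lambda x: -x,
                     lambda x: np.sqrt(eta) * sigma * np.eye(len(x)),
                     x0=[2.0, -1.0], dt=0.01, steps=2000)
```

After total time $T = 20$, the deterministic part of the trajectory has fully decayed and only the stationary fluctuation of size $\sqrt{\eta\sigma^2/2}$ remains.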
**Answer to Q2:**
In SGD, the error/noise on the update scales with $\sigma^2$. In SignSGD, the $Sign$ operator clips the stochastic gradient and hence it also clips its noise: This clipping/normalization implies that this error scales with $\sigma$.
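This scaling can be seen in a toy 1-D simulation of SGD vs. SignSGD on a quadratic (illustrative hyperparameters; this is a sketch, not an experiment from the paper):

```python
import numpy as np

def asymptotic_loss(sigma, sign_update, eta=0.01, mu=1.0, steps=200_000, seed=0):
    """Average of f(x) = mu * x^2 / 2 over the second half of a long run."""
    rng = np.random.default_rng(seed)
    x, losses = 1.0, []
    for t in range(steps):
        g = mu * x + sigma * rng.standard_normal()  # noisy gradient, std sigma
        x = x - (eta * np.sign(g) if sign_update else eta * g)
        if t >= steps // 2:
            losses.append(0.5 * mu * x * x)
    return float(np.mean(losses))

# Quadrupling the noise (sigma: 1 -> 4) multiplies the SGD loss floor by
# roughly 16 (quadratic in sigma) but the SignSGD floor only by roughly 4
# (linear in sigma), matching the clipping intuition above.
sgd_ratio = asymptotic_loss(4.0, False) / asymptotic_loss(1.0, False)
sign_ratio = asymptotic_loss(4.0, True) / asymptotic_loss(1.0, True)
```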
**Answer to Q3:**
We attempted to address this question while writing the paper, but we were unable to formally explain these phenomena. To satisfy both our curiosity and that of the reviewer, we offer our conjecture in an Official Comment, providing some technical details.
**Answer to Q4:**
Since it was unclear which assumption was precisely meant, we have read the references and selected three noise structures: We study two below and the third one in an Official Comment.
Under these assumptions, we generalized Cor. 3.3 and provided convergence in the same fashion as Lemma 3.5. Additionally, see the **Answer to Question 2 from Reviewer PBh6** for a generalized version of Cor. 3.3 where we only require the loss function to be $L$-smooth.
**Assumption from [1]**
[1] proposes several expressions for $\Sigma$: We took the only one in line with that prescribed by the Reviewer: As per Eq. (16) in Corollary 2, $\Sigma := \sigma^2 f(x_*) \nabla^2 f(x_*)$, where $\sigma^2$ controls the scale of the noise and $x_*$ is an optimum.
Therefore, for $Y_t := \frac{\nabla^2 f(x_*)^{-\frac{1}{2}} \nabla f(X_t)}{\sqrt{2 f(x_*)} \sigma}$ and $\mathcal{S}(X_t)=\mathbb{E}[(Sign(\nabla f_{\gamma}(X_t)))(Sign(\nabla f_{\gamma}(X_t)))^{\top}]$, Cor. 3.3 becomes:
$$
d X_t = - Erf \left( Y_t \right) dt + \sqrt{\eta} \sqrt{\mathcal{S}(X_t) - Erf \left(Y_t \right) Erf \left(Y_t \right)^{\top}} d W_t.
$$
Therefore, Lemma 3.5 becomes:
Let $f$ be $\mu$-strongly convex, $\lambda$ be the largest eigenvalue of $\nabla^2 f(x_*)$, $S_t:=f(X_t) - f(x_*)$, and $Tr(\nabla^2 f(x)) \leq \mathcal{L}_{\tau}$. Then, during
1. Phase 1, the loss will reach $0$ before $t_* = 2 \sqrt{\frac{S_0}{\mu}}$ because $S_t \leq \frac{1}{4} \left( \sqrt{\mu}t - 2 \sqrt{S_0}\right)^2$;
2. Phase 2 with $\Delta:= \left( \frac{m}{\sqrt{2 f(x_*)}\sigma\sqrt{\lambda}} + \frac{\eta \mu m^2}{4 f(x_*) \sigma^2 \lambda } \right)$: $\mathbb{E}[S_t] \leq S_0 e^{- 2 \mu \Delta t} + \frac{\eta}{2} \frac{ \left(\mathcal{L}_{\tau} - \mu d \hat{q}^2 \right)}{2 \mu \Delta} \left(1 - e^{- 2 \mu \Delta t}\right)$;
3. Phase 3 with $\Delta:= \left(\sqrt{\frac{2}{\pi}} \frac{1}{\sqrt{ f(x_*)}\sigma\sqrt{\lambda}} + \frac{\eta}{\pi} \frac{\mu}{f(x_*) \sigma^2 \lambda}\right)$: $\mathbb{E}[S_t] \leq S_0 e^{- 2 \mu \Delta t} + \frac{\eta}{2} \frac{ \mathcal{L}_{\tau}}{2 \mu \Delta} \left(1 - e^{- 2 \mu \Delta t}\right)$.
**Please, find an empirical validation in Figure 1.a of the attached .pdf file.**
**Assumption from [3]**
[3] assumes that $\Sigma$ is aligned with the FIM and proportional to the loss. Consistently with this and with the prescription of the Reviewer, we take $\Sigma := \sigma^2 f(x) \nabla^2 f(x)$, where we changed the constants to $\sigma^2$ to maintain consistency with the rest of our paper.
Therefore, we have that for $Y_t := \frac{ (\nabla^2 f(X_t))^{-\frac{1}{2}}\nabla f(X_t)}{\sqrt{2 f(X_t)} \sigma}$ and $\mathcal{S}(X_t)=\mathbb{E}[(Sign(\nabla f_{\gamma}(X_t)))(Sign(\nabla f_{\gamma}(X_t)))^{\top}]$, Cor. 3.3 becomes:
$$
d X_t = - Erf \left( Y_t \right) dt + \sqrt{\eta} \sqrt{\mathcal{S}(X_t) - Erf \left(Y_t \right) Erf \left(Y_t \right)^{\top}} d W_t.
$$
Therefore, Lemma 3.5 becomes:
Let $f$ be $\mu$-strongly convex, $L$-smooth, $S_t:=f(X_t) - f(x_*)$, and $Tr(\nabla^2 f(x)) \leq \mathcal{L}_{\tau}$. Then, during
1. Phase 1, the loss will reach $0$ before $t_* = 2 \sqrt{\frac{S_0}{\mu}}$ because $S_t \leq \frac{1}{4} \left( \sqrt{\mu}t - 2 \sqrt{S_0}\right)^2$;
2. Phase 2 with $\beta := \frac{\eta}{2} \left( \mathcal{L}_{\tau} - \mu d \hat{q}^2 - \frac{m^2 \mu^2}{\sigma^2 L }\right)$ and $\alpha:= \frac{\sqrt{2} m \mu}{\sqrt{L}\sigma}$,
$$
\mathbb{E}[S_t] \leq \frac{\beta^2 \left( \mathcal{W}\left( \frac{(\beta + \sqrt{S_0} \alpha)}{\beta} \exp\left(-\frac{\alpha^2 t - 2 \sqrt{S_0} \alpha}{2 \beta} - 1 \right) \right) + 1 \right)^2}{\alpha^2} \overset{t \rightarrow \infty}{\rightarrow} \frac{\beta^2}{\alpha^2},
$$
where $\mathcal{W}$ is the Lambert $\mathcal{W}$ function;
3. Phase 3, it is the same as Phase 2 but $\beta := \eta \left( \frac{\mathcal{L}_{\tau}}{2} - \frac{2 \mu^2}{\pi \sigma^2 L }\right)$ and $\alpha:= 2 \sqrt{\frac{2}{\pi}} \frac{\mu}{\sqrt{L}\sigma}$.
**Please, find an empirical validation in Figure 1.b of the attached .pdf file.**
---
Rebuttal 2:
Title: Reviewer's curiosity: Noise Structure and Conjecture on Spiking Phenomena
Comment: **_Given the length limit for the Rebuttal, we decided to include these minor results in an Official Comment._**
**Continuation of Answer to Q4 - The Third Noise Structure:**
[2] discusses two possible assumptions on $\Sigma$: $\|\Sigma(x)\| \leq C f(x)$ and $\|\Sigma(x)\| \leq C f(x)\left[1+|x|^2\right]$. Even though none was in line with the prescription of the Reviewer, we still thought that the one they used, i.e. $\Sigma = C f(x) I_d$ as per Section 2.4, is interesting. Therefore, we take $\Sigma := \sigma^2 f(x) I_d$, where we changed the constant to $\sigma^2$ to maintain consistency with the rest of our paper.
Under this assumption, we have that for $Y_t := \frac{\nabla f(X_t)}{\sqrt{2 f(X_t)} \sigma}$, Corollary 3.3 becomes:
\begin{align}
d X_t = - Erf \left( Y_t \right) dt + \sqrt{\eta} \sqrt{I_d - diag(Erf \left(Y_t \right))^2} d W_t.
\end{align}
As a consequence, Lemma 3.5 becomes:
Let $f$ be $\mu$-strongly convex, $S_t:=f(X_t) - f(x_*)$, and $Tr(\nabla^2 f(x)) \leq \mathcal{L}_{\tau}$. Then, during
1. Phase 1, the loss will reach $0$ before $t_* = 2 \sqrt{\frac{S_0}{\mu}}$ because $S_t \leq \frac{1}{4} \left( \sqrt{\mu}t - 2 \sqrt{S_0}\right)^2$;
2. Phase 2 with $\beta := \frac{\eta}{2} \left( \mathcal{L}_{\tau} - \mu d \hat{q}^2 - \frac{m^2 \mu^2}{\sigma^2}\right)$ and $\alpha:= \frac{\sqrt{2} m \mu}{\sigma}$,
\begin{equation}
\mathbb{E}[S_t] \leq \frac{\beta^2 \left( \mathcal{W}\left( \frac{(\beta + \sqrt{S_0} \alpha)}{\beta} \exp\left(-\frac{\alpha^2 t - 2 \sqrt{S_0} \alpha}{2 \beta} - 1 \right) \right) + 1 \right)^2}{\alpha^2} \overset{t \rightarrow \infty}{\rightarrow} \frac{\beta^2}{\alpha^2},\end{equation}
where $\mathcal{W}$ is the Lambert $\mathcal{W}$ function.
3. Phase 3 it is the same as Phase 2 but with $\beta := \eta \left( \frac{\mathcal{L}_{\tau}}{2} - \frac{2 \mu^2}{\pi \sigma^2}\right)$ and $\alpha:= 2 \sqrt{\frac{2}{\pi}} \frac{\mu}{\sigma}$.
_Please, find an empirical validation of these bounds in Figure 1.c of the attached .pdf file._
**Continuation of Answer to Q3 - Reviewer's curiosity: Conjecture on Spiking Phenomena:**
This is a very interesting question that we do not address in this paper. While this is not a fundamental element for the flow and contribution of our paper, we gladly try to answer it, both for our and the Reviewer's curiosity.
Although we cannot answer this in the general case, we offer the following conjecture to provide an intuition of how one could explain the spiking behavior of the mentioned optimizers.
Since the SDE of RMSprop is simpler to work with, we restrict ourselves to this case: Generalizing is only a matter of technicalities.
Let us remind that the SDE of RMSprop is
$$
d X_t = - V_t^{-\frac{1}{2}} (\nabla f(X_t) dt + \sqrt{\eta} \Sigma(X_t)^{\frac{1}{2}} d W_t)
$$
$$
d V_t = \rho\left( (\nabla f(X_t))^2 + diag(\Sigma(X_t)) - V_t\right) dt,
$$
where $\beta = 1 - \eta \rho$.
Intuitively, the dynamics of the parameters $X_t$ is a preconditioned version of SGD. Much differently, $V_t$ is a process that tracks the squared gradient and its noise.
This implies that the expected iterates follow the dynamics:
\begin{equation}
d \mathbb{E}[X_t] = - \mathbb{E}\left[\frac{\nabla f(X_t)}{\sqrt{V_t}}\right] dt.
\end{equation}
Consistently with the noise structure proposed by the Reviewer and used in many papers (see [3] and references therein), let us assume that the covariance of the noise $\Sigma(x)$ scales proportionally to the loss function, e.g. $\Sigma \sim f(x)$.
Spikes seem to happen when the loss is essentially $0$, meaning that $\nabla f(X_t) \sim 0$, $\Sigma \sim 0$, and $V_t \sim 0$. However, if we now draw a minibatch of data for which the gradient is not $0$, e.g. some data points that are outliers, $V_t$ might not have the time to "catch up" with this anomaly. Therefore, the numerator $\nabla f(X_t)$ is non-$0$ while the denominator $\sqrt{V_t}$ is still essentially $0$, meaning that the ratio $\frac{\nabla f(X_t)}{\sqrt{V_t}}$ spikes to infinity, drastically disturbing the dynamics of the iterates and, in turn, that of the loss function, which might spike.
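The conjectured mechanism can be illustrated with a toy 1-D RMSprop-style run (illustrative constants; the "outlier minibatch" is modeled as a single large gradient after a long flat stretch):

```python
import math

beta, eps = 0.99, 1e-12
v = 1.0
updates = []
# 1000 steps with essentially zero gradient (loss ~ 0), then outliers appear.
grads = [0.0] * 1000 + [1.0] * 50
for g in grads:
    v = beta * v + (1 - beta) * g * g        # V_t tracks the squared gradient
    updates.append(abs(g) / (math.sqrt(v) + eps))
# During the flat stretch v decays toward 0; the first outlier gradient hits a
# still-tiny denominator, so the update spikes before v can catch up, after
# which the updates relax back toward the steady value.
```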
---
Rebuttal Comment 2.1:
Comment: Many thanks to the authors for your careful explanation and detailed rebuttal. I feel that this paper is of great help in understanding signSGD/Adam. I have raised my score.
---
Reply to Comment 2.1.1:
Title: Thanks!
Comment: Dear Reviewer,
Thank you for your kind words and the updated score: We truly appreciate it.
Best regards,
The Authors | null | null | Rebuttal 1:
Rebuttal: Dear Reviewers,
We sincerely appreciate your thorough reviews, insightful comments, and interesting questions regarding our paper: Your feedback has helped enhance our work.
The considerable time and effort we devoted during this rebuttal period were rewarding, as we derived new interesting insights that complemented our paper and made it even more interesting and rich.
We are pleased to report that we have addressed your questions and comments comprehensively, exploring new settings as a result: These responses are detailed in our rebuttals to each of the Reviewers and will be incorporated into the final version of the paper.
We look forward to the upcoming author-reviewer discussion period and **kindly ask you to re-evaluate our paper, considering raising your scores and confidence in your assessments.**
Thank you for your attention.
Best regards,
The Authors
Pdf: /pdf/21bb860bf5f72427c916b97e6a7dfdefbb6d2c04.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
One for All: Multi-Domain Joint Training for Point Cloud Based 3D Object Detection | Accept (poster) | Summary: This paper proposes OneDet3D, a universal one-for-all model that addresses 3D detection across various domains, including both indoor and outdoor scenes. It tackles two primary issues: data-level interference caused by differences in the point clouds themselves and category-level interference caused by label conflicts among categories across different domains.
Strengths: 1. The paper introduces a universal one-for-all 3D detection model applicable across different domains, using only one set of parameters.
2. The proposed method demonstrates superior performance and remarkable generalization ability compared to existing state-of-the-art approaches.
Weaknesses: 1. In lines 60-62, the authors claim that 3D sparse convolution is better than point-based feature extractors due to its robustness to domain gaps. Could the authors elaborate on this in detail?
2. Regarding the domain router, how is the domain label n obtained? Does data from the same dataset share the same domain label?
3. In Table 1, different datasets have different views. I wonder if any design could be implemented to tackle this difference for better generalization ability.
4. In Equation 1, experiments regarding the hyperparameters α and c are missing. The values of these hyperparameters are not mentioned.
5. In Table 2, the best performance achieved by existing methods is not highlighted in bold.
6. Table 4 underlines the second-best performance, while Table 5 does not. It would be beneficial to maintain consistency.
Technical Quality: 4
Clarity: 3
Questions for Authors: Please refer to the weakness section.
Confidence: 5
Soundness: 4
Presentation: 3
Contribution: 4
Limitations: N/A
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: In lines 60-62, the authors claim that 3D sparse convolution is better than point-based feature extractors due to its robustness to domain gaps. Could the authors elaborate on this in detail?**
A1:
Table 3-1: Comparison with point-based feature extractor
| | SUN RGB-D | ScanNet | KITTI | nuScenes |
| :--: | :--: | :--: | :--: | :--: |
|point-based feature extractor|28.3|27.2|15.5|12.1|
|3D sparse convolution based feature extractor|65.0|70.9|84.2|80.9|
Point-wise feature extractors typically exploit metric space distances to learn local features. They extract features from point clouds by leveraging geometric characteristics through operations such as sampling and grouping. These operations heavily rely on the geometric information and metric space distances inherent in point clouds. Consequently, when dealing with point clouds from different domains, especially between indoor and outdoor scenes, the vast differences in scene size, object dimensions, and other aspects can severely disrupt the ability of point-wise feature extractors to learn generalizable features. In contrast, 3D sparse convolution operates directly on the voxelized feature space, making it robust to the difference in point clouds. This makes it better suited for the requirements of multi-domain joint training.
We also conduct the comparative experiments and list the results in the above Tab. 3-1. As can be seen, 3D sparse convolution performs significantly better than the point-wise feature extractor. This further demonstrates the effectiveness of 3D sparse convolution when it comes to the large domain gap in point clouds, compared to point-wise feature extractors.
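To make the contrast concrete, here is a minimal sketch of the voxelization step that feeds 3D sparse convolution (a generic illustration, not the authors' implementation):

```python
import numpy as np

def voxelize(points, voxel_size):
    """Map raw (N, 3) points to integer voxel coordinates. Sparse convolution
    then operates only on the occupied voxels, so indoor and outdoor scenes
    simply yield different sets of active sites rather than requiring
    geometry-dependent sampling/grouping."""
    coords = np.floor(points / voxel_size).astype(np.int64)
    unique_coords, point_to_voxel = np.unique(coords, axis=0, return_inverse=True)
    return unique_coords, point_to_voxel

# An indoor-scale and an outdoor-scale cloud go through the same operation;
# only the voxel_size (e.g. small for indoor, larger for outdoor) changes.
```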
**Q2: Regarding the domain router, how is the domain label n obtained? Does data from the same dataset share the same domain label?**
A2: We will number the n datasets from 0 to n−1, assigning these numbers as the dataset classification labels. Data from the same dataset will share the same domain label, which will then be used for classification within the domain router.
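Concretely, the labeling scheme amounts to (a trivial sketch; dataset names are the four from Tab. 3-1 above):

```python
# Datasets are numbered 0 .. n-1; that number is the domain label shared by
# every sample of the dataset and used as the domain router's classification
# target.
datasets = ["SUN RGB-D", "ScanNet", "KITTI", "nuScenes"]
domain_label = {name: idx for idx, name in enumerate(datasets)}
```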
**Q3: In Table 1, different datasets have different views. I wonder if any design could be implemented to tackle this difference for better generalization ability.**
A3: Both the model architecture and the method designs support better generalization across view differences. On the one hand, our OneDet3D adopts a fully sparse architecture. This architecture operates directly on points without requiring dense features of fixed size, making it more flexible and robust in feature extraction when dealing with datasets captured from different views. On the other hand, in domain-aware partitioning, scatter partitioning is used to partition the normalization layers for point clouds from different domains. This prevents data-level interference between point clouds from different domains, thereby effectively addressing the issue of different views.
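A minimal sketch of what domain-partitioned normalization statistics look like (our hedged reading of the scatter-partitioning idea; the actual layer in the paper may differ):

```python
import numpy as np

def scatter_partitioned_norm(feats, domain_ids, n_domains, eps=1e-5):
    """Compute normalization statistics separately per domain, so indoor and
    outdoor point clouds do not interfere with each other at the data level."""
    out = np.empty_like(feats)
    for d in range(n_domains):
        mask = domain_ids == d
        if mask.any():
            mu = feats[mask].mean(axis=0)
            var = feats[mask].var(axis=0)
            out[mask] = (feats[mask] - mu) / np.sqrt(var + eps)
    return out
```

Each domain's features are standardized against its own statistics, regardless of how different the raw scales of the domains are.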
**Q4: In Equation 1, experiments regarding the hyperparameters $\alpha$ and $c$ are missing. The values of these hyperparameters are not mentioned.**
A4:
$\alpha$ is set to 0.25, and $\xi$ is set to 2.0. $c$ is not a hyperparameter and is just the binary target class label.
To illustrate its reasonability, we also conduct the ablation study about these two hyperparameters on the SUN RGB-D dataset, as in the below Tab. 3-2 and Tab. 3-3. As can be seen, our model is relatively robust to the choice of hyperparameters and we utilize the optimal ones.
Table 3-2: Hyperparameter analysis on $\alpha$
| $\alpha$ | AP25 | AP50 |
| :--: | :--: | :--: |
|0.1|59.7|40.2|
|0.2|62.8|47.1|
|0.25|65.0|51.3|
|0.3|63.6|48.9|
Table 3-3: Hyperparameter analysis on $\xi$
| $\xi$ | AP25 | AP50 |
| :--: | :--: | :--: |
|1.0|55.0|35.7|
|1.5|63.2|48.3|
|2.0|65.0|51.3|
|2.5|64.3|29.4|
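For context, $\alpha$ and $\xi$ play the roles of the balancing and focusing parameters in a focal-loss-style objective; a sketch assuming that standard form (Equation 1 itself is in the paper and may differ in details):

```python
import math

def focal_loss(p, c, alpha=0.25, xi=2.0):
    """Binary focal-loss-style term with the values used above: alpha = 0.25,
    xi = 2.0; p is the predicted probability, c the binary target class label."""
    p_t = p if c == 1 else 1.0 - p          # probability of the true class
    a_t = alpha if c == 1 else 1.0 - alpha  # class-balancing weight
    return -a_t * (1.0 - p_t) ** xi * math.log(p_t)

# The focusing term (1 - p_t)^xi down-weights already well-classified
# examples relative to hard ones.
```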
**Q5: In Table 2, the best performance achieved by existing methods is not highlighted in bold.**
A5: Thank you for your suggestion. We will highlight the best performance achieved by existing methods in Table 2 in bold.
**Q6: Table 4 underlines the second-best performance, while Table 5 does not. It would be beneficial to maintain consistency.**
A6: Thanks for your suggestion. We will underline the second-best performance for both in the final version to maintain consistency.
---
Rebuttal Comment 1.1:
Comment: Thank you for providing the additional experiments and detailed explanations in your rebuttal. It addressed my concerns effectively. | Summary: This paper proposes OneDet3D, which is a multi-domain jointly trained point cloud object detector for universal 3D object detection. It is the first 3D detector that supports point clouds from both indoor and outdoor scenes simultaneously with only one set of parameters. The experiments are conducted on multiple indoor and outdoor benchmarks.
Strengths: - The proposed techniques are simple yet effective with intuitive explanations.
- This work showcases that multi-domain training can achieve better performance than single-domain training, which is a valuable observation.
Weaknesses: - The performance in Table 2 for the nuScenes dataset is strange, where AP is not aligned with the numbers reported by other methods.
- The sizes of the adopted datasets are quite different. Are any resampling strategies adopted? How to mix them up? The training scheme and implementation details should be added, such as training schedule, data augmentation, and so on.
- The attributes of point clouds are different. For example, indoor datasets may contain RGB information and outdoor data such as Waymo may contain elongation and timestamp information. How to deal with the different channel sizes? A lot of details are ignored by the authors.
- There are many basic grammar issues. For example, "is joint training on" should be "is jointly trained on".
- I notice the authors claim and adopt fully sparse architecture. However, there is no comparison or discussion with previous work with fully sparse structures such as [1 - 4]. What are the differences between the proposed architecture and theirs? Do the issues they addressed still exist?
- Sec. 3.3 is hard to follow and should be rewritten with more details and sufficient explanation. (1) The description of "we utilize language vocabulary embeddings from CLIP" should be more detailed. Does the network predict CLIP embedding? How to supervise? (2) Why is it necessary to convert sparse point features to dense features in Sec. 3.3? (3) What is the specific definition of the dense features? (4) Why is the back propagation unfeasible? (5) How many classification branches does it have? What does the "both branches" in L215 mean?
- Lack of comparison with other similar methods such as [5,6] in Table 4 & 5. \
[1] Fully Sparse 3d object detection \
[2] Voxelnext: Fully sparse voxelnet for 3d object detection and tracking \
[3] FSD V2: Improving Fully Sparse 3D Object Detection with Virtual Voxels \
[4] SAFDNet: A Simple and Effective Network for Fully Sparse 3D Object Detection \
[5] ST3D: Self-training for Unsupervised Domain Adaptation on 3D Object Detection \
[6] Uni3D: A Unified Baseline for Multi-dataset 3D Object Detection
Technical Quality: 2
Clarity: 2
Questions for Authors: See the weakness part. I would like to increase the score if my concerns are well addressed.
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 3
Limitations: The authors do not discuss the limitations and social impacts, which should be added.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: The performance in Table 2 for the nuScenes dataset is strange.**
A1: The reason is that we train and evaluate only on the car category of the nuScenes dataset. Since only the car class is involved in training, we do not need the CBGS sampler (which significantly increases the number of training iterations) for class-balance optimization during training. These factors cause our results to differ from the numbers reported by other methods. For example, for the UVTR method, the original paper reports an AP of 60.9% for all 10 classes, while its AP for the car category is 84.8%. Without the CBGS sampler, the car AP becomes 80.6%, which is the result we report in our paper. We will clarify this in the final version.
**Q2: Are any resampling strategies adopted? How to mix them up? The training scheme and implementation details should be added, such as training schedule, data augmentation, and so on.**
A2: When mixing different datasets, we first perform dataset-wise uniform sampling across different datasets. Then, we sample individual point clouds from each dataset for training. Detailed explanations of the training scheme and implementation details are provided in the appendix (L460). The augmentations mainly include global translation, rotation, scaling, and ground-truth sampling augmentation. We train the model with the AdamW optimizer for 20 epochs. The initial learning rate is set to 0.0001 and is updated in a cyclic manner.
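A minimal sketch of this two-level sampling, under the assumption that each dataset is a list of point clouds (the function and variable names are hypothetical, not the authors' code): first a dataset is drawn uniformly at random, then a sample is drawn from it, so small datasets are not drowned out by large ones.

```python
import random

def sample_batch(datasets, batch_size, seed=None):
    """Dataset-wise uniform mixing: pick a dataset uniformly at random,
    then draw one point cloud from it. Illustrative sketch only."""
    rng = random.Random(seed)
    batch = []
    for _ in range(batch_size):
        ds = rng.choice(list(datasets))        # uniform over datasets, not over samples
        batch.append(rng.choice(datasets[ds])) # uniform within the chosen dataset
    return batch
```

With this scheme, a dataset with 2k scenes and one with 200k scenes contribute to a batch with equal probability.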
**Q3: How to deal with the different channel sizes?**
A3: During multi-dataset training, the attribute channel size is set to the least common multiple of the attribute dimensions from the different datasets, which is 6-dim. The attributes of the point clouds from different datasets are repeated accordingly to match this unified channel size.
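For concreteness, the repetition scheme described above can be sketched as follows (an illustrative Python sketch; the 6-dim target follows this answer, while the function name and implementation are our assumption, not the authors' code):

```python
def unify_attributes(points, target_dim=6):
    """Tile each point's attribute vector so all datasets share one
    channel size. target_dim is the least common multiple of the
    per-dataset attribute dimensions (6 in the rebuttal's setting)."""
    unified = []
    for attrs in points:
        reps = target_dim // len(attrs)  # assumes len(attrs) divides target_dim
        unified.append(list(attrs) * reps)
    return unified

# e.g. a 3-dim RGB attribute repeated twice to reach the unified 6 dims
print(unify_attributes([[0.1, 0.2, 0.3]]))  # → [[0.1, 0.2, 0.3, 0.1, 0.2, 0.3]]
```

A 1-dim intensity attribute would be repeated six times, a 2-dim elongation/timestamp pair three times, and a native 6-dim attribute passes through unchanged.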
**Q4: There are many basic grammar issues.**
A4: Thanks for your suggestions. We will correct them in the final version.
**Q5: There is no comparison or discussion with previous work with fully sparse structures such as [1 - 4]. What are the differences between the proposed architecture and theirs? Do the issues they addressed still exist?**
A5:
Table 2-1: Comparison with previous work with fully sparse structures
| |SUN RGB-D|ScanNet|KITTI|nuScenes|
| :--: | :--: | :--: | :--: | :--: |
|VoxelNeXt|8.3|9.9|68.4|71.0|
|FSD v2|13.6|12.9|60.1|72.4|
|SAFDNet |3.2|1.9|38.7|70.4|
|OneDet3D |65.0|70.9|84.2|80.9|
We compare our method with several existing fully sparse architectures, as shown above. These methods typically aim to achieve a more elegant and efficient network design. However, they are generally designed for outdoor scenes. Consequently, the specific designs of their detection heads, such as vote-based methods or BEV detection, are influenced by the structure and content of outdoor point clouds and are thus only applicable to outdoor scenes. Additionally, these methods lack designs to address multi-dataset interference, resulting in performance degradation across all datasets during multi-dataset joint training.
In contrast, the anchor-free detection head of our OneDet3D is more versatile for both indoor and outdoor scenes. Furthermore, domain-aware partitioning and language-guided classification can alleviate multi-dataset interference. Therefore, our approach provides a more universal solution for 3D detection.
**Q6: Sec. 3.3 is hard to follow and should be rewritten with more details and sufficient explanation.**
A6:
* We use the prompt "a photo of {name}" to extract language embeddings of the category names from different datasets using CLIP. These language embeddings are then used as parameters of the fully connected layer to perform the final classification, and are kept frozen during training. Our network does not need to predict such embeddings.
* Since the language embeddings from CLIP are dense features, to utilize the fully connected layer with such embeddings for classification, it is necessary to convert the sparse point features to dense features.
* Dense features refer to commonly used non-sparse features. They are expressed as multi-dimensional matrices and stored as tensors within the network for computation.
* Since the language embeddings from CLIP are kept frozen during training, the parameters in the fully connected layer for classification are frozen. Backpropagation is thus unfeasible in the fully connected layer. This makes the network training relatively difficult.
* We have two classification branches. One branch is the frozen fully connected layer utilizing CLIP embeddings. The other is a sparse convolution layer, which is trainable and is utilized for class-agnostic classification. “Both branches” in L215 thus refers to these two branches.
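Putting the first and fourth points together, the frozen branch can be sketched as a dot product against fixed class embeddings (an illustrative sketch with assumed shapes and names, not the authors' implementation):

```python
def classify_with_frozen_embeddings(feature, class_embeddings):
    """The frozen CLIP text embeddings act as the weight rows of a fully
    connected layer, so class-specific classification reduces to dot
    products between the dense point feature and each class embedding.
    No gradients flow into class_embeddings; they stay frozen."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    logits = [dot(feature, emb) for emb in class_embeddings]
    return max(range(len(logits)), key=lambda i: logits[i])
```

The trainable class-agnostic branch would supply the objectness signal alongside these frozen class-specific logits.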
**Q7: Lack of comparison with other similar methods such as [5,6].**
A7:
Table 2-2: Comparison with Uni3D
| | SUN RGB-D | ScanNet | KITTI | nuScenes |
| :--: | :--: | :--: | :--: | :--: |
|Uni3D|9.7|5.6|75.2|76.7|
|OneDet3D |65.0|70.9|84.2|80.9|
* Uni3D: We compare with Uni3D in the above Tab. 2-2. It can be seen that Uni3D can only handle outdoor point clouds, while OneDet3D provides a universal solution for all point clouds.
* ST3D: ST3D is designed for the unsupervised domain adaptation (UDA) task. Firstly, ST3D can only be applied to outdoor scenes and cannot be used for indoor scenes. Secondly, due to its focus on UDA, ST3D requires unlabeled target domain point clouds during training to perform cross-dataset experiments at test time. The trained model can only be used in the corresponding target domain, making this paradigm relatively inflexible. In contrast, our OneDet3D is more universal. It can be used in both indoor and outdoor scenes. Moreover, once trained, it can be directly applied to various scenes without requiring point clouds from the corresponding domain during training. Therefore, our model is more general and flexible.
---
Rebuttal Comment 1.1:
Comment: Thanks so much for your response. It addressed most of my concerns. I appreciate the efforts to implement fully sparse methods such as voxelnext and fsdv2 in the indoor datasets, which are supposed to be added into the main paper. I would like to increase my score. | Summary: This manuscript introduces OneDet3D, a universal point cloud-based 3D object detector designed to address the challenges of multi-domain joint training. The primary motivation is to overcome the limitations of existing 3D detectors, which are typically trained and tested on single datasets, restricting their generalization across different domains. OneDet3D aims to provide a unified solution that can handle diverse indoor and outdoor scenes using a single set of parameters.
The authors claimed the following contributions of OneDet3D:
- Domain-Aware Partitioning: This technique aims to address data-level interference caused by differences in point cloud structures across domains. The parameters related to data scatter and context learning are partitioned and guided by a domain router, allowing the model to learn domain-specific features without increasing complexity.
- Language-Guided Classification: By incorporating text modality through CLIP embeddings, OneDet3D mitigates category-level interference among different datasets. This approach allows for better generalization to new domains and categories.
- Fully Sparse Architecture: The use of 3D sparse convolution and an anchor-free detection head makes the model robust to domain gaps and efficient in handling point clouds from various domains.
Experiments on datasets like SUN RGB-D, ScanNet, KITTI, and nuScenes demonstrate the effectiveness of OneDet3D. The model achieves promising performance in both closed-vocabulary and open-vocabulary settings, showing improvements over existing methods and strong generalization abilities.
Strengths: (+) OneDet3D addresses the challenge in 3D object detection by introducing multi-domain joint training, which enhances the model's ability to generalize across various indoor and outdoor scenes. Such an endeavor is in line with the current research trend for point cloud 3D perception.
(+) The manuscript includes extensive experiments and evaluations on multiple datasets, showcasing the model's superior performance and generalization capabilities compared to some state-of-the-art methods.
(+) The overall OneDet3D framework seems standard and scalable; with more datasets and larger model sizes involved, the performance could be further improved.
Weaknesses: (-) The overall OneDet3D framework is a combination of several previous approaches, which might not demonstrate a strong novelty in the related area:
- The domain router and context partitioning from Sec. 3.2 is similar to what [R1] and [R2] did for reducing domain differences.
- The scatter partitioning method in Sec. 3.2 is closely related to Uni3D [R3] (see their Sec. 3.3).
- The language-guided classification in Sec. 3.3 shares the same intuition with [R4] and [R2].
(-) The experimental comparisons could benefit from involving more recent 3D object detectors. For example, the latest model in Tab. 2 (UVTR) is from two years ago; while most of the other models are from 2020 (or even earlier).
(-) The overall elaboration of this manuscript deserves further improvements. Several claims are made without supporting evidence or references. Besides, most of the technical details are given in the text; having more graphical illustrations or algorithm flows could reduce the redundancy in the elaboration and help readers better understand the proposed method.
---
### References:
- [R1] Towards Universal Object Detection by Domain Attention. CVPR, 2019.
- [R2] Multi-Space Alignments Towards Universal LiDAR Segmentation. CVPR, 2024.
- [R3] Uni3D: A Unified Baseline for Multi-Dataset 3D Object Detection. CVPR, 2023.
- [R4] DaTaSeg: Taming A Universal Multi-Dataset Multi-Task Segmentation Model. NeurIPS, 2023.
Technical Quality: 2
Clarity: 2
Questions for Authors: - **Q1:** As mentioned in Weakness 1, it is recommended to make a clearer comparison and have additional discussions with closely related works from existing literature. Putting the Related Work section behind the Introduction section and adding more detailed analyses and discussions with existing works can be beneficial. Including experimental comparisons and ablation studies with existing works, such as Uni3D, could further justify the effectiveness of the proposed OneDet3D.
- **Q2:** As mentioned in Weakness 2, supplementing the results with more recent single-dataset training approaches could further improve the comprehensiveness of the benchmark studies.
- **Q3:** As mentioned in Weakness 3, the manuscript could benefit from having more graphical illustrations or algorithm flows, instead of just plain text descriptions. This is also in line with the style of ML conferences.
- **Q4:** For the scatter partitioning method in Sec. 3.2: Since the dataset-specific statistics are used, how does OneDet3D handle a new point cloud with unknown statistics during inference?
- **Q5:** How does OneDet3D handle the class mapping differences using the language-guided classification in Sec. 3.3? For example, how to handle cases like the different definitions of `bicycle` and `bicyclist` (in KITTI and Waymo)?
Confidence: 4
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: The authors acknowledge several limitations in their approach: The current focus on supervised learning with fully annotated point clouds limits scalability. Future work should explore weakly labeled or unlabeled data to reduce reliance on extensive annotations.
Additionally, the manuscript lacks a detailed analysis of the computational cost associated with the proposed methods, which is crucial for assessing real-time application feasibility. What is more, there is a risk of overfitting to seen domains during multi-domain training. More experiments on unseen domains would strengthen the claims of generalization. The scalability and generalizability of OneDet3D to other types of sensors and different environmental settings have not been extensively discussed. Further research is needed to assess the model's robustness in diverse real-world scenarios.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 4
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: **Q1: It is recommended to make a clearer comparison and have additional discussions with closely related works**
A1:
* Our main contribution is that we propose **a universal 3D detector that can directly generalize across various indoor and outdoor point clouds**, once trained. Existing works only support either indoor or outdoor point clouds and cannot achieve cross-dataset, *especially cross-scene, generalization*. **We are the first** to implement a universal solution for all types of point clouds, which is our core novelty.
* For this, we tackle multi-dataset joint training involving both indoor and outdoor point clouds. There exist significant differences in structure and content between indoor and outdoor point clouds, making this task highly challenging. In contrast, [R1] and [R4] deal with multi-dataset training with RGB images, [R2] and [R3] focus on outdoor-only multi-dataset training, where the discrepancies between different datasets are far less than those between indoor and outdoor point clouds. OneDet3D demonstrates that despite these substantial differences, 3D detection can still be addressed with a universal solution. This is a crucial advancement for generalization in the 3D domain.
* Our designs also differ significantly from existing methods. [R1] and [R2] merely perform knowledge aggregation across different domains. In contrast, we design a domain router to direct information flow, better preventing data-level interference. [R3] uses dataset-aware channel-wise calculation for mean and variance in BN, while we decouple the scaling and shifting parameters. Compared with [R4], we also address structural optimization, using sparse convolution for class-agnostic and FC layers for class-specific classification. In short, our methods are specifically designed to address the requirements of this challenging problem, making them fundamentally different from existing methods. These designs form a comprehensive system, spanning model architecture, method design, and training, making it *more than just a simple combination*.
* We compare with Uni3D in the below Tab. 1-1. Uni3D can only handle outdoor point clouds, while OneDet3D provides a universal solution for all point clouds. We will include a discussion of these methods and reposition the related work section to make the discussions more comprehensive.
Table 1-1: Comparison with Uni3D
| | SUN RGB-D | ScanNet | KITTI | nuScenes |
| :--: | :--: | :--: | :--: | :--: |
|Uni3D|9.7|5.6|75.2|76.7|
|OneDet3D |65.0|70.9|84.2|80.9|
**Q2: Supplementing the results with more recent single-dataset training approaches could further improve the comprehensiveness**
A2:
Table 1-2: Comparison with more recent methods
| | | SUN RGB-D|ScanNet|KITTI|nuScenes|
| :--: | :--: | :--: | :--: | :--: | :--: |
|single-dataset training|VoxelNeXt [CVPR23]|18.1|15.4|77.4|80.0|
| | FSD v2 [Arxiv 2308]|25.3|29.1|75.6|82.1|
| | SAFDNet [CVPR24]|12.9|11.6|80.3|84.7|
|multi-dataset training|VoxelNeXt [CVPR23]|8.3|9.9|68.4|71.0|
| | FSD v2 [Arxiv 2308]|13.6|12.9|60.1|72.4|
| | SAFDNet [CVPR24]|3.2|1.9|38.7|70.4|
| |OneDet3D |65.0|70.9|84.2|80.9|
These recent methods target specific 3D scenes. They may outperform OneDet3D on those particular datasets, but their AP tends to drop when the scene changes, especially when switching from outdoor to indoor. After multi-dataset training, due to dataset-aware interference, AP on all datasets degrades severely. In such multi-dataset scenarios, OneDet3D still achieves the best results. Even compared with these recent methods, OneDet3D is still the first universal 3D detector that can generalize across various point clouds.
**Q3: The manuscript could benefit from having more graphical illustrations or algorithm flows.**
A3: We include a flowchart in Fig. 1 of the rebuttal document. We will incorporate this as an algorithm to provide a clearer explanation.
**Q4: How does OneDet3D handle a new point cloud with unknown statistics during inference?**
A4:
Table 1-3: Cross-dataset performance on ScanNet
| | trained on|AP25|
| :--: | :--: | :--: |
|VoteNet|SUN RGB-D|15.3|
|FCAF3D|SUN RGB-D|26.1|
|OneDet3D|SUN RGB-D|29.2|
|OneDet3D|SUN RGB-D, KITTI|31.1|
The primary dataset-specific statistics, i.e., the mean and variance, are calculated from the current batch of point clouds. For point clouds from a new domain, the mean and variance are thus computed from the current data. We primarily partition the scaling and shifting parameters. For new-domain point clouds, the domain router calculates the domain probability to assess their similarity to existing domains. As shown in Equ. 3 of our paper, the domain probability weights the outputs from the different dataset-specific statistics, addressing the inference issue for new domains.
We further train our model on SUN RGB-D and KITTI, and test it on ScanNet. The results show that our method remains effective on new point clouds with unknown statistics, because the weighting allows the model to select the dataset-specific statistics most similar to the test data. This further validates the soundness of our design.
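An illustrative sketch of this weighting (variable names are ours; the real Equ. 3 operates on network feature tensors rather than Python lists):

```python
def domain_weighted_affine(x_norm, domain_probs, scales, shifts):
    """The domain router's probabilities weight the dataset-specific
    scaling and shifting parameters, which are then applied to the
    batch-normalized feature. For an unseen domain, domain_probs
    blends the statistics of the most similar training domains."""
    scale = sum(p * s for p, s in zip(domain_probs, scales))
    shift = sum(p * b for p, b in zip(domain_probs, shifts))
    return [scale * v + shift for v in x_norm]
```

A one-hot `domain_probs` recovers a single training domain's parameters, while a soft distribution interpolates between them for new domains.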
**Q5: How does OneDet3D handle the class mapping differences using the language-guided classification?**
A5: Point clouds from different domains use their own CLIP embeddings for classification. They are independent in class-specific classification. As a result, point clouds from different domains can be classified independently, alleviating class mapping differences. Experiments show that despite the different definitions of "car" in KITTI and nuScenes, the performance improves after multi-domain joint training, demonstrating the effectiveness of language-guided classification.
**Q6: The manuscript lacks a detailed analysis of the computational cost**
A6:
Table 1-4: Efficiency comparison
| |FPS|
| :--: | :--: |
|CenterPoint|32.8|
|UVTR|20.6|
|OneDet3D|34.9|
As can be seen, due to the fully sparse architecture, our network is highly efficient.
---
Rebuttal 2:
Comment: Dear Reviewer PVw6:
We thank you for the precious review time and valuable comments. We have provided corresponding responses and results, which we believe have covered your concerns. We hope to further discuss with you whether or not your concerns have been addressed. Please let us know if you still have any unclear parts of our work.
---
Rebuttal 3:
Title: Looking forward to Feedback as Discussion Deadline Approaches
Comment: Thanks for your thorough reviews, which are very helpful in improving the quality of our paper. We apologize for any inconvenience caused, but as the deadline for discussion (Aug 13 11:59 pm AoE) draws near, we would like to provide an update on our progress.
If you need further clarification or have additional questions, please don't hesitate to contact us. Again, we sincerely thank you for your time and effort in reviewing our paper.
Thanks | null | null | Rebuttal 1:
Rebuttal: We include the flowchart of our method in the PDF file here.
Pdf: /pdf/0c00c14ae92d913851451fc839bd03d6526cedc6.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |
Meta Flow Matching: Integrating Vector Fields on the Wasserstein Manifold | Reject | Summary: This paper addresses the issue of the flow matching method's lack of dependence on the data population. It proposes incorporating the initial population density into the vector field through amortization—using a Graph Neural Network (GNN) to embed the populations and adding this embedding to the input of the vector field network. The paper argues that this dependence would better model the data due to sample interactions, demonstrating improved generalization on unseen initial distributions. The method's application is showcased in perturbation drug screening.
Strengths: ### Originality
The problem setting of adding population dependence to flow matching is novel. The model framework of adding input to the vector field network using a GNN as a population encoder is also novel.
### Clarity
The paper is clearly written, with rigorous mathematical notations. The related work and introductions are especially well-written.
### Quality
The writing is good, and the experiment involves many baselines.
### Significance
The proposed method excels at generalizing to unseen populations, which is a significant improvement over existing methods, particularly when the conditions for generation are complex. The application on drug screening addresses a significant scientific problem and holds promise for personalized healthcare.
Weaknesses: 1. The paper could explain more about the meta-learning aspect of this method.
2. It could include explanations and/or ablation studies on the role of meta-learning and the GNN, especially in the synthetic experiment.
3. More detail is needed on what properties of the Wasserstein manifold of probabilities are used in the model. It is unclear how the model proposed in section 3.2 depends on the properties of the Wasserstein manifold described in section 3.1.
Technical Quality: 4
Clarity: 4
Questions for Authors: 1. How is this model different from using a GNN encoder to add the embedding to the input of the vector field, trained end-to-end? Specifically, what makes it meta-learning? Considering that $\theta$ and $\omega$ are jointly optimized to minimize $\mathcal{L}_{MFM}$, is there anything unique about the training algorithm?
2. How would this meta-learning framework compare with using the embedding from a pretrained encoder, such as a VGAE, or a classification model?
3. For the synthetic experiment, how much does the orientation-invariant embedding from the GNN help the model? Is the observed "generalization" due to the unique inductive bias of the GNN? Would any other encoder, without orientation invariance, such as an MLP, still yield good performance?
4. How well does it scale to high dimensions, considering that only 10 dimensions are used in the experiments? How well does it scale to large datasets where you need to build large KNN graphs? Would it scale effectively to high-dimensional tasks such as image generation?
5. In line 261, it states, “We use CGFM to assess if other models are fitting the data.” Is this used as an evaluation metric in the experiments?
Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 3
Limitations: The authors have adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The paper could explain more about the meta-learning aspect of this method. Specifically, what makes it meta-learning?
Our work follows the naming convention of *Meta Optimal Transport* [1] and is "meta" in the sense of amortizing learning over multiple input distributions. This is related to how "meta" is used in the meta learning setting — meta learning solves multiple learning problems simultaneously, while meta optimal transport and meta flow matching solve multiple OT or flow matching problems simultaneously. We added a section on other "meta"-based approaches in our related works and clarified our notion of "meta" in the text.
>It could include explanations and/or ablation studies on the role of meta-learning and the GNN, especially in the synthetic experiment.
We have included a primary ablation study in the main experiments (on synthetic and real data) over the KNN parameter $k$ (see the general response). We are happy to include further ablations.
>detail is needed on what properties of the Wasserstein manifold of probabilities are used in the model.
In lines 150-152, we describe our working assumptions, which motivate the use of the Wasserstein manifold geometry. We assume that (i) we solve a regression problem on the space of marginals, where a single point of the space is a distribution; (ii) the ground-truth evolutions of the marginals follow the continuity equation, i.e., the tangent space corresponds to that of the Wasserstein manifold and there are no birth/death processes; (iii) the final marginal can be uniquely determined from the initial marginal, which corresponds to the existence of an ODE on the Wasserstein manifold with non-intersecting integral curves.
>How is this model different from using a GNN encoder to add the embedding to the vector field, trained end-to-end?
We train the flow model $v_t(\cdot;\omega)$ and the embedding model $\varphi(\cdot;\theta)$ jointly using the flow matching loss (see supplementary PDF Algorithm 1). We can efficiently update model parameters $\omega$ and $\theta$ through alternating gradient updates while using the same loss. This is a desirable property since we don't need to pre-train the GNN encoder or decide on a different loss. We leave further investigation on the distribution embedding model for future work.
>Considering that $\theta$ and $\omega$ are jointly optimized to minimize $\mathcal{L}_{MFM}$, is there anything unique about the training algorithm?
This training algorithm is unique because we use the flow matching loss to train both the flow model (which is a standard use of flow matching) and also the GNN encoder for embedding initial distributions (this is novel). See supplementary PDF Algorithm 1 for training details.
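As a hedged illustration of this joint objective, the per-sample loss can be sketched as below, assuming for simplicity the straight-line interpolant $x_t = (1-t)x_0 + t x_1$ with target velocity $x_1 - x_0$ (the paper's exact interpolant and couplings may differ; all names here are ours, not the authors' code):

```python
def mfm_loss_sample(x0, x1, t, v_model, embed_model, population):
    """One-sample sketch of the joint MFM objective: the population
    embedding conditions the vector field, and both the flow model and
    the embedding model receive gradients from the same flow matching
    regression || v_t(x_t; phi(p)) - (x1 - x0) ||^2."""
    emb = embed_model(population)               # GNN-style population embedding
    xt = [(1 - t) * a + t * b for a, b in zip(x0, x1)]
    target = [b - a for a, b in zip(x0, x1)]    # straight-line velocity target
    pred = v_model(xt, t, emb)
    return sum((p - g) ** 2 for p, g in zip(pred, target))
```

In the joint setup, both `v_model` and `embed_model` would be neural networks updated from the gradient of this single scalar loss.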
>How would this meta-learning framework compare with using the embedding from a pretrained encoder, such as a VGAE, or a classification model?
This is an interesting direction, but was not the focus of our work. Our objective was to amortize learning over multiple distributions, so we can generalize to unseen distributions in the test set. We believe exploring pre-training of the distributional embedding encoder (training algorithms, losses) as well as exploring other architecture/models (VGAE, classification model) are natural and fruitful directions for future extensions of MFM. We leave this for future work and have discussed this in the text.
>how much does the orientation-invariant embedding from the GNN help the model? Would any other encoder such as an MLP, still yield good performance?
We use the permutation invariant property (not orientation-invariance) of the GNN model to learn embeddings for entire distributions. We explore the need for the GNN versus MLP by conducting an ablation over $k$. Specifically, we have included results for $k=0$, where the GNN encoder reduces to an MLP to try and deduce if using a GNN is necessary. We observe in our experiments that in some cases the MLP parameterization is sufficient for generalization across distributions, while in other settings $k>0$ performs better (see supplementary PDF).
>How well does it scale to high dimensions?
We have included additional high-dimensional experiments (no PCA used, dim=43) in the supplementary PDF (see general response). We observe that MFM performs consistently well in high dimensions as in the low dimensional settings.
>How well does it scale to large datasets where you need to build large KNN graphs?
The single-cell dataset we consider in this work has 2,500 pairs of marginals with 2k-13k cells for each source and target distribution pair. In our experiments, we show that we can efficiently train models for $k=100$ on this dataset. Training time increases with increasing $k$, but the performance gains saturate at certain values of $k$ (see updated experiments in supplementary PDF).
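For scale intuition, a brute-force kNN graph of the kind that would feed the GNN encoder can be sketched as follows (our illustration; the authors' implementation may differ):

```python
def knn_graph(points, k):
    """Return, for each point index, the indices of its k nearest
    neighbours under Euclidean distance. Quadratic in the population
    size, which is workable for populations of a few thousand cells."""
    def d2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    edges = []
    for i, p in enumerate(points):
        order = sorted((j for j in range(len(points)) if j != i),
                       key=lambda j: d2(p, points[j]))
        edges.append(order[:k])
    return edges
```

For larger populations, approximate nearest-neighbour indices would replace the quadratic scan, but the saturation of performance gains in $k$ noted above limits how large the graphs need to be.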
>Would it scale effectively to high-dimensional tasks such as image generation?
While our method can perform conditional generation (e.g., class-conditional generation of images; see updated results in the supplementary PDF), its main purpose is to condition the dynamics on the starting population, i.e., the conditional variable is the entire collection of datapoints from the initial distribution (which we embed via a GNN). We are not aware of a suitable application of our method to image generation, but we remain open to any suggestions.
>“We use CGFM to assess if other models are fitting the data.” Is this used as an evaluation metric in the experiments?
CGFM is not an evaluation metric but a baseline model that encodes source populations as one-hot-encoding vectors. CGFM indicates when the model can fit the training data and demonstrates that the test data are substantially different and cannot be generated by the same model (obviously, the test data do not correspond to any of the encodings). We have clarified this in the text.
[1] Amos, Brandon, et al. "Meta optimal transport." ICML (2022).
---
Rebuttal Comment 1.1:
Comment: I appreciate the authors' efforts in conducting ablation experiments and addressing the clarification questions.
The additional experiments have resolved my concerns in scalability and the necessity of a GNN.
I agree with the authors' clarifications (esp. with the algorithm box) on the novelty of joint training.
Therefore, I have raised my score. | Summary: This paper proposes an extension of the Conditional Generative Modeling via Flow Matching (CGFM) framework. Taking inspiration from the theory of Wasserstein gradient flows, this new framework, Meta Flow Matching, proposes to learn the push-forward mappings of multiple measures in the same measure space. This is motivated by the realistic problem of modeling single-cell perturbation data, where we want to see the response of populations of cells from patients receiving different treatments. A novelty of Meta Flow Matching is that, by combining amortized optimization and CGFM, the trained MFM velocity network can model newly observed populations _without_ knowing their labels/conditions. Two empirical benchmarks were performed to showcase the effectiveness of MFM compared to Flow Matching (FM) and CGFM.
Strengths: - The proposed method is novel enough, and the problem is well-motivated. I also find the idea of integrating a GNN to model the conditional variable quite neat. The method is based on the well-studied theory of Wasserstein gradient flows on measure spaces and the amortized optimization framework.
- The empirical benchmark, especially on real biological data, seems to showcase the strength of MFM.
- Overall the paper is quite well written and easy to follow.
Weaknesses: - The first part of the methodology section seems to be phrased as a new methodological contribution, but if I'm not mistaken this is just more or less restating the already established theory of W2 gradient flow and continuity equation (eq 14). I think the authors should put Section 3.1 into the background section (2nd Section).
- There is a lack of discussion on whether the 3 crucial assumptions (lines 145-152) are satisfied in a realistic biological setting. For example, in theory, Assumption (iii) on the unique existence of a solution to the Cauchy problem holds when the velocity field satisfies some regularity conditions -- I'm not sure this can be extended to a parameterized neural network that takes input from another (graph) neural network as an embedding function, which is hardly Lipschitz smooth in most cases.
- Algorithm boxes at the end of Section 3 would be highly welcome. Or, if the authors cannot allocate the space, I highly recommend putting two (one for training and one for sampling) into the Appendix. It is quite hard to follow how the velocity is trained in practice. For example, what function $f_t(x_0, x_1)$ did the authors take for this work? Is it still linear interpolation as in vanilla flow matching? Or does it involve adding some form of stochasticity as in stochastic interpolants or the VP-SDE in diffusion models? Is the coupling $(x_0, x_1)$ sampled randomly, or are the pairs sampled with some form of alignment as in the multisample flow matching paper (Pooladian et al. 2023)?
- This might not be the original purpose of this work, but I would love to see how MFM performs on a conditional image generation task. One could pick a simple small dataset such as CIFAR10 that already includes class labels, or better yet the ImageNet dataset. Performance on this task would be much more convincing than the synthetic experiment, which I would argue targets the same type of task.
Technical Quality: 3
Clarity: 2
Questions for Authors: See weaknesses.
Confidence: 4
Soundness: 3
Presentation: 2
Contribution: 2
Limitations: See weaknesses.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >The first part of the methodology section seems to be phrased as a new methodological contribution, but if I'm not mistaken this is just more or less restating the already established theory of W2 gradient flow and continuity equation (eq 14). I think the authors should put Section 3.1 into the background section (2nd Section).
Section 3.1 discusses ODEs rather than gradient flows on the W2 manifold, which is equivalent only if the vector field is defined as the W2 gradient of some functional. For instance, diffusion and the porous medium equation are gradient flows [1] but the Schrödinger equation is not [2]. The role of this section is to motivate the formalism used with examples. We have clarified this in the main body of the text.
> There is a lack of discussion on whether the 3 crucial assumptions (line 145-152) are satisfied in a realistic biological setting. For example, in theory, Assumption (iii) on the unique existence of the Cauchy problem stands when the velocity field satisfies some regularity conditions -- I'm not sure this can be extended to a parameterized neural network that takes input from another (graph) neural network as an embedding function, which is hardly Lipschitz smooth in most of the case.
In general, the considered ODEs on the W2 manifold correspond to PDEs on the state space, and the existence and uniqueness of their solution requires an extensive study in every particular case (e.g., see [3]). In practice, we find it rather unrealistic to recover the ground-truth PDE and the necessary assumptions on the vector field based solely on samples from the marginals as considered in Section 5. Although an interesting direction for future research, this is beyond the scope of our work.
> Algorithm boxes at the end of section 3 is highly welcome. Or if the authors cannot allocate the space, I highly recommend putting two (one for training and one for sampling) into the Appendix ... For example, what function $f(x_0, x_1)$ did the authors take for this work? Is it still linear interpolation as vanilla flow matching? Or does it involve adding some form of stochasticity as in stochastic interpolant or VP-SDE as in diffusion model? Is the coupling $(x_0, x_1)$ sampled to match randomly, or they are sampled to some form of alignment as in the multisample flow matching paper (Pooladian et al. 2023)?
Thank you for your suggestion! We have added the algorithm pseudocode into the main body of the final version of the paper (see the supplementary PDF Algorithm 1 for the pseudocode) and clarified the details in the text further (e.g., that we use independent samples from the marginals and linear interpolation as in the standard flow matching). Also, all the details of the algorithm can be found in the code supplemented to the submission.
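To make the training details above concrete, here is a minimal sketch of one conditional flow-matching training step using independent coupling and linear interpolation, as stated in the rebuttal. The function names (`cfm_training_step`, `velocity_fn`) and the plain-NumPy regression loss are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def cfm_training_step(x0, x1, h, velocity_fn):
    """One conditional flow-matching step: independent samples (x0, x1)
    from the marginals, linear interpolant x_t, and a regression of the
    model's velocity onto the interpolant's time derivative x1 - x0.
    x0, x1: (batch, dim) arrays; h: population embedding (conditioning);
    velocity_fn: hypothetical model (t, x_t, h) -> (batch, dim)."""
    t = np.random.rand(len(x0), 1)        # t ~ Uniform[0, 1]
    x_t = (1.0 - t) * x0 + t * x1         # linear interpolation
    target = x1 - x0                      # d/dt of the interpolant
    pred = velocity_fn(t, x_t, h)
    return np.mean((pred - target) ** 2)  # flow-matching loss
```

With a zero predictor and `x0 = 0`, `x1 = 1`, the loss reduces to the mean squared target, which makes the objective easy to sanity-check.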
Sampling is accomplished by taking the trained models $v_t(\cdot;\omega^*)$ and $\varphi(\cdot;\theta^*)$: given an input initial population, we first compute the population embedding $h = \varphi\left(\{x_0^j\}_{j=1}^{N'};\theta^*\right)$, then solve $x_1^j = x_0^j + \int_0^1 v_t(x_t^j \mid h, c; \omega^*)\, dt$ via an ODE solver. We have added this as Algorithm 2 in the main text.
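The sampling procedure described above can be sketched as follows. This is a hedged toy version with a forward-Euler integrator; `embed_fn` and `velocity_fn` are hypothetical stand-ins for the trained GNN $\varphi$ and velocity network $v_t$, and a real implementation would likely use an adaptive ODE solver.

```python
import numpy as np

def mfm_sample(x0, embed_fn, velocity_fn, n_steps=100):
    """Embed the initial population once, then integrate
    dx/dt = v_t(x | h) from t=0 to t=1 with Euler steps.
    x0: (n_points, dim) initial population."""
    h = embed_fn(x0)                      # population embedding h
    x, dt = x0.copy(), 1.0 / n_steps
    for i in range(n_steps):
        t = i * dt
        x = x + dt * velocity_fn(t, x, h)  # Euler step of the ODE
    return x
```

Note that the embedding `h` is computed once from the whole starting population and then held fixed during integration, which is the sense in which the dynamics are conditioned on the population rather than on a per-sample label.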
> This might not be the original purpose of this work, but I would love to see how MFM perform on conditional image generation task. One can pick a simple small dataset such as CIFAR10 that already includes class labels, or better yet ImageNet dataset. The performance in this takse will be much more convincing than the synthetic experiment, where I would argue would target the same type of task.
While our method can perform conditional generation (e.g., class-conditional generation of images, see updated results in the supplementary PDF), its main purpose is to condition the dynamics on the starting population, i.e. the conditional variable is the entire collection of datapoints from the initial distribution (which we embed via a Graph Neural Network). So far, we are not aware of a suitable application of our method for image generation, but we remain open to any suggestions.
[1] Otto, Felix. "The geometry of dissipative evolution equations: the porous medium equation." (2001): 101-174.
[2] Chow, Shui-Nee, Wuchen Li, and Haomin Zhou. "Wasserstein hamiltonian flows." Journal of Differential Equations 268, no. 3 (2020): 1205-1219.
[3] Schaeffer, Jack. "Global existence of smooth solutions to the Vlasov Poisson system in three dimensions." Communications in partial differential equations 16, no. 8-9 (1991): 1313-1335.
---
Rebuttal Comment 1.1:
Title: Thank you for the clarification
Comment: I have read the rebuttal of the authors. I remain of the opinion that the background part in Section 3.1 should be moved back to the earlier section so as not to be confused with a contribution. I also understand that it is quite hard to find theoretical analysis for this type of work (which leans more on methodological and algorithmic contributions). While the authors' rebuttal clarifies plenty of my concerns, I think the work still requires some reorganization and additional modification. Therefore, I keep my Borderline Acceptance score.
---
Reply to Comment 1.1.1:
Comment: Thank you for your time and effort in reading and responding to our rebuttal. Following your suggestion, we have moved the content of Section 3.1 to the background section to avoid any possible confusion and to make our contribution clearer.
We are happy to consider any additional suggestions and to make further changes to improve the overall quality of our work. | Summary: The paper discussed the novel problem setup of generative modeling of the dynamics of probability distributions. The paper proposed Meta Flow Matching (MFM), an extension of the flow matching framework for implicitly learning the vector fields on the Wasserstein manifold of probability distributions. The paper demonstrated better transferability of the proposed framework on unseen distributions on both synthetic datasets and real-world drug-screen datasets.
Strengths: - The problem setup of learning a flow matching model for mappings between distributions (i.e., a probability path on the Wasserstein manifold), to the best of my knowledge, is novel and has not been explored in previous work.
- The idea of using distribution-specific embeddings (the population embeddings) is well explained and motivated in the paper.
- The proposed method demonstrates better transferability on both synthetic and real-world datasets compared to other baselines.
Weaknesses: - The proposed method seems to be a special case of a conditionally trained flow matching model where the conditions are continuous learnable embeddings. Such an idea has already been applied in various diffusion or flow matching models including image generation (conditioned on text embedding in the latent space), protein co-design [1] (conditioned on sequence, generate protein structure, or vice versa), and peptide design [2] (conditioned on receptor proteins, generate peptides).
- The idea of population embedding in the paper is similar to task embedding, which has been well explored in meta learning (e.g. [3]). Although the authors claimed their framework to be *meta* flow matching, related work in meta learning seems to be lacking.
[1] Campbell, Andrew, et al. "Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design." arXiv preprint arXiv:2402.04997 (2024).
[2] Li, Jiahan, et al. "Full-Atom Peptide Design based on Multi-modal Flow Matching." arXiv preprint arXiv:2406.00735 (2024).
[3] Achille, Alessandro, et al. "Task2vec: Task embedding for meta-learning." Proceedings of the IEEE/CVF international conference on computer vision. 2019.
Technical Quality: 3
Clarity: 3
Questions for Authors: 1. What is the major difference between the proposed method and a conditionally trained flow matching model? See Weakness 1 for some examples of conditional generative models in other fields. These models also rely on continuous learnable embeddings as conditions during training and sampling.
2. If MFM can recover conditional generation via flow matching (CGFM) (Proposition 1), what is the benefit of using the proposed scheme over the latter? Can you provide more details regarding training CGFM? For example, what is fed into the model as conditions for CGFM?
3. There is a (minor) mismatch between Figure 1 and the MFM objective in Eq.15. In Figure 1 (or Eq.14), the flow matching model operates on the probability distributions to output a vector field $v\_t(p\_t)$ in the tangent space (an affine subspace, as the probabilities need to be normalized). However, in Eq.15, the flow matching model still operates on the data space $v\_t(x\_t)$. This is probably because the probability distributions are only implicitly described via data samples. Nonetheless, the authors should avoid saying in the caption of Figure 1 that the model learns a vector field on the Wasserstein manifold. During both training and sampling, the learned vector field is always defined on the data space in this work.
4. GNNs are a reasonable choice for data with geometric properties. However, single-cell data does not seem to exhibit a simple Euclidean geometry for a GNN to work with. Can you provide more justification? Is there a better choice for the population embedding?
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: The authors have adequately and properly discussed the limitations and potential societal impact of the paper in the Appendix.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >Such an idea has already been applied in various diffusion or flow matching models including image generation, protein co-design [1], and peptide design [2].
The reviewer is correct in that MFM is a conditionally trained flow matching model. Indeed, there are many conditionally trained flow matching models, and there will very likely be more in the future. We note that MFM is **more general** than existing work which considers vector conditionals. MFM expands this to conditioning on input dependent **graphs**. See highlighted $h^i(\theta)$ in Algorithm 1 (supplementary PDF). Existing methods such as MultiFlow [1] have conditions which are input and space independent. We note that Pepflow [2] is concurrent work as it was not made publicly available until after the submission deadline (June 2).
>What is the major difference between the proposed method and a conditionally trained flow matching model? These models also rely on continuous learnable embeddings as conditions during training and sampling.
We have added an explanation of how MFM differs from the existing methods by including a discussion of the previous point/question. To be precise here, MFM depends on **distributions** of vectors instead of (potentially learned) vector conditionals, listed in Equations (14) and (15). To improve clarity, we have added details regarding the training of MFM (Algorithm 1 in the supplementary PDF) and a high level explanatory figure (Figure 1 in the supplementary PDF).
>The idea of population embedding in the paper is similar to task embedding, which has been well-explored in the meta learning (e.g. [3]). Although the authors claimed their framework to be meta flow matching, related work in meta learning seems to lack.
We thank the reviewer for bringing up this interesting connection to the field of meta learning. We have added a section on other "meta" — in the sense of amortization — frameworks, in our related works. Our work follows the naming convention of *Meta Optimal Transport* [4], and is "meta" in the same sense of amortizing OT or flow problems over multiple input distributions. This is related to how "meta" is used in the meta learning setting (which amortizes over learning problems).
Here is a quick summary of the differences between our setting and Task2Vec [3]:
1. Task2Vec uses a task embedding in $R^d$ over datasets of images to condition a ML model for meta learning while we learn a task embedding over an input point cloud to condition a meta flow model.
2. Task2Vec proposes to use the (diagonal of the) Fisher information metric (FIM) of the dataset as the task embedding while we propose to learn a GNN that outputs an embedding for the downstream task. In other words, we learn our task embedding for the end-to-end performance while Task2Vec takes the FIM as the task conditioning information. In the spirit of Task2Vec, we will clarify that we could consider other embeddings of the point clouds, which could be related to their FIM, or other statistics of them — the FIM and other task embeddings may still be nearly computationally intractable for our larger point clouds, so we defer these ablations to future work as they may not be straightforward and may involve some tradeoffs.
>If MFM can recovers the conditional generation via flow matching (CGFM) (Proposition 1), what is the benefit of using the proposed scheme over the latter? Can you provide more details regarding training CGFM? For example, what are fed into the model as conditions for CGFM?
The benefit of MFM is that it can generalize to unseen populations where CGFM cannot. One-hot conditions for initial populations (and also treatments) are fed into CGFM. CGFM does not see the one-hot conditions for the initial populations in test sets. Hence, CGFM cannot generalize since it uses one-hot conditions and does not learn representations of the initial populations. We state this in lines 301-310.
>In Figure 1 (or Eq.14), the flow matching model operates on the probability distributions to output a vector field $v_t(p_t)$ in the tangent space ... However, in Eq.15, the flow matching model still operates on the data space $v_t(x_t)$. Nonetheless, the authors should avoid saying in the caption of Figure 1 that the model learns a vector field on the Wasserstein manifold. During both training and sampling, the learned vector field is always defined on the data space in this work.
The reviewer is correct that we directly parameterize a model which operates on the data space, and that this implicitly defines a vector field on the Wasserstein manifold. We have clarified this difference in the caption of Figure 1.
>GNNs is a reason choice for data with geometric properties. However, the single-cell data does not seem to a exhibit a simple Euclidean geometry for GNN to work. Can you provide more justifications? Is there a better choice for the population embedding?
Single-cell data is most commonly modeled using k-nearest-neighbor graphs [5], with many other methods building on top of this representation for visualization (UMAP), imputation (MAGIC), batch correction (MNN), and other tasks. Even though the data does not exhibit a simple Euclidean geometry, KNN-GNNs are still widely applicable in this domain. Improved models for population embeddings are an interesting direction, which we leave to future work.
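As a concrete illustration of the k-nearest-neighbor representation mentioned above, here is a brute-force sketch of building the KNN edge list over a point cloud of cells. This is an assumption about the general construction, not the authors' code; at single-cell scale one would use a KD-tree or approximate nearest neighbors instead of a dense distance matrix.

```python
import numpy as np

def knn_edges(points, k):
    """Return the directed edge list (i -> each of i's k nearest
    neighbors) for a (n, dim) point cloud, excluding self-edges."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)           # forbid self-neighbors
    nbrs = np.argsort(d, axis=1)[:, :k]   # k closest points per point
    return [(i, int(j)) for i in range(len(points)) for j in nbrs[i]]
```

A GNN message-passing layer would then aggregate features along these edges, which is how particle interactions enter the population embedding for $k > 0$.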
[1] Campbell, Andrew, et al. "Generative flows on discrete state-spaces: Enabling multimodal flows with applications to protein co-design." (2024).
[2] Li, Jiahan, et al. "Full-Atom Peptide Design based on Multi-modal Flow Matching." (2024).
[3] Achille, Alessandro, et al. "Task2vec: Task embedding for meta-learning." IEEE/CVF. (2019).
[4] Amos, Brandon, et al. "Meta optimal transport." ICML (2022).
[5] Heumos, L., Schaar, A.C., Lance, C. et al. Best practices for single-cell analysis across modalities. Nature Reviews Genetics. (2023).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for providing more explanations and clarifications regarding the difference between the proposed method of MFM versus existing methods of CGFM and meta-learning. After clarifying the difference from existing work of conditional flow matching and task embedding in meta-learning, I believe this work indeed offers an alternative perspective that can be interesting to the community. In this regard, I raise my score from 5 to 6. I would suggest the authors highlight these distinctions in the revised manuscript. | Summary: This paper introduces Meta Flow Matching (MFM), a flow matching framework modeling interacting samples evolving over time by integrating vector fields on the Wasserstein manifold. The authors leverage a Graph Neural Network to embed populations of samples and thus generalize the method over different initial distributions. The authors demonstrate the method on individual treatment responses predictions on a large multi-patient single-cell drug screen dataset.
Strengths: Novelty: The method uniquely considers population interactions, unlike previous flow matching methods that model samples individually.
Generalization: The authors extended conditioning on latent variables to conditioning on population index in section 3.2. The proposition in section 3.2 demonstrates that conditional flow matching can fit well within the MFM framework. The experiments show that MFM can generalize to unseen data, outperforming other methods in this regard.
Weaknesses: In Table 1 of the synthetic experiment, the authors compared the performance of FM, CGFM and MFM for k=0,1,10,50. MFM doesn't seem to beat existing methods on the metrics, and from the visualizations it's hard to tell whether MFM is actually doing better than FM. Also, for the various values of k, some explanation of how performance correlates with k, and why, may be necessary for readers to understand this table.
In both experiments, the authors only compared FM, CGFM, and in Table 2 also ICNN. More methods, such as diffusion models, should probably also be taken into comparison. Also, in experiment 2, only W1, W2 and MMD are computed as metrics. While these are useful when modeling distributions, more metrics, especially those specific to this application, could be applied.
Technical Quality: 3
Clarity: 3
Questions for Authors: In Table 1, the authors compared k=0,1,5,10,50, while in Table 2 only k=0 and 10 are listed. Some explanation of how this decision was made would probably be helpful.
Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 2
Limitations: The authors have not addressed limitations.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes | Rebuttal 1:
Rebuttal: >In Table 1 of the synthetic experiment ... MFM doesn't seem to beat existing methods on the metrics and from the visualizations, it's hard to tell MFM is actually doing better than FM.
We thank the reviewer for the opportunity to further clarify our results. Metric-wise from Table 1, MFM outperforms FM and CGFM yielding lower W1 and W2 metrics on the test data (Y's). Moreover, visually FM fails to de-noise the letters in any way on both the train and test sets, while MFM is much closer to achieving the target distributions. We have updated the synthetic experiment (see Table 3 in the supplementary PDF), where we have made the task harder (test letters are completely unseen during training, i.e. in any orientation). Here it is again clear that MFM outperforms FM and CGFM across all metrics for the test set of Y's.
>explanations on how performance correlates with values of k and why might be necessary for readers to understand this table.
We thank the reviewer for pointing this out. We have added an explanation on how $k$ affects performance in the respective experiments sections. In these experiments we considered various values of $k$ primarily as an ablation to observe how performance changes for different $k$'s. We also wanted to observe the role/importance of considering interactions between "particles" (or samples) for learning population embeddings. For example, when $k > 0$ particle interactions are incorporated via a knn graph, while for $k=0$ (DeepSets, permutation invariant MLP) no particle interactions are incorporated. We found:
- **Synthetic experiments:** higher $k$ correlates with better performance on the synthetic experiments.
- **Single-cell experiments:** No clear single selection for $k$ that yields the best performance across all tasks on the single-cell experiments. We observe that for the _replicates_ split $k=0$ on average performs better than $k>0$. Whereas on the _patients_ split, the opposite is true.
This is possibly because the _patients_ split forms a more difficult generalization problem (more diversity between training populations and test populations), and hence it is more difficult to over-fit during training with higher $k$.
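The $k=0$ case mentioned above (DeepSets, a permutation-invariant MLP with no particle interactions) can be sketched as follows. The per-point network `phi` and readout `rho` are hypothetical stand-ins written as plain functions; the key property illustrated is permutation invariance of the resulting population embedding.

```python
import numpy as np

def deepsets_embed(population, phi, rho):
    """DeepSets-style embedding h = rho(mean_j phi(x_j)) of a
    (n_points, dim) population; no edges, so no interactions."""
    return rho(np.mean(phi(population), axis=0))
```

Because the mean pooling discards point ordering, shuffling the population leaves the embedding unchanged, which is the invariance a set-level conditioning variable requires.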
>the authors only compared FM, CGFM, and in Table 2 also ICNN. Probably more methods, like diffusion, should also be taken into comparison.
The central focus of this work was to devise a general framework for learning population dynamics while conditioning on entire distributions — to generalize across previously unseen environments/distributions. We focused our work on flow matching due to its versatility and training efficiency (relative to other generative modeling methods used in similar tasks) [1, 2, 3]. Moreover, flow matching has been shown to consistently yield competitive performance in single-cell population dynamics prediction tasks while also being easily extendable to incorporate optimal transport and stochastic formulations [4, 5]. **The performance trend witnessed for our flow matching models (MFM > FM, CGFM) should be agnostic to the choice of the backbone model. To show this, we have added an additional set of baseline models to all experiments that are akin to diffusion models** (see supplementary PDF Tables 1, 2, 3). We train FM$^{\text{w/}}\mathcal{N}$, CGFM$^{\text{w/}}\mathcal{N}$, and MFM$^{\text{w/}}\mathcal{N}$ which use a Gaussian source distribution for sampling, i.e. $x_0 \sim \mathcal{N}(0, 1)$, while $p_1$ remains the same set of target letter distributions. Here, MFM still uses the original $p_0$ in the GNN model to learn population embeddings, while the flow model uses $x_0 \sim \mathcal{N}(0, 1)$ as $p_0$. From the updated Tables, it is clear that (MFM$^{\text{w/}}\mathcal{N}$ > FM$^{\text{w/}}\mathcal{N}$, CGFM$^{\text{w/}}\mathcal{N}$), which is consistent with our expectation.
>in experiment 2, only W1, W2 and MMD are computed as metrics. While these are useful when modeling distributions, more metrics, especially those specific to this application may be applied.
Our choice of metrics is directly taken from the standard accepted in the community of single-cell population dynamic prediction and perturbation response prediction [4, 5, 6, 7]. For the single-cell experiments, we have added an additional metric used in [7]: the average correlation coefficient of the feature (bio-marker) means, labeled $r^2$. We have evaluated all single-cell models using this additional metric and observed MFM outperforms the baselines on $r^2$ as well (see supplementary PDF Tables 1 and 2).
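The $r^2$ metric described above — the squared correlation of per-feature (bio-marker) means between predicted and ground-truth populations — can be sketched in a few lines. This is a hedged reconstruction from the rebuttal's description; the exact definition used in [7] may differ in detail.

```python
import numpy as np

def feature_mean_r2(pred, true):
    """Squared Pearson correlation between the per-feature means of a
    predicted population and the ground-truth population.
    pred, true: (n_cells, n_features) arrays."""
    mean_pred = pred.mean(axis=0)   # mean of each bio-marker, predicted
    mean_true = true.mean(axis=0)   # mean of each bio-marker, observed
    r = np.corrcoef(mean_pred, mean_true)[0, 1]
    return r ** 2
```

A perfect match of per-feature means gives $r^2 = 1$; note the metric is insensitive to higher moments, which is why it complements distributional metrics like W1/W2 and MMD rather than replacing them.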
>In Table 1, the authors compared k=0,1,5,10,50 and in Table2, only k = 0 and 10 are listed. Some explanations on how this decision is made is probably helpful.
Thank you for pointing this out. We have added a more comprehensive ablation over $k$ for the single-cell experiments (see supplementary PDF Tables 1 and 2). Specifically, we have trained MFM models for $k=0,10,50,100$ on the single-cell experiments.
>The authors have not addressed limitations.
We would like to clarify that we have included a discussion of limitations and broader impacts in the Appendix D and E.
[1] Lipman, Yaron, et al. "Flow Matching for Generative Modeling." ICLR. (2023).
[2] Albergo, Michael Samuel, and Eric Vanden-Eijnden. "Building Normalizing Flows with Stochastic Interpolants." ICLR. (2023).
[3] Liu, Xingchao, and Chengyue Gong. "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow." ICLR. (2023).
[4] Tong, Alexander, et al. "Improving and generalizing flow-based generative models with minibatch optimal transport." TMLR. (2024).
[5] Tong, Alexander, et al. "Simulation-free schrodinger bridges via score and flow matching." AISTATS. (2024).
[6] Neklyudov, Kirill, et al. "A computational framework for solving Wasserstein Lagrangian flows." ICML. (2024).
[7] Bunne, C., Stark, S.G., Gut, G. et al. Learning single-cell perturbation responses using neural optimal transport. Nature Methods. (2023).
---
Rebuttal Comment 1.1:
Comment: I thank the authors for their response and raise my score from 4 to 5. | Rebuttal 1:
Rebuttal: ## Feedback Summary
We thank all the reviewers for the time they invested in reviewing our paper and for their valuable and constructive feedback that will help improve our work's overall quality.
In this work, we introduced **Meta Flow Matching (MFM)**, a novel framework for learning the dynamic evolution of populations with the objective to generalize across previously unseen populations/micro-environments. By amortizing the flow model over initial populations -- using a GNN architecture to learn conditional embeddings of entire populations -- **we show that MFM can successfully perform prediction on previously unseen test populations in synthetic and real experiments, where baseline methods fail.**
In general, there is consensus among all reviewers that **MFM** is a **novel** and **unique** framework with convincing empirical experiments that showcase its **significance** and ability to generalize across previously unseen populations. In addition, reviewers found our work to be well motivated [mrP3, HpVg] with clear exposition and presentation [mrP3, HpVg, BiEg]. The primary concerns brought up by the reviewers were questions regarding clarifications and experiments. Below we outline how we have addressed these items.
**Improved Experiments:** reviewers asked about the scalability of MFM to larger dimensions and data sets [HpVg, BiEg], ablations over the GNN architecture [btyA, BiEg], and consideration of additional baselines and metric(s) [btyA]. To address these items, we have improved the synthetic and real-data empirical experiments and included the updates in the **supplementary PDF**.
- [HpVg, BiEg] generally asked about considering larger scale and higher dimensional experiments. To address this, we **added high-dimensional experiments** (see Tables 1 and 2 in the supplementary PDF) on the full organoid-drug screen dataset (no PCA reduction used) and demonstrated that MFM outperforms all baselines across all tasks and metrics.
- [btyA] asked about considering application-specific metrics for the experiments on single-cell perturbation data. To address this, we **added the squared correlation metric ($r^2$)** (see Tables 1 and 2 in the supplementary PDF) commonly used in single-cell applications [1]. MFM consistently outperforms baselines on $r^2$.
- [btyA, BiEg] asked about further ablations over the GNN population embedding architecture. To address this, we have **added a more comprehensive ablation over the nearest-neighbor parameter $k$** (see Tables 1 and 2 in the supplementary PDF) on the single-cell data experiments ($k=0,10,50,100$).
- [btyA] asked about additional baselines while [HpVg, BiEg] asked about conditional generation tasks. To address this, we have **added additional baseline models akin to diffusion** (see Tables 1, 2, and 3 in the supplementary PDF) where we use $\mathcal{N}(0, 1)$ as the source distribution. We trained and evaluated these additional baselines on our synthetic and single-cell experiments, showing that MFM also outperforms baseline models in this regime.
- [btyA] had concerns regarding the performance of MFM on the synthetic experiments. To address this, we have **improved MFM by adding _batching over initial population_ during training** (see supplementary PDF, Algorithm 1 where we added the line $i \sim \mathcal{U}_{\{1,N\}}(i)$). In addition, we have also made the overall synthetic letters task more difficult. Originally, 1 orientation of the test letters (Y's) was seen during training. In the updated experiments, the test letter populations are **not** seen in any orientation -- i.e. Y's are entirely unseen during training. We demonstrate that MFM outperforms baselines across all metrics in this experiment.
**Clarifications:** In general, the reviewers asked clarifying questions about the high level details of MFM and to expand on the training procedure of MFM [mrP3, HpVg]. To address these questions, we have **added Algorithm 1** outlining the training procedure for MFM (and similarly CGFM and FM). Furthermore, we have **added an additional explanatory figure** illustrating and comparing the central high-level elements of FM, CGFM, and MFM of our work. Additionally, reviewers asked questions regarding differences between MFM, CGFM, and existing/concurrent approaches [mrP3, HpVg, BiEg]. We address all of these concerns in individual responses and include any relevant additions in the **supplementary PDF**.
We believe we have addressed all the concerns and questions posed by the reviewers, improving the overall quality of our paper. We once again thank all reviewers for their insightful feedback and hope they will consider raising their scores considering the numerous additions we have made to improve the clarity, impact, and significance of our work.
[1] Bunne, C., Stark, S.G., Gut, G. et al. Learning single-cell perturbation responses using neural optimal transport. Nature Methods. (2023).
Pdf: /pdf/6552a40275a6430a0af5cea9ab881447507aed49.pdf | NeurIPS_2024_submissions_huggingface | 2,024 | null | null | null | null | null | null | null | null |