Continual Audio-Visual Sound Separation
Accept (poster)
Summary: The paper proposes a novel continual audio-visual sound separation task, aimed at continuously separating new categories of sound sources while maintaining the performance of previously learned categories. Strengths: 1. The structure of the entire paper is clear, and the expression is fluent. 2. The experimental results demonstrate the effectiveness of the method, and the quality of the separated audio shown in the supplementary video is also good. Weaknesses: 1. The Cross-modal Similarity Distillation Constraint proposed in the paper includes two main innovations: 1) instance-aware semantic similarity and 2) class-aware semantic similarity. However, in terms of mathematical expression, compared to the Dual-Audio-Visual Similarity Constraint in reference [1], I believe the innovations of the two papers are almost identical. [1] Pian W, Mo S, Guo Y, et al. Audio-visual class-incremental learning[C]//Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023: 7799-7811. 2. Although the paper has conducted experimental validation on the MUSIC-21 dataset, it does not provide sufficient information to assess the model's generalization ability on other datasets or in real-world scenarios, making it difficult to demonstrate its enhanced applicability in real scenarios where new sound sources are encountered. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In Fig. 1, the authors illustrate the continual audio-visual sound separation task. Is the significant performance gap between ContAV-Sep and the fine-tuning method due to the small data scale and limited number of categories? 2. I am curious whether this paper has been submitted twice, as I noticed the "Anonymous ECCV Submission Paper ID 6024" in the uploaded video materials. If that is the case, this paper will be directly rejected. 3. 
Please clarify the difference between the innovation of the Cross-modal Similarity Distillation Constraint in this paper and the Dual-Audio-Visual Similarity Constraint in [1]. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: The innovation of the Cross-modal Similarity Distillation Constraint in this paper may overlap with the Dual-Audio-Visual Similarity Constraint from the previous work. It suggests that while the paper introduces a novel approach, there might not be a clear distinction or significant difference from the existing method in terms of the core innovative aspects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments! We appreciate the reviewer highlighting the clear structure, fluent expression, and effective experimental results of our paper. We address the raised concerns below and are willing to answer any further questions. > ### **Q1: Difference between the innovation of the Cross-modal Similarity Distillation Constraint (CrossSDC) and Dual-Audio-Visual Similarity Constraint (D-AVSC) in [1].** Thank you for your question! The Dual-Audio-Visual Similarity Constraint (D-AVSC) [1] only focuses on the feature similarities within data generated by the current training model, while our proposed CrossSDC further integrates features learned by the previous model into the contrastive loss. In this way, our CrossSDC incorporates the cross-modal similarity knowledge acquired from previous tasks into the contrastive loss, which not only facilitates the learning of cross-modal semantic similarities in new tasks but also ensures the preservation of previously acquired knowledge in audio-visual sound separation. Moreover, the experimental results also demonstrate the superiority of our method with CrossSDC (**7.33/13.55/13.01** for SDR/SIR/SAR) compared to AV-CIL with D-AVSC [1] (6.86/13.13/12.31 for SDR/SIR/SAR). > ### **Q2: Generalization ability on other datasets.** Thanks for the suggestion! Besides the MUSIC-21 dataset, we conducted experiments on the AVE [2] dataset and have included the experimental results in the Appendix. The results of Fine-tuning/LwF/PLOP/AV-CIL/ContAV-Sep (Ours) are 2.07/2.19/2.45/2.53/**2.72** and 5.64/6.43/6.11/6.64/**7.32** for SDR and SIR, respectively, which further demonstrates the effectiveness of our proposed method. For more details, please see Section A.3 in the Appendix. 
> ### **Q3: Is the significant performance gap between ContAV-Sep and the fine-tuning method due to the small data scale and limited number of categories?** We would like to clarify that the significant performance gap between ContAV-Sep and fine-tuning is not due to the small data scale and limited number of categories. Experiments on a larger dataset, AVE [2], demonstrate a similar performance gap. On AVE, fine-tuning achieves an SDR of 2.07 and SIR of 5.64, while ContAV-Sep achieves **2.72** and **7.32**, respectively. This persistent gap underscores that the difference is not solely due to the dataset. For further details, please refer to Section A.3 in the Appendix. > ### **Q4: Whether this paper has been submitted twice?** We confirm that this paper is not a dual submission. The current version is the revised submission of our previously withdrawn paper. We apologize for the oversight in the teaser of the demo video, which contained a typo. This issue has been corrected in our demo. [1] Audio-Visual Class-Incremental Learning. In *ICCV* 2023. [2] Audio-Visual Event Localization in Unconstrained Videos. In *ECCV* 2018. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' response. However, the technical novelty is limited. The proposed CrossSDC just integrates features learned by the previous model into the contrastive loss on top of D-AVSC, so I maintain my score without modification. --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: Thank you, Reviewer tY1x, for your response! We respect your opinion but respectfully disagree that the technical novelty is limited. In this work, we introduce not only the novel task of Continual Audio-Visual Sound Separation but also CrossSDC to address it. We’d like to emphasize that, in addition to the model, the new task itself is an important contribution to the field. Furthermore, we believe our CrossSDC is indeed a novel approach within continual audio-visual sound separation. 
By integrating features from previous tasks into the cross-modal contrastive loss, we are pioneering the maintenance of audio-visual similarity across tasks. This allows us to learn audio-visual similarity within the current task and distill similarity from previous tasks, all without requiring a specific model architecture designed to combat catastrophic forgetting. Our approach is simple yet effective, and experiments with two different separators on two datasets demonstrate its superiority. We respectfully leave the final judgment to you, the other reviewers, and the AC. We sincerely appreciate your comments and engagement with our rebuttal!
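The distinction the authors draw here — adding the previous model's features into the current contrastive loss — can be sketched in a few lines. This is an illustrative reconstruction, not the paper's actual implementation: the function names, the temperature value, and the feature shapes are all assumptions.

```python
import numpy as np

def info_nce(anchor, positives, negatives, tau=0.07):
    """Generic InfoNCE-style contrastive loss for a single anchor feature."""
    def sim(a, b):  # cosine similarity
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    pos = np.exp([sim(anchor, p) / tau for p in positives])
    neg = np.exp([sim(anchor, n) / tau for n in negatives])
    return -np.log(pos.sum() / (pos.sum() + neg.sum()))

def cross_sdc_like(audio_new, visual_new, visual_old, neg_new, neg_old, tau=0.07):
    """Sketch of the idea: the audio feature from the *current* model is pulled
    toward both the current and the *previous* model's visual feature of the
    same instance, so old cross-modal similarity structure is distilled."""
    l_current = info_nce(audio_new, [visual_new], neg_new, tau)  # new-task term
    l_distill = info_nce(audio_new, [visual_old], neg_old, tau)  # old-model term
    return l_current + l_distill

# toy features standing in for encoder outputs
rng = np.random.default_rng(0)
a, v_new, v_old = (rng.normal(size=16) for _ in range(3))
negs = [rng.normal(size=16) for _ in range(4)]
loss = cross_sdc_like(a, v_new, v_old, negs, negs)
```

In this toy version the second InfoNCE term is the distillation part: it anchors the current audio feature to the old model's visual feature, which is what separates the described approach from a contrastive loss computed only on the current model's outputs.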
Summary: This paper introduces a novel task termed "Continual Audio-Visual Sound Separation," aiming to address the practical challenge of separating sound sources for new classes in audio-visual scenarios while retaining performance on previously learned classes. This task is inherently challenging due to the inherent risk of catastrophic forgetting, where models trained on new data often exhibit performance degradation on previously learned classes. To tackle this challenge, the authors propose ContAV-Sep, a novel framework incorporating a Cross-modal Similarity Distillation Constraint (CrossSDC). This constraint preserves cross-modal semantic similarity across incremental tasks by enforcing both instance-aware and class-aware similarities through a combination of contrastive loss and knowledge distillation. Notably, CrossSDC integrates knowledge from past tasks into the contrastive learning process, ensuring the retention of previously acquired cross-modal correlations. Experiments on the MUSIC-21 dataset demonstrate that ContAV-Sep significantly outperforms existing continual learning baselines in terms of standard sound separation metrics (SDR, SIR, SAR) across multiple audio-visual sound separation base models. This work highlights the importance of cross-modal semantic correlation in continual audio-visual learning and provides a novel, effective solution for mitigating catastrophic forgetting in this domain. Strengths: Here are four strengths of the paper, presented as a list: - Clearly identifies and addresses a novel and practical problem in audio-visual sound separation: continual learning in this domain. It is important to mention that this might be an important problem in the field and not so many people have provided good solutions for this problem. - Proposes a novel framework, ContAV-Sep, with a well-defined Cross-modal Similarity Distillation Constraint (CrossSDC) to tackle catastrophic forgetting. 
- Empirically demonstrates the effectiveness of ContAV-Sep on the MUSIC-21 dataset, showcasing significant performance improvements over strong baselines. - Provides a thorough analysis of the results, including ablation studies and exploration of memory size effects, highlighting the contributions of different components of the proposed method. Weaknesses: - Please replace the “mask” variable with something appropriate like \widehat{\mathbf{m}} - Please fix weird fontsizes like the one in equation 9; in general the manuscript does not seem polished enough for a NeurIPS submission. - Differences smaller than 0.1 dB in terms of SNR metrics are neither significant nor audible (I would even argue for 0.5 dB, but let's follow the literature on this one), thus I would suggest rounding all those performance numbers to one decimal place (it would also make the Tables less cluttered). - I would like to see an even larger ablation in Table 3 to show the full extent of how the performance deviates with an even larger number of samples per class, not only 4; a graph would make the visualization better here, because the current presentation does not convey any meaningful message. - The authors do not make a thorough investigation of previous works in the literature that employ continual learning techniques for related sound processing tasks. I will refer here to only a couple of the works that I am aware of, like [A, B], but I am almost certain that there is no lack thereof; the authors should try to include other works and draw some empirical or theoretical conclusions on how those methods can become interconnected, be employed together, and in general how they relate. [A] Wang, Z., Subakan, C., Tzinis, E., Smaragdis, P. and Charlin, L., 2019, October. Continual learning of new sound classes using generative replay. In 2019 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) (pp. 308-312). IEEE. 
[B] Wang, Z., Subakan, C., Jiang, X., Wu, J., Tzinis, E., Ravanelli, M. and Smaragdis, P., 2022. Learning representations for new sound classes with continual self-supervised learning. IEEE Signal Processing Letters, 29, pp.2607-2611. Technical Quality: 3 Clarity: 3 Questions for Authors: The classes seem a bit limited; would the authors want to generalize their experiments to a larger dataset like AudioSet and show how their method performs there? I think that would be even more interesting, since the distribution of classes in AudioSet has a long tail of multiple not-so-prevalent sounds (e.g., consider sounds that are not represented by speech or music) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think the authors include several limitations of their work; they should also identify the memorization of individual users’ data inside the memory of their continual learning method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
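The reviewer's point about sub-0.1 dB differences can be made concrete with the simplest form of the SDR metric. Note this plain log-ratio is a sketch for intuition, not the paper's evaluation code: BSS Eval (the usual source of the SDR/SIR/SAR numbers quoted throughout) additionally projects the error onto interference and artifact subspaces.

```python
import numpy as np

def sdr_db(reference, estimate, eps=1e-12):
    """Plain signal-to-distortion ratio in dB:
    10 * log10(||s||^2 / ||s - s_hat||^2)."""
    err = reference - estimate
    return 10.0 * np.log10(np.sum(reference ** 2) / (np.sum(err ** 2) + eps))

# a clean tone versus a noisy estimate of it
rng = np.random.default_rng(1)
t = np.arange(8000) / 8000.0
s = np.sin(2 * np.pi * 440 * t)
est = s + 0.05 * rng.standard_normal(8000)
val = sdr_db(s, est)  # ~23 dB for this noise level
```

A 0.1 dB SDR difference corresponds to only a ~2.3% change in error energy (10^(0.1/10) ≈ 1.023), which is the quantitative basis for the reviewer's argument that such gaps are neither significant nor audible.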
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions! We appreciate your recognition of the novelty in our work's problem formulation and approach, as well as its effectiveness demonstrated in our experiments. We address your questions below and welcome any further inquiries. > ### **Q1: Replace the "**mask**" variable.** Thank you for your suggestion. We have replaced the "**mask**" variable with a variable **m**. > ### **Q2: Weird fontsizes like the one in equation 9.** Thank you for your suggestion. We have fixed this fontsize issue. > ### **Q3: Rounding all performance numbers to one decimal place.** Nice suggestion. We have updated performance numbers to one decimal place in our paper. > ### **Q4: Larger amount of samples per class in memory. A graph would make the visualization better.** Thank you for your suggestion! We conducted experiments setting the number of samples per class in memory to 10, 20, and 30. The results are shown below. Moreover, as per your advice, we have included a graph in our paper to better show the impact of different sample sizes per class in memory. | # of samples per class | SDR | SIR | SAR | |------------------------|------|------|------| | 10 | 9.0 | 15.2 | 13.7 | | 20 | 9.4 | 15.9 | 13.7 | | 30 | 10.1 | 16.6 | 14.1 | > ### **Q5: Missing references in continual learning for sound processing tasks.** Thanks for the suggestion! We have added relevant references on continual learning for sound processing tasks, including the two suggested works and others [1, 2, 3, 4], and discussed them in our revised paper: [1] Ma, Haoxin, Jiangyan Yi, Jianhua Tao, Ye Bai, Zhengkun Tian, and Chenglong Wang. "Continual learning for fake audio detection." arXiv preprint arXiv:2104.07286 (2021). [2] Bhatt, Ruchi, Pratibha Kumari, Dwarikanath Mahapatra, Abdulmotaleb El Saddik, and Mukesh Saini. "Characterizing Continual Learning Scenarios and Strategies for Audio Analysis." arXiv preprint arXiv:2407.00465 (2024). 
[3] Wang, Yu, Nicholas J. Bryan, Mark Cartwright, Juan Pablo Bello, and Justin Salamon. "Few-shot continual learning for audio classification." In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 321-325. IEEE, 2021. [4] Zhang, Xiaohui, Jiangyan Yi, Jianhua Tao, Chenglong Wang, and Chu Yuan Zhang. "Do you remember? overcoming catastrophic forgetting for fake audio detection." In International Conference on Machine Learning, pp. 41819-41831. PMLR, 2023. > ### **Q6: Generalize the experiments to a larger dataset like AudioSet.** Nice suggestion! Following common practice, we used the MUSIC dataset as our primary benchmark and included an additional dataset, AVE, in the appendix to validate audio-visual sound separation performance. Extending our approach to very large datasets like AudioSet would indeed be interesting. However, due to limited computational resources and time constraints, conducting extensive experiments on AudioSet is currently challenging. We will investigate this in our future work. > ### **Q7: Identify the memorization of individual user’s data inside the memory.** Thank you for your suggestion. We will address the potential memorization of individual users' data within our continual learning method in the limitations section. This discussion will include the importance of future work to empirically analyze and prevent data memorization, ensuring the development of robust and privacy-preserving models. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: Dear Reviewer wmVP, We sincerely appreciate your valuable feedback, which has greatly improved our paper! We kindly ask if you could confirm whether our response has adequately addressed your concerns. If so, we would be grateful if you might consider raising your rating. Please do not hesitate to let us know if there are any remaining issues. Thank you once again for your insightful feedback! 
Best Regards, The Authors --- Reply to Comment 1.1.1: Title: Experiments on a larger dataset with a broader range of sound categories Comment: Dear Reviewer wmVP, To demonstrate that our approach can generalize to a larger dataset with a broader range of sound categories, we have conducted experiments on the VGGSound dataset, in which we use 100 randomly selected categories for our continual audio-visual sound separation. We train both our method and the baseline methods using iQuery as the separator architecture. The results for Fine-tuning/LwF/PLOP/AV-CIL/ContAV-Sep (Ours) are 3.69/4.71/4.56/4.66/**4.90** for SDR and 7.23/8.89/8.32/8.61/**9.25** for SIR, respectively, which further demonstrate the effectiveness of our approach on a broader range of data categories. Best Regards, The Authors
Summary: The paper introduces ContAV-Sep, whose goal is to continuously separate new sound classes while maintaining performance on previously learned classes, addressing the challenge of catastrophic forgetting in continual learning. ContAV-Sep employs a Cross-modal Similarity Distillation Constraint (CrossSDC), integrated into an audio-visual sound separation framework, to preserve cross-modal semantic similarity across tasks and retain old knowledge. Strengths: 1. The proposed method is a good solution to help maintain cross-modal semantic similarity across incremental tasks. 2. The paper has a clear writing structure, and the figures and tables are easy to understand. Weaknesses: 1. The performance improvement of ContAV-Sep (with iQuery) is not significant. 2. What does iSTFT stand for in Figure 2? The symbols appearing in the figure need to be clearly explained in the caption. Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer highlighting our proposed approach and writing. We address the raised questions below and are happy to answer further questions. > ### **Q1: The performance improvement of ContAV-Sep (with iQuery) is not significant.** Thanks for the comment! We would like to clarify that the performance improvement of our ContAV-Sep (with iQuery) is non-trivial. Compared to the baseline continual learning methods LwF/EWC/PLOP/EWF/AV-CIL, our proposed ContAV-Sep shows improvements of 0.57/0.68/0.30/1.98/0.47 in SDR and 0.78/0.54/0.25/2.20/0.42 in SIR, respectively. Furthermore, as demonstrated in the demo included in our supplementary material, our ContAV-Sep (with iQuery) achieves better separation results and sound quality compared to the baselines, highlighting its effectiveness in mitigating catastrophic forgetting. > ### **Q2: What does iSTFT stand for in Figure 2?** Thank you for your question. iSTFT stands for Inverse Short-Time Fourier Transform, a common technique used to convert spectrograms back into audio signals. We have added this explanation to the figure caption for clarity. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: Dear Reviewer 5v9b, We sincerely appreciate your valuable feedback, which has greatly improved our paper! We kindly ask if you could confirm whether our response has adequately addressed your concerns. If so, we would be grateful if you might consider raising your rating. Please do not hesitate to let us know if there are any remaining issues. Thank you once again for your insightful feedback! Best Regards, The Authors
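The iSTFT round trip that the rebuttal describes — spectrogram back to waveform — can be demonstrated with SciPy. This is a minimal sketch, not the paper's pipeline; the 1024-sample window is an assumed value, and a real separation model would multiply the spectrogram by a predicted mask before inverting.

```python
import numpy as np
from scipy.signal import stft, istft

fs = 11025                       # ~11 kHz, the sampling rate used in the paper
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)  # one second of a 440 Hz tone

# forward STFT -> (masked) spectrogram -> inverse STFT back to a waveform
f, seg_t, Z = stft(x, fs=fs, nperseg=1024)
_, x_rec = istft(Z, fs=fs, nperseg=1024)
```

With the default Hann window at 50% overlap the constant-overlap-add condition holds, so the inverse transform reconstructs the original waveform up to floating-point error.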
Summary: This paper defines a continual audio-visual sound separation task: sound separation is the task, and novel classes are added during fine-tuning, in the regime of continual learning. The goal is to avoid catastrophic forgetting, which typically leads to decreased task performance on classes that were learned early but for which few or no instances arrive in later training stages. They then present a method to solve this task with audio-visual data. Several losses are defined to achieve cross-modal similarity of embeddings through incremental tasks and preserve previously learned knowledge. Their approach does not increase the number of logits to accommodate more classes but instead creates separation masks, more suitable to audio separation than typical classification tasks. Results are compared on meaningful baselines, even though this is basically a new task definition, and ablating the different loss components shows that the combination yields the best performance. Additional videos and investigations in the supplementary material complete the work. Strengths: Especially in the videos, the performance on very similar instruments is impressive. The instruments share a very similar frequency profile, and their separation is a very hard problem. The paper is very clearly written, the task is clearly stated and well contrasted and localized within the domain. The mathematical notation is very clear to follow. In general the paper defines an interesting and realistic task, does a thorough investigation, and states its limitations clearly. Weaknesses: The paper defines a task and then solves it, which is always a bit easier due to limited competition. A direct comparison with sound separation methods was excluded by the authors, stating that they do not compete in that sense, which is a limitation. Figure 2 is very small and could be more self-contained. 
The relation of the right side's illustration to the left side is not clear from the caption at all, which only states the names of the concepts. Also, there is space left and right to increase the size. The figure was probably shrunk to gain space, which leaves the text very small. It would be good to find space in a different way, because the fonts are already extremely small within Figure 2. Some of the dataset details could probably go into the appendix. This is not a summary paper. That the authors found so much related work is commendable, but I would have preferred more detailed contrast to existing work, e.g. by picking out certain ideas and contributions, explaining them, and then contrasting this work against them. Just having lists of 7 works and summarizing them as "semantic segmentation" seems more like a homework chore than a contribution to the paper. But my fellow reviewers may disagree. Technical Quality: 3 Clarity: 4 Questions for Authors: - It seems from the problem definition that all tasks have distinct classes. That seems like a special, even though the hardest possible, case. Do you have an intuition, or have you looked into variations where a smaller part of the instances from a specific class is part of all tasks? It seems also more realistic to assume that such a network is not necessarily fine-tuned on a dataset that is completely different from the previous one. - What are the hardware requirements for this? It seems CLIP is used even twice, once for each task, and then the features are fed into transformers. Yet the NVIDIA RTX A5000 only seems to have 24 GB. Were there some compromises, e.g. image resolution, made due to the hardware available? - On a related note: What is the memory size? How do the memory and computation scale to new tasks as the memory keeps growing beyond 4 tasks? - The choice of 11 kHz audio sampling seems odd compared with defaults in audio encoding or other approaches for other audio-visual tasks. 
What is the reasoning behind it? - How does this scale to hard examples, where both instruments are the same? The second example is already impressive with two woodwinds, but it would be interesting to see how much the approach is able to derive from movement and how much from appearance in the visual modality. - It is interesting how the approach scales to n classes in Figure 3, but how does it scale to n tasks? In the appendix there is an investigation into old classes, but an investigation of the performance measures for, say, one chosen class after 4, 5, 6, 7, ... tasks would be interesting to judge the trend. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The limitations are properly listed and the authors are upfront about them. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and encouraging remarks! We address your questions below. If there are any additional questions, we are happy to address them and revise our paper. > ### **Q1: Direct comparison between sound separation.** Thank you for your comment! Indeed, our paper focuses on developing continual learning methods to address catastrophic forgetting in audio-visual sound separation, rather than introducing new separation models. Our research demonstrates that existing audio-visual sound separation models suffer from catastrophic forgetting. We explored two separators, including the state-of-the-art iQuery model, and found that without advanced continual learning mechanisms, performance degrades significantly (e.g., SDR drops to 3.46, while our method achieves 7.33). Therefore, we believe a fair comparison involves applying various continual learning methods to the same separator model, as directly comparing different models without continual learning would result in significantly lower separation performance. This approach allows us to isolate the impact of continual learning techniques on the task. In future work, we will also explore developing a more general and robust audio-visual sound separation model architecture that inherently mitigates catastrophic forgetting. > ### **Q2: Figure 2 is too small.** Thanks for the suggestion! We have found more space for Figure 2 and enlarged it, by moving the dataset details into the Appendix. > ### **Q3: More detailed contrast to existing work in related work.** Thank you for your suggestion! We have added more details of existing works in the related work, such as "...Park et al. [47] extend the knowledge distillation-based [2, 15] continual image classification method to the domain of video by proposing the time-channel distillation constraint. Douillard et al. 
[14] proposed to tackle the continual semantic segmentation task with multi-view feature distillation and pseudo-labeling...". Due to space constraints, we cannot include the entire section here. > ### **Q4: All tasks have distinct classes. Is it a special case? Do you have an intuition or looked into variations where actually a smaller part of instances from a specific class is part of all tasks?** No, this is not a special case. In continual learning, models are typically trained on a sequence of *new* tasks with distinct classes. This setup is standard for evaluating a method's ability to tackle catastrophic forgetting. In scenarios where a smaller subset of instances from a specific class appears across all tasks, the forgetting issue for that specific class would be minimal or nonexistent. However, our proposed methods could still be applied to mitigate catastrophic forgetting in other classes and leverage knowledge from previous tasks to enhance fine-tuning on the shared classes in new tasks. > ### **Q5: Are there any compromises due to the hardware limitation?** Great question! We did not compromise the original data quality, such as image resolution or compression. Since the video encoder, object detector, and image encoder are frozen, we can pre-extract these features offline and use them to train the subsequent parts of the model, which allows us to remove the computational load of these three frozen components from the GPU, enabling the training process on a single RTX A5000 GPU with 24GB of memory. > ### **Q6: What is the memory size? How does the memory and computation scale to new tasks as the memory keeps growing beyond 4 tasks?** Great question! The memory size is set to 1 sample per old class. By keeping only 1 sample for each old class in memory, it becomes easy to scale to new tasks. 
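The 1-sample-per-class rehearsal memory described in the answer above can be sketched as a small buffer whose size grows with the number of classes rather than with the amount of past data. The `ClassMemory` helper and the class labels below are hypothetical, for illustration only; the authors' actual exemplar-selection strategy is not specified here.

```python
from collections import OrderedDict

class ClassMemory:
    """Rehearsal memory keeping a fixed number of exemplars per old class
    (1 per class in the setting described above)."""
    def __init__(self, per_class=1):
        self.per_class = per_class
        self.store = OrderedDict()  # class label -> list of stored samples

    def add(self, label, sample):
        """Keep the sample only while the class bucket has free capacity."""
        bucket = self.store.setdefault(label, [])
        if len(bucket) < self.per_class:
            bucket.append(sample)

    def replay(self):
        """All stored (label, sample) pairs, mixed into later-task training."""
        return [(c, s) for c, samples in self.store.items() for s in samples]

mem = ClassMemory(per_class=1)
mem.add("piano", "clip_a")
mem.add("piano", "clip_b")   # rejected: bucket already full
mem.add("violin", "clip_c")
```

Because each new task adds at most one exemplar per new class, the buffer scales linearly in the number of classes seen so far, which is why extending beyond 4 tasks remains cheap.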
> ### **Q7: Why choose 11kHz as the audio sampling rate?** We chose 11kHz as the audio sampling rate as it is a common setting for the MUSIC-21 dataset in existing audio-visual sound separation papers [1,2,3]. > ### **Q8: How does this scale to hard examples, where both instruments are the same? How much is the approach able to derive from movement and how much from appearance in the visual modality?** Our model architecture, iQuery [1], includes a spatial-temporal video encoder that extracts motion information from the given video. This allows the model to differentiate between instruments based on their motion patterns, even when they are from the same category and have similar audio characteristics. In hard cases where instruments have a similar appearance, the separation capability primarily relies on motion cues from the visual modality. However, when instruments have distinct appearances, the visual modality's appearance features can provide sufficient visual cues to effectively guide sound separation. > ### **Q9: How does Figure 3 scale to n tasks? One chosen class after following tasks.** Nice suggestion! We randomly select one class ("accordion") from the first task and report its performance after training for each task. The results are shown in the following table (please see the following Official Comment with the title of "Results table of question Q9: How does Figure 3 scale to n-tasks? One chosen class after following tasks"), in which we can see that our method has an overall better performance compared to baselines. We can also observe that as the incremental step increases, the performance of each method on this class tends to improve. This is because the memory data can be paired with data from previously unseen new classes to acquire new knowledge for old classes in subsequent tasks, as discussed in our paper. However, the performance drop compared to the upper bound still exists, demonstrating that catastrophic forgetting still occurs. 
[1] iQuery: Instruments as Queries for Audio-Visual Sound Separation. In *CVPR* 2023. [2] The Sound of Pixels. In *ECCV* 2018. [3] Co-Separating Sounds of Visual Objects. In *ICCV* 2019. --- Rebuttal 2: Title: Results table of question Q9: How does Figure 3 scale to n-tasks? One chosen class after following tasks. Comment: | Method | step 1 | | | step 2 | | | step 3 | | | step 4 | | | |---------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------|--------| | | SDR | SIR | SAR | SDR | SIR | SAR | SDR | SIR | SAR | SDR | SIR | SAR | | LwF | 3.65 | 6.16 | 12.54 | 4.97 | 7.53 | 10.50 | 4.09 | 9.68 | 6.90 | 4.50 | 10.19 | 7.42 | | PLOP | 3.65 | 6.16 | 12.54 | 4.22 | 6.85 | 9.92 | 4.93 | 10.16 | 7.70 | 5.79 | 11.31 | 8.01 | | EWF | 3.65 | 6.16 | 12.54 | 2.62 | 4.42 | 9.71 | 4.02 | 9.16 | 7.84 | 5.01 | 11.03 | 7.37 | | ContAV-Sep (Ours) | 4.46 | 6.80 | 11.86 | 5.14 | 6.60 | 13.09 | 6.62 | 10.49 | 9.89 | 6.02 | 10.15 | 10.05 | | Upper bound | 3.65 | 6.16 | 12.54 | 6.84 | 9.44 | 11.73 | 7.68 | 11.37 | 11.83 | 10.43 | 13.99 | 13.89 | --- Rebuttal Comment 2.1: Comment: I have read the author's responses to my and the other reviewer's questions. Due to its focus on continual learning and added investigations of other datasets and answers to the questions I am confident in my rating. --- Reply to Comment 2.1.1: Title: Official Comment by Authors Comment: Thank you so much for your positive support!
NeurIPS_2024_submissions_huggingface
2024
Summary: This work proposes a continual audio-visual sound separation framework to mitigate the catastrophic forgetting problem Strengths: As far as I know, this is the first work that focuses on the catastrophic forgetting problem in the audio-visual separation task. Weaknesses: 1. The font size in Figure 2 is too small, reducing the readability of this paper. 2. The technical novelty is limited: the model architecture is exactly the same as iQuery, and the main highlight of this work is the proposed cross-modal similarity distillation constraint; however, it is just a combination of contrastive losses applied to the modalities and to features extracted from different training steps. 3. Experiments are not enough; all experiments are conducted on Music21, which contains only limited data across 21 classes. Experiments conducted on Music [1], VGGSound [4], and AVE datasets [2-3] could provide a more comprehensive evaluation. [1] The sound of pixels. [2] Audioset: An ontology and human-labeled dataset for audio events. [3] Audio-visual event localization in unconstrained videos [4] Vggsound: A large-scale audio-visual dataset Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the weaknesses part. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: 1. Limited novelty. 2. Insufficient experiments Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions! We address the raised concerns below. If there are any additional questions, we are willing to address them and revise our paper. > ### **Q1: The font size in Figure 2 is too small.** Thank you for your suggestion! We have enlarged the font size in Figure 2. > ### **Q2: Same architecture as iQuery.** In our work, we address the proposed continual audio-visual sound separation problem, focusing on the challenge of **training audio-visual sound separation models continuously while mitigating the catastrophic forgetting typically associated with continual learning**. To benchmark and fairly compare with existing continual learning approaches, we do not introduce new separation architectures. Instead, we concentrate on developing new learning approaches based on current state-of-the-art separators, i.e., iQuery [1], to tackle the catastrophic forgetting problem in separation, aligning with similar research goals in the broader continual learning literature. Furthermore, besides iQuery [1], in Table 1 of Section 4.2, we present experimental results based on another model architecture, Co-Separation [2]. Our method outperforms baseline methods in this context as well, demonstrating the generalizability of our proposed approach to different separation model architectures. > ### **Q3: Cross-modal Similarity Distillation Constraint is just the combination of the contrastive loss implemented on the modalities and features extracted from different training step.** We would like to note that the proposed Cross-modal Similarity Distillation Constraint (CrossSDC) is a non-trivial contribution. CrossSDC is the first method to integrate previously learned features into the contrastive learning process of the current task. This enables maintaining cross-modal semantic similarity incrementally while distilling prior knowledge from older tasks directly into the current model. 
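As a rough sketch of our reading of this description (not the authors' implementation), a CrossSDC-style objective might pull each current-model feature toward the frozen old model's cross-modal counterpart for the same instance, relative to other instances, via an InfoNCE-style loss. All feature values and function names below are illustrative:

```python
# Toy sketch of a cross-modal similarity distillation term (assumed form,
# not the authors' code): current-task audio/visual features are kept most
# similar to the *previous* model's feature of the matching instance in the
# other modality, InfoNCE-style.
import numpy as np

def info_nce(anchor, candidates, temperature=0.1):
    """Contrastive loss for one anchor; candidates[0] is the positive."""
    sims = candidates @ anchor / temperature
    sims = sims - sims.max()                       # numerical stability
    probs = np.exp(sims) / np.exp(sims).sum()
    return -np.log(probs[0])

def cross_sdc(audio_new, visual_new, audio_old, visual_old, temperature=0.1):
    """Average distillation loss over a batch: each current-model audio
    feature should stay closest to its own old-model visual feature,
    and vice versa, preserving cross-modal similarity across tasks."""
    n = audio_new.shape[0]
    total = 0.0
    for i in range(n):
        order = [i] + [j for j in range(n) if j != i]  # positive first
        total += info_nce(audio_new[i], visual_old[order], temperature)
        total += info_nce(visual_new[i], audio_old[order], temperature)
    return total / (2 * n)
```

With this form, features that stay aligned with their old-model cross-modal counterparts incur a near-zero loss, while features that drift toward other instances' counterparts are penalized.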
This simple yet effective unified training target/constraint effectively addresses catastrophic forgetting in our Continual Audio-Visual Sound Separation task. Extensive experiments with two different separation models validate its effectiveness. > ### **Q4: Experiments conducted on other datasets can provide a more comprehensive evaluation.** Thank you for suggesting additional datasets such as Music, VGGSound, and AVE! Following common practice in recent audio-visual sound separation works, we use the Music-21 dataset as our main benchmark. Note that Music [3] is a smaller subset of Music-21. To further validate our method, we conducted experiments on the AVE dataset and have included the results in the Appendix. The results for Fine-tuning/LwF/PLOP/AV-CIL/ContAV-Sep (Ours) are 2.07/2.19/2.45/2.53/**2.72** for SDR and 5.64/6.43/6.11/6.64/**7.32** for SIR, respectively, which further demonstrate the effectiveness of our approach. For more details, please refer to Section A.3 in the Appendix. We will move these results to the main paper. Additionally, experiments on larger datasets such as VGGSound and AudioSet would be very interesting for future work, and we will discuss these in the main paper. [1] iQuery: Instruments as Queries for Audio-Visual Sound Separation. In *CVPR* 2023. [2] Co-Separating Sounds of Visual Objects. In *ICCV* 2019. [3] The Sound of Pixels. In *ECCV* 2018. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: Dear Reviewer NCtq We sincerely appreciate your valuable feedback, which has greatly improved our paper! We kindly ask if you could confirm whether our response has adequately addressed your concerns. If so, we would be grateful if you might consider raising your rating. Please do not hesitate to let us know if there are any remaining issues. Thank you once again for your insightful feedback! 
Best Regards, The Authors --- Rebuttal 2: Title: Reply Comment: Many thanks to the reviewers for their responses, I was surprised by how quickly the authors downloaded and processed the 100-class VGGSound dataset from YouTube. The 100 classes may contain roughly 90K videos. I was also surprised by the training speed of the authors. However, I am still not confident enough to recommend this paper for acceptance, **mainly because of its low technical contribution.** I will maintain my original score. --- Rebuttal Comment 2.1: Title: Response to Reviewer NCtq Comment: Thank you for your prompt response! We’d like to clarify that we began experiments with VGGSound at the end of July after receiving the reviewer’s feedback. However, due to the time required for data preparation and model training, we couldn’t provide results earlier. We initially believed the results on MUSIC-21 and AVE sufficiently validated our findings. **We also want to highlight that the two works mentioned by the reviewer utilize only small subsets of VGGSound and AudioSet, as described in their papers or appendices.** For example, [3] uses only 15 musical instrument categories from AudioSet, and [4] focuses on 49 music categories from VGGSound-Music (even though the full set was adopted in this work for sound source localization task and their separation task only used this small subset). **In contrast, our VGGSound subset consists of 100 significantly more broad and diverse categories, extending beyond just musical instruments.** We believe these results clearly strengthen our contributions. We will release all of our source code and pre-trained models. While we respectfully disagree with your assessment of the technical contribution, we leave the final judgment to you, the other reviewers, and the AC. We appreciate your perspective and value your comments and engagement in the discussions. Your suggestion on a broader range of data classes has indeed made our paper stronger. 
[3] Language-Guided Audio-Visual Source Separation via Trimodal Consistency. In *CVPR* 2023. [4] A Unified Audio-Visual Learning Framework for Localization, Separation, and Recognition. In *ICML* 2023. --- Rebuttal 3: Title: More details of the experiments on VGGSound Comment: We’d like to provide more details about our VGGSound experiments to address the reviewer’s doubts. We randomly selected the following 100 categories: ['playing theremin', 'donkey, ass braying', 'playing electronic organ', 'zebra braying', 'people eating noodle', 'airplane flyby', 'playing double bass', 'cat growling', 'footsteps on snow', 'playing tennis', 'black capped chickadee calling', 'bouncing on trampoline', 'playing steelpan', 'waterfall burbling', 'subway, metr', 'people clapping', 'chipmunk chirping', 'chopping food', 'people shuffling', 'elk bugling', 'alarm clock ringing', 'people booing', 'canary calling', 'chopping wood', 'people humming', 'lathe spinning', 'playing tuning fork', 'playing violin, fiddle', 'singing choir', 'playing timbales', 'children shouting', 'chicken crowing', 'car passing by', 'driving motorcycle', 'bull bellowing', 'lawn mowing', 'playing bugle', 'mouse squeaking', 'child singing', 'playing tympani', 'hair dryer drying', 'basketball bounce', 'driving snowmobile', 'train whistling', 'thunder', 'dog bow-wow', 'ocean burbling', 'cuckoo bird calling', 'sheep bleating', 'splashing water', 'air conditioning noise', 'cattle mooing', 'eagle screaming', 'air horn', 'playing bass guitar', 'sloshing water', 'tap dancing', 'running electric fan', 'playing ukulele', 'playing guiro', 'playing shofar', 'people sniggering', 'people whispering', 'people finger snapping', 'car engine idling', 'bathroom ventilation fan running', 'police car (siren)', 'roller coaster running', 'playing french horn', 'swimming', 'lighting firecrackers', 'playing electric guitar', 'playing castanets', 'people babbling', 'arc welding', 'wood thrush calling', 'wind rustling leaves', 'playing 
darts', 'planing timber', 'crow cawing', 'shot football', 'writing on blackboard with chalk', 'people slapping', 'using sewing machines', 'raining', 'dog howling', 'playing cello', 'playing trumpet', 'fox barking', 'bowling impact', 'people crowd', 'pumping water', 'ice cracking', 'baby crying', 'playing bass drum', 'playing bongo', 'tornado roaring', 'playing steel guitar, slide guitar', 'playing squash', 'typing on typewriter']. As you can see, our dataset is diverse, containing not only musical instruments but also human sounds, sports, traffic, animals, and various other categories. To the best of our knowledge, recent state-of-the-art methods, including the two suggested by the reviewer, have rarely been tested on such a diverse range of categories in the context of audio-visual sound separation. The total number of videos in this subset is 61,195. For each category, we randomly selected 20 videos for validation, 20 videos for testing, and the remainder for training. This results in 57,195 videos for training, 2000 mixtures for validation, and 2000 mixtures for testing. We divided the 100 categories into 4 incremental tasks, each containing 25 categories. Both our method and the baseline methods were trained for 100 epochs per task. We hope these details further demonstrate the breadth and rigor of our experiments on VGGSound, strengthening our contributions.
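The split described above could be reproduced with a routine along these lines (an assumed procedure for illustration; the authors' actual preprocessing code is not shown here):

```python
# Illustrative reconstruction of the described VGGSound protocol: per
# category, 20 videos for validation, 20 for testing, the rest for
# training; categories divided into equal-sized incremental tasks.
# Function name and defaults are our own.
import random

def make_splits(videos_by_category, n_val=20, n_test=20, n_tasks=4, seed=0):
    rng = random.Random(seed)
    splits = {"train": {}, "val": {}, "test": {}}
    for cat, vids in videos_by_category.items():
        vids = vids[:]                 # copy before shuffling
        rng.shuffle(vids)
        splits["val"][cat] = vids[:n_val]
        splits["test"][cat] = vids[n_val:n_val + n_test]
        splits["train"][cat] = vids[n_val + n_test:]
    cats = sorted(videos_by_category)
    per_task = len(cats) // n_tasks    # e.g. 100 categories -> 25 per task
    tasks = [cats[i * per_task:(i + 1) * per_task] for i in range(n_tasks)]
    return splits, tasks
```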
Who’s Gaming the System? A Causally-Motivated Approach for Detecting Strategic Adaptation
Accept (poster)
Summary: The paper studied how to identify and rank agents who strategically manipulate their inputs to game machine learning models in multi-agent settings. A causally-motivated approach was proposed to address this challenge. Strengths: The paper is an interesting follow-up to [1]. Provided that the proposed methodology is solid, I can foresee many scenarios where it can be applied. In particular, I feel that it may be applied to analyze college admission mechanisms. [1] Hardt, Moritz, et al. "Strategic classification." Proceedings of the 2016 ACM conference on innovations in theoretical computer science. 2016. Weaknesses: The paper assumes players to be almost perfectly rational, i.e., they have a strong motivation to maximize utility. In reality, however, people are often only boundedly rational. If so, that is, if some players do not particularly care about their utility, how would the accuracy of your approach be affected? Technical Quality: 3 Clarity: 3 Questions for Authors: Please find my first question in the above part. Here's another question. After the proposed approach is adopted and the player gaming the most is identified, how can we find out if the identification is correct? That is to say, how do we validate your approach? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have appropriately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer Drhf for their comments. We appreciate that the reviewer found our approach to be practical, and applicable to other real-world problems such as college admissions mechanisms. **Real-world validation.** Great point — real-world validation is an inherent limitation of the synthetic data setting indeed. To mitigate this issue, we did aim to verify that our rankings line up with known drivers of upcoding, such as the prevalence of private healthcare providers in a state as studied in [C1, C2] (Section 5.2; also see Table 1, pg. 9). In practice, real-world validation of causally-motivated approaches, including ours, is difficult, but this limitation is shared by many works in causal inference [C3, C4] that also opt for synthetic/semi-synthetic evaluations [C5] to verify the theoretical claims. Furthermore, we envision our ranking-oriented approach as an initial flag for auditors to subsequently investigate and confirm/rule out gaming. A purely algorithmic validation grounded in the observed data is likely infeasible under our problem assumptions as per our Proposition 1, since any uncertainty in the correct ground truth decisions results in uncertainty in the gaming parameter. **Robustness to the rational actor assumption.** This is an insightful point. Indeed, while rationality is a standard assumption, and much game-theoretic literature shares this limitation, we will more explicitly highlight the assumption that agents are rational as a limitation. However, we wanted to highlight that our focus on rankings of gaming propensity rather than point estimates of the gaming parameter could afford some robustness via Proposition 2 (Section 3.2, pg. 5). 
Intuitively, violations of the rational actor assumption could simply be another source of noise; then, if such violations plus estimation error are too small to flip any pair of rankings with respect to the ground truth, our estimated gaming rankings will still match the ground-truth rankings. Informally stated: Let $\Delta_p(d^*_p)$ be the optimal utility-maximizing response, and let $\tilde{\Delta}_p(d^*_p)$ be the observed response. If $|\Delta_p(d^*_p) - \tilde{\Delta}_p(d^*_p)| \leq \varepsilon_1$ and $\sup |\tau - \hat{\tau}| \leq \varepsilon_2$ for some $\varepsilon_1, \varepsilon_2 > 0$, then no pair $(p, p')$ with ground-truth $|\tau(p, p')| > \varepsilon_1 + \varepsilon_2$ will have its ranking flipped. Note that this corollary requires stronger conditions than Proposition 2, which only accounted for estimation error in the causal effect estimator $\hat{\tau}$. We conclude by grounding our discussion in the motivating example of gaming the CMS-HCC model. Here, the notion of rationality is well-defined: rational agents game (i.e., perturb their reported diagnoses) such that profit is maximized; our utility function directly corresponds to a dollar amount. We speculate that deviations from the rationality assumption could occur if such agents are unable to carry out the utility-maximizing action (i.e., insufficient resources to, for example, schedule a home visit with a healthcare professional to generate diagnosis codes [C6]), or incorrectly estimate the utility-maximizing action (e.g., agents may incorrectly estimate costs). We leave the formalization of these rationality violations to future work. We will add this discussion of potential violations of the rational actor assumption and its implications on our ranking-based framework to Appendix B.4. **References**\ [C1] Silverman, Elaine, and Jonathan Skinner. "Medicare upcoding and hospital ownership." Journal of health economics 23.2 (2004): 369-389.\ [C2] Silverman, Elaine, and Jonathan S. Skinner. 
"Are for-profit hospitals really different? Medicare upcoding and market structure." (2001).\ [C3] Shi, Claudia, David Blei, and Victor Veitch. "Adapting neural networks for the estimation of treatment effects." Advances in neural information processing systems 32 (2019).\ [C4] Louizos, Christos, et al. "Causal effect inference with deep latent-variable models." Advances in neural information processing systems 30 (2017).\ [C5] Hill, Jennifer L. "Bayesian nonparametric modeling for causal inference." Journal of Computational and Graphical Statistics 20.1 (2011): 217-240.\ [C6] Weaver, C., et al. “Insurers Pocketed $50 Billion From Medicare for Diseases No Doctor Treated.” The Wall Street Journal, 8 July 2024. https://www.wsj.com/health/healthcare/medicare-health-insurance-diagnosis-payments-b4d99a5d
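The ranking-robustness intuition stated informally in the rebuttal above can be checked with a toy numerical example (our own illustration: all effect values are invented, and we use a per-agent noise bound, under which a pair's order can flip only if its true gap is at most twice the bound):

```python
# Toy check (not from the paper): if each agent's estimated effect deviates
# from its ground-truth value by at most eps, then any pair whose true gap
# exceeds 2 * eps keeps its relative order under the noisy estimates.
import itertools

true_tau = {"A": 0.9, "B": 0.5, "C": 0.45}     # hypothetical ground-truth effects
noise    = {"A": 0.02, "B": -0.03, "C": 0.01}  # bounded perturbations, |noise| <= eps
eps = 0.05
est_tau = {p: true_tau[p] + noise[p] for p in true_tau}

for p, q in itertools.combinations(true_tau, 2):
    if abs(true_tau[p] - true_tau[q]) > 2 * eps:  # gap too large to flip
        assert (true_tau[p] > true_tau[q]) == (est_tau[p] > est_tau[q])
```

Here the A-B and A-C pairs have gaps well above the noise budget and so keep their order; the B-C pair (gap 0.05) is small enough that noise could, in principle, flip it.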
Summary: This work studies the problem of identifying agents who would likely game a given system. When the gaming parameters are unknown, the authors show that identifying these parameters requires strong assumptions. In contrast, they show an ordering of agents based on their ranking order is learnable from a dataset. They use this ranking to detect gaming in a Medicare application. Strengths: The authors generalize the study of strategic adaptation to a much more realistic setting and provide provable results for computing rankings, which can then be used to detect gaming. The application of Medicare is also interesting. Weaknesses: Although the framework is interesting, it is unclear in general how to use rankings to detect gaming, which is the motivator of the study. A general provable approach to using rankings for gaming, even under some conditions, would improve the applicability of this work. Technical Quality: 4 Clarity: 2 Questions for Authors: Is there any general approach to using rankings as a subroutine to detect gaming? If so, under what conditions would this approach provably work? Confidence: 2 Soundness: 4 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer P2tW for their comments. We appreciate that the reviewer found the application area of U.S. Medicare to be interesting. **Can we actually detect gaming/use it as a subroutine?** Good question — to that end, our Proposition 1 demonstrates that definitive gaming detection is not possible without knowing ground-truth decisions. Intuitively, if the ground truth “correct” decisions were known/easy to collect, gaming detection would be trivial: we could simply check all observed decisions against ground truth in such cases. In summary: * **No framework under our assumptions can answer (via Proposition 1):** “Is Agent 1 gaming their decisions?” * **Our framework can answer (via Theorem 1 & Corollary 1):** “Is Agent 1 gaming their decisions more/less than Agent 2?” Under our assumptions, to enable definitive gaming detection, one would need to assume that ground truth is fully observed, which is likely too strong of an assumption (i.e., everyone is subjected to an accurate audit, or is honest about whether they gamed). As-is, we envision that our ranking-oriented approach would serve as an initial flag for auditors to prioritize which agents to investigate. So, one could think of our ranking-based gaming detection approach as a “subroutine” in a policy/procedural sense (albeit not in a purely algorithmic sense). However, the reviewer’s comment motivated us to revisit Proposition 1 and explore whether tighter versions of this bound could assist in gaming detection. We found a simple extension of our existing results that could allow for partial gaming detection. Let’s assume, for the sake of exposition, that we can evaluate $R'$ and $c'$, similarly to the two-agent example of Section 3.1. This means we can evaluate the lower bound of Proposition 1. Then, a partial gaming detection approach might involve two steps: 1. 
Define a threshold $\lambda^*$ such that plans with $\lambda_p$ lower-bound greater than $\lambda^*$ are definitively not gaming 2. Estimate upper and lower bounds on $d^*_p$ to yield tighter lower/upper bounds on $\lambda_p$; plans with $\lambda_p$ upper-bound less than $\lambda^*$ are definitively gaming. Such a threshold might be defined via some tolerance for deviation from ground truth; *i.e.*, we can define gaming as $\Delta_p(d^*_p) - d^*_p \geq \varepsilon$ for some $\varepsilon > 0$, which allows agents to have an error rate of $\varepsilon$ without being considered to be gaming. Based on $\varepsilon$, we can then compute a “threshold” value of $\lambda^*$ that demarcates gaming from non-gaming agents. For the two-agent example in Section 3.1 where $R(x) = x$ and $c(x) = x^2$, this would yield $\lambda^* = \frac{1}{2\varepsilon}$. Then, we can combine the $\lambda^*$ threshold with a tighter version of the bounds of Proposition 1. Suppose that we can claim $d^*_p \in [\underline{d}^*_p, \bar{d}^*_p]$. Then, revisiting Eq. 8 (Appendix B.1, pg. 14) of our proof for Proposition 1, we’d reach the following: $$\lambda_p \in \left( \frac{R'(\Delta(d^*_p))}{c'(\Delta(d^*_p) - \underline{d}^*_p)}, \frac{R'(\Delta(d^*_p))}{c'(\Delta(d^*_p) - \bar{d}^*_p)} \right).$$ For clarity, let’s abbreviate the terms in these bounds as $\underline{b}$ and $\bar{b}$ for the lower and upper bounds, respectively, such that $\lambda_p \in [\underline{b}, \bar{b}]$. Since we assumed $R'$ and $c'$ can be computed, and we assume estimates of $\bar{d}^*_p$ and $\underline{d}^*_p$ are available, we can calculate $\underline{b}$ and $\bar{b}$ directly. Thus, we would conclude: * If $\bar{b} \leq \lambda^*$, then the agent is gaming (where $\lambda^*$ depends on the error tolerance $\varepsilon$), and * If $\underline{b} \geq \lambda^*$, the agent cannot possibly be gaming. 
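As an illustrative sketch of this two-step rule for the running two-agent example ($R(x) = x$, $c(x) = x^2$, so $R' = 1$ and $c'(x) = 2x$), where all numeric inputs are hypothetical:

```python
# Hypothetical sketch (not the paper's implementation) of the partial
# gaming-detection rule for R(x) = x, c(x) = x**2.  The first-order
# condition gives lambda = R'(Delta) / c'(Delta - d) = 1 / (2 * (Delta - d)),
# and the gaming definition Delta - d >= eps yields lambda* = 1 / (2 * eps).

def lambda_bounds(delta, d_lo, d_hi):
    """Bounds on lambda_p given the observed response delta and an
    interval [d_lo, d_hi] containing the unknown ground truth d*_p."""
    assert delta > d_hi >= d_lo > 0
    b_lo = 1.0 / (2.0 * (delta - d_lo))  # smallest lambda consistent with data
    b_hi = 1.0 / (2.0 * (delta - d_hi))  # largest lambda consistent with data
    return b_lo, b_hi

def classify(delta, d_lo, d_hi, eps):
    lam_star = 1.0 / (2.0 * eps)         # threshold from Delta - d >= eps
    b_lo, b_hi = lambda_bounds(delta, d_lo, d_hi)
    if b_hi <= lam_star:
        return "gaming"        # even the largest plausible lambda is below threshold
    if b_lo >= lam_star:
        return "not gaming"    # even the smallest plausible lambda is above threshold
    return "inconclusive"      # lambda* falls inside [b_lo, b_hi]
```

For example, with tolerance `eps = 0.1` (so `lam_star = 5`), an agent whose reported value exceeds even the largest plausible ground truth by a wide margin is flagged as gaming, while one whose report sits within tolerance of the smallest plausible ground truth is cleared.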
Although such a method would be unable to detect or rule out gaming in agents for which $\lambda^* \in [\underline{b}, \bar{b}]$, it could provide more definitive gaming detection results for a subset of agents. Note that this idea is preliminary — we have not had the opportunity to empirically validate this or identify similar approaches in the literature. This method also introduces two previously unused assumptions: 1. One can compute $R’$ and $c’$ 2. One can estimate lower and upper bounds on $d^*_p$. While prevalence estimation via a random sample could yield confidence interval-based bounds on $d^*_p$, and $R’$ can be calculated if we know the model to be manipulated (e.g., the publicly-available CMS-HCC model used by U.S. Medicare), evaluating $c’$ may be unrealistic since it can be any strictly-convex function. Although testing and proposing such a new approach is likely out of scope for our revision, we will refine this argument and comment on this direction in Appendix B.1 as a potential avenue of future work. --- Rebuttal Comment 1.1: Title: Rebuttal Response Comment: Thank you for your response! Although the ranking approach is interesting, I will stick with my given scores since the main goal of the paper is to detect gaming. If your preliminary idea works out, it would greatly strengthen the paper.
Summary: The paper considers the problem of identifying agents with the highest values of scaling parameters in a stylized strategic adaptation optimization model under a wide range of assumptions. The paper casts this problem as a ranking problem via causal effect estimation and provides an algorithm to rank the parameters of agents. The paper then provides an experimental evaluation using synthetic and real-world medicare data to show the observations from the proposed models and algorithm. Strengths: + The paper considers an interesting manipulation problem Weaknesses: - The motivation and justification for the considered problems seem to be lacking at the beginning (mostly presented with very little context and connections) - The model is very marginal compared to existing studies (i.e., the new component seems to assume non-similarities in gaming via a cost-scaling factor); the notations/paper contents are poorly written, the results have few explanations in terms of implications, and there are assumptions in the models that are hard to justify - The proposed approaches (i.e., casts as causal effect estimation) have little to no justifications - The synthetic experimental evaluation is based on a single dataset with manual tuning that has little to no explanations, which limits the generalizability of the observations Technical Quality: 2 Clarity: 2 Questions for Authors: N/A Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See the above and below comments. Abstract "machine learning models" -> provide examples of how the interaction would look like "their inputs to the model" -> such as? what are the outcomes here? from the ML models? what utility? the agents using the models for something? it is very unclear "We consider a multi-agent setting" -> what is the motivation for this goal? what is the implication if it is achieved? why do we care if it games? What do you mean by aggressively? in fact, how do you even define aggressive? 
"identifying such agents is difficult" -> why? it is not clear to me how would you do this even if you know the utility; why make it more difficult now? " is parameterized via a scalar" -> the meaning is not clear; what is parametrized? agent utility function? what is a scalar here? why is this realistic at all? "is only partially identifiable" -> what does it mean by this? "By recasting the problem" -> why recasting the problem? why this approach is justifiable? "causal effect estimation problem" -> what is this problem? what is the connection to the worst offenders? very confusing here; the next phrase makes little to no sense at all; why/what is the identifiable property? "in a synthetic data study" -> only one data set? what is this coding behavior? what does it have to do with gaming? what are the features? what do you mean associated with gaming? 1 Introduction "guide decisions that impact individuals or entities" -> provide examples please review the literature on mechanism design and game theory as well as their ML related applications "obtain a more desirable outcome" -> outcome is what here? what is the connection between ML models and outcomes? "to the difficulty of generating supporting evidence" -> provide examples; explain more ""utility maximization:" -> fix "to maximize a payout" -> what is the payout? utility function? "which calculates a" -> the government or the companies? " via a publicly available model" -> how do you know they use this model? "Companies may attempt to" -> what happens to the companies that do this? "Beyond health insurance, gaming emerges" -> how does health insurance impact individuals? who is doing the gaming in the said applications? what are you trying to change in the features? Also, are the models already trained? or are you talking during learning? "agents with the highest propensity " -> how do you measure this? "given a dataset of agents" -> dataset of agents doesn't sound right? what does it mean here? 
do you mean a set of agents? "their observed model inputs" -> is the model fixed here? how do they interact with the models? why can't they do this together? "fraud/gaming labels" -> what does this mean? what is the utility here? what are they affecting? what is the connection to distributional assumptions? these are used with little to no context "But past works in strategic adaptation" -> what does it mean by costs in this context? is that the only difference from this vs the other models? "scalar gaming parameter that scales costs " -> the meaning of this is not clear; what is this parameter? it is not explained in the text; the figure is not meaningful without a clear description "partially identifiable" -> what is the meaning of this? "However, by recasting gaming detection as causal" -> again, why is this justifiable? what is this causal effect estimation going to do? "that ranking agents" -> what is ranking solving? I am very confused "a cost-scaling factor" -> why is this more realistic? "Furthermore, much work in strategic a" -> this paragraph seems to be more like a related work section; it is out of place and disrupts the flow of reading; you should probably consider an independent related work section; they are also hard to understand with little to no background context presented What is the contribution here? there should be a contribution subheading and/or related work section also "a synthetic dataset" -> how do you have a ground truth on this? what about other datasets? "causal approaches rank the " -> hard to understand this sentence "healthcare providers, a suspected driver of gaming" -> you cannot say that unless you have concrete evidence; otherwise, you will be sued with claims like this "In summary: we " -> this paragraph provides new information that is not discussed or connected to earlier messages 2 Background & Problem Setup "to a payout" -> what is a payout here? why is f mapping only a single agent attribute? should it map from more agents? or even datasets? 
"according to some function" -> what is this function? How is R dependent on d and f? why does d' have to be in D? "For simplicity, we assume R = f ; " -> if R = f, the function in (1) makes no sense; what is the meaning of f having itself as input? the math is not correct. How is strategic classification used in strategic adaptation? "To extend strategic adaptation to multiple agents" -> why is this justifiable at all? What is M_p? What is d_i modeling? what decisions are they making? "Agent assignment is" -> how is this modeled? how do you indicate assumptions? I am still not clear about D_p? is that for an agency? within the agency they have M_p agents who can manipulate? " to obtain a higher payout." -> why do they want to increase d_i? why can't they decrease? Why do they perturb the average instead of individual d_i? how does this connect to f defined earlier? "is the ground truth value" -> how do you actually get the ground truth? before, you define c(,) with two parameters and now you have a single parameter; it doesn't look consistent "we introduce assumptions on the" -> you need to justify these assumptions; are they common in the literature? how do you model multiple agents here? it seems (2) is only for one agent 3 Theoretical analysis: finding agents most likely to game "We aim to identify agents most likely" -> wait; is lambda_p unknown? why is this the right way to identify agents? "be point-identified" -> unclear meaning; what is the meaning of partially identifiable? You need to provide implications of Prop 1. I still don't get why estimating counterfactuals is used in this context; how come figure 2 has no in-text explanations? There are algorithms and a figure on page 5 without explanations and connections to the paper 4 Empirical results & discussion "We hand-select " -> what about other connection
Rebuttal 1: Rebuttal: We thank Reviewer dPj8 for their detailed comments, which improved our work. First, we address the motivation for our approach and clarify conceptual questions. We then briefly discuss connections to past work. Then, we address concerns about generalizability and our assumptions. We will add these clarifications to our revision. **Motivation for causally-motivated/counterfactual-based approach.** Our main contribution is providing theoretical and empirical evidence that causal effect estimation is a well-motivated, practical avenue for ranking agents by their gaming parameter (i.e., propensity to game). To see why causal inference is needed, consider the following: we observe two insurance companies/healthcare providers, one of whom receives a much higher payout due to higher reported rates of a certain condition. That provider could be seeing sicker patients or they could be gaming and inflating reported diagnosis rates to secure the higher payout. Causal inference methods help us adjust for underlying patient health and compare these reported rates. If the two patient populations are similarly healthy, yet one provider receives a higher payout, it indicates potential gaming/fraud. Concretely, we use pairwise comparisons between agents to construct a ranking of agents by their gaming parameter. Theorem 1 shows that this pairwise comparison problem is equivalent to a causal effect estimation problem (i.e., estimating counterfactual decisions under counterfactual agents) under our assumptions. This yields the algorithm in Figure 4. **Why ranking?** Given a ranking, one could prioritize agents to screen/audit for gaming. One of our main results is demonstrating that directly predicting gaming is impossible without access to ground truth labels stating which agents are gaming. Proposition 1 shows that, absent stronger assumptions, we can only estimate a loose lower bound on the gaming parameter. 
To reinforce this result, we provide a two-agent example in Section 3.1 (L126-L132) showing how this lower bound may flip rankings with respect to the ground truth gaming parameter. While directly estimating tendency to game is impossible, we show that recovering a ranking is possible. **Other conceptual questions.** We provide clarifications about our approach not covered above. * **Q:** We group these questions about the gaming parameter $\lambda_p$: - "We aim to identify agents most likely" -> wait; is lambda_p unknown? why is this the right way to identify agents? - "agents with the highest propensity " -> how do you measure this? - "scalar gaming parameter that scales costs " -> the meaning of this is not clear; what is this parameter? it is not explained in texts * **A:** Yes, $\lambda_p$ is an unknown scalar, and is how we define the propensity to game. One can think of it as a “penalty factor” for gaming. Lower values of $\lambda_p$ indicate higher gaming propensity, since such $\lambda_p$ down-weights the cost of gaming. We aim to identify agents who are most likely to game (lowest $\lambda_p$). This is of interest in settings where one has limited resources to audit gaming. A ranking approach allows auditors to prioritize investigating a subset of agents in which gaming is more likely (e.g., the top-$K$ agents with the lowest gaming parameters). * **Q:** "fraud/gaming labels" -> what does this mean? * **A:** Fraud/gaming labels refer to ground-truth indicators that denote whether an agent is definitively gaming. With such labels, one could simply use supervised learning to detect gaming, but we assume no such labels in our setting. * **Q:** "identifying such agents is difficult" -> why? 
it is not clear to me how would you do this even if you know the utility * **A:** If we know the *value* of the utility function and the reward function (i.e., in cases when the reward model $R$ is exactly the model to be manipulated $f$, as in Section 2), then we can infer that cost = utility - reward. Then, all agents that incurred non-zero cost (due to manipulating their own features) gamed by definition. We will more clearly delineate this in the revision. * **Q:** "obtain a more desirable outcome" -> outcome is what here? what is the connection between ML models and outcomes? * **A:** The “desirable outcome” is obtaining higher payout (e.g., higher compensation for treating patients with more complicated conditions). Such an ML model ($f$ in Section 2.1) might take as input the conditions that a provider claimed to treat, and output a dollar payout amount (e.g., via the U.S. government’s CMS-HCC model [A1]). * **Q:** “Also, are the models already trained? or are you talking [about] during learning?” * **A:** We assume a fixed model. Agents are responding to that model by perturbing their own features according to a utility-maximization problem. We make no further assumptions on the model training/fitting procedure. * **Q:** "to obtain a higher payout." -> why do they want to increase d_i? why can't they decrease? * **A:** Good question: since agents are utility-maximizing by assumption, decreasing $d_i$ can provably never maximize utility. We comment on this in Remark 1 (L106-107): note that costs are zero for $\Delta_p(d^*_p) < d^*_p$ (Assumption 3), but the reward function is increasing; hence, decreasing $\Delta_p(d^*_p)$ below ground truth reduces utility. * **Q:** "partially identifiable" -> what is the meaning of this? * **A:** Partial identifiability (e.g., Definition 18.1, [A2]) denotes that multiple parameter estimates are compatible with the data; e.g., a range of values of $\lambda_p$ as in our Proposition 1. 
--- Rebuttal 2: Title: Rebuttal by Authors (part 2/2) Comment: **Connections to past work.** The reviewer suggests connections to the mechanism design literature. We disagree: our proposed framework differs from mechanism design. Algorithmic mechanism design aims to maximize some social surplus (e.g., sum of utilities for all agents, in the language of our framework) by learning an optimal allocation rule (e.g., what we call a “model”) of some “resource” to agents [A3]. In contrast, our model/allocation rule is fixed, and we study how multiple individual agents respond. This aligns with our motivating problem setting in U.S. Medicare: given a fixed model (CMS-HCC) for calculating payouts to healthcare providers/insurance companies, providers/companies adjust their behavior accordingly. A mechanism design approach could inform upgrades to the CMS-HCC model, but is not our focus. We believe that our review of game-theoretic work in machine learning is extensive, though we are happy to consider suggestions to discuss specific papers. The closest related area is strategic classification, in which agents maximize their utility function in response to an ML model’s decisions. Accordingly, we have provided citations to related settings in L34-L38. We pay special attention to works with similar assumptions to ours, *e.g.*: * Utility functions are partially known (their setting) [A4] vs. utility functions are unknown beyond the general form (our setting) * Multiple agents with different capacities to game, modeled by norm constraints on manipulation (their setting) [A5] vs. different cost-scaling factors (our setting) We will consider a separate section/heading for the related work. **Model is “very marginal.”** We disagree that this is a weakness: small changes to the problem formulation may significantly affect the solution space.
However, in addition to a novel utility-maximization formulation for gaming in multiple agents, we contribute theoretical analyses of the resultant utility-maximization problem, which motivates a causal effect estimation approach to producing a ranking of agents by gaming propensity. To our knowledge, our approach is the first to adopt a causal effect estimation approach to gaming ranking (though previous works have explored causal mechanism design approaches to counter gaming [A6, A7]). **Evaluation on one synthetic dataset => limited generalizability?** We partially disagree: a synthetic dataset for evaluation is necessary to establish proof-of-concept of the proposed approach. This is because ground truth gaming labels are inherently difficult/infeasible to obtain in practice (e.g., require accurate audits of all agents, or accurate self-reporting of fraud). Synthetic data validation is standard in causal inference (e.g., [A8, A9] are two well-established causal effect estimators that use IHDP [A10], a synthetic dataset, for validation). To further mitigate generalizability issues, we verify our rankings align with known drivers of upcoding (e.g., the state-level prevalence of private healthcare providers [A11, A12], Section 4.3, Table 1, pg. 9). We also align our design choices with our problem assumptions (Appendix C.1, pg. 16-17). We are happy to consider specific suggestions for other datasets or design choices that could improve the generalizability of our findings. **Are assumptions common in the literature? How are they justified?** We justify our assumptions after their introduction on L92-110, with examples where applicable. We like the suggestion to discuss how common our assumptions are in the literature; in particular, many strategic classification works implicitly use some form of Assumption 1 [A6, A13], and our Assumptions 2-4 are more general cases of assumptions found in [A4, A13, A14]. 
Assumption 5 ensures that the ground truth is predictable from the observed variables $x_i$, and is related to assumptions of no unmeasured confounding [A15]. Assumptions 6-8 are standard assumptions in the causal inference literature (e.g., Chapter 3 of [A15]). These hold independently for all agents. We will add these points to the revision. **Other changes.** Thank you for your diligence in checking our notation — we appreciate the careful notes! We will double-check that all variables are well-defined and identify places where we’ve used similar notation/overloaded notation. --- Rebuttal 3: Title: References cited in rebuttal to Reviewer dPj8 Comment: **References**\ [A1] Report to Congress: Risk Adjustment in Medicare Advantage, December 2021. https://www.cms.gov/files/document/report-congress-risk-adjustment-medicare-advantage-december-2021.pdf\ [A2] Ding, Peng. A first course in causal inference. CRC Press, 2024.\ [A3] Roughgarden, Tim. Twenty lectures on algorithmic game theory. Cambridge University Press, 2016.\ [A4] Dong, Jinshuo, et al. "Strategic classification from revealed preferences." Proceedings of the 2018 ACM Conference on Economics and Computation. 2018.\ [A5] Shao, Han, Avrim Blum, and Omar Montasser. "Strategic classification under unknown personalized manipulation." Advances in Neural Information Processing Systems 36 (2024).\ [A6] Bechavod, Yahav, et al. "Gaming helps! learning from strategic interactions in natural dynamics." International Conference on Artificial Intelligence and Statistics. PMLR, 2021.\ [A7] Horowitz, Guy, and Nir Rosenfeld. "Causal strategic classification: A tale of two shifts." International Conference on Machine Learning. PMLR, 2023.\ [A8] Shi, Claudia, David Blei, and Victor Veitch. "Adapting neural networks for the estimation of treatment effects." Advances in neural information processing systems 32 (2019).\ [A9] Louizos, Christos, et al. "Causal effect inference with deep latent-variable models." 
Advances in neural information processing systems 30 (2017).\ [A10] Hill, Jennifer L. "Bayesian nonparametric modeling for causal inference." Journal of Computational and Graphical Statistics 20.1 (2011): 217-240.\ [A11] Silverman, Elaine, and Jonathan Skinner. "Medicare upcoding and hospital ownership." Journal of health economics 23.2 (2004): 369-389.\ [A12] Silverman, Elaine, and Jonathan S. Skinner. "Are for-profit hospitals really different? Medicare upcoding and market structure." (2001).\ [A13] Hardt, Moritz, et al. "Strategic classification." Proceedings of the 2016 ACM conference on innovations in theoretical computer science. 2016.\ [A14] Levanon, Sagi, and Nir Rosenfeld. "Strategic classification made practical." International Conference on Machine Learning. PMLR, 2021.\ [A15] Robins, James, and Hernan, Miguel A. “Causal Inference: What If?” Boca Raton: Chapman & Hall/CRC. (2020).
Rebuttal 1: Rebuttal: We thank the reviewers for their insightful comments, which helped improve our work. Reviewers positively commented on the interest of our problem setting [R1, R3], as well as the practicality and realism of our proposed methodology [R2, R3]. We respond to requests for clarification from reviewers individually.
NeurIPS_2024_submissions_huggingface
2024
Does Video-Text Pretraining Help Open-Vocabulary Online Action Detection?
Accept (poster)
Summary: This paper explores the problem of open vocabulary video action detection in an online setting, where the action must be detected immediately once it appears in the video stream, vs. the more common offline setting that allows examining the entire video, past and future. The authors propose a model with two main components: a transformer decoder that cross-attends between recent and past video frames, and an action clustering block that uses slot attention to group related frames and then classify them by action. The model is trained on a combination of three tasks/losses: contrastive image-text loss between the current frame's visual embedding and text embedding (vs. text embeddings from other clips in the batch), multi-label contrastive video-text loss (for text clips described by multiple text labels), and mask loss for identifying which frames come from the background (no action present). The model demonstrates improved performance over CLIP baselines in the novel open vocabulary online action detection setting. Strengths: The problem setting introduced in this paper, online-streaming action recognition with an open vocabulary, is very realistic for modeling many practical real-world scenarios, e.g. home security cameras. Further, the model is impressively fast at inference (292 fps), making it quite reasonable to practically apply in this setting. The paper includes an expansive, robust set of ablation experiments to validate the numerous model design choices. I appreciate that the paper calls out joint modeling of spatio-temporal information as a limitation and backs this up with an analysis of the model's failure cases. Weaknesses: Line 61 says that "our model successfully learns clusters of similar video frames", and I see that indirectly based on the fact that there's a clustering module that leads to improved performance.
However, such a claim would be stronger if supported by evidence or examples of frames being correctly clustered based on action semantics. The Object-Centric Decoder unit seems to be confusingly-named, since it is employed "to group frames" (line 149), rather than to focus on specific objects appearing within a frame. In Table 1, in the proposed setting, the model is compared only against CLIP variants. Another baseline worth exploring would be video-text models (as opposed to image-text) applied via a sliding window. There are a few parts of the paper that could be explained more quickly, in my opinion. (See the questions below.) Technical Quality: 3 Clarity: 3 Questions for Authors: Do you see a difference between "open vocabulary" and "zero shot" action recognition? The terms seem to be used synonymously in this paper, e.g. lines 106-107. Related, how much overlap is there between the textual descriptions in ActivityNet and InternVid with the category label sets of THUMOS'14 and TVSeries? Can you explain "the assumption of low background information for VLM" from line 36 in more detail? Can you please clarify "fails to reach future frames during training" from line 37? What precisely is the circle operator in equation (1)? On Line 175, how do you know "when the raw labels are not enough"? In Table 2, OVO_{ad}^{dagger} refers to OV-OAD, correct? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns: ### Q1: However, such a claim would be stronger if supported by evidence or examples of frames being correctly clustered based on action semantics. Here, we give evidence both quantitatively and qualitatively. - Firstly, during inference, the prediction of the current frame comprises two components: the action clustering block ($P_{AC}$) and the distant neighboring frame Transformer block ($P_{DNTR}$), as indicated in Line 215. As illustrated in the table below, excluding the scores of $P_{AC}$ leads to significant performance drops. | Weight of $P_{AC}$ | THUMOS'14 (mAP%) | |:-----|:----:| | 1.0 |37.5| | 0.0 |34.1| - Secondly, in the final version, we will visualize the attention matrix $A$ in the object-centric decoder. Following the computation of Eq. 3, we can obtain a representation of the grouping of the input video frames. ### Q2: The object-centric decoder unit seems to be confusingly-named, since it is employed "to group frames" (line 149), rather than to focus on specific objects appearing within a frame. We will clarify that in the object-centric decoder, the term "objects" refers to video frames. ### Q3: Another baseline worth exploring would be video-text models (as opposed to image-text) applied via a sliding window. We opted for ViCLIP [40], a straightforward video-text baseline model, to directly assess its performance for zero-shot online action detection on the THUMOS'14 dataset. Following the pre-training settings of [40], we employed a sliding window that samples 8 frames for input into the visual encoder, aligning with the CLIP-II methodology. Moreover, we exclusively fed the last frame of the sliding window, aligning with the CLIP-I approach. The results presented in the table below demonstrate that utilizing the video-text pretraining model directly for zero-shot inference in online action detection yields unsatisfactory performance.
ViCLIP can access information from the entire video during pre-training; however, it can only access a very short window of frames during online action detection inference. This limitation constrains its performance. | Methods | Arch | THUMOS'14 (mAP%) | |:-----|:----:|:----:| | ViCLIP-I | ViT/B |23.0| | ViCLIP-II | ViT/B |24.1| | ViCLIP-I | ViT/L | 25.7| | ViCLIP-II | ViT/L |26.3| | CLIP-I | ViT/B | 28.0| | CLIP-II | ViT/B | 29.1| | OV-OAD | ViT/B | **37.5**| ### Q4: Do you see a difference between "open vocabulary" and "zero shot" action recognition? The disparity between "open-vocabulary" and "zero-shot" action recognition lies in the fact that open-vocabulary enables the recognition of any category, whereas "zero-shot" ensures that the testing categories were not encountered during training [a]. In Line 106-107, we label our approach as "open-vocabulary" since our model is designed to ideally facilitate the online detection of any actions with the assistance of a vision-language model. In this context, "zero-shot" signifies that our method can be directly applied for inference on downstream datasets without the need for additional fine-tuning. ### Q5: How much overlap between the textual descriptions in ActivityNet and InternVid with the category label sets of THUMOS'14 and TVSeries? By approximately matching action phrases in natural language, we identified 8 analogous action phrases within the 20 action categories of THUMOS'14, encompassing 40% of its total categories. Likewise, we identified 9 similar action categories out of the 30 in TVSeries, representing 30% of its total categories. For more detailed information, please refer to the response To-All-Reviewers-Q3. ### Q6: Can you explain "the assumption of low background information for VLM" from line 36? VLMs are primarily designed for classification tasks rather than detection tasks.
The optimization process of these models typically operates under the assumption that the background information (such as irrelevant environmental content) in the training data is less significant compared to the foreground content (task-relevant objects or regions). Essentially, these models are trained with the notion that each sample's main focus is on the foreground content, with less emphasis on the background, likely because classification tasks typically involve identifying the main objects in an image without detailed scene analysis. Consequently, VLMs may not be well-suited for detection tasks that necessitate precise differentiation between foreground and background elements. A similar perspective is echoed in STALE [29]. ### Q7: Can you clarify "fails to reach future frames during training" from line 37? Given an untrimmed long video, the temporal action detection model can see the features of all frames, while the OAD model can only see the current frame and past frames in each iteration. ### Q8: What precisely is the circle operator in equation (1)? In this paper, the symbol "◦" represents function composition. If $Ψ_{AC}$ and $Ψ_{DNTR}$ are two functions, then $Ψ_{AC}◦Ψ_{DNTR}$ denotes their composition, which is defined as $(Ψ_{AC}◦Ψ_{DNTR})(x)=Ψ_{AC}(Ψ_{DNTR}(x))$. ### Q9: On Line 175, how do you know "when the raw labels are not enough"? In this context, "raw labels" refer to the captions associated with the selected video clips. Descriptions in large-scale video datasets often lack density. The OAD model utilizes a sliding window to extract multiple frames across the video, easily picking up segments with sparse or minimal captions. ### Q10: In Table 2, OVO_{ad}^{dagger} refers to OV-OAD, correct? Yes, that was a typo in our notation; OVO_{ad}^{dagger} means OV-OAD. ### Reference [a] Hyun, Jeongseok, et al. "Exploring Scalability of Self-Training for Open-Vocabulary Temporal Action Localization." arXiv preprint arXiv:2407.07024 (2024).
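To make the composition operator concrete, here is a minimal sketch; the stand-in functions are hypothetical numeric placeholders, not the paper's actual $Ψ_{AC}$ and $Ψ_{DNTR}$ blocks:

```python
# Minimal illustration of the "◦" (function composition) operator from Eq. 1.
# psi_ac and psi_dntr below are hypothetical stand-ins for the action
# clustering block and the distant neighboring frame Transformer block.

def compose(f, g):
    """Return f ◦ g, i.e., the function x -> f(g(x))."""
    return lambda x: f(g(x))

psi_dntr = lambda x: x + 1   # placeholder for Ψ_DNTR
psi_ac = lambda x: x * 2     # placeholder for Ψ_AC

model = compose(psi_ac, psi_dntr)   # Ψ_AC ◦ Ψ_DNTR
print(model(3))  # psi_ac(psi_dntr(3)) = (3 + 1) * 2 = 8
```

The key point is the ordering: the right-hand function ($Ψ_{DNTR}$) is applied first, and its output is fed to the left-hand function ($Ψ_{AC}$); it is not an element-wise (Hadamard) product.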
--- Rebuttal Comment 1.1: Comment: Thank you for the additional clarification, and the analysis of the performance of a video-text baseline. I appreciate the additional inclusion of benchmarks vs. more recent techniques, and on more datasets. I believe all of my concerns were addressed. --- Reply to Comment 1.1.1: Title: Please let us know if your concerns have been addressed Comment: Dear Reviewer wz2n, We wish to express our gratitude for your extensive review and positive comments. **Your comments are invaluable to us, and we are fully committed to thoughtfully incorporating your insights to enhance our paper**. As the discussion phase is nearing its end, we would like to confirm whether our rebuttal addresses your concerns. **It would be appreciated if you could raise your score on our paper if we address all your concerns.** We thank you again for your effort in reviewing our paper. Best regards, Authors of Paper #1178
Summary: This paper addresses a challenging setting in video understanding: open-vocabulary online action detection. It leverages pre-trained visual language models with a proposed dual-encoder architecture to achieve successful zero-shot detection. Strengths: 1. The proposed method follows a visual-text dual encoder approach and applies it in a novel way to zero-shot online temporal action detection, achieving state-of-the-art performance. 2. Detailed ablation studies and analyses are included. This paper provides comprehensive ablations regarding the architecture design, loss design, and efficiency analysis, demonstrating the advantages of the proposed method over previous approaches. 3. Clear writing. The paper is well-organized and easy to follow. Weaknesses: 1. The choice of VLM. The method uses CLIP as the visual-text encoder. However, as discussed in the limitations section, CLIP is an image-based understanding model and lacks the capability to capture temporal context. I wonder if other VLMs, such as ActionCLIP, could mitigate these drawbacks for the proposed task. 2. The comparison of the proposed method with previous work, such as OadTR, seems to use different visual encoders/features. Can the authors explain the fairness of such a comparison? Besides, the author should also compare the method with more recent methods, since OadTR is not state-of-the-art in recent works. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations have been well discussed by the authors in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns: ### Q1: I wonder if other VLMs, such as ActionCLIP, could mitigate these drawbacks for the proposed task. Thank you for the suggestion. We plan to enhance our model's performance in future work by incorporating improved feature extractors like ActionCLIP [38]. ### Q2: The comparison of OV-OAD with OadTR seems to use different visual encoders/features. Thank you for pointing out the lack of description here. In line 276, we noted that both the fully supervised methods and our OV-OAD utilize the same feature extractor. Specifically, both the base-to-novel fine-tuning methods and the fully supervised methods employ the same feature extractor, namely CLIP's visual encoder ViT/B. In the final version, we will emphasize this statement at the beginning of the paragraph. ### Q3: The author should also compare the method with more recent methods. Please refer to the response To-All-Reviewers-Q1 for details. --- Rebuttal Comment 1.1: Comment: I appreciate the author's effort during the rebuttal. The author has clarified most of my concerns. Considering weakness 1 and other reviewers' opinions, I will keep my score as borderline accept. --- Reply to Comment 1.1.1: Title: Response to Comments Comment: Thank you for taking the time to review our response and for providing valuable feedback. We are pleased to hear that the majority of your concerns have been addressed. Indeed, your question Q1 has inspired us significantly. In the limitation analysis of the initial draft of our paper, we made a similar intuitive assumption that better vision-language models (VLMs) could further enhance the performance of zero-shot online action detection. We are actively exploring the use of CLIP variants of VLMs (e.g., ActionCLIP) for feature extraction of video frames.
However, due to resource constraints within our group, we are currently in the process of extracting features from the **14,950** videos in the ANet dataset using **ActionCLIP ViT-B/16**, which involves frame-by-frame extraction and saving the features to a storage device. We have made significant progress, with approximately **80%** of the extraction complete. We are committed to addressing all your concerns and will incorporate the aforementioned experiments and analyses into the final version of the paper. Moving forward, our primary direction for future research is to improve the feature extraction backbone, building on the insights gained from this work.
Summary: The paper introduces OV-OAD, a novel zero-shot online action detection system leveraging vision-language models for open-world temporal understanding. The authors propose a Transformer-based model with an object-centered decoder unit, trained solely on video-text pairs without manual frame-level annotations. The system is evaluated on THUMOS'14 and TVSeries benchmarks, demonstrating superior performance over existing zero-shot methods. Strengths: 1. Innovation: The paper introduces a significant advancement in action detection by proposing a zero-shot online detection method that does not rely on manual annotations. 2. Technical Depth: The proposed model incorporates a novel object-centered decoder unit within the Transformer framework, which is a sophisticated approach to frame aggregation. 3. Experimental Validation: The paper provides extensive experiments on two benchmarks, demonstrating the effectiveness of the proposed method. Weaknesses: 1. The OadTR model was proposed in 2021, and the authors should consider using more recent OAD models such as MAT, MiniROAD, etc., as comparative benchmarks to enhance the timeliness and competitiveness of their model. 2. The authors have employed a multitude of complex modules for Zero-shot experiments on two datasets, with improvements on the TVSeries dataset being relatively limited compared to THUMOS14. This raises doubts about whether the proposed model is robust enough to serve as a standard baseline for future research. 3. As a pioneering work, the authors have experimented on typical action datasets like THUMOS14 and TVSeries. However, it would be beneficial to also test on atypical human action datasets, such as creating a benchmark for the HDD dataset. 4. In the action clustering block, which serves as the query and which as the key and value between the input Group Embedding and the output tokens? 
According to Figure 2, it seems that the Group Embedding is the query, and the latter is the key and value, but why is the dimension of the attention weight matrix $A$ $n \times k$ instead of $k \times n$? (line 142) 5. Line 154 mentions that the number of neighboring frames n can be a large value, and Table 4 shows the highest $n$ up to 8. Can the authors present experimental results for higher values of $n$? 6. Could the authors further explain how the model achieves ultra-high inference speed with a larger scale of parameters? For example, the parameter amount is twice that of LSTR, but the inference speed is six times as fast. 7. The paper lacks explanations for some operations, such as the operator "$\circ$" in Equation 1. This may lead to misunderstandings, mistaking it for a Hadamard product rather than a composite function. 8. It is recommended that the authors add visible variables in the figures used to illustrate the method (such as Figure 2) and improve the explanation of different parts of the chart to enhance readability. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The OadTR model was proposed in 2021, and the authors should consider using more recent OAD models such as MAT, MiniROAD, etc., as comparative benchmarks to enhance the timeliness and competitiveness of their model. 2. The authors have employed a multitude of complex modules for Zero-shot experiments on two datasets, with improvements on the TVSeries dataset being relatively limited compared to THUMOS14. This raises doubts about whether the proposed model is robust enough to serve as a standard baseline for future research. 3. As a pioneering work, the authors have experimented on typical action datasets like THUMOS14 and TVSeries. However, it would be beneficial to also test on atypical human action datasets, such as creating a benchmark for the HDD dataset. 4. 
In the action clustering block, which serves as the query and which as the key and value between the input Group Embedding and the output tokens? According to Figure 2, it seems that the Group Embedding is the query, and the latter is the key and value, but why is the dimension of the attention weight matrix $A$ $n \times k$ instead of $k \times n$? (line 142) 5. Line 154 mentions that the number of neighboring frames n can be a large value, and Table 4 shows the highest $n$ up to 8. Can the authors present experimental results for higher values of $n$? 6. Could the authors further explain how the model achieves ultra-high inference speed with a larger scale of parameters? For example, the parameter amount is twice that of LSTR, but the inference speed is six times as fast. 7. The paper lacks explanations for some operations, such as the operator "$\circ$" in Equation 1. This may lead to misunderstandings, mistaking it for a Hadamard product rather than a composite function. 8. It is recommended that the authors add visible variables in the figures used to illustrate the method (such as Figure 2) and improve the explanation of different parts of the chart to enhance readability. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: In summary, the authors have examined the OAD (Online Action Detection) from an interesting perspective and proposed a novel Zero-shot model, which appears to have profound application value compared to traditional OAD methods. However, additional experiments may be necessary to substantiate the claims fully. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns: ### Q1: The authors should consider using more recent OAD models. Please refer to the response To-All-Reviewers-Q1 for details. ### Q2: The improvements on the TVSeries dataset being relatively limited compared to THUMOS'14. We believe there are two reasons why OV-OAD's improvement on THUMOS'14 is larger than on the TVSeries dataset. - First, ANet's categories cover **40%** and **30%** of the action categories of THUMOS'14 and TVSeries, respectively. Similar text descriptions encountered by the text encoder during pre-training improve the accuracy of visual-text matching. Please refer to the response To-All-Reviewers-Q3 for specific calculations. - Second, the evaluation metrics are calculated differently for the THUMOS'14 and TVSeries datasets: the former uses mAP while the latter uses cAP. ### Q3: It would be beneficial to test on atypical human action datasets, such as creating a benchmark for the HDD dataset. Thank you for your advice. We have validated the zero-shot performance of OV-OAD on EK100 with non-trivial improvement (1.3% cAP). Please refer to the response To-All-Reviewers-Q2 for details. In the final version, we will evaluate our method on other challenging online action detection datasets, e.g., FineAction and the mentioned HDD. HDD [a] is a multimodal dataset in a driving scenario. In previous studies, e.g., OadTR [39] and Colar [45], non-visual sensor data are generally used as inputs to the model. The non-visual sensors are sourced from the vehicle's controller area network bus, encompassing critical metrics like car speed, accelerator and braking pedal positions, yaw rate, steering wheel angle, and the rotation speed of the steering wheel. In contrast, OV-OAD has only a visual feature extractor and does not directly process sensor data.
Directly evaluating our current model on HDD could lead to unsatisfying performance due to the huge data distribution gap between our training (RGB videos for daily actions) and HDD. ### Q4: The group embedding is the query, and the frame tokens are the keys and values, but why is the dimension of the attention weight matrix $A$ $n \times k$ instead of $k \times n$? In the action clustering block, the group embeddings serve as the query, and frame tokens serve as keys and values. As shown in Eq.(4), we define $A = \mathrm{softmax}\left(\frac{KQ^{\top}}{\sqrt{d}}\right) \in \mathbb{R}^{n \times k}$, with the softmax operator normalizing over $k$, and the output of the slot-attention block is $A^{\top}V$, which is of shape $k \times d$. We will clarify the definition of $A$ in the final version. ### Q5: Line 154 mentions that the number of neighboring frames $n$ can be a large value, and Table 4 shows the highest $n$ up to 8. Can the authors present experimental results for higher values of $n$? Training our model with $n$ larger than 8 (e.g., 12) requires 8 V100 GPUs for ~5 days on ANet. We will add this study in the final version. We state that $n$ can be a large number relative to the number of group embeddings $k$. Generally, $n$ should be more than twice $k$. We will make these statements clear. ### Q6: The parameter amount is twice that of LSTR, but the inference speed is six times as fast. The inference of LSTR is much slower than ours due to two main reasons: - It requires more input frames than our method. - It relies on optical flow extraction, which is a slow process. In particular, LSTR demands a larger number of input frames for optimal performance, using 520 seconds of video for inference on THUMOS'14, while our OV-OAD utilizes only 8 seconds. This means that LSTR requires **65** times more data than OV-OAD, which likely explains why our model's inference speed is six times faster.
Furthermore, the primary speed bottleneck for LSTR is the extraction of optical flow (8.1 FPS), whereas for our model, it is the extraction of image features (292.3 FPS). ### Q7: The paper lacks explanations for some operations, such as the operator "◦" in Equation 1. In Equation 1, the symbol "◦" represents function composition. We will include detailed explanations for clarity. ### Q8: It is recommended that the authors add visible variables in the figures. We will label the input and output tensors' names and dimensions for each module in Figure 2 for clarity. ### Reference [a] Ramanishka, Vasili, et al. "Toward driving scene understanding: A dataset for learning driver behavior and causal reasoning." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2018. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. While I appreciate the effort to address my concerns, I keep my score as weak accept. --- Reply to Comment 1.1.1: Title: Please let us know if your concerns have been addressed Comment: Dear Reviewer zhJf, We wish to express our gratitude for your extensive review and positive comments. Your comments are invaluable to us, and **we are fully committed to thoughtfully incorporating your insights to enhance our paper**. As the discussion phase is nearing its end, we would be grateful to hear your feedback and wonder if you might still have any concerns we could address. **It would be appreciated if you could raise your score on our paper if we address all your concerns.** We thank you again for your effort in reviewing our paper. Best regards, Authors of Paper #1178
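The shape convention from the Q4 answer above (queries from group embeddings, keys/values from frame tokens, softmax normalizing over $k$) can be checked numerically. Below is a minimal NumPy sketch; the sizes `n`, `k`, `d` are illustrative placeholders, not the paper's actual settings.

```python
import numpy as np

# Illustrative sizes: n frame tokens, k group embeddings, feature dim d.
n, k, d = 8, 4, 64
rng = np.random.default_rng(0)
Q = rng.standard_normal((k, d))  # group embeddings act as queries
K = rng.standard_normal((n, d))  # frame tokens act as keys
V = rng.standard_normal((n, d))  # frame tokens act as values

logits = K @ Q.T / np.sqrt(d)        # shape (n, k)
A = np.exp(logits)
A /= A.sum(axis=1, keepdims=True)    # softmax normalizing over k
grouped = A.T @ V                    # shape (k, d): grouped output tokens

assert A.shape == (n, k) and grouped.shape == (k, d)
```

Each row of `A` distributes one frame token over the $k$ group embeddings, so $A \in \mathbb{R}^{n \times k}$ and the slot-attention output $A^{\top}V$ is $k \times d$, as stated in the rebuttal.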
Summary: 1. The authors have proposed a new method for online open-vocabulary action detection by leveraging pretrained vision-language models. 2. To that end they introduce 2 main modules, a distant neighboring frame transformer and an object-centric action clustering unit. 3. They train their model with three objectives on filtered versions of the ActivityNet v1.3 and InternVid-10M-FLT datasets. 4. They evaluate the model zero-shot on the validation splits of THUMOS'14 and TVSeries and with a base-to-novel formulation on THUMOS'14. The authors show that in the former case their model beats naive CLIP baselines by ~2-7% on THUMOS'14 and ~2% on TVSeries, and in the latter case beats the OadTR model trained on the seen split of THUMOS'14 by ~6-9%. 5. The authors also demonstrate superior inference speed compared to OadTR. Strengths: 1. The authors have proposed a new method for online open-vocabulary action detection by leveraging pretrained vision-language models. 2. The authors introduce 2 main modules, a distant neighboring frame transformer and an object-centric action clustering unit. 3. The authors evaluate the model zero-shot on the validation splits of THUMOS'14 and TVSeries and with a base-to-novel formulation on THUMOS'14. The authors show that in the former case their model beats naive CLIP baselines by ~2-7% on THUMOS'14 and ~2% on TVSeries, and in the latter case beats the OadTR model trained on the seen split of THUMOS'14 by ~6-9%. 4. The authors also demonstrate superior inference speed compared to OadTR. Weaknesses: 1. The authors claim in lines 41 and 42 that they do not use any frame-level annotations. However, the ActivityNet dataset contains annotations for start and end times (and therefore frames) for actions, and it seems that for the L_current objective in section 3.3, the text supervision is provided to the current frame. 
These claims seem contradictory and more clarity about this would be better. 2. In the zero-shot baseline comparison the improvement in the case of THUMOS'14 seems to be much larger than that of TVSeries. Some explanation regarding this huge disparity is necessary. Zero-shot evaluations on more datasets can help indicate the robustness of this method to different data distributions. 3. In the zero-shot baseline comparison, there seems to be a very large improvement for THUMOS'14 in the case of the ANet model compared to the InternVid model. The improvement is 2% for the latter while ~7% for the former. Could this be attributed to similarity of actions between THUMOS'14 and ANet? Some investigation regarding this could shed light on the previous point. If that is the case then the improvement range disparity for TVSeries (~2%) could be explained. 4. The authors have compared their results for the open-vocab evaluation with only the OadTR model. Comparison with more/better online action detection models (like GateHUB, MiniROAD, LSTR, Colar) is necessary for a holistic evaluation of the proposed model. 5. The ablations do not contain the different parts of the action clustering unit. Some results without the object-centric decoder are needed to justify its introduction. In Table 5, there is no ablation for not using the final transformer encoder. It should be added to justify its introduction as well. 6. In Figure 2, it is not clear what is being fed from the output of the object-centric decoder to the final transformer encoder. It needs to be clearly mentioned in the figure for clarity. Technical Quality: 2 Clarity: 2 Questions for Authors: see weaknesses Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Below are our responses to your concerns: ### Q1: The statement "do not use any frame-level annotations" may appear contradictory. Thank you for highlighting this inaccurate claim. We will rephrase it as follows: "avoid utilizing fine-grained temporal annotations, which involve humans labeling actions frame by frame to ensure no actions are missed." Regarding our use of ANet, the annotations consist of timestamps and corresponding actions for each temporal segment. However, the timestamps lack the ability to differentiate between consecutive frames, leading to potential mislabeling of frames with incorrect actions. Additionally, instances within ANet may sometimes omit actions or assign incorrect labels. For example, in sample *5n7NCViB5TU.mp4* from the training set, lasting 121.44 seconds and depicting discus throwing by 3 athletes (referred to as Athletes 1-3), there are discrepancies. Athlete 1's action completion was within the time window $[22.4, 39.5]$, while ANet indicated $[24.3, 38.1]$ as the segmentation point. Similarly, for Athlete 2, the action completion time window was $[62.5, 73.1]$, with ANet lacking a corresponding segmentation label. ### Q2: Zero-shot evaluations on more datasets can help indicate the robustness. We validate OV-OAD's zero-shot performance on EK100, showcasing a non-trivial improvement (1.3% cAP). For further details, please refer to the response To-All-Reviewers-Q2. ### Q3: Different improvements for THUMOS'14 and TVSeries. Please refer to the response To-All-Reviewers-Q3 for details. ### Q4: Comparison with more online action detection models. Please refer to the response To-All-Reviewers-Q1 for details. ### Q5: The ablations on the action clustering unit. We will ablate the different parts of the action clustering block soon. 
The reason for using the object-centric decoder lies in its capacity to generate more semantically explicit segments than a standard transformer decoder block, owing to its differentiable clustering module. This superiority has been validated in [41-42]. Nevertheless, we will compare it with this alternative in the following ablation. ### Q6: The ablation for not using the final transformer encoder. Removing the final transformer encoder (which also means abandoning $L_{contrast}$) would lead to training anomalies and performance drops. Without the textual guidance from $L_{contrast}$, the open-vocabulary capability is significantly compromised. As shown in Table 5 (rows 2 vs. 3), we see that reducing the number of layers in the final transformer encoder from 6 to 3 results in a performance loss. ### Q7: Figure 2 needs to be made clearer. The object-centric decoder automatically associates $n$ input frame tokens using $k$ learnable grouping embeddings, ultimately outputting grouped tokens (${\cal G}' \in \mathbb{R}^{k \times d}$) which can also globally describe the video clip. We will label the tensor dimensions of the module's outputs in Figure 2. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed reply. Most of my concerns have been addressed except for Q6 in your rebuttal, where you claim "As shown in Table 5 (rows 2 vs. 3), we see that reducing the number of layers in the final transformer encoder from 6 to 3 results in a performance loss." The performance changes by 0.1, which is too small in my opinion to demonstrate the utility of the final transformer encoder. I suspect this is the case because the authors perform average pooling (as depicted in Figure 2) after the encoder, which makes a lot of the processing in the final encoder redundant. I would also like to point out, crucially, that removing the final encoder does not mean abandoning the $L_{contrast}$ loss. 
I suggest that the authors remove the final encoder completely and run an experiment with the same set of objectives as before to confirm this point. This is not a major issue but I think it will improve the quality of the paper. My rating stays unchanged. --- Reply to Comment 1.1.1: Title: Response to Comments Comment: Thank you for taking the time to review our response and for offering valuable feedback. We are glad to know that most of your concerns have been resolved. Your feedback on Q6 was particularly insightful. We initially misunderstood your question and have now run the experiment as per your recommendation, excluding the final transformer encoder. The data presented in the table below indicates a marginal performance decrease of around **0.6%** upon removing the final transformer encoder. Additionally, this removal results in a **15%** reduction in training parameters, specifically in the visual encoder. We are fully committed to addressing all your concerns and will incorporate the aforementioned experiments and analyses into the final version. | $Ψ_{DNTR}$ | $Ψ_{AC}$ | THUMOS'14 (mAP%) | |:---|:----:|:----:| |4|6-2|37.5| |4|6-0|36.9| --- Rebuttal Comment 1.2: Title: Ablation Results for Action Clustering Block Comment: Here, we analyze the impact of an object-centric decoder compared to a standard transformer decoder unit within the action clustering block. Both are designed to bind semantically similar static frames into a group embedding. The results below demonstrate that the object-centric decoder outperforms the standard transformer decoder. |Clustering Unit|THUMOS'14 (mAP%)| |:-----|:-----:| | Object-Centric Decoder | **37.5** | | Standard Transformer Decoder | 37.0 |
Rebuttal 1: Rebuttal: # To All Reviewers We sincerely thank each reviewer for providing constructive comments on our paper, which are very helpful for improving it. Below, we address the general issues raised by the reviewers. ### Q1: Comparison with more online action detection models We conducted a comparative analysis between OV-OAD and both the fully-supervised and base-to-novel fine-tuning methods, leveraging the open-source LSTR [44] and MAT [36] frameworks. For the fully-supervised training, we adhered to the experimental configurations outlined in [36, 44], with the only deviation being the use of solely CLIP/ViT-B-extracted RGB features as inputs. Key parameters remained consistent, such as training for 20 epochs with the Adam optimizer and a learning rate of 7e-5, among others. For the base-to-novel fine-tuning, we followed the experimental fine-tuning setup of [20, 33], including the Adam optimizer, a learning rate of 1e-3, and the cosine decay function for training. | Train-Test Split | Methods | THUMOS’14 (mAP%) | |:--------------:|:----------:|:----------:| | 100%-0% | OadTR-D8 | 47.4 | | 100%-0% | LSTR | 47.7 | | 100%-0% | MAT-D48 | **48.2**| |100%-0% | CLIP-I | 28.0 | | 100%-0% | OV-OAD | 37.5 | | 75%-25% | LSTR | 26.9 | | 75%-25% | MAT-D48 | 25.5 | | 75%-25% | OadTR-D8 | 33.7 | | 75%-25% | CLIP-I | 38.6 | | 75%-25% | OV-OAD | **44.6** | | 50%-50% | MAT-D48 | 7.9 | | 50%-50% | LSTR | 9.1 | | 50%-50% | OadTR-D48 | 9.6 | | 50%-50% | CLIP-I | 28.6 | | 50%-50%| OV-OAD | **35.9** | The results are reported in the table above. MAT-D48 indicates that MAT utilizes the ground truth of future frames (48 frames in 12 seconds) during training. We observe that the state-of-the-art MAT (ICCV23) method achieves better performance in the fully-supervised setting, but its performance is poor in the base-to-novel setting. 
It is evident that traditional OAD models combined with base-to-novel fine-tuning methods are not suitable for direct application to zero-shot online action detection tasks. ### Q2: Zero-shot evaluations on more datasets We evaluated OV-OAD's performance on the challenging EPIC-KITCHENS-100 (also known as EK100) dataset. This dataset comprises first-person-view shots and notably deviates from the ANet data distribution. Following [a], we conducted online action detection inference on the complete test set of EK100, covering 97 verb categories. As for TVSeries, we employed per-frame calibrated average precision (cAP) as the evaluation metric. The results presented in the table below illustrate that OV-OAD enhances online action detection for first-person-view videos compared to CLIP. | Methods | Arch | EK100 (cAP%) | |:-----|:----:|:----:| | CLIP-I | ViT/B | 40.1| | CLIP-II | ViT/B | 39.9| | OV-OAD | ViT/B | **41.4**| In addition, we also chose the challenging and large OAD dataset FineAction [b] for validation. Because of resource constraints, the results are not available at the moment; we will add a description of the testing process and experimental results on this dataset in the final version. ### Q3: Different improvements for THUMOS'14 and TVSeries. - Between THUMOS'14 and TVSeries on ANet The variance in improvements between THUMOS'14 and TVSeries can be attributed to the resemblances in data distribution across these two datasets and our utilization of ANet. The substantial boost of OV-OAD on THUMOS'14 can be attributed to ANet encompassing a broader array of action categories. Using natural-language similarity matching, we identified 8 similar action phrases within THUMOS'14's 20 action categories, constituting **40%** of its overall categories. In the TVSeries dataset, we identified 9 similar action categories out of 30, equating to **30%** of its total categories. 
- Between ANet and InternVid on THUMOS‘14 Similarly, OV-OAD achieves better performance on THUMOS'14 due to the fact that ANet covers a wider range of action categories than IVid. Using the same natural language tools, we compared the coverage ratios of ANet and IVid for THUMOS categories, which stood at **40%** and **15%**, respectively. To elaborate, when contrasting the IVid and THUMOS datasets, we considered 500 high-frequency verbs as the action categories for IVid. Then, we pinpointed 3 categories from them that were similar to THUMOS'14. ### Reference [a] Damen, Dima, et al. "Scaling egocentric vision: The epic-kitchens dataset." Proceedings of the European conference on computer vision (ECCV). 2018. [b] Liu, Yi, et al. "Fineaction: A fine-grained video dataset for temporal action localization." IEEE transactions on image processing 31 (2022): 6937-6950.
NeurIPS_2024_submissions_huggingface
2024
Hierarchical Visual Feature Aggregation for OCR-Free Document Understanding
Accept (poster)
Summary: This paper introduces an approach to enhancing document understanding capabilities by employing a hierarchical visual feature aggregation technique alongside pretrained Multimodal Large Language Models (MLLMs). The method utilizes a feature pyramid hierarchy integrated with cross-attentive pooling, which effectively captures multi-scale visual information. Additionally, the paper presents a new instruction task designed to further boost model performance. The experimental results on standard benchmarks demonstrate the effectiveness and superiority of the proposed method in advancing document understanding. Strengths: 1. This paper conducts extensive ablation studies to meticulously demonstrate the effectiveness of each proposed module and parameter. These studies provide a comprehensive understanding of the individual contributions and interactions within the framework, thereby validating the design choices and overall robustness of the system. 2. This paper introduces a novel relative text-position prediction task, which significantly enhances document understanding. This task leverages the spatial relationships between text elements to improve the model's ability to interpret and contextualize information within documents. Weaknesses: 1. The baseline for the ablation studies presented in this paper may be considered less robust, as evidenced by the performance on DocVQA, where it only achieves 35.17% accuracy, significantly below the 72.7% achieved by the proposed model. This discrepancy suggests that the baseline's performance could potentially undermine the persuasiveness of the conclusions drawn from the ablation study. It is possible that certain modules or techniques, which show effectiveness against this baseline, may not demonstrate the same level of impact when compared to a stronger baseline. 2. 
While the multi-scale feature pyramid hierarchy is identified as the paper's primary contribution, the discussion surrounding this module could be more thorough. For instance, the paper could benefit from an exploration of alternative configurations, such as the implications of utilizing only two local scales versus three or four. A deeper analysis of these options would provide insights into the optimal balance between computational efficiency and performance, thereby enriching the understanding of the module's role in the overall framework. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the concerns in Weaknesses. Other questions: Does "Ours" in Table 3 apply the relative text-position prediction task? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful for the reviewer's insightful recognition of our extensive ablation studies and the novel relative text-position prediction task, which validate our approach's robustness. Below are our responses to the main concerns. **[W1] Robustness of the ablation studies** As 35.17% represents the performance of the moderate BLIP2-OPT backbone and 72.7% represents the performance of the larger mPLUG-Owl backbone on DocVQA, they are not directly comparable. To provide the proper baseline, we brought the numbers for the mPLUG-Owl baseline in Table D (row 5) from the ablation study of the UReader paper [14]. To further address the concern about the robustness of our ablation studies, we have conducted additional experiments on the mPLUG-Owl-based model. Due to time constraints during the rebuttal period, we focused on the components that might be considered most marginal: the reconstruction process (row 2 vs row 4, row 6 vs row 8) and Relative Text-Position Prediction (RTPP) (row 3 vs row 4, row 7 vs row 8). Note that we compare RPT to RFT for the RTPP experiments. As shown in Table D, the reconstruction loss and RTPP consistently improve performance on most tasks with both the moderate and large backbone models, BLIP2-OPT and mPLUG-Owl, respectively. We believe these consistent gains demonstrate the effectiveness of our components. We will provide the full ablation studies for the large model in the revised version. Table D: Results of ablation studies on the reconstruction loss and RTPP. Note that we compare RPT to RFT for the RTPP experiments. The last row for each backbone model represents our final model. 
| Backbone | MS+HVFA | Recon | RFT | RPT | PTP | DocVQA | InfoVQA | DeepForm | KLC | WTQ | TabFact | ChartQA | VisualMRC | TextVQA | TextCaps | |-----------|:-------:|:-----:|:---:|:---:|:---:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:----------:|:---------:|:----------:| | BLIP2-OPT | | | | | | 35.17 | 20.34 | 4.14 | 18.50 | 15.36 | 53.37 | 31.76 | 201.39 | 43.20 | 126.34 | | BLIP2-OPT | ✓ | | | ✓ | ✓ | 50.46 | 28.48 | 13.18 | **23.82** | 20.06 | 59.83 | 49.48 | 226.65 | 56.04 | 134.06 | | BLIP2-OPT | ✓ | ✓ | ✓ | | ✓ | 50.62 | 29.49 | 12.55 | 22.88 | 20.51 | 59.28 | 48.41 | 226.55 | 57.19 | 134.44 | | BLIP2-OPT | ✓ | ✓ | | ✓ | ✓ | **51.38** | **29.60** | **14.58** | 23.78 | **21.15** | **59.87** | **50.41** | **228.65** | **57.32** | **135.24** | | mPLUG-Owl | | | | | | 56.7 | --- | --- | --- | 22.9 | --- | 56.7 | 205.0 | 54.3 | --- | | mPLUG-Owl | ✓ | | | ✓ | ✓ | 71.33 | 45.21 | 52.42 | **36.88** | 34.14 | 68.02 | 62.90 | 224.34 | 58.73 | 122.32 | | mPLUG-Owl | ✓ | ✓ | ✓ | | ✓ | 71.77 | **45.92** | 51.89 | 36.37 | 33.71 | 67.86 | 62.35 | 225.60 | 59.16 | 122.79 | | mPLUG-Owl | ✓ | ✓ | | ✓ | ✓ | **72.68** | 45.90 | **53.02** | 36.73 | **34.51** | **68.19** | **63.28** | **226.43** | **59.22** | **123.11** | **[W2] Analysis on using more scales** We appreciate the reviewer's suggestion to explore alternative configurations of our multi-scale feature pyramid hierarchy. Our method is designed to be flexible and can be extended to use an arbitrary number of scales. For example, a 3-scale configuration with feature hierarchy can be implemented as follows: combining features from scales 3 and 2, then merging the results with features from scale 1. Due to significant time constraints during the rebuttal period, it is challenging to conduct extensive experiments on these alternatives. We are currently running an experiment with the model that integrates 3 scales. 
We present the validation performance of each task after 2 epochs in Table E. Table E. Validation performance with varying scales after 2 epochs. | # of scales | Throughput (img/s) | DocVQA | InfoVQA | DeepForm | KLC | WTQ | TabFact | ChartQA | VisualMRC | TextVQA | TextCaps | |:-----------:|:------------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:----------:|:---------:|:----------:| | 1 | 1.481 | 36.29 | 21.24 | 16.62 | 36.23 | 14.31 | 52.52 | 27.44 | 178.04 | 42.63 | 126.56 | | 2 | 1.088 | 42.18 | 24.53 | 21.26 | 38.74 | 16.60 | 53.13 | 32.81 | 195.55 | 45.22 | 129.24 | | 3 | 0.494 | **44.86** | **27.57** | **25.61** | **39.48** | **18.77** | **54.17** | **33.05** | **201.49** | **46.92** | **131.82** | When integrating the 3rd scale in addition to the 1st and 2nd scales, the number of visual inputs increases about $(1+g+4g+16g)/(1+g+4g) ≈ 4.2$ times, while the throughput of our model only decreases by a factor of 2. Note that $g$ represents the number of grids in line 124. The validation results demonstrate that incorporating the 3rd scale still helps document understanding, but the improvement gain diminishes. This is expected because, as more scales are added, the incremental benefit decreases since the model has already captured most of the relevant information from the initial scales. We expect this experiment to be finished within the discussion period and will post the performance on the test set. **[Q1] Clarification on Table 3** Yes. The results labeled as "Ours" in Table 3 indeed represent our final model, which includes the Relative Text-Position Prediction Task. --- Rebuttal Comment 1.1: Title: Final rating Comment: Thanks for the rebuttal. After reading the other reviews and the rebuttal, I would like to keep my rating. 
Although the rebuttal addressed some of my concerns, such as the analysis and the ablation studies, I agree with Reviewer CDZQ that the multi-scale feature integration is not novel. --- Rebuttal 2: Title: Final test results for using more scales Comment: Table F. Test scores of the BLIP-2-based model with varying scales. | # of scales | Throughput (img/s) | DocVQA | InfoVQA | DeepForm | KLC | WTQ | TabFact | ChartQA | VisualMRC | TextVQA | TextCaps | |:-----------:|:------------------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:----------:|:---------:|:----------:| | 1 | 1.481 | 40.18 | 25.19 | 6.49 | 18.97 | 17.80 | 56.85 | 38.88 | 219.74 | 47.54 | 128.39 | | 2 | 1.088 | 51.38 | 29.60 | 14.58 | 23.78 | 21.15 | 59.87 | 50.41 | 228.65 | 57.32 | 135.24 | | 3 | 0.494 | **52.78** | **31.25** | **20.59** | **25.42** | **23.21** | **60.12** | **52.21** | **230.22** | **60.22** | **137.57** | The experiments with the 3-scale variant of our model have been completed, and the test scores for the different numbers of scales across each benchmark are presented in Table F. We used the BLIP-2-based version of our model, and all models were trained using RTPP for the text reading task in this experiment. The results show that incorporating an additional 3rd scale continues to help with document understanding, but the improvement gain becomes less significant. This is understandable, as the model has already captured most of the key information from the initial two scales, so the additional benefit is reduced. Note that training the model with all three scales without our hierarchical feature aggregation module is not feasible on A100 GPUs due to memory constraints. We will revise our manuscript to include this analysis. --- Rebuttal 3: Comment: Dear Reviewer Pfd9, Since the end of the discussion period is approaching, we kindly ask whether our response has helped clarify your concerns. 
Also, if you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and efforts in reviewing our paper. Best wishes, Authors. --- Rebuttal 4: Title: Clarification on the contribution of our work Comment: Thanks for your timely response. As we stated in our rebuttal to Reviewer CDZQ, our main contribution lies not only in using multi-scale features, but also in **how we aggregate them without significantly increasing computational costs**. This is an important issue for MLLMs. While multi-scale features clearly benefit MLLMs, using them is not computationally feasible without an appropriate strategy. By addressing this challenge, our approach enables broader applicability of MLLMs across different hardware environments. --- Rebuttal Comment 4.1: Comment: Dear Reviewer Pfd9, We sincerely appreciate your time and effort in reviewing our work. Given that Reviewer CDZQ has acknowledged the value of the hierarchical feature aggregation module as a meaningful contribution to multi-scale feature aggregation, we kindly ask if you could reconsider the contribution of our work in this light. Thank you for your consideration.
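The token-count ratio quoted in the multi-scale analysis above, $(1+g+4g+16g)/(1+g+4g) \approx 4.2$, can be sanity-checked with a few lines of Python. The chosen values of `g` (the number of grids) are illustrative only; the ratio tends to $21/5 = 4.2$ as `g` grows.

```python
def visual_input_ratio(g: int) -> float:
    """Ratio of visual inputs when adding the 3rd scale (16g tokens)
    on top of scales 1 and 2 (1 + g + 4g tokens). g = number of grids."""
    return (1 + g + 4 * g + 16 * g) / (1 + g + 4 * g)

# Illustrative grid counts; the ratio approaches 4.2 from below.
for g in (4, 16, 64):
    print(g, round(visual_input_ratio(g), 3))
```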
Summary: This paper presents hierarchical visual feature aggregation for OCR-free document understanding, leveraging a feature pyramid hierarchy with cross-attentive pooling to handle the trade-off between information loss and efficiency. Additionally, a relative text-position prediction task is proposed to address the text truncation issue. Experiment results based on BLIP-2 and mPLUG-Owl demonstrate the effectiveness of the proposed method. Strengths: 1. This paper innovatively utilizes a feature pyramid hierarchy to fuse multi-scale visual features without increasing the computational complexity of large language models. 2. This paper presents a novel instruction tuning task that is robust to text truncation issues. 3. Extensive experiments and ablation studies showcase the effectiveness of the proposed method on BLIP-2 and mPLUG-Owl models. Weaknesses: 1. The comparison in Table 3 is not comprehensive enough; models such as mPLUG-DocOwl 1.5 and TextMonkey should be included. 2. In the ablation experiments based on BLIP-2, the performance improvements brought by the reconstruction layer (Table 4) and the Relative Text-Position Prediction Task (Table 6, compared to Reading Full Text) are minimal. I wonder whether they can work for more advanced LMMs such as mPLUG-Owl. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In line 131, the description "augmenting each grid by a 2 × 2 scale, effectively enlarging the receptive field size" is confusing. How does this operation lead to an enlarged receptive field size? 2. How are the transposition and matrix multiplication operations performed in Equation 2 given the 4-dimensional matrices $F_{i+1}$ and $F^{'}_{i+1}$? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors did not validate their proposed methods on more advanced models such as mPLUG-DocOwl 1.5 and TextMonkey, so the generalizability of their methods remains open to discussion. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We truly thank the reviewer for recognizing our key innovations in multi-scale feature fusion, novel instruction tuning, and comprehensive experimental results across different models. Below are our responses to the main concerns. **[W1] Comparison to the recent works** Table B: Comparison of different configurations over TextMonkey [R4] and mPLUG-DocOwl 1.5 [R5]. | Method | Backbone | # Model Params | # Trainable Params | # Pretraining Data | # Fine-tuning data | |------------------|-------------|----------------|--------------------|--------------------|--------------------| | TextMonkey [R4] | Qwen-VL [25] | 9.7B | 7.9B | 1.4B+76.8M | 2.1M | | mPLUG-DocOwl 1.5 [R5] | mPLUG-Owl 2 [R6] | 8.1B | 412M | 400M | 4M+25K | | Ours | mPLUG-Owl | 7.2B | 96M | 1.1B | 650K | Initially, we compare the configurations of our framework and two other approaches, TextMonkey and mPLUG-DocOwl 1.5, presented in Table B. We would like to highlight two key differences between both models and ours. Both models use more advanced Multimodal Large Language Model (MLLM) backbones for initialization, Qwen-VL and mPLUG-Owl2, whereas our model is based on mPLUG-Owl. Additionally, both models benefit from substantially larger training datasets and more trainable parameters, which directly impacts their performance. Despite these advantages, according to the reported results, our method surpasses TextMonkey. Although the mPLUG-DocOwl 1.5 model performs slightly better than ours, this is expected given its use of a more advanced backbone and larger dataset. For a fair comparison, the best option would be to augment both backbones with our framework and train the models with the same data, which is not feasible within the limited rebuttal period. However, we expect improvements in our results upon incorporating our framework into the new backbones with more data, since the multi-scale aggregation module facilitates efficient capturing of different levels of detail. 
**[W2] Effect of the reconstruction loss and RTPP on mPLUG-Owl** Table C: Results of ablation studies on the reconstruction loss and RTPP. Note that we compare RPT to RFT for RTPP experiments. The last rows for each backbone model represent our final model. | Backbone | MS+HVFA | Recon | RFT | RPT | PTP | DocVQA | InfoVQA | DeepForm | KLC | WTQ | TabFact | ChartQA | VisualMRC | TextVQA | TextCaps | |-----------|:-------:|:-----:|:---:|:---:|:---:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:----------:|:---------:|:----------:| | BLIP2-OPT | ✓ | | | ✓ | ✓ | 50.46 | 28.48 | 13.18 | **23.82** | 20.06 | 59.83 | 49.48 | 226.65 | 56.04 | 134.06 | | BLIP2-OPT | ✓ | ✓ | ✓ | | ✓ | 50.62 | 29.49 | 12.55 | 22.88 | 20.51 | 59.28 | 48.41 | 226.55 | 57.19 | 134.44 | | BLIP2-OPT | ✓ | ✓ | | ✓ | ✓ | **51.38** | **29.60** | **14.58** | 23.78 | **21.15** | **59.87** | **50.41** | **228.65** | **57.32** | **135.24** | | mPLUG-Owl | ✓ | | | ✓ | ✓ | 71.33 | 45.21 | 52.42 | **36.88** | 34.14 | 68.02 | 62.90 | 224.34 | 58.73 | 122.32 | | mPLUG-Owl | ✓ | ✓ | ✓ | | ✓ | 71.77 | **45.92** | 51.89 | 36.37 | 33.71 | 67.86 | 62.35 | 225.60 | 59.16 | 122.79 | | mPLUG-Owl | ✓ | ✓ | | ✓ | ✓ | **72.68** | 45.90 | **53.02** | 36.73 | **34.51** | **68.19** | **63.28** | **226.43** | **59.22** | **123.11** | In response to concerns about the robustness of reconstruction loss and Relative Text-Position Prediction (RTPP) on the advanced backbone model, we conducted additional ablation studies on the mPLUG-Owl-based model, which are presented in Table C. Note that we compare RPT to RFT for RTPP experiments. The results demonstrate that both reconstruction loss and RTPP consistently improve performance on most tasks with both the moderate and large backbone models, BLIP2-OPT and mPLUG-Owl, respectively. We believe these consistent gains demonstrate the effectiveness of our components. 
**[Q1] Clarification on the receptive field** In line 131, the description “effectively enlarging the receptive field size” should have been removed. We are sorry for the confusion. To clarify, augmenting each grid by a 2 × 2 scale does not enlarge the receptive field as originally stated. Instead, this operation provides a more detailed view of each grid area by effectively zooming in, allowing our model to capture finer-grained visual features and smaller text fonts within each cell. This increased resolution of information potentially leads to a better understanding of local details. We will revise this description to accurately reflect the purpose and effect of this operation. **[Q2] Clarification of matrix multiplication** We simply flatten both matrices to the shape of $(H_i \times W_i \times Q) \times C$. We will revise the description to provide a clearer explanation. [R4] Liu et al., TextMonkey: An OCR-Free Large Multimodal Model for Understanding Document, arXiv 2024 [R5] Hu et al., mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding, arXiv 2024 [R6] Ye et al., mPLUG-Owl2: Revolutionizing Multi-modal Large Language Model with Modality Collaboration, CVPR 2024 --- Rebuttal Comment 1.1: Comment: Thank you for your responses. I would like to maintain the original rating. --- Reply to Comment 1.1.1: Comment: Dear Reviewer rxzX, We thank you for giving our paper a positive score. We will revise the main paper to reflect your comments and discussions. If you have any questions or additional comments, please do not hesitate to contact us. Best wishes, Authors.
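As an aside, the flattening mentioned in [Q2] above can be illustrated with a small sketch; the concrete shapes and the subsequent product are our own assumptions for illustration, not the paper's exact operation:

```python
import numpy as np

# Hypothetical shapes for illustration only: two feature tensors of shape
# (H_i, W_i, Q, C) are flattened to (H_i * W_i * Q, C) before multiplication.
H, W, Q, C = 4, 4, 3, 8
A = np.random.randn(H, W, Q, C).reshape(-1, C)  # (H*W*Q, C) = (48, 8)
B = np.random.randn(H, W, Q, C).reshape(-1, C)  # (48, 8)
sim = A @ B.T  # pairwise interaction between all flattened positions
print(sim.shape)  # (48, 48)
```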
Summary: The paper analyzes the impact of features at different scales for document understanding. It proposes a method to combine multi-scale features without significantly increasing computational complexity. In addition, the paper also proposes two new instruction tuning tasks that allow the model to better extract text information from documents. Experiments are reported on a varied set of datasets and tasks, showing the effectiveness of the proposed approach. Strengths: - The paper introduces multi-scale feature extraction for document understanding. This can help to analyze larger and higher-resolution documents without significantly increasing the cost. - Two new instruction tuning tasks are defined to help the model learn how to read the text inside the document. - Experimental results show that combining the proposed approach with existing MLLMs obtains better results than other existing methods. A detailed ablation study shows the contribution of all the components of the framework. Weaknesses: - From a technical point of view there is not much novelty. Multi-scale feature integration has largely been used in many different domains in a similar way. - Experimental results lack a better comparison with SoA models. The proposed method is compared with OCR-free models and similar approaches using MLLMs, but ignores other SoA methods (perhaps using an external OCR) that can obtain better results. For instance, in the leaderboard of DocVQA and InfographicVQA (rrc.cvc.uab.es) there are methods with better performance. For a fair analysis of the results, I think that these results should be included and discussed, comparing and contextualizing them with the results obtained by the proposed method. Also, for tasks that mainly work with natural images, such as TextVQA, it would be better to compare with specific methods for scene text VQA. Comparing with methods designed to work with document images like Donut may not be the best option.
Technical Quality: 3 Clarity: 2 Questions for Authors: See above in weaknesses Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: There is no specific discussion on the limitations of the method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are deeply grateful for the reviewer's insightful summary of our work, highlighting the key contributions of our multi-scale feature extraction approach, new instruction tuning tasks, and comprehensive experimental results. Below are our responses to the main concerns. **[W1] Incremental novelty (multi-scale features)** The proposed approach is the first practical solution for integrating multi-scale features in high-resolution multi-modal large language models (MLLMs), which was deemed impractical due to resource constraints. Our framework preserves the benefits of multi-scale features without substantial increases in computational costs. Specifically, we developed a hierarchical visual feature aggregation module that substantially reduces computational costs using cross-attentive pooling and feature pyramid hierarchy, making multi-scale integration computationally feasible for MLLMs. Unlike prior works relying on single-scale inputs, our method allows efficient multi-scale processing in MLLMs, significantly improving performance. **[W2] Justification on comparisons** We appreciate the reviewer's comments but respectfully and partly disagree that our comparisons are lacking, due to the following reasons: Our research focuses on OCR-free multi-task learning with MLLMs for recognizing document images (e.g., DocVQA) and understanding natural images (e.g., TextVQA), which is different from task-specific learning and OCR-based learning. Direct comparisons with task-specific models would not fully capture this key aspect of our work. For instance, ChartPaLI [R1], the state-of-the-art for ChartQA, uses instruction data specifically designed for chart data; it doesn't align with our general-purpose approach. On the other hand, the top performers on the referenced leaderboards, such as those for DocVQA and InfographicVQA, utilize significantly larger models, advanced language models with 20-55B parameters such as InternLM2 [R2] and PaLI-X [R3]. 
These are not directly comparable to our LLaMA-based mPLUG-Owl model due to significant differences in scale and computational costs. Moreover, the size and origin of the massive training data used for these large language models are often not fully disclosed, making direct comparisons potentially unfair. Hence, we compared our model only with OCR-free methods based on comparable MLLMs in terms of model size and dataset scale. We included Donut as a baseline, a well-established OCR-free approach, to demonstrate the effectiveness of our approach; Donut has an advantage in document-related tasks because it is pretrained with document-specific data, but our method is still competitive without such a specialized pretraining technique. [R1] Carbune et al., Chart-based Reasoning: Transferring Capabilities from LLMs to VLMs, arXiv 2024 [R2] Cai et al., InternLM2 Technical Report, arXiv 2024 [R3] Chen et al., PaLI-X: On Scaling up a Multilingual Vision and Language Model, arXiv 2023 --- Rebuttal 2: Comment: Dear Reviewer CDZQ, Because the end of the discussion period is approaching, we kindly ask whether our response has helped to clarify your concerns. Also, if you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and efforts in reviewing our paper. Best wishes, Authors. --- Rebuttal Comment 2.1: Comment: I thank the authors for their responses to my comments. I agree that the hierarchical feature aggregation module is a useful contribution for multi-scale feature aggregation. Concerning the comparison with the SoA, I agree that there are different types of methods that are not fully comparable in terms of parameters or training data, but I think that when comparing with the SoA all types of methods should be included, with a specific discussion of the advantages of the proposed method even if it does not get the best results.
The proposed method may not be the best performing in terms of accuracy, but it can have other positive aspects (OCR-free, number of parameters, data required to train) that can be remarked on in the discussion. Overall, I can raise my original rating a bit. --- Rebuttal 3: Comment: Thank you for your thoughtful feedback and for recognizing the contribution of our hierarchical feature aggregation module. We appreciate your suggestion regarding the comparison with state-of-the-art (SoTA) methods.

Table G: Comparison with task-specific SoTA methods.

| | DocVQA | InfoVQA | DeepForm | KLC | WTQ | TabFact | ChartQA | VisualMRC | TextVQA | TextCaps |
|------|:-------------------------:|:-------------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:-----------------------:|:----------------:|:----------------:|:------------------:|:--------------:|
| SoTA | 92.34 (InternVL-1.5-Plus) | 75.74 (InternVL-1.5-Plus) | 68.8 (mPLUG-DocOwl 1.5) | 38.7 (mPLUG-DocOwl 1.5) | 40.6 (mPLUG-DocOwl 1.5) | 80.4 (mPLUG-DocOwl 1.5) | 81.3 (ChartPaLI) | 364.2 (LayoutT5) | 82.22 (Omni-SMoLA) | 164.3 (PaLI-3) |
| Ours | 72.7 | 45.9 | 53.0 | 36.7 | 34.5 | 68.2 | 63.3 | 226.4 | 59.2 | 123.1 |

We list the performance of state-of-the-art models at the time of submission in Table G. The SoTA models can be grouped into three categories. **Large Foundation Models**: These models leverage extensive pretraining datasets and large model parameters. For DocVQA and InfographicVQA, InternVL-1.5-Plus [R7] ranks highest on the leaderboard. This model has 26B parameters and is pretrained on 26 dataset corpora before being fine-tuned on an additional 49 corpora. However, the exact amount of training data is not specified in their technical report. Additionally, QwenVL-Max is currently the SoTA for DocVQA, though technical details for this model are unavailable. For TextVQA and TextCaps, variants of PaLI [R8, R9] hold the top rank.
PaLI, which is trained on the 10B-WebLI dataset, is a significantly stronger foundation model than our backbone, mPLUG-Owl, which only uses a 1B dataset for pretraining. While PaLI-3 [R9] has 5B parameters, it still benefits from the 10B-WebLI dataset for pretraining. Moreover, direct comparison becomes more challenging when considering the differences in the unimodal encoders of each model. InternVL-1.5-Plus consists of InternViT-6B and InternLM2-20B, while PaLI uses ViT-G/14 and UL-2 for vision and language processing, respectively. In contrast, our backbone, mPLUG-Owl, utilizes ViT-L/14 and LLaMA, which are considerably weaker than InternVL-1.5 and PaLI variants. Once again, we have compared our model only with OCR-free methods based on comparable MLLMs in terms of model size and dataset scale. **Methods Using OCR Engines**: For VisualMRC, current OCR-free models lag behind LayoutT5 [R10], which utilizes external OCR engines. This is likely because VisualMRC data is text-rich and features long documents, indicating significant room for improvement. While LayoutT5 has a relatively small model size of 770M parameters compared to recent MLLMs, incorporating OCR engines in such tasks provides a substantial advantage, particularly in accurately processing and understanding extensive textual information. **Fine-Tuning Foundation Models with Document-Specific Data and Methods**: Our work falls into this category. For DeepForm, KLC, WTQ, and TabFact, the current SoTA model is mPLUG-DocOwl 1.5 [R5]. As mentioned in our response to reviewer rxzX, mPLUG-DocOwl 1.5 is based on mPLUG-Owl 2, a direct extension of our backbone, mPLUG-Owl. While the models are similar in size, mPLUG-Owl 2 is known to perform significantly better than mPLUG-Owl. Additionally, mPLUG-DocOwl 1.5 is fine-tuned on 4M document datasets, which is several times more than our 650K dataset. 
It is also worth noting that mPLUG-DocOwl 1.5 was uploaded to arXiv in March and can be considered concurrent with our work. For ChartQA, where ChartPaLI [R1] is the SoTA as mentioned in the initial rebuttal, instruction data specifically designed for chart data is used, and the model is based on PaLI, pretrained with the 10B-WebLI dataset. We hope this response clarifies your concerns. If you have any questions or additional comments, please do not hesitate to contact us. [R1] Carbune et al., Chart-based Reasoning: Transferring Capabilities from LLMs to VLMs, arXiv 2024 [R5] Hu et al., mPLUG-DocOwl 1.5: Unified Structure Learning for OCR-free Document Understanding, arXiv 2024 [R7] Chen et al., How Far Are We to GPT-4V? Closing the Gap to Commercial Multimodal Models with Open-Source Suites, arXiv 2024 [R8] Wu et al., Omni-SMoLA: Boosting Generalist Multimodal Models with Soft Mixture of Low-rank Experts, arXiv 2024 [R9] Chen et al., PaLI-3 Vision Language Models: Smaller, Faster, Stronger, arXiv 2023 [R10] Tanaka et al., VisualMRC: Machine Reading Comprehension on Document Images, AAAI 2021
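To make the "cross-attentive pooling" idea from the earlier [W1] response more concrete, a minimal sketch is given below; the shapes and the single-head formulation are our own simplifying assumptions, not the paper's exact module:

```python
import numpy as np

def cross_attentive_pool(tokens: np.ndarray, queries: np.ndarray) -> np.ndarray:
    """Pool N visual tokens down to M tokens with cross-attention.

    tokens:  (N, C) features from one level of the feature pyramid.
    queries: (M, C) learnable query vectors, with M << N, so the LLM
             receives far fewer visual input tokens.
    """
    C = tokens.shape[1]
    scores = queries @ tokens.T / np.sqrt(C)     # (M, N) attention logits
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over the N tokens
    return attn @ tokens                         # (M, C) pooled tokens

# e.g. 1024 high-resolution tokens pooled down to 64
pooled = cross_attentive_pool(np.random.randn(1024, 32), np.random.randn(64, 32))
print(pooled.shape)  # (64, 32)
```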
Summary: The paper presents a novel approach to OCR-free document understanding using pre-trained Multimodal Large Language Models (MLLMs). The approach uses multi-scale visual features to handle different font sizes within document images. To address the high computational cost associated with multi-scale visual inputs for pre-trained MLLMs, the authors propose a hierarchical visual feature aggregation module. This module reduces the number of input tokens to LLMs by employing a feature pyramid hierarchy with cross-attentive pooling, thereby balancing information loss and efficiency without being affected by varying document image sizes. In addition, the paper introduces an innovative instruction tuning task that incorporates text position information within images, improving model readability and robustness to text truncation. Extensive experiments demonstrate the effectiveness of the framework in achieving superior document understanding performance in various tasks. Strengths: - The paper provides a detailed description of model sizes, trainable parameters and pre-training data size. This information is critical for real-world application of these models where size and efficiency are critical factors. - The authors carry out a very detailed ablation study. For some aspects of the study, they ensured that the number of parameters remained constant, minimising bias and allowing a clearer understanding of the impact of each component. - The paper introduces a novel hierarchical visual feature aggregation module and a new instruction tuning task. Weaknesses: - Complexity vs. gain: The use of reconstruction loss increases complexity for what appears to be a small performance gain, as shown in Table 5 (lambda=0). - RTP Usage: While the Random Text Positional (RTP) approach is interesting for handling complex documents, it is only used during training. There is potential to optimise and use RTP during inference to further improve performance. See the questions. 
- Lack of open source availability: The models and code are not open source. Technical Quality: 3 Clarity: 4 Questions for Authors: - RTP optimisation: The RTP method reads a random percentage of text using a common reading order. Would it be more efficient to read specific, complete sections of the document based on its structure, such as reading entire articles in a newspaper? - Text position encoding: Can you provide more details on how text position is encoded in the Positional Text Positional (PTP) method? - Table 3 Formatting: The best result is not highlighted in bold for the 1st and 3rd columns of Table 3. Typos and errors: - Line 169: "stop-gradient" should be corrected to "stop-gradient". - Line 189: "Table 7" should probably be "Table 1". - Line 243: RTPP is introduced but never explained. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's acknowledgment of our introduction of novel components, thorough ablation studies, and detailed model information. Below are our responses to the main concerns. **[W1] Effectiveness of reconstruction loss**

Table A. Effectiveness of the reconstruction loss on both models. The bold-faced numbers indicate the best performance in each column for each backbone model.

| Backbone | Recon | DocVQA | InfoVQA | DeepForm | KLC | WTQ | TabFact | ChartQA | VisualMRC | TextVQA | TextCaps |
|:---------:|:-----:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:---------:|:----------:|:---------:|:----------:|
| BLIP2-OPT | | 50.46 | 28.48 | 13.18 | **23.82** | 20.06 | 59.83 | 49.48 | 226.65 | 56.04 | 134.06 |
| BLIP2-OPT | ✓ | **51.38** | **29.60** | **14.58** | 23.78 | **21.15** | **59.87** | **50.41** | **228.65** | **57.32** | **135.24** |
| mPLUG-Owl | | 71.33 | 45.21 | 52.42 | **36.88** | 34.14 | 68.02 | 62.90 | 224.34 | 58.73 | 122.32 |
| mPLUG-Owl | ✓ | **72.68** | **45.90** | **53.02** | 36.73 | **34.51** | **68.19** | **63.28** | **226.43** | **59.22** | **123.11** |

The reconstruction is performed only during training, so it incurs no additional cost at inference. Although the introduction of the reconstruction loss increases training complexity, the extra cost is marginal, particularly compared to other components in our training strategy. More importantly, as shown in Table A from our ablation study, the reconstruction loss consistently improves performance on most tasks with both the moderate and large backbone models, BLIP2-OPT and mPLUG-Owl, respectively. We believe these consistent gains demonstrate the effectiveness of our reconstruction loss, especially given that the inference cost remains unchanged. **[W2 & Q1] Using Relative Text-Position Prediction (RTPP)** This comment and the associated question are somewhat confusing, partly due to inconsistent terminology.
We would appreciate it if you could clarify the comment so that we can give a better answer. Doing our best under the current context, we presume that "Random Text Positional (RTP)" refers to Relative Text-Position Prediction (RTPP) in our approach. Using RTPP during inference is an interesting idea, but it may not be straightforward because it may require additional modules for generating proper instruction tasks or significantly increase inference costs; it can be a direction for an extension of our paper. **[W3] Lack of open-source availability** We understand the concern about open-source availability. Due to proprietary issues, we have to go through bureaucratic procedures to release the full source code. However, we will put every effort into scientific transparency and reproducibility. We have already provided detailed implementation details in Section D of the Appendix. We will further provide extra details and ensure that our work can be independently verified by the research community. **[Q2] More detail on the Predicting Text Position (PTP) task** The PTP task follows the same process as the Reading Partial Text (RPT) task for generating the position pair ($p_{start}$, $p_{end}$) and its corresponding text segment. The key difference lies in how this information is used: 1) RPT: The position pair is part of the instruction, and the model predicts the text. 2) PTP: This is the opposite of RPT: the text segment is given in the instruction, and the model predicts the position pair. In both cases, the position is represented by ($p_{start}$, $p_{end}$), indicating the start and end positions of the text segment within the document. We hope this clarifies the encoding process. We are happy to provide more details or examples in the revised manuscript if needed. **[Q3] Table 3 formatting issue** We highlighted the best results among MLLM-based instruction tuning methods. We will clarify this in the revised version of our paper.
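For illustration, the RPT/PTP sample construction described in [Q2] could be sketched as follows; the instruction wording, the helper name, and the use of relative offsets rounded to two decimals are our own assumptions, not the paper's exact implementation:

```python
import random

def make_rtpp_samples(text: str, rng: random.Random):
    """Build one RPT and one PTP training pair from a document's text (sketch)."""
    n = len(text)
    i, j = sorted(rng.sample(range(n + 1), 2))         # character offsets, i < j
    p_start, p_end = round(i / n, 2), round(j / n, 2)  # relative positions in [0, 1]
    segment = text[i:j]
    # RPT: the position pair is part of the instruction; the model reads the text.
    rpt = {"instruction": f"Read the text from {p_start} to {p_end}.",
           "target": segment}
    # PTP: the opposite; the text segment is given, the model predicts the pair.
    ptp = {"instruction": f"Where does '{segment}' appear in the document?",
           "target": f"({p_start}, {p_end})"}
    return rpt, ptp

rpt, ptp = make_rtpp_samples("multi-scale features help document reading",
                             random.Random(0))
```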
**[Q4] Typos & Errors** Thank you for pointing out these mistakes; we will revise our manuscript for better clarity. FYI, RTPP stands for Relative Text-Position Prediction, as specified in line 243, and consists of RPT (Reading Partial Text) and PTP (Predicting Text Position). --- Rebuttal Comment 1.1: Comment: Thank you for your reply. I'm convinced of the value of the method, but the impact is limited by the fact that the code and models are not published as open source. In today's hyper-competitive environment, this distribution is really essential to have an impact. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback and for recognizing the value of our method. We understand the importance of transparency and reproducibility in research. We have already provided detailed implementation information in our submission and will also include **pseudo-code for the core modules** to enhance transparency and reproducibility further. We are committed to making our work as accessible as possible. Also, if you have any questions or additional comments, please do not hesitate to contact us. --- Rebuttal 2: Comment: Dear Reviewer cWFY, Because the end of the discussion period is approaching, we kindly ask whether our response has helped to clarify your concerns. Also, if you have any questions or additional comments, please do not hesitate to contact us. We thank you for your time and efforts in reviewing our paper. Best wishes, Authors.
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
SF-V: Single Forward Video Generation Model
Accept (poster)
Summary: This paper proposes a method to accelerate video generation model inference by distilling the multi-step denoising of Stable Video Diffusion (SVD) into single-step generation using adversarial training. This approach achieves results comparable to multi-step SVD generation while significantly improving inference efficiency. Experimental results validate the effectiveness of the authors' method. Strengths: 1. The algorithm proposed by the authors significantly improves video generation inference speed without compromising the quality of the generated results. 2. The authors propose using two parallel heads, a spatial head and a temporal head, to implement the discriminator. This resolves the issue of a single head potentially leading to the generation of static videos. Weaknesses: 1. The authors trained the entire network using 1 million internal videos. For those without access to this dataset, reproducing this work could be challenging. Will the authors make these data and source code publicly available? 2. How much training data is needed to ensure that the distilled model has strong generalization capabilities? The training process for SVD likely requires far more than 1 million videos. Is using 1 million videos for distillation training sufficient? 3. The authors mentioned using the encoder of a UNet to initialize the discriminator. What is the insight behind this choice? Why did they consider this initialization method? 4. In Adversarial Diffusion Distillation, a distillation loss is used to transfer knowledge from the teacher model to the student model. Why did the authors not use this approach here? Is there any specific reason? Technical Quality: 3 Clarity: 3 Questions for Authors: see Weaknesses Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: It's unclear how well the method generalizes currently, as the authors only conducted quantitative analysis on the UCF101 dataset.
This uncertainty arises because the authors trained the model using only 1 million data points. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. The detailed responses regarding each concern are listed below. --- **Q1. Reproducing the results.** A1. We plan to make our code and checkpoint public to enable reproduction of our work. --- **Q2. Training data.** A2. Our dataset contains around 1M video clips of various lengths. As we mentioned in the paper (Lines 191-192), our model is trained for 50K iterations with a total batch size of 32. For each training sample, we randomly select 14 consecutive frames from a video temporally downsampled to 7 FPS. We agree our dataset is far smaller than the one used for SVD training. However, in this work, we fine-tune SVD for efficient generation instead of learning motion from scratch, which means our model requires less data. We humbly believe it is an advantage of our approach that we require much less data than SVD while being able to fine-tune the model to a single step. On the other hand, we do expect our model to benefit from more high-quality training data and longer training. To validate this assumption, we ran the following experiments. - First, we randomly selected 700K videos out of the 1M dataset and used them to train our model. Our method achieves an FVD of $275.7$ with 700K videos versus $180.9$ using 1M videos. We also provide qualitative results in Fig. C of the *attached one-page PDF*. - Second, we report the performance of the model under various numbers of training iterations. Due to the limit of computation, we report results for training the model within 50K iterations. The following table shows how the FVD score changes along the training process.

>| Iters (K) | 20 | 30 | 40 | 50 |
>| --- | --- | --- | --- | --- |
>| FVD | 542.2 | 479.5 | 217.1 | 180.9 |

--- **Q3. Discriminator initialization.** A3. We use the encoder of the UNet to initialize the discriminator backbone for its strong video understanding ability, and we fix the weights of the backbone during training.
Such a design further facilitates faster training and lower memory consumption. To further demonstrate its effectiveness, we trained the discriminator backbone together with the added heads *without* initializing it from the pre-trained UNet encoder. We observe a significant drop in performance, *i.e.,* the FVD on UCF-101 is 1490.2 (without initialization from the UNet) vs. 180.9 (with initialization from the UNet). We also provide qualitative results in Fig. C of the *attached one-page PDF*. --- **Q4. Distillation loss.** A4. There are two main reasons for not using the teacher-student distillation loss of ADD [26]. - First, to obtain the distillation target in ADD [26], the denoising process of the teacher model is required during training. Thus, the teacher model needs to run multiple inference steps. Such a training process can drastically increase the training time. - Second, the distillation loss in ADD [26] needs to be calculated in pixel space instead of latent space. Decoding the latents and calculating the gradients with the video VAE decoder would introduce significant computational overhead for video models, leading to out-of-memory issues for SVD. Thus, we use the reconstruction objective instead of the distillation loss in our approach to make model training feasible. --- **Q5. Generalization ability.** A5. As shown in the Supplementary Materials, we qualitatively evaluate our method on images of various styles and objects, demonstrating the strong generalization ability of our method for video generation. As for the quantitative evaluation metric, we chose FVD on UCF-101 as it is the most widely adopted benchmark for video generation; for example, it is adopted in SVD [14] and AnimateLCM [21]. We fully agree with the reviewer that more benchmark datasets would be helpful for video model evaluation, and it is an exciting direction for future work. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Q9Lc, Thank you for your valuable feedback on our submission.
We have provided additional explanations to address the key points you raised, including reproducing our results, training data, effectiveness of our discriminator initialization, reasons why we don’t use distillation loss, and the generalization ability of our method. As the deadline for the Author-Reviewer discussion is approaching, we would like to kindly ask if our responses sufficiently clarify your concerns or if there are any remaining issues you would like us to address. We appreciate your time and consideration. Best, Authors --- Rebuttal Comment 1.2: Comment: Thank you for the thorough response. The rebuttal has addressed most of my concerns. I am happy to keep my initial score. --- Reply to Comment 1.2.1: Comment: Dear Reviewer Q9Lc, Thank you for reading our rebuttal and providing a response! If there are any remaining issues you would like us to address, we are always willing to provide more clarification. Best, Authors
Summary: The paper tackles the task of distilling diffusion-based text-to-video models into single-step models, achieving much higher sampling speeds. To this end, they build a framework to fine-tune a pretrained video diffusion model. This fine-tuning is done in an adversarial setting in the latent space, whereby the generator and discriminator are designed to achieve higher image quality and video consistency. In particular, the generator is initialized from a pretrained video model, and the discriminator is partially initialized with the layers of a pretrained encoder, where additional spatial-temporal layers are added and trained. The model achieves SOTA SVD quality in the setting of 1-step video diffusion and an additional speedup compared to existing art. Strengths: The paper is very well written and easy to read. In particular, the introduction and related work well put the work in context and the contribution. The related work section is sufficiently extensive and provides the right context to identify the existing gap the paper is trying to solve. The results and visualizations are clear, well-explained, and detailed. The paper is also well validated, showing excellent results compared to the state-of-the-art in single-step video generation, achieving superiority in terms of quality as well as sampling time. By fine-tuning an existing video model, the authors can also achieve good training time and use any knowledge captured by the pretrained model. In that sense, the proposed approach makes sense and is well-suited for the task. Weaknesses: My main concerns are concerning the novelty and significance of the proposed approach in the following sense: 1. One component of the method is a re-design of the generator and discriminator. First, this in itself seems to be of limited novelty. The proposed components are standard and well-established in the community. 
Fine-tuning the discriminator by adding additional layers is not new, and similarly, for the generator, copying the pretrained encoder seems sensible but does not offer a new contribution. Second, one could take the existing single-step adversarial text-to-image approaches and replace the generator and discriminator with the architecture proposed in this paper. The authors identify such papers well (e.g. [24, 25, 26, 27, 28]) but do not compare to such a baseline. Hence, assessing whether the proposed architectural changes are meaningful is difficult. 2. The proposed training regime (adversarial training and reconstruction losses) is another component. On the flip side of 1, is this training regime important? In that sense, one could take existing single- or multi-step video diffusion baselines (e.g., [14, 21]) and change the training regime to that proposed in this paper. Further, assuming that this training regime improves in the video setting, one could assume that it is more general and applicable also in the single-step text-to-image regime. Further, the proposed components both seem to be marginally novel. Adversarial training for single-step distillation and the temporal components are also not new. Is there something non-trivial in the combination of both components? Technical Quality: 3 Clarity: 4 Questions for Authors: I believe the paper provides a great engineering effort, significantly improving sampling speed and quality compared to the state-of-the-art. However, I am concerned about the novelty of the proposed approach, as indicated in the weaknesses section. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Limitations are briefly addressed but not sufficiently analyzed. It would be great to see examples of errors introduced by the approach or some further analysis. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We provide detailed responses in the following. --- **Q1. About novelty.** A1. We agree with the reviewer that certain design components, *e.g.,* adversarial training and constructing the discriminator initialized from the diffusion model, have been explored in different manners in other tasks. However, we would also like to kindly emphasize that distilling a video diffusion model into a single step with performance similar to 25 steps *has not been studied* in the literature. In the following, we would like to kindly clarify that the novelty comes from the design of the whole training pipeline, which is tailored to solving a specific challenging task, rather than from any individual component like adversarial training. Our framework is inspired by LADD [27], which distills an image diffusion model into fewer steps. Here, we highlight the major differences compared with LADD. - _First_, we extend latent adversarial distillation from the image to the video domain by incorporating spatial and temporal heads to achieve one-step generation for video diffusion models. The training computation to distill a video model is much larger than that for distilling an image model. Therefore, we carefully design our discriminator, which is not used in the image domain, to enable stable training. Notably, our discriminator only uses certain parts of the UNet that we find important. Such a design is different from existing works that re-use the whole UNet. In this way, we can significantly reduce the training computation for adversarial training, and train the video model much faster while consuming much less GPU memory. - _Second_, based on the EDM framework, we observe that sampling $t^\prime$ using a discretized lognormal distribution provides more stable adversarial training compared to the logit-normal distribution used in LADD.
- _Third_, unlike LADD, we utilize real video data instead of synthetic data for training, removing the storage and computation burden of generating and saving the synthetic data. - *Fourth*, we employ a reconstruction objective (Eq. 9 in the main paper) along with the R1 gradient penalty (Eq. 11 in the main paper) to stabilize the training of the video diffusion model. If we only focus on how we design the discriminator (*e.g.,* re-using some weights from the generator and fixing them), these components indeed do not look fancy and may seem to lack novelty. However, we humbly think it is important to look at how such a design, when smoothly combined with other techniques (such as how to sample the noise), can solve an important task. We hope this explanation makes sense to the reviewer. --- **Q2. Comparison with the approaches built for distilling image diffusion models [24, 25, 26, 27, 28].** A2. Directly applying existing step-distillation methods for text-to-image models to video generation is not sufficient to achieve reasonable results. As suggested by the reviewer, we compare our approach with the image-based distillation methods. - First, DMD [25] and ADD [26] are too computationally costly to implement in video generation tasks. Both of these works require loss calculation in the pixel space instead of the latent space. Forwarding and backwarding through the VAE decoder during training introduces a huge GPU memory overhead and slows down the whole training process. - Second, since LADD [27] outperforms SDXL-Lightning [28] by a significant margin [27], we try our best to implement LADD [27]. We also implement UFOGen [24]. We replace the generator in LADD and UFOGen with SVD. Please kindly notice that simply re-using the discriminator from LADD and UFOGen leads to an *out-of-memory issue*, since the computation in the video model is much larger than in the image model. Here we replace the discriminator from LADD and UFOGen with the one proposed in our work.
We report the FVD score on UCF-101 in the following table. Additionally, we provide qualitative comparisons between our method and LADD and UFOGen in Fig. C of the *attached one-page PDF*.

| Method | UFOGen | LADD | Ours |
| --- | --- | --- | --- |
| FVD | 1917.2 | 1893.8 | 180.9 |

From these results, we can see that, for distilling video diffusion models, our approach performs much better than existing step-distillation methods built upon image-based diffusion models. --- **Q3. Importance of the proposed training regime.** A3. We are not sure whether we fully understand this concern, yet please allow us to try to answer it from three different perspectives. - First, as the experiments in A2 show, our adversarial training regime is very important, as it achieves better results than the ones proposed in LADD [27] and UFOGen [24], which are two methods for distilling image-based diffusion models. - Second, the reviewer suggests applying our approach to two works: SVD [14] and AnimateLCM [21]. Actually, our video model is fine-tuned from SVD [14]. In Tab. 1 of the main paper, we show that our single-step model achieves similar results to 25-step SVD, and better results than 8-step AnimateLCM [21]. - Third, directly applying our training pipeline to an image-based diffusion model is non-trivial. For example, it would require changes to our discriminator, the noise sampling process, and the adversarial losses. We humbly think such experiments are beyond the scope of this work. --- **Q4. Discussion about limitations.** A4. Thank you for the suggestion. We observe that when the given conditioning image indicates complex motion, _e.g._ running, our model tends to generate unsatisfactory results, _e.g._ blurry frames, as shown in Fig. D of the *attached one-page PDF*. Such artifacts are introduced by the original SVD model, as can be observed in Fig. D of the *attached one-page PDF*.
We believe a better text-to-video model can solve such issues, and we are interested in trying our step-distillation approach on such a model once an open-sourced one is available. --- Rebuttal Comment 1.1: Comment: Dear Reviewer FT7s, Thank you for your valuable feedback on our submission. We have provided additional explanations to address the key points you raised, including our novelty, further comparisons, the importance of our training regime, and the limitations of our method. As the deadline for the Author-Reviewer discussion is approaching, we would like to kindly ask if our responses sufficiently clarify your concerns or if there are any remaining issues you would like us to address. We appreciate your time and consideration. Best, Authors --- Rebuttal 2: Comment: Dear Reviewer FT7s, Thank you for reading our rebuttal and providing a response! We would like to apologize that our previous response in Q1 regarding the comparison between our approach and LADD did not contain a detailed explanation of the experiments that we have done. In the following, we connect our previous experiments with the differences between our work and LADD. First, in Tab. 2 of the main paper, we conduct ablation experiments with different discriminator head settings. We show that using spatial and temporal heads achieves better generation quality than only using a spatial head (which is the design from LADD). Additionally, our discriminator only uses certain parts of the UNet that we find important, while LADD does not. In this way, we can significantly reduce the training computation for adversarial training, and train the video model much faster while consuming much less GPU memory. By directly using the discriminator design from LADD, we encounter an out-of-GPU-memory issue on an 80G Nvidia A100 GPU. Second, based on the EDM framework, we observe that sampling $t^\prime$ using a discretized lognormal distribution provides more stable adversarial training. In Fig. 6 and Fig.
7 of the main paper, we show the importance of our proposed noise sampling schedule for achieving better performance. Third, we utilize real video data instead of synthetic data for training, removing the computation burden of generating and saving the synthetic data. For instance, generating 1M videos using SVD requires approximately 3K GPU hours. Fourth, we employ a reconstruction objective (Eq. 9 in the main paper) along with the R1 gradient penalty (Eq. 11 in the main paper) to stabilize the training of the video diffusion model. Without the R1 gradient penalty or the reconstruction objective, we observe frequent training divergence. Besides, as mentioned in our answer to Q2, we use the training regime proposed in LADD [27] combined with the discriminator proposed in our work and conduct quantitative and qualitative comparison experiments. Our method significantly outperforms the LADD [27] training regime in the video distillation task, demonstrating the effectiveness of our proposed training regime. Thanks again for the reviewer's feedback. We will add the above discussion to the revised paper and hope it is helpful for understanding the differences between our work and LADD. We would deeply appreciate it if the reviewer could reconsider the score, and we are always willing to address any further concerns. Best, Authors
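The loss structure referenced in these rebuttals — a hinge adversarial loss, an R1 gradient penalty on real inputs (Eq. 11), and a reconstruction term (Eq. 9) — can be sketched in a few lines. This is an illustrative numpy sketch under assumptions, not the paper's implementation: a toy linear discriminator is used so the R1 input gradient is analytic, and the shapes, weights, and `gamma` value are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear discriminator D(x) = w . x, so the input gradient for R1 is just w.
w = rng.normal(size=8)

def d(x):
    return x @ w

def hinge_d_loss(real, fake):
    # Hinge discriminator loss: push D(real) above 1 and D(fake) below -1.
    return np.mean(np.maximum(0.0, 1.0 - d(real))) + \
           np.mean(np.maximum(0.0, 1.0 + d(fake)))

def r1_penalty(real, gamma=1.0):
    # R1 penalty: (gamma / 2) * E[ ||grad_x D(x)||^2 ] on real samples.
    # For the linear toy D, grad_x D(x) = w for every sample.
    grad = np.tile(w, (real.shape[0], 1))
    return 0.5 * gamma * np.mean(np.sum(grad ** 2, axis=1))

def recon_loss(fake, real):
    # Reconstruction term keeping the one-step generator output near the target.
    return np.mean((fake - real) ** 2)

real = rng.normal(size=(4, 8))   # stand-ins for real latents
fake = rng.normal(size=(4, 8))   # stand-ins for one-step generator outputs

loss_d = hinge_d_loss(real, fake) + r1_penalty(real)
loss_g = -np.mean(d(fake)) + recon_loss(fake, real)
```

In an actual setup the R1 term would be computed by autodiff through the real discriminator; the linear toy only makes the three-part loss structure concrete.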
Summary: The authors propose a method to generate videos of similar quality to the original video diffusion model Stable Video Diffusion (SVD) in a single feedforward pass. To this end, they take the pre-trained SVD model and fine-tune it with a reconstruction and adversarial loss. The discriminator uses a frozen copy of the SVD encoder along with trained spatial & temporal readout heads to encourage both spatial and temporal consistency. Further, the authors add noise to the inputs of the discriminator to stabilize training. The results are of slightly better quality than 16 steps of SVD, and slightly worse than 25 steps of SVD. Strengths: - The paper is very clear and easy to read - The presented approach is simple - It works! Quality is somewhat comparable to several diffusion steps with the original model Weaknesses: - A large chunk of the methods section repeats how the original diffusion model is trained -- since this is already known information and not directly relevant to the presented method, it could be shortened or moved to the appendix - The proposed method is fairly simple and incremental, though the results are better than existing methods Technical Quality: 3 Clarity: 3 Questions for Authors: - L257: what's the relative compute / time that goes into the encoder / decoder compared to the actual latent model? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors briefly discuss the overhead of the VAE decoder & encoder used for image conditioning. It should be noted that training GANs can often come with the additional headache of instabilities. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. The detailed responses regarding each concern are listed below. --- **Q1. About improving the writing of the preliminary section.** A1. Thanks for the suggestion! We will revise the writing of the manuscript. --- **Q2. About the approach being simple.** A2. We agree with the reviewer that our approach, built upon adversarial training, is relatively straightforward and simple to re-implement. On the other hand, as highlighted by the reviewer, our approach actually works, which we believe further demonstrates the value of this paper - simple yet effective. In fact, our approach also outperforms other distillation-based methods like the latent consistency model. Additionally, this is the first paper showing that a single-forward video diffusion model can obtain similar quality to a 25-step model. Such results and findings could inspire later research and efforts to develop more advanced approaches that further improve the generation quality. --- **Q3. Inference time for image encoding and decoding.** A3. We calculate the sampling time for each component when generating $14$-frame videos at a resolution of $1024\times 576$. We use an A100 GPU as the testbed to measure the latency. - SVD uses both a CLIP image encoder and a VAE encoder to encode the first frame of the video as the conditioning input: - The runtime for CLIP image encoding is 0.03 s. - The runtime for VAE image encoding is 0.01 s. - The latency for forwarding the UNet once is 0.30 s. - The latency for VAE decoding the latent to get videos is 2.32 s. --- **Q4. Training instabilities of GANs.** A4. We agree with the reviewer that applying GANs during training is non-trivial. To avoid mode collapse and achieve stabilized training, we carefully design our training pipeline. For instance, we use the hinge loss for adversarial training. Additionally, we employ a reconstruction objective (Eq. 9 in the main paper) and the R1 gradient penalty (Eq.
11 in the main paper) during training to make the training process more stable. Moreover, based on the EDM framework, we observe that sampling $t^\prime$ using a discretized lognormal distribution provides more stable adversarial training. We have also re-trained our model multiple times using the same dataset and approach. The training is stable across different runs and we can reproduce very similar results. --- Rebuttal Comment 1.1: Comment: Dear Reviewer Do55, Thank you for your valuable feedback on our submission. We have provided additional explanations to address the key points you raised, including our approach, inference time, and training instabilities. As the deadline for the Author-Reviewer discussion is approaching, we would like to kindly ask if our responses sufficiently clarify your concerns or if there are any remaining issues you would like us to address. We appreciate your time and consideration. Best, Authors --- Reply to Comment 1.1.1: Comment: Dear Reviewer Do55, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing today, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best, Authors
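The discretized lognormal noise-level sampling mentioned in these rebuttals can be sketched as: draw a log-normal noise level and snap it to an EDM-style discretized sigma grid. This is an illustrative sketch; the grid parameters (`sigma_min`, `sigma_max`, `rho`) and the lognormal parameters (`p_mean`, `p_std`) follow the common EDM defaults and are assumptions, not necessarily the paper's exact values.

```python
import numpy as np

def karras_sigma_grid(n=50, sigma_min=0.002, sigma_max=80.0, rho=7.0):
    # EDM-style discretization of noise levels (decreasing from sigma_max to sigma_min).
    ramp = np.linspace(0.0, 1.0, n)
    inv = sigma_max ** (1 / rho) + ramp * (sigma_min ** (1 / rho) - sigma_max ** (1 / rho))
    return inv ** rho

def sample_discretized_lognormal(grid, size, p_mean=-1.2, p_std=1.2, rng=None):
    rng = rng or np.random.default_rng()
    # Draw log-normal noise levels, then snap each one to the nearest grid entry
    # (nearest in log-space), yielding a discretized lognormal distribution.
    sigma = np.exp(rng.normal(p_mean, p_std, size=size))
    idx = np.abs(np.log(grid)[None, :] - np.log(sigma)[:, None]).argmin(axis=1)
    return grid[idx]

grid = karras_sigma_grid()
sigmas = sample_discretized_lognormal(grid, size=1000, rng=np.random.default_rng(0))
```

The point of the discretization is that the adversarially trained student only ever sees the finite set of noise levels it will be evaluated at, which the rebuttal argues stabilizes training relative to a continuous logit-normal schedule.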
Summary: The paper proposes the idea of training a distillation approach using a GAN-based technique. The advantage suggested by the authors is that such a distillation approach can reduce the computational cost associated with sampling new samples at inference time. Instead of taking multiple sampling steps, only a single step is required for inference. Strengths: The idea is sound and well founded. Utilizing a GAN-based distillation approach reduces the computational overhead related to the sampling process in diffusion models. This is especially important for videos, since a minute-long video consists of almost 1.5k frames. Weaknesses: - The videos do not look like real-world videos. How does this work with real-world videos? - The motion is very laminar in the videos. How is the performance when trained on videos with non-laminar motion? Technical Quality: 3 Clarity: 3 Questions for Authors: - GANs suffer from the mode collapse problem; how was it ensured that this is not the case with this approach? - GANs suffer from the problem of instability in training; how was it handled? - Why weren't standard video datasets like UCF101 used in evaluation? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: GAN-based models suffer from training instability because of the presence of the adversarial loss. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. The detailed responses regarding each concern are listed below. --- **Q1. About generating real-world videos.** A1. Thanks for the suggestions. In this work, we fine-tune SVD, which is an image-to-video model, into a single sampling step. Our model can take an *arbitrary* given image as input and generate the corresponding video. In the main paper (Line 221-222 and Fig. 5), we show the results of using real images as input to generate real-world videos. The saved videos can be found in our Supplementary Materials. Additionally, we compare our generation results with other approaches. We also provide more examples of using real images as input to generate real-world videos. The examples are shown in Fig. A in the *attached one-page PDF*. --- **Q2. About some videos with laminar motion.** A2. Thank you for noticing the laminar motion in some videos. We would like to kindly mention that this paper aims to improve the sampling efficiency of the pre-trained SVD model, rather than the motion quality of SVD. In fact, we have shown in the main paper (Fig. 5) and the *Comparisons section* of the webpage in the Supplementary Material that the motion quality of our approach is *similar* to the motion of 25 steps of the original SVD model. Moreover, we use training videos with large motion to fine-tune SVD. Some examples of training videos are shown in Fig. B of the *attached one-page PDF*. Nevertheless, improving the motion of SVD without modifying its whole architecture (*e.g.,* the image-based encoder) is still challenging, and is beyond the scope of this paper. We further provide more examples in Fig. C of the *attached one-page PDF*, demonstrating that for the same conditioning image, running SVD with 25 steps can also generate videos with laminar motion. One reason SVD generates videos with laminar motion is that it can only synthesize short clips.
We believe that in the future, with more open-sourced video generation models synthesizing minutes-long videos, the motion can be significantly improved. By that time, we would be very interested in applying our approach to those models. --- **Q3. About the training challenges of using GANs, such as mode collapse and instability.** A3. We agree with the reviewer that applying GANs during training is non-trivial. To avoid mode collapse and achieve stabilized training, we carefully design our training pipeline. For instance, we use the hinge loss during adversarial training. Additionally, we employ a reconstruction objective (Eq. 9 in the main paper) and the R1 gradient penalty (Eq. 11 in the main paper) during training to make the training process more stable. Moreover, based on the EDM framework, we observe that sampling $t^\prime$ using a discretized lognormal distribution provides more stable adversarial training. We have also re-trained our model multiple times using the same dataset and approach. The training is stable across different runs and we can reproduce very similar results. --- **Q4. Standard video datasets like UCF-101 used in evaluation.** A4. We agree with the reviewer that UCF-101 is a standard dataset that should be used in the evaluation. In the paper, we mainly use UCF-101 to obtain the quantitative results. For example, in Line 211 of the main paper, we explain the details of how we use UCF-101. In Tab. 1, Tab. 2, and Tab. 3, we show the FVD calculated on UCF-101 to compare our approach with others and to ablate the design principles of our approach. --- Rebuttal Comment 1.1: Comment: Dear Reviewer R1mD, Thank you for your valuable feedback on our submission. We have provided additional explanations to address the key points you raised, including the generation of real-world videos, laminar motion, and issues related to instability during training and evaluation metrics.
As the deadline for the Author-Reviewer discussion is approaching, we would like to kindly ask if our responses sufficiently clarify your concerns or if there are any remaining issues you would like us to address. We appreciate your time and consideration. Best, Authors --- Reply to Comment 1.1.1: Comment: Dear Reviewer R1mD, We would like to thank you again for your valuable feedback on our paper. As the period for the Author-Reviewer discussion is closing very soon, we would like to use this opportunity to kindly ask if our responses sufficiently clarify your concerns. We sincerely appreciate your time and consideration. Best, Authors
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their thoughtful comments and positive feedback. We appreciate their recognition of the strengths of this paper, including: - **our studied problem** (reducing the computational overhead of video diffusion models) is important (Reviewer R1mD) and well validated (Reviewer FT7s); - **our approach and idea** are sound and well founded (R1mD), simple and working (Reviewer Do55), make sense and are well-suited for the studied task (Reviewer FT7s), and resolve the issue of generating static videos (Reviewer Q9Lc); - **our results** are excellent, clear, well-explained, and detailed (Reviewer FT7s), are comparable to the original model with several diffusion steps (Reviewer Do55), achieve SOTA SVD quality and sampling time in 1 step (Reviewer FT7s), and significantly improve video generation inference speed (Reviewer Q9Lc); - **our paper** is very clear, well-written, and easy to read (Reviewer Do55, FT7s) with sufficient related work and the right context (Reviewer FT7s). In the following, we provide detailed answers to the major concern of each reviewer. Additionally, we attach a **one-page PDF** with more visualization results. Pdf: /pdf/ad25a75a06866fa0f88deea8e29c61217deea559.pdf
NeurIPS_2024_submissions_huggingface
2024
Gradient Methods for Online DR-Submodular Maximization with Stochastic Long-Term Constraints
Accept (poster)
Summary: In this work, the authors address the problem of online DR-submodular maximization subject to long-term stochastic (linear) constraints. At each round $t\in[T]$, after committing an action $\mathbf{x}_t$, a random reward and an unbiased gradient estimate at that point are revealed. This paper focuses on the stochastic DR-submodular maximization setting, where the objective functions are i.i.d. sampled from a distribution. Moreover, this work also considers the semi-bandit feedback setting, where only the noisy function value and an unbiased gradient estimator at that point can be observed. The main contributions of this work can be summarized as follows: - A stochastic gradient ascent-based algorithm for stochastic online DR-submodular maximization with stochastic long-term constraints achieving $O(\sqrt{T})$ $\frac{1}{2}$-regret and $O(\sqrt{T})$ constraint violation with high probability in the semi-bandit feedback setting. - A stochastic gradient ascent-based algorithm for stochastic online DR-submodular maximization with stochastic long-term constraints achieving $O(\sqrt{T})$ $(1-\frac{1}{e})$-regret and $O(\sqrt{T})$ constraint violation with high probability in the first-order full-information feedback setting. It is worth noting that the query complexity is significantly lower compared to previous works in both settings. Strengths: The presentation was clear, and I enjoyed reading this paper. The results were clearly explained, and I especially liked the proof of Algorithm 1. Weaknesses: I have two concerns: - The paper lacks empirical evaluation, which would be valuable in highlighting a use case and demonstrating the effectiveness of the proposed approach. - The theoretical results, while commendable, are not particularly groundbreaking. Technical Quality: 2 Clarity: 3 Questions for Authors: I have no questions Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your time and your encouraging review. We address your concerns in the following: ### Weaknesses > The paper lacks empirical evaluation, which would be valuable in highlighting a use case and demonstrating the effectiveness of your proposed approach. We agree with you that empirical evaluation can be valuable to demonstrate the effectiveness of proposed approaches in real-world applications. However, we highlight a few points: 1. Our paper primarily emphasizes the theoretical contribution of our approach. 2. *Our Algorithm 1 is the first algorithm* for online stochastic DR-submodular optimization with long-term constraints under semi-bandit feedback. There would not be any baselines to compare it to. All of the algorithms from previous works listed in Table 1 are for simpler settings (full-information feedback, $0 \in \mathcal{K}$, some requiring exact gradients). We could potentially empirically evaluate Algorithm 1 against those in simpler settings (full-information feedback, $0 \in \mathcal{K}$, exact gradients) but we do not think it would be fair to do so. 3. Similarly, our Algorithm 2 is the first algorithm for online stochastic DR-submodular optimization with long-term constraints with full-information feedback (when $0 \in \mathcal{K}$ is queryable). There would not be any baselines to compare to (aside from our Algorithm 1, designed for the harder setting of semi-bandit feedback). We could potentially apply Algorithm 2 in a simpler setting ($0 \in \mathcal{K}$, exact gradients) to evaluate against other algorithms listed in Table 1, though we think it would not be entirely fair to do so. 4. Existing methods for DR-submodular maximization with long-term constraints typically exhibit high query complexity, often requiring $\sqrt{T}$ queries per round. In contrast, our method achieves a query complexity of just 1 per round. > The theoretical results, while commendable, are not particularly groundbreaking.
We highlight some aspects we believe are significant in Q1 of the general response above. ### Questions > I have no questions --- Rebuttal Comment 1.1: Comment: Thank you for the response. Regarding the empirical evaluation baselines, the choice largely depends on the specific experiments. In most cases, you can compare your results to random baselines and existing methods (I suggest plotting the function value and the query complexity). I've decided to maintain my score since the theoretical contribution of this work is nice but not groundbreaking. --- Reply to Comment 1.1.1: Comment: Thank you again for your time and consideration. > In most cases, you can compare your results to random baselines and existing methods We would like to re-confirm that we agree experiments can be valuable, but would like to clarify that _there are no existing methods to compare to_ for the problem setting we considered. Our algorithms are the first. At best we could apply our methods to simpler problems (full information, $0\in\mathcal{K}$), though other methods designed for that special setting could have an advantage. We would also note that random baselines would incur linear regret and anticipate that such a comparison may not be illuminating.
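The kind of guarantee discussed in this exchange — one gradient query per round while controlling cumulative constraint violation — is classically obtained by primal-dual stochastic gradient ascent: ascend on a Lagrangian in the primal variable, and track the long-term constraint with a dual multiplier. The toy sketch below is a generic illustration of that scheme under assumptions (the objective, step size, noise levels, and dual regularization are made up), not the paper's Algorithm 1.

```python
import numpy as np

rng = np.random.default_rng(0)
dim, T, eta = 5, 2000, 0.05
c_true, b = np.full(dim, 1.0), 2.0   # long-term constraint: E[c . x] <= b

def grad_f(x):
    # Gradient of F(x) = 1 - prod(1 - x_i), a toy monotone DR-submodular objective:
    # dF/dx_i = prod_{j != i} (1 - x_j).
    return np.prod(1.0 - x) / np.maximum(1.0 - x, 1e-8)

x, lam = np.zeros(dim), 0.0
violations = []
for t in range(T):
    g_hat = grad_f(x) + 0.1 * rng.normal(size=dim)   # unbiased stochastic gradient
    c_hat = c_true + 0.1 * rng.normal(size=dim)      # stochastic constraint sample
    viol = c_hat @ x - b
    violations.append(viol)
    # Primal: one projected gradient-ascent step on the Lagrangian F(x) - lam * (c.x - b).
    x = np.clip(x + eta * (g_hat - lam * c_hat), 0.0, 1.0)
    # Dual: ascent on the violation, with a small regularization keeping lam bounded.
    lam = max(0.0, lam + eta * (viol - 0.5 * eta * lam))

avg_violation = np.mean(violations)
```

One gradient query and one constraint sample per round suffice; the dual variable grows while the constraint is violated and shrinks otherwise, so the time-averaged violation stays small.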
Summary: The paper studied the problem of online monotone DR-submodular maximization subject to long-term stochastic constraints. In detail, it explored the stochastic DR-submodular maximization setting, where the objective functions are i.i.d. sampled from a distribution. Previous works considered adversarial objective functions at each step. Additionally, it also considered the semi-bandit feedback setting, where only the noisy function value $f_t(\mathbf x_t)$ and an unbiased gradient estimator at that point $\tilde{\nabla} f_t\left(\mathbf{x}_t\right)$ can be observed. They proposed the first stochastic gradient ascent-based algorithms in both the semi-bandit feedback setting and the first-order full-information setting. Strengths: - The paper introduced two interesting settings: stochastic utility and semi-bandit feedback, which are more general than those previously studied in the literature. - They proposed the first stochastic gradient ascent-based algorithms under these novel settings. Their algorithms require only 1 gradient query per round, while all previous works require $\sqrt{T}$. This makes their algorithms significantly more query-efficient compared to prior approaches. These algorithms were inspired by primal-dual updates in online convex optimization, and it was interesting to see this connection. - The paper is well-written, with clear and easy-to-follow presentation and writing Weaknesses: - The authors did not motivate the stochastic DR-submodular setting well. It is not explained why previous works used the adversarial setting and what the motivation is for relaxing this scenario. - It isn't clear what the significance is of not requiring that 0 be in the constraint region. Technical Quality: 3 Clarity: 3 Questions for Authors: - What is the significance of handling search spaces that do not necessarily include 0 between your algorithm and other algorithms? Could you provide more details?
- Why do most prior works assume the adversarial choice of functions? Are there any example applications that would fall into the stochastic setting considered in this paper? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
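As a concrete instance of the continuous DR-submodular objectives discussed in this review, the coverage-style function $F(x) = 1 - \prod_i (1 - x_i)$ on $[0,1]^d$ is a standard monotone DR-submodular example (chosen here for illustration; it is an assumption, not taken from the paper). For differentiable functions, continuous DR-submodularity is equivalent to the gradient being antitone: $x \le y$ coordinatewise implies $\nabla F(x) \ge \nabla F(y)$ coordinatewise, which can be checked numerically:

```python
import numpy as np

def grad_f(x):
    # F(x) = 1 - prod(1 - x_i): dF/dx_i = prod_{j != i} (1 - x_j).
    return np.prod(1.0 - x) / np.maximum(1.0 - x, 1e-8)

rng = np.random.default_rng(1)
ok = True
for _ in range(100):
    x = rng.uniform(0.0, 0.9, size=4)
    y = np.clip(x + rng.uniform(0.0, 0.1, size=4), 0.0, 0.95)  # y >= x coordinatewise
    # Antitone-gradient (diminishing returns) check: grad F(x) >= grad F(y).
    ok &= np.all(grad_f(x) >= grad_f(y) - 1e-9)
```

Here each partial derivative $\prod_{j\neq i}(1-x_j)$ shrinks as any coordinate grows, which is exactly the continuous diminishing-returns property these algorithms exploit.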
Rebuttal 1: Rebuttal: Thank you very much for your review. Here we address the questions in "Weaknesses" and "Questions" as follows. ### Weaknesses > - The authors did not motivate the stochastic DR-submodular setting well. It is not explained why previous works used adversarial setting and what the motivation is for relaxing this scenario to stochastic setting. First, for the objectives, we emphasize that in DR-submodular maximization, the stochastic and adversarial settings are distinctly separate scenarios, as we've detailed in lines 61-63 of the paper: "Note that this setting is still of interest because as opposed of assuming each arriving function $f_t$ being DR-submodular, we only assume the expectation of $f_t$ to possess DR-submodularity". Working with stochastic objectives holds equal significance to working with adversarial ones, given that many real-world applications are commonly framed within a stochastic framework, such as influence maximization and facility location problems (described in the general response), among others. Second, when the constraints are selected adversarially, for either stochastic or adversarial objectives, there is a known hardness result: [Mannor et al., 2009] provided a simple counterexample showing that achieving sub-linear regret against the best fixed benchmark action in hindsight while maintaining sub-linear total constraint violation is not always possible. Subsequently, works addressing adversarial constraints alter the regret definition to consider only benchmarks satisfying the constraints proportionally over some window of size $W\in [T]$, e.g., [Sadeghi et al., 2020]. We have included a discussion of this point in Appendix G. It is not clear whether our results in this paper could be extended to that setting. > - It isn't clear what the significance is of not requiring that 0 be in the constraint region. See the response to the first question below.
### Questions > - What is the significance of handling search spaces that do not necessarily include 0 between your algorithm and other algorithms? Could you provide more details? In the offline DR-submodular maximization literature (without long-term constraints), there appears to be a significant hardness gap between $0\in \mathcal{K}$ and otherwise when only feasible points can be queried (e.g., with semi-bandit and bandit feedback). With $\mathcal{K}$ being a general convex set, the best-known approximation ratio is 1/2 [Hassani et al., 2017]. The tightness of the upper bound is not known, but it is conjectured to be 1/2 [Pedramfar et al., 2023]. With the assumption $0\in \mathcal{K}$, the best-known result is $1-1/e$ [Mokhtari et al., 2020]. In online scenarios, we note that even when $0\in \mathcal{K}$, there is no clear path to getting $O(\sqrt{T})$ $(1-1/e)$-regret in the semi-bandit setting, even without considering the long-term constraint. When the origin is included, the best known $(1-1/e)$-regret in the semi-bandit setting without long-term constraints is $O(T^{3/4})$ in [Pedramfar et al., 2023]. As indicated by Lemma 2, the $1-1/e$ approximation needs additional information (querying the function value at $z \cdot x$). One possibility is that when $0\in \mathcal{K}$, we could utilize an explore-then-commit (ETC) style in the semi-bandit setting, where we alternately sample $z_t \cdot x_t$ (giving estimates for $\nabla F$) and $x_t$ (giving the reward signal), to achieve a $1-1/e$ approximation ratio but a regret worse than $O(\sqrt{T})$. Table 1 in [Pedramfar et al., 2023] gives a great summary of regret results for online DR-submodular optimization. In the final version of the paper, we will also include application examples that showcase the importance of general convex regions $\mathcal{K}$ for which the origin is not feasible. Please refer to Q2 in the general response above for those examples. > - Why do most prior works assume the adversarial choice of functions?
Are there any example applications that would fall into the stochastic setting considered in this paper? Please see the response to the first Weakness above. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you for the response and for answering my questions. As I am not an expert in the area, I will defer to the other reviewers regarding the theoretical / technical contributions and their novelty.
Summary: This paper investigates the online learning of DR-submodular functions with a stochastic budget (linear) constraint. The constraints vary across rounds. In each round, the constraint is sampled independently from an unknown distribution, and the constraint in round $t$ can only be observed after the learner has submitted her action $x_t$. The goal is to minimize the regret and total constraint violation. This paper achieves $O(\sqrt{T})$ regret and $O(\sqrt{T})$ total constraint violation in both the semi-bandit setting and the first-order feedback setting. Here, the semi-bandit setting means the learner can only obtain the gradient at the played action. The first-order feedback setting means the learner can query a gradient at any point each round. For the semi-bandit setting, the authors achieve $O(\sqrt{T})$ 0.5-regret and $O(\sqrt{T})$ total constraint violation. For the first-order feedback setting, the authors achieve $O(\sqrt{T})$ (1-1/e)-regret and $O(\sqrt{T})$ constraint violation. These results have the same bounds as the existing works, but the algorithms in this paper only need to query 1 gradient each round, while the existing algorithms need $\sqrt{T}$ gradient queries each round. Reducing the number of gradient queries is the main contribution of this paper. Strengths: 1. The problem studied is well motivated and has many potential applications. 2. The proposed algorithm is computationally efficient compared with the existing ones, and the authors provide a rigorous theoretical guarantee. Weaknesses: 1. The main weakness is the technical contribution. The method used in this paper is a simple combination of non-oblivious PGA [37] and regularized Lagrangian function techniques [26]. I do not find any new analysis or algorithmic challenge. 2. The authors' references to previous works do not seem appropriate, especially the citation of the paper which proposed the non-oblivious PGA. 
For example, in lines 135-137, Lemma 2 is just a simple corollary from [37] (just setting the curvature to 1 in that paper). However, the authors use vague statements, that is, "The proof can be derived from [37] and is presented in Lemma 2 of [39]". First, I think the authors should directly cite [37] after Lemma 2 rather than use another statement to show the reference, since this lemma was proposed in [37]. Second, [39] does not seem to contribute to this lemma (that paper just writes down the lemma with the submodular curvature set to 1), so the authors should not mention [39] here. Also, in lines 286-289, Lemma 7 is also a direct corollary of Proposition 1 of [37] (by setting curvature γ=1). However, the authors did not cite [37] directly after Lemma 7. The authors just write "[37] present a computational approach for…" and did not mention that Lemma 7 is also from [37]. 3. The 1/2 approximation ratio is not satisfactory, even in the semi-bandit setting. It's known that the 1-1/e approximation ratio can be achieved in the full-bandit setting, which is a harder setting than the semi-bandit setting. As the authors say, these bandit algorithms with 1-1/e approximation ratio require the assumption that 0∈K. But I don't think it is necessary to remove this assumption. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. There is a new paper that extends the non-oblivious PGA technique to the non-monotone function setting: https://arxiv.org/abs/2401.08330. Can we use this result to generalize your results to the non-monotone setting? 2. Why do the authors assume that the utility functions are sampled i.i.d. from a distribution? We know that online gradient ascent works in the adversarial environment. Can we still derive the current results if the utility functions are adversarially selected? This question also applies to the constraint: what if the constraint is adversarially selected? 3. See weakness 1, what is the new challenge in your algorithm design and analysis? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your careful reading and helpful suggestions. Next, we will address some of your concerns in "Weakness" and "Questions". ### Weaknesses > 1. The main weakness is the technical contribution. The method used in this paper is a simple combination between non-oblivious PGA and regularized Lagrangian function techniques. I do not find the new analysis or algorithmic challenge. We address technical contributions in Q1 of the general response above. > 2. The author's references to previous works do not seem to be appropriate $\dots$ Thank you for your detailed suggestions. We agree with you and will cite [37] for Lemmas 2 and 7 to clarify [Zhang et al.]'s contributions. > 3. The 1/2 approximation ratio is not satisfactory, even in the semi-bandit setting. It's known that the 1-1/e approximation ratio can be achieved in the full-bandit setting, which is a harder setting than the semi-bandit setting. As the authors say, these bandit algorithms with 1-1/e approximation ratio require the assumption that $0\in \mathcal{K}$. But I don't think it is necessary to remove this assumption. First, in the final version of the paper we will include application examples that showcase the importance of general convex regions $\mathcal{K}$ for which the origin is not feasible. Please refer to Q2 in the general response above for examples. Second, we note that when $0\in \mathcal{K}$, there is no clear path to get $O(\sqrt{T})$ $(1-1/e)$-regret with semi-bandit feedback *even without considering long-term constraints*. When the origin is included, the best known $(1-1/e)$-regret bound for semi-bandit feedback without long-term constraint is $O(T^{3/4})$ in Pedramfar et al., 2023 [24]. As indicated by the non-oblivious function in Lemma 2, the $1-1/e$ approximation needs information at two locations (e.g. a function value at $x$ and a gradient value at $z*x$). 
One possibility is that when $0\in \mathcal{K}$ and we have semi-bandit feedback, we believe we could utilize an explore-then-commit approach to bound $(1-1/e)$-regret, but the horizon $T$ dependence would be worse than $O(\sqrt{T})$. For reference, Table 1 in [Pedramfar et al., 2024] is a good summary of regret bounds for online DR-submodular optimization under different types of feedback (semi and full bandit), objective functions, and feasible regions. ### Questions > 1. There is a new paper extending the non-oblivious PGA technique to the non-monotone function setting: https://arxiv.org/abs/2401.08330. Can we use this result to generalize your results to the non-monotone setting? That is a great question. We think it might be possible but were not able to carefully verify this in the short rebuttal time. We will leave it as a future direction. > 2. Why do the authors assume that the utility functions are sampled i.i.d. from a distribution? We know that online gradient ascent works in the adversarial environment. Can we still derive the current results if the utility functions are adversarially selected? This question also applies to the constraint: what if the constraint is adversarially selected? First, we note that the analysis we perform does not extend to adversarial objectives. Specifically, the operations in equation (38), which use the tower rule of expectations to establish an upper bound on the difference of Lagrangian functions, rely on the stochastic nature of the objective. On the other hand, in adversarial objective scenarios, we anticipate that the methodologies applied in Lemma 7.5 from [Hassani et al., 2017] could be used to go through those steps, but we will leave it as future work. 
Second, we emphasize that in DR-submodular maximization, the stochastic and adversarial settings are distinctly separate scenarios, as we've detailed in lines 61-63 of the paper: "Note that this setting is still of interest because as opposed of assuming each arriving function $f_t$ being DR-submodular, we only assume the expectation of $f_t$ to possess DR-submodularity". Working with stochastic objectives holds equal significance to working with adversarial ones, given that many real-world applications are commonly framed within a stochastic framework, such as maximum coverage and budget allocation problems (described in the general response), among others. When the constraints are selected adversarially, for either stochastic or adversarial objectives, there is a known hardness result: [Mannor et al., 2009] provided a simple counterexample showing that achieving sub-linear regret against the best fixed benchmark action in hindsight while maintaining sub-linear total constraint violation is not always possible. Subsequently, works addressing adversarial constraints alter the regret definition to consider only benchmarks satisfying the constraint proportionally over some window of size $W\in [T]$, e.g., [Sadeghi et al., 2020]. We have included a discussion of this point in Appendix G. It is not clear if our results in this paper could be extended to that setting. > 3. See weakness 1, what is the new challenge in your algorithm design and analysis? We address technical contributions in Q1 of the general response above. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I generally agree with your clarifications, especially regarding the motivation for removing the $0\in\mathcal{K}$ assumption and the consideration of a stochastic constraint. However, I respectfully disagree with your rebuttal concerning the "technical contribution, 2." 
While the $1/2$-approximation algorithm is indeed simple and efficient, I believe that its design and analysis do not introduce new ideas or require significant effort. Researchers might not have documented this algorithm because the $1/2$ approximation ratio is not particularly compelling. But I agree that Lemma 6 provides some new ideas. So I will raise my score to 5, and I recommend that you highlight the distinctions in your analysis compared to previous works in your next revision. --- Reply to Comment 1.1.1: Comment: Thank you again for your time and feedback. In the next revision we will more clearly highlight distinctions with prior works. > Researchers might not have documented this algorithm because the approximation ratio is not particularly compelling. We would like to clarify that in our rebuttal, we meant our algorithms that handle long-term constraints (using a single query per round and yielding $O(\sqrt{T})$ constraint violation). We do not claim novelty for algorithm design for the simpler setting without long-term constraints. The 1/2 ratio for monotone DR-submodular maximization (no long-term constraints) using a projected gradient method was shown by Hassani et al. [10]. (It appears it was not until Pedramfar et al. [24] that it was pointed out that all reported (1-1/e) guarantees were based on $0\in \mathcal{K}$ (or the origin at least being queryable) while the 1/2 guarantee was not, leading to a conjecture that 1/2 may be the best achievable when $0\not \in \mathcal{K}$ for semi-bandit or bandit feedback.)
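To make the role of the auxiliary query point $z \cdot x$ concrete, below is a minimal one-sample sketch of a boosted (non-oblivious) gradient estimator in the spirit of [37]; the weighting density $p(z) \propto e^{z-1}$ on $[0,1]$ is our reading of the boosting construction, and `grad_F` is a placeholder objective, not code from the paper.

```python
import numpy as np

def boosted_grad(grad_F, x, rng):
    """One-sample estimate of the boosted gradient
    int_0^1 e^(z-1) * grad_F(z * x) dz, obtained by importance-sampling
    z from the density p(z) = e^(z-1) / (1 - 1/e) on [0, 1]."""
    # Inverse-CDF sampling: CDF(z) = (e^(z-1) - e^(-1)) / (1 - e^(-1)).
    u = rng.random()
    z = 1.0 + np.log(u * (1.0 - np.exp(-1.0)) + np.exp(-1.0))
    # The factor (1 - 1/e) is the normalizing constant of p(z).
    return (1.0 - 1.0 / np.e) * grad_F(z * x)

rng = np.random.default_rng(0)
# Sanity check on a linear objective: grad_F is constant, so every sample
# equals (1 - 1/e) * grad_F exactly, matching the integral in closed form.
grad_F = lambda v: np.ones_like(v)
est = boosted_grad(grad_F, np.array([0.5, 0.5]), rng)
```

Note that each round only queries the gradient oracle once, at the single random point $z \cdot x$, which is the property being discussed.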
Rebuttal 1: Rebuttal: We thank all the reviewers for carefully reading the paper and their constructive suggestions. In the following we address some common questions. > Regarding technical contribution. Some reviewers have concerns about limited technical contribution. Here we emphasize two technical novelties: 1. Lemma 6, which upper bounds the relaxed constraint violation ($\tilde{g}_t$) using dual variables ($\lambda_t$), is completely new and is a critical result to derive the $\mathcal{O}(\sqrt{T})$ constraint violation. Previous works did not have an analogous bound. Without this bound, we would only be able to obtain a worse $\mathcal{O}(T^{3/4})$ constraint violation. We obtained Lemma 6 by carefully digging into the updates of the $\lambda_t$'s and discovering some relations that we could leverage. We note that this lemma could be of independent interest to the online (non-)convex optimization community. For other papers dealing with the simpler setting of OCO that achieve better than $\mathcal{O}(T^{3/4})$ constraint violation, some special conditions (e.g., strong duality [Akhtar et al., 2021] or a strongly convex surrogate function [Yu et al., 2020]) are needed. However, when the objective is non-convex and such conditions are difficult to establish, our approach could potentially be used to obtain $\mathcal{O}(\sqrt{T})$ constraint violation. The DR-submodular objective is one such setting. 2. Our algorithm design is simple, yet it remarkably outperforms existing benchmarks in both regret and constraint violation. We believe achieving superior guarantees with a simple algorithm is a *strength* of our work, not a weakness. Furthermore, this simplicity also translates into computational efficiency, as evidenced by the substantial reduction in the number of gradient oracle calls required by our algorithm (1 oracle query per round for our methods versus $O(\sqrt{T})$ for previous methods). 
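A minimal sketch of the one-query-per-round primal-dual template discussed above (a single projected gradient step on the Lagrangian, plus a regularized dual update) may help illustrate the idea; the objective, constraint, step sizes, and the interval feasible set excluding the origin are all illustrative placeholders, not the paper's exact algorithm:

```python
import numpy as np

def primal_dual_round(x, lam, grad_f, g_val, grad_g, eta, eta_d, delta, proj):
    """One round: a single gradient query drives a projected ascent step on
    the Lagrangian f(x) - lam * g(x); the dual variable lam then accumulates
    the observed violation g(x), damped by the regularization term delta*lam."""
    x_new = proj(x + eta * (grad_f(x) - lam * grad_g(x)))
    lam_new = max(0.0, lam + eta_d * (g_val(x_new) - delta * lam))
    return x_new, lam_new

# Toy run: maximize the monotone DR-submodular f(x) = 1 - e^{-x} subject to
# the long-term budget constraint g(x) = x - 0.5 <= 0, over K = [0.1, 2.0]
# (a feasible set excluding the origin, echoing the 0 not in K discussion).
proj = lambda v: float(np.clip(v, 0.1, 2.0))
x, lam = 1.5, 0.0
for _ in range(2000):
    x, lam = primal_dual_round(
        x, lam,
        grad_f=lambda v: np.exp(-v),
        g_val=lambda v: v - 0.5,
        grad_g=lambda v: 1.0,
        eta=0.05, eta_d=0.05, delta=0.01, proj=proj)
# x settles near the constrained optimum 0.5, with lam near e^{-0.5}.
```

The single `grad_f` call per round is the point of the sketch; everything else (the dual damping `delta`, the step sizes) stands in for the analysis machinery in the paper.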
We will emphasize these arguments in the final version of the paper. > Application examples of $0 \notin \mathcal{K}$. We will discuss the following motivating application examples in the final paper to highlight the importance of general convex regions $\mathcal{K}$ for which the origin is not feasible. For such problems, the best-known approximation ratio is 1/2 [Hassani et al., 2017]. 1. **Maximum Coverage [Krause et al., 2014]** Imagine a city's emergency management agency aiming to optimize the deployment of emergency response teams (firefighters, medical personnel, rescue teams) over many rounds during crises like major fires or earthquakes. The objective is to maximize coverage across different city zones. Allocating 0 teams to any zone isn't feasible, as it means no emergency response. The goal is to allocate resources in each round to maximize overall response coverage, while satisfying the long-term agency budget. 2. **Budget Allocation [Soma et al., 2014]** This is a special case of the influence maximization problem. Let a bipartite graph $G = (S, T; W)$ represent a social network, where $S$ and $T$ are collections of advertising channels and customers, respectively. The edge weight represents the influence probability of channel $s$ on customer $t$. The goal is to distribute the per-round budget among the source nodes, and to maximize the expected influence on the potential customers over multiple rounds, while satisfying a long-term constraint (e.g., money spent). Corporate management may require a minimum level of customer engagement with each campaign overall or from select target groups. There may also be per-round minimum contractual purchase requirements with the advertising partners. Thus, allocating 0 budget in any round may not be permitted. 3. 
**Facility location [Krause et al., 2014]** Consider a scenario where a company needs to decide on the (virtual) locations to set up new service centers to maximize service coverage over multiple rounds, while satisfying a total budget constraint over all rounds. At each round, each customer segment must receive at least a certain level of service or coverage, which means 0 is not a feasible solution because deploying no facilities provides no service. All the problems above were initially studied in the discrete domain and extended to the continuous domain in [Bian et al., 2019]. Furthermore, when faced with a discrete objective, one can always use the "relax and round" strategy to transition from addressing a discrete problem to tackling a continuous one. Such techniques are frequently utilized within the submodular maximization community, as exemplified by the work of [Chen et al., 2018]. #### Additional References Bian et al. Continuous Submodular Function Maximization. 2020. Krause and Golovin. Submodular function maximization. In Tractability: Practical Approaches to Hard Problems. Cambridge University Press, 2014. Soma et al. Optimal budget allocation: Theoretical guarantee and efficient algorithm. ICML 2014. Chen et al. Projection-Free Online Optimization with Stochastic Gradient: From Convexity to Submodularity. ICML 2018.
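As a concrete instance of the budget allocation example above (item 2), the standard continuous relaxation $F(x) = \sum_t w_t (1 - \prod_s (1-p_{st})^{x_s})$ is monotone DR-submodular in the nonnegative budget vector; the sketch below evaluates it on made-up influence probabilities (the function name and numbers are ours, not from the cited works):

```python
import numpy as np

def budget_allocation_value(x, P, w=None):
    """Expected influence F(x) = sum_t w_t * (1 - prod_s (1 - P[s,t])**x[s]),
    the continuous budget-allocation objective, which is monotone
    DR-submodular in the budget vector x >= 0. P[s, t] is the probability
    that advertising channel s influences customer t."""
    w = np.ones(P.shape[1]) if w is None else w
    # (1 - p_st)^{x_s}: probability customer t is NOT reached via channel s.
    not_reached = np.prod((1.0 - P) ** x[:, None], axis=0)
    return float(np.sum(w * (1.0 - not_reached)))

# Two channels, three customers, with made-up influence probabilities.
P = np.array([[0.3, 0.1, 0.5],
              [0.2, 0.4, 0.1]])
gain_first = budget_allocation_value(np.array([1.0, 0.0]), P)  # first unit
gain_second = (budget_allocation_value(np.array([2.0, 0.0]), P)
               - gain_first)                                   # second unit
# Diminishing returns: the second budget unit on channel 0 gains less
# than the first (0.55 vs 0.9 here), the DR property in action.
```

The per-round lower bounds on spending discussed above would then appear as box constraints $x_s \ge \ell_s > 0$, which is exactly how the origin becomes infeasible.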
NeurIPS_2024_submissions_huggingface
2024
Mutual Information Estimation via $f$-Divergence and Data Derangements
Accept (poster)
Summary: This paper introduces a new method for mutual information estimation using a discriminative training approach based on f-divergences. Notably, the authors address a well-known limitation of this class of estimators, which exhibit exponential sample complexity in the strong-dependence limit. The authors also demonstrate a simple but surprising observation that the method for drawing marginal samples can have a huge impact on estimates, and show how to correctly handle this with "data derangements". Strengths: - The f-divergence development elegantly avoids partition function estimation. - While the problem with negative samples is intuitive, it was surprising to see how strong the negative effects could be. - Between the main text and appendix, a good selection of results studying a wide range of settings. Results look relatively robust. - It was nice to show a high-dimensional result with images. Weaknesses: - The theoretical analysis only considers the case when the optimum discriminator is exactly known. It would be nice to have some understanding of the effect of mis-estimation. Unlike other approaches, we don't get a lower bound on MI, only an estimate. - Results section could be improved for clarity. I didn't follow the architecture very well, maybe it can be illustrated? (Appendix D helped - maybe it's just a space issue. A schematic in the appendix would still be helpful though.) - The final conclusions in the Variance Analysis could be stated more clearly. Technical Quality: 4 Clarity: 3 Questions for Authors: Am I correct that one drawback of this procedure is that it doesn't result in an MI lower bound? (Because the objective that is optimized is not an MI bound; it just guarantees that at the optimum you get a solution that gives the true MI.) If so, is it easy to understand why other discriminative approaches do give a lower bound and this one doesn't? Why did the derangement sampling strategy affect time so much? 
Does it change the convergence speed, or am I missing something else? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: I think these could be made more explicit. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the positive feedback. We particularly appreciate that the Reviewer caught the key points of our contribution and highlighted the novelty and strengths of our paper. Regarding the questions, please find below our point-by-point response. **Weaknesses**: * We derive in Theorem 3.1 the expression of the proposed class of estimators under the assumptions of optimal convergence. However, as the Reviewer noticed, in practice, due to sub-optimality, it is insightful to know the distance of the MI estimate from the true MI value. For the sake of completeness, we report in the proof of Lemma 3.2 the effect of this discrepancy and show that the convergence of the proposed MI estimator depends also on the choice of the generator function $f$. In particular, from eq. (50) it can be observed that the discrepancy between the true instantaneous MI and the estimated one is also controlled by $f$. Please refer to our answer to Reviewer 3r9p for the detailed formula, as no space is left here. Therefore, as we stated in the paper (see lines 77, 114, and 246), different divergences have different performance, and this is mostly due to numerical aspects. Overall, we found that GAN-DIME represents the best trade-off. * The architecture description in the main text is brief due to space limitations and because the joint and separable architectures are already known in the domain of MI estimation. We will include a schematic in the appendix to improve the clarity (see the one-page pdf). Thanks. * In line 148 we write “The great advantage of $f$-DIME is to avoid the partition function estimation step, significantly reducing the variance of the estimator.”, which is the biggest result related to the variance (reinforced in line 152 “Hence, the $f$-DIME class of models has lower variance than..”). 
Lemma 4.1 returns a theoretical upper bound on the variance of $f$-DIME, while Lemma 4.2 states that the variance of $f$-DIME is finite when $X$ and $Y$ are correlated Gaussian random variables. However, we will improve the clarity by separating the paragraph starting with “Furthermore, we provide in Appendix C two supplementary results.”, to emphasize that these two lemmas are additional results w.r.t. the main one previously stated. Thanks. **Questions**: * The Reviewer is right about the fact that the $f$-DIME estimator does not constitute a lower bound on the true MI. This is due to two main reasons that make $f$-DIME different from the other VLB MI estimators. 1) The general value function $\mathcal{J}_f$ in eq. (5), utilized during training, is the dual representation of the more general $f$-divergence, of which the KL-divergence is only one special case. Notice that the value $\mathcal{J}_f$ is a lower bound on the $f$-divergence. 2) We exploit the maximizer of $\mathcal{J}_f(T)$ (i.e., $\hat{T}$) to build the MI estimator, at inference time. This is a key component that allows us to get rid of the partition function for MI estimation, and it comes at the expense of not having a lower-bound estimator. However, we do not consider this a significant drawback since, in all the numerical experiments provided, the estimates are rather robust and in many cases lower than the overshot estimates obtained with MINE and NWJ, which are theoretical lower bounds. Moreover, and perhaps quite remarkably, $f$-DIME can be adjusted to be a lower bound by adding the partition term (in the SMILE, MINE or NWJ fashion) at inference time. From the code offered by the authors of SMILE in [17] it is clear that SMILE is obtained with such a procedure. 
We did not discuss how to transform $f$-DIME into a lower-bound estimator, as our intention in pursuing low-variance estimates was to get rid of the partition function, but one way to do such an adaptation is to use the extracted density ratio inside the expressions of NWJ or MINE, as in the following: $I_{fDIME\text{-}NWJ}(X;Y) = E_{p_{XY}}\biggl[ \log \biggl(\bigl(f^{*}\bigr)^{\prime}\bigl(\hat{T}(\mathbf{x},\mathbf{y})\bigr) \biggr) \biggr] - E_{p_{X}p_{Y}}\biggl[\bigl(f^{\ast}\bigr)^{\prime}\bigl(\hat{T}(\mathbf{x},\mathbf{y})\bigr) \biggr]+1$, where $I_{fDIME\text{-}NWJ}(X;Y)$ is the $f$-DIME estimator obtained using any $f$-divergence dual representation of eq. (5) but with the partition term of the NWJ estimator. We will add this comment in a short dedicated Section D.1.5. * The derangement procedure does not visibly improve the convergence speed; rather, it significantly reduces the number of samples used during each training iteration. In fact, the expectation in the second term of $\mathcal{J}_f$ in eq. (5) only requires $N$ samples obtained with derangements rather than $N(N-1)$ as in the joint architecture (see line 219), thus the deranged architecture has computational complexity $\Omega(N)$. In contrast, the complexity of the joint architecture is $\Omega(N^2)$, given that it needs $N(N-1)$ samples every iteration, as we explained in Section 6.1. This time difference is clearly represented in Fig. 8 in the Appendix. Additionally, in the case of i.i.d. samples, the derangement can be implemented with a shift-based approach (see line 299). This renders the deranged architecture extremely fast (see Fig. 5). **Limitations**: We will include a note in the Appendix Sec. D.1.4 about the fact that the nature of $f$-DIME renders it neither a lower bound nor an upper bound. We will also add a comment on the interesting trade-off between low-variance estimates and a true MI lower bound in a new section of the Appendix, Sec. D.1.5. 
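For illustration, the shift-based derangement mentioned above can be sketched in a few lines for i.i.d. samples; this is a generic sketch of a cyclic-shift derangement, not the authors' exact implementation:

```python
import numpy as np

def shift_derangement(y, rng):
    """Derange a batch of N samples with a random cyclic shift: position i
    receives y[(i - k) % N] for a uniform k in {1, ..., N-1}. A nonzero
    cyclic shift has no fixed points, so no y_i stays paired with its own
    x_i, which a uniformly random permutation cannot guarantee."""
    k = int(rng.integers(1, len(y)))  # k = 0 is excluded by construction
    return np.roll(y, k, axis=0)

rng = np.random.default_rng(0)
y = np.arange(8)             # stand-in for a batch of N = 8 y-samples
y_der = shift_derangement(y, rng)
```

The cost is a single `np.roll`, i.e. $O(N)$ per batch, which matches the complexity argument above for the deranged architecture.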
**Rating**: We hope that our responses clarified the Reviewer’s doubts. Given the positive initial feedback, we hope our answers and changes make the contribution of the paper even clearer and stronger. If so, please consider increasing the score of the paper, thanks. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for the detailed feedback, and clarification on variance results. I'll raise my score based on that, and partially to balance out the review of dXEo which seems off base. --- Reply to Comment 1.1.1: Title: We thank the Reviewer Comment: We sincerely thank the Reviewer for the positive comment and we are glad to read that the Reviewer increased the score to 7. --- Rebuttal 2: Title: Follow-up question on stability / benchmarks Comment: I just realized that you didn't cite or include the benchmarks from Czyz et al., "Beyond Normal: On the Evaluation of Mutual Information Estimators", NeurIPS 2023. This seems like an omission, though it doesn't directly probe the things you were focusing on. The reason I realized this is the following. I sent the title of your paper to my students and directed them to see if there is an arXiv version (without telling me the authors of course!). They found it and ran your method on our internal benchmarks, including Czyz et al. The results were very unstable, and we got NaNs on most of these results. Do you have any thoughts on stability of your results / this benchmark? Also, it seems that some of the experiments (self-consistency) were not yet implemented in the code, though I understand cleaning up code often happens closer to final publication. (I realize this is quite late in the cycle, and would appreciate even brief high level thoughts / speculation on these.) --- Rebuttal Comment 2.1: Title: Answer on benchmarks and stability Comment: Dear Reviewer, We believe there might be a misunderstanding here. In our paper we reported many tests with the benchmarks from Czyz et al. 
(we did cite that paper; please check ref. [24]). Figs. 3 and 4 especially describe the excellent results attained by $f$-DIME in those benchmarks. The reason why the Reviewer’s students did not find them in the arXiv version is that the arXiv version is outdated. Regarding the experiments for the Czyz et al. benchmarks, the results we obtained were stable. We noticed that the version of the libraries used for the tests may have an impact. We will specify on GitHub the exact versions of the libraries and code we used to obtain the results in Figs. 3,4. Moreover, we will include a docker image in the repository to facilitate reproduction. The reason the code for the self-consistency tests is not yet provided is partly as the Reviewer wrote, and partly because the results of the self-consistency tests are reported in the Appendix, and it is not required to provide code for results in the Appendix. However, we will also include this part of the code on GitHub. Finally, we agree about instability for the self-consistency tests for HD-DIME and some NaNs encountered for KL-DIME (as can be observed from Figs. 17,18,19), while GAN-DIME returned stable results. Thanks.
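As a quick consistency check of the estimator under ideal conditions (assuming the discriminator recovers the true density ratio $\hat{R} = p_{XY}/(p_X p_Y)$, which is an idealization), the $f$-DIME-style sample mean $\frac{1}{N}\sum_i \log \hat{R}(x_i, y_i)$ over joint samples can be compared against the closed-form Gaussian MI $-\frac{1}{2}\log(1-\rho^2)$:

```python
import numpy as np

rng = np.random.default_rng(0)
rho, n = 0.8, 200_000

# Joint samples of standard Gaussians with correlation rho.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)

def log_ratio(x, y, rho):
    """log p(x, y) - log p(x) - log p(y) for standard bivariate Gaussians."""
    return (-0.5 * np.log(1.0 - rho**2)
            - (rho**2 * (x**2 + y**2) - 2.0 * rho * x * y)
            / (2.0 * (1.0 - rho**2)))

mi_est = float(np.mean(log_ratio(x, y, rho)))  # DIME-style sample mean
mi_true = -0.5 * np.log(1.0 - rho**2)          # closed-form MI in nats
```

With a learned $\hat{T}$, the plug-in ratio $(f^{*})^{\prime}(\hat{T})$ replaces the oracle `log_ratio` above; the point here is only that the sample-mean form of the estimator is consistent when the ratio is right.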
Summary: The authors provide a new representation of mutual information (in terms of a general $f$-divergence) which has the advantage that the corresponding estimator has a low variance, in stark contrast to the MINE estimator, whose variance is exponential in the size of the mutual information. The new estimator is tested on multi-dimensional Gaussian distributions. Strengths: The goal to provide a low-variance estimator for mutual information estimation is a timely and very important goal with many potential applications. Weaknesses: The reviewer is puzzled by the statement of Theorem 3.1 on which the rest of the paper relies. The issue is the existence of a permutation $\sigma$ such that $p(\sigma(y)|x) = p(y)$. Do the authors mean to assume the existence of such a permutation? The reviewer may be missing a point, but I cannot see why such a permutation should exist, i.e., one that makes the permuted random variable $Y$ independent of $X$. I don't see why this should be true for general data, and if it holds true this seems like a very strong assumption because it implies a form of independence between $X$ and $Y$. If this point, which is central to the paper, is not clarified, the paper cannot be considered for publication in the present form. Technical Quality: 2 Clarity: 3 Questions for Authors: Clarify the issue of the existence of the permutation in Theorem 3.1. Prove its existence, or provide examples to which it applies. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The limitations have not been addressed properly, especially in regard to the assumptions of Theorem 3.1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our paper. We understand the Reviewer’s main concern is centered around the permutation law $\sigma(Y)$ which renders $Y$ independent of $X$. While it is true that such a function may not exist in practice and we assume its existence in the theorem statement, our contribution is twofold: to study the effect that a random permutation strategy has on MI estimators, and to propose a novel derangement strategy which resolves an important flaw of random permutations, which have most often been used in the literature. Please also notice that we test the proposed family of estimators not only on multi-dimensional Gaussian distributions, but also on both complex Gaussian transformations (half-cube, asinh, and swiss roll mappings, see Fig. 3) and on non-Gaussian distributions (uniform and student distributions, Fig. 4), following the procedure illustrated in [24]. Please find below our detailed answer. **Weaknesses**: First of all, a permutation strategy is a basic and quite common technique adopted in most of the neural MI estimators. Its role is to provide a practical way to simulate sampling from the product of marginals $p(x)\cdot p(y)$, and we define $\sigma(\cdot)$ as the permutation function. For instance, in [a], the authors study the maximization of mutual information (for capacity estimation) and use the same notation. Notice that in [a] the authors use a random permutation operation as the permutation function, further demonstrating the importance of our findings on the derangement procedure. In fact, the real issue is that currently many papers (accepted at top ML conferences), like [b] and [c], use a random permutation as a practical implementation of such a permutation procedure, which we show in this paper leads to upper-bounded estimators. Moreover, in the first neural MI estimator (MINE) the authors also proposed the same random permutation approach [16]. 
Theorem 3.1 assumes its existence and provides the closed-form expression of the estimator in ideal conditions. However, as the Reviewer also noticed, such a permutation function may not exist in practice, and researchers typically adopt a simple random permutation (sometimes also called a shuffling mechanism) strategy to break the joint relationships and thus sample from the marginals. We thus study in Sec. 5 the practical implications of such a choice, and we find out that the naive random permutation strategy actually creates problems, since it renders the MI estimators biased and upper bounded. This is a very important finding as it highlights a severe limitation in existing samplers. To overcome such a limitation, we then propose to use a derangement strategy which, by definition, guarantees that no sample will end up in the same position when shuffled. For completeness of explanation, we provide here an example of what we mean by such a definition. Let’s suppose for simplicity that $N=3$. Then, a random permutation of $\mathbf{y} = [y_1, y_2, y_3]$ is $[y_2, y_3, y_1]$, but it can also be $[y_1, y_3, y_2]$. In the latter case, it is evident that $y_1$ remains in the same initial position. Fixed points appear only with permutations. Instead, a random derangement of $\mathbf{y} = [y_1, y_2, y_3]$ ensures that no element of $\mathbf{y}$ ends up in its initial position, differently from a permutation. This observation is crucial for both Lemma 5.1 and Theorem 5.2. Finally and very importantly, the derangement sampling strategy is generic and can be applied to any MI estimator, as illustrated in Fig. 15 in the Appendix. **References**: [a] Li, Z., She, R., Fan, P., Peng, C., & Letaief, K. B. (2023). Learning channel capacity with neural mutual information estimator based on message importance measure. IEEE Transactions on Communications. [b] Hu, Z., Kang, S., Zeng, Q., Huang, K. &amp; Yang, Y.. (2024). 
InfoNet: Neural Estimation of Mutual Information without Test-Time Optimization. Proceedings of the 41st International Conference on Machine Learning, in Proceedings of Machine Learning Research 235:19283-19303. [c] Rhodes, B., Xu, K., & Gutmann, M. U. (2020). Telescoping density-ratio estimation. Advances in Neural Information Processing Systems, 33, 4905-4916. **Rating**: We hope that our responses clarified the Reviewer’s doubts expressed in the weaknesses section. We tried to better explain what our contribution is for what concerns the permutation function. If the Reviewer believes the contribution is now clear and evident, please consider increasing the score of the paper, thanks. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed answers. I will raise the score to better take into account the contributions of the paper on data derangement strategies, which are novel and very interesting. However, I am still not convinced by the theoretical part of the paper, and it is not correct to state that the authors have a low variance estimator of the MI. It may well be that the estimator proposed is a good substitute, but the case is theoretically not made. --- Reply to Comment 1.1.1: Comment: We’d like to thank the reviewer for reading our reply and recognizing the novel contribution on data derangement. In the second part of the Reviewer’s comment, we believe there still is a misunderstanding about our claim that the proposed MI estimator has low variance and that “the case is theoretically not made”. We politely disagree: 1. We explain that $f$-DIME has lower variance w.r.t. the variational lower bound (VLB) estimators (see lines 134, 152, 242 and Eq. (13)). 2. This statement is theoretically proved in Section 4. Herein, we begin by summarizing what was found in [17]: the estimation of the partition function is responsible for the high variance condition in VLB MI estimators.
Then, we explain that $f$-DIME avoids the partition function estimation, thus decreasing the estimator’s variance. This is shown by Eqs. (12-14). 3. Let us try to explain the findings in different terms to ease understanding. VLB estimators comprise two terms: the first term is the expectation over $p_{XY}$, the second term is the expectation over $p_Xp_Y$. This indeed reflects the notation used, where the variance of VLB estimators is indicated with $Var_{p_{XY}, p_Xp_Y}$. From (11), it is possible to notice that the variance of the VLB estimators is the sum of the variances of two expectations. The variance of the second expectation ($E_{p_Xp_Y}$) is always greater than or equal to $0$, which leads to (12). In particular, for high values of true MI, the variance of $f$-DIME ($Var_{p_{XY}}[E_{p_{XY}^M}[\log\hat{R}]]$) is significantly lower than the variance of the VLB estimators, because the second expectation term has exponentially increasing variance with the true MI (demonstrated in [17]). 4. Additionally, we reported two Lemmas in the Appendix that provide: 1) an upper bound on the variance, and 2) a characterization of the variance when $X$ and $Y$ are correlated Gaussian random variables. 5. Finally and for completeness, we briefly discuss the bias/variance comparison between $f$-DIME and the other MI estimators analyzed in the paper, to justify why $f$-DIME attains an excellent bias/variance trade-off. SMILE decreases the variance of VLB estimators by clipping the partition term. Therefore, SMILE also attains variance comparable to $f$-DIME, but it is biased. CPC is a low-variance estimator; however, it is more biased than all the other estimators considered, since it is upper bounded by $\log(N)$. NJEE has low variance because it uses $2d-1$ neural networks to obtain the prediction. For this reason, NJEE is computationally extremely expensive (please see line 290 and subsequent ones). Furthermore, NJEE is biased for various choices of $d$.
In conclusion, $f$-DIME attains lower variance w.r.t. the VLB estimators, and in general exhibits an excellent bias/variance trade-off. 6. The experimental results provide further evidence of the findings. We hope that this provides more clarity and offers the basis to raise the score further. Thank you again.
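For concreteness, a cyclic shift by $k \in \{1, \dots, N-1\}$ positions is one simple way to realize a derangement in practice. The sketch below is an illustration we add for clarity, not necessarily the implementation used in the paper: every index moves under the shift, so no joint pair $(x_i, y_i)$ survives the shuffle.

```python
import numpy as np

def derangement_indices(n, rng):
    """Sample shuffling indices with no fixed points via a random cyclic shift.

    Any shift k in 1..n-1 maps position i to (i + k) % n != i, so every
    y-sample is guaranteed to be re-paired with a different x-sample,
    unlike a plain random permutation, which can leave fixed points.
    """
    k = int(rng.integers(1, n))
    return (np.arange(n) + k) % n

rng = np.random.default_rng(0)
idx = derangement_indices(3, rng)
assert np.all(idx != np.arange(3))        # no fixed points
assert sorted(idx.tolist()) == [0, 1, 2]  # still a valid permutation
```

By contrast, a uniformly random permutation of $N$ elements contains at least one fixed point with probability approaching $1 - 1/e \approx 0.63$, which is exactly the failure mode discussed above.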
Summary: This paper proposes a novel discriminative mutual information estimator via the form of f-divergence, which exhibits a bias/variance trade-off. Experimental results show that the proposed estimator is comparable to existing neural estimators. Strengths: 1. The proposed MI estimator avoids directly computing the density ratio and enjoys a bias/variance trade-off. 2. The authors investigate the impact of data derangement and propose a novel training strategy based on data derangement. 3. Theoretical analysis characterizing the properties of the proposed estimator is provided. Weaknesses: - Can the authors theoretically show how close the mutual information estimated by using the value function ($J_f(T)$ in Theorem 3.1) is to the true mutual information? - The function T(·) in Theorem 3.1 is missing its definition. - Eq. (10) seems to have a problem, first row: $e^{I(X;Y)} \rightarrow e^{I_{NWJ}^{M,N}(X;Y)}$, second row: $e^{I(X;Y)} \rightarrow e^{I_{MINE}^{M,N}(X;Y)}$. - Figure 1 shows that the proposed estimator still has a large variance like the existing methods. High variance does not seem to be well addressed. - The experiments utilize low-dimensional datasets, which raises concerns about the comprehensive evaluation of the proposed estimator's efficacy in high-dimensional contexts. The authors' investigation does not adequately address the potential implications of high-dimensional data on the estimator's performance. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to Weaknesses Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our paper and for the valuable feedback. The overall opinion and strengths highlighted by the Reviewer are indeed a precise summary of our contribution. We hope that our replies below will help the Reviewer increase his/her good opinion about the paper. Thank you. **Weaknesses**: 1. The Reviewer’s question is very relevant, as in practical scenarios, there is always a numerical bias coming from sub-optimality. Lemma 3.2 also studies this effect of not reaching optimality in the MI estimator and quantifies the discrepancy of the instantaneous MI in terms of convergence to the optimal density ratio (or log-density ratio), see (50). In particular, we show that the bias in the instantaneous MI estimate also depends on the generator function of the $f$-divergence as: $i(X;Y) - \hat{i}_{n,fDIME}(X;Y) \simeq \delta^{(n)} \cdot \Biggl[\frac{\mathrm{d}}{\mathrm{d}T} \log \bigl( \bigl(f^{*}\bigr)^{\prime}(T)\bigr) \biggr \rvert _{T=\hat{T}} \Biggr]$, where $\delta^{(n)}=\hat{T}-T^{(n)}$ is the displacement between the optimal discriminator $\hat{T}$ and the obtained one $T^{(n)}$ at iteration $n$, while $\hat{i}_{n,fDIME}(X;Y)$ is the instantaneous MI estimated using $\mathcal{J}_f$ in Theorem 3.1. 2. Since Theorem 3.1 is general (not strictly related to neural networks), we will write $T: \mathcal{X}\times \mathcal{Y} \to \mathbb{R}$. Thanks for noticing the lack of domain definition. Nevertheless, in line 102 we specified that “We propose to parametrize $T(x, y)$ with a deep neural network $T_{\theta}$ of parameters $\theta$ and solve with gradient ascent and back-propagation”. 3. We disagree with the Reviewer’s comment. Eq. (10) is correct as it is written in the paper. As a matter of fact, $I(X;Y)$ represents the true MI, while $I_{NWJ}^{M,N}(X;Y)$ and $I_{MINE}^{M,N}(X;Y)$ are the MI estimators. Eq. (10) is correct because it relates the variance of the estimators with the true MI.
The Reviewer's misunderstanding may be caused by a lack of clarity in the notation. In the paper, we specified that “the variance scales exponentially with the ground-truth MI” and, to further improve the clarity, we will substitute "where" in line 142 with “where $I(X;Y)$ is the true MI and”. 4. Fig. 1 does not show any comparison with existing MI estimators, thus we suppose the Reviewer referred to Fig. 2. In our paper (see line 152) we state that $f$-DIME has lower variance w.r.t. the variational lower bound (VLB) MI estimators (e.g. MINE, NWJ). This can be immediately noticed from Figs. 3-4 but also from Fig. 6 and Fig. 15 in the Appendix and from Table 1. It is true that the variances of $f$-DIME, SMILE, NJEE, and CPC are comparable. However, CPC is upper bounded (thus strongly biased), SMILE exhibits overall worse performance than GAN-DIME, and NJEE is biased for various values of $d$ and computationally demanding. 5. We understand the Reviewer’s comment may be caused by the lack of high-dimensional results in the main document, where we decided to compare our estimator using common benchmarks and scenarios ([24]). However, as other Reviewers have noticed (and we added a sentence on page 7, line 250), we did provide several experiments with higher-dimensional data in the Appendix. In fact, in addition to the plots with dimension $d=20$ in Figs. 9-10, we reported two scenarios of high-dimensional datasets. Fig. 16 shows the challenging case $d=100$, while the consistency tests section also shows the case of MI estimation on image datasets. Lastly, while the estimation of MI for high-dimensional data remains challenging, it is important to remark that we did not claim to have solved such a challenge, but rather to have provided new tools, such as a new family of estimators and a new data sampling strategy, which can empower researchers in their quest.
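To make the variance argument concrete, the toy check below (our illustration, assuming correlated Gaussians so that the optimal critic, i.e. the exact log density ratio, is available in closed form) contrasts the term $f$-DIME averages, $\log \hat{R}$ under the joint distribution, with the partition-term samples $\hat{R}$ under the product of marginals, whose spread grows dramatically with the true MI:

```python
import numpy as np

rng = np.random.default_rng(1)

def sample(n, d, rho, rng):
    """n i.i.d. pairs of d-dimensional correlated standard Gaussians."""
    x = rng.standard_normal((n, d))
    y = rho * x + np.sqrt(1.0 - rho * rho) * rng.standard_normal((n, d))
    return x, y

def log_ratio(x, y, rho):
    """Exact log density ratio log p(x,y) - log p(x)p(y), summed over dims
    (i.e., the optimal critic in closed form)."""
    r2 = rho * rho
    lr = -0.5 * np.log(1.0 - r2) + (rho * x * y - 0.5 * r2 * (x * x + y * y)) / (1.0 - r2)
    return lr.sum(axis=1)

n, d = 20000, 5
for rho in (0.3, 0.9):
    true_mi = -0.5 * d * np.log(1.0 - rho * rho)
    xj, yj = sample(n, d, rho, rng)             # joint samples
    xm, _ = sample(n, d, rho, rng)              # independent x
    _, ym = sample(n, d, rho, rng)              # independent y
    dime_term = log_ratio(xj, yj, rho)          # E_{p_XY}[log r]: mean -> MI
    part_term = np.exp(log_ratio(xm, ym, rho))  # E_{p_X p_Y}[r]: mean -> 1
    print(f"rho={rho}: MI={true_mi:.2f}, E[log r]={dime_term.mean():.2f}, "
          f"std of partition-term samples={part_term.std():.2e}")
```

The joint-expectation term concentrates near the true MI in both regimes, while the partition-term samples become extremely heavy-tailed at high MI, which is the mechanism behind Eq. (10) discussed above.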
**Rating**: We hope that our answers help the Reviewer to clarify some details and improve the understanding of the value of the numerical results. If the Reviewer believes we succeed in doing so, we encourage him/her to increase the rating. Thank you. --- Rebuttal Comment 1.1: Title: Approaching the end Comment: Dear Reviewer 3r9p, As the rebuttal period is approaching its end, we would really appreciate your feedback. The other 3 reviewers expressed their satisfaction with our responses, and we would like to know if our answers also clarified your doubts. Thanks again for your valuable time.
Summary: This paper investigates the long-standing task of estimating mutual information in high-dimensional data. The authors point out that mutual information estimation methods using variational lower bounds suffer from either high bias or high variance, and propose an alternative solution leveraging the variational representation of the f-divergence. They further claim that the proposed solution, f-divergence mutual information estimators (f-DIME), exhibits an excellent bias/variance trade-off, higher accuracy, and lower complexity. Strengths: 1. The proposed framework is applicable to any f-divergence, generalizing it beyond the commonly-used KL-divergence (Theorem 3.1). 2. Theoretical analysis of the bias-variance trade-off. (1) Bias. The global optimum of the estimator converges to the real value of the mutual information (Lemma 3.2), making it an unbiased estimator when the assumptions are met. (2) Variance. The variance analysis (Section 4) shows that the proposed method has equal or lower variance than methods using the variational lower bound. 3. As shown in Figure 1, the proposed derangement strategy for data sampling enables a lower-bias estimation than the permutation strategy, as it is not upper bounded by log(N). 4. Runtime analysis (Figure 5) is a good addition to the empirical results. Weaknesses: 1. They benchmark a series of experiments using a wide range of mutual information estimators (Figure 2, 3, 4). However, the empirical results do not seem to support the central claim of “an excellent bias-variance trade-off”. It is clear that CPC is suffering from the log(N) upper bound issue where N is the number of data samples, but the proposed f-DIME methods (KL-DIME, HD-DIME, GAN-DIME) do not seem to be consistently better in either bias or variance compared to the competing methods MINE and SMILE. 2. Minor issue: The methods are not ordered the same way in Figure 2 versus Figure 3 and 4. It would be helpful to order them consistently. 3.
Minor issue: “NWJ” was not defined before it was first used in line 67. 4. There are non-neural-network methods that have good theoretical rates and/or good experimental results in higher dimensions. They should at least refer to these if not include some comparisons. These include the following: https://journals.aps.org/pre/abstract/10.1103/PhysRevE.69.066138 (they reference this but don't compare against it; this method often performs very well experimentally) https://doi.org/10.1109/TIT.2021.3100108 https://proceedings.neurips.cc/paper/2015/hash/06138bc5af6023646ede0e1f7c1eac75-Abstract.html https://www.pnas.org/doi/full/10.1073/pnas.1715593115 Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I would ask the authors to address weakness #1. 2. I believe it would be helpful to quantify the bias and variance in the empirical results (Figure 2, 3, 4), for better comparison across KL-DIME, HD-DIME, GAN-DIME, NJEE, MINE and SMILE. I just noticed that some relevant information is present in Table 1, 2, 3 and 4 of the Appendix. Would the authors consider presenting such results in the main text instead (for instance, as a nested bar plot)? 3. In the related works section, the authors describe how some mutual information estimators produce high-biased estimates as they are upper bounded by log(N). Could the authors also explain the root cause(s) of high-variance estimators in the same section? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the Reviewer for the time spent reviewing our paper. In particular, we thank the Reviewer for having spotted the strengths of the paper; it is clear from her/his review that the key points and contributions of the paper have been captured. We provide below our answer to the detailed comments. **Weaknesses**: 1. While it is true that we benchmarked $f$-DIME with several other estimators and that the choice of the generator function $f$ does impact the performance of the estimator, the GAN-DIME estimator (proposed as the best $f$-DIME estimator in lines 282 and 1029) actually consistently showed the best performance in all the experiments, especially over MINE and SMILE. As a matter of fact, Fig. 2 for SMILE and Fig. 3 for both MINE and SMILE illustrate how they have higher bias compared to GAN-DIME. Moreover, MINE also exhibits large variance w.r.t. any $f$-DIME estimator. The high-variance phenomenon (theoretically discussed in Sec. 4, Eq. 10) of MINE is noticeable for high values of true MI. The reason why this phenomenon is not evident in Fig. 4 is that the plots are obtained for small values of true MI. Nevertheless, the bias of the two estimators is also visible there. We did not report MINE in all the experiments as its performance was consistently worse than GAN-DIME. Additional results on the high variance of MINE, however, are shown in Fig. 6 and Fig. 15 in the Appendix, while numerical values are reported in Table 1. SMILE, which is known to perform better than MINE (see [17]), performs worse than GAN-DIME in the scenarios illustrated in Figs. 2-4, which are the state-of-the-art benchmarks ([24]). This is even clearer (for many values of true MI) in the Gaussian case of Fig. 2, the half-cube and swiss roll cases of Fig. 3, and the scenarios of Fig. 4. 2. We thank the Reviewer for noticing that; we will sort them accordingly. 3.
We will rewrite the sentence in line 65 as: “Another variational lower bound based on the KL divergence dual representation introduced in [15] leads to the NWJ estimator (also referred to as f-MINE in [16])” 4. We thank the Reviewer for mentioning relevant related work. As we have explained in both the abstract and the introduction, to contextualize our contribution and facilitate the analysis, we compare $f$-DIME with state-of-the-art neural estimators. While we also agree that non-neural methods have great performance in certain low-dimensional scenarios, it is widely known that they fail to scale to larger dimensions (see [a]). For instance, KSG fails to obtain reasonable estimates even beyond 5 dimensions [b]. Results in [16] demonstrate the superiority of MINE w.r.t. KSG for 20-dimensional multivariate Gaussians. Moreover, none of the mentioned papers offer experiments with dimension $d$ larger than 10. Interestingly, the independence test in [c] utilized random permutations instead of derangements. We will include [c] and [d] in the related work. **Questions**: 1. See above. 2. We are glad to read that the Reviewer checked the Appendix carefully and we agree with the importance of such tables. We will include Tab. 1 as a nested bar plot in the main text if the paper gets accepted (see the rebuttal pdf), as we will have an additional page of content to fill in. We thank the Reviewer for suggesting the nested bar plot. 3. Due to space constraints, we wrote “the partition function does not need to be computed, leading to low variance estimators”. Implicitly, this sentence reveals that the cause of high variance in MI estimators resides in the partition function estimate.
We understand it is a subtle concept for new researchers entering the domain, therefore we will insert in the camera ready version the sentence “the SMILE estimator was introduced in [17], where the authors proved that the estimate of the partition function is the cause for high-variance in variational lower bound estimators.” **References** [a] Gao, Shuyang, Greg Ver Steeg, and Aram Galstyan. "Efficient estimation of mutual information for strongly dependent variables." Artificial intelligence and statistics. PMLR, 2015. [b] Mukherjee, S., Asnani, H., & Kannan, S. (2020, August). CCMI: Classifier based conditional mutual information estimation. In Uncertainty in artificial intelligence (pp. 1083-1093). PMLR. [c] Zeng, Xianli et al. “Jackknife approach to the estimation of mutual information.” Proceedings of the National Academy of Sciences 115 (2018): 9956 - 9961. [d] Kevin R. Moon, Kumar Sricharan, and Alfred O. Hero. 2021. Ensemble Estimation of Generalized Mutual Information With Applications to Genomics. IEEE Trans. Inf. Theor. 67, 9 (Sept. 2021), 5963–5996. **Rating**: We appreciate the Reviewer recognizing the theoretical and numerical contributions. We hope that our answers and changes clarify the Reviewer’s doubts and can contribute to grading our paper more positively. Thank you. --- Rebuttal Comment 1.1: Title: Satisfactory answers. Comment: The answers are satisfactory to me. I appreciate the clarification of 1 and this should be written more emphatically in the paper. I will raise my score to a 6. --- Reply to Comment 1.1.1: Comment: We deeply thank the Reviewer for the positive comment and we will indeed write the clarification of 1 more emphatically in the paper. To ensure accurate reflection of the Reviewer's comment, we kindly request an update of the score to 6 before the rebuttal period ends, as we still do not see this change reflected. Thank you.
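The $\log(N)$ ceiling of CPC mentioned in this exchange can be checked directly: even when the critic equals the exact log density ratio, the InfoNCE objective cannot exceed the log of the batch size. A minimal sketch on Gaussian data (our illustration, not the paper's experimental setup):

```python
import numpy as np

rng = np.random.default_rng(0)
d, rho, N = 10, 0.9, 64
true_mi = -0.5 * d * np.log(1.0 - rho * rho)   # about 8.3 nats, well above log(64)

x = rng.standard_normal((N, d))
y = rho * x + np.sqrt(1.0 - rho * rho) * rng.standard_normal((N, d))

# Pairwise scores with the *optimal* critic: f[i, j] is the exact log
# density ratio evaluated at (x_i, y_j).
r2 = rho * rho
f = np.empty((N, N))
for i in range(N):
    lr = (rho * x[i] * y - 0.5 * r2 * (x[i] * x[i] + y * y)) / (1.0 - r2)
    f[i] = -0.5 * d * np.log(1.0 - r2) + lr.sum(axis=1)

# InfoNCE / CPC estimate with batch size N (log-sum-exp for stability)
m = f.max(axis=1, keepdims=True)
lse = m[:, 0] + np.log(np.exp(f - m).sum(axis=1))
cpc = np.log(N) + np.mean(np.diag(f) - lse)

assert cpc <= np.log(N) + 1e-9   # the estimate can never exceed log(N)
assert cpc < true_mi             # hence it is strongly biased in this regime
```

Since $f_{ii} \le \log \sum_j e^{f_{ij}}$ for every row, the bound holds for any critic, which is why CPC is low variance but strongly biased whenever the true MI exceeds $\log(N)$.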
Rebuttal 1: Rebuttal: We thank the Reviewers for their valuable feedbacks. Please find below the one-page PDF containing extra information supporting our point-to-point rebuttal. Sincerely, The Authors Pdf: /pdf/4605d373fc70e48d9fe74e4fe8d2b4cd8d2fc605.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Efficient Policy Evaluation Across Multiple Different Experimental Datasets
Accept (poster)
Summary: This paper studies how to evaluate policies where source and target sites have distribution shifts. The authors introduce identification criteria for the effectiveness of policies, and develop doubly robust estimators that achieve fast convergence. The results are also generalized to multiple source datasets. Simulation results are provided to show the effectiveness of the method. Strengths: 1. This paper is well written and easy to follow. The setups are clearly introduced and motivated, and the assumptions and methodologies are accurately stated. 2. The proposed framework is general enough to cover both two domains and multiple domains. 3. The empirical results seem good and well aligned with the theories for both the synthetic simulation and real-world datasets. Weaknesses: I am not an expert in this field and not familiar with causal inference. I find no major weaknesses. Some minor weaknesses concern the empirical evaluations. For example, the non-synthetic experiments are only conducted on the ACTG 175 clinical trial dataset. Experimenting on other datasets would enhance the empirical results. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses part. Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We agree with your comment and have added another experiment using a real-world dataset. --- > For example, the non-synthetic experiments are only conducted on ACTG 175 clinical trial dataset. Experimenting on other different datasets will enhance the empirical results. We appreciate this feedback. We provide an additional experiment using the real-world dataset called "Project STAR" dataset (Krueger & Whitmore, 2001) [1] in the global response (attached pdf). We will add full details about the experiment in the revised version of the paper. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: The discussion period is nearing its end, with just about a day remaining. Could you please check and confirm if our rebuttal has addressed your concerns and comments?
Summary: The paper develops novel graphical criteria and estimators for evaluating the effectiveness of various policies by integrating data from multiple experimental studies. Through theoretical analysis and empirical validation via simulations, the paper demonstrates the robustness and fast convergence of the proposed estimators. Strengths: The authors develop nonparametric identification criteria to determine whether the effect of a policy may be expressed through an adjustment formula from two separate distributions induced by policy interventions collected from different populations. Further, they generalize the identification criteria and propose a general multiplier robust estimator applicable to the evaluation of policies from multiple source datasets. Weaknesses: The paper is difficult to follow. It is very hard to understand. There are some grammatical mistakes and confusing notations, e.g., 1. line 71, ‘variable $\mathbf{Z}$ For example’ -> ‘variable $\mathbf{Z}$. For example’ 2. unify the notations if necessary, try to avoid writing $\bigtriangleup_{ij}$ and $\bigtriangleup_{i, j}$ simultaneously. For the experimental section, please give detailed descriptions of the simulated experiment and the empirical experiment (if the descriptions are too long, please move them to the appendix), which allows readers to understand the advantages of the proposed framework. For the theoretical section, the authors first study the case of combining two experiments, followed by the case of combining multiple experiments. Honestly, the materials in the two studies are similar. The authors can unify the two studies: presenting the study of combining multiple experiments followed by the two examples (example 2 and example 3). This can free up more space to discuss the core components. Technical Quality: 3 Clarity: 3 Questions for Authors: Consider example 3. The proposed framework allows researchers to evaluate a policy from the three experiments.
Suppose that all the datasets are synthetic together such that the resulting data is treated as a single dataset obtained from one experiment; probably, we can evaluate a policy based on the synthetic dataset as well. Could you conduct a study that compares the two policy evaluation methods? In some situations, it is necessary to evaluate a policy when the size of the multiple treatments is large. Consider example 3 again, but this time, we consider all the big cities in the US instead of three cities in the US. The computation complexity increases, obviously. Are there any efficient methods to handle the relevant problem? The adjustment criterion assumes that the distribution is invariant across various experiments. To my realization, the distributions vary even in the examples presented in the paper. How does the proposed methodology handle situations where the underlying assumptions are violated? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper focuses on combining two experiments for policy evaluation only. It would be better if the authors can discuss the optimal policy when combining two experiments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful review. Please find below a point by point answer to all questions and concerns. Please let us know if we can clarify anything further. --- > __(1)__ line 71, ‘variable $\mathbf{Z}$ For example’ -> ‘variable $\mathbf{Z}$. For example’; __(2)__ unify the notations if necessary, try to avoid writing $\Delta_{ij}$ and $\Delta_{i,j}$ simultaneously. Thank you for pointing this out. We will fix these. --- > For the experimental section, please give detailed descriptions on the simulated experiment and the empirical experiment Thank you for the feedback. We will add further description on the details of the simulations. For further reference, we have provided the details of the dataset leveraged in this paper in the appendix. --- > Honestly, the materials in the two studies are similar. Authors can unify the two studies: presenting the study when combining multiple experiments followed by the two examples (example 2 and example 3). This can release more space to discuss the core components. Thank you for your suggestion. We designed the current structure of the paper since our theories are new to both the causal and RL communities. We wanted to introduce the idea starting from a canonical setting (combining two experiments) and then extending it to more general settings. --- > Suppose that all the datasets are synthetic together such that the resulting data is treated as a single dataset obtained from one experiment We added the simulation under the suggested scenario, and the result is reported in the attached PDF (Figure 3) in the global response. The key observation in this figure is that the errors do not converge to zero and diverge as the sample size increases. This shows that the estimators are not consistent, unlike the proposed estimators that are consistent with the true effect of the policy in the target domain. 
--- > The computation complexity increases The computational complexity of the proposed estimator is $O(m \cdot T(n,m))$, where $m$ is the number of variables in a causal graph, $n$ is the number of samples, and $T(n,m)$ is the time complexity of training ML models for estimating nuisances, which is commonly polynomial in $n$ and $m$. The complexity is $O(m \cdot T(n,m))$ because we are learning $2 \times L \times m$ nuisances for constructing the proposed estimator. This analysis shows that even if the number of source domains (upper bounded by $m$) increases, the time complexity increases linearly with $m$. Therefore, the proposed estimator is scalable with respect to the number of source domains. --- > To my realization, the distributions vary even in the examples presented in the paper. We couldn't parse this question. In our setting, the distributions vary across domains. However, if some invariances between domains (such as the domain transfer for $Y$ and $W$ in Def. 4 and Def. 6) hold, our proposed method is applicable. --- > How does the proposed methodology handle situations where the underlying assumptions are violated? If the assumptions are violated, the proposed estimator will not be consistent with the effect of the policy in the target domain and may deviate systematically from the true value. For example, as shown in the attached PDF in the global response, if we ignore the heterogeneity assumption and treat all datasets as if they are from the same population, the resultant estimates do not converge and even diverge. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: The discussion period is nearing its end, with just about a day remaining. Could you please check and confirm if our rebuttal has addressed your concerns and comments?
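For readers less familiar with the doubly robust construction discussed throughout these reviews, the sketch below illustrates the idea in a deliberately simplified single-stage, single-domain setting. The data-generating process and the nuisance names `mu_hat` and `pi_hat` are invented for illustration; this is not the paper's multi-source estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50000

# Synthetic single-stage data (invented for illustration): covariate X,
# binary action A drawn from a logistic behavior policy, outcome Y.
x = rng.standard_normal(n)
p1 = 1.0 / (1.0 + np.exp(-x))                 # behavior propensity P(A=1|X)
a = (rng.random(n) < p1).astype(float)
y = x + 2.0 * a * x + rng.standard_normal(n)  # so E[Y|X,A] = X + 2AX

def target_policy(x):
    """Deterministic target policy: treat iff X > 0."""
    return (x > 0.0).astype(float)

# Nuisance functions; plugged in exactly here, fitted with ML in practice.
def mu_hat(x, a):      # outcome model E[Y | X, A]
    return x + 2.0 * a * x

def pi_hat(x, a):      # behavior propensity of the observed action
    p = 1.0 / (1.0 + np.exp(-x))
    return np.where(a == 1.0, p, 1.0 - p)

pi_x = target_policy(x)
match = (a == pi_x).astype(float)
# AIPW / doubly robust value: direct model term plus an importance-weighted
# residual correction.
dr = mu_hat(x, pi_x) + match / pi_hat(x, a) * (y - mu_hat(x, a))
value = dr.mean()      # true value is 2 E[max(X,0)] = sqrt(2/pi), about 0.80
```

The estimator stays consistent if either the outcome model or the propensity model is correct, which is the sense in which it is doubly robust; the paper's contribution extends this style of construction to nuisances learned across multiple heterogeneous source domains.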
Summary: The paper studies off-policy evaluation in a transfer learning setting with multiple source datasets collected from observational and/or randomized studies. The objective is to evaluate the effect of a target policy on a possibly different target population. To achieve this, the author(s) assume that at each time point, at least one source population shares the same distribution of the dynamic variable given the historical covariate-treatment pair as the target population. This ensures the identifiability of the target population's policy value, even if no source population perfectly matches its entire distribution over time. The author(s) further propose doubly robust estimators and investigate their theoretical properties and finite-sample performance. Strengths: **Originality**: To the best of my knowledge, the proposed identification formula and the proposed estimators have not yet appeared in the existing literature. **Quality**: The theorems established seem to be correct, and the methodologies proposed are theoretically sound and potentially useful. **Clarity**. Overall, the paper is well-organized and easy to follow. Weaknesses: I have two major concerns, regarding the technical assumptions and numerical experiments, along with several moderate comments. I detail them one by one below: **Assumptions**. The assumptions might appear overly idealized: * **Same number of studies to horizon**. In addressing a $T$-horizon off-policy evaluation (OPE) problem, the author(s) assume that there are exactly $T$ studies, each corresponding to a time point $t$, where exactly one study per time point shares the same outcome distribution (the distribution of $Y_T$ if $t=T$ and that of $W_{t+1}$ otherwise) with the target population at that time. These settings seem unrealistic in practical scenarios. It would be beneficial for the author(s) to provide examples of real applications that validate this assumption.
The current analysis using the ACTG study seems artificial; I will elaborate on this concern in more detail later. * **Knowledge of matching source population**. Additionally, the author(s) seem to require prior knowledge of which source population matches the target population’s outcome distribution at each time point. In practical scenarios, while we may have access to multiple studies, it is typically unknown which one aligns with the target's outcome distribution at each time point. A more realistic approach would adaptively learn which studies are similar to the target distribution at each time based on the data. This adaptive learning scenario, in my opinion, would better fit real-world applications. * **Identical distributions**. Although the outcome distributions between the source and target populations may be similar at each time, they are not necessarily identical. Even if the distributions are not exactly the same, as long as the differences are minimal, it remains reasonable to use source data for transfer learning. Recent studies have addressed such distributional shifts using regularization or adaptive weighting: - https://arxiv.org/pdf/2112.09313 - https://arxiv.org/pdf/2111.15012 - https://arxiv.org/pdf/2406.00317 **Numerical experiments**. The experiments are overly simplified: * **Single-horizon settings.** While the paper studies multi-horizon dynamic treatment regimes, the simulations are conducted in a single-stage setting. I would suggest using D4RL benchmark datasets or OpenAI Gym environments to evaluate the proposed methodologies in multi-stage studies. * **Lack of competing methods.** There are some naive estimators to consider in these transfer learning settings. However, the author(s) did not include them in the experiments. For instance, one can ignore the differences in the outcome distributions and assume the multiple datasets come from a single population, with a mixture of multiple behavior policies that generate the actions.
* **Real data**. The ACTG dataset is generated under a single population. To evaluate the proposed methodology, the author(s) manually created a second population and a target population. It would be better if a real "multi-source" dataset could be used. **Related works**. * Under the identical distribution assumption, this paper mainly considers settings with covariate shift. There are some recent works that studied similar problems, although in single-stage settings, e.g., https://onlinelibrary.wiley.com/doi/epdf/10.1111/biom.13583. * In the related work section, the author(s) argued that their work can be viewed as a bridge between causal inference and OPE by leveraging formal theories in causal inference to solve OPE problems. In that sense, there are prior works that similarly integrated both fields. For instance, standard policy evaluation methods in the RL literature use the backdoor adjustment to learn the Q- or value function as a function of the state, to address confounding effects (see e.g., https://web.stanford.edu/class/psych209/Readings/SuttonBartoIPRLBook2ndEd.pdf). Meanwhile, other studies have applied the front-door adjustment formula for OPE in the presence of unmeasured confounders (https://www.tandfonline.com/doi/full/10.1080/01621459.2022.2110878). Finally, some works have leveraged double negative controls for OPE (https://cdn.aaai.org/ojs/6590/6590-13-9815-1-10-20200520.pdf). **Typo**: $Z$ is used to denote the dynamic covariate in Section 2. However, this notation has been replaced with $W$ starting from Section 3. Technical Quality: 3 Clarity: 3 Questions for Authors: Would it be possible to establish the semiparametric efficiency of the proposed estimators? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations have been discussed in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your extensive review; we really appreciate the feedback. In the following, we comment on the mentioned weaknesses and address outstanding questions and concerns separately. --- > __(1)__ Same number of studies to horizon __(2)__ While the paper studies multi-horizon dynamic treatment regimes, the simulations are conducted in a single stage setting. We appreciate the careful reading of our setting and thank you for raising this point. Given that the paper tackles a novel problem setting, we have presented a canonical case of the T-horizon off-policy evaluation problem that assumes access to T studies. This assumption can be relaxed in practice, depending on the causal structure. For concreteness, in the attached PDF, we provide a simple example where a two-horizon OPE problem can be solved with access to a single source domain. More generally, the number of heterogeneous domains and the horizon need not match if a given causal structure satisfies the proposed criterion formalizing the invariance of mediated and outcome variables across domains. --- > Knowledge of matching source population. In the worst case, where every variable has arbitrarily different distributions from those of the target domain, the data obtained from the source population may not provide any clues for estimating the quantity defined in the target domain. To transfer knowledge, some form of invariance between the sources and the target domain is essential. In our proposed framework, the required knowledge about domain discrepancy is relatively minimal. Instead of quantitative information, qualitative information on which variables' distributions could differ between the source and the target is sufficient to apply the proposed method. For example, if the age distributions in London and New York are different, we can use this information without needing to know the extent of the difference. 
--- > A more realistic approach would adaptively learn which studies are similar to the target distribution at each time based on the data. Thank you for the intriguing proposition. In the setting we consider, however, we do not have access to data from the target population (except for baseline covariates) – and therefore, adaptive learning, which requires samples from the target population, may not be directly applicable to our problem setting. We want to remark that our problem setting might be reasonable for safety-critical or costly applications where inference of causal effects is necessary before implementing a candidate policy of interest and collecting data in the target domain. --- > Although the outcome distributions between the source and target populations may be similar at each time, they are not necessarily identical. We appreciate the references provided and will cite them. However, we respectfully disagree with the statement. The invariance of outcome distributions across domains stems from the ignorability assumption, which is present in all mentioned papers (e.g., Assumption 1 in the first paper) and in this paper (fourth bullet of Def. 4). This assumption establishes conditional independence between the outcome variable and domain-representing index variables, ensuring consistent outcome distributions across domains. --- > D4RL benchmark datasets or OpenAI Gym environments We appreciate these suggestions. However, the proposed benchmarks may not be suitable for our problem setting, as they lack elements like heterogeneous population inference, unobserved confounding, or causal structures. Instead, to further convey the practical utility of the proposed approach, we conduct an additional experiment using a real-world dataset from Project STAR (Krueger & Whitmore, 2001). This includes multiple stages and a semi-synthetic set-up (in which the existing dataset is partitioned to form different environments). 
We include a full description in the global response and attached pdf. --- > For instance, one can ignore the differences in the outcome distributions and assume the multiple datasets come from a single population We added a simulation under the suggested scenario, and the result is reported in the attached PDF (Figure 3) in the global response. The key observation in this figure is that the errors do not converge to zero and instead diverge as the sample size increases. This shows that these estimators are not consistent, unlike the proposed estimators, which are consistent for the true effect of the policy in the target domain. --- > Real data To enhance the practical implications of the proposed method, we added a real-world data simulation using the dataset from Project STAR (Krueger & Whitmore, 2001) in the attached pdf (Figure 2). --- > Related works. Thank you. We will definitely cite the papers mentioned above. --- > Would it be possible to establish the semiparametric efficiency of the proposed estimators? Yes. Since the proposed estimator is composed of heterogeneous datasets from multiple domains, multiple sampling distributions $P^i$ corresponding to each dataset should be leveraged to construct the semiparametric efficiency bound. Specifically, the partial influence function [2], an influence function defined relative to each $P^i$, can be constructed to provide an efficiency bound for the part corresponding to $\mathbb{E}_{\mathcal{D}_i}$. We will add a detailed discussion on the semiparametric efficiency using the partial influence function in the revised version of the paper. [2] Pires, Ana M., and João A. Branco. "Partial influence functions." Journal of Multivariate Analysis 83.2 (2002): 451-468. --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: The discussion period is nearing its end, with just about a day remaining. Could you please check and confirm if our rebuttal has addressed your concerns and comments?
Summary: This work presents a method for evaluating the effectiveness of policies across multiple domains using a new graphical criterion and estimators that combine data from multiple experimental studies. The authors report an error analysis of the proposed estimators that provides fast convergence guarantees, and additionally share simulation results as well as empirical verification on real data. Strengths: The targeted problem seems to be of importance, and the proposed method provides theoretical error and convergence analysis, backed by empirical results based on both simulation and real data. Weaknesses: The proposed method seems to have important potential in real-world applications, though the majority of the paper has been dedicated to theoretical analysis and the empirical evaluations are very limited. There is little discussion on how this could impact real-world problems, and there is no detailed analysis of the empirical results on the ACTG175 dataset. Figure 3c conveys a seemingly contradictory message: the proposed DML method does not perform better than OM on all datasets, though there is no discussion of that. Also, the authors state that "simulations on real and synthetic data are provided for illustration purposes only," which raises the question of to what extent this approach is useful in real-world scenarios. Technical Quality: 3 Clarity: 2 Questions for Authors: - Please provide more empirical analysis of the proposed method, ideally on more real-world datasets if possible. - Please discuss the results on the ACTG175 dataset and the different pattern observed in Fig. 3c. - Provide more empirical evidence on real-world data. Confidence: 1 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and insights. In the following, we aim to answer each of your concerns in turn. Please let us know if we can provide more details. --- > __(1)__ the empirical evaluations are very limited. There is little discussion on how this could impact realworld problems, > __(2)__ Please provide more empirical analysis on the proposed method ideally on more real-world datasets if possible. > __(3)__ Provide more empirical evidence on real-world data. We appreciate this feedback. We provide an additional experiment using the real-world "Project STAR" dataset (Krueger & Whitmore, 2001) [1] in the global response (attached pdf). We will add full details about the experiment in the revised version of the paper. [1] Krueger, Alan B., and Diane M. Whitmore. "The effect of attending a small class in the early grades on college-test taking and middle school test results: Evidence from Project STAR." The Economic Journal 111, no. 468 (2001): 1-28. --- > there is not a detailed analysis on the empirical results of the results on ACTG175 dataset. Please note that the data generation process relating to ACTG 175 is described in the Appendix. More detailed discussions on the results of the synthetic and ACTG 175 experiments will be added. --- > From figure 3.c, it can be seen a contradictory message that the proposed DML method does not perform better than OM on all datasets, though there is no discussion on that. > Please discuss the results on ACTG175 datasets and the different pattern observed in Fig3.c. Thank you for this comment. This observation is expected. The construction of the DML estimator combines aspects of the OM estimator and the PW estimator. The error of the DML estimator can be represented as a product of the errors of the nuisances for the OM estimator and the errors of the PW estimator. 
As a result, if the PW estimator has a large error, this can be reflected in a correspondingly larger error of the DML estimator. Since the error of the PW estimator is relatively larger than that of the OM estimator in Figure 3c, we expect the DML estimator to underperform compared to the OM estimator when the sample size is small. However, when both the OM and PW estimators converge to the truth, we expect that the DML estimator will outperform the other estimators as the sample size grows. --- > also authors state that "simulations on real and synthetic data are provided for illustration purposes only." which brings the question to what extend this approach is useful in real world scenarios. Please consider the context in which this statement is located. The full sentence is in our *Broader Impact Statement* section as follows: _"Finally, we emphasize that simulations on real and synthetic data are provided for illustration purposes only. These results do not recommend or advocate for the implementation of a particular policy, and should be considered in practice in combination with other aspects of the decision-making process."_ With these sentences, we do not question the usefulness of our methods, which we have justified through a series of theoretical and empirical evidence. Rather, we clarify that our paper should not be read as an endorsement of specific policies. --- --- Rebuttal Comment 1.1: Title: Official Comment by Authors Comment: The discussion period is nearing its end, with just about a day remaining. Could you please check and confirm if our rebuttal has addressed your concerns and comments? --- Rebuttal Comment 1.2: Comment: I thank the authors for the provided response. In light of the additional experiments, I have updated my scores accordingly.
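The OM/PW/DML relationship described in this rebuttal can be illustrated with a generic single-stage toy example. This is a hedged sketch under invented data-generating assumptions, not the paper's multi-domain estimator: `m_hat` is a deliberately biased outcome model, `e_hat` a well-specified propensity, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Hypothetical single-stage setup: covariate X, binary action A, outcome Y.
X = rng.normal(size=n)
p_behavior = 1.0 / (1.0 + np.exp(-X))          # behavior policy P(A=1|X)
A = rng.binomial(1, p_behavior)
Y = X + 2.0 * A + rng.normal(size=n)

true_value = 2.0                               # E[Y | do(A=1)] = E[X] + 2

m_hat = X + 2.0 + 0.3                          # outcome model (OM), biased by +0.3
e_hat = np.clip(p_behavior, 0.05, 0.95)        # propensity (PW), well specified

om  = np.mean(m_hat)                           # outcome-model estimator: inherits the bias
pw  = np.mean(A / e_hat * Y)                   # probability-weighting estimator
dml = np.mean(m_hat + A / e_hat * (Y - m_hat)) # doubly robust combination
```

Because the propensity is well specified, the correction term removes the outcome model's bias, so `dml` lands near the truth while `om` stays off by roughly 0.3; when both nuisances are misspecified, the DML error instead scales with the product of the two nuisance errors, which is the mechanism invoked above to explain Figure 3c.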
Rebuttal 1: Rebuttal: Thank you again for your time and dedication in reviewing our work. In this global response, we describe additional results, illustrated with figures in the attached PDF, to address outstanding comments and questions. In particular, we attach three figures that are described below. --- **Figure 1: Relaxation of the Assumption on the number of source datasets** To respond to Reviewer cwjE's concern on "_the author(s) assume that there are exactly $T$ studies_", we describe a scenario in which a single source domain can be used to evaluate a two-stage policy. This shows that the assumption of having exactly $T$ source datasets for solving a $T$-stage off-policy evaluation problem can be relaxed in practice, depending on the causal structure of the domains. --- **Figure 2: Experiment on Project STAR dataset** The second figure describes an additional experiment evaluating the effect of policies determining teacher/student ratios to improve academic achievement. Specifically, we use a semi-synthetic version of the Project STAR dataset (Stock et al., 2007) [1]. This study investigated the impact of teacher/student ratios on academic achievement for kindergarten through third-grade students. It was a four-year longitudinal study in which students were randomly assigned to one of three interventions with different class sizes each year, following different randomization procedures. The dataset is publicly accessible from the R data repository. The causal diagram we assume for this setting is provided in Fig. 2 of the attached PDF. Specifically, we consider the evaluation of a 3-stage policy that determines the student-teacher ratio across three different grades over time (these are the variables $(X_1, X_2, X_3)$ in the target environment). 
We observe intermediate outcomes $(W_1, W_2)$ that represent academic scores in grades 0 (Kindergarten) and 1, baseline covariates $\mathbf{C}$ such as ethnicity and gender, and the outcome of interest $Y$ representing academic scores at the end of grade 3. To mimic the setting where data at different stages was collected from different domains, we subsample the dataset using different sets of probabilities to induce differences in the distributions of baseline covariates (similar to the procedure conducted for ACTG 175). We consider a similar evaluation setup as in the main body of this paper and evaluate the proposed estimators PW, OM, and DML on different dataset sizes, plotting their absolute errors compared to the ground truth effect of the candidate policy. The results are shown in Fig. 2 of the attached PDF. The performance pattern is qualitatively similar to the other presented experiments, with all methods improving with sample size towards convergence (in the limit) and the DML estimator converging fastest. [1] Stock, J.H. and Watson, M.W. (2007). Introduction to Econometrics, 2nd ed. Boston: Addison Wesley. --- **Figure 3. Performance of naive estimators that ignore the differences across domains** To respond to comments from reviewers cwjE and xiN2, we implemented the "naive" estimator that ignores discrepancies across domains. We considered the 2-stage synthetic simulation scenario described in Sec. 5.1 and plotted performance as a function of increasing sample size. The key observation in this figure is that the errors do not converge to zero and instead diverge as the sample size increases. This shows that these estimators are not consistent, unlike the proposed estimators, which are consistent for the true effect of the policy in the target domain. Pdf: /pdf/ebdb1cc7a49682337e2aacecea882056bcd6ba75.pdf
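The failure mode of the naive pooled estimator described in this global response can be reproduced in a minimal covariate-shift toy example (all distributions below are invented for illustration, not the paper's simulation design): pooling sources as if they already matched the target population yields an inconsistent estimate, while reweighting by a known density ratio recovers the target quantity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

def npdf(x, mu):
    # standard normal density shifted to mean mu (unit variance)
    return np.exp(-(x - mu) ** 2 / 2) / np.sqrt(2 * np.pi)

# Two hypothetical source domains with shifted covariate distributions;
# the target population has X ~ N(1, 1), so the truth is E[Y] = 2 * 1 = 2.
x = np.concatenate([rng.normal(-1.0, 1.0, n), rng.normal(0.5, 1.0, n)])
y = 2.0 * x + rng.normal(size=2 * n)

naive = y.mean()  # pretends the pooled sources already match the target

# Importance weights: target density over the pooled (equal-mixture) density.
w = npdf(x, 1.0) / (0.5 * npdf(x, -1.0) + 0.5 * npdf(x, 0.5))
weighted = np.mean(w * y)
```

The naive estimate converges to 2 * (-0.25) = -0.5 rather than 2, and no amount of additional data fixes it, mirroring the inconsistency reported in Figure 3; the reweighted estimate is consistent for the target value.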
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Infusing Self-Consistency into Density Functional Theory Hamiltonian Prediction via Deep Equilibrium Models
Accept (poster)
Summary: The paper presents the DEQH model, a deep equilibrium model for solving quantum Hamiltonians. DEQH uses the deep equilibrium model because it converges to a fixed point, which matches the self-consistent nature of Hamiltonian solving. The paper presents an architecture based on QHNet, results and comparisons with QHNet on the MD17 and QH9 datasets, and convergence and ablation studies. Strengths: The paper addresses a significant problem of computational chemistry. The paper is original and novel in integrating a deep equilibrium model with Hamiltonian solving (combining fixed-point convergence and the self-consistency property makes sense). The paper has a good set of experiments comparing with an alternative state-of-the-art model and reaches better results than the other model. Weaknesses: It seems there are differences between DEQH and QHNet, including the input of the predicted H, but the differences are not clearly highlighted in the paper. This makes the comparison harder to interpret. The result of H and Psi errors being low while epsilon errors are high for DEQH is strange. What is the meaning of H, epsilon, and Psi, and why is the epsilon error higher in many cases? There are no error bars on the tables or the ablation study, which makes the results harder to interpret. Technical Quality: 2 Clarity: 2 Questions for Authors: see above. Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: A limitation I see is that the DFT ground truth calculation is still used in the loss. Since the DEQH model is a Hamiltonian solver, it would be nice to see how it performs using a self-consistency loss. It seems that avoiding having to run many DFT calculations would be a big improvement, whereas the current results of the paper show DEQH gets lower errors than the previous model but still needs DFT data and requires 1.5x training time. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. We will address each point in our response accordingly. ## Weakness 1: Thank you for your feedback. DEQH is a general method specifically devised to instill self-consistency into pre-existing models, while QHNet is a distinct network that predicts the Hamiltonian directly. Specifically, DEQH functions as a Hamiltonian solver, learning the iterative solving process $H^* = f(H^*, Z, R)$, while QHNet operates as a predictor $H = f(Z, R)$, where $H, Z, R$ denote the Hamiltonian matrix, atomic numbers, and atomic coordinates, respectively. $f$ represents a neural network and $H^*$ denotes the fixed point of the equation. In our research, we utilized QHNet as the fundamental backbone and incorporated DEQH into it, thereby creating DEQHNet. As mentioned in the Supplementary Material A.11, we opted for QHNet over PhiSNet due to the current implementation of PhiSNet being restricted to single-molecule support. This limitation stems from PhiSNet’s matrix prediction module, which is tailored to predict matrices of fixed size for identical molecules. DEQH introduces self-consistency by approaching the problem of Hamiltonian solution from the vantage point of a fixed-point iteration, which necessitates the inclusion of the Hamiltonian in the network's input. To ensure the broad applicability of the Hamiltonian across various graph neural networks, it must be transformed into signals on the graph. This is demonstrated in Section 4.1, where we transform the Hamiltonian, overlap matrix, and other equivariant tensors into invariant node features. Furthermore, we provide a PDF document in the global rebuttal section, which includes a figure delineating the distinction between the off-the-shelf model used for Hamiltonian prediction and the DEQH model. We understand the importance of clearly communicating these differences and will further elaborate on these points in the revised version of our paper. ## Weakness 2: Thank you for your question. 
In Appendix A.7, we provide detailed definitions of the metrics. H refers to the Mean Absolute Error (MAE) of the Hamiltonian matrix, while epsilon and Psi are the eigenvalues (orbital energies) and eigenvectors (orbital coefficients) obtained by solving generalized eigenvalue problems. The metrics for epsilon and Psi are the MAE of the orbital energies and the cosine similarity of the orbital coefficients, respectively. We explain in Appendix A.9 why epsilon is higher than the baseline in many cases. We selected a molecule randomly, added a Hermitian-Gaussian noise matrix to its Hamiltonian, and solved the corresponding generalized eigenvalue equation. We observed that as the noise on the Hamiltonian increased, the range of errors in the orbital energies also increased. The orbital energy errors reported for both QHNet and DEQHNet on the MD17 and QH9 datasets fall within this range, suggesting the experimental results are reasonable. This implies that only when the MAE of the Hamiltonian is sufficiently small can we ensure the corresponding orbital energy deviation is also small. In repeated experiments on the MD17 dataset, we found that when the Hamiltonian MAE of the model is at the same level, it is possible to obtain both a lower and a higher orbital energy MAE than the baseline. We believe this contributes to the performance difference observed between DEQHNet and the baseline model. This suggests that existing Hamiltonian error measurements cannot definitively reflect the errors in the eigenvalues. This is a nontrivial, separate research question that we plan to investigate in the future. We will elaborate on this point more clearly in the revised version of our paper to provide a comprehensive understanding of the model's performance. ## Limitations: Thank you for your insightful comments. Indeed, the DEQH model introduces self-consistency using DEQ, but it remains a supervised learning task that requires DFT data as labels. 
The statement "DEQH model is a Hamiltonian solver" refers to the DEQH model's ability to learn the iterative process of solving the Hamiltonian through DFT data, rather than it being a partial differential equation (Schrödinger equation) solver. With this mechanism, DEQH is a better approach for supervised learning of the Hamiltonian in that it exploits more information in a model of limited size by passing through the model multiple times until convergence, and the model learns the iteration map, which is a physical mechanism that generalizes better. Your idea of bypassing DFT computations is intriguing and aligns with our thoughts. As we mentioned in Appendix A.11, some existing methods introduce self-consistency using unlabeled data. Our approach is entirely orthogonal to these methods. Therefore, combining these methods to achieve a trade-off between DFT computational cost and accuracy is a promising direction worth exploring. --- Rebuttal 2: Comment: I thank the authors for the response. I still think the ablation study is hard to interpret, but I appreciate the additional comparison with other benchmarks. I have increased my score accordingly --- Rebuttal 3: Comment: We are pleased that our response has satisfied you and are very grateful for the increased score. As outlined in our rebuttal, DEQH functions as a Hamiltonian solver, learning the iterative solving process $H^* = f(H^*, Z, R)$, while QHNet acts as a predictor with the formula $H = f(Z, R)$. In practice, if the dataset includes overlap matrices, the network input can also incorporate this information, treating it similarly to the Hamiltonian by transforming it into equivariant node features (even if it's absent from the dataset, the calculation of the overlap matrix is quite straightforward and can be generated during data preprocessing). The overlap matrix offers a wealth of detailed information and can be computed easily with $Z$ and $R$. 
Accordingly, the equations for the DEQH and the modified Hamiltonian predictor become $H^* = f(H^*, Z, R, S)$ and $H = f(Z, R, S)$, respectively, where $S$ is the overlap matrix. In line with this, we conducted several experiments in our ablation study, testing QHNet ($H = f(Z, R)$), QHNet w/ S ($H = f(Z, R, S)$), DEQHNet ($H^* = f(H^*, Z, R, S)$), and DEQHNet w/o S ($H^* = f(H^*, Z, R)$) on the Uracil dataset. The results depicted in Figure 4 indicate that: * Incorporating the overlap matrix as an input leads to a lower Mean Absolute Error (MAE) in predicting the Hamiltonian (DEQHNet's Hamiltonian MAE is lower than that of DEQHNet w/o S, and QHNet's Hamiltonian MAE is higher than that of QHNet w/ S). Furthermore, the inclusion of the overlap matrix results in higher similarity in the orbital coefficients, suggesting that the overlap matrix is beneficial for the network's learning of the Hamiltonian. This also supports our experimental hypothesis that the overlap matrix provides orbital information, which in turn improves the results for the orbital coefficients. * Regardless of the presence of the overlap matrix in the network's input, the DEQH-based Hamiltonian MAE is consistently lower than the QHNet-based Hamiltonian MAE, indicating that the DEQH model benefits from the introduction of self-consistency. We will ensure that all new experimental results are included in the manuscript and will promptly refine our paper as soon as we are permitted to polish the final version. We deeply appreciate your valuable feedback.
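The fixed-point formulation $H^* = f(H^*, Z, R, S)$ discussed in this rebuttal thread can be sketched with a toy contraction standing in for the learned network (the map `f` below is an invented stand-in, not DEQHNet): iterating the map converges to a unique symmetric fixed point regardless of initialization, which is the property a DEQ/SCF-style forward pass exploits.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 6

# Invented stand-in for the learned map f(H, ...): a linear contraction plus a
# symmetric offset, whose unique fixed point plays the role of H*.
A = rng.normal(size=(d, d))
A /= 2.0 * np.linalg.norm(A, 2)        # spectral norm 0.5 => contraction factor 0.25
B = rng.normal(size=(d, d))
B = (B + B.T) / 2.0                    # keep the toy "Hamiltonian" symmetric

def f(H):
    out = A @ H @ A.T + B
    return (out + out.T) / 2.0

# Fixed-point iteration, as in an SCF loop / DEQ forward pass.
H = np.zeros((d, d))
for _ in range(200):
    H_next = f(H)
    if np.linalg.norm(H_next - H) < 1e-12:
        break
    H = H_next
H_star = H_next

# A different initialization converges to the same fixed point.
H_alt = rng.normal(size=(d, d))
for _ in range(200):
    H_alt = f(H_alt)
```

In the actual model the role of `f` is played by the (QHNet-based) network conditioned on $Z, R, S$, and the forward pass solves for the fixed point rather than applying the network once; this toy only illustrates why such an iteration is well defined when the map is contractive.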
Summary: The paper introduces a novel neural network architecture, DEQH, extending deep equilibrium models (DEQs) to improve predictions of quantum Hamiltonians. The architecture constrains solutions to ensure self-consistency of the Hamiltonian, thereby improving generalization capability and test accuracy. Strengths: Constraining the network to obey self-consistency is a natural thing to do and matches the physical constraints of the problem at hand, without the need for costly DFT calculations. The paper proposes an elegant parameterization. Further, it provides an extensive empirical analysis showing the practical benefits of the proposed approach on several datasets. Weaknesses: As mentioned, the DEQH model acts fundamentally as a solver, iteratively determining the Hamiltonian with fixed-point iterations. It is not directly clear to me why this would necessarily be better than, or how it compares to, frameworks that also ensure self-consistency through integration. Why would we expect it to be better both computationally and in terms of performance, or is there a clear trade-off? Technical Quality: 3 Clarity: 3 Questions for Authors: How do we expect DEQH to compare to methods that ensure self-consistency through integration? Is there a trade-off between performance and computational cost, or do we expect the method to be better overall? Why? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper proposes a novel way to embed self-consistency into neural network architectures for quantum Hamiltonian prediction and provides empirical evidence of benefits. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's recognition of the innovation and significance of our work and will address each point in our response accordingly. ## Weakness: Thank you for your insightful question. The Hamiltonian exhibits self-consistent iterative properties. Specifically, in the DFT solution process, the information from the previously obtained Hamiltonian is used to construct the current Hamiltonian for the current DFT solution. This process is repeated until certain criteria meet a set threshold. We hypothesize that introducing DEQ enables the network to learn the iterative mechanism of solving the Hamiltonian, rather than directly learning the solution to the Hamiltonian. This shift could allow the network to better capture inherent physical laws, thereby improving generalizability. Compared to models without self-consistency, DEQ introduces additional orbital information to the network input. This, coupled with the introduction of scientific priors, can expedite DFT convergence even when the model's output is simply used as the initial value for DFT (as shown in Appendix A.8). In contrast with methods that introduce self-consistency (which often include DFT computations in the loss function during model training), our method offers an architectural enhancement. The DEQH model does not directly include DFT computations, leading to a more efficient computational process. Empirically, if the dataset and molecular size are relatively small and the cost of DFT computations during training is acceptable, previous methods introducing self-consistency might outperform ours (as suggested by DEQHNet's results on water, where DEQHNet requires more data). However, when the dataset is large enough, or when it includes large molecules where DFT computation cost is prohibitive, the benefits of the DEQH model become more significant. 
As mentioned in Appendix A.11, since these methods are orthogonal, combining them could be a promising direction for future exploration. We will make sure to discuss these points more clearly in the revised manuscript. ## Questions: Thank you for your insightful question. Compared to works introducing self-consistency, our method enhances the model architecture. As mentioned in the related works section of our paper, several methods introduce this attribute during the training process at the loss level, requiring direct DFT computation. In contrast, the DEQH model does not directly involve DFT computation, leading to more efficient computational performance. Empirically, if the data volume and molecular size are relatively small, and the DFT computational cost during training is acceptable, previous works introducing self-consistency might perform better (as suggested by DEQHNet results on water, where DEQHNet requires a larger data volume). However, when the data volume is sufficient, or the data contains larger molecules making the DFT computation cost too high, the improvements offered by the DEQH model become more substantial. As we mentioned in Appendix A.11, since these methods are orthogonal, combining them is a direction worth exploring. --- Rebuttal Comment 1.1: Comment: I thank the authors for the further explanation and clarifications. I keep my recommendation for acceptance with the score of 7, conditioned on including discussions and additional experiments from the other rebuttals in the final manuscript. --- Reply to Comment 1.1.1: Comment: We are delighted that our response has met with your satisfaction. We will make sure to incorporate all of the latest discussions and experimental results into the manuscript and enhance its overall presentation promptly once the opportunity for a final polish is granted. Your constructive feedback is highly appreciated.
Summary: The authors introduce the DEQH model, which combines deep equilibrium models with existing ML models to predict the quantum Hamiltonian; they adopt QHNet as the backbone and further develop DEQHNet. The authors evaluate the proposed method on the MD17 and QM9 benchmarks, and the results show some effectiveness. Strengths: 1. The idea of incorporating DEQs is interesting; they reflect some intrinsic principles, as the authors demonstrate in the paper. 2. When compared with QHNet, DEQHNet shows advantages. Weaknesses: 1. Though I am relatively familiar with ML for Hamiltonian prediction and have gained some knowledge of DEQ, it is still not easy for me to understand the main part of this paper. 2. This work compares with only one baseline, QHNet; there are other works that could also serve as baselines [1][2][3]. 3. Source code is not available. [1] Unke O, Bogojeski M, Gastegger M, et al. SE(3)-equivariant prediction of molecular wavefunctions and electronic densities[J]. Advances in Neural Information Processing Systems, 2021, 34: 14434-14447. [2] Gong X, Li H, Zou N, et al. General framework for E(3)-equivariant neural network representation of density functional theory Hamiltonian[J]. Nature Communications, 2023, 14(1): 2848. [3] Wang Y, Li H, Tang Z, et al. DeepH-2: Enhancing deep-learning electronic structure via an equivariant local-coordinate transformer[J]. arXiv preprint arXiv:2401.17015, 2024. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Why is the model performance so much worse than the baseline in terms of energy? Have the authors explored the reasons? 2. Is the proposed method extensible, and can it be applied to other existing ML models? Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. We will address each point in our response accordingly. ## Weakness 1: Thank you for your feedback. The DEQH model is designed to predict the Hamiltonian. Given the self-consistent iterative properties of the Hamiltonian, we regard these properties as a fixed-point iteration problem, which inspires us to use DEQ to introduce self-consistency into existing models for Hamiltonian prediction. We provide a PDF document in the global rebuttal section, which includes a figure delineating the distinction between the off-the-shelf model for Hamiltonian prediction and the DEQH model. Furthermore, we will revise our manuscript to simplify the technical language and include more background information about DEQs and Hamiltonian prediction. We will also add further explanations about the integration of these two components and how this combination contributes to the overall model. We hope that these revisions will make our paper more comprehensible to readers with various backgrounds. We appreciate your feedback and will make sure to address this concern in the revised version of our paper. ## Weakness 2: Thanks for your insightful suggestion. Although the DEQH model could be extended to periodic systems, DEQHNet, being built on the foundation of QHNet, is tailored for non-periodic systems. Hence, a direct comparison with DeepH-E3 and DeepH-2 may not be suitable. In response to your feedback, we include additional benchmark results for SchNOrb [1], PhiSNet [2], and DeepH [3] on the MD17 dataset as follows.
|Dataset|Model|H [$10^{-6} E_h$] $\downarrow$|$\epsilon$ [$10^{-6} E_h$] $\downarrow$|$\psi$ [$10^{-2}$] $\uparrow$|
|--|--|--|--|--|
|Water|SchNOrb|165.4|279.3|100.00|
||QHNet|10.79|33.76|99.99|
||PhiSNet (ori)|17.59|85.53|100.00|
||PhiSNet (reproduce)|15.67|-|99.94|
||DeepH|38.51|-|-|
||DEQHNet|36.07|335.86|99.99|
|Ethanol|SchNOrb|187.4|334.4|100.00|
||QHNet|20.91|81.03|99.99|
||PhiSNet (ori)|12.15|62.75|100.00|
||PhiSNet (reproduce)|20.09|102.04|99.81|
||DeepH|22.09|-|-|
||DEQHNet|18.73|106.94|100.00|
|Malonaldehyde|SchNOrb|191.1|400.6|99.00|
||QHNet|21.52|82.12|99.92|
||PhiSNet (ori)|12.32|73.50|100.00|
||PhiSNet (reproduce)|21.31|100.60|99.89|
||DeepH|20.10|-|-|
||DEQHNet|17.97|93.79|99.90|
|Uracil|SchNOrb|227.8|1760|90.00|
||QHNet|20.12|113.44|99.89|
||PhiSNet (ori)|10.73|84.03|100.00|
||PhiSNet (reproduce)|18.65|143.36|99.86|
||DeepH|17.27|-|-|
||DEQHNet|15.07|107.49|99.89|

However, it should be noted that the data used in DeepH is re-labeled by OpenMX [4]. Furthermore, considering the high training cost of the original PhiSNet, we provide PhiSNet results from the QHNet paper for a more balanced comparison. We appreciate your suggestion and will make sure to incorporate these benchmark results in the revised version of our paper.
[1] Schütt K T, Gastegger M, Tkatchenko A, et al. Unifying machine learning and quantum chemistry with a deep neural network for molecular wavefunctions[J]. Nature Communications, 2019, 10(1): 5024.
[2] Unke O, Bogojeski M, Gastegger M, et al. SE(3)-equivariant prediction of molecular wavefunctions and electronic densities[J]. Advances in Neural Information Processing Systems, 2021, 34: 14434-14447.
[3] Li H, Wang Z, Zou N, et al. Deep-learning density functional theory Hamiltonian for efficient ab initio electronic-structure calculation[J]. Nature Computational Science, 2022, 2(6): 367-377.
[4] Ozaki T, Kino H. Numerical atomic basis orbitals from H to Kr[J]. Physical Review B, 2004, 69(19): 195113.
## Weakness 3: Thanks for your feedback. We understand the importance of making source code available for reproducibility and transparency in research. We have prepared an anonymous link (https://anonymous.4open.science/r/nips-rebuttal-80C4/) for the review. ## Question 1: Thanks for your question. In Section A.9 of the Supplementary Material, we discussed this difference in detail. We randomly selected a molecule, added a symmetrized (Hamiltonian matrices need to be symmetric) Gaussian noise matrix, and solved the corresponding generalized eigenvalue equation. As shown in Fig. 5, when the noise on the Hamiltonian gradually increased, the error range of the orbital energy also increased. The orbital energy errors we reported for both QHNet and DEQHNet on the MD17 and QH9 datasets fall within the range shown in the figure, which suggests that the experimental results are reasonable. This observation implies that only when the Mean Absolute Error (MAE) of the Hamiltonian is sufficiently small can we ensure that the corresponding orbital energy deviation is also small. Additionally, we repeated the experiments on the MD17 dataset and found that when the Hamiltonian MAE of the model is at the same level, it is possible to obtain both a lower and a higher orbital energy MAE than the baseline. In the comparison between QHNet and PhiSNet, we observed similar situations. This somewhat suggests that existing Hamiltonian error measurements cannot definitively reflect the errors in the eigenvalues. This is a nontrivial, separate research question that we plan to investigate in the future. We will elaborate on this point more clearly in the revised version of our paper to provide a comprehensive understanding of the model's performance. ## Question 2: Thanks for your question. Indeed, our approach is designed to be highly adaptable and can be applied to off-the-shelf machine learning models.
As stated in the introduction, our method introduces self-consistency to off-the-shelf models used for Hamiltonian prediction through the use of Deep Equilibrium (DEQ) models. To incorporate this into existing models, one simply needs to add a block that processes node features constructed from the Hamiltonian and overlap matrix inputs. This makes it relatively straightforward to combine any existing model with DEQ. We believe this adaptability is one of the key strengths of our method and will further highlight this in our revised manuscript. --- Rebuttal Comment 1.1: Comment: Thank you for your efforts and detailed response. However, I still have some concerns about the additional experiments: DEQHNet performed poorly on water, not even surpassing QHNet; its performance on ethanol was also not better than QHNet; and on Malonaldehyde, it showed no advantage compared to the baseline. It only had some advantage on uracil. Additionally, although the authors provided visualizations to demonstrate the method, I still believe that the presentation of the paper needs further improvement (the Reviewer eJGc also mentioned this point). In summary, although the idea of DEQHNet seems interesting, its presentation and experimental performance are not convincing enough, so I will maintain my original rating. --- Rebuttal 2: Comment: Thank you for sharing your further comment. Nevertheless, we are afraid that your comment is based on a factual misunderstanding. The fact is, DEQHNet outperforms QHNet in all cases, including ethanol and malonaldehyde, with the only exception of water due to limited available labels (500; other molecules have 25,000 labels) as we have explained in the paper. Such results indicate that DEQHNet improves upon existing end-to-end Hamiltonian prediction methods universally. 
We are not sure of the specific source of the misunderstanding, but we would like to stress that the primary comparison metric should be the Hamiltonian MAE, which directly measures the prediction accuracy of the models. Other metrics, i.e., orbital energy and orbital coefficients, are quantities derived from the Hamiltonian. As we mentioned in the rebuttal, the prevailing MAE metric for the Hamiltonian has been found not to align perfectly with the metrics of these derived quantities, which has also been observed in the comparison between QHNet and PhiSNet. In the appendix of our paper and in the rebuttal, we have explained why our method performs well in terms of Hamiltonian error and orbital coefficients. We also pointed out that the reported orbital energy falls within the error range of the current Hamiltonian. Specifically, in Section A.9 of the Supplementary Material and the Question 1 section of the rebuttal, we discussed this discrepancy in detail. We randomly selected a molecule, added a symmetrized Gaussian noise matrix (Hamiltonian matrices need to be symmetric), and solved the corresponding generalized eigenvalue equation. As shown in Fig. 5, as the noise on the Hamiltonian increased, the error range of the orbital energy also increased. The orbital energy errors we reported for both QHNet and DEQHNet on the MD17 and QH9 datasets fall within the range shown in the figure, suggesting that the experimental results are reasonable. Additionally, we can provide further evidence from the QHNet paper, which observed a similar phenomenon. In their study, PhiSNet achieved a Hamiltonian error of only $15.67\times10^{-6}$ Hartree on water, but the orbital energy error was "ten times worse than that by QHNet (33.76)", approximately $330\times10^{-6}$ Hartree. This observation implies that only when the Mean Absolute Error (MAE) of the Hamiltonian is sufficiently small can we ensure that the corresponding orbital energy deviation is also small.
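For intuition, the perturbation experiment described above can be mimicked with a small numpy sketch. Random symmetric matrices stand in for the actual Hamiltonian and overlap matrix, and the sizes and noise levels are illustrative, not the data from the paper; the generalized eigenproblem is reduced to an ordinary one via a Cholesky factorization of the overlap:

```python
import numpy as np

def gen_eigvals(H, S):
    """Eigenvalues of the generalized problem H c = eps S c via Cholesky
    reduction: with S = L L^T, solve the ordinary symmetric eigenproblem
    for L^{-1} H L^{-T}."""
    L = np.linalg.cholesky(S)
    Linv = np.linalg.inv(L)
    return np.linalg.eigvalsh(Linv @ H @ Linv.T)

rng = np.random.default_rng(0)
n = 20

# Toy symmetric "Hamiltonian" and SPD "overlap" (illustrative placeholders).
A = rng.standard_normal((n, n))
H = (A + A.T) / 2
B = rng.standard_normal((n, n))
S = B @ B.T + n * np.eye(n)  # diagonal shift keeps the overlap well conditioned

eps_clean = gen_eigvals(H, S)

maes = []
for sigma in (1e-4, 1e-2):
    N = rng.standard_normal((n, n)) * sigma
    H_noisy = H + (N + N.T) / 2  # symmetrized Gaussian noise, as in the experiment
    maes.append(np.mean(np.abs(gen_eigvals(H_noisy, S) - eps_clean)))

# A larger perturbation of the Hamiltonian gives a larger eigenvalue error.
assert maes[1] > maes[0] > 0
```

This reproduces the qualitative trend the rebuttal relies on: the error in the derived eigenvalues ("orbital energies") grows with the noise level on the matrix, so a small Hamiltonian MAE is necessary before small eigenvalue errors can be expected.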
In the comparison between QHNet and PhiSNet, we observed similar situations. This somewhat suggests that existing Hamiltonian error measurements cannot definitively reflect the errors in the eigenvalues. This evidence suggests that it is a nontrivial and long-standing challenge in the domain of Hamiltonian prediction to design a proper metric on the matrix space that reflects the error in derived quantities, e.g., energy; we will investigate this in future work. Finally, concerning your comment that "the presentation of the paper needs further improvement," we have already stated in our rebuttal that "We will also add further explanations about the integration of these two components and how this combination contributes to the overall model. We hope that these revisions will make our paper more comprehensible to readers with various backgrounds." The PDF we provided in the global rebuttal is merely an initial visualization, as the current NeurIPS review process only permits a one-page PDF that includes images and tables but no text.
Summary: This paper introduces the DEQH (Deep Equilibrium Quantum Hamiltonian) model, which integrates Deep Equilibrium Models (DEQs) for predicting quantum Hamiltonians. By incorporating DEQs, the model captures the self-consistency of Hamiltonians without needing iterative Density Functional Theory (DFT) calculations during training, enhancing computational efficiency. DEQHNet, a specific implementation, demonstrates improvements in prediction accuracy on the MD17 and QM9 datasets. The model acts as both a predictor and solver, iteratively refining the Hamiltonian to achieve self-consistency. Ablation studies further validate the effectiveness of this approach. Strengths: 1. The proposed DEQH model eliminates the need for iterative DFT calculations during training, which reduces computational overhead. 2. The DEQHNet model demonstrates improved accuracy in predicting Hamiltonians on the MD17 and QM9 datasets. 3. The model inherently captures the self-consistency required for accurate Hamiltonian prediction. 4. This paper includes ablation studies to analyze the contribution of different components of the model. 5. The paper shows quick convergence of the DEQHNet model. Weaknesses: 1. The presentation of the paper could be improved. In this paper, the integration of Deep Equilibrium Models (DEQs) with the Hamiltonian solver is presented in a quite technical way. Additional visual illustrations and diagrams could help clarify the workflow of the DEQH model and its components. 2. This paper primarily compares DEQHNet with QHNet. Benchmarking with additional methods could help better evaluate the results. Technical Quality: 3 Clarity: 3 Questions for Authors: What are the runtime and memory usage for DEQHNet compared to previous methods? What are the theoretical assumptions underlying the DEQH model, and how do they impact the generalizability of the results? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are immensely grateful for your assessments. We will address each point in our response accordingly. ## Weakness 1: Thanks for your feedback and suggestions. We provide a PDF document in the global rebuttal section, which includes a figure delineating the distinction between the off-the-shelf model used for Hamiltonian prediction and the DEQH model. We hope this visual aid will enable readers to gain a clearer and more intuitive understanding. Furthermore, in the revised version, we will include additional diagrams and illustrations to better depict the workflow of the DEQH model and its components. This will include detailed flowcharts and schematic representations that highlight the key processes and interactions within the model. We are grateful for your insights, which have been instrumental in improving the clarity and quality of our work. ## Weakness 2: Thank you for your valuable feedback. We agree that adding more benchmark results would provide a more comprehensive evaluation of our DEQHNet model's performance. In response to your suggestion, we include additional benchmark results for SchNOrb [1], PhiSNet [2], and DeepH [3] on the MD17 dataset as follows. 
|Dataset|Model|H [$10^{-6} E_h$] $\downarrow$|$\epsilon$ [$10^{-6} E_h$] $\downarrow$|$\psi$ [$10^{-2}$] $\uparrow$|
|--|--|--|--|--|
|Water|SchNOrb|165.4|279.3|100.00|
||QHNet|10.79|33.76|99.99|
||PhiSNet (ori)|17.59|85.53|100.00|
||PhiSNet (reproduce)|15.67|-|99.94|
||DeepH|38.51|-|-|
||DEQHNet|36.07|335.86|99.99|
|Ethanol|SchNOrb|187.4|334.4|100.00|
||QHNet|20.91|81.03|99.99|
||PhiSNet (ori)|12.15|62.75|100.00|
||PhiSNet (reproduce)|20.09|102.04|99.81|
||DeepH|22.09|-|-|
||DEQHNet|18.73|106.94|100.00|
|Malonaldehyde|SchNOrb|191.1|400.6|99.00|
||QHNet|21.52|82.12|99.92|
||PhiSNet (ori)|12.32|73.50|100.00|
||PhiSNet (reproduce)|21.31|100.60|99.89|
||DeepH|20.10|-|-|
||DEQHNet|17.97|93.79|99.90|
|Uracil|SchNOrb|227.8|1760|90.00|
||QHNet|20.12|113.44|99.89|
||PhiSNet (ori)|10.73|84.03|100.00|
||PhiSNet (reproduce)|18.65|143.36|99.86|
||DeepH|17.27|-|-|
||DEQHNet|15.07|107.49|99.89|

We would like to note that due to the design of DeepH, the data used is re-labeled by OpenMX [4], which may lead to different results compared to other models. Furthermore, considering the high training cost of PhiSNet, we also provide PhiSNet results from the QHNet article for a more balanced and comprehensive comparison. These results will be incorporated into the revised manuscript to provide a more robust and thorough evaluation of DEQHNet, thereby offering readers a clearer understanding of its performance relative to other state-of-the-art methods.
[1] Schütt K T, Gastegger M, Tkatchenko A, et al. Unifying machine learning and quantum chemistry with a deep neural network for molecular wavefunctions[J]. Nature Communications, 2019, 10(1): 5024.
[2] Unke O, Bogojeski M, Gastegger M, et al. SE(3)-equivariant prediction of molecular wavefunctions and electronic densities[J]. Advances in Neural Information Processing Systems, 2021, 34: 14434-14447.
[3] Li H, Wang Z, Zou N, et al.
Deep-learning density functional theory Hamiltonian for efficient ab initio electronic-structure calculation[J]. Nature Computational Science, 2022, 2(6): 367-377. [4] Ozaki T, Kino H. Numerical atomic basis orbitals from H to Kr[J]. Physical Review B, 2004, 69(19): 195113. ## Question 1: Thanks for your question regarding the runtime and memory usage of DEQHNet compared to previous methods. As discussed in Section A.10 of our Supplementary Material, the training time of DEQHNet on the QH9 dataset is approximately 1.5 times that of QHNet. This is primarily due to the need for DEQ models to perform fixed-point iteration solving, which results in a higher runtime compared to models that do not incorporate a self-consistency mechanism. Regarding memory usage, DEQHNet requires extra blocks to process node features constructed from the Hamiltonian and overlap matrix inputs. This results in a slightly larger memory footprint compared to previous network structures. In our experiments, DEQHNet only had a 28.5% increase in parameters compared to QHNet, demonstrating that it improved accuracy without a significant increase in parameters. We are currently quantifying these requirements more precisely and will include a detailed comparison of memory usage in the revised manuscript. ## Question 2: Thanks for your insightful question. The theoretical foundations of our DEQH model are rooted in the common practice in Density Functional Theory (DFT) of iteratively solving the electronic Schrödinger equation. The process begins by initializing a Hamiltonian, solving the generalized eigenvalue problem, and then constructing a new Hamiltonian for the next iteration. We treat this iterative property of the Hamiltonian as a fixed-point iteration problem. This allows us to leverage the characteristics of DEQ in solving fixed-point problems to impose self-consistency on the network for Hamiltonian prediction. 
The results from the QH9-stable-ood experiment demonstrate that our approach can significantly reduce the Mean Absolute Error (MAE) of the Hamiltonian, suggesting an enhanced capability for generalization. Additionally, the DFT Acceleration ratio presented in Appendix A.8 further reflects the generalizability of our model to a certain extent. We hypothesize that introducing DEQ enables the network to learn the iterative mechanism of solving the Hamiltonian, rather than directly learning the solution to the Hamiltonian. This shift could allow the network to better capture inherent physical laws, thereby improving generalizability. --- Rebuttal Comment 1.1: Comment: Thanks a lot for taking the time and effort to answer my questions. I would be considering raising the score in the next two days. --- Reply to Comment 1.1.1: Comment: We are glad to know that our response is satisfactory to you. We will make sure to include all the new experimental results in the manuscript and improve the presentation as soon as we are permitted a polish for the final version. We are grateful for your valuable feedback.
Rebuttal 1: Rebuttal: We are grateful for the valuable feedback provided by all the reviewers. We provide a PDF document in the global rebuttal section, which includes a figure delineating the distinction between the off-the-shelf model used for Hamiltonian prediction and the DEQH model. We hope this visual aid will enable readers to gain a clearer and more intuitive understanding. Pdf: /pdf/dfe9ae004091456ab4addd4be74c509a9dd2abca.pdf
NeurIPS_2024_submissions_huggingface
2024
L4GM: Large 4D Gaussian Reconstruction Model
Accept (poster)
Summary: This paper proposed L4GM, an efficient large 4D Gaussian reconstruction model to produce animated objects from videos by a single feed-forward. L4GM leverages ImageDream and LGM to achieve multiview images of the first frame as the input. The overall model is built upon the pre-trained LGM with cross-view and temporal attention blocks, which could be easily modified by rearranging the feature dimension. Besides, this paper proposed autoregressive reconstruction and 4D interpolation, largely improving the overall smoothness of the generated 4D. To train L4GM, the authors rendered animated objects from Objaverse, and conducted sufficient experiments to verify the effectiveness. Strengths: 1. As a feed-forward technique, L4GM enjoys good efficiency and performance, especially for the high-resolution 4D generation. 2. The overall pipeline is convincing while the paper is also clearly written. 3. This paper contributes to large-scale 4D data with 12M videos rendered from Objaverse. Weaknesses: 1. The main concern is the novelty. Although the overall pipeline is convincing, this paper is more like an extension of LGM for 4D generation. Most techniques are very straight-forward, such as temporal and cross-view attention, and multi-view synthesis by ImageDream+LGM. 2. Another concern is the setting of repeating the multiview images from the initial timestep as inputs for other frames. It works more like seeking temporary relief rather than a solid solution. For example, this repeating suffers from solving 4D generation with the motion of turning around, whose multiview images would contain conflict to the inputs. Why not use ImageDream+LGM to achieve more multiview inputs? Moreover, more quantitative and qualitative ablation studies should be considered for this setting, such as no-repeating views, repeating views, and multi-timestep-multi-view inputs respectively. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Maybe there is a typo of temp. embed. 
in Figure 6(b), which should be time embed? 2. Why there are 12M videos at all? The authors said that 110k animations are captured with 48 views. So the overall videos should be about 5M. 3. Since L4GM is trained with animated objects from Objaverse, the comparison of Consistent4D in Table 1 is not convincing enough. To the best of my knowledge, most objects in the Consistent4D test set are from Objaverse too. Please refer to https://github.com/yanqinJiang/Consistent4D/blob/main/test_dataset_uid.txt for more details. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors have discussed limitations in the supplementary. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
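The summary above notes that the cross-view and temporal attention blocks can be obtained from the pre-trained multi-view blocks simply by rearranging tensor dimensions. A toy numpy sketch of that folding pattern follows (single-head attention with identity projections; all shapes and names are illustrative, not L4GM's actual architecture):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    """Toy single-head self-attention over the token axis.
    x: (batch, tokens, dim); identity Q/K/V projections for brevity."""
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    return softmax(scores, axis=-1) @ x

B, T, V, N, D = 2, 4, 4, 16, 8  # batch, time, views, tokens per view, feature dim
x = np.random.default_rng(0).standard_normal((B, T, V, N, D))

# Cross-view attention: fold time into the batch, attend over all views' tokens.
xv = x.reshape(B * T, V * N, D)
xv = self_attention(xv).reshape(B, T, V, N, D)

# Temporal attention: fold views and tokens into the batch, attend over time.
xt = xv.transpose(0, 2, 3, 1, 4).reshape(B * V * N, T, D)
xt = self_attention(xt).reshape(B, V, N, T, D).transpose(0, 3, 1, 2, 4)

assert xt.shape == (B, T, V, N, D)
```

The design choice this illustrates is that the same attention weights can serve both roles: only the reshape/transpose around the block changes which axis the tokens attend over, which is why a pre-trained multi-view model can be extended to time without new attention machinery.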
Rebuttal 1: Rebuttal: > Q1. The main concern is the novelty. Although the overall pipeline is convincing, this paper is more like an extension of LGM for 4D generation. Most techniques are very straight-forward, such as temporal and cross-view attention, and multi-view synthesis by ImageDream+LGM. A1. As pointed out by reviewers (4G9y, UkTx, hQSp), we would like to emphasize that L4GM is the first 4D reconstruction network. To achieve this, we propose several methods to successfully generate 4D assets in this setting. One of our core technical contributions is the novel framework itself. Regarding the reviewers’ concern about the techniques being "very straightforward," as mentioned in the main paper, we intentionally kept the model simple to ensure better scalability. We would also like to highlight that building a large 4D object dataset and training a reconstruction model on it have not been explored before. While the design choices may seem straightforward in hindsight, several uncertainties existed during the model's development. These included: 1) whether a model could learn to reconstruct dynamic 3D objects over time from a single multi-view, 2) identifying an appropriate input format for combining single-view video and multi-view images, and 3) determining a suitable 4D representation for the feed-forward model to predict. Thus, we believe L4GM is a major step towards generating high-quality 4D assets. > Q2. Another concern is the setting of repeating the multiview images from the initial timestep as inputs for other frames. It works more like seeking temporary relief rather than a solid solution. For example, this repeating suffers from solving 4D generation with the motion of turning around, whose multiview images would contain conflict to the inputs. Why not use ImageDream+LGM to achieve more multiview inputs? 
Moreover, more quantitative and qualitative ablation studies should be considered for this setting, such as no-repeating views, repeating views, and multi-timestep-multi-view inputs respectively. A2. “Multi-timestep-multi-view” setting: Using ImageDream to generate more multiview inputs would produce very different 3D objects at different time steps. Please refer to an example in the rebuttal PDF. Generating temporally consistent multi-view videos is in fact a challenging task still at an early stage of research [A]. “No repeating views and repeating views” setting: We would like to reiterate that repeating views are only designed to let the model better handle the “T+V” input. Thus, no repeating views means that we need to change the model architecture to take a different input format, which we deem non-trivial and which risks hurting the pretrained weights. We agree that there are cases when the generated multiview images would conflict with the input video. We believe that the challenge can be tackled by improving the model's robustness to inaccurate multiviews, with strategies like augmentation and random dropout, which we leave for future works. [A] Liang et al., Diffusion4D: Fast Spatial-temporal Consistent 4D Generation via Video Diffusion Models, 2024 > Q3. Maybe there is a typo of temp. embed. in Figure 6(b), which should be time embed? A3. Yes, it is a typo. Thanks for pointing this out. > Q4. Why there are 12M videos at all? A4. One animation could be longer than one second. The 110k animations have 250k seconds after filtering, which sums to 250k × 48 = 12M 1-second videos. > Q5. Since L4GM is trained with animated objects from Objaverse, the comparison of Consistent4D in Table 1 is not convincing enough. To the best of my knowledge, most objects in the Consistent4D test set are from Objaverse too.
Please refer to https://github.com/yanqinJiang/Consistent4D/blob/main/test_dataset_uid.txt for more details. A5. Thanks for pointing this out. We refer the reviewer to a new evaluation of the model only trained on the GObjaverse subset in the rebuttal PDF. We manually checked that these test samples are not part of the GObjaverse subset, so the issue can be avoided. The new results remain state-of-the-art and have no significant difference from the numbers we report in the main paper. --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks for the rebuttal. The rebuttal addressed most of my concerns. I raise my score to 5. Importantly, the limitation and related discussion about repeating views should be included in the revision. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer 2k2z Comment: Thank you for raising the score. We appreciate your suggestions and will include them in the revision.
Summary: This paper utilizes renderings of animated objects from Objaverse(-XL) to extend LGM to 4D generation. Specifically, L4GM uses four orthogonal images of an object and the object's monocular dynamic video to obtain 3D Gaussians at each moment, enhancing the consistency between different moments through temporal self-attention layers. Subsequently, the smoothness of generated actions is further improved through a 4D Interpolation Model. L4GM leads in generation speed and evaluation metrics in the task of video-to-4D. Strengths: 1. This paper significantly improves the generation speed of text-to-4D by utilizing a large-scale, object-centered dynamic dataset to extend LGM to 4D generation, and can model 4D objects in wild videos, achieving highly generalized results. 2. The paper proposes a 4D Interpolation Model that can increase the frame rate of generated 4D objects, making the motion smoother. This will alleviate the problem of insufficient frame rates in video generation models. Weaknesses: The paper discusses extensively how to use dynamic datasets for pre-training, which is also a very important part of this work and will have a significant impact on the community. Whether this dataset is open source is also extremely important for evaluating this work, but the paper does not mention this point. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Section 5, it is mentioned that some objects come with animations. Does this mean the objects already have predefined animations? 2. In the supplementary materials, I noticed that some generated 4D objects have limited motion range and motion rationality, which is a common issue in other 4D generation works. Since evaluating these is a resource-intensive task, are there any possible evaluation methods that could be discussed? This could be part of future work, and once an evaluation method is established, the motion quality of 4D generation could potentially be further improved.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The main issue lies in the amplitude and controllability of the animations. Is there a potential direction for this in the future? Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. The paper discusses extensively how to use dynamic datasets for pre-training, which is also a very important part of this work and will have a significant impact on the community. Whether this dataset is open source is also extremely important for evaluating this work, but the paper does not mention this point. A1. We will release the code for both the model and the dataset construction upon acceptance of the paper and internal approval. Besides, we have tried our best to provide enough technical details in the paper to reproduce this work. > Q2. In section 5, it is mentioned that some objects come with animations. Does this mean the objects already have predefined animations? A2. Yes. The Objaverse dataset provides 3D objects with predefined 3D animations. Please kindly refer to this example in the Objaverse dataset https://skfb.ly/FWLt, where the dragon has been animated. > Q3. In the supplementary materials, I noticed that some generated 4D objects have limited motion range and motion rationality, which is a common issue in other 4D generation works. Since evaluating these is a resource-intensive task, are there any possible evaluation methods that could be discussed? This could be part of future work, and once an evaluation method is established, the motion quality of 4D generation could potentially be further improved. A3. In terms of "limited motion range and motion rationality", we highlight that we have performed a user study (in Table 2) on 24 videos from diverse sources (as detailed in Appendix G.1) to evaluate the motion quality. Our method achieves a high win rate over existing works on "Motion Realism" and "Motion Alignment w. Input Video". In addition, limited motion is not inherent to our approach: as a reconstruction model, the reconstructed motion largely depends on the input video.
We agree with the reviewer that a large-scale, comprehensive evaluation benchmark for 4D generation will be important to standardize and simplify evaluation. Although there exists a popular evaluation benchmark, Consistent4D, that automatically computes metrics (and on which we also evaluate our approach), it only provides 7 test samples and does not have a metric for motion quality. Improving 4D evaluation benchmarks is an important direction for future work. > Q4. The main issue lies in the amplitude and controllability of the animations. Is there a potential direction for this in the future? A4. As mentioned above, the animation depends on the input video. Controlling the animations via video generation or editing is feasible but lies beyond the scope of this work. Our work only serves as a deterministic reconstruction model that reconstructs the animation from 2D to 3D. --- Rebuttal 2: Comment: The author addressed my concerns. I agree that the animation is determined by the video. Thanks a lot. I will raise my score from borderline accept to weak accept. --- Rebuttal Comment 2.1: Title: Thanks to Reviewer hQSp Comment: We are glad to hear that our response addressed your concerns. Thank you for raising the score.
Summary: This work introduces a novel framework for generating animated 3D objects from single-view videos. The proposed framework employs a feed-forward approach, thus eliminating the need for computationally expensive optimization. The core idea is to create a large-scale synthetic multi-view video dataset and train a corresponding 4D LGM model, which is initialized from the 3D LGM [49] and enhanced with temporal self-attention. Additionally, an interpolation network is introduced to further improve the frame rate. Overall, this method achieves superior performance in producing 4D assets, both in terms of efficiency and reconstruction quality. Strengths: - To the best of my knowledge, this is the first work to achieve feed-forward generation of 4D assets from a given video. The successful utilization of a large-scale dataset to tackle this challenging task may inspire further research in the community on related topics. - The proposed framework demonstrates superior performance compared to existing methods across all evaluation metrics, as evidenced by the current evaluation (Figure 5, Table 1). Additionally, the user study indicates a general preference for the results produced by this framework. - Moreover, several significant ablation studies have been conducted on different components of the framework. Notably, the justification for using a pre-trained LGM (Figure 6(b)) is particularly convincing. - Furthermore, the autoregressive generation method can produce animated objects over longer intervals, beyond the training time intervals. Weaknesses: Technical contributions - Despite its impressive performance, this work's technical contributions are somewhat limited. The two main contributions are temporal self-attention and the 4D interpolation network. - The temporal self-attention can be seen as a direct extension of the multi-view self-attention proposed in [49].
- While the 4D interpolation network can speed up inference, its overall performance does not seem to have a substantial impact, as demonstrated in the supplementary material video. Having said that, I agree that a straightforward solution to a new problem should be recognized, and the above weakness is relatively minor. Technical Quality: 3 Clarity: 3 Questions for Authors: Some Questions: - Regarding the temporal self-attention layer, why is only the time dimension considered in the self-attention mechanism rather than incorporating both multi-view and temporal information (full attention)? An analysis focusing on efficiency and performance would be valuable to understand the rationale behind this design choice. - Why is the ablation study performed using PSNR (a reconstruction-based metric) instead of the metrics provided in Table 1? - It appears that only qualitative comparisons are offered for the interpolation network. Could the performance be evaluated using the metrics shown in Table 1? - Given that this work is an original contribution to the feed-forward generation of 4D assets, the community would greatly benefit from the release of the dataset and code to enable the reproduction of the work. Are there any plans to release these resources? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - This work also necessitates a "synthetic" multi-view video dataset for training. According to the implementation details, generating this dataset takes approximately 3 days using 200 GPUs. While this is easier to obtain than a real dataset, it still demands significant computational resources. Including a more comprehensive discussion on this in the limitations section would be beneficial. - The proposed framework is claimed to generalize extremely well to in-the-wild videos (Abstract). 
However, there are only a few samples provided (4 samples in Figures 1 & 4), and many of these are in an animated style, with the exception of the left example in Figure 4. This left example displays various artifacts on the produced objects, such as blurred arms. Given the current evidence, this claim appears to be inadequately supported, and it is recommended to either include more samples or lower the claim accordingly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. Technical contributions: A1. We appreciate that the reviewer agrees that “a straightforward solution to a new problem should be recognized”. We would also like to highlight that L4GM is the first feed-forward 4D reconstruction model, which could open up more possibilities for generating high-quality 4D assets. With regard to detailed technical contributions, as the reviewer pointed out, our core technical contribution is “a novel framework for generating animated 3D objects from single-view videos.” In terms of new techniques required to build such a model, besides the ones mentioned in the review, there are also 1) autoregressive reconstruction and 2) multi-view copying for input batching. > Q2. The temporal self-attention can be seen as a direct extension of the multi-view self-attention proposed in [49]. A2. We agree. We choose temporal self-attention to preserve the 3D reconstruction pre-training as much as possible and also to keep the model simple for better scalability. > Q3. While the 4D interpolation network can speed up inference, its overall performance does not seem to have a substantial impact, as demonstrated in the supplementary material video. A3. Besides inference speed, the interpolation model also helps to alleviate the autoregressive reconstruction error so that we can reconstruct longer videos. Consider a reconstruction model that reconstructs a 1-second 30-FPS video at a time and can maximally self-reconstruct 10 times without a quality drop; then the longest video it can reconstruct is 10 seconds. Our 3x-upsample interpolation model allows the reconstruction model to reconstruct a 3-second 10-FPS video at a time, thus improving the maximum video length from 10 seconds to 30 seconds. > Q4. Regarding the temporal self-attention layer, why is only the time dimension considered in the self-attention mechanism rather than incorporating both multi-view and temporal information (full attention)?
An analysis focusing on efficiency and performance would be valuable to understand the rationale behind this design choice. A4. We use temporal attention inspired by video diffusion models [2,3], which have been verified to work well when extending image generation to video generation at a large scale. Nonetheless, we agree that full attention is also a reasonable design choice, so we performed an additional experiment comparing full attention to temporal attention and show the results in the rebuttal PDF. Full attention requires more computation but brings no visible improvement to the results. > Q5. Why is the ablation study performed using PSNR (a reconstruction-based metric) instead of the metrics provided in Table 1? A5. The Consistent4D evaluation benchmark only has 7 test samples, so the numbers can be noisy; thus we chose to show PSNR plots. Nonetheless, we carried out evaluations on the Consistent4D benchmark and present the results in the rebuttal PDF. > Q6. It appears that only qualitative comparisons are offered for the interpolation network. Could the performance be evaluated using the metrics shown in Table 1? A6. Thanks for the suggestion. We have additionally evaluated the interpolation network on the Consistent4D benchmark by downsampling the test video framerates by 3 times. Results are presented in the rebuttal PDF. The numbers achieved by the interpolated 4D reconstruction and the baseline 4D reconstruction show no significant difference. > Q7. Given that this work is an original contribution to the feed-forward generation of 4D assets, the community would greatly benefit from the release of the dataset and code to enable the reproduction of the work. Are there any plans to release these resources? A7. We will release the code upon the acceptance of the paper and internal approval. > Q8. This work also necessitates a "synthetic" multi-view video dataset for training.
According to the implementation details, generating this dataset takes approximately 3 days using 200 GPUs. While this is easier to obtain than a real dataset, it still demands significant computational resources. Including a more comprehensive discussion on this in the limitations section would be beneficial. A8. Thanks for the suggestion; we will discuss the approach's computational resource requirements and improve our limitations section. > Q9. The proposed framework is claimed to generalize extremely well to in-the-wild videos (Abstract). However, there are only a few samples provided (4 samples in Figures 1 & 4), and many of these are in an animated style, with the exception of the left example in Figure 4. This left example displays various artifacts on the produced objects, such as blurred arms. Given the current evidence, this claim appears to be inadequately supported, and it is recommended to either include more samples or lower the claim accordingly. A9. We will modify the tone of the claim and make it more accurate. By “in-the-wild” we meant to emphasize that the model generalizes to unseen data domains that differ from our synthetic training data, such as generated videos from Sora or real-world videos from ActivityNet. --- Rebuttal Comment 1.1: Comment: Thank you very much for the additional experiments and comments. These have addressed my concerns. I am maintaining my positive stance towards the current manuscript. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer UkTx Comment: Thank you for the response. We are glad to have your concerns clarified.
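The temporal-attention versus full-attention comparison in A4 above can be made concrete with a minimal sketch (toy sizes and a plain single-head NumPy attention, not the paper's implementation): temporal attention folds the view axis into the batch so each view's frames attend only to one another, while full attention flattens views and frames into one sequence whose attention matrix is larger by a factor of the number of views.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x):
    # x: (batch, seq, dim); single-head scaled dot-product self-attention
    d = x.shape[-1]
    w = softmax(x @ x.transpose(0, 2, 1) / np.sqrt(d))
    return w @ x

B, V, T, C = 2, 4, 8, 16  # batch, camera views, frames, channels (toy sizes)
x = np.random.default_rng(0).normal(size=(B, V, T, C))

# Temporal attention: fold views into the batch; the frames of one view
# attend only to each other (sequence length T).
xt = self_attention(x.reshape(B * V, T, C)).reshape(B, V, T, C)

# Full spatio-temporal attention: one sequence of length V*T.
xf = self_attention(x.reshape(B, V * T, C)).reshape(B, V, T, C)

assert xt.shape == xf.shape == (B, V, T, C)
# Attention-matrix work: full attention costs V times more entries.
assert B * (V * T) ** 2 == V * (B * V * T * T)
```

With V = 4 views, full attention computes 4x as many attention entries for the same output shape, which matches the rebuttal's observation that it adds computation without a guaranteed quality gain.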
Summary: This paper proposes a model for 4D reconstruction from a single video, building upon dynamic 3D Gaussians and the LGM architecture [49] previously applied to static 3D scenes. By processing generated multi-view images (derived from the first frame using a prior method) alongside the video, the model outputs 4D Gaussians to reconstruct the video dynamics. The model was trained on a synthetic multi-view video dataset (Objaverse) and shows qualitative generalization to real-world images. Based on the good performance and extensive ablation studies, I recommend accepting this paper. Strengths: - The paper is clearly written, with good figures, structure, and well-explained motivation of design decisions - The task of 4D reconstruction is both timely and of significant interest to the community. - Though the paper contains strongly worded claims (see below), the demonstrated results show strong 4D reconstruction capabilities - Ablation studies highlight the benefits of pretraining and the chosen representation. Weaknesses: - Claiming that the model generalises “extremely well” in-the-wild lacks empirical support (apart from cherry-picked qualitative results) and is likely not true due to training assumptions (e.g. masks, static camera at 0 degree elevation). One possible evaluation to substantiate the claim would be a comparison with other works such as HyperNeRF. - Since the code is not available, the details about the architecture and training are not sufficient for reproducing the experiments and should be expanded. E.g. I could not find how exactly the 32 held-out views are sampled, neither in the supplementary nor in the main paper. - Minor: The architecture and training are not new and rely heavily on prior techniques, including a multi-view image generator.
This perhaps aligns the work more closely with the computer vision and engineering community (e.g., CVPR) Technical Quality: 3 Clarity: 3 Questions for Authors: - Could you provide more information about the various axes of failure cases and the generalization limits for in-the-wild videos? - Could you provide more details about "grid distortion" mentioned on line 594? The referenced paper also lacks clarity on this. - Given the use of 128 80GB A100 GPUs, could you provide details on training the model with fewer resources? (particularly memory use) Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors have acknowledged the limitations; however, the claim about generalizing "extremely well" on in-the-wild videos is unsubstantiated and likely not true due to training assumptions (e.g. masks, static camera at 0 degree elevation). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Q1. Claiming that model generalises “extremely well” to in-the-wild lacks empirical support (apart from cherry-picked qualitative results) and likely not true due to training assumptions (e.g. masks, static camera at 0 degree elevation). One possible evaluation to substantiate the claim would be a comparison with other works such as HyperNeRF. A1. We will modify the tone of the sentence. By “in-the-wild” we emphasize unseen data domains that differ from our synthetic training data, such as generated videos from Sora or real-world videos from ActivityNet. We agree that the method may not handle some real-world settings, like non-zero elevation angles or unsegmented videos, very well. Although we have discussed some failure cases in Appendix I, we thank the reviewer for bringing up HyperNeRF and will consider using that example to strengthen the limitations section. > Q2. Since the code is not available, the details about the architecture and training are not sufficient for reproducing experiments and should be expanded. E.g. I could not find how exactly 32 heldout views are sampled neither in supplementary nor in the main paper. A2. We will release the code upon the acceptance of the paper and internal approval. The mentioned details can actually be found in Appendix B, L574: “For random cameras, we select a random elevation from [-5◦, 60◦]...We set the camera radius to 1.5”, where the random cameras are the 32 held-out views. > Q3. Minor: The architecture and training are not new and rely heavily on prior techniques, including a multi-view image generator. This perhaps aligns work more closely with the computer vision and engineering community (e.g., CVPR) A3. We believe that this work is closely related to a wide range of artificial intelligence fields, like AI content generation and 3D perception/simulation in robotics, so it is well suited to the NeurIPS community. > Q4.
Could you provide more information about the various axes of failure cases and the generalization limits for in-the-wild videos? A4. As mentioned in A2, a brief analysis has been provided in Appendix I, including motion ambiguity, multi-object scenes, and ego-centric scenes. We will expand the discussion to include more failure case analysis. > Q5. Could you provide more details about "grid distortion" mentioned on line 594? The referenced paper also lacks clarity on this. A5. It is a data augmentation technique from the reference paper LGM [49]. Grid distortion simulates 3D inconsistency by grid-sampling an image with a distorted grid. This makes the model more robust to inconsistent multi-view input images. Please find the detailed implementation at https://github.com/3DTopia/LGM/blob/main/core/utils.py#L63. > Q6. Given the use of 128 80GB A100 GPUs, could you provide details on training the model with fewer resources? (particularly memory use) A6. Various practical techniques could be applied to trade off longer training time for less memory consumption, for example, gradient accumulation. Our training takes 1 day on 128 GPUs, so the same model training can be finished on 32 GPUs in 4 days with gradient accumulation, which is also a reasonable training time. > Q7. Authors have acknowledged the limitations; however, the claim about generalizing "extremely well" on in-the-wild videos is unsubstantiated and likely not true due to training assumptions (e.g. masks, static camera at 0 degree elevation). A7. We agree that such limitations exist and will make the claims in the paper more accurate. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
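The gradient-accumulation trade-off mentioned in A6 above can be illustrated with a small sketch (a toy linear model, not the paper's training code): averaging gradients over several micro-batches reproduces the large-batch gradient exactly, so per-step memory shrinks while the step count grows proportionally.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 4))
y = rng.normal(size=32)
w = np.zeros(4)

def grad(w, Xb, yb):
    # gradient of the mean squared error 0.5 * mean((Xb @ w - yb)**2)
    return Xb.T @ (Xb @ w - yb) / len(yb)

# One large batch of 32 samples ...
g_full = grad(w, X, y)

# ... equals 4 accumulated micro-batches of 8 (averaged), so the same
# update can be computed with 1/4 the per-step memory at 4x the steps.
g_acc = np.zeros(4)
for i in range(0, 32, 8):
    g_acc += grad(w, X[i:i + 8], y[i:i + 8])
g_acc /= 4

assert np.allclose(g_full, g_acc)
```

The same arithmetic underlies the rebuttal's estimate that a 1-day run on 128 GPUs becomes roughly a 4-day run on 32 GPUs.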
Rebuttal 1: Rebuttal: We thank reviewers for the encouragement and insightful feedback. We are glad that the reviewers found: - (4G9y, UkTx, hQSp) This is the first work to achieve feed-forward generation of 4D assets from a given video. The task of 4D reconstruction is timely and of significant interest, demonstrating strong results and superior performance compared to existing methods. - (4G9y, UkTx) Ablation studies effectively highlight the benefits of pretraining and the chosen representation, with significant insights on different components. - (hQSp, 2k2z) The proposed 4D interpolation model increases the frame rate and smoothness of motion, extending the capabilities of L4GM in high-resolution 4D generation. - (2k2z, hQSp) The paper contributes significantly to large-scale 4D data with 12M videos rendered from Objaverse, showcasing highly generalized results. We refer the reviewers to the uploaded PDF file for additional experiments, including: - (UkTx) Evaluating ablation models and interpolation models on the Consistent4D benchmark. - (UkTx) Exploring the replacement of temporal attention with full attention. - (2k2z) Generating multi-view multi-step videos using ImageDream. - (2k2z) A new evaluation on the GObjaverse subset, which has no overlap with the Consistent4D benchmark. In addition, we would like to address some common concerns, including: - (4G9y, hQSp, UkTx) Code and data release: We will release the code upon paper acceptance and internal approval. - (4G9y, UkTx) Claim on in-the-wild: We will modify the tone of the claim. By “in-the-wild” we emphasize unseen data domains that are different from our synthetic training data. Please kindly let us know if you have any further questions. Pdf: /pdf/37b2cc41785647edd850726f98e5b98fd402291b.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Reinforcement Learning Guided Semi-Supervised Learning
Accept (poster)
Summary: This paper introduces Reinforcement Learning-Guided Semi-Supervised Learning (RLGSSL), a novel method that combines reinforcement learning (RL) with semi-supervised learning (SSL). By formulating SSL as a one-armed bandit problem, the authors employ an RL-based loss function to guide the learning process. Additionally, the method incorporates a semi-supervised teacher-student framework to improve learning stability. Strengths: The paper introduces a unique approach by framing semi-supervised learning as a one-armed bandit problem and integrating reinforcement learning to optimize the generation of pseudo-labels. Weaknesses: The rationality of the method design is not entirely convincing. Firstly, in Equation (2), why is the non-differentiable MSE used instead of the differentiable cross-entropy loss? Why use RL when supervised learning could be applied? Additionally, in the explanation of Equation (3), how is the "meaningful weight" defined? Would this design still be applicable when there is a significant imbalance in the number of samples for each class in the dataset? Training with RL methods is challenging and often unstable, though RL has its own advantages. The authors are advised to add experiments demonstrating the benefits of the RL-based design compared to a supervised design. Some descriptions in the paper might be confusing for readers without background knowledge. For example, the terms "convex combination" in line 190 and "fluid decision boundaries" in line 191 might be unclear. According to the results in Table 3, the improvements brought by the proposed method are not significant. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In line 215, the authors mention the reward function that can enhance the model's robustness and generalization. How is this proven in the experiments? 2. Could the authors further explain the description from lines 228 to 231?
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: In addition to the limitations discussed by the authors in the appendix, please refer to my comments in the Weaknesses section for further details. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort the reviewer has dedicated to reviewing our work. **About the rationality and advantage of using RL**: We use Reinforcement Learning (RL) to enhance the exploration of pseudo-labels. Traditional approaches in Semi-Supervised Learning (SSL) can encounter issues like overfitting and challenges in generating pseudo-labels. RL brings the benefits of exploration and the ability to handle non-differentiable operations by treating the predictor as a policy function. Our proposed method establishes a pioneering framework for integrating the power of RL, through non-differentiable losses, with the standard SSL loss. We conducted an ablation study in Table 5 of our manuscript by dropping the RL component, which is the variant “$\mathcal{R}:$ w/o sg[$\theta$]". This variant demonstrates poorer results compared to our RLGSSL method, showcasing the necessity of the bandit framework in our work. **About MSE loss**: We opted for the Mean Squared Error (MSE) when dealing with mixup data in the reward function, aligning with the established practices found in seminal works such as MixMatch [1], which utilizes L2 loss for this purpose. MSE loss is less sensitive to label noise compared to CE loss. Since pseudo-labels for the unlabeled data are not ground-truth labels, they may contain some degree of uncertainty or noise, and hence using MSE loss can be a good option. In addition, a non-differentiable reward function ensures that the reward function is solely employed for model assessment, rather than being directly utilized for model updating, enforcing the working mechanisms of RL. **About imbalance scenarios**: In this work, we followed the established convention for research in classic semi-supervised learning (SSL). While our method has demonstrated promising results in this context, the specific scenario of class imbalance in the dataset was not directly investigated.
The RL component's focus on decision-making and optimization is broadly applicable across different types of learning tasks. Therefore, while specific adjustments might be necessary, the foundational idea of RLGSSL could be extended and applied to different domains and tasks including but not limited to imbalance scenarios. Exploring the applicability and potential adaptations of our approach for imbalanced datasets presents a valuable direction for future work. **About experiments showing the advantage of RL**: We conducted an ablation study in Table 6 of our manuscript with a variant labeled "$\mathcal{R}:$ w/o sg[$\theta$]", where we removed the stop-gradient operator from the reward function. This modification allowed the reward function to become differentiable with respect to $\theta$, effectively transforming the setup into a supervised learning framework. This variant showed inferior performance compared to our original RLGSSL method which demonstrates the advantage of using RL. **About background knowledge**: Given the constraints of space and the expected familiarity of NeurIPS audiences with standard machine learning terminology, we focused on detailing our novel method rather than explaining well-established concepts such as "convex combination" and "fluid decision boundaries." We appreciate your understanding and encourage readers seeking definitions of these common terms to consult foundational ML literature or resources. **About improvement**: The improvement offered by our RLGSSL method is substantial as it significantly outperforms state-of-the-art works in semi-supervised learning (SSL). Specifically, RLGSSL surpasses the Interpolation Consistency Training (ICT) by a notable margin of 3.29% on CIFAR10 using just 1,000 labels, as detailed in Table 1 of our manuscript. Similarly, for CIFAR100 with 4,000 labeled instances, the improvement margin is 3.15%. 
It is noteworthy that our method in Table 3 has already achieved a very low error rate of 3.52% for CIFAR10 using 4000 labels, where any additional improvement is inherently limited. Note that MarginMatch only achieves a 0.09% improvement over the previous Meta Pseudo-Labels method on the same dataset setting, which is much smaller than the performance gain our method achieved with an even lower error rate. **About the effectiveness of the reward function**: In the ablation studies presented in Table 5 of our manuscript, we examined the variant where "$\mathcal{R}=1$", effectively removing the reward function and assigning equal rewards to all pseudo-labels. The results from this table clearly demonstrate that eliminating the reward function leads to an increased error rate in the model, thereby underscoring the critical role of the reward function in enhancing model performance. **About lines 228-231 and how the weights are defined**: We use a uniform probability distribution to represent the least informative prediction outcome in a reinforcement learning framework, where each class is considered equally likely, reflecting maximum uncertainty. The expected Kullback-Leibler (KL) divergence then measures how much the probabilistic outputs (policy outputs) of the model deviate from this non-informative, uniform distribution. By quantifying the divergence, the expected KL-divergence effectively gauges the level of informativeness or certainty in the model's predictions. This measurement is utilized as a weight for the reward function within the reinforcement learning setup. Consequently, this weighted approach incentivizes the model to produce predictions that are more distinct or discriminatory, moving away from the uniform distribution toward more informative and class-specific predictions, thereby enhancing the model's ability to discriminate effectively among different classes. [1] Berthelot, David, et al. "Mixmatch: A holistic approach to semi-supervised learning."
Advances in neural information processing systems (NeurIPS), 2019. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. However, I remain unconvinced by the rationale for using RL. The authors mentioned that "MSE loss is less sensitive to label noise compared to CE loss," which justifies using MSE as the reward function. I believe the opposite is true, as CE has better noise resistance. Therefore, building RL training based on this reward function is not convincing. The explanation for the lack of significant improvement in the results in Table 3 is also unconvincing. While the test error on CIFAR-10 is low, and the differences may not be significant, there is also no noticeable improvement on CIFAR-100. This improvement might be due to model optimization and parameter selection, rather than demonstrating the significant advantage of the proposed algorithm. The authors might have misunderstood my question 1. I asked how the proposed method enhances the model's robustness and generalization, as claimed in the original paper. The response was about the impact of the reward function on performance. Therefore, while this work appears interesting, there is room for improvement. I hope the authors continue their efforts to further improve the paper. I cannot recommend accepting the current version and keep my initial scores. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response. **About MSE loss**: Many outstanding SSL methods [1,2,4] use MSE loss on unlabeled data, and with a similar idea MixMatch [3] applies MSE to the mixup of labeled and unlabeled data. MSE is chosen because it effectively enforces consistency, helping to stabilize training and prevent overconfidence in noisy pseudo-labels. Studies like [1] show that MSE consistently yields slightly better results than cross-entropy (CE) loss on unlabeled data. 
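A toy numeric illustration (with made-up probabilities, not values from the paper) of the point above about MSE being gentler than CE under a confidently wrong pseudo-label: MSE between two probability vectors is bounded, while CE grows without bound as the predicted probability of the labeled class approaches zero.

```python
import numpy as np

p = np.array([0.98, 0.01, 0.01])     # confident model prediction
q_wrong = np.array([0.0, 1.0, 0.0])  # noisy (incorrect) one-hot pseudo-label

mse = np.mean((p - q_wrong) ** 2)          # bounded on the simplex
ce = -np.sum(q_wrong * np.log(p + 1e-12))  # explodes as p[label] -> 0

assert mse < 1.0   # MSE penalty stays bounded
assert ce > 4.0    # CE penalty is already ~4.6 here and is unbounded
```

This is one intuition for why a bounded loss on noisy pseudo-labels can stabilize training; the table that follows reports the empirical comparison.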
The table below highlights the effect of replacing MSE loss with CE loss in our RLGSSL in terms of test errors. These results are consistent with the studies in [1]: | Dataset | CIFAR-100 (4000) | CIFAR-100 (10000) | |-----------------|------------------------|-------------------| | RLGSSL | ${36.92}_{(0.45)}$ |${29.12}_{(0.20)}$ | | R(MSE->CE) | ${37.14}_{(0.53)}$ |${31.37}_{(0.52)}$ | **About improvement on CIFAR-100**: The improvement on the CIFAR-100 dataset is not marginal. As detailed in Table 1 of our manuscript, RLGSSL outperforms the second-best method, ICT, by 3.15% and 3.12% on CIFAR-100 with 4000 and 10000 labeled samples, respectively. Furthermore, in Table 3, using a WRN backbone, RLGSSL surpasses the state-of-the-art MarginMatch by 1.24% with only 2500 labeled samples on CIFAR-100. This consistent outperformance across different datasets and experimental setups clearly demonstrates the significant advantage of our proposed algorithm, RLGSSL. **About robustness and generalization**: Our model demonstrates strong generalizability and robustness, evident in its consistent performance with the lowest average test error and low standard deviation across multiple datasets. RLGSSL not only achieves significant improvements on CIFAR-100 but also outperforms state-of-the-art methods on other datasets. This consistent success across various datasets highlights the robustness and adaptability of our model, confirming its effectiveness in varied scenarios. [1] Laine, Samuli, and Timo Aila. "Temporal Ensembling for Semi-Supervised Learning." International Conference on Learning Representations. 2017. [2] Tarvainen, Antti, and Harri Valpola. "Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results." Advances in neural information processing systems 30 (2017). [3] Berthelot, David, et al. "Mixmatch: A holistic approach to semi-supervised learning."
Advances in neural information processing systems (NeurIPS), 2019. [4] Verma, Vikas, et al. "Interpolation consistency training for semi-supervised learning." Neural Networks 145 (2022).
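The expected-KL weighting described in the rebuttal above (the answer about lines 228-231) can be sketched as follows; the probability vectors are illustrative only. KL divergence from the uniform distribution equals log K minus the entropy of the prediction, so confident predictions receive larger reward weights and a perfectly uniform prediction receives weight zero.

```python
import numpy as np

def kl_from_uniform(p, eps=1e-12):
    # KL(p || uniform) = log(K) - H(p); larger means more informative
    K = len(p)
    return np.log(K) + np.sum(p * np.log(p + eps))

confident = np.array([0.9, 0.05, 0.05])   # near one-hot prediction
uncertain = np.array([1/3, 1/3, 1/3])     # maximally uninformative

assert kl_from_uniform(confident) > kl_from_uniform(uncertain)
assert abs(kl_from_uniform(uncertain)) < 1e-9  # uniform -> zero weight
```

Using this quantity as a multiplicative weight on the reward discourages the policy from hedging toward the uniform distribution, matching the rebuttal's description.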
Summary: This paper presents a method called Reinforcement Learning Guided Semi-Supervised Learning (RLGSSL), which frames SSL as a one-armed bandit problem. The method features a reward function that measures the discrepancy between the model's predictions on mixed data and pseudo-labels, guiding the learning process. Additionally, it employs a teacher-student model to enhance stability and reduce noise in pseudo-labels. The proposed joint loss function combines RL loss, supervised loss, and consistency regularization loss. Experiments validated the performance of the proposed method and the indispensability of each component. Strengths: 1. This paper attempts to solve the problem of SSL from the perspective of the bandit problem, offering a fresh angle to the research. 2. The ablation study convincingly demonstrates the indispensability of each component of the proposed method. 3. The paper is written in a fluent and clear manner. Weaknesses: 1. While the paper attempts to solve the SSL problem from a bandit perspective, it refers to the loss function as RL loss. The key distinction between RL and bandit problems is the presence or absence of state transitions, and the authors seem to have conflated these concepts. 2. The paper employs bandit terminology to explain parts of the methodology where it might not be necessary. Forcing SSL into a bandit framework seems somewhat unnatural, despite the novel perspective. 3. The paper appears to combine previously existing methods -- Regularization-Based Methods, Teacher-Student-Based Methods, and Pseudo-Labeling Methods. This raises questions about whether the innovation is sufficient. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Why does Table 1 and Table 3 use different numbers of labeled samples for the same datasets, such as using 1000, 2000, 4000 for CIFAR-10 in Table 1, and 250, 1000, 4000 in Table 3? Additionally, why are the comparison methods different in these tables? 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have discussed limitations in their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort the reviewer has dedicated to reviewing our work. **About terminology**: The bandit problem can be viewed as a special case of reinforcement learning where there is only one transition in the trajectory. Given that the bandit problem is an older concept and less common in current literature, we formally define it as a Single-Step Markov Decision Process (SSMDP), as described in [1], to improve clarity. In this framework, we focus on maximizing the instant reward rather than the cumulative reward over multiple transitions. **About the necessity of bandit framework**: We use Reinforcement Learning (RL), and specifically the one-armed bandit framework, to enhance the exploration of pseudo-labels. We design the training procedure for the SSL predictor as the training of a policy function in RL. This approach enhances the predictor's performance by incorporating a novel, non-differentiable RL loss. We conducted an ablation study in Table 4 of our manuscript by dropping the RL component, which is the variant “$\mathcal{R}:$ w/o sg[$\theta$]". This variant demonstrates poorer results compared to our RLGSSL method, showcasing the necessity of the bandit framework in our work. **About novelty**: Our approach, RLGSSL, is not merely a combination of existing SSL strategies but a novel application of reinforcement learning principles specifically designed to enhance SSL. RLGSSL introduces a specialized RL framework that uses a unique prediction assessment reward function to generate accurate and reliable pseudo-labels. This method innovatively incorporates an RL-based loss to leverage the strengths of RL, promoting superior generalization performance. Our extensive experiments, particularly the ablation studies shown in Table 5, highlight the effectiveness and innovation of our approach. 
The significant drop in model accuracy when the RL loss is removed (the variant $\text{w/o } \mathcal{L}_\text{rl}$) demonstrates the integral role of this component, confirming that RLGSSL is a fundamentally new method that transforms traditional SSL dynamics. **About the difference in tables**: The research field of standard semi-supervised learning (SSL) is extensive, with various papers employing different experimental setups. In this work, we have aimed to compare our method against a broad spectrum of standard SSL research. The primary difference between the setups in Table 1 and Table 3 lies in the backbone used. We rely on the results as reported in the respective papers of the related works. If a method is absent from any of the tables, it indicates that those authors did not provide results for that specific experimental setup. [1] M. S. Mortazavi, T. Qin, and N. Yan, “Theta-resonance: A single-step reinforcement learning method for design space exploration,” arXiv preprint arXiv:2211.02052, 2022. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I find the idea behind this work to be quite innovative; however, I still lean towards maintaining my original score. Regarding the use of the bandit framework, it seems more like a narrative tool rather than something strongly tied to the core method. Additionally, although bandit problems are a branch of RL, the primary focus and algorithms between them differ significantly. In this paper, the problem is framed using a bandit setting, yet the proposed solution is described as using an RL loss, which seems inappropriate. I also agree with reviewer mW6a that this paper has significant room for improvement. I hope to see a more refined version in the future. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the response. We would like to clarify that the bandit framework is not simply a narrative tool within our study. 
The entire training process is built upon the bandit framework, where the policy function is optimized through iterative interactions with pseudo labels based on feedback from the reward function, rather than using a simple policy gradient. Although there are differences between the solutions for bandit problems and traditional RL problems, the primary goal of the bandit problem remains to maximize the reward function. Our carefully designed KL-divergence weighted negative reward, as discussed in Section 3.1.2, is well-suited to the bandit setting and serves as an effective solution for maximizing the instant reward received by the agent through our policy. This formulation is not merely theoretical but operational, with the bandit's reward mechanism directly influencing the learning process through the RL loss. As shown in our experiments (Tables 1 and 2 in the paper), leveraging this bandit-inspired RL loss leads to measurable improvements over state-of-the-art SSL methods across multiple datasets, indicating a concrete, beneficial impact on performance rather than a superficial narrative alignment. Furthermore, the adaptation of RL loss in this context is well-founded, as our approach dynamically adjusts to both labeled and unlabeled data, akin to how RL algorithms optimize actions based on rewards—a principle core to both general RL and bandit problems.
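The "KL-divergence weighted negative reward" described in the rebuttal can be illustrated with a minimal sketch. This is a toy reconstruction, not the authors' implementation: the function and argument names are hypothetical, and RLGSSL's exact weighting and mixup details are omitted.

```python
import numpy as np

def kl_reward(p_mix, y_mix, eps=1e-8):
    # Negative KL divergence between the model's predictions on mixed
    # inputs (p_mix) and the corresponding mixed pseudo-labels (y_mix),
    # row-wise. A reward closer to zero means better agreement.
    kl = np.sum(y_mix * (np.log(y_mix + eps) - np.log(p_mix + eps)), axis=1)
    return -kl
```

In the bandit view, such a reward would be treated as a non-differentiable signal (detached from the computation graph) that weights the policy's log-likelihood term in the RL loss.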
Summary: The authors propose a novel Reinforcement Learning (RL) Guided semi-supervised learning (SSL) method, RLGSSL, that formulates SSL as a one-armed bandit problem and deploys an innovative RL loss based on a weighted reward to guide the learning process of the prediction model adaptively. The core idea is to use RL to guide the selection of informative unlabeled samples, thereby improving the learning efficiency and effectiveness of SSL models. Strengths: The authors have introduced RLGSSL, a new approach based on Reinforcement Learning that effectively handles Semi-Supervised Learning (SSL). This method uses RL to learn effective strategies for generating pseudo labels and guiding the learning process. The authors have devised a reward function for assessing predictions that encourages the learning of accurate and reliable pseudo-labels while maintaining a balance between the usage of labeled and unlabeled data. They have also developed a new RL loss that allows the reward from pseudo-labels to be incorporated into SSL as a non-differentiable signal in a reinforced manner, promoting better generalization performance. Furthermore, the authors have investigated integration frameworks that combine the power of both RL loss and standard semi-supervised loss, providing a new approach that has the potential to lead to more accurate and robust SSL models. Extensive experiments have demonstrated that this proposed method outperforms state-of-the-art SSL approaches and validates the integration of RL strengths in SSL. Weaknesses: The integration of RL introduces additional computational overhead, which may require substantial computational resources, especially for large-scale datasets. The current formulation assumes that the labeled and unlabeled data are drawn from the same distribution, which may not hold true in real-world scenarios. This limitation could affect the generalizability of the model. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. 
How can the RLGSSL method be adapted to handle scenarios where the labeled and unlabeled data come from different distributions? 2. How can the hyperparameter tuning process be automated or simplified to make the RLGSSL method more user-friendly and less resource-intensive? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort the reviewer has dedicated to reviewing our work. **About computational overhead**: In training deep models, backpropagation is typically the most computationally intensive step. Our method, RLGSSL, features a non-differentiable reward function and a streamlined algorithm, ensuring minimal additional computational overhead. This contrasts with many state-of-the-art methods, which often significantly extend training times due to their complexity. The table below demonstrates that for a batch size of 32 on the CIFAR10 dataset using a WRN28 backbone, the Cuda time for RLGSSL is significantly lower than that for other methods in a single training iteration. This underscores its efficiency and reduced computational cost. | Algorithm | CPU time | Cuda time | |----------------|----------|-------------| | MixMatch | 5.549s | 51.039ms | | FixMatch | 5.167s | 287.179ms | | UDA | 5.196s | 287.088ms | | RLGSSL (ours) | 5.186s | 23.472ms | **About distribution mismatch scenarios**: The RL component's focus on decision-making and optimization is broadly applicable across different types of learning tasks. Therefore, while domain-specific adjustments might be necessary, the foundational idea of RLGSSL could be extended and applied to different domains and tasks. Techniques such as domain-adaptive pretraining, where models are initially trained on a source domain and then fine-tuned on a target domain, or incorporating domain adversarial training, which encourages the model to learn features that are invariant across different domains, could be particularly effective. Exploring the integration of these strategies with RLGSSL to robustly address distribution shifts could be a promising direction for future research. 
**About hyperparameter tuning automation**: To automate and simplify hyperparameter tuning for the RLGSSL method, leveraging tools such as Bayesian optimization[1], Hyperband[2], and AutoML frameworks can be effective. These methods efficiently explore the parameter space by balancing the exploration of new configurations with the exploitation of promising ones, thereby minimizing resource expenditure. Developing more advanced and integrated hyperparameter tuning strategies for RLGSSL could be a valuable direction for future research. [1] Santos, Maria. "Bayesian Optimization for Hyperparameter Tuning." Journal of Bioinformatics and Artificial Intelligence 2.2 (2022). [2] Li, Lisha, et al. "Hyperband: A novel bandit-based approach to hyperparameter optimization." Journal of Machine Learning Research 18.185 (2018).
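As a concrete baseline for the tuning strategies cited in the rebuttal, a minimal random-search loop might look like the sketch below. The names are hypothetical; Bayesian optimization and Hyperband replace the uniform sampling here with smarter proposal and early-stopping schemes.

```python
import numpy as np

def random_search(objective, space, n_trials, rng):
    # Minimal random-search baseline for hyperparameter tuning:
    # sample configurations uniformly from `space` and keep the best.
    best_cfg, best_val = None, -np.inf
    for _ in range(n_trials):
        cfg = {k: rng.choice(v) for k, v in space.items()}
        val = objective(cfg)
        if val > best_val:
            best_cfg, best_val = cfg, val
    return best_cfg, best_val
```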
Summary: One of the bottlenecks for Semi-supervised learning (SSL) is achieving high performance with limited labeled data, as the model is often complex and needs multiple loss functions. Recently RL has been increasingly used in fine-tuning complex models with non-differentiable reward functions. With these observations, the authors proposed the Reinforcement Learning Guided Semi-Supervised Learning (RLGSSL) method, which uses RL to optimize the generation of pseudo-labels in SSL. More specifically, the pseudo-label predictor serves as the policy function and soft pseudo-labeling acts as the actions. Technically, the authors formulate SSL as a one-armed bandit problem with a continuous action space and deploy a novel RL loss to guide the SSL process based on a reward function specifically designed for semi-supervised data. Moreover, they further incorporate a semi-supervised teacher-student framework to augment the RL loss with a supervised loss and a prediction consistency regularization loss, aiming to enhance learning stability and efficacy. In this way, RL could provide exploration and manage non-differentiable operations. Strengths: 1. The idea of leveraging RL to optimize the pseudo-label generator in Semi-Supervised Learning (SSL) is a good catch. 2. The motivation is reasonable and in the experiment section, the authors conducted broad experiments to showcase the effectiveness of the proposed method. Weaknesses: 1. The comparison between the proposed method and other methods is not enough. E.g. the authors could showcase the uniqueness of the proposed method in the related work section. 2. The presentation of this paper is below average. E.g. there should be more figures and the writing should be more concise and logical. 3. The authors did not mention the side effects of bringing RL into SSL. E.g. will the training process be more time-consuming? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. 
The writing should be more concise, and the logic here is quite messy. E.g. in "We treat SSL as a special one-armed bandit problem with a continuous action space. One-armed bandit problem can be considered a single-step Markov Decision Process (MDP) [39]. In this problem, the agent takes a single action and receives a reward based on that action. The state of the environment is not affected by the action....". This content has been repeated a few times previously, but the reader still can not find the logic that connects these sentences. 2. The research question in this paper is not quite emphasized, it should be focused on, with deeper analysis. 3. Another thing is the authors may consider presenting the novelty more explicitly. E.g. do more comparisons between related work and the proposed methods in the introduction and related work section. 4. More related work. This is an extension of question 3, since the idea of RL to fine-tune complex models is prevailing, readers may expect the paper to show the related work of such an idea, e.g. in the paper, the authors said "Recently, RL has been applied to fine-tune complex models that typically fail to align with users’ preferences.", then the reader would expect more related work here. And if there exists a method that tries to combine SSL and RL in any sense, then it should appear in the baseline as well. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors did not mention the limitations of the proposed method. A possible limitation may fall on the training efficiency. E.g. according to Figure 1, there are multiple training loss and training objects. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the time and effort the reviewer has dedicated to reviewing our work. **About visualization**: We will add some figures to help visualize the results in future revisions of our paper. Nevertheless, the effectiveness of SSL is well captured in the test accuracy results reported, which is a primary and dominating evaluation norm in the literature. **About time complexity**: In training deep models, backpropagation is typically the most computationally intensive step. Our method, RLGSSL, features a non-differentiable reward function and a streamlined algorithm, ensuring minimal additional computational overhead. This contrasts with many state-of-the-art methods, which often significantly extend training times due to their complexity. The table below demonstrates that for a batch size of 32 on the CIFAR10 dataset using a WRN28 backbone, the Cuda time for RLGSSL is significantly lower than that for other methods in a single training iteration. This underscores its efficiency and reduced computational cost. | Algorithm | CPU time | Cuda time | |----------------|----------|-------------| | MixMatch | 5.549s | 51.039ms | | FixMatch | 5.167s | 287.179ms | | UDA | 5.196s | 287.088ms | | RLGSSL (ours) | 5.186s | 23.472ms | **About the research question**: In the 'Problem Setup' section, we formally and clearly define the research problem our study addresses, ensuring that readers clearly understand the framework and parameters within which our findings operate. **About more related works**: The 'Reinforcement Learning' subsection of Related works in the manuscript provides a detailed overview of various studies that employ reinforcement learning (RL) techniques to tackle diverse challenges. We will expand the related work section to include additional studies that apply reinforcement learning (RL) to fine-tune complex models. 
It is important to note that to the best of our knowledge, our work is the first to use RL to guide semi-supervised learning (SSL), which represents the novelty of our approach. **About limitation**: The limitation section can be found in Appendix E of the manuscript. **About loss terms**: Our RLGSSL method adds just one extra loss term—the RL loss—to the standard framework of semi-supervised learning (SSL), which typically includes supervised and consistency losses. The non-differentiable nature of the reward function within this reinforcement learning (RL) loss ensures that it does not significantly increase computational overhead during training (as shown above). Furthermore, the only trainable entity is the student model; the teacher model's parameters are updated using an Exponential Moving Average (EMA), which simplifies the training process. This streamlined approach addresses concerns about training efficiency. --- Rebuttal Comment 1.1: Comment: Thanks for the response. The Rebuttal mitigates my concern about training efficiency. So I will keep my positive opinion of this paper.
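The EMA teacher update mentioned in the rebuttal can be sketched in a few lines. This is an illustrative sketch only; the decay value and the flat-list parameter representation are assumptions, not the authors' code.

```python
def ema_update(teacher_params, student_params, decay=0.999):
    # Teacher parameters track an exponential moving average of the
    # student's parameters; only the student receives gradient updates.
    return [decay * t + (1.0 - decay) * s
            for t, s in zip(teacher_params, student_params)]
```

Because the teacher is never backpropagated through, this update adds negligible cost per training iteration, consistent with the efficiency argument above.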
null
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Active preference learning for ordering items in- and out-of-sample
Accept (poster)
Summary: This paper proposes an active learning algorithm for selecting pairs of items for comparison in order to learn a ranking of the items. The ranking error is measured by the (normalized) number of swapped pairs compared to the true ordering (Kendall’s Tau distance). The algorithm chooses pairs of items based on an upper bound on this ranking error. Experiments are conducted on one synthetic and 3 real-world benchmarks. Strengths: * Active learning for pairwise comparisons is an interesting problem to study * The approach is justified by theoretical analysis * The empirical evaluation looks promising Weaknesses: * The paper uses strong assumptions on the pairwise comparisons. Specifically, the response is assumed to be: $P(C_{ij}=1) = \sigma(\theta_*\cdot(x_i-x_j))$, where $\sigma$ is the sigmoid function (eq 2). These assumptions are used for fitting MLE parameters. These assumptions are common in the literature. * The optimization problem for choosing a pair (eq 5 and 7) seems hard. The cost is quadratic in the number of items since all pairs are considered. An approximation that depends linearly on the number of items would make the approach more practical. Technical Quality: 3 Clarity: 3 Questions for Authors: * “We restrict algorithms to only query pairs for which an annotation exists and remove the pair from the pool once queried.” – What is the number of available annotations for each dataset in figure 2 in the experiments? Only the number of items n and the dimension d are specified. Also, there is no notion of annotation noise here, since the labels are set, right? * What is $\Delta_*$? It seems from line 159 that: $\Delta_*=\min_{i,j} \Delta_{ij}/|i-j|$, but it would be better to define it explicitly. Does it really depend on the index difference $|i-j|$? What is the reasoning behind this? 
Minor: * Line 74: “Ordering algorithms based only on preference feedback cannot solve this problem since observed comparisons are uninformative of new items.” This statement is not clear, as long as items are represented as attributes/features $x_i$ then comparisons may be informative for new items. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I suggest adding the assumption on response (eq 2) and the $O(n^2)$ complexity to the list of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
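The response model from Eq. 2 that the review discusses, $P(C_{ij}=1)=\sigma(\theta_*\cdot(x_i-x_j))$, can be simulated with a short sketch. This is a toy illustration under that assumption, not the authors' code; names are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sample_comparison(theta, x_i, x_j, rng):
    # Item i is preferred to item j with probability
    # sigma(theta . (x_i - x_j)), per the logistic response model.
    p = sigmoid(theta @ (x_i - x_j))
    return int(rng.random() < p)
```

Under this model, items whose features differ strongly along `theta` are compared almost deterministically, while similar items yield noisy comparisons, which is what makes pair selection matter.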
Rebuttal 1: Rebuttal: **R: The response is assumed to be: $P(C_{ij}=1) = \sigma(\theta_*(x_i-x_j))$.** While this assumption underlies our theoretical analysis and motivates the current version of GURO, we believe that we have taken many steps to highlight the practical usefulness of our algorithm and that the results are more generally applicable. * We describe in Appendix D.3 how one can derive similar results for other generalized linear models, meaning that one can extend these results beyond the logistic feedback setting. * We introduce a hybrid model that is aimed at overcoming model misspecification. In Figure 5 in Appendix F.2 we show that these hybrid models still perform comparably to non-contextual algorithms even when features are completely uninformative. * In Figure 2, we evaluate comparisons made by human annotators. While the algorithm uses a logistic model, the data does not adhere to this. Still, we observe good convergence properties. GURO outperforms TrueSkill, an algorithm that does not assume a logistic model. Finally, we thank reviewer tuDy for noting that these are common assumptions. **R: The optimization problem for choosing a pair (eq 5 and 7) seems hard.** The motivating example for this paper is the annotation of medical images. As annotations made by radiologists are expensive, our main goal has been to minimize sample complexity, or as stated in the introduction: ''... how can we learn the best ordering possible from a limited number of comparisons?''. We believe that the algorithm is already practically applicable as the bottleneck will in most cases be the time it takes the user to perform a comparison (e.g., about 6 seconds in the case of IMDB-WIKI-SbS). It is however true that for large collections, computational bottlenecks can be encountered if we evaluate all possible comparisons at every iteration, a problem shared by most active preference learning algorithms [1,2,3]. 
As we describe in Appendix F, in the cases where $n^2$ is large we can instead evaluate our sampling criterion on a uniform subsample of candidate comparisons with no noticeable impact on performance, similar to [2]. Since the computational complexity of GURO was of interest to reviewer LY2M as well, we have written a discussion regarding this in the general rebuttal which we intend to include in a final version. This discussion also includes a potential alternative version of the sampling criterion that scales linearly with $n$. **R: There is no notion of annotation noise here, since the labels are set, right?** The label for each **annotation** is set, but there is still noise in the **comparisons**. This is perhaps most clear for the ImageClarity dataset where each comparison has been annotated more than 5 times on average by different annotators. As these annotators can disagree with each other, we get different comparison outcomes depending on which annotation we sample. Furthermore, even in the case where we don't have multiple annotations for each comparison, we still observe noise since annotations can be inconsistent. Say the true order is $a \succ b \succ c$. We might observe the three annotations: $a \succ b$, $b \succ c$, and $c \succ a$. The final annotation is inconsistent with the best possible ordering, but we likely observe this in our data as human annotators disagree with each other and do not necessarily provide responses consistent with their previous comparisons. These inconsistencies are the reason for the error on the held-out comparisons not converging toward $0$ in Figure 2. **R: What is the number of available annotations for each dataset in figure 2?** The number of available comparisons for each dataset is provided in Table 1 (located above Figure 2) under "\#comparisons". * **ImageClarity** - 27 730 * **WiscAds** - 9 528 * **IMDB-WIKI-SbS** - 110 349 **R: What is $\Delta_*$? 
Is it $\Delta_* = \min_{ij} \Delta_{ij}/|i-j|$?** You are correct: $\Delta_*$ should be defined as $\Delta_* = \min_{ij} \Delta_{ij}/|i-j|$ and we will add an explicit definition. Note that in the proof of Thm 1 we assume, w.l.o.g., that the items are indexed such that $i > j$ implies $y_i > y_j$. This means that $|i-j|$ is the distance between the elements in the ordered list. Hence, we can lower bound $\Delta_{ij}$ by a constant times the difference in position in the ordered list. Substituting $\Delta_{ij}$ by $\Delta_* |i-j|$ in the lower bound on line 158 allows us to simplify that expression, since all terms in the sum now depend only on $\Delta_* |i-j|$ and we can apply results for geometric sums. **R: Clarification regarding ''algorithms based only on preference feedback'' on line 74.** As also pointed out by LY2M, this sentence can be improved. By "based only on preference feedback" we mean sorting algorithms that disregard attributes/features, regardless of whether they are present. This is the case for TrueSkill, and Figure 2d highlights its inability to generalize to new items. We will improve our formulation for the final version. For example, ''algorithms based only on preference feedback that ignore contextual information''. **R: I suggest adding the assumption on response (eq 2) and the $O(n^2)$ complexity to the list of limitations.** We thank the reviewer for this suggestion and believe that, while we have addressed the former limitation in our first answer above and will add a discussion on complexity to the final version, both topics warrant being mentioned as potential future directions to explore further. We thank tuDy for their suggestions for improving the text and have incorporated these into the updated manuscript. **References** * [1] Qian, et al. (2015). Learning user preferences by adaptive pairwise comparison. * [2] Canal, et al., (2019). Active embedding search via noisy paired comparisons. 
* [3] Houlsby, et al., (2011) Bayesian Active Learning for Classification and Preference Learning --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications.
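The ranking error discussed throughout these reviews, the normalized number of swapped pairs (Kendall's Tau distance), can be computed with a simple quadratic-time sketch. This is a generic illustration of the metric, not code from the paper.

```python
def kendall_tau_distance(order_a, order_b):
    # Normalized number of discordant pairs between two orderings of
    # the same n items: 0 means identical, 1 means fully reversed.
    n = len(order_a)
    pos_b = {item: k for k, item in enumerate(order_b)}
    discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # order_a places order_a[i] before order_a[j]; check order_b.
            if pos_b[order_a[i]] > pos_b[order_a[j]]:
                discordant += 1
    return discordant / (n * (n - 1) / 2)
```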
Summary: This paper considers the ranking problem based on pairwise comparisons. The goal is to get the best sampling strategy for the best ordering from a limited number of comparisons. Under a logistic model on the difference between scores, the authors provide the analysis for the upper bound on the ordering error, which provides insights on sample selections. Following the idea of minimizing the bound, this paper proposes the GURO algorithm for pair selections. The proposed method is evaluated in four image ordering tasks with either synthetic labels or real world labels. Strengths: The proposed algorithm is well motivated by the theoretical result on the ordering error. It has very strong theoretical guarantees on the performance. The theory presented in the paper looks good to me. And it helps the reader to understand the algorithm better with some justification from Bayesian analysis. I find the paper very well written. The way the authors presented the results is very clear and easy to follow. Weaknesses: Although the theory presented in the paper looks good to me, I find it very similar to the result presented in the original logistic bandit paper [1]. By treating the input space as the difference between features, the problem is simplified to a standard logistic bandit problem. And the result in Lemma 1 and some analysis before Theorem 1 are very similar to Lemmas 2 and 3 in [1]. While I understand Theorem 1 is specifically for the ranking error, I think it is still straightforward to get Theorem 1 from existing lemmas. And for the empirical study, I see the proposed approach actually does not always perform better than baselines, especially BALD. The good result is on synthetic data; there is no significant advance on real data. Therefore, I am not convinced by the claimed statement in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Please provide more discussion on how different the proposed method is compared to logistic bandits. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes, the authors touch upon the limitation in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: Please provide more discussion on how different the proposed method is compared to logistic bandits.** As mentioned in our submission (l109, l138), our theory builds on the same techniques as previous papers on logistic bandits. However, we want to highlight that the problem considered here differs substantially from the standard bandit problem. Lemma 1 is a standard argument used in the literature (even before logistic bandits) which we include and tailor to our setting for completeness. Moreover, Theorem 1 is tailored toward our setting and justifies a selection criterion that is good for ranking but not for (regret minimization) bandits. Reviewer 6ajg did not specify which paper "[1]" is referring to, but if it is Faury et al. (2020), the reviewer is correct that Lemma 1 is similar to their Lemma 3. An important distinction is that we provide a bound on the probability of error while they offer a high-probability upper bound on the prediction error. In principle, one can turn a bound on the prediction error into a bound on the probability of error. However, in the case of Lemma 3 in Faury et al. (2020), this would require solving a difficult expression involving squares of logarithmic terms. We are not sure this can be done in an analytical way that results in a parsable final bound. The reason that we can present a clean upper bound is due to the decomposition on l687 where we decompose the prediction error into first and second-order terms and then proceed to bound them independently. This is a new contribution in our work. **R: The proposed approach actually does not always perform better than baselines, especially BALD. [...] Therefore, I am not convinced with the claimed statement in the paper.** We assume that the reviewer is referring to the claim of superior sample efficiency. This claim refers to a comparison with previously published algorithms (Uniform, BALD, CoLSTIM, TrueSkill). 
In all cases in Figures 1-2, a method proposed in this work (GURO, GURO Hybrid or BALD Hybrid) performs better than these baselines, although the difference on ImageClarity is smaller. For the fully contextual versions, GURO consistently uses fewer samples than BALD to reach the same ordering quality, thus having superior sample efficiency. GURO Hybrid and BALD Hybrid are close but GURO Hybrid is never worse. Both are consistently better than TrueSkill. These are new results that were not known in previous research. In Section 6.2 we reason that the cause of the similar performance between BALD Hybrid and GURO Hybrid is the increased dimensionality due to item-specific terms leading to BALD attributing more of the errors to epistemic uncertainty. It is possible that we would eventually see a similar plateau for BALD Hybrid in 2b as in 2c, but due to us having a limited amount of pre-collected annotations, these experiments can't be extended much further without the overlap of comparisons selected by the algorithms becoming too large. We emphasize that the Hybrid model is a novel contribution presented in this work. It is a pragmatic solution to utilize contextual attributes even when they are not sufficient to produce a complete ordering. We agree that our claims regarding the empirical results can be clarified further in the abstract and the introduction, and will therefore take steps to: * Clarify what we mean by GURO having superior performance to active preference learning baselines. * Further highlight that BALD Hybrid is new to this work. Finally, beyond GURO and Hybrid variants, we believe that one of this paper's main contributions is the evaluation of existing algorithms when applied to recover a complete ordering. To the best of our knowledge, this is the first time BALD has been used explicitly to order a list. 
Our experiments, where we also apply our hybrid modification, offer insights into when this works well (BALD in Figure 2a, BALD Hybrid in 2b), and when this does not (BALD in all Figures except 2a, BALD Hybrid in 2c). We thank reviewer 6ajg for their critique and will use this input to clarify our claims and contributions for the final version of the paper.
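The rebuttals in this record mention scoring a uniform subsample of candidate pairs instead of all $n(n-1)/2$ when $n^2$ is large. That workaround can be sketched as below; `score_fn` is a hypothetical stand-in for an acquisition criterion such as GURO's, and the names are illustrative, not the authors' code.

```python
import numpy as np

def select_pair_subsampled(n_items, score_fn, m, rng):
    # Score only a uniform subsample of m candidate pairs rather than
    # all n*(n-1)/2, then query the highest-scoring sampled pair.
    pairs = [(i, j) for i in range(n_items) for j in range(i + 1, n_items)]
    idx = rng.choice(len(pairs), size=min(m, len(pairs)), replace=False)
    return max((pairs[k] for k in idx), key=lambda p: score_fn(*p))
```

With `m` fixed, the per-iteration cost of scoring no longer grows quadratically in the number of items, at the price of possibly missing the globally best pair in a given round.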
Summary: Active preference learning is different from deriving a complete ordering from preferences. It focuses on “If we collect comparisons D_T, how good is the resulting model’s predicted ordering in the item set”. The paper proposes a sampling method in the active learning scenario. Theoretical analysis is also provided. Strengths: --Preference learning is critical to many downstream tasks. --The proposed method is somewhat novel. --The theoretical analysis is provided. --Experimental results are shown to verify its effectiveness. Weaknesses: --The assumption 1 and 2 are not so intuitive. It is better to illustrate an example. --Baseline methods are weak. Though many related studies are mentioned in the related work section, performances of baselines are not shown in the experiments. --The number of comparisons is not reduced tremendously on the ImageClarity in Table1. --Performances should be emphasized in terms of the prediction ordering quality. Technical Quality: 2 Clarity: 3 Questions for Authors: State-of-the-art baselines should be added for performance comparison. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Limitation should be added. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: Baseline methods are weak. Though many related studies are mentioned in the related work section, performances of baselines are not shown in the experiments.** * We argue that the baselines we have included are state-of-the-art and come from diverse fields: Active Preference Learning, Logistic Bandits, Non-Contextual Sorting. Since there is limited work on recovering a complete ordering using contextual attributes, there are no well-established benchmarks to beat. * We have selected baselines to study three main questions: the effects of including contextual information, the impact of the sampling criterion and the difference between in-sample and out-of-sample ordering. Most algorithms discussed in the related work section are **related** but are **not** designed to solve our problem, and would not give evidence for these questions. * Many algorithms would either perform poorly or do not work in the offline setting where we are limited to a subset of comparisons. [1] does not work on a subset of comparisons and assumes no noise. [2] accounts for noise by collecting $N$ annotations for *every* comparison pair, which is not resource-efficient and is not possible in most practical settings, including the tasks in our empirical study. Several methods in the related work section do not use contextual information, and TrueSkill (which is included in our study) is considered the state-of-the-art for this group. We welcome specific suggestions for stronger baselines to include. **R: Performances should be emphasized in terms of the prediction ordering quality.** We agree that obtaining a good ordering is the focus of this paper, but a ground-truth ordering does not exist for most datasets where comparisons have been made by human annotators since individual annotators often disagree and no annotator labels all pairs. 
In Figure 7 in Appendix F.2, we perform the same experiments as in Figure 2, but we instead measure the ordering error compared to an ordering estimated from all available comparisons. While this offers similar results, we discuss why this approach can be problematic and highlight this with an example below. In the case of ImageClarity, where the true ordering was available, we evaluate against this in Figure 6 in Appendix F.2. as well, and show that the results mirror our findings in Figure 2. **R: The assumption 1 and 2 are not so intuitive. It is better to illustrate an example.** We thank reviewer nNXZ for this suggestion and will include the motivation behind these assumptions in the updated manuscript. * Assumption 1 implies that the $\theta_*$ lies in some bounded ball and cannot have unbounded coefficients. We use this assumption on line 705 and we believe this assumption to be necessary since it wouldn't be possible to bound the maximum distance between some unknown $\theta'$ and the true $\theta_*$ otherwise. Note that this assumption is not limiting since the bound on $\theta_*$ is not used by our algorithms. * Assumption 2 states that there exists an upper bound on the norm of the feature vectors. This assumption is trivially satisfied whenever we have a finite set of data points. Both assumptions are standard in the literature and only required for the analysis. **R: The number of comparisons is not reduced tremendously on the ImageClarity in Table1.** This is true. We state in Section 6.2 that this is most likely a result of it being easy to order the images in ImageClarity based on their extracted features due to the low semantic level of image distortion. We still include this experiment to highlight that: * Hybrid models are sometimes not necessary. * There is still a clear difference between the contextual and non-contextual algorithms. * Uniform performs worse than active sampling criteria. 
We cannot expect any method to beat all other methods on every task, and we are happy to show examples where several methods work well. **R: Limitation should be added.** Is there a specific limitation you are referring to? In Section 7 we cover limitations, such as the lack of a lower bound, and future directions, such as applying representation learning and performing experiments in an online setting. We are happy to include other limitations in an expanded discussion for the final version. **References** * [1] Nir Ailon. "Active learning ranking from pairwise preferences with almost optimal query complexity." Advances in Neural Information Processing Systems, 2011. * [2] Kevin G Jamieson and Robert Nowak. "Active ranking using pairwise comparisons." Advances in Neural Information Processing Systems, 2011.
Summary: This paper considers the setting of learning an ordering between items according to some scoring rule. The assumption is that this ordering is determined by a contextual scoring rule, determined from the features of each item. Such contextual structure can aid in more rapidly learning an ordering, and generalizing to out of sample items. This ordering is learned from asking pairwise preferences to an oracle, reducing uncertainty about the total order. Since there are a large number of possible comparisons to be asked, active learning is deployed to only query labels for a subset of comparisons. A theoretical argument is made about the optimal balance between aleatoric and epistemic uncertainty to target in adaptive selecting queries, motivating the GURO adaptive sampling strategy and variants. The performance of GURO is demonstrated in several empirical experiments on simulated data and data collected (offline) from real humans. Strengths: I believe this paper is excellent - it is *clearly* written, has a great flow, and theoretical and empirical arguments are tied together nicely to motivate the problem setting, establish the problem fundamentals, convey mathematical intuition about uncertainty reduction, and justify active selection. There is also a robust set of experiments demonstrating GURO (and variants) in practice against baselines, along with explanatory discussion and implementation details. To my knowledge the analysis and algorithmic ideas here are *original*, and this is a *high quality* submission. Although not the centerpiece of this work, in an age where RLHF and efficiently learning from human preferences is paramount in training large models and ranking queries, work in active preference learning and the contributions made here are *significant*. There is also a robust and thorough appendix providing details and theoretical proofs (disclaimer: I have read the main paper in careful detail, but only skimmed the appendix). 
Overall, this is elegant, interesting, and impactful work (both theoretically and empirically). Weaknesses: I do not have any explicit weaknesses to list. Instead, I have a list of comments and questions below that I would like the authors to address. However I feel confident that these can be addressed during the rebuttal phase. One comment is that there is no discussion (unless I missed it) about computational complexity stating and comparing the big-O computational complexity of GURO, its variants, and other selection methods. This would be an interesting and strengthening addition to this work. Technical Quality: 4 Clarity: 4 Questions for Authors: - I think the statement "Moreover, the set we want to order is often larger than the set of items observed during training—we may want to rank new X-rays in relation to previous ones. This cannot be solved using per-item scores alone." should be clarified. If one knew absolute scores for all items (regardless if they are observed in training), isn't it trivial to compute an ordering? Or did the authors mean that pairwise responses collected during training could not generalize outside of training, without example features to predict from? [Edit: this does seem to be clarified in Section 2, but should be made more clear in the introduction] - I find the sentence "However, as we show in Section 4, learning this map to recover a complete ordering is distinct from the general preference learning setting, and existing algorithms lack theoretical justification for this application" to be vague and should be clarified. What exactly is the "general preference learning setting", versus learning a map to recover complete orderings? What does "this application" refer to? Which of these two settings are you concerned with here? - I'm confused by line 159. Why can one lower bound $\Delta_{ij}$ by a factor depending on the index difference $\lvert i -j \rvert$? 
Aren't the indices arbitrary, and agnostic to the underlying geometry of the feature space? Does this mean that a simple index permutation would drastically change this $\Delta$ quantity? - in line 166, should the dependence not be on $\theta_T$ rather than $\theta_*$?. See line 157 which uses $\theta_T$. line 169 also jumps back to $\theta_T$ - line 189 is missing an important point: by definition the entirety of $\mathcal{I}$ is unavailable, only $\mathcal{I}_D$ is available to select from. This should be commented on. - I think line 198 is too vague: "As θt converges to θ∗, this pair becomes representative of the maximizer of (4) provided there is no major systematic discrepancy between ID and I." Can you comment more on what constitutes acceptable vs unacceptable discrepancies between I and I_D? This does bring up a problematic point: what if $I_D$ is not sufficiently representative of $I$? Line 215 starts to hint at this discussion but I think it needs to be elaborated on, ideally more formally - in GURO Hybrid, how are these $\zeta_i$ parameters actually learned? Is it just a joint MLE on $\theta$ and $\zeta_i$? In this case, what prevents the model from learning an arbitrary $\theta$ (i.e., $\theta = 0$) and just using the full expressivity of $\zeta_i$? Is there some sort of regularization in practice? - for completeness, can you include a figure in the Appendix showing the experiment in Fig 1b, but just plotting $R_{I_D}$ instead of the difference? It would be good to know how each algorithm does in an absolute sense on $I_D$ - figure 2 would benefit from a log y scale - it is very difficult to discern between methods Minor: - line 147 uses the notation $\widehat{H}$ instead of $\widetilde{H}$. 
I assume this is a typo - missing left parentheses in line 5 of GURO algorithm - the word BayesGURO in Algorithm 1 should also probably be colored green to show the association to (7) and (11) - be careful with red and green as distinguishing colors for readers with color vision deficiency - fix quotes on line 258 Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **R: The definition of "general preference learning" and its distinction from the current setting** We agree that this can be clarified further. This paper focuses on learning a map to recover a complete ordering, but we leverage active preference learning to achieve this. By "the general preference learning setting" we refer to existing literature that adaptively samples pairwise comparisons, subsequently observing which of two items is preferred, to learn a comparison function $h(i, j)$. Much of this related work focuses on learning a parameter vector ($\theta$) [1,2,3]. Our approach is a special case of this, and while we are trying to learn $\theta$, we are only doing so as far as it helps us order our list of items. Existing work emphasizes getting good approximations of $\theta$, with [1] maximizing $\hat{\theta}^T \theta$, while [2,3] try to minimize $\lVert\hat{\theta} - \theta\rVert_2$. If the true vector $\theta$ is known, it is sufficient to order the list. However, reducing uncertainty in all directions will likely be wasteful; we do not care about the accuracy of $\theta$ in directions that do not help determine the ordering. As highlighted in Figures 1a and 2c in our paper, while BALD is constructed to efficiently learn parameters by sampling pairwise comparisons, it is ill-suited for full rank recovery. **R: In GURO Hybrid, how are these parameters actually learned? Is there some sort of regularization in practice?** Yes, the parameters are learned through joint MLE on $\theta$ and $\zeta_i$, with regularization, as you suggest. The regularization prohibits the algorithm from learning an arbitrary $\theta$. We cover experimental details, including regularization, in Appendix F. However, we agree that this information is essential for the hybrid model and will include this motivation in the main paper for the final version. 
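To make the joint MLE answer above concrete, here is a minimal numpy sketch of a regularized hybrid model of the form $P(i \succ j) = \sigma(\theta^T z_{ij} + \zeta_i - \zeta_j)$. The function name, the plain gradient-descent loop, and the default hyperparameters are all illustrative assumptions, not the implementation described in Appendix F.

```python
import numpy as np

def fit_hybrid(Z, pairs, wins, n_items, lam=1.0, lr=0.1, steps=500):
    """Joint MLE of theta (contextual weights) and zeta (item-specific terms).
    An L2 penalty on zeta (controlled by lam) keeps the model from ignoring
    theta and explaining everything with the item-specific terms.

    Z     : (m, d) feature differences z_ij for each observed comparison
    pairs : (m, 2) integer item indices (i, j) for each comparison
    wins  : (m,)   1.0 if i was preferred over j, else 0.0
    """
    d = Z.shape[1]
    theta = np.zeros(d)
    zeta = np.zeros(n_items)
    i_idx, j_idx = pairs[:, 0], pairs[:, 1]
    m = len(wins)
    for _ in range(steps):
        logits = Z @ theta + zeta[i_idx] - zeta[j_idx]
        p = 1.0 / (1.0 + np.exp(-logits))
        err = p - wins  # gradient of the negative log-likelihood w.r.t. logits
        g_theta = Z.T @ err
        g_zeta = np.zeros(n_items)
        np.add.at(g_zeta, i_idx, err)
        np.add.at(g_zeta, j_idx, -err)
        g_zeta += lam * zeta  # regularize only the item-specific terms
        theta -= lr * g_theta / m
        zeta -= lr * g_zeta / m
    return theta, zeta
```

With a large `lam`, the `zeta` terms shrink toward zero and the model reduces to the purely contextual one; with `lam = 0` the item-specific terms are free to absorb everything, which is the failure mode the regularization is meant to prevent.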
**R: Line 198 is too vague: Can you comment more on what constitutes acceptable vs unacceptable discrepancies between $\mathcal{I}$ and $\mathcal{I_D}$?** To learn the ordering perfectly, the feature differences for pairs of items in $\mathcal{I_D}$ must span the space spanned by the feature differences for pairs in $\mathcal{I}$. This is true with high probability when $\mathcal{I_D}$ is a random subset of $\mathcal{I}$, the dimension $d$ is small relative to $|\mathcal{I_D}|$ and the variation in directions of $z_{ij}$ for $i,j \in \mathcal{I}$ is sufficient (i.e., each direction is sufficiently covered). An "unacceptable" case would be one where a dimension of $z_{ij}$ has large variance for some pairs $i,j \in \mathcal{I}$ but is constant for pairs in $\mathcal{I_D}$. In this case, the component of $\theta$ in this dimension would not be learned consistently, and the model would generalize poorly to $\mathcal{I}$. **R: Clarification regarding sorting "using per-item scores alone" in the introduction** The assumption here is that you don't have any contextual features available (or that your sorting algorithm does not utilize them) and that per-item scores are not known from the beginning but have to be estimated by observing pairwise comparisons. In this scenario, when new items are added to the collection we have no way of estimating their underlying scores (apart from arbitrary initial values such as the average) before observing further comparisons where they are included. This is exemplified by TrueSkill's performance in Figure 2d. **R: Line 159: Why can one lower bound $\Delta_{ij}$ by a factor depending on the index difference $|i-j|$?** This follows from the definition of $\Delta_* = \min_{i\neq j} \Delta_{ij} / |i-j|$. The indices are actually not arbitrary: we assume, w.l.o.g., that the items are ordered such that $i > j$ implies $y_i > y_j$. This was stated in the appendix and we will move it to the main paper. 
The idea is that we can now lower bound $\Delta_{ij}$ by a constant, $\Delta_*$, times how close the two items are to each other in the ordered list, $|i-j|$. This allows us to simplify the lower bound stated on line 158 since each element in the sum now depends on $\Delta_* |i-j|$ and we can treat it as a geometric sum. **R: Discussion on the computational complexity of GURO and its variants** Thank you for suggesting this addition. We include a discussion on this topic in the general rebuttal since two reviewers raised the issue. We intend to add this discussion to the final version of the paper. **R: For completeness, can you include a figure in the Appendix showing the experiment in Fig 1b, but just plotting $R_{I_D}$ instead of the difference?** We agree with the reviewer that this would be good for completeness and will include this figure in the appendix for the final version. We have attached the produced figure to the general rebuttal. **R: in line 166, should the dependence not be on $\theta_T$ rather than $\theta_*$? See line 157 which uses $\theta_T$. line 169 also jumps back to $\theta_T$** Yes, it should be $\theta_T$; thank you for pointing this out. **R: line 189 is missing an important point: by definition, the entirety of $\mathcal{I}$ is unavailable, only $\mathcal{I_D}$ is available to select from. This should be commented on.** Yes, this is correct: direct minimization of (4) would be impossible considering we only have access to a subset $\mathcal{I_D}$. We will note this in the final version of the paper. We thank LY2M for the thorough feedback and for believing that our contributions are of great value. The remaining comments (including minor ones) have been addressed in the updated manuscript. **References** * [1] Qian, et al. (2015). Learning user preferences by adaptive pairwise comparison. Proceedings of the VLDB Endowment. * [2] Canal, et al. (2019). Active embedding search via noisy paired comparisons. 
International Conference on Machine Learning. * [3] Massimino, and Davenport (2021). As you like it: Localization via paired comparisons. Journal of Machine Learning Research. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I am satisfied with these points and leave my review unchanged (at an 8). Also, please double check the definition of $\Delta_*$ in line 125. It is missing the quotient you defined above. It just says $\Delta_* = \min_{ij} \Delta_{ij}$
Rebuttal 1: Rebuttal: Dear reviewers and chairs, thank you for evaluating our work. We are happy that a majority of reviewers found that the reasons to accept this paper outweigh the reasons to reject it. As strengths, the reviews pointed to the importance of the problem (3/4 reviews), the theoretical justification for the proposed algorithms (4/4), and the empirical evaluation (3/4). We also thank the reviewers for asking clarification questions and suggesting improvements to strengthen the paper. We have addressed these in individual rebuttals to each review and are ready to incorporate the arguments in the final manuscript. Moreover, we have attached a plot showing $R_{I_D}$ for the experiment in Figure 1b in the original paper, as requested by reviewer LY2M, for completeness. Two reviewers asked about the computational complexity of the algorithms. We expand on this below but stress that sample complexity, not computational complexity, is the focus of this work. Two main factors impact the computational complexity of GURO. The first is the selection of the next comparison. When sampling according to (5), the bottleneck is the calculation of $\lVert z_{ij} \rVert_{H_{t-1}^{-1}}$ for every possible comparison, which scales according to $O(d^2n^2)$ where $n=$ number of items, and $d=$ dimension of the features. As covered in Appendix F, to speed up computations for IMDB-WIKI-SbS, we only evaluate a subsample of all possible comparisons every iteration. This resulted in no noticeable change in performance and is similar to the approach taken in [1]. When only looking at a sample of $m \ll n^2$ combinations this complexity is reduced to $O(d^2m)$. Another interesting direction could be to first evaluate the model uncertainty of individual item scores $\lVert x_{i} \rVert_{H_{t-1}^{-1}}$, and then only evaluate (5) for the $k$ items with the highest uncertainty, giving a complexity of $O(d^2kn)$. The second factor is the update of model parameters. 
This is done by solving a logistic regression where the computational complexity of each iteration is $O(ds)$ where $s =$ the number of samples collected. Each iteration also includes updating the inverse Hessian using the Sherman-Morrison formula with a complexity of $O(d^2)$. An interesting benefit of BayesGURO is that it allows for sequential updates of $\theta$, avoiding having to solve the logistic regression using all previously collected samples (which are instead embedded into the prior). **References** * [1] Canal, et al., (2019). Active embedding search via noisy paired comparisons. International Conference on Machine Learning. Pdf: /pdf/0d18c012f6848c16de430ef521b56d520987a0a2.pdf
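The two computational steps discussed in this rebuttal, scoring candidate comparisons by $\lVert z_{ij} \rVert_{H_{t-1}^{-1}}$ and updating the inverse Hessian with the Sherman-Morrison formula, can be sketched in a few lines of numpy. The function names and the rank-one weight `w` are our illustrative choices, not code from the paper:

```python
import numpy as np

def sherman_morrison(H_inv, z, w):
    """Rank-one update: returns (H + w * z z^T)^{-1} in O(d^2),
    avoiding the O(d^3) cost of re-inverting the Hessian each round."""
    Hz = H_inv @ z
    return H_inv - (w * np.outer(Hz, Hz)) / (1.0 + w * (z @ Hz))

def model_uncertainty(H_inv, Z):
    """||z||_{H^{-1}} for a batch of m candidate feature differences,
    computed in O(d^2 m) via a batched quadratic form."""
    return np.sqrt(np.einsum('md,de,me->m', Z, H_inv, Z))
```

Scoring all $n^2$ candidate pairs means calling `model_uncertainty` on the full set of $z_{ij}$ vectors, which is the $O(d^2 n^2)$ bottleneck mentioned above; restricting the call to a subsample of $m \ll n^2$ candidates gives the reduced $O(d^2 m)$ cost.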
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
No Train, all Gain: Self-Supervised Gradients Improve Deep Frozen Representations
Accept (poster)
Summary: This paper presents an unsupervised method to enhance the representations of pretrained models. The authors observe that gradient features from various self-supervised objectives are helpful to enhance k-nearest neighbor (KNN) retrieval. The proposed method, FUNGI, combines the embeddings of a pretrained model with gradients from different self-supervised objectives, followed by dimensionality reduction with PCA. Experiments on image classification, semantic segmentation and text classification show the effectiveness of the proposed algorithm. Strengths: This paper is well written and easy to follow, with a nice characterization of the background and related works. The algorithm is well motivated. The plots and tables are also well-structured with clear information. The idea to combine the gradient features from multiple objectives is simple and effective. The proposed algorithm is evaluated on different pretrained models and different tasks. Weaknesses: 1. My biggest concern is that the evaluation of the proposed method is limited to nearest neighbor retrieval (though I agree that this task is important). As a method of feature engineering, it'd be good to test the performance of (few-shot) linear probing or K-Means clustering. 2. Lack of analysis on the random projection. For example, [6] uses a random Gaussian matrix to project the gradient features to low dimension and shows that it preserves inner products with high probability. In FUNGI, the projection matrix is sampled in {-1, 1} by Bernoulli random variables. What properties would it have? 3. Some related works are not covered. There have been a number of works using gradients as features for downstream tasks, for example [1-3]. It'd be good to mention them and discuss the intuition why gradients could be used as features for KNN retrieval. The same also holds for the analysis of self-supervised learning objectives, for example [6]. [1] Gradients as Features for Deep Representation Learning, ICLR 2020. 
[2] More than a toy: Random matrix models predict how real-world neural representations generalize. ICML 2022. [3] Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning. ICML 2022. [4] A Kernel-Based View of Language Model Fine-Tuning. ICML 2023. [5] Trak: Attributing model behavior at scale. ICML 2023. [6] Learning by Reconstruction Produces Uninformative Features For Perception. 2024 Technical Quality: 3 Clarity: 4 Questions for Authors: 1. What's the dimension of the gradient feature? What's the time cost of random projection and PCA? 2. What's the advantage of random projection with a matrix sampled from {-1, 1}, compared to a random Gaussian matrix? 3. For text classification, nearest neighbors are used to select examples for in-context learning [1]. Would FUNGI be helpful in this process? [1] What makes good in-context examples for gpt-3? ACL 2022. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have discussed some of the limitations in the paper. For others, please refer to "Weakness" Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Other evaluations (linear probing, k-means, in-context learning) We conducted further evaluations of FUNGI features for logistic regression, clustering and text classification via in-context learning. The results are discussed in the global response. ### What's the dimension of the gradient feature? What's the time cost of random projection and PCA? Assuming a ViT-B/16 backbone, the weight gradients from the last attention output projection have shape $(768, 769)$, including the bias. We project them to the same dimensionality as the embeddings, in this case $768$. The final FUNGI features (before PCA) will then have shape $768 \times (N + 1)$, where $N$ is the number of gradient features used. Performing the random projection on the GPU for a batch of $8$ samples results in an overhead of $0.017$ seconds, or $0.0002$ seconds per sample. As for the PCA, fitting it over the entire training set of EuroSAT ($18,000$ samples) requires $5.18$ seconds, while transforming the training set requires $0.17$ seconds, or $0.00001$ s/sample. For larger datasets an incremental version of PCA can be used. The reported runtimes were averaged over $1000$ runs ($100$ for the PCA fitting). ### What properties does a binary random projection have? What is the advantage? TRAK [98WT-5] uses a random projection sampled from $N(0, 1)$ to reduce the dimensionality of gradients, which, according to the Johnson-Lindenstrauss lemma, preserves inner products with high probability, given a large enough projection subspace $k$. The main result of [C] is demonstrating that this lemma holds also for matrices whose entries are sampled from either $\\{-1, 1\\}$ or $\\{-1, 0, 1\\}$. Empirically we show that this holds true by evaluating the k-NN classification accuracy on Flowers102 of raw and projected KL gradients from a DeIT ViT-S/16 backbone. 
| Features | Dimensionality | Accuracy | | ------------------- | -------------- | -------- | | Raw Gradients | 147456 | 53.53 | | Projected Gradients | 768 | 53.17 | _**Table 28. Binary random projections preserve euclidean distances:** k-NN classification using raw and projected gradients from a DeIT ViT-S/16 backbone. Projected gradients perform marginally worse, showing that relative euclidean distances are mostly preserved._ One advantage of a random matrix with $\\{-1, 1\\}$ entries is that it can be efficiently stored as a boolean matrix. For example, the projection matrices used for ViT-B models have a shape of $(768 \times 769, 768)$, equivalent to ${\sim}1.8$ GB when encoded as `float32` and to $0.45$ GB when encoded as booleans. We also evaluate the difference in performance given by random projection matrices using different initializations, shown in Table 29, which displays the mean per-class accuracy on Flowers102 of FUNGI features extracted from a DeIT ViT 16/B backbone with respect to the projection initialization. The results show that a binary matrix has a slight advantage over Gaussian and sparse matrices, and has a lower standard deviation as more sub-features are used, but the performance gap is negligible. | | Binary $\\{-1, 1\\}$ | Gaussian | Sparse | | ----------------- | -------------------- | ------------------- | ---------------- | | Embeddings | 56.9 | 56.9 | 56.9 | | $\quad$ + KL | 59.7 $\pm$ 0.4 | **60.2 $\pm$ 0.2** | 60.0 $\pm$ 0.1 | | $\quad$ + KL + DINO | **63.5 $\pm$ 0.1** | 63.2 $\pm$ 0.5 | 63.1 $\pm$ 0.2 | | $\quad$ + KL + DINO + SimCLR | **69.2 $\pm$ 0.6** | 69.0 $\pm$ 0.7 | 68.9 $\pm$ 0.9 | _**Table 29. The random projection initialization has little impact on performance:** comparison of the downstream accuracy of FUNGI features built with gradients projected using random matrices with different initializations. 
We report the mean and one standard deviation measured across three runs using the Flowers102 dataset and a DeIT ViT-B/16 backbone._ PS: We assumed that for this question you were referencing [98WT-5] (TRAK) rather than [98WT-6] (Learning by Reconstruction..), as the former has the exact sentence mentioned in the review. Please correct us if that is not the case. ### Missing related works We highly appreciate the suggestion of these works; we discussed them and used some of their results to build an intuition in the global response on why/how gradient features work. We will include them in the updated version of the paper. Building on that discussion, our work is also related to [98WT-2], which used the empirical neural tangent kernel (eNTK), i.e. the per-sample Jacobian, to benchmark an estimator of generalized risk, and also shows that kernel regression on the eNTK can achieve a performance close to fine-tuning in vision. Their work is extended by [98WT-4], which shows that eNTK can be used for prompt-based fine-tuning of masked language models, if the task exhibits kernel behavior. In comparison to their work, our method does not require the downstream task to show kernel behavior, and we expect their method to be more computationally intensive, as they compute the full model Jacobian for each sample. [98WT-3] takes inspiration from NTK and proposes using gradients from task-specific losses as features for representation learning. Given a pre-trained network, they first train a linear classifier in a supervised manner and subsequently learn a linear model using both activation and gradient-based features. FUNGI differs from this work by not requiring any human-labeled samples and not running any training, as it simply uses the gradients as features for k-NN classification. **References** [C] Database-friendly random projections: Johnson-Lindenstrauss with binary coins. Journal of Computer and System Sciences (2003). 
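The distance-preservation property of binary random projections discussed above is easy to check empirically. A minimal numpy sketch follows; the dimensions are arbitrary toy sizes for illustration, not those of the ViT gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n = 2048, 256, 50  # ambient dim, projected dim, number of points
X = rng.normal(size=(n, d))

# binary projection with entries in {-1, 1}, scaled by 1/sqrt(k) so that
# squared euclidean distances are preserved in expectation (Achlioptas, 2003)
R = rng.choice(np.array([-1.0, 1.0]), size=(d, k)) / np.sqrt(k)
Y = X @ R

def pairwise_dists(A):
    diff = A[:, None, :] - A[None, :, :]
    return np.sqrt((diff ** 2).sum(-1))

D_orig = pairwise_dists(X)
D_proj = pairwise_dists(Y)
iu = np.triu_indices(n, k=1)
# worst-case relative distortion across all pairs; small when k is
# large enough relative to log(n), per the Johnson-Lindenstrauss lemma
max_distortion = np.abs(D_proj[iu] / D_orig[iu] - 1.0).max()
```

Because relative distances survive the projection, k-NN retrieval on the projected gradients behaves almost identically to retrieval on the raw gradients, consistent with Table 28.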
--- Rebuttal 2: Title: Acknowledgement Comment: Thank you for the detailed reply. **About evaluation on other tasks**: Evaluations on linear probing, clustering and in-context learning are good. It's also great to see that these experiments are carefully conducted in a correct way, e.g., choosing the L2-regularization strength for the linear probe. From my own experience, the performance of a linear probe can be tricky without a proper regularization strength. **About related work and intuition of FUNGI**: the explanation that FUNGI can be regarded as update-free fine-tuning on other objectives is interesting. **About comparison to random Gaussian matrix**: Yes, I'm referring to TRAK [5] for the projection with a random Gaussian matrix. I agree that a random binary matrix is more memory-efficient. **About choice of gradient**: it seems that you could choose the gradient from [attn_qkv, attn_proj, mlp.fc1, mlp.fc2] as features. From Figure 8 in the paper, gradients in the early layers seem not helpful. In practice, can you explain which part of the gradient to choose in order to get the best performance? Will the combination of gradients from different layers/modules be helpful (many people just use the full gradient of the model, which is very expensive and might not perform very well)? Overall, I'm glad to see the response from the authors. I'd raise my score to 8 and hope the authors will include these changes in the modified version (especially the intuition of using gradient features) and release their code in the future. --- Rebuttal Comment 2.1: Comment: Thank you for your response and for updating the rating. **About the choice of gradients:** correct. For our experiments we use the full gradient with respect to the weights and bias of a single layer, as this is simpler (implementation-wise) and we expect this to be more stable overall, compared to picking a subset of the gradient matrix. 
While picking only a subset of the gradient is an interesting direction, we did not investigate it thoroughly. Nonetheless, we can make the following observations: - As shown in Table 30, removing the bias gradient may lead to even better performance. - As shown in Table 31, for the `qkv` layer we found that using the `v` gradients alone results in better performance compared to the full `qkv` matrix and its other components. Combining gradients from multiple layers is an interesting idea, thank you. We ran a small experiment and from the results displayed in Table 32, it may lead to better performances, but this is not always the case. Both ideas require further investigation and evaluation to make any concrete claims. | | Textures | Flowers102 | | ------------------------- | -------- | ---------- | | Weight Gradients w/o Bias | 59.4 | 61.0 | | Weight Gradients w/ Bias | 59.0 | 60.5 | _**Table 30. Removing the bias gradient improves performance:** comparison of the downstream accuracy of FUNGI features built with weight only gradients or weight and bias gradients on two datasets using an AugReg IN1K ViT-B/16 backbone._ | | Textures | Flowers102 | | --- | -------- | ---------- | | Q | 56.7 | 52.0 | | K | 55.2 | 52.0 | | V | 59.0 | 61.2 | | QKV | 58.0 | 58.3 | _**Table 31. The `v` of the `qkv` gradients has the best performance, and can even improve over `attn.proj` (Table 30):** comparison of the downstream accuracy of FUNGI features built with the full `qkv` gradients or one of its sub-components. The backbone was an AugReg IN1K ViT-B/16._ | | Textures | Flowers102 | | ------------------------ | -------- | ---------- | | attn.proj | 59.0 | 60.5 | | attn.proj + attn.qkv (v) | 60.9 | 59.2 | _**Table 32. Combining gradients may improve performance:** comparison of the downstream accuracy of `attn.proj` FUNGI features augmented (or not) with gradient features from the `v` of the `qkv` gradients. 
The backbone was an AugReg IN1K ViT-B/16._ Finally, we do plan to include the discussions that surfaced in the review in the paper, as they will definitely improve the final manuscript. Indeed, we will release all code as open source, including a plug-and-play package to extract FUNGI features from any transformer backbone, e.g. `features = fungi(model, layer='7.attn.v')`. We're happy to address any other feedback or provide clarifications as needed.
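The memory-efficiency point about binary versus Gaussian projection matrices discussed above can be made concrete. Below is a minimal numpy sketch of a random ±1 (binary) projection of flattened gradient vectors, Johnson-Lindenstrauss style; the function names and dimensions are ours, not taken from the paper or its code.

```python
import numpy as np

def binary_projection(in_dim, out_dim, seed=0):
    """Random {-1, +1} projection matrix stored as int8 signs.
    Scaling is applied at projection time so that squared norms
    are preserved in expectation (Johnson-Lindenstrauss style)."""
    rng = np.random.default_rng(seed)
    return rng.integers(0, 2, size=(in_dim, out_dim), dtype=np.int8) * 2 - 1

def project(grads, signs):
    """Project (n, in_dim) flattened gradients down to (n, out_dim)."""
    out_dim = signs.shape[1]
    return (grads @ signs.astype(np.float32)) / np.sqrt(out_dim)
```

Storing the matrix as `int8` signs costs one byte per entry (a bit-packed variant would cost one bit), versus four or eight bytes per entry for a float Gaussian matrix of the same shape, which is the memory advantage raised in the discussion.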
Summary: The paper proposes a simple method named FUNGI - Features from UNsupervised GradIents - to improve the representations of Vision Transformers (ViTs). Specifically, FUNGI uses gradients from self-supervised objectives to augment the embeddings from pre-trained models. FUNGI involves three straightforward steps - (i) compute gradients at a specific layer for selected self-supervised objectives (ii) project the gradients to a lower dimension (iii) concatenate these projected features with the model’s embeddings for kNN-based classification. Extensive experiments across various datasets, pre-trained models, and tasks demonstrate the effectiveness of the proposed approach. Strengths: - **Presentation**: The paper is well presented and well-structured overall. The use of gradients for feature enhancement in ViTs is well motivated through simple analytical experiments. All the experimental setups have been clearly explained and the results are presented well, covering a wide range of pre-trained models and datasets. - **Simplicity**: The proposed method is simple and easy to understand and implement. Moreover, the authors have provided the PyTorch pseudocode in the supplementary, which makes it easier to replicate the results of the proposed method. - **Results**: The results from Fig. 5, Tables 1, 2, 3, and 5 demonstrate the effectiveness of FUNGI across various datasets, pre-trained models, and tasks (visual and text-based). Given that the improvements come with no training, they are noteworthy. Weaknesses: - **Missing backbones**: While the authors have presented results on a comprehensive list of backbones, a few recent backbones are missing, which are listed below: - Touvron, Hugo, Matthieu Cord, and Hervé Jégou. "Deit iii: Revenge of the vit." ECCV 2022. - Vasu, Pavan Kumar Anasosalu, et al. "Mobileclip: Fast image-text models through multi-modal reinforced training." CVPR 2024.
- **Scalability**: There is no discussion on how FUNGI scales with larger ViT models. All the analyses are performed with ViT-B and ViT-S. The authors should also demonstrate the scalability of FUNGI with larger ViT models across the various pre-training schemes that they have presented results on for ViT-B and ViT-S. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Does FUNGI also work for CNNs and CNN-ViT hybrid architectures? For example, MobileCLIP mentioned above consists of convolutional layers and attention-based layers. Would FUNGI work for CNNs such as CLIP ResNet-50 and CNN-ViT hybrids such as MobileCLIP? 2. Since FUNGI is a generic gradient-based method, does it extend beyond classification tasks? Would FUNGI work for generative models such as LLMs, VLMs such as Llava, and BLIP? The application of FUNGI to any such model would greatly improve the impact of the work. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations of the work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
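The three steps listed in the summary of this review (per-sample gradient at one layer, projection, concatenation with the embedding for kNN) can be sketched end to end. This is a hedged illustration, not the authors' implementation: we use a hand-derived gradient of a KL(uniform || softmax(W z)) objective on a random linear head as a stand-in self-supervised loss, and all names (`kl_gradient_feature`, `fungi_feature`) are ours.

```python
import numpy as np

def kl_gradient_feature(z, W):
    """Gradient of KL(uniform || softmax(W @ z)) w.r.t. W, flattened.
    For logits l = W z and p = softmax(l), the gradient w.r.t. the
    logits is (p - u), so dL/dW = outer(p - u, z)."""
    logits = W @ z
    p = np.exp(logits - logits.max())
    p /= p.sum()
    u = np.full_like(p, 1.0 / p.size)
    return np.outer(p - u, z).ravel()

def fungi_feature(z, W, proj):
    """FUNGI-style recipe: (i) gradient feature, (ii) projection to a
    lower dimension, (iii) concatenation with the normalized embedding."""
    g = kl_gradient_feature(z, W) @ proj
    g /= np.linalg.norm(g) + 1e-12
    z_n = z / (np.linalg.norm(z) + 1e-12)
    return np.concatenate([z_n, g])

# toy usage: 16-dim embedding, 4-way head, project gradients to 8 dims
rng = np.random.default_rng(0)
z = rng.normal(size=16)
W = rng.normal(size=(4, 16))
proj = rng.normal(size=(4 * 16, 8))
feature = fungi_feature(z, W, proj)  # shape: (16 + 8,)
```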
Rebuttal 1: Rebuttal: ### Missing backbones

We extend the evaluation of FUNGI to DeiT-III, MobileCLIP, and CLIP and MoCov2 ResNet-50 backbones. For MobileCLIP, we extract gradient features from the attention output projection of the last token mixer, and for ResNet-50 models we use the last convolutional layer, i.e. `layer4.2.conv3`. For ResNet-50 models we only evaluate KL and DINO gradients, as SimCLR gradients did not seem particularly promising. Although we find that our method works consistently with ViTs, the results on CNNs or ViT-CNN hybrids are less consistent. While our method improves on MoCov2 ResNet-50, it does not improve on CLIP ResNet-50 and MobileCLIP-S1. We would like to stress that these are preliminary results, and different data augmentation strategies or a more thorough evaluation of the possible gradient sources (i.e. layers) may yield better results for CNNs or ViT-CNN hybrids. The results, averaged over 11 datasets, are reported in Table 26 and Table 27 for the full-dataset and few-shot setups respectively.

| | DeiT III ViT-B/16 | CLIP ResNet50 | MoCov2 ResNet50 | Mobile CLIP S1 |
| -------------------- | ----------------- | ------------- | --------------- | -------------- |
| Embeddings | 64.2 | **65.7** | 52.7 | **79.8** |
| + KL | 64.8 ↑0.6 | 63.9 ↓1.8 | 52.6 ↓0.1 | 78.2 ↓1.7 |
| + KL + DINO | 67.3 ↑3.1 | 63.7 ↓2.0 | **53.6 ↑0.9** | 78.1 ↓1.8 |
| + KL + DINO + SimCLR | **68.2 ↑4.0** | -- | -- | -- |

_**Table 26.
FUNGI works primarily for ViTs:** accuracy in full-dataset k-nearest neighbor evaluation averaged over 11 datasets for FUNGI and embeddings._

| | DeiT III ViT-B/16 | CLIP ResNet50 | MoCov2 ResNet50 | Mobile CLIP S1 |
| -------------------- | ----------------- | ------------- | --------------- | -------------- |
| Embeddings | 34.9 | **34.7** | 26.4 | **47.9** |
| + KL | 36.2 ↑1.3 | 33.4 ↓1.3 | 26.8 ↑0.4 | 46.2 ↓1.7 |
| + KL + DINO | 37.2 ↑2.3 | 32.5 ↓2.2 | **27.6 ↑1.2** | 46.6 ↓1.3 |
| + KL + DINO + SimCLR | **39.6 ↑4.7** | -- | -- | -- |

_**Table 27. FUNGI works primarily for ViTs:** accuracy in few-shot k-nearest neighbor evaluation averaged over 11 datasets for FUNGI and embeddings._

### Scalability

In Table 18 of the uploaded PDF we report results for AugReg (labelled as "AR"), CLIP and DINOv2 ViT-L models, and a DeiT ViT-H/14 model, for the full-dataset and few-shot setups. The results are averaged over 11 datasets, except for DeiT ViT-H/14, whose results exclude CIFAR 10 and 100, Food-101 and ImageNet-1K for computational reasons. The results show that FUNGI also works for larger backbones, especially in few-shot setups, where it improves by $+4.4\\%$, $+4.3\\%$ and $+2.1\\%$ for CLIP ViT-L/14, DINOv2 ViT-L/14 and DeiT ViT-H/14 respectively.

### Does FUNGI extend beyond classification tasks? Can it be used for generative models such as LLMs, VLMs such as Llava, and BLIP?

We're thankful for this suggestion, and yes, in the global response we show that FUNGI improves the performance of linear probing and clustering, and that it can be used to retrieve better examples for in-context learning with LLMs. As for integrating FUNGI into the pipeline of generative models such as BLIP or LLaVA, we believe it should be possible, as they both use a frozen vision encoder.
But, taking BLIP as an example, the vision features are used by the text decoder via cross-attention, and thus we expect FUNGI features to be out-of-distribution without at least partial re-training or fine-tuning. We did not have enough time to set up such an experiment during the rebuttal, but we agree that it would be an interesting direction to explore.
Summary: - The draft introduces a feature enhancement technique called FUNGI (Features from Unsupervised GradIents) for vision transformers. - The core idea is to leverage the un/self-supervised loss gradients at an arbitrary hidden layer within a vision backbone (the default option being the attention output projection of the last transformer block) to enhance the feature embeddings. - These gradients, along with the pre-trained embeddings, are shown to achieve higher accuracy in kNN classification across various datasets and transformer backbones. - Comprehensive experimentation evaluated this technique on three tasks (image classification, in-context scene understanding, and text classification). Consistent gains in accuracy are reported across model backbones, tasks/datasets, learning scenarios, etc. Strengths: 1. The core contribution is to show that gradients w.r.t. self-supervised objectives carry information complementary to the pre-trained model embeddings. This information can be combined to achieve better performance. In my opinion, this can be claimed to be a novel representation. It is well known that pretraining on self-supervised objectives results in moderately powerful features. Contrary to this notion, the draft presents a method to enrich the trained embeddings with more information from multiple such self-supervision objectives. The clever thing the draft does is to avoid costly fine-tuning by concatenating the (one-step) gradients of the learned embeddings toward the self-supervision tasks. 2. The authors have introduced a "generic" technique to enhance the performance of the kNN classifier built on top of a transformer backbone. 3. The experimental evaluation of the proposed technique is sound, with several backbones, various unsupervised objectives, different tasks/datasets, ablations, initial verifications (Figures 2 & 3), etc. Weaknesses: 1.
Although complementary information is extracted as gradients, doing so is computationally demanding: it requires one (partial) backpropagation for each self-supervision objective (so the total number of backpropagation operations is linear in the number of objectives), the added linear projection head (h), and finally a dimensionality reduction to match the dimension of the embedding. One may opine that the additional cost is not worth the performance gains (poor returns). Overall, the gains may not be significant enough for the method to be deployed in practice. 2. The draft provides no sound reasoning regarding the performance enhancement resulting from the self-supervised gradients. Section 3 (Figures 2 & 3) empirically demonstrates that these gradients complement the embeddings but doesn't discuss how/why. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. It would be interesting to understand (if not already) the role of the pre-trained embedding. In other words, can a half-trained or untrained (random) embedding also extract discriminative information from the self-supervised objectives similar to the pre-trained one? The authors may clarify this. 2. The authors may clarify the second weakness mentioned in the section above. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors addressed the limitations and potential societal impact of the proposed work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Computational cost

Our method does indeed introduce a computational overhead, but we believe that in a retrieval setting, where the embeddings for the database are computed once and only query samples are encoded on the fly, the performance improvement may very well be worth the added computational cost. In particular, for retrieval on the revisited Paris landmarks dataset [B], our KL loss improves the mean average precision (mAP) of a CLIP ViT-B/16 backbone by $+10.1\\%$ and $+12.4\\%$ on the medium and hard splits respectively, at the cost of a ${\sim}28\\%$ reduction in throughput, including the random projection. The results are also shown in Table 24.

| | Medium | Hard |
| -------- | -------------- | -------------- |
| Features | 64.6 | 40.4 |
| $\quad$ + KL | **74.7 ↑10.1** | **52.8 ↑12.4** |

_**Table 24. FUNGI improves retrieval:** retrieval mAP on the Paris landmarks dataset of embeddings and FUNGI features built with KL gradients using a CLIP ViT-B/16 backbone._

Beyond that, we note that the main scope of this work was to demonstrate that gradient-enhanced features can improve performance across several tasks, modalities, and backbones. We acknowledge that our method, particularly with the DINO and SimCLR losses, introduces a computational overhead due to the requirement of multiple views for each individual sample. However, we believe that it is possible to make the computation of these gradient features significantly more efficient, which we leave for future work.

### Why are gradients and embeddings complementary?

In the global response we build some further intuition on why SSL-gradient features can improve the pretrained embeddings. Moreover, the second row of Figure 2 in the main paper shows that gradients alone can already provide a significant performance improvement, so it is not strictly necessary to combine them with the embeddings.
On the other hand, gradient features may be "brittle", as gradients are estimated using a single data point and depend on the local curvature of the loss landscape. Thus, we hypothesize that combining gradients and embeddings may address this issue and provide more stable yet discriminative features, which also results in better performance overall.

### What is the role of pre-trained embeddings? Can random or half-trained embeddings be used to extract discriminative information?

Thank you for this interesting idea and question, which can bring new insights into the utility of gradient features. Indeed, it is possible to extract discriminative information from both half-trained and random embeddings. For the former, as we do not have access to a half-trained backbone, we consider a Masked Autoencoder ViT-B/16, which is known to have poor performance in retrieval and to require (almost) full fine-tuning for peak performance, while for the latter we consider a randomly initialized ViT-B/16. In Table 25 we show the performance of embeddings and FUNGI features for these backbones, averaged over 11 datasets. The results show that it is indeed possible to extract discriminative information from such embeddings; in particular, we improve over the random embeddings by $6.25\times$ and over the MAE embeddings by $1.89\times$.

| | MAE ViT-B/16 | Random ViT-B/16 |
| ----------------- | -------------- | -------------- |
| Embeddings | 24.0 | 2.5 |
| $\quad$ + KL | 44.4 ↑20.4 | 12.8 ↑10.3 |
| $\quad$ + KL + DINO | **45.4 ↑21.4** | 14.0 ↑11.5 |
| $\quad$ + KL + DINO + SimCLR | 38.8 ↑14.8 | **15.6 ↑13.1** |

_**Table 25. FUNGI significantly improves retrieval for random or half-trained backbones:** average accuracy, over 11 datasets, in full-dataset k-nearest neighbor classification for an MAE and a randomly initialized backbone._

**References**

[B] Revisiting Oxford and Paris: Large-scale image retrieval benchmarking. CVPR 2018.
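The k-nearest-neighbor numbers above follow the usual retrieval-style evaluation protocol: normalize features, retrieve the closest training samples per query, and vote on their labels. A minimal sketch, assuming cosine similarity and majority voting (the function name and details are ours):

```python
import numpy as np

def knn_predict(train_x, train_y, test_x, k=5):
    """Cosine-similarity k-NN: L2-normalize both sets, retrieve the k
    closest training features per query, and majority-vote their labels."""
    tr = train_x / np.linalg.norm(train_x, axis=1, keepdims=True)
    te = test_x / np.linalg.norm(test_x, axis=1, keepdims=True)
    sims = te @ tr.T                        # (n_test, n_train)
    idx = np.argsort(-sims, axis=1)[:, :k]  # top-k neighbor indices
    preds = []
    for row in train_y[idx]:
        vals, counts = np.unique(row, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

Because gradient features and embeddings are concatenated after normalization, the same routine applies unchanged to FUNGI-style features.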
Summary: The paper introduces an approach to augment the feature representations from ViTs by utilizing the gradients from self-supervised losses. The gradients from the attention output projection of the last transformer block are extracted and projected into the output embedding space using random projections and PCA to compute FUNGI. Exhaustive experimental evaluation across vision and NLP datasets brings out the efficacy of the approach. Strengths: + Novel methodology + Simple approach + Exhaustive experimental analysis Weaknesses: - The method to generate features from unsupervised gradients seems very empirical. Any theoretical backing for the good results would further improve the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: - Intuitively, the proposed approach "encodes first-order learning signals that are specific to the downstream data" (line 46). If we use task-specific loss functions instead of self-supervised losses, would it further help with the adaptation of the latent representation towards the downstream task? - How much does the method depend on the PCA that is performed to reduce the dimensionality (line 141)? Would there be any other alternative to try? - FUNGI has been found effective in augmenting features used for kNN-based classification and the scene understanding task. Can FUNGI features be consumed by downstream discriminative classifiers that are trained with softmax? - It would be good to clarify whether the reported numbers are the mean of multiple runs. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Can task-specific loss functions help with adaptation?

In this work, we only use self-supervised losses that do not need human labels. If we understand the reviewer's question correctly, task-specific losses such as cross-entropy or a ranking loss are supervised and would require access to the labels. Moreover, if that were the case, we could easily achieve perfect accuracy, as the label information would "leak" into the gradient features, since they are calculated on a per-sample basis. On the other hand, different losses exhibit different properties, e.g. DINO, being a clustering loss, is particularly helpful in improving k-means clustering performance, as shown in Table 21 of the uploaded PDF. In particular, using FUNGI features from a CLIP ViT-L/14 backbone, we can improve the cluster/class overlap by up to $+15\\%$.

### How much does the method depend on the PCA? Can other alternatives be used?

The primary role of PCA in our method is to produce features that have the same storage and compute requirements (for retrieval) as the model embeddings. In Table 11 of the appendix (reported here as Table 22 for convenience) we compare the performance of PCA-transformed features to raw features, and find that, on average across 11 datasets, PCA provides a consistent minor performance boost (up to $+0.3\\%$), but it is not essential for the good performance of FUNGI features.

| | No PCA | PCA |
| ---------------------------- | -------- | ------------- |
| Embeddings | 65.1 | 65.3 ↑0.2 |
| $\quad$ + KL | 66.0 | 66.3 ↑0.3 |
| $\quad$ + KL + DINO | 67.8 | 68.1 ↑0.3 |
| $\quad$ + KL + DINO + SimCLR | **69.8** | **70.1 ↑0.3** |

_**Table 22.
PCA slightly improves performance, but it's not essential:** performance of FUNGI features and embeddings from a DeiT ViT-B/16 backbone transformed (or not) using PCA, averaged over 11 vision datasets._

As for the strategy used for dimensionality reduction, in Table 23 we evaluate the performance of using no reduction, PCA, and random projections with different initializations, and find PCA to be the best across the board.

| | No Reduction | PCA | Rand Proj (Binary) | Rand Proj (Gaussian) | Rand Proj (Sparse) |
| :------------------- | :-------------- | :------------------ | :----------------- | :------------------- | :----------------- |
| Embeddings | 57.2 $\pm$ 0.0 | **61.6 $\pm$ 0.0** | 55.5 $\pm$ 0.4 | 55.8 $\pm$ 1.0 | 56.0 $\pm$ 0.5 |
| + KL | 59.4 $\pm$ 0.0 | **64.0 $\pm$ 0.0** | 59.0 $\pm$ 0.8 | 58.2 $\pm$ 0.8 | 58.7 $\pm$ 0.3 |
| + KL + DINO | 64.3 $\pm$ 0.0 | **68.3 $\pm$ 0.0** | 62.9 $\pm$ 0.4 | 62.2 $\pm$ 0.3 | 61.9 $\pm$ 0.5 |
| + KL + DINO + SimCLR | 69.2 $\pm$ 0.0 | **70.9 $\pm$ 0.0** | 67.7 $\pm$ 0.5 | 67.0 $\pm$ 0.4 | 66.9 $\pm$ 0.7 |

_**Table 23. PCA is the best dimensionality reduction method:** mean per-class accuracy on Flowers102 of embeddings and FUNGI features from a DeiT ViT-B/16 backbone transformed with different dimensionality reduction methods. We report the mean performance and one standard deviation across three seeds._

### Can FUNGI features be consumed by downstream discriminative classifiers that are trained with softmax?

Yes, we conducted a thorough evaluation of FUNGI features in logistic regression. See the global response for details.

### Are the reported numbers the mean of multiple runs?
The numbers reported in the paper refer to a single run, except for Table 16 in the appendix, where, for 8 datasets and a DeiT ViT-B/16 backbone, we report the mean accuracy over three runs together with one standard deviation. The standard deviation is generally low, falling below $0.3$ in most cases and never exceeding $0.6$, indicating that our method provides consistent results.
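The PCA step discussed above (fit on a reference set of features, then apply the same transform at query time) can be sketched via the SVD of the centered data; a hedged illustration with our own function names, not the authors' code:

```python
import numpy as np

def fit_pca(x, n_components):
    """Fit PCA via SVD of the centered data; returns the training mean
    and the top principal directions (rows of Vt)."""
    mean = x.mean(axis=0)
    _, _, vt = np.linalg.svd(x - mean, full_matrices=False)
    return mean, vt[:n_components]

def apply_pca(x, mean, components):
    """Center with the stored training mean and project."""
    return (x - mean) @ components.T
```

Fitting once on the training features and reusing `mean` and `components` for queries keeps the reduced features consistent between the database and the test samples, which is what a retrieval evaluation requires.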
Rebuttal 1: Rebuttal: We thank the reviewers for the constructive feedback. In this general response, we address comments shared by multiple reviewers regarding the extension of the FUNGI evaluation beyond k-NN classification, and provide further insight into why FUNGI improves performance. Reviewer-specific responses then address individual comments. The uploaded PDF includes tables with additional experimental results requested by the reviewers. We mark references to papers cited by reviewers as `[reviewer-index]` and ours using letters, e.g. `[A]`.

### Beyond k-NN classification evaluation (gxYR, EjYW, 98WT)

**Linear probing.** We evaluate FUNGI features with logistic regression using the cyanure library [A], for all vision (backbone, dataset) pairs. Each classifier is trained for up to 300 epochs (30 in the case of ImageNet) using $L_2$ regularization. For each feature combination we pick the best regularization hyper-parameter among 5 linearly spaced values in the interval $[5 \times 10^{-6}, 5 \times 10^{-4}]$ on the validation set. For datasets without a validation set, we generate one from the training set using an $80/20$ stratified split. The results are shown in Tables 19 and 20 of the uploaded PDF. FUNGI improves linear classification in nearly all cases, especially with supervised backbones. The only exception is with the DINO and DINOv2 ViT-B backbones in the few-shot setting, where FUNGI decreases performance.

**Clustering.** We evaluate FUNGI features from DeiT ViT-B/16, CLIP ViT-L/14 and DINOv2 ViT-L/14 backbones in k-means clustering. For this task, we only use the DINO and KL losses, as the SimCLR gradients did not yield good performance improvements. The clusterings are evaluated by matching clusters and classes via the Hungarian algorithm, and measuring the average (cluster, class) overlap.
The results are reported in Table 21 of the uploaded PDF, and show that FUNGI can significantly improve performance, by up to $+10.8\\%$ in the case of DINOv2 ViT-L/14 and by up to $+15.8\\%$ on CLIP ViT-L/14.

**In-Context Learning.** We thank reviewers `EjYW` and `98WT` for suggesting the application of FUNGI to generative models and LLMs. In particular, reviewer `98WT` suggested that FUNGI could be used to enhance the example selection for language in-context learning (ICL). Thus, we ran a small experiment for intent classification using the banking-77 dataset (where a banking-related intent must be classified within $77$ possible classes) using $\texttt{gpt-4o-mini}$ as the LLM backbone. For each test sample, we retrieve the top-20 most similar training set examples, and append them to the prompt alongside their labels. We then ask the model to predict the label for the test sample. For a fair evaluation, we set the model temperature to $0$. Using the model embeddings to retrieve the ICL examples results in an accuracy of $88.7\\%$, while using FUNGI features (built with KL and SimCLR gradients) results in a $91.2\\%$ accuracy, an improvement of $+2.5\\%$. For reference, we used the following prompt template:

```
You have to annotate banking-related queries with an appropriate intent.
You must choose a single class in the following comma-separated list:
{list of possible labels, comma-separated}
You must reply only with the class name, nothing more.
Here's some examples:
{(text, label) pairs from the training set}
The query sample is:
{query text}
```

labels are given as strings, e.g. `exchange_rate`.

### Theoretical backing and intuition on gradient contribution (gxYR, 8XpZ)

While we cannot provide a sound proof that theoretically explains why our method works, we can build some intuition using the papers that reviewer `98WT` pointed out, for which we are thankful.
Similarly to our approach, [98WT-1] utilizes gradients, obtained by calculating the Jacobian for multiple deep layers, alongside the model activations to train a linear classifier with improved performance. They frame their method as a linear approximation around the model parameters, and argue that it approximates fine-tuning. Similarly, one of the core assumptions that motivates [98WT-3] is that fine-tuning can be approximated as a Taylor expansion of the form:

$$ F(x, w^*) \approx F(x, w) + \sum_{i,j} \frac{\partial F(x, w)}{\partial w_{i,j}} \Delta w_{i,j}. $$

Considering this, our method can be interpreted as a form of update-free fine-tuning that gives rise to different model abilities depending on the loss being used, e.g. SimCLR, an instance discrimination objective, excels in retrieval, resulting in performance improvements of up to $+11\\%$ in k-NN classification (DeiT ViT-B/16, Flowers102), while DINO, a clustering loss, can improve the matching between k-means clusters and the original classes by up to $+15.8\\%$ (CLIP ViT-L/14, Pets). On the other hand, pixel reconstruction losses such as masked autoencoding (MAE), which have little alignment with semantics [98WT-6], produce uninformative gradients. For example, using an MAE loss on an MAE ViT-B/16 backbone results in a ${\sim}30\\%$ top-1 accuracy on CIFAR-10, compared to an accuracy of $51.43\\%$ using activations.

**References**

[A] Cyanure: An open-source toolbox for empirical risk minimization for python, c++, and soon more. arXiv preprint 2019.

Pdf: /pdf/52d6d2aca14286248ac8d511abd84298b9f9fa0c.pdf
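The cluster/class matching used in the clustering evaluation of the global response (match k-means clusters to classes, then measure the average overlap) can be sketched as follows. For a small number of clusters, a brute-force search over assignments gives the same answer as the Hungarian algorithm; the function name and implementation are ours, not the authors'.

```python
import numpy as np
from itertools import permutations

def cluster_accuracy(cluster_ids, class_ids, k):
    """Best-overlap accuracy: build the cluster-class contingency
    table, try every one-to-one cluster->class assignment (brute
    force, fine for small k; the Hungarian algorithm solves the same
    problem in polynomial time), and return the best agreement rate."""
    contingency = np.zeros((k, k), dtype=int)  # [cluster, class] counts
    for c, y in zip(cluster_ids, class_ids):
        contingency[c, y] += 1
    best = max(sum(contingency[i, perm[i]] for i in range(k))
               for perm in permutations(range(k)))
    return best / len(cluster_ids)
```

For the large `k` used on real datasets one would swap the permutation search for a proper linear-assignment solver such as `scipy.optimize.linear_sum_assignment`.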
NeurIPS_2024_submissions_huggingface
2024
Group Robust Preference Optimization in Reward-free RLHF
Accept (poster)
Summary: This paper proposes a novel preference optimization technique, GRPO, which utilizes group distributionally robust optimization. Specifically, the method aims to maximize the worst-case group performance to improve the robustness of LLMs. The paper provides several theoretical aspects of GRPO, including a convergence guarantee and a closed-form solution when considering the log-linear policy class. Empirically, the GRPO objective function is minimized by mirror gradient descent. Experimental results on synthetic and real-world datasets demonstrate that the method works well. Strengths: * This is the first study to adopt the group DRO technique in the RLHF setting. * Several theoretical results are provided to enhance understanding of the proposed method. * The proposed method is general, so it can be applied to various RLHF optimization techniques. Weaknesses: * This paper lacks novelty in its contribution. The group DRO technique is already a popular method used in a variety of applications [1, 2]. While the authors offer some theoretical results and an algorithm for their method, these may also lack novelty. Specifically, Proposition 3.1 on the Nash equilibrium is a well-known result in game theory, and Proposition 3.2 seems straightforward because $\pi^*$ depends only on an input value, not on the group to which the input belongs. Furthermore, there is no significant difference in the algorithm and proof of convergence guarantee compared to existing works [37, 30 in the manuscript], except for slight modifications. * Although this paper is novel in applying the group DRO technique to RLHF preference optimization for the first time, stronger motivations and empirical results are needed to highlight the necessity of the proposed method.
For example, to strengthen this contribution, as mentioned in lines 141-153, it would be beneficial to show that existing Large Language Models (LLMs) suffer from group robustness issues in various applications involving helpful/harmful instances or domain-specific categories when fine-tuned with RLHF optimization techniques. * The interpretation of the results in Section 5.2 is insufficient. In Figure 3, GR-IPO improved the performance of all groups, including Group 1. This result contrasts with findings reported in the Group DRO paper [37], where a trade-off between majority and minority group performance is generally observed, with majority group performance typically decreasing to compensate for minority group performance. Therefore, a more thorough analysis or performance comparison is needed to explain this result. [1] Re-weighting based group fairness regularization via classwise robust optimization, 2023. [2] Distributionally Robust Multilingual Machine Translation, 2021. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the experiments in Section 5.2, what are the zero-shot performances of Gemma-2B for each group? It would be better to also report the degree of bias in the pre-trained model, as this would enable comparing the degrees of bias between the pre-trained and fine-tuned models. 2. Would you provide results when fine-tuning more layers or the entire network using your method? I think this could strengthen the paper's contribution by showing the performance of the method in various settings. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: They addressed the limitations of this paper in the manuscript. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for highlighting the novel application of our technique to the RLHF setting, our insightful theoretical analysis, and the broad applicability of our proposed algorithm. Next, we provide answers to **all** the questions posed by the reviewer.

**Regarding theoretical novelty:** We appreciate the reviewer's evaluation of the theoretical results in our paper, though some key aspects of our contributions may have been overlooked. We agree that group DRO is widely used, as noted in Lines 56-60. However, to the best of our knowledge, we are the first to apply it to LLM alignment. Specifically, we motivate and formulate the first group robust alignment of LLMs and derive a novel group robust MLE objective (Eq. 6). Additionally, we show that a naive application of group robustness to the LLM policy maximization objective does not offer robustness benefits (Lemma B.2 in Appendix B). Regarding our analysis, while Proposition 3.1 is well known in game theory, it guarantees the existence of Nash equilibria for the special case of log-linear policy classes and supports our algorithmic design. Compared to [37,30], we adopt an alternate sampling strategy (see Line-5 of Algorithm-1), which samples groups proportionally to their sizes and assigns uniform weights to all the data points. From a practical perspective, this facilitates stable multi-epoch batch training. However, it might cause samples from smaller groups to be sparsely included in a given batch, thus requiring a compensatory scaling factor of $N/N_g$ in the updates of Lines 6 and 8 in Algorithm-1. As a result, our algorithm requires a different proof technique compared to [37,30] in order to attain the convergence guarantees detailed in Appendix C. Furthermore, our development of the GR-IPO objective for the log-linear policy class yields a closed-form weighted regression update for the policy parameters rather than a gradient update (Section 4.1 and Appendix C).
To the best of our knowledge, this is a novel contribution towards efficient fine-tuning through preferential data. We are happy to further elaborate on these differences in our paper.

**Regarding other potential use cases:** In Lines 141-153, we outline additional potential applications of our approach to engage readers and inspire future research. Our primary focus is on applying our approach to **pluralistic alignment tasks**, such as ensuring equitable alignment across diverse demographic preferences, as explored in related works (e.g., [1,2]). While extending our method to **multi-objective** applications is interesting, it falls outside the intended scope of this work (and requires additional computational resources), which we leave for future research. Regarding helpful and harmful instances, we refer the reviewer to [3], which discusses the trade-offs between optimizing for helpfulness and minimizing harmfulness (Section 5.1) in LLM fine-tuning. Unlike their approach, which uses an additional hyperparameter to control the trade-offs, our approach is “parameter-free”, ensuring equitable performance and proving more effective for multi-objective tasks.

**Regarding improved performance across groups compared to non-robust baselines:** We thank the reviewer for their detailed examination of our results. We agree with the reviewer that GRPO typically improves the worst group's performance at the cost of the average or best group's performance. The improved performance across all groups (see Figure 3, top right, of our paper) was instead solely due to the **erroneous inclusion** of an IPO run with zero training steps when plotting the IPO group losses. The corrected plots are now in the attached PDF (Figure R3), where the Group-5 loss of IPO and the max group loss of IPO (Figure 3, top left) indeed align, unlike in the previous version. The performance figures for GR-IPO were already accurate and unchanged between the old and new plots.
With this revised plot, we note that our method, GR-IPO, improves Group 5's performance, albeit at the expense of Group 1, which is consistent with the reviewer’s observations. **Regarding zero-shot performances of pre-trained Gemma-2B model:** We have included the zero-shot performance of the Gemma-2B model in Figure R4 of the attached PDF as requested. Significant group bias is observed in the pre-trained model's performance. However, we would like to emphasize that it is more relevant to view the performance of the SFT fine-tuned model, as our robust methodology builds upon it. We visualize the SFT performance (on the same data) in Figure 3 (Bottom right in the paper) and note that the SFT model’s degree of group bias is aligned with those of the weights and group losses in Figure 3 (Bottom middle and top right). For further comparisons, we kindly refer to Lines (312-319) and our response to Reviewer kmNK on the alignment performance of the fine-tuned vs. base model. **Regarding fine-tuning more layers:** Thanks for the question. We believe there has been a misunderstanding here. Although our theoretical framework considers only the last layer fine-tuning, in our experiments, we apply the LoRA strategy to fine-tune **all layers of** the model as detailed in the code (also available). To avoid further confusion, we will clearly state this comprehensive fine-tuning approach in our experiments and give details in the Appendix. We have addressed all the questions raised by the reviewer. In light of the reviewer’s opinions about the strengths of our work and our detailed rebuttal to the reviewer’s questions, we kindly ask the reviewer to reconsider their score. [1] Zhao, Siyan et al. "Group preference optimization: Few-shot alignment of large language models." [2] Sorensen, Taylor, et al. "A roadmap to pluralistic alignment." [3] Bai, Yuntao, et al. "Training a helpful and harmless assistant with reinforcement learning from human feedback."
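The sampling-and-scaling scheme described in this rebuttal (groups sampled proportionally to their sizes, with a compensatory $N/N_g$ factor in the weight update of Algorithm-1) can be illustrated with a small sketch. Everything below is a hypothetical toy, not the paper's implementation: the function name, group labels, and constants are invented for illustration.

```python
import math

def update_group_weights(alpha, batch, group_loss, sizes, eta):
    """One exponentiated-gradient ascent step on the group weights.

    `batch` holds (group, sample) pairs drawn proportionally to group
    size, so each sample's loss is rescaled by N / N_g to compensate
    for small groups appearing sparsely in a given batch.
    """
    n_total = sum(sizes.values())
    grad = {g: 0.0 for g in alpha}
    for g, x in batch:
        grad[g] += (n_total / sizes[g]) * group_loss(g, x) / len(batch)
    updated = {g: alpha[g] * math.exp(eta * grad[g]) for g in alpha}
    z = sum(updated.values())
    return {g: w / z for g, w in updated.items()}
```

With two groups of sizes 90 and 10 and a constant per-group loss, one step shifts weight toward the high-loss minority group, which is the behavior the worst-case objective intends.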
Summary: This paper addresses the limitation of traditional reinforcement learning with human feedback (RLHF) approaches that indiscriminately optimize a single preference model, disregarding the unique characteristics and needs of diverse labeler groups. The authors propose a Group Robust Preference Optimization (GRPO) method to align LLMs with individual groups' preferences. Strengths: The paper's motivation is clear. Weaknesses: 1. How does the proposed method handle scenarios where preference data lacks clear group information? In practice, much preference data does not come with explicit group classifications. 2. The paper should address whether the proposed method can learn an invariant preference standard from data that may contain various group information, such as different preference evaluation criteria and preferences annotated at different iterations. 3. The authors need to provide and analyze the alignment results of the model, not just the reward model analysis. Technical Quality: 3 Clarity: 2 Questions for Authors: Please refer to the Weaknesses section for questions regarding the paper. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive outlook on our work and for highlighting the clear motivation of our problem setting to address the limitations in traditional RLHF techniques. **Regarding access to group information:** We focus on settings with known groups, which are common in pluralistic alignment datasets and tasks (see [1,2]). When groups are unknown, one can still apply several approaches that we hypothesize below. To identify unknown groups in data, one can: a) use clustering: Apply algorithms like k-means or EM to detect potential groups; b) apply representation learning: Use techniques like variational autoencoders (VAEs) to discover hidden structures; c) analyze features: Look for features with high variance or correlations to hypothesize groupings; d) consult experts: Leverage domain expertise to identify meaningful groups. These methods can help uncover group structures in the data. We believe that this is not a limitation of our work and can be an interesting direction for future research. **Regarding learning an invariant preference standard from data that may contain various group information:** We kindly ask the reviewer to clarify what an invariant preference standard means in this context so that we can provide an appropriate answer to this question. **Regarding the analysis of alignment results of the model:** We would like to refer the reviewer to Figure-3 (Bottom left), where we indeed analyze the model’s alignment through worst group log-prob. accuracies. The log-prob. accuracy measures the accuracy of the model to assign higher probability to the chosen response compared to the rejected response (see Lines 302-304). We note that the GR-IPO fine-tuned model is better aligned with the worst group compared to the IPO fine-tuned model. Further, we plot and compare the alignment performance of GR-IPO, IPO, and SFT fine-tuned models with the pre-trained model across groups in Figure R2 of the attached PDF. 
Note that the pre-trained model performs significantly worse in comparison to other methods and attains its worst performance in Group 4. Further, GR-IPO outperforms both IPO and SFT in terms of worst group performance (Group 5). This is consistent with our methodology that aims to improve the alignment of the LLM and reduce the bias across groups. We believe that we have answered all the questions raised by the reviewer. In light of the reviewer’s opinions about the strengths of our work and our detailed rebuttal to the reviewer’s questions, we kindly ask the reviewer to reconsider their score. We are happy to answer and clarify any further questions the reviewer raises. [1]. Zhao, Siyan, John Dang, and Aditya Grover. "Group preference optimization: Few-shot alignment of large language models." [2] Sorensen, Taylor, et al. "A roadmap to pluralistic alignment." arXiv preprint arXiv:2402.05070 (2024).
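The log-prob. accuracy metric referenced in this rebuttal can be computed as sketched below. This is a minimal illustration, not the paper's evaluation code; `log_prob` is a hypothetical stand-in for the model's log-probability scoring function.

```python
def log_prob_accuracy(pairs, log_prob):
    """Fraction of (prompt, chosen, rejected) triples for which the
    model assigns a higher log-probability to the chosen response."""
    correct = sum(
        1
        for prompt, chosen, rejected in pairs
        if log_prob(prompt, chosen) > log_prob(prompt, rejected)
    )
    return correct / len(pairs)
```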
Summary: The work tackles the important problem of robust RLHF for diverse groups. Traditionally, RLHF assumes that a single model can fit the diverse feedback from multiple groups of users. In this paper, the authors introduce a method to learn a robust policy that maximizes for worst-case group performance. To achieve this, the method adaptively weights the loss for each group based on the size and cumulative training loss incurred by the feedback samples for that group. As a result, LLMs trained on diverse group data demonstrate reduced loss imbalance and improved accuracies across all the groups. The authors also present a convergence analysis of the proposed method assuming a log-linear policy class. Strengths: 1. The paper tackles a critical problem of robust RLHF for diverse groups. The intuition of the overall method is well understood, the framework and the parameters are clearly mentioned, and the results are shown over the standard datasets and compared to multiple baselines. 2. The authors present a thorough literature review and background. Past work has mainly focused on making RLHF and LLMs robust to noisy or out-of-distribution data. Meanwhile, this work focuses on a group robust formulation of training LLMs using state-of-the-art methods (mainly DPO). 3. The method introduced, “GRPO”, is useful in scenarios beyond diverse groups. As the authors mention, it is a general formulation that can enforce robustness to diverse tasks, domains, or objectives occurring in the feedback dataset. 4. To achieve robustness, GRPO presents a robust optimization approach to minimize worst-case loss amongst the diverse groups. Further, the paper introduces a less aggressive objective by trading off worst- and average-case performance. I would be curious to see an ablation study showing the effectiveness of this tradeoff. 5. The paper supplements the approach with some strong theoretical proofs on the convergence properties of the method under a log-linear policy. 
The authors also present a closed-form solution for the RLHF update step, replacing DPO with IPO. 6. The results show that GRPO improved performance across all groups, and the weight update behaves as expected by assigning higher weights to groups with higher cumulative loss, i.e., the gradients from the worst-performing group are scaled the most. Weaknesses: 1. This paper proposes a method for group robust optimization for LLMs. However, the metrics evaluated are only max validation loss and reward error over the groups. GRPO uses a reward-free approach to update the LLM, but the evaluations are restricted to the performance of the reward model over the feedback dataset. It would be nice to see the performance of the finetuned vs base model (such as win rate) in generating responses that align with the individual groups. 2. The authors provide a detailed training setup, but I would suggest that they also include information about the evaluation. 3. GRPO performs well over all the groups; however, the performance of the importance sampling baseline is very close. It would be helpful if the authors could provide additional ablations and discussions to show the effectiveness of optimizing against the worst-case loss over the groups vs using only the IS approach. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In the current experiments, is only the final layer of the LLM trained? I would be curious to know the result if the method used full finetuning of the model. 2. As GRPO assumes that each prompt has access to group information through a prompt, how much does the prompt affect policy? If the prompts are good enough, would it just create non-overlapping distributions for all groups? As one of the problems in non-robust baselines is that they converge to unwanted biases or majority groups for shared prompts, if the group identification prompt is finetuned will it alleviate this issue altogether? 3. 
Does GRPO finetune only the final layer of the Gemma-2B model in the results presented in the paper? 4. Does GRPO ensure that there is a non-decreasing change in performance across all groups as compared to the non-robust baselines? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The paper introduces GRPO, an optimization method to weight each group in the RLHF update step proportional to the size and loss of the group. This ensures a balanced performance of the model across all the groups. In the results, we see an improved performance over all the individual groups, which intuitively violates the no-free-lunch theorem. I would be curious to see an analysis of the method that gives insights as to what allows the model or the objective to achieve consistently higher performance. 2. Here, the method assumes access to the groups in the dataset. However, in practical settings the group information is unavailable and the model has to cluster or implicitly model the group information from the ungrouped dataset. 3. The evaluation is limited only to the accuracy of the reward model over the preference dataset. So, currently, it provides weaker evidence of the translation of this robustness during the generation phase. It would be nice if the authors could include experiments showing if GRPO enables LLMs to robustly generate better-aligned responses to prompts from all the groups. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive opinion about our work recognizing the importance of our problem setting, broad applicability of our GRPO approach, and strong theoretical analysis of our proposed algorithm’s convergence properties. **Regarding ablation for trade-off parameter between worst-case and average performance:** Due to time constraints, we provide the results of our ablation study for the synthetic experiments in Figure R1 of the attached PDF. Note that as observed in Figure R1 of the attached PDF, the max validation loss decreases while moving from $\chi=0$ to $\chi=1$, where $\chi=0$ corresponds to importance sampling with group weights $\mu_1,\cdots,\mu_g$ mapping to importance sampling weights (see Eq. [30] in Appendix B.4) and $\chi=1$ corresponds to GR-IPO. Further, we plot the average validation loss which increases while moving from $\chi=0$ to $\chi=1$, demonstrating the trade-off between average and worst-case performance. Note that, GR-IPO aptly increases the average loss (as expected) in order to reduce the worst group loss. **Regarding details about evaluation:** Our primary evaluation metrics are the worst group loss and accuracy. The loss refers to the IPO loss for each group and the accuracy refers to the percentage of winning response and losing response pairs correctly ordered by the learned preference function [Eq. 35 in Appendix]. We have defined this in the main text in Lines 302-304 and also in Appendix D.4 (Lines 683-684) along with details about data splits for training, test, and validation (Lines 672-673). We will revise and emphasize the definitions in a more visible manner. **Regarding alignment performance of finetuned vs base model and alternate evaluation metrics:** We would like to refer the reviewer to Figure-3 (Bottom left), where we indeed measure the model’s alignment through worst group log-prob. accuracies. The log-prob. 
accuracy measures the accuracy of the model to assign higher probability to the chosen response compared to the rejected response (see Lines 302-304). We note that the GR-IPO fine-tuned model is better aligned with the worst group compared to the IPO fine-tuned model. Moreover, in our experiments, the responses/choices are included in the prompt, and our goal is to measure whether the chosen choice (e.g., A) is preferred over the rejected choice (e.g., C). Hence, we focus on the log-prob. accuracy metric. Further, as per the reviewer’s request, we plot and compare the alignment performance of GR-IPO, IPO, and SFT fine-tuned models with the pre-trained model across groups in Figure-R2 of the attached PDF. Note that the pre-trained model performs significantly worse in comparison to other methods and attains its worst performance in Group 4. GR-IPO outperforms both IPO and SFT in terms of worst group performance (Group 5). This is consistent with our methodology that aims to improve the alignment of the LLM and reduce the bias across groups w.r.t. the SFT fine-tuned model rather than the pre-trained model. **Regarding proximity between the performances of importance sampling and GRPO:** We kindly request the reviewer to clarify the particular figure to which they are referring. We note that in the two scenarios corresponding to Figure 2 and Figure 5 (Appendix D.2), the groups have different responses’ distributions and there is a clear and significant gap between our proposed method and the importance sampling approach. However, we agree that in Figure-4 of Appendix D.2, the gap between GR-IPO/GR-DPO and the corresponding importance sampling methods is indeed small. This is because Figure 4 corresponds to the scenario where both groups have the same responses’ distribution but are imbalanced in size. 
In this scenario, such a small performance gap is expected, considering that the difference between groups arises solely from data imbalance, which is handled by importance sampling. **Regarding fine-tuning more layers:** We kindly refer the reviewer to our response to reviewer SYLq regarding the same question. **Regarding prompt-tuning techniques to alleviate group biases:** We agree that the prompt does affect the policy response and policy optimization, as studied in various previous works, including [2]. In our experiments, the prompts are appended with group information (detailed in Lines 669-670 of Appendix D.4), creating non-overlapping distributions for all groups. However, we disagree with the reviewer that tuning prompts alone will alleviate the issue of biased performance across groups. Even with tuned group identification prompts, all groups will still use the same prompt template with only the group information varying. Hence, there is no guarantee that the IPO-based fine-tuning will lead to reduced bias across groups, as the losses might still be distributed unevenly as we observed in the performances of both SFT and IPO fine-tuned models (see Figure-3). Therefore, a group robust fine-tuning strategy like GRPO is still necessary to reduce bias across groups. **Regarding improved performance across groups compared to non-robust baselines:** We kindly refer the reviewer to our response to reviewer SYLq regarding the same question. **Regarding access to group information:** We kindly refer the reviewer to our response to reviewer qNjA regarding the same question. We believe that this is not a limitation of our work and can be an interesting direction for future research. [1] Lester, Brian, Rami Al-Rfou, and Noah Constant. "The power of scale for parameter-efficient prompt tuning." arXiv preprint arXiv:2104.08691 (2021). [2] Hu, Edward J., et al. "Lora: Low-rank adaptation of large language models." arXiv preprint arXiv:2106.09685 (2021). 
--- Rebuttal Comment 1.1: Comment: I thank the reviewers for a detailed response to my concerns. I have read the rebuttals, and it adequately addresses all my questions. I am increasing the score to accept. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for acknowledging our detailed rebuttal and raising their score.
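The $\chi$ ablation discussed in this rebuttal thread ($\chi=0$ recovering importance sampling, $\chi=1$ recovering GR-IPO) can be pictured as blending two sets of group weights. The linear blend below is an illustrative assumption, not necessarily the paper's exact parameterization of the trade-off.

```python
def blended_group_weights(importance_w, adversarial_w, chi):
    """Interpolate between importance-sampling weights (chi = 0) and
    adversarial worst-case weights (chi = 1); intermediate values of
    chi trade average performance against worst-group performance."""
    if not 0.0 <= chi <= 1.0:
        raise ValueError("chi must lie in [0, 1]")
    mixed = {g: (1.0 - chi) * importance_w[g] + chi * adversarial_w[g]
             for g in importance_w}
    z = sum(mixed.values())
    return {g: w / z for g, w in mixed.items()}
```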
Summary: This paper introduces GRPO, a method to optimize policy preferences across different groups in a robust way. GRPO looks at the worst group alignment loss by taking the maximum loss across all groups, ensuring the policy performs well even when there are group-specific differences or overlaps in prompts. Strengths: 1. The paper is well written and easy to follow. 2. The paper provides a thorough theoretical analysis of GRPO. 3. The proposed framework accommodates both distinct and overlapping group scenarios. 4. GRPO can trade off worst-case for average performance with a hyperparameter. Weaknesses: 1. GRPO's experiment on real-world datasets is limited, on only one group (5 countries) of the GlobalQA dataset. 2. How does the training time compare to other baselines? The GRPO framework's training process involves a min-max optimization, which can be potentially computationally intensive. Technical Quality: 3 Clarity: 3 Questions for Authors: please see weakness. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. This method requires knowing the number and nature of groups in advance to perform optimization. 2. When a new group is introduced, the entire optimization has to be run again, which is expensive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive opinion about our work, recognizing our thorough theoretical analysis and the broad applicability of our GRPO approach. **Regarding the groups in GlobalQA dataset experiment:** Our proposed GRPO method can be applied to any set of finite groups. To demonstrate the effectiveness of our method, we focus on five groups that are diverse in terms of both dataset size and ethnicity (see Lines 295-296). **Regarding training time comparison:** The training times for both IPO and GR-IPO are *almost the same*, approximately 4-5 hours for the GlobalOpinionQA data run on the pre-trained Gemma-2B model. We have detailed the time and the configuration of the GPU processors used for our experiments in Appendix D.4 (Lines 685-688). Explanation: We do not need to solve the inner maximization in our minimax problem (see Eq. [6]) at each iteration. Instead, we update the weights over the groups based on the loss of the current sample as per Line 6 of our algorithm. Thanks to Danskin’s theorem, computing the gradient over the policy parameter $\theta$ only requires computing the partial gradient w.r.t. $\theta$ for a fixed $\alpha$, which approximates the maximizer $\alpha^*(\theta)$. As $\alpha$ is also updated iteratively, the computation time is comparable to the amount of computation time in the standard case. Thus, it does not lead to any significant computational overhead. **Regarding the requirement of knowing the number of groups in advance:** We focus on settings with known groups, which are common in pluralistic alignment datasets and tasks (see [1,2]). When groups are unknown, one can still apply several approaches that we hypothesize below. 
To identify unknown groups in data, one can: a) use clustering: Apply algorithms like k-means or EM to detect potential groups; b) apply representation learning: Use techniques like variational autoencoders (VAEs) to discover hidden structures; c) analyze features: Look for features with high variance or correlations to hypothesize groupings; d) consult experts: Leverage domain expertise to identify meaningful groups. These methods can help uncover group structures in the data. **Regarding the requirement of re-training when a new group is introduced:** We agree with the reviewer that introducing a new group requires additional training. However, note that one does not need to restart from scratch. The current policy parameter serves as a warm-start initialization point. Moreover, such additional training is necessary to ensure equitable alignment for both new and existing groups. We do not consider this an explicit limitation of our proposed algorithm, but rather an interesting future extension of our approach to continual learning settings. [1] Zhao, Siyan, John Dang, and Aditya Grover. "Group preference optimization: Few-shot alignment of large language models." [2] Sorensen, Taylor, et al. "A roadmap to pluralistic alignment." arXiv preprint arXiv:2402.05070 (2024).
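The explanation in this rebuttal, where the inner maximization is never solved to completion and $\alpha$ is instead nudged alongside $\theta$, can be sketched as an alternating update loop. The toy below uses two quadratic group losses as hypothetical stand-ins; it is a schematic of the idea, not the paper's training code.

```python
import math

def alternating_minimax(group_losses, group_grads, theta0, steps,
                        eta_theta, eta_alpha):
    """Schematic GRPO-style loop: one exponentiated-gradient ascent
    step on the group weights alpha, then one descent step on theta
    for the alpha-weighted objective (the Danskin-style shortcut)."""
    n = len(group_losses)
    alpha = [1.0 / n] * n
    theta = theta0
    for _ in range(steps):
        losses = [loss(theta) for loss in group_losses]
        # ascent on alpha, then renormalize onto the simplex
        alpha = [a * math.exp(eta_alpha * l) for a, l in zip(alpha, losses)]
        z = sum(alpha)
        alpha = [a / z for a in alpha]
        # one descent step on the alpha-weighted objective
        grad = sum(a * g(theta) for a, g in zip(alpha, group_grads))
        theta -= eta_theta * grad
    return theta, alpha
```

For the symmetric losses $(\theta-1)^2$ and $(\theta+1)^2$, the worst-case optimum is $\theta=0$; the iterates tend toward it while the weights rebalance toward the currently worse-off group.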
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and for recognizing the strengths of our work. We appreciate the constructive comments raised by the reviewers and believe we have addressed all of them in detail, further strengthening the validity of our work. In summary, the reviewers recognize the following **strengths** of our work:\ (i) The criticality of our problem setting in the current LLM alignment paradigm\ (ii) The broad applicability of our approach in the real world\ (iii) Strong theoretical backing of our methodology and algorithm\ (iv) Consistent experimental setup showcasing the efficacy of our approach Further, the following shared comments were mentioned by reviewers: **(i) Regarding the requirement of knowing the number of groups in advance:** \ We agree with the reviewers that our method requires knowing the groups in advance and note that we focus on settings with known groups, which are common in pluralistic alignment datasets and tasks (see [1,2]). Hence, we do not see this as an explicit limitation of our approach. We also provide details on other approaches that can be employed to learn the groups, when they are unknown, in our responses to reviewers qNjA and 7Bpj. **(ii) Regarding alignment performance of finetuned vs base model:** \ As per the reviewers’ requests, we plot and compare the alignment performance of GR-IPO, IPO, and SFT fine-tuned models with the pre-trained model across groups in Figure R2 of the attached PDF. Note that the pre-trained model performs significantly worse in comparison to other methods and our approach GR-IPO outperforms both IPO and SFT in terms of worst group performance. We also provide further details about the plot in our responses to reviewers kmNK and 7Bpj. 
**(iii) Regarding fine-tuning more layers:** \ Although our theoretical framework considers only last-layer fine-tuning, in our experiments, we apply the LoRA strategy to fine-tune all layers of the model as detailed in the code (also available). We will clearly state this comprehensive fine-tuning approach in our experiments and give details in the Appendix. Further, as per the requests of reviewers,\ (i) We have included the zero-shot performance of the Gemma-2B model in Figure R4 of the attached PDF, and\ (ii) We provide the results of our ablation study for the synthetic experiments in Figure R1 of the attached PDF. We have also responded to each reviewer’s questions and comments individually. We believe we have addressed all of them thoroughly and are happy to answer any further questions that are raised by the reviewers. [1] Zhao, Siyan, John Dang, and Aditya Grover. "Group preference optimization: Few-shot alignment of large language models." \ [2] Sorensen, Taylor, et al. "A roadmap to pluralistic alignment." arXiv preprint arXiv:2402.05070 (2024). Pdf: /pdf/305bee2c860845de3bc7beda1b57c9cf2fb59253.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper addresses the problem of improving fairness in preference optimization. It proposes a new loss function and algorithm. Experiments conducted show that the proposed algorithm indeed achieves better fairness. Additionally, the paper provides a theoretical analysis indicating the convergence of the optimization. Strengths: 1. The idea of improving the fairness is good. This is a problem that deserves more attention. 2. The derivation of the loss and the algorithm, though simple, is clear. 3. The results provided for the algorithm appear to be sound. Weaknesses: 1. The author mentioned that the idea can also be applied to multi-objective preference optimization. In my opinion, a comparison with works in multi-objective preference optimization [1] and group optimization [2] is needed. 2. It is unclear whether this algorithm can be applied to scenarios beyond multiple choices. 3. It is unclear how Proposition 4.1 relates to common metrics like regret. [1]. Zhao, Siyan, John Dang, and Aditya Grover. "Group preference optimization: Few-shot alignment of large language models." [2]. Wang, Haoxiang, et al. "Arithmetic control of llms for diverse user preferences: Directional preference alignment with multi-objective rewards." Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The paper mentioned that the algorithms can also be used for multi-objective preference optimization. Were any related experiments conducted? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See the 'Weakness' part Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive opinion about our work, recognizing the importance of the problem and the clear and sound exposition of our algorithm and theoretical results. **Regarding comparison with works in multi-objective preference optimization [1] and group optimization [2] and related experiments:** The primary focus of this paper is to design a robust fine-tuning approach to align a language model with diverse group preferences and validate it both theoretically and numerically. Indeed, our method applies to robust multi-objective preference optimization. It is definitely interesting to conduct experiments for the robust multi-objective setting. However, these are beyond the scope of this project and will be considered for future work. Compared to [1], the distinctive feature of our method is that we are not modeling multi-reward objectives but consider a reward-free setting. Specifically, we consider the robust alignment problem optimizing for the worst group performance. Further, they align with user preferences assuming that each user/group has varied importance over the distinct metrics in their multi-objective reward model. The output policy is trained to output a response based on both the prompt and the importance/weights over the individual metrics. In contrast, our methodology directly models each group’s preferences through a group-dependent latent reward model where the group dependency is injected through the prompt. Compared to [2], as explained in Lines 68-71 and also in the appendix (Lines 520-523), we note that they consider alignment to multiple groups’ preferences through an in-context learning approach that is different from our LLM fine-tuning methodology. In particular, they consider a distributional objective that is different from the robust one we consider. 
Hence, we believe that [1,2] are not directly comparable to ours and we will include this explanation and comparison regarding [1] in our related work section. **Regarding application of our algorithm to scenarios beyond multiple choices:** Yes, it can be used for scenarios beyond multiple choices. The reason why we use the global opinion data with a multiple choice structure in our experiments is to clearly demonstrate the efficacy of our approach in aligning LLMs to diverse group preferences and such datasets are typically used in similar pluralistic alignment studies (e.g., [2,3]). **Regarding relation of Proposition 4.1 with regret:** Proposition 4.1 is indeed related to regret. It bounds the expected difference in loss between the average iterate policy and the optimal policy after T iterations and demonstrates a sublinear dependency on T. This allows us to provide convergence guarantees for our algorithm in terms of the average iterate. Such a regret formulation is common in minimax problems as discussed in Nemirovski et al. [30]. [1] Wang, Haoxiang, et al. "Arithmetic control of llms for diverse user preferences: Directional preference alignment with multi-objective rewards." [2] Zhao, Siyan, John Dang, and Aditya Grover. "Group preference optimization: Few-shot alignment of large language models." [3] Sorensen, Taylor, et al. "A roadmap to pluralistic alignment." arXiv preprint arXiv:2402.05070 (2024).
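To make the relation to regret concrete, the guarantee described in this rebuttal can be written schematically. The $O(\sqrt{T})$ rate below is an assumption for illustration, being the rate typical of stochastic minimax analyses in the style of Nemirovski et al. [30]; the exact statement, constants, and conditions (including the convexity needed for the average-iterate step) are those of Proposition 4.1 in the paper.

```latex
% Cumulative regret of the iterates (theta_t) over T rounds:
R_T \;=\; \sum_{t=1}^{T}\Big(\max_{\alpha} L(\theta_t,\alpha)
      \;-\; \min_{\theta}\max_{\alpha} L(\theta,\alpha)\Big)
  \;\le\; O\big(\sqrt{T}\big),
% which, for the average iterate, yields the sublinear gap
\mathbb{E}\Big[\max_{\alpha} L(\bar{\theta}_T,\alpha)\Big]
  \;-\; \min_{\theta}\max_{\alpha} L(\theta,\alpha)
  \;\le\; \frac{R_T}{T} \;=\; O\big(T^{-1/2}\big),
\qquad \bar{\theta}_T \;=\; \frac{1}{T}\sum_{t=1}^{T}\theta_t .
```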
Recursive PAC-Bayes: A Frequentist Approach to Sequential Prior Updates with No Information Loss
Accept (spotlight)
Summary: The paper investigates a type of PAC-Bayes bounds that makes use of sequential prior updates, without losing the appearance of the cardinality of the whole training set. Strengths: * Sequential updates feel natural in that context, not only statistically as demonstrated here, but also computationally. Indeed, having sequential Monte Carlo methods in mind, it might be easier to approximate a complex posterior by slowly modifying the prior. * The first three sections of the paper are very nicely written, and agreeable to read for a non-expert. * While the proofs are straightforward consequences of earlier results, the key theorem has practical importance and, to my limited knowledge and following the authors' claim, seems to have been missed by earlier investigations. * Overall, I enjoyed reading the paper, and feel like I learned something. Weaknesses: ## General comments * The crux of the paper, Section 4, is too dense and hard to read, to the point of hiding some of the degrees of freedom of the paper, and making it tedious to follow the paper line by line and check the claims. This can easily be solved; see major comments. * After a very nice historical introduction, I was expecting a down-to-earth comparison to recent work on PAC-Bayes and sequential posterior updates, as listed in p2 L60-68. Maybe my lack of expertise made me miss a point, but I think these approaches are introduced and discarded in a single high-level paragraph, which feels unfair compared to the (nice) long pedagogic account of Section 2. ## Major 1. p2 L60 to L68: this paragraph cites several competing martingale-based approaches to PAC-Bayes inequalities with sequential posterior updates, which are discarded a bit too quickly for me to understand why. Could you elaborate and give some details on the claims of e.g. Section 6.5 of [Chugg et al., 2023] and [Haddouche and Guedj, 2023]? 
For instance, you could give an informal version of their main claim, and point to the limitations you address. 2. Relatedly, why are these approaches not featured in the experimental section? Maybe the answer to bullet 1. will make it clear. 3. I would avoid bullet lists on p4 and p7, and put the proofs of Theorems 3 and 4 in the appendix; all of this would create space for Section 4 to breathe. I was enjoying reading the paper until Section 4, and then bumped into the daunting density of Section 4 on p6, due in part to too many inline formulas and overly compact definitions. The risk is to lose many readers here, while this is the crux of the paper. With the space gained, you could e.g. use centered displays for the most important definitions. 4. With the space created, still in Section 4, I would take time to introduce objects one by one. I believe $(\gamma_t)$, for instance, is used before Theorem 5 without definition. Implicitly, we understand in Theorem 5 that they can be predictable functions. We should thus, for instance, have a definition of $\gamma_t$ as a sequence of real-valued functions adapted to a filtration. Then we could introduce, for $\gamma\in\mathbb{R}$, the function $f_{\gamma}: \mathcal{H} \times \mathcal{X} \times \mathcal{Y} \times \mathcal{H} \rightarrow \mathbb{R}$ by $f_{\gamma}(h, x, y, h')=...$, instead of the more compact definition of $f_{\gamma_t}(h, (X,Y,h'))$, which saves space but intertwines the definitions of several objects, functions and random variables. Similarly, I would start by defining a filtration to be able to state which function is measurable w.r.t. to what $\sigma$-algebra. 5. Relatedly, L232 I am not sure I understand over what distribution the expectation is taken. Slower definitions would likely make it clearer. 6. Section 4 feels like it could be rewritten in terms of (super)martingales, although it's a matter of taste, I guess, and I think I saw a comment in the paper that you wanted to avoid doing so. 
However, not explicitly using martingales makes it harder to connect to previous work. For instance, your Theorem 5 bears resemblance to a maximal inequality for martingales such as Ville's, as in [Chugg et al., 2023]; could you comment on this? 7. Can you confirm that I understand Section 5.1 correctly? Basically, you describe how you approximate the sequential optimization problems in (4), for $t=1, ..., T$. This does create a sequence $(\pi^*_t)$, to which you can apply Theorem 5. Actually, Theorem 5 does not care whether $(\pi^*_t)$ optimizes (4), so it is OK to only approximate the optimal sequence in (4). 8. Section 5.1.3: how do you draw $h_{t,1}\sim\pi_t^*$? Is $\pi_t^*$ easy to draw from by construction? 9. Section 5: as a side remark, I am wondering whether sequential Monte Carlo techniques could help sequentially approximating $\pi^*_t$; see e.g. [[Chopin and Papaspiliopoulos, An Introduction to Sequential Monte Carlo, Springer, 2020]](https://link.springer.com/book/10.1007/978-3-030-47845-2). 10. p8 L320 what is meant by "Gaussian distributions modeled by probabilistic neural networks"? 11. Why do you not split the data more than in 6 shards? What happens if we go "online" and make $n$ shards of $1$ data point? What differs then from the "online PAC-Bayes" approach of [Haddouche and Guedj, 2023]? 12. Is there any computational gain in splitting the dataset and sequentially optimizing $\pi_t$ instead of doing plain batch PAC-Bayes, on top of the statistical gain in the tighter bound? ## Minor * p1 L27 The Bayesian posterior Technical Quality: 3 Clarity: 3 Questions for Authors: Any of my major comments above, with a priority to items 1 and 11, and commitment to follow the suggestions of clarification in Section 4. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This is fundamental work; no potential negative or environmental societal impact other than making the pdf available! 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: General comments: - We agree that Section 4 is too dense. We will take advantage of the tenth page offered for the final version to sparsify it. - Concerning works on sequential posterior updates: we are sorry for brevity, let us elaborate. In works on sequential posterior updates the prior remains fixed, and the posterior is updated as the data are processed point-by-point. Our contribution is the sequential update of the prior. It can be directly combined with sequential posterior updates, because the excess loss (the first term in Eq. (2)) can be bounded sequentially by processing the corresponding data chunk point-by-point, exactly as it is done in works on sequential posterior updates. We will add this clarification to the paper. Major: 1. Sequential posterior updates of Chugg et al. (2023) can be combined with our approach, as described above. Concerning their sequential prior updates in Section 6.5, we note that the denominator of the bound reduces to the sample size since the last prior update, meaning that there is no preservation of confidence information, and it is not different from other data-dependent priors. (This may not be clear from the way they have presented the bound, but if you check the proof of the bound, this limitation becomes evident. Specifically, the proof of Thm 4, which is used to prove Thm 40, only allows them to use samples since the last prior update to be able to swap the expectations in the second line on Page 11 in the JMLR version of the paper.) On the other hand, as explained in Lines 64-66, Haddouche and Guedj (2023) provide a generalization bound that applies to an aggregation of posteriors, and their denominator of the generalization bound depends on the number of aggregated posteriors, which can be as large as the number of data points. This is a very different object from a standard posterior studied in our work. 
In particular, the generalization power of their approach comes primarily from aggregation, and their method requires construction of a large number of posteriors to work. We do not see any close connection between their work and ours. The need to construct and maintain as many posteriors as the number of data points is a limitation in terms of computation and storage. As far as we can see, the authors have not conducted empirical evaluation of their method, presumably because of that. 2. Sequential construction of posteriors for every data point is computationally expensive, and experimental evaluations of this approach are highly limited. It would not have been possible to conduct such experiments given the size of the datasets we have worked with and the computational resources available to us. 3. We fully agree that Section 4 is too dense. We will use the tenth page given for the final version to implement your suggestions. (We still think that there is value in keeping the proofs in the body, luckily we will not need to compromise on that.) 4. Thanks a lot for your valuable suggestions! We will definitely use them to improve the writing. 5. Oh, yes. In Line 232 $S$ is sampled according to $\mathcal D$ and $\hat \pi_{t-1}$ is a sequence of prediction rules, one prediction rule per every sample in $S$, sampled according to $\pi_{t-1}$. The expectation is with respect to the two distributions. We will find a better way to present this. 6. As already mentioned above, bounding of the excess loss (the first term in Eq. (2)) can be done sequentially using martingale techniques, as in the work of Chugg et al. We will add a comment about this, but we prefer to avoid spelling out the technical details. We believe that for those who are familiar with these techniques, filling out the technical details would be trivial, but for those who are not, it can make the reading unnecessarily complicated and obstruct the main message of our work. 7. Yes, this is correct. 
And, yes, Thm 5 applies to any sequence $\pi_t^*$, so it is not a problem that we only get an approximation of the optimum. 8. It depends on the model to which Recursive PAC-Bayes is applied. For our model we borrowed the sampling procedure from Pérez-Ortiz et al. (2021). The details are described in Appendix B.2 and B.3. Since in our case $\pi_t^*$ is a probabilistic neural network represented by a factorized Gaussian distribution, it is easy to sample from. 9. That’s a good question. Again, it depends on the models to which Recursive PAC-Bayes is applied. 10. To clarify the confusion, replace the sentence in L320 with: “Similar to them we used probabilistic neural networks with weights represented by Gaussian distributions.” 11. There are two reasons to limit $T$. The first is that the added value of additional splits saturates (the improvement in going from $T=4$ to $T=6$ is already relatively small), so at some point the cost of the union bound on the number of splits (the $\ln T$ factor in the bound) will dominate the benefit of making extra splits. The second reason is computational. Once the improvements saturate, it makes little sense to split further. We will add experiments with $T=8$, which show that there is little gain relative to $T=6$. 12. The computational cost or gain depends on the models to which the bound is applied in two ways. First, if the cost of finding the (approximate) argmin in Eq. (4) is linear in the sample size, as in our experiments, then the difference between using or not using the recursion is small. If the method were applied to models where the cost of finding the argmin is superlinear (e.g., kernel SVMs), then it could lead to substantial computational savings. The second source of potential computational gain is a trade-off with the statistical gain. 
Specifically, we could imagine a scenario where compromising on the statistical gain and doing several recursion steps with a more relaxed approximation of the argmin could still yield better and faster outcomes than doing a more precise single-step posterior optimization. --- Rebuttal Comment 1.1: Comment: Thanks for the clarifications!
Summary: Data-dependent priors are crucial for the tightness of PAC-Bayesian bounds in several scenarios. However, using a fraction of the data to train the prior reduces the sample size of the bounds, which can sometimes be counter-productive. This fact also discourages sequential prior updates since the bounds can rapidly degenerate. This paper elegantly solves these problems using a "prior-dependent" excess loss bound inspired by Mhammedi et al. (2019). Their approach allows sequential prior updating without confidence loss and surpasses previous data-dependent prior building techniques in standard benchmark datasets. Additionally, they generalize ternary split-kl PAC-Bayes bounds to general discrete random variables. Strengths: 1- The main result, Theorem 5, is a significant contribution to the study of data-dependent priors with many possible applications. 2- The experimental results (Table 1) show a remarkable improvement over previous methods. Weaknesses: I find no major weaknesses, just a couple of comments: 1- Sections 3 and 4 are very dense in notation and difficult to read (I recognize there is not much the authors can do about that). 2- Since Recursive PAC-Bayes is developed for sequential prior updating, it would be nice (if possible) to have some discussion about its possible applications to streaming/online learning scenarios. 3- In the experiments with data-dependent priors 1/2 of the data was used for building the prior. However [1] shows that the proportion of data used for prior building is crucial. It would be nice to have experiments with different data splitting proportions (also, I think [1] is a relevant paper that could be cited as related work). [1] Dziugaite, G. K., Hsu, K., Gharbieh, W., Arpino, G., & Roy, D. (2021). On the role of data in PAC-Bayes bounds. In International Conference on Artificial Intelligence and Statistics (pp. 604-612). PMLR. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No significant limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. 1. We agree that Section 3 and, even more crucially, Section 4 are too dense. We will take advantage of the tenth page offered for the final version to sparsify the writing. 2. The application to streaming and online learning is essentially straightforward, because what we do with the data is effectively sequential processing. We will add a comment to the paper. 3. We fully agree that the splitting proportions matter. Splitting half-half is sort of a default, and our approach can be seen as a default generalization of it, because we recursively continue splitting the first half further. Of course, experimenting with other proportions would be a natural continuation, although we think that it might be more appropriate for a journal extension, both due to space constraints, and because an explosion in the number of experiments risks distracting from the primary theoretical contribution of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for your response, I am happy with the feedback and I will maintain my score. My only suggestion would be to use the extra page in the camera-ready version to extend Sections 3 and 4 to make them more readable. I wish the authors the best.
Summary: - The paper presents a novel PAC-Bayesian procedure that allows for sequential prior updates without splitting the training dataset. - The proposed procedure is based on a novel decomposition of the expected loss of randomized classifiers, which rewrites the posterior loss as an excess loss relative to a downscaled loss of the prior plus the downscaled loss of the prior, which is recursively bounded. - The paper generalizes the split-kl and PAC-Bayes-split-kl inequalities to discrete random variables. - The authors confirmed that their PAC-Bayes learning method outperforms state-of-the-art methods in empirical evaluations. Strengths: - The paper addresses a practically important topic of adaptively optimizing the prior in PAC-Bayes learning to reduce pre-training costs while enhancing generalization performance. - The attempt to derive recursively bounded generalization error bounds for the optimization steps includes originality. Weaknesses: First, I want to note that I find it difficult to fully grasp the motivation, utility, and specific algorithms of this paper from the current manuscript, which may lead to some of the points in the Weaknesses and Questions sections being based on misunderstandings. If there are any misunderstandings, I would appreciate it if the authors could correct them and provide clearer explanations beyond what is presented in the paper. - **Limitations of Derived Bounds**: - As one of the limitations in this paper, the derived bounds assume 0-1 loss, limiting their applicability to binary classification problems. - The bounds are valid only at a specific time point $t$. Considering that most existing PAC-Bayes bounds provide insights into generalization error independent of iteration, this bound appears to be valid under very restrictive conditions. - The definitions of symbols throughout the paper seem ambiguous, affecting readability. 
For instance, it is unclear what $t$ represents—whether it is training iteration, epoch, or a separate recursion step. Clarification is needed. - **Comparison with Existing Approaches**: - The main issue addressed by this paper is that traditional data-dependent priors in PAC-Bayes learning are constructed by splitting the training data, thus discarding some information. However, alternative approaches not discussed in this paper use optimization methods like SGLD or Entropy-SG(L)D, which guarantee differential privacy while optimizing the prior using all training data (e.g., Dziugaite et al., 2017; Dziugaite et al., 2018). Although these methods might result in looser bounds or smaller probabilities of bound validity, they have shown good performance in similar experimental settings. While these methods differ in being pre-training approaches rather than recursive, they share the goal of optimizing the prior without splitting the training data. The lack of comparison with these methods leaves the usefulness of the proposed approach insufficiently demonstrated. - **Data Splitting and Recursive Approach**: - The paper still splits data according to the value of $T$. It is unclear how this fundamentally differs from traditional data-splitting prior learning approaches. In the split strategy proposed in Sec. 5, increasing the number of split data sets with $T$ might lead to suboptimal prior distributions due to fewer data in early stages, potentially affecting generalization performance. - There is no discussion on the tightness of the bounds derived in Theorem 5, especially for $t \geq 2$. Intuitively, even though $\gamma_{t}$ is multiplicative, the cumulative addition of the KL term may result in looser bounds as $t$ increases unless $\gamma_{t}$ or/and the complexity value is very small. The decreasing bound values with increasing $T$ observed in the experiments may be due to the simplicity of datasets like MNIST, where KL values are initially very small. 
If there is another reason, it should be explained. - **Computational Efficiency**: - There is no mention of computational efficiency. It seems that increasing the number of recursive steps would lead to higher computational costs. - **Convergence of generalization bounds**: - It is unclear whether the bound guarantees convergence with respect to the sample size $n$, i.e., whether it converges to zero as $n \rightarrow \infty$. If we only consider Equation (3), even if $n_{t}^{\mathrm{val}} \rightarrow \infty$ and $B_{t-1} = 0$, it seems that $-\gamma_{t}$ would remain in $\mathcal{E}_{t}$, preventing convergence. Is this a desirable property for a generalization error bound? - **Experimental Validation**: - The experiments are limited to very simple datasets like MNIST and Fashion-MNIST, which is insufficient to convincingly demonstrate the utility of the proposed algorithm. More diverse and complex datasets should be tested to validate the approach comprehensively. Citation: - Dziugaite et al., 2017: Dziugaite et al. Data-dependent PAC-Bayes priors via differential privacy. NeurIPS 2018. https://arxiv.org/pdf/1802.09583 - Dziugaite et al., 2018: Dziugaite et al. Entropy-SGD optimizes the prior of a PAC-Bayes bound: Generalization properties of Entropy-SGD and data-dependent priors. ICML 2018. https://arxiv.org/pdf/1712.09376 Technical Quality: 3 Clarity: 2 Questions for Authors: In addition to the concerns raised in the Weakness section, I would appreciate it if you could also address the following questions: - Can you provide a qualitative interpretation of the generalization error bound given by Equation (3)? - Why does the summation part in $\mathcal{E}$ above Equation (3) only sum from $j=1$ to $j=3$? - The specific algorithm of the proposed method is not very clear. Could you provide an algorithm table? 
- What are the advantages of the proposed method compared to traditional PAC-Bayes prior pre-training approaches that fully use the training data (e.g., Dziugaite et al., 2017; Dziugaite et al., 2018)? If there is a reason for not conducting a theoretical or experimental comparison with these methods, please explain. - You mentioned calculating the 0-1 loss. Is it correct that you conducted binary classification using a one-vs-all approach? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: This study is foundational research aimed at improving generalization in PAC-Bayes learning with prior optimization, and these aspects are discussed in the Broader Impact paragraph. The datasets used are open datasets such as MNIST and Fashion MNIST, which suggests that there are no significant concerns regarding potential negative social impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: There are numerous and significant misunderstandings in the review, which we clarify below one-by-one. The first sentence of the Summary states: “The paper presents a novel PAC-Bayesian procedure that allows for sequential prior updates without splitting the training dataset.” This statement is incorrect. We do split the dataset into $T$ folds. The novelty of our work is that it allows meaningful sequential processing of the folds without losing confidence information along the way. Prior work could only meaningfully handle splitting into two folds, but even then it was losing confidence information. Limitations of derived bounds: - The derived bounds assume zero-one loss: this statement is not fully correct. The main contribution of our paper is the recursive decomposition of the loss in Eq. (2), $\mathbb E_{\pi_t}[L(h)] = \mathbb E_{\pi_t}[L(h) - \gamma_t \mathbb E_{\pi_{t-1}}[L(h)]] + \gamma_t \mathbb E_{\pi_{t-1}}[L(h)]$. It applies to any loss function (as long as the expectations are well-defined). Any suitable PAC-Bayesian bounds can be used to bound the two terms in the decomposition, as mentioned in Lines 167-169. We have restricted the presentation to the zero-one loss and used the most basic PAC-Bayes-kl bound, because otherwise the reader would get lost in variations of PAC-Bayes and miss the main message. But it is straightforward to use, say, PAC-Bayes-Empirical-Bernstein, and obtain a bound for any bounded loss function. - The bounds are valid only at a specific time point $t$: First, we believe that there is a confusion between $t$, which indexes folds and recursion steps, and $i$, which indexes samples. Our bounds hold for all $t$, because we take a union bound (factor $T$ under the logarithm in Thm 5). Concerning bounds that hold for all $i$, it is straightforward to replace our bounds for the two terms in the decomposition in Eq. (2) with existing PAC-Bayes bounds that hold for all $i$. 
What $t$ represents: $t$ indexes the folds, which also correspond to recursion steps, because each recursion step processes a new fold. Comparison with existing approaches: priors based on differential privacy, as in the work of Dziugaite et al., pursue a very different goal from our work. Their goal is to construct a set of interesting priors based on a fixed dataset without revealing too much information. Our goal is to refine the priors while moving from one chunk of data to the next, without losing confidence information. Data splitting and recursive approach: - It is unclear how this fundamentally differs from traditional data-splitting prior learning approaches: Traditional approaches split the data in two folds. Our approach supports meaningful splitting into an arbitrary number of folds. - Increasing the number of split data sets with $T$ might lead to suboptimal prior distributions due to fewer data in early stages, potentially affecting generalization performance: Allocating little data to early stages is actually an advantage, as can also be seen from Tables 1, 2, and 3 in the paper (compare small values of $T$ with large values of $T$). This way few data are used to steer the prior into the right region, and then there is still a lot of data to obtain a tight bound. - The decreasing bound values with increasing $T$: As you can see from Tables 2 and 3, the first rounds of recursion are used to steer the prior into the right region. The KL term in the first rounds is large, but with every subsequent round the contribution of this term to the final bound is decreased by a factor of $\gamma_t$. Therefore, the larger the number of rounds, the smaller the contribution of the first terms to the final bound. In the later rounds the prior is already good, and the KL term is very small (Tables 2 and 3). Computational Efficiency: See our reply to Reviewer gLvU, Point 12. 
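To make the discounting of early-round KL terms explicit, the recursion can be unrolled. Schematically (our shorthand, not the paper's notation: $B_t$ for the bound at recursion step $t$ and $\mathcal{E}_t$ for the per-step estimation term, arguments suppressed), a bound of the shape $B_t = \mathcal{E}_t + \gamma_t B_{t-1}$ unrolls to

```latex
B_T \;=\; \mathcal{E}_T \;+\; \gamma_T \,\mathcal{E}_{T-1} \;+\; \gamma_T \gamma_{T-1}\,\mathcal{E}_{T-2} \;+\; \dots \;+\; \Big(\prod_{t=2}^{T} \gamma_t\Big) B_1 ,
```

so with $\gamma_t < 1$ the large KL terms of the first rounds enter the final bound multiplied by a geometrically shrinking product of discount factors.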
Convergence: The decomposition of the excess loss into a superposition of binary variables is exact (see the illustration in Fig. 2 in the appendix). Therefore, if the excess loss converges to zero and $n\to\infty$, then the bound will converge to zero. Note that as $n\to\infty$ the $kl^{-1,+}$ terms in the bound converge to the value of the first argument, and not to zero. So, if the excess loss is zero, they will counterbalance $-\gamma_t$. Experimental validation: we emphasize that the primary contribution of the paper is theoretical, and that it is a conference submission. Further experimental evaluation would be natural for a journal extension. Questions: - Interpretation of the generalization bound in Eq. (3): look at the decomposition of the loss in Eq. (2). The first term in Eq. (3) is a PAC-Bayes bound on the first term in Eq. (2), and the second is a recursive PAC-Bayes bound on the second term in Eq. (2). - Why the summation above Eq. (3) goes from $j=1$ to $j=3$: The excess loss takes 4 values ($K=4$, see Line 223) and the corresponding binary decomposition is given in Line 196. You can also check the illustration in Fig. 2 in the appendix. - Algorithm: Our algorithm is a recursive computation of the argmin in Eq. (4). The computation of the argmin depends on the model to which the bound is applied, which is external to the paper. - Comparison to Dziugaite et al.: addressed above. - Clarification concerning the experiments: the experiments apply the zero-one loss in multiclass classification. (A correct prediction gives loss zero, an incorrect one gives loss one.) It is NOT one-vs-all. Finally, we would like to say that the score given by the reviewer “3: Reject: For instance, a paper with technical flaws, weak evaluation, inadequate reproducibility and/or incompletely addressed ethical considerations.” is unjustified. 
The reviewer has not identified any technical flaws; our evaluation is reasonable for a theoretical contribution to a conference; we have provided the code to reproduce the experiments and no reproducibility issues were raised; and there are no ethical considerations concerning our work. --- Rebuttal Comment 1.1: Title: Acknowledgements and Apologies Comment: First of all, I deeply apologize for submitting a low-quality review due to my misreading. After reading your response, I have gained a clearer understanding of the content. Thank you for the detailed explanation. As a result, I have decided to change my score to a 6. However, I still have some concerns regarding the authors’ claim that the paper focuses on theoretical contributions because the title explicitly mentions handling "sequential prior updates", which sounds proposing some novel theoretically-validated algorithm. In the abstract, it states, “We present a novel and, in retrospect, surprisingly simple and powerful PAC Bayesian procedure that allows sequential prior updates…” Upon first reading this, I had the impression that the paper would derive an algorithm with a theoretical background and empirically validate it. With this premise, I proceeded with my review, which led me to feel that the paper lacked a numerical comparison based on sequential prior updates and pre-training-based prior distribution settings proposed by some existing studies. This perceived “weakness of evaluation” led me to assign a score of 3. I now realize that my understanding was flawed, and I agree that the score was too low given the paper’s contributions. I apologize for any discomfort this may have caused. If I may suggest one revision, it would be to more clearly present that the primary objective is the theoretical contribution. This could help reduce the risk of misinterpretation by readers like myself and increase the paper’s impact. I wish the authors the best of luck. 
--- Reply to Comment 1.1.1: Comment: We thank the reviewer for raising the score. We will use input from all the reviewers to improve the presentation.
Summary: PAC-Bayes bounds are extended to the recursive or streaming case by breaking the expected loss into ($a$) expected loss on the previous step times a discounting factor plus ($b$) current expected loss minus $a$. An extension of the kl inequality to many-valued (here 4-valued) random variables provides a bound on $b$. Strengths: This is impressive work. It's technically sound and makes a major theoretical advance. Weaknesses: The proof of Thm 5 doesn't account for sampling error between $\hat{\pi}^*_{t-1}$ and $\pi^*_{t-1}$. The inequality used in line 244 refers only to $\pi^*_{t-1}$ but the definitions of $\mathcal{E}_{t}$ and $\hat{F}_{\gamma_{t} | j}$ involve $\hat{\pi}^{*}_{t-1}$. The method in section 5.1.3 can resolve this but it should be included in the theorem (or the theorem should be rewritten without sampling). 214: "can depend on $S$" is conceptually helpful but technically superfluous. What's really meant is that because the quantification is over all of $\mathcal{P}$ one is free to choose $\rho$ in a way that depends on $S$. 231: the summation index is misleading because it implies the index set is the product of $U^{val}_t$ and $\hat{\pi}_{t-1}$, whereas the actual index is $((X,Y),h')$. Technical Quality: 4 Clarity: 4 Questions for Authors: The objective in (4) is very similar to the VI objective, especially with 0-1 loss replaced by cross-entropy as in section 5.1.1. (I think replacing $kl^{-1,+}$ with Euclidean divergence and $n^{val}_t$ with $|S_t|$ would yield the VI update.) It would be useful to see an experimental comparison between recursive PAC-Bayes and recursive VI. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: none noted Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and the feedback. “Sampling error between $\hat \pi_{t-1}^*$ and $\pi_{t-1}^*$” - It is true that this is a delicate point, but in fact no correction is required. We have explained it in lines 219-233, but we accept that the explanation might have been too dense. Let us explain it again. $\mathcal{E}_t(\pi_t,\gamma_t)$ in Theorem 5 is a summation of PAC-Bayes-kl bounds on $\mathbb{E}_{\pi_t}[F_{\gamma_t|j}(h,\pi_{t-1}^*)]$. By definition, $\hat F_{\gamma_t|j}(h,U_t^{val},\hat \pi_{t-1}^*)$, which involves the sampling according to $\pi_{t-1}^*$, is an unbiased sample of size $n_t^{val}$ from $F_{\gamma_t|j}(h,\pi_{t-1}^*)$. In contrast to standard applications of PAC-Bayes, where the samples are pairs $(X,Y)$, here the samples are triplets $(X,Y,h')$, where $h'$ is sampled according to $\pi_{t-1}^*$, but this does not matter for the bound. We will emphasize this in the final manuscript. Lines 214 & 231: thanks for the comments, we will polish the writing. The definition of Recursive VI and comparison to Recursive PAC-Bayes could be an interesting direction for future work. We note, though, that while the two share the similarity of using KL regularization, the settings and objectives are quite different. PAC-Bayes aims to minimise the generalization error, whereas VI aims to compute the Bayesian posterior. Therefore, it is not immediately clear what could be a meaningful comparison.
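As a side note for readers unfamiliar with the notation: the upper inverse $kl^{-1,+}(\hat p, \varepsilon) = \max\{q : \mathrm{kl}(\hat p \,\|\, q) \le \varepsilon\}$ appearing in these bounds has no closed form and is typically computed numerically. A minimal bisection sketch (our own illustration, not the authors' code; `kl_inv_plus` is a hypothetical helper name):

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clamped away from {0, 1}."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_inv_plus(p_hat, budget, tol=1e-9):
    """Largest q with kl(p_hat || q) <= budget, found by bisection on [p_hat, 1]."""
    lo, hi = p_hat, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if kl_bernoulli(p_hat, mid) <= budget:
            lo = mid  # mid is still feasible, push up
        else:
            hi = mid  # mid overshoots the budget
    return lo
```

Since $\mathrm{kl}(\hat p \| q)$ is increasing in $q$ on $[\hat p, 1]$, bisection converges; with budget $0$ the inverse collapses back to $\hat p$, matching the remark that the $kl^{-1,+}$ terms converge to their first argument as $n \to \infty$.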
NeurIPS_2024_submissions_huggingface
2024
Accelerating Transformers with Spectrum-Preserving Token Merging
Accept (poster)
Summary: This paper proposes Protect Informative Tokens Merging (PIToMe), a token merging method that safeguards crucial information-bearing tokens prior to the merging step. PIToMe focuses on the token-splitting step before token merging. Specifically, PIToMe defines and calculates an energy score for tokens to be merged, and marks the highest-scored tokens as suitable candidates for merging. Experiments show that PIToMe consistently outperforms previous token merging methods such as ToMe, DiffRate, ToFu, etc. Theoretical analysis reveals that PIToMe preserves the intrinsic spectral properties of the original token space. Strengths: 1. Being the first method to consider the graph relations of tokens during token merging, the design of PIToMe is novel. Thorough theoretical analysis demonstrates the design insights of PIToMe. 2. Although some improvements are not significant enough, PIToMe successfully outperforms the previous token merging methods. Considering that many investigations towards token merging have been conducted by many works, I consider the improvements, albeit minor, as strengths. 3. Visualizations demonstrate the design insights of PIToMe. From those demonstrations, I could see that PIToMe merges the correct tokens in the image. Weaknesses: 1. ToMe is actually capable of conducting token merging at a very high compression rate (see Table 10 of the ToMe paper; ViT-L and ViT-H can achieve about 2.5x speedup). Another recent work on arXiv [1] also demonstrates that this compression rate can even increase to >3x. However, in the experiments reported by PIToMe, the biggest speedup I could find is about 2x. I wonder whether PIToMe still works out for higher compression rates, since the metric vital for finding token clusters gradually defines more tokens as informative tokens (Eq. 4 in this PIToMe article). 
[1] PYRA: Parallel Yielding Re-Activation for Training-Inference Efficient Task Adaptation (arXiv 2403.09192) Technical Quality: 4 Clarity: 4 Questions for Authors: 1. Also on the topic of compression rate. In ToMe, r can be randomly set to achieve any compression rates. Can I randomly set the r values for each layer in PIToMe? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: See weaknesses & questions. I may change the rating after carefully checking the replies and reviews from other reviewers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and high score, and for recognizing the main contribution of our work. We now address your concerns regarding the compression rate of PiToMe. ### 1. ToMe is actually capable of conducting token merging at a very high compression rate (see Table 10 of ToMe paper; ViT-L and ViT-H can achieve about 2.5x speedup). Another recent work on arXiv [1] also demonstrates that this compression rate can even increase to >3x. However, in the experiments reported by PIToMe, the biggest speedup I could find is about 2x. **** Our PiToMe can indeed achieve up to a $3\times$ speedup, as evidenced in Figures 5 and 6 (in the submission). In these experiments, we benchmarked PiToMe against various baselines using different ratios $r$, where the compression rate ranged from approximately $70\%$ down to $30\%$. Generally, performance dropped by approximately $0\%$ to $3\%$, keeping more than $95\%$ of the baseline model's performance. Remarkably, on text-classification tasks (Figure 10 in the submission), PiToMe can even compress down to $20\%$ of the number of tokens. This resulted in training and inference speeds approximately $5\times$ faster while maintaining accuracy above $85\%$ (about a $7\%$ accuracy drop compared to the baseline). Please refer to Figures 2, 3, and 4 in the global rebuttal, where we conduct experiments on a wider range of the ratio $r$. Thank you for recommending arXiv [1], which was accepted by ECCV24 on July 01, 2024. Indeed, modulating token features during token merging can mitigate the performance drops under a low compression rate and achieve up to a $3\times$ speedup. However, we do not recommend compression rates higher than $60\%$ because compressing beyond this rate leads to significant performance degradation in many tasks. 
As shown in PYRA (arXiv [1]) and ToMe, achieving a $3\times$ faster inference speed typically results in a $4\%$ to $7\%$ drop in accuracy compared to the baseline in image classification tasks, which is undesirable. Also, please note that factors other than the compression rate, such as hardware and batch size, can affect the model's speed. Therefore, we are unable to fully compare PYRA with PiToMe, as the authors have not published their code. ### 2. I wonder whether PITOME still works out for higher compression rates since the metric vital for finding token clusters gradually defines more tokens as informative tokens (Eq. 4 in this PITOME article) **Response:** First, we want to clarify that Equation 4 defines the token energy score. In our approach, informative tokens (low energy) are clustered into a preserved group, which will not be merged, and we only merge the less informative tokens (the top $2k$ tokens with the highest energy). Like other compression methods, PITOME can handle higher compression rates; however, there is a trade-off between performance and compression rate. It is important to note that the compression process primarily merges less informative tokens while preserving the informative ones, which holds as long as a reasonable ratio $r$ is maintained. This is demonstrated in all the trade-off figures, where PiToMe maintains model performance until a significant drop occurs when $r$ is too low. In the final version, we will include more ablation studies on higher compression rates with various $r^i$ ratios for each layer $l^i$, as suggested by the reviewer. ### 3. Also, on the topic of compression rate. In ToMe, r can be randomly set to achieve any compression rates. Can I randomly set the r values for each layer in PIToMe? **Response:** In PITOME, the compression rate can be flexibly controlled via the ratio $r$, representing the fraction of remaining tokens. 
Specifically, each layer $l^i$ is controlled by a ratio $r^i$, meaning that $1 - r^i$ of the tokens in that layer will be merged. With a uniform ratio $r$ across all $N$ layers, the fraction of tokens remaining at layer $i$ is $r^i$, so the average fraction of tokens the model processes is: $$ r_{\text{remain}} = \frac{\sum_{i=1}^{N} r^i}{N} $$ For example, if we consider a ViT with 12 layers $(N = 12)$ and $r = 0.9$, the percentage of remaining tokens after compression is $r_{\text{remain}} = 0.538$. Similarly, for a BLIP2 model with 48 layers, $r_{\text{remain}} = 0.362$ with $r = 0.9$. Thus, it is feasible to set the value $r^i$ for each layer randomly. However, it is important to consider the trade-off between performance and compression. As shown in our results, performance drops dramatically when $r < 0.85$ for all layers. --- Rebuttal Comment 1.1: Title: Follow up on our rebuttal Comment: Dear Reviewer 4Z7M, We thank you again for your positive feedback on our work. Regarding your remaining questions about the **merging capacity of PiToMe**, we wonder if our explanation is clear enough. If you have other questions, please feel free to raise them. We are happy to discuss them before the **discussion period ends in two days**. Thank you very much. Regards The Authors
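The layer-wise compression arithmetic discussed in this thread can be sanity-checked with a short sketch (a hypothetical helper, assuming one uniform keep ratio $r$ applied at every layer, as in the 12-layer ViT example above):

```python
def remaining_fraction(r: float, n_layers: int) -> float:
    """Average fraction of tokens processed per layer when each
    layer keeps a fraction r of its input tokens (so the fraction
    remaining at layer i is r**i)."""
    return sum(r ** i for i in range(1, n_layers + 1)) / n_layers

# ViT with 12 layers and r = 0.9, as in the example above
print(round(remaining_fraction(0.9, 12), 3))  # 0.538
```

This reproduces the 0.538 figure from the reply; with no merging (`r = 1.0`) the fraction stays at 1.0, as expected.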
Summary: This paper introduces an energy-based approach to the token merging process, utilizing energy scores to avoid erroneous merges by distinguishing between informative or isolated tokens and redundant tokens. This method enhances the efficiency of heuristic merging approaches and preserves the spectral properties between the original and merged graph, thus accelerating token-based architectures such as ViT. Specifically, the technique employs cosine similarity with neighboring tokens to evaluate the energy score. Tokens with a high energy score are considered redundant and can be merged, indicating they belong to large clusters (e.g., background). Conversely, tokens with low energy scores are treated as foreground tokens. Furthermore, the authors also present theoretical findings demonstrating that the proposed method more effectively approximates the spectral distance between the initial token spaces and the merged token set compared to existing approaches. The method has shown competitive results across four tasks: image-text retrieval, visual question answering with LLMs, image classification, and text classification, underscoring its effectiveness and significance in the field. Strengths: * This paper is well-written and clearly presents both the proposed solution and the experimental results. * In addition to thorough experiments demonstrating the effectiveness of the proposed method, the paper provides a theoretical derivation to explain why PiToMe outperforms another related approach (e.g., ToMe). Weaknesses: * Some formulas and inferences are ambiguous, leading to misunderstandings that undermine the correctness of the theoretical explanations. * The lack of results for higher merging ratios weakens the practicality of the proposed approach. * There is a lack of discussion on related works (e.g., CrossGET [1], TRIPS[2]). 
[1] CrossGET: Cross-Guided Ensemble of Tokens for Accelerating Vision-Language Transformers, ICML 2024 [2] TRIPS: Efficient Vision-and-Language Pre-training with Text-Relevant Image Patch Selection, EMNLP 2022 Technical Quality: 3 Clarity: 3 Questions for Authors: * There are some ambiguities that need to be clarified in the methodology section. The authors claim that $m$ is a fixed constant margin; however, according to the context, $m$ seems to be the threshold of the margin, and $1-m$ is the margin. If I am wrong, the following description is confusing: In lines 162-163, "$m = 0.9 − 0.9 × l_i/l$, ... indicating an increasing margin as tokens move to deeper layers." This sentence seems to be the complete opposite of the conclusion. Also, does $\alpha = 1.0$ mean that when token $j$ is outside the margins, $f_m(x)$ is a negative constant? Furthermore, there are no related ablation studies or discussions of the impact of the $\alpha $. * In most of the experiments, the ratio $r$ is 0.95, 0.975, 0.9, and 0.925. Only the image classification results include a ratio $r = 0.7$. Existing approaches (e.g., ToMe) can achieve ratios of $r = 0.5 \sim 0.6$. I am curious about the consistency of the proposed method at higher merging ratios. * Some existing work, such as TRIPS and CrossGET, has also investigated token merging. The lack of thorough discussion or comparison with previous work weakens PiToMe's claimed novelty. I suggest the authors more clearly present their merits and differences in the paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: - The authors discuss some limitations in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your helpful feedback. In what follows, we address your concerns individually. ### **1. The authors claim that $m$ is a fixed constant margin; however, according to the context, $m$ seems to be the threshold of the margin, and $1-m$ is the margin.** We're sorry for the confusion. Since we use cosine similarity to compute the energy score, the value of $x$ in Eq. 4 lies in the range $(-1,1)$; a value close to 1 indicates that the two vectors point in the same direction (identical tokens). Thus, since $m=0.9-0.9\cdot l_i/l$, the margin at the first layer has a value of 0.9, and only tokens with a cosine similarity larger than 0.9 are treated as neighbors. Tokens outside this margin have the value $x$ replaced by $\alpha(\exp(x-m) - 1)$, which is negative. The margin becomes adaptively "wider" in deeper layers as $m$ decreases. ### **2. Does $\alpha = 1.0$ mean that when the token is outside the margins, $f_m(x)$ is a negative constant? Furthermore, there are no related ablation studies or discussions of the impact of $\alpha$.** Yes. Due to the strict page limits, we could not include the ablation regarding these parameters, the margin $m$ and the lower bound $\alpha$. Recall that we used $\alpha(\exp(x-m) - 1)$ to smooth the $f_m(\cdot)$ function so that neighbor tokens outside but close to the margin $m$ are still considered. The larger $\alpha$, the more these neighbors are taken into account; $\alpha=0$ means we ignore them completely. The table below shows experimental results on the image-text retrieval task using different values of $\alpha$ and the ratio $r$.

| $r$ | $\alpha=1.0$ | $\alpha=0.5$ | $\alpha=0.0$ |
| --- | --- | --- | --- |
| 0.85 | 519.98 | 518.66 | 515.9 |
| 0.875 | 545.9 | 544.22 | 542.54 |
| 0.9 | 562.82 | 562.42 | 561.92 |
| 0.925 | 571.88 | 571.1 | 570.62 |
| 0.95 | 577.5 | 577.43 | 577.4 |
| 0.975 | 580.24 | 579.82 | 579.76 |

### **3. In most of the experiments, the ratio $r$ is 0.95, 0.975, 0.9, and 0.925. 
Question about the consistency of the proposed method at higher merging ratios.** Thank you for your comment. Our ratio $r$ is defined differently from the ratio $r$ mentioned in your question: we define $r$ as the fraction of tokens remaining in *each layer* after merging, not as the percentage of total tokens remaining after compression. Since PITOME is applied to all layers, $(1-r)$ of the tokens are merged at each layer. For example, consider a ViT model with 12 layers and a ratio $r=0.9$. This means that 10% of the tokens are merged at each layer, so the fraction of tokens remaining at layer $i$ is $0.9^i$, and the average fraction of tokens the model processes is: $$ r_{\text{remain}} = \frac{\sum_{i=1}^{12} 0.9^i}{12} = 0.538. $$ $r_{\text{remain}}$ is the ratio $r$ mentioned in your question; it corresponds to reducing nearly half the number of tokens the model needs to process, leading to about twice the throughput. This percentage decreases further for larger models with more layers. For instance, in the BLIP2 model with 48 layers, we achieve $r_{\text{remain}} = 0.362$ with $r = 0.9$. For large language models like LLaVA, $r_{\text{remain}}$ can be even lower (in our paper we tested $r_{\text{remain}}$ ranging from 0.732 down to 0.277). ### **4. PiToMe compared to TRIPS and CrossGET** For better readability, we add a table comparing these works: | | TRIPS | CrossGET | PiToMe | | --- | --- | --- | --- | | Method | Selectively protects and merges image tokens based on attention scores from the [CLS] token of the text encoder | Introduces two components: (i) Complete-Graph Soft Matching (CGSM), which merges tokens with the highest similarity scores; (ii) Cross-Guided Matching and Ensemble, which introduces learnable tokens that provide cross-modal importance scores, guiding the CGSM algorithm and enabling weighted-average merging | Introduces energy scores to identify mergeable and isolated (informative) tokens; tokens in large clusters have high energy scores and are merged. 
| | Strength | Boosts the model accuracy at a lower computational cost than the unaccelerated baseline model | (i) Can be applied to both modality-dependent and modality-independent models. (ii) Achieves better accuracy than the previous method (ToMe) both with and without training | (i) Shares the same strengths as CrossGET. (ii) Robust against imbalanced clusters | | Weakness | Tailored for modality-dependent models like ALBEF, and still needs pretraining before inference | CGSM is very sensitive to the token space's distribution and is suboptimal when facing token spaces with imbalanced clusters (confirmed by the authors in the appendix section) | Applied to only the ViT encoder of the VL model without any cross-modal guidance | Since these papers do not provide their source code, it is difficult to reproduce their results. Also, we would like to explain the weakness of the CGSM algorithm used in CrossGET more clearly: - CGSM's implementation divides all tokens into two disjoint sets, $T_s$ and $T_d$, allowing only one token from $T_s$ to merge with one token in $T_d$. This is suboptimal for imbalanced clusters, such as large clusters with many tokens (e.g., backgrounds) alongside small clusters (e.g., small objects), which are common in practice. - For example, assume we have four tokens where three are close to each other and one is isolated. CGSM would merge two of the three close tokens and then merge the remaining close token with the isolated one, causing significant information distortion. The correct behavior would be to merge the three close tokens and leave the isolated one untouched. - PiToMe, on the other hand, assigns high energy scores to tokens in large clusters, prioritizing their merging while protecting isolated tokens. In the example above, the three close tokens are identified as mergeable, while the isolated token remains protected. Moreover, PiToMe can also benefit from cross-modality guidance techniques as add-ons to enhance performance. 
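The four-token example above can be reproduced with a small sketch of the energy score as described in this thread — cosine similarity to the other tokens, passed through the margin function $f_m$ and averaged. This is a hypothetical simplification: a fixed margin `m` is used here, whereas the paper adapts it per layer via $m = 0.9 - 0.9 \cdot l_i/l$.

```python
import numpy as np

def energy_scores(tokens, m=0.9, alpha=1.0):
    # Energy of token i: average of f_m(cos(x_i, x_j)) over the other
    # tokens, where f_m(x) = x inside the margin (x >= m) and
    # alpha * (exp(x - m) - 1) outside it.
    X = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
    S = X @ X.T  # pairwise cosine similarities
    F = np.where(S >= m, S, alpha * (np.exp(S - m) - 1.0))
    np.fill_diagonal(F, 0.0)  # ignore self-similarity
    return F.sum(axis=1) / (len(tokens) - 1)

# three nearly identical tokens plus one isolated token
toks = np.array([[1.0, 0.0], [0.99, 0.05], [0.98, -0.05], [-1.0, 0.0]])
e = energy_scores(toks)
# the isolated token receives the lowest (negative) energy and is
# protected, while the clustered tokens get high energy and merge first
```

Running this, the three clustered tokens receive positive energies while the isolated token's energy is negative, matching the protection behavior described above.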
--- Rebuttal Comment 1.1: Title: Follow up on our rebuttal Comment: Dear Reviewer ZkPm, As we have around two days left before the end of the discussion phase, we wonder whether our rebuttal above answers your concerns. Responses to your major questions about the discussion of **related works**, **compression rates**, and **ablation studies** on the margin $m$ have been provided. Please let us know if there are any points that are still unclear. Thank you very much for taking the time to give us feedback. We look forward to hearing your response soon. Regards The Authors
Summary: This paper proposes a new strategy for reducing the number of tokens used by a vision transformer by merging similar tokens without additional training. Such methods are very sensitive to the type of clustering/partitioning used on the tokens. The proposed approach, `PITOME`, explicitly identifies highly informative tokens, which should be kept out of the merging process. More specifically, the tokens are placed in a graph where each node corresponds to a key embedding and the edge weights are the cosine similarities between said keys. Based on this graph, an "energy" is computed for each token, capturing how mergeable it is. Only the top $2k$ are considered mergeable, while the rest are preserved. The merging step that follows uses a similar procedure to previous token merging methods. In summary, an "energy" score is computed on each token, using their key embeddings as base features. Using these energy scores, `PITOME` sets itself apart from other merging techniques such as `ToMe` in two ways: - explicitly excluding some tokens from the merging process - splitting tokens across the two sets $A$ and $B$ for bipartite matching so as to maximise the chance of matching similar tokens Strengths: - The proposed method is thoroughly evaluated on image, text and image+text tasks - Generally, `PITOME` achieves higher accuracy while preserving the expected number of FLOPs and memory usage - the issues addressed by `PITOME` are well explained and motivated. Weaknesses: - Generally, the **experiments section** is really hard to parse: * the figure placement is a bit all over the place * some results would benefit from being moved to the appendix; for instance, in my opinion, the trade-off curves are much more informative than the tables * some figures are hard to read (Figure 3) (e.g. 
small captions, overlapping curves due to the y/x axes scales) * ablation experiments are much too condensed - In terms of ablation, there is little discussion about the term $m$ when computing the energy score: It seems to be an important hyperparameter as it impacts the "mergeability" of the tokens as the layer depth increases. Since `ToMe` and other related work are presented as lacking robustness to the token splitting strategy, it would be interesting to also show how robust `PITOME` is to this hyperparameter. - Some claims didn't seem substantiated enough to me - e.g. line 120 *"`PITOME` remains robust to token partitioning strategies"*: However, `PITOME` uses a specific token partitioning algorithm, and the ablation on it (random split in Table 1) shows that it actually suffers from a different partitioning - line 276. *"We contend that this improvement stems from the merging of less significant tokens in PITOME potentially enhancing the robustness of the language model"* -> this seems like a very vague hypothesis which in my opinion doesn't really need a justification to begin with, since some other merging methods also outperform the baseline. It may as well be a consequence of the noisiness of question answering datasets. Technical Quality: 3 Clarity: 2 Questions for Authors: - I didn't fully understand the role of **Table 2**: isn't `PITOME` agnostic to the choice of architecture? It would send a stronger message to combine `PITOME` with various backbones, rather than compare it to different architectures. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: there is a limitation section discussing some current drawbacks of token merging methods in general Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful feedback and for pointing out insufficiently substantiated claims. We will now address your concerns one by one. ### 1. The experiments section is hard to parse We will address these concerns in the final revision using the additional page, which will allow us to provide more details about the experimental results. Furthermore, we will move some less important parts to the appendix. For example, in Figure 3, we will remove redundant space at the bottom of the top-row figure and the top of the second-row figure to increase the size of the curves. Regarding the ablation study, we will include more detailed descriptions for each setting and provide an analysis of their impacts to improve clarity. ### 2. Discussion about the term $m$ in the ablation when computing the energy score We acknowledge that the ablation study for $m$ is a valuable addition to the paper. To support readability, we include Figure 1 in the global rebuttal, which visualizes the impact of experimenting with multiple margins $m$ on different tasks using various thresholds. From Figure 1, it can be seen that, in both tasks, the adaptive version $m=0.9 - 0.9 \cdot l_i/l$ achieves the best results, while models with a fixed $m$ tend to see their accuracy drop sharply once $r$ falls below some threshold. The reason might be that, as the token space becomes sparser in deeper layers, PiToMe with a fixed $m$ will likely assign the same energy score to every token, which is undesirable since we want to identify isolated tokens and protect them while merging the others. ### 3. Some claims didn't seem substantiated enough - **Line 120, which is "PITOME remains robust to token partitioning strategies."** We apologize for the confusion the statement has caused. Indeed, this was a typo. What we meant to convey is that "PITOME remains robust compared to other token partitioning strategies." 
In addition, we want to highlight that PITOME offers a better trade-off between speed and accuracy compared to previous BSM-based algorithms (e.g., ToMe, DiffRate), while still being able to preserve the spectral properties of the token space, similar to loop-based algorithms (e.g., k-means, spectral clustering, graph coarsening). - **In line 276, "We contend that this improvement stems from the merging of less significant tokens in PITOME potentially enhancing the robustness of the language model"**: We agree that the dataset may contain noise. However, this noisiness can be interpreted as either bias or diverse perspectives. A possible explanation for the performance improvement in our token merging method is that the model treats all objects in the image equally, regardless of their size and the number of tokens they occupy. Large objects have their tokens merged, while smaller objects' tokens are preserved; therefore, the numbers of tokens representing different objects do not differ much. This allows the model to avoid bias caused by object size and to focus on the objects themselves rather than being dominated by larger objects with a greater number of tokens. Additionally, by paying equal attention to all scene objects, the model can capture diverse perspectives. ### **4. The role of Table 2** In Table 2, we aim to demonstrate the following points: - PITOME can be combined with various backbones (as mentioned by the reviewer). In particular, we test PITOME with two backbones in Table 2 (BLIP, BLIP2) and two additional backbones in Figure 3 (CLIP and ALBEF). - We show that PiToMe, when combined with various backbones, still outperforms state-of-the-art architectures (CLIP, ALBEF, UNITER, ViLT, etc.). - In addition to the image-text retrieval task, we evaluated PITOME on various other tasks using multiple backbones. For the image classification task, we tested our algorithm on six backbones with two different pre-training styles (DeiT and MAE). 
We used two backbones (LLaVA-7B and LLaVA-13B) for the visual question-answering task and two backbones (BERT and DistilBERT) for the text classification task. These experiments confirm the versatility of our algorithms. --- Rebuttal Comment 1.1: Title: Follow up on our rebuttal Comment: Dear Reviewer jrpm, We would like to thank you very much for your feedback, and we hope that our response addresses your previous concerns, e.g., about the experiments section and Table 2. If there are any points we have not yet responded to, please feel free to let us know soon, as the discussion period is expected to end shortly. We would be more than happy to address any additional concerns. Thank you again for spending time on the paper; we really appreciate it! Sincerely Authors. --- Rebuttal Comment 1.2: Comment: Dear authors, Thank you for your response/clarifications and additional results. It is indeed good to see that the adaptive version of $m$ performs well across tasks. I am currently inclined to keep my score of **6** as I think the paper is technically solid and with good/robust evaluation, but with moderate to high impact, as the improvement over the ToMe baseline is not always very significant. --- Reply to Comment 1.2.1: Comment: Thank you for your feedback. We will integrate those points into the final camera-ready version if the paper is accepted. Regards Authors
Summary: This paper proposes a novel method called PITOME, which enhances the efficiency of Vision Transformers (ViTs) by merging tokens in a way that preserves crucial information. Unlike previous methods, PITOME uses an energy score to prioritize and protect informative tokens while merging redundant ones. This approach reduces the computational and memory requirements by 40-60% with minimal impact on accuracy. PITOME achieves superior performance in tasks such as image classification, image-text retrieval, and visual question answering, demonstrating its effectiveness and robustness compared to existing token merging techniques. Furthermore, it theoretically preserves the spectral properties of the original tokens, contributing to its efficiency and performance stability. Strengths: 1. The introduction of the energy score metric for token merging is a novel idea that addresses the limitations of previous methods. This innovative approach effectively distinguishes between informative and redundant tokens, ensuring the preservation of critical information during the merging process. 2. The theoretical foundation for preserving the spectral properties of the original token space is a unique contribution. This aspect of the work sets it apart from existing methods, which often do not consider the impact on the spectral properties of the token space. 3. The paper provides extensive empirical validation across multiple tasks, including image classification, image-text retrieval, and visual question answering. 4. The theoretical analysis that supports the empirical findings enhances the overall quality of the paper. By demonstrating that PITOME preserves spectral properties, the authors provide a solid theoretical justification for the observed performance improvements. Weaknesses: 1. The proposed method, PITOME, does not show significant improvement over the baseline, particularly ToMe, in many tasks. 
The empirical results indicate that in some cases, PITOME's performance is only marginally better or even comparable to existing methods, which raises questions about its practical benefits. 2. As shown in Figure 11, PITOME fails to outperform ToMe in visualizations, and in some instances, the token merging appears unreasonable. For example, in Figure 11(d), the merging around "a tennis racquet" is poorly executed, leading to doubts about the method's effectiveness in preserving critical details. 3. The paper includes extensive mathematical derivations and proofs, which significantly impact its readability. The dense mathematical content makes it difficult for readers to quickly grasp the core contributions and the practical implementation of the proposed method. 4. The paper primarily focuses on improving ToMe by addressing its token merging strategy. While this is a valid research direction, the scope of the problem tackled is relatively narrow. The significance of the improvement may not be high enough to warrant extensive interest or application, as it addresses a specific aspect of an existing method rather than introducing a fundamentally new concept or problem. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. In Figure 11, PITOME's token merging appears unreasonable in some cases (e.g., the tennis racquet example). Could you provide a deeper explanation or justification for these visualizations and any potential improvements to address these issues? 2. The mathematical derivations and proofs are quite complex and impact readability. Are there ways to simplify these explanations or provide more intuitive summaries of the key points to improve accessibility for readers? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have acknowledged some limitations of their work and provided suggestions for future research directions. The paper does not explicitly address any potential negative societal impacts. 
Given the focus on efficiency improvements in Transformers, the primary societal impact would likely relate to the broader implications of making these powerful models more accessible and efficient. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and for recognizing the strengths of our work. We appreciate your thoughtful comments and will now address each of your concerns individually. ## 1. Performance comparison with ToMe We would like to emphasize that PiToMe demonstrates significant improvements over the baselines, particularly ToMe. We acknowledge that the condensed presentation of experiments in the paper might have made it difficult to discern the performance gains of our method. We highlight the performance gains of PiToMe over ToMe as follows: - Image-text retrieval: At a 60% compression rate, PiToMe outperforms ToMe by a margin of approximately 11% in top-1 recall. - Visual question answering: At a 50% compression rate, PiToMe keeps the performance drop below 0.05%, while ToMe's drop exceeds 1%. - Text classification: PiToMe achieves a better trade-off, compressing over 60% of tokens with accuracy above 90% (about a 2% drop) and over 80% of tokens with accuracy above 85%. - Image classification: PiToMe saves 40-60% of FLOPs, with an average performance drop of 1.4% compared to 2.6% for ToMe. ## 2. Figure 11 We believe the observation that the merging is unreasonable stems from a misunderstanding. For clarity, we first describe how we generate these visualizations and then give a definition of good token merging: - Implementation: Tokens are represented as grid boxes with black borders, colored by the mean color of the pixels they cover. Tokens with higher attention scores have a bolder cyan border. During the merging process, resulting tokens have their colors replaced by the mean color of all the merged tokens. If merged tokens are adjacent, the black borders between them are removed. - Good merging criteria: Objects represented by a small number of tokens should be preserved, while larger objects with repeated textures, such as background areas, can be considered for merging. 
In Figure 11d, we observe that PITOME effectively minimizes information distortion by preserving important tokens representing "the tennis racquet," "the man," and the "Adidas logo," while merging tokens that carry redundant information representing the background. Despite some mis-merged tokens in the background regions, PITOME largely preserves the distribution of all tokens. This is illustrated by the attention map, which closely resembles that of the original model. In ToMe, on the other hand, there are several wrongly merged tokens; for example, all tokens representing the "tennis racquet" are merged into a single token, and similarly for the "Adidas logo". ToMe also merges tokens representing the man's arm and the white skirt with background tokens, which can be seen from the resulting false colors. ## 3. Mathematical derivations and proofs The theory section, with its mathematical derivations and proofs, is intended to show that a graph built from tokens merged by our method preserves the spectrum of the original token graph in the ViT. To improve readability, we will emphasize the key steps in deriving our theoretical results and provide high-level summaries in the final version. ## 4. Scope and significance We focus on ToMe because it pioneered this line of research, making it a prime candidate for demonstrating our theoretical results. 
To address the reviewer's concern, we first provide a comparison table below:

| Method | Targeted architecture | Description | Weaknesses |
| --- | --- | --- | --- |
| ToMe / ToFu | ViT | BSM with odd & even indices partitioning | Risks damaging tokens in later layers |
| DiffRate | ViT | BSM with partitioning based on attention scores from the [CLS] token | Tightly coupled with the attention scores of the classification task; performs poorly on several tasks |
| PuMer | VL models | BSM + cross-modal guidance | Cannot be used for modality-independent models like CLIP; tightly coupled with a vision-language model |
| CrossGET | VL models | CGSM + cross-modal guidance | Tightly coupled with vision-language models; sensitive to token distribution |
| DCT | Language models | Uses the FFT operator to filter out high frequencies with low amplitude | Performs poorly in the vision domain |
| PITOME | Transformer-based | Identifies large clusters and partitions tokens based on their ordered energy scores; thus it is robust against the token distribution and can be used for any Transformer-based architecture | — |

We summarize our contributions: - Conceptual novelty: We introduce a new concept called the *energy score*. When applied to other applications such as LLMs, 3D point clouds, video processing, or PDEs, it could open new research directions for designing an optimal energy score that captures specific underlying data distributions. Furthermore, energy minimization theory has been well studied in fields such as chemistry and physics, so we believe this will pave the way for new research directions in these fields. - Theoretical results: Our theoretical results provide a solid foundation for understanding and applying the energy score concept, offering valuable insights for further research and development. 
- Extensive applications: To demonstrate the versatility of PITOME, we provided extensive experiments for multiple scenarios (image-text retrieval, image classification, text classification, and VQA using LLaVA-7B and 13B). Furthermore, we compared our method with SOTA compression models in each specific downstream task, showcasing the broad applicability and effectiveness of our approach. Moreover, PITOME can also benefit from existing techniques (differential merging schedules, cross-modal guidance, etc.) to further improve its accuracy. Most importantly, due to the widespread use of Transformers and the high computational resources they require, significant efficiency improvements can lead to substantial resource savings and open up new applications. --- Rebuttal Comment 1.1: Title: Follow Up on Our Rebuttal Comment: Dear Reviewer zfJf, We sincerely appreciate the time you have taken to provide feedback on our work, which will help us improve its clarity, among other attributes. We want to follow up to check whether our response fully addressed your concerns/questions before the discussion period ends. Sincerely, The Authors --- Rebuttal 2: Comment: Dear Reviewer zfJf, We have **around five hours from this post before the end of the discussion phase**. Could you please let us know whether our responses answer your main concerns? Regards Authors
Rebuttal 1: Rebuttal: First, we are grateful to the reviewers for their valuable comments and detailed feedback. We are pleased that the **reviewers recognize our energy-based token merging as a novel idea** (**Reviewer zfJf** and **Reviewer 4Z7M**) with a theoretical foundation explaining the underlying mechanism (**Reviewer zfJf** and **Reviewer ZkPm**). All reviewers also acknowledged our extensive experiments on image, text, and image-text retrieval, demonstrating that PITOME achieves higher accuracy while maintaining the expected number of FLOPs and memory usage (**Reviewer JRPM** and **Reviewer 4Z7M**). We would like to summarize the main contributions of our work: - We introduce a new concept called the energy score, which can be applied to various applications such as LLMs, 3D point clouds, video processing, or PDEs. This could open new research avenues for designing an optimal energy score that accurately represents specific underlying data distributions. Additionally, it could lead to new research directions in fields like chemistry and physics. - We provide a robust and solid theoretical foundation for understanding and applying the energy score concept, offering valuable insights for further research and development. - We demonstrated PITOME's versatility through extensive experiments in various scenarios, including image-text retrieval, image classification, text classification, and VQA using LLAVA-7B and 13B. Additionally, we compared our method with top compression models in each specific task, showcasing its broad applicability and effectiveness. PITOME's accuracy can also benefit from existing techniques to improve its performance further. - Significant efficiency improvements can lead to substantial resource savings and open up new applications due to the widespread use of Transformers and the high computational resources required. The reviewers raised several concerns, which we address in the individual responses.
We summarize the highlights of our responses as follows: - **Performance comparison with ToMe:** We discuss the advantages of our approach over the baselines, including ToMe, across various tasks with notable performance gains. Furthermore, we provide a detailed analysis of the speed and compression rate comparison between our PITOME and ToMe. - **Impact of variables $m$ and $\alpha$**: We include additional ablation studies on various values of $m$ and $\alpha$ to showcase the effectiveness and robustness of our proposed PITOME. - **Compression rate:** We clarify the merging ratio $r$ used in our paper and discuss in detail the trade-off between performance and compression rate. Finally, we have carefully addressed all the reviewers' comments and questions. We will revise and update our final version, using one additional page, based on all the reviewers' suggestions. Pdf: /pdf/2901e5becdba2ffe927e7ab7826afb821c188f2d.pdf
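As background for the ToMe comparison above, here is a minimal NumPy sketch of generic bipartite soft matching (the BSM scheme used by ToMe/ToFu, not PITOME's energy-based partitioning): tokens are split by even/odd index, each token in one set is matched to its most similar counterpart in the other, and the `r` highest-similarity tokens are absorbed into their partners by averaging. All names here are illustrative, not from any of the papers' codebases.

```python
import numpy as np

def bipartite_soft_matching(tokens: np.ndarray, r: int) -> np.ndarray:
    """ToMe-style BSM sketch: merge r tokens from set A into set B.

    tokens: (N, D) array of token embeddings; returns (N - r, D).
    """
    a, b = tokens[::2], tokens[1::2]                   # even / odd index split
    an = a / np.linalg.norm(a, axis=1, keepdims=True)  # cosine similarity
    bn = b / np.linalg.norm(b, axis=1, keepdims=True)
    sim = an @ bn.T
    best = sim.argmax(axis=1)                          # best partner in B per A-token
    order = np.argsort(-sim.max(axis=1))               # most similar A-tokens first
    merge_idx, keep_idx = order[:r], order[r:]
    merged_b = b.copy()
    counts = np.ones(len(b))
    for i in merge_idx:                                # absorb merged A-tokens into B
        merged_b[best[i]] += a[i]
        counts[best[i]] += 1
    merged_b /= counts[:, None]                        # average each merge group
    return np.concatenate([a[keep_idx], merged_b], axis=0)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 8))
y = bipartite_soft_matching(x, r=4)                    # 16 tokens -> 12
```

In deep layers many tokens become similar and averaging them can discard fine detail, which is the "risks damaging tokens in later layers" weakness noted in the table.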
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
The Impact of Geometric Complexity on Neural Collapse in Transfer Learning
Accept (poster)
Summary: The paper studies the relationship between geometric complexity (based on penultimate layer features) and the neural collapse phenomenon (especially, the variability collapse property). The main theoretical result is a bound on the CDNV-based NC metric using geometric complexity (Proposition 4.1). This is then employed to extend generalization and transfer learning bounds (proposed by previous efforts) to rely on geometric complexity. Experimental results on classification tasks (along with transfer learning settings) are presented to validate the proposed relationship. Strengths: The unified analysis of neural collapse and geometric complexity presents an interesting line of research for the community. Especially, in terms of studying seemingly disjoint phenomena under a common lens. Weaknesses: 1. There seems to be a mismatch in the $\log(2/\delta)$ term in Proposition 4.2 vs $\log(1/\delta)$ term in Proposition 4.3, which makes the main results inconsistent (yet fixable). Additionally, why do we have an "expectation" over the "error" term in the LHS of Eq (10)? Isn't the "error" itself an expectation? 2. Although the paper presents generalization bounds, a discussion on the generalization performance of the classifier (VGG-13 for Figure 1) seems to be missing. Only the neural collapse and geometric complexity plots are illustrated but a deeper relationship between the trends and the test performance is lacking. Is there an upper limit of "good" learning rates, and batch sizes above which the geometric complexity can reduce but the test performance can get worse? 3. Similar to the above point, currently it seems like increasing the learning rate, increasing batch size, and increasing the regularization parameter value results in lower neural collapse values. Ideally, neural collapse is applicable during the terminal phase of training (TPT) where the training accuracy is $1$. 
Does this mean that all the experiments (at least for Figure 1) lead to TPT during training? nit: Please fix the notation for the norms, as it is unclear whether $||\cdot||$ represents $||\cdot||_2$ or $||\cdot||_F$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Since the geometric complexity-based generalization/transfer learning bounds are upper bounds of the existing neural collapse-based bounds, what can be a potentially new implication of your results, given that it is common practice to do a hyper-parameter search over learning rates, batch sizes, and regularization parameter values? 2. The paper does not discuss any explicit regularization mechanism that aims to reduce the geometric complexity. What are the potential challenges in designing such a regularization mechanism? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
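For context on the CDNV-based NC metric referenced in this review: following Galanti et al. (2022), the class-distance normalized variance of two classes is their average within-class feature variance divided by twice the squared distance between class means, and it tends to zero under variability collapse. A minimal NumPy sketch under that definition (illustrative names, not the paper's code):

```python
import numpy as np

def cdnv(feats_i: np.ndarray, feats_j: np.ndarray) -> float:
    """Class-distance normalized variance between two classes of features.

    CDNV = (Var_i + Var_j) / (2 * ||mu_i - mu_j||^2); it tends to 0 under
    neural collapse, when within-class variability vanishes.
    """
    mu_i, mu_j = feats_i.mean(axis=0), feats_j.mean(axis=0)
    var_i = np.mean(np.sum((feats_i - mu_i) ** 2, axis=1))
    var_j = np.mean(np.sum((feats_j - mu_j) ** 2, axis=1))
    return (var_i + var_j) / (2.0 * np.sum((mu_i - mu_j) ** 2))

rng = np.random.default_rng(0)
mu_a, mu_b = np.array([3.0, 0.0]), np.array([0.0, 3.0])
spread = rng.normal(size=(500, 2))
loose = cdnv(mu_a + 1.0 * spread, mu_b + 1.0 * spread)  # high within-class variance
tight = cdnv(mu_a + 0.1 * spread, mu_b + 0.1 * spread)  # near-collapsed classes
```

Shrinking the within-class spread while keeping the class means fixed drives the metric toward zero, mirroring the variability-collapse behavior the paper bounds via geometric complexity.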
Rebuttal 1: Rebuttal: Thank you for your comments. We are very happy that you found that the “unified analysis of neural collapse and geometric complexity presents an interesting line of research for the community [...] in terms of studying seemingly disjoint phenomena under a common lens.” We have addressed your comments to the best of our ability below. Please consider raising your score if your concerns have been resolved. We'd also like to respectfully note that your concerns seem to focus around only minor aspects of our work that are easily fixable (i.e., clarification, notation, typos, addition of plots). Could you please help us understand which of your concerns prompted you to a "confident reject” so that we can address it? > *"There seems to be a mismatch in the log⁡(2/𝛿) term in Proposition 4.2 vs log⁡(1/𝛿) term in Proposition 4.3, which makes the main results inconsistent (yet fixable). Additionally, why do we have an "expectation" over the "error" term in the LHS of Eq (10)? Isn't the "error" itself an expectation?"* Thank you for noticing the typo: there should be $\log(2/\delta)$ instead of $\log(1/\delta)$ in Prop. 4.3. As for the second expectation in the LHS of Eq. (10) it is taken over the sample S used to produce the classifier $h_{f,S}$ on line 230 (similarly to [22, Proposition 5]). We will correct the typo and make the second point clearer in the paper. > *"Although the paper presents generalization bounds, a discussion on the generalization performance of the classifier (VGG-13 for Figure 1) seems to be missing. Only the neural collapse and geometric complexity plots are illustrated but a deeper relationship between the trends and the test performance is lacking."* We included plots for the test accuracy in the rebuttal document (see Figure 1, 3rd column). As referenced in our paper, it has already been observed in [22] that lower levels of NC are indicative of higher test accuracy. 
Similarly, in [17] it has been shown that lower levels of GC are also characteristic of higher test accuracy. We will add a remark to this effect in the paper and include similar plots for our experiments in the appendix. > *"Is there an upper limit of "good" learning rates, and batch sizes above which the geometric complexity can reduce but the test performance can get worse?"* Yes, you are correct. As for any form of regularization (implicit or explicit), test performance degrades beyond a certain level of the regularization rate, while the regularized quantity keeps decreasing at the expense of the original objective. >*"Similar to the above point, currently it seems like increasing the learning rate, increasing batch size, and increasing the regularization parameter value results in lower neural collapse values. Ideally, neural collapse is applicable during the terminal phase of training (TPT) where the training accuracy is 1. Does this mean that all the experiments (at least for Figure 1) lead to TPT during training?"* Yes, that is correct and a good clarification. All our experiments lead to training accuracy 1. We included these plots for our main experiments in the attached rebuttal document (see Figure 1, 1st and 2nd columns) and will add similar plots for our additional experiments to the appendix. > *"Please fix the notation for the norms as it is unclear if $||\cdot||$ represents $||\cdot||_2$ or $||\cdot||_F$."* Thank you. We will clarify the norm notation in the paper as you suggest. > *"Since the geometric complexity-based generalization/transfer learning bounds are the upper bounds of existing neural collapse-based bounds, what can be a potentially new implication of your results? since it is common practice to do a hyper-parameter search over learning rates, batch sizes, and regularization parameter values."* To our knowledge, there are no known theoretical mechanisms which control neural collapse. Our bound in Prop.
4.1 shows that the same mechanisms that control GC also control NC. This is significant because (see [17] from the paper references) there are a number of experimental mechanisms which control the GC and, furthermore, a strong theoretical justification as to why [17, Thm 5.1]. The current work provides a novel justification that these same mechanisms also control NC. This implication is theoretically motivated via our Prop 4.1 and experimentally validated. A second new implication that can be derived from our bound in Prop 5.1 is that pretrained networks with lower GC have better transfer learning performance on new tasks. These two new implications are indeed the key theoretical findings in our paper. Devising practical schemes based on these theoretical findings is beyond the scope of the current paper, yet opens a new avenue for future work. > *"The paper does not discuss any explicit regularization mechanism that aims to reduce the geometric complexity. What are the potential challenges in designing such a regularization mechanism?"* This is a good point. While direct regularization by GC computed at the logit layer has been previously studied (for instance in [17]) and found beneficial, direct regularization by GC computed at the embedding layer becomes impractical because of the much higher dimension of the embedding layer, resulting in a very large computational cost. That being said, explicit regularization by GC computed at the logit layer provides a direct way to control GC computed at the embedding layer. We added this experiment (see rebuttal document Fig. 3) and observed the predicted effect on the embedding GC (rebuttal Fig. 3, left and middle) and on neural collapse (rebuttal Fig. 3, right). We will add a paragraph to our paper about the challenge of directly regularizing for embedding GC. --- Rebuttal Comment 1.1: Comment: Thank you for the responses and also the additional experimental results.
- In my opinion, the authors' discussion on explicit GC regularization is key to improving the contribution. Although it is computationally inefficient, it is always helpful to have experiments which validate the claim in this paper. Based on the rebuttal PDF, it seems like NC indeed reduces when explicit GC regularization is employed. - I would request the authors to focus more on this aspect (techniques, limitations, tuning, etc. of such a regularizer) to further strengthen their paper. Simply showing correlations by tuning common hyper-parameters might not fully justify your work. I have increased the score. Good luck. --- Reply to Comment 1.1.1: Title: Thank you! Comment: We are glad that we were able to address your concerns. And we agree, this is an important (and potentially useful) aspect, despite the inherent computational inefficiencies. Including these experiments has helped to improve the exposition. Thanks again for the suggestion!
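The explicit GC regularization discussed in this thread can be illustrated in a toy linear setting: for $f(x) = Wx$ the input gradient is $W$ itself, so $\mathrm{GC} = \mathbb{E}\|\nabla_x f\|_F^2 = \|W\|_F^2$ and the GC penalty reduces to an L2 penalty on the weights (real networks instead need an expensive double-backward pass, which is the cost the rebuttal mentions). A hedged NumPy sketch, not the paper's actual experiment:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
W_true = rng.normal(size=(3, 5))
Y = X @ W_true.T

def train(lam: float, steps: int = 2000, lr: float = 0.01) -> np.ndarray:
    """Least squares with an explicit GC penalty. For linear f(x) = Wx,
    GC = E||grad_x f||_F^2 = ||W||_F^2, so the penalty is plain L2."""
    W = np.zeros((3, 5))
    for _ in range(steps):
        # gradient of mean squared error plus gradient of lam * ||W||_F^2
        grad = 2 * (W @ X.T - Y.T) @ X / len(X) + 2 * lam * W
        W -= lr * grad
    return W

gc_unreg = np.sum(train(lam=0.0) ** 2)  # GC of the unregularized solution
gc_reg = np.sum(train(lam=0.5) ** 2)    # GC under explicit regularization
```

The regularized run ends with strictly lower GC, mirroring (in the simplest possible setting) the rebuttal's Fig. 3 observation that explicitly penalizing GC controls it.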
Summary: This paper examines the relationship between geometric complexity (GC), neural collapse (NC), and transfer learning performance in deep neural networks. The key contributions are: * Proposing geometric complexity as a measure that connects the flatness of the loss surface and neural collapse * Deriving theoretical bounds showing GC controls NC under certain assumptions * Empirically demonstrating that mechanisms affecting GC (learning rate, batch size, regularization) also influence NC * Showing lower GC during pre-training leads to better transfer learning, especially for few-shot tasks * Proving a new generalization bound in terms of GC * Demonstrating GC can be efficiently estimated on small samples The authors argue that GC can serve as an informative proxy for transfer learning potential and provide a unifying framework for understanding implicit biases in deep learning. By bridging concepts from optimization geometry, representation learning, and transfer learning, the paper aims to provide deeper insights into the fundamental mechanisms driving the success of modern deep learning techniques. Strengths: **Originality**: The paper presents a novel perspective connecting several concepts in deep learning theory. Its original contribution is using geometric complexity as a bridge between loss surface geometry and neural collapse. This approach provides a fresh lens through which to view the implicit biases in deep learning, potentially unifying several strands of research. Using GC as a progress metric for transfer learning is innovative and could open up new avenues for improving pre-training techniques. **Quality**: The empirical evaluation thoroughly examines multiple datasets, architectures, and hyperparameter settings. The authors have made a commendable effort to validate their hypotheses across various conditions, including different network architectures (ResNet, VGG), datasets (CIFAR, MNIST, Fashion-MNIST), and transfer learning scenarios. 
The ablation studies exploring the impact of learning rate, batch size, and regularization on GC and NC provide valuable insights into the dynamics of deep network training. While limited in scope, the theoretical results give some formal grounding for the empirical observations and offer a starting point for further analytical work in this area. **Clarity**: The paper is well-structured and communicates the main ideas. The authors have presented a complex set of ideas in a logical flow, starting from the theoretical foundations and moving to empirical validation. The figures illustrate key results, particularly the relationships between GC, NC, and transfer performance. The use of color-coding and consistent formatting across figures aids in interpretation. The pseudo-code and detailed experimental setup descriptions in the appendix are beneficial for reproducibility. **Significance**: If the claims hold up, this work could provide valuable insights into the mechanisms behind transfer learning and self-supervised learning. The proposed geometric complexity measure may help analyze and improve deep learning models. The potential impact extends beyond just theoretical understanding. If GC can serve as a reliable proxy for transfer learning potential, it could guide the development of more effective pre-training strategies and architectures. Furthermore, the connection between GC and NC could help explain why certain training practices (like large batch training or specific learning rate schedules) are effective, potentially leading to more principled approaches to hyperparameter tuning. Weaknesses: **Limited theoretical foundation**: While the paper provides some theoretical results, the assumptions required (e.g., Poincaré inequality) are quite strong and may not hold in practice for real datasets and architectures. The connection between the linear model analysis and deep neural networks is not strongly justified. 
This is a significant limitation, as it's unclear how much of the theory translates to practical scenarios. The authors could strengthen this aspect by providing empirical evidence that these assumptions hold in realistic settings or deriving weaker results under more general conditions. Additionally, the paper could benefit from a more thorough discussion of the implications and limitations of these theoretical results. **Experimental limitations**: The empirical evaluation focuses on image classification tasks and relatively small datasets. The generalizability of the results to larger datasets, other domains (e.g., NLP), or more complex architectures is unclear. This narrow focus raises questions about the broader applicability of the findings. For instance, it's unclear whether the relationships between GC, NC, and transfer performance would hold for large-scale pre-training scenarios like those used in modern language models. The paper would be significantly strengthened by including experiments in a broader range of tasks and scales or at least by providing a more detailed discussion of the potential challenges in scaling up the approach. **Lack of comparison to related metrics**: The paper does not thoroughly compare geometric complexity to other related complexity measures or generalization bounds from the literature. This makes it difficult to assess the relative merits of GC. There's a rich body of work on complexity measures for neural networks, including various forms of capacity measures, stability-based bounds, and PAC-Bayesian approaches. A comprehensive comparison would help situate GC within this broader context and clarify its unique contributions. Such a comparison could also help identify scenarios where GC might be particularly advantageous or potentially suboptimal compared to existing approaches. 
**Causality vs correlation**: While the paper shows correlations between GC, NC, and transfer performance, it does not conclusively demonstrate a causal relationship. Alternative explanations for the observed trends have not been thoroughly explored. This is a critical limitation, as it leaves the possibility that GC is merely a proxy for some other underlying factor driving NC and transfer performance. The paper would be strengthened by a more rigorous causal analysis, perhaps through interventional studies or by controlling for potential confounding variables. Additionally, discussing potential mechanisms by which GC might causally influence NC and transfer learning would add depth to the analysis. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How does geometric complexity compare to other measures of model/representation complexity like Fisher information, gradient norm, etc.? A more thorough comparison would strengthen the paper. 2. The theoretical results rely on strong assumptions like the Poincaré inequality. Can you provide empirical evidence that these assumptions hold for real datasets/models? Or derive results under weaker assumptions? 3. The paper shows correlations between GC, NC, and transfer performance but does not conclusively prove causation. Have you considered alternative explanations or confounding factors? 4. How computationally expensive is it to measure GC during training? Is it feasible to use it as a regularization term or early stopping criterion in practice? 5. The generalization bound in terms of GC is interesting, but how tight is it empirically? How does it compare to other generalization bounds in the literature? 6. Have you explored using insights about GC to guide architecture design or improve training algorithms? Demonstrating practical benefits would strengthen the paper. 
Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors acknowledge some limitations, like the focus on image classification tasks and relatively small datasets. However, they could go further in discussing potential shortcomings: * The strong assumptions required for the theoretical results may limit their applicability. * The computational cost of measuring GC is not thoroughly addressed. * Potential negative consequences of optimizing for low GC are not explored. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your thoughtful review. We are very happy that you found our approach to be a “novel perspective” as well as an “original contribution” and that it “could open up new avenues for improving pre-training techniques” and “provide valuable insights into the mechanisms behind transfer learning and self-supervised learning”. We are also glad that you found the “empirical evaluation” to be “thoroughly” done and that the paper was “well-structured” with “ideas communicated clearly”. Below are our answers to your concerns. They have been very helpful in further strengthening the paper. We ask that you please consider raising your score if your concerns have been resolved. >*"The connection between the linear models and DNNs is not justified"* There is no mention of linear models in our paper. Can you please reference the paragraph creating confusion so that we can clarify? >*"Limited theoretical foundation: [..] the assumptions required (e.g., Poincaré inequality) are quite strong [..] the authors could strengthen this by providing evidence that these assumptions hold in realistic settings"* This is a good point but we respectfully disagree with the idea that the Poincare Inequality (PI) is an overly strong or unrealistic assumption. We agree our paper would be strengthened by clarifying this point. Here, we include a short summary of key facts and can provide a more detailed exposition in the Appendix. Note that the Gaussian distribution; mixtures of Gaussians with equal (or different) variances; mixtures of Gaussians and sub-Gaussians; mixtures of uniform and Gaussian measures; and any log-concave probability measure all satisfy a PI ([1, 2] below). The same is true for most distributions of importance in probability and statistics ([3] below). The PI has also been assumed to hold for real life image datasets, where it has been used to help improve GAN convergence ([4] below). 
It is also a key assumption to understand the role of over-parameterization in generalization as happens for large NNs ([5] below). On the contrary, non-PI distributions are considered pathological; they can be constructed for instance by artificially concatenating distributions with fully disjoint support. The only restrictive assumption for the model is that it's differentiable wrt the input, which is a widely assumed property in the literature. Furthermore, we examine various real-life datasets like CIFAR10, CIFAR100, and ImageNet as well as common architectures like VGG and ResNet. [1] Bakry et al. A simple proof of the Poincaré inequality for a large class of probability measures, '08 [2] Schlichting, Poincaré and log-Sobolev inequalities for mixtures, '19 [3] Pillaud-Vivien et al, Statistical Estimation of the Poincaré constant and Applications..., AISTATS '20 [4] Khrulkov et al, Functional Space Analysis of Local GAN Convergence, ICML '21 [5] Bubeck et al, A universal law of robustness via isoperimetry, NeurIPS '21 >*"Experimental limitations: [..] The generalizability of the results to other domains (eg, LLMs) is unclear [..]"* This is a good point. Thanks for raising it. We will definitely clarify this aspect in our paper. There is a limitation of our work when it comes to extending to LLMs. However, this limitation is not specific to our work per se but is a general limitation of the application of NC to LLMs (see Wu & Papyan, Linguistic Collapse: Neural Collapse in (Large) Language Models, '24, which outlines limitations). The main problem is that for LLMs the embedding dimension is in general lower than the number of classes (vocabulary size), making the NC simplex impossible to exist. Extending the notion of NC to LLMs is an open question beyond our scope. >*"How does GC compare to other measures [..]"* This is indeed interesting. However, this comparison has been done already; e.g.
[17] introduces GC and thoroughly relates it to the gradient norm, and [41] compares it with other complexity measures. The relation with the Fisher information and the gradient norm has also been studied (Jastrzebski et al., Catastrophic Fisher Explosion, ICML '21). These works indicate that GC, as a measure in function space, is a notion of complexity similar to the gradient norm or the Fisher information in parameter space. We are happy to add an Appendix section on this. >*"The paper shows correlations with GC [..] but not [..] causation"* We added an experiment (rebuttal: Fig. 3) that directly controls GC by explicitly regularizing it, mitigating confounding factors. We observe the same predictions: lower GC produces lower NC. Note, here we regularized w.r.t. the GC computed at the logit layer rather than the embedding GC. This is because the high dimensionality of the embedding layer makes taking the gradient of the embedding GC prohibitively expensive. >*"How computationally expensive is it to measure GC [..]? Is it feasible to use as a regularization term?"* We have not yet tried to use GC as an early stopping criterion but this could be a good idea. Measuring GC during training is relatively cheap, easy, and robust, especially if one samples it as explained in Section 4.2; see also Fig. 2. We have explored using GC as a regularization term (see above & rebuttal: Fig. 3), as have others, e.g. [17]. >*"The generalization bound in terms of GC is interesting, but how tight is it empirically?"* We added an experiment (rebuttal: Fig. 2) where we plotted the LHS and RHS (excepting the Lipschitz term) of the bound in Eq. 10 of Prop 4.3, which we will add to our paper. On that plot the bound is not vacuous and quite tight! A full comparison with other generalization bounds is beyond our scope. Our main goals are showing that 1) the mechanisms that control GC also control NC and 2) solutions with lower GC are more performant for transfer learning.
>*"Have you explored using GC insights to guide models or algorithms?"* This is very interesting, but beyond the scope of the current paper. --- Rebuttal Comment 1.1: Comment: Thank you again for your review. We hope that the concerns in your original review were sufficiently addressed in our response. In particular, the justification of the Poincare inequality assumption for data distributions, the demonstrated empirical tightness of the generalization bound (our Propn 4.3), and the explicit GC regularization experiments exploring causality of these phenomena. We also hope we addressed the concerns regarding experimental limitations, particularly with respect to the applicability to modern language models and LLMs. As the discussion period ends, we wanted to see if there is anything that needs additional clarification. Thank you.
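Regarding the cost of measuring GC by sampling (the rebuttal's reference to Section 4.2): GC is an expectation of the squared Frobenius norm of the input Jacobian, so it can be Monte Carlo estimated on a small sample. A minimal NumPy sketch for a toy two-layer tanh network, with the Jacobian written in closed form (in practice one would use autograd on the real model; all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(32, 10)) / np.sqrt(10)  # toy two-layer tanh network
W2 = rng.normal(size=(4, 32)) / np.sqrt(32)

def gc_estimate(X: np.ndarray) -> float:
    """Monte Carlo estimate of E_x ||df/dx||_F^2 for f(x) = W2 tanh(W1 x).

    The input Jacobian is W2 diag(1 - tanh^2(W1 x)) W1.
    """
    total = 0.0
    for x in X:
        s = 1.0 - np.tanh(W1 @ x) ** 2  # tanh' at the hidden layer
        J = W2 @ (s[:, None] * W1)      # (4, 10) input Jacobian
        total += np.sum(J ** 2)
    return total / len(X)

X = rng.normal(size=(256, 10))
full = gc_estimate(X)        # estimate on 256 samples
small = gc_estimate(X[:16])  # cheap estimate on a small sub-sample
```

The small-sample estimate already lands close to the larger one, which is the "cheap, easy, and robust" sampling behavior the rebuttal describes (cf. their Fig. 2).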
Summary: The paper explores the relationship between neural collapse (NC) and geometric complexity (GC). It presents both theoretical and empirical evidence showing that geometric complexity is a robust metric across various variables. By substituting the NC metric with GC, the paper introduces a generalization bound based on geometric complexity. Furthermore, it highlights the significance of GC in transfer learning, demonstrating that by controlling the pre-training GC, downstream NC can also be controlled, leading to improved transfer learning results. Strengths: * The paper is overall well-written and easy to follow, the interpretation of each theoretical results is clear and intuitive. * The relationship between NC and GC is quite interesting. * Most of the empirical results align well with the theoretical part, demonstrating the validity of the results. Weaknesses: Some empirical results can be better aligned with the theoretical counterpart and additional experiments could be added to better support the claims. For example: * In proposition 4.1, NC is upper bounded by geometric collapse up to some constant scaling. What's the range of this constant $c$, does it purely depend on $Q$? If yes, it would be better to directly plot NC and RHS in (6) in the same plot to demonstrate the validity of the bound; and if no, it would be interesting to show how different settings examined in Figure 1 affect $c$. * The empirical evidence presented in Figure 2 clearly demonstrates the robustness of geometric complexity. However, I have some concerns regarding the Lipschitz notion. Neural networks are known to be not so Lipschitz, and making them more Lipschitz is an active area of research aimed primarily at improving robustness. Is Proposition 4.2 numerically verifiable by calculating the Lipschitz constant of the tested neural networks? Technical Quality: 3 Clarity: 3 Questions for Authors: * In equation (7), does the distance $d$ represent the norm distance? 
This should be stated clearly. * Both the paper and the prior work (Galanti et al, 2022) rely on the data assumption where the source and transfer datasets come from the same distribution $Q$. However, in practical settings, source and transfer datasets typically have domain gaps which can potentially violate the data assumption. Would the results in the paper still hold in this case? For example, pre-training on ImageNet/Cifar-100 while fine-tuning on DTD? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review. We are very happy that you found the work “well-written” and “easy to follow”, making the interpretation of the theoretical results “clear” and “intuitive”. We are also glad that you found the relation between “GC” and “NC” to be “quite interesting” and that the “empirical results align well with the theoretical part”. Below you’ll find below our best effort to answer your questions. Please consider raising your score if you feel your concerns have been resolved. >*"In proposition 4.1, NC is upper bounded by geometric collapse up to some constant scaling. What's the range of this constant c, does it purely depend on Q. If yes, it would be better to directly plot NC and RHS in (6) in the same plot to demonstrate the validity of the bound; and if no, it would be interesting to show how different settings examined in Figure 1 affect c."* Yes, the Poincare constant depends exclusively on the distribution Q. Stacking the plots in Figure 1 as you are suggesting is a great idea, which we will add in our paper. On these plots, by comparing the magnitude RHS and LHS of (6), we can roughly see that the c ~ 1000 (lower bound) for CIFAR-10 in our setting. The Poincare constant can take very high values depending on the spread of the distribution. For instance, for a standard normal distribution the Poincare constant is 1 while for a multivariate normal distribution the Poincare constant equals the largest eigenvalue of its covariance matrix which can be any positive real number. Intuitively speaking, the Poincaré constant will increase if the space is stretched in any direction and decrease if it is compressed in any direction. >*"The empirical evidence presented in Figure 2 clearly demonstrates the robustness of geometric complexity. However, I have some concerns regarding the Lipschitz notion. 
Neural networks are known to be not so Lipschitz, and making them more Lipschitz is an active area of research aimed primarily at improving robustness. Is Proposition 4.2 numerically verifiable by calculating the Lipschitz constant of the tested neural networks?"* The numerical experiment in Figure 2 (plot 1) shows that in practice the empirical GC estimate may approach the true value much faster than the bound specifies. Namely, after only 10 samples we already obtain a value very close to what we obtain with 50 times more samples, which is much better than a 1/sqrt(#samples) dependence, even ignoring the Lipschitz constant. This makes us believe that a much tighter bound should be possible here. We agree with you that this bound may be looser than what holds in practice. >*"In equation (7), does the distance d represent the norm distance? This should be stated clearly."* Thank you for pointing this out. We will clarify that point in the paper. “d_ij” represents the Euclidean distance between the mean for class i and the mean for class j. >*"Both the paper and the prior work (Galanti et al., 2022) rely on the data assumption where the source and transfer datasets come from the same distribution Q. However, in practical settings, source and transfer datasets typically have domain gaps which can potentially violate the data assumption. Would the results in the paper still hold in this case? For example, pre-training on ImageNet/CIFAR-100 while fine-tuning on DTD?"* This is a very good point. In theory, any source and target distribution can always be concatenated into a single larger distribution. However, as you point out, issues may arise when “gaps” are created by this process. Namely, when the supports of the source and target distributions do not overlap, this process results in an overall distribution with non-connected support. 
This particular configuration is a setting ripe for violations of the Poincaré inequality even if the source and target distributions both satisfy it. If this happens, then the range of validity of our bounds, as well as our predictions concerning the impact of the learning rate, batch size, and regularization on the neural collapse and transfer performance, may not hold. However, instead of seeing this as a limitation, we interpret it as a feature that may help us understand what compatibility conditions are necessary for a model pre-trained on a source distribution to transfer well to a target distribution. Namely, in line 286, we give compatibility conditions for the transfer bound in Propn 5.1 to be meaningful. This condition in particular implies that no such gaps are created by concatenating the source and target distributions. --- Rebuttal Comment 1.1: Comment: Thank you again for your review. We hope that the concerns in your original review were sufficiently addressed in our response, particularly the application of the Poincaré inequality and the role of the constant c in our Propn 4.1. As the discussion period ends, we wanted to see if there is anything that needs additional clarification. Thank you. --- Rebuttal Comment 1.2: Title: Response to rebuttal Comment: Thanks to the authors for the detailed response. Given that the authors said "c ~ 1000 (lower bound) for CIFAR-10 in our setting", would this make the bounds in Proposition 4.1 and 4.3 vacuous? --- Reply to Comment 1.2.1: Comment: This is a good question, but taking the value of c near 1000 does not in fact make the bounds in Proposition 4.1 and Proposition 4.3 vacuous. Figure 1 in our paper plots the Geometric Collapse (i.e. the RHS of the inequality in Propn 4.1, excepting this constant c) in the middle column and the Neural Collapse (i.e. the LHS of the inequality of Propn 4.1) in the right column. 
From these plots, by comparing the magnitudes of the Neural Collapse and the Geometric Collapse, we see that indeed c ~ 1000 for CIFAR-10 in our setting. We also directly explored the tightness of the bound derived in Proposition 4.3 during the rebuttal. Please see the 1-page pdf which we attached to our rebuttal response. Figure 2 of that rebuttal pdf shows empirically the tightness of the bound in Propn 4.3. For that experiment we trained a VGG-13 model on CIFAR-10 with 5 random seeds. The plot shows the LHS of the inequality in Propn 4.3 (i.e. the nearest mean classifier error on the test set) in blue and the RHS of the inequality in Propn 4.3 (excepting the Lipschitz term, which we expect to be negligible due to Fig 2 and (8) of our paper) in orange. Note that here we take $c = 1000$, $p = 1024$ (the embedding dimension of the feature map) and $m_c = 5{,}000$ (the number of samples per class). Indeed, the bound is not vacuous and in fact appears quite tight! Thanks again for your careful and thoughtful read. We hope we've been able to successfully address your concerns.
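For readers unfamiliar with the quantity on the LHS of Propn 4.3, the nearest-mean (nearest-class-mean) classifier can be sketched as below. This is a toy reconstruction on synthetic two-class clusters, not the authors' CIFAR-10/VGG-13 features; the cluster locations and sizes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical embeddings: two well-separated class clusters in R^8.
train = {c: rng.normal(loc=4.0 * c, scale=0.5, size=(100, 8)) for c in (0, 1)}
test_x = np.vstack([rng.normal(loc=4.0 * c, scale=0.5, size=(50, 8)) for c in (0, 1)])
test_y = np.repeat([0, 1], 50)

# Nearest-mean classifier: assign each test point to the closest class mean.
means = np.stack([train[c].mean(axis=0) for c in (0, 1)])                 # (2, 8)
dists = np.linalg.norm(test_x[:, None, :] - means[None, :, :], axis=-1)   # (100, 2)
pred = dists.argmin(axis=1)
error = (pred != test_y).mean()  # the kind of test-error quantity bounded in Propn 4.3
```

With tightly collapsed clusters (low within-class variance relative to the distance between class means), this error is near zero, which is the intuition linking neural collapse to nearest-mean transfer accuracy.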
Rebuttal 1: Rebuttal: We thank the reviewers for your thoughtful and thorough reviews. We are grateful that you found the paper "well-written" and "well-structured" and found our "novel perspective" to be an "original contribution" which you think “could open up new avenues for improving pre-training techniques” and “provide valuable insights into the mechanisms behind transfer learning and self-supervised learning”. This is great to hear and much appreciated! Furthermore, your thoughtful comments and reviews have helped us to substantially improve and strengthen the current work. The overall goal of this paper is to connect two previously unconnected areas of research in machine learning and leverage that insight to better understand their role in successful transfer learning. We achieve this goal in two steps. First, we clearly demonstrate the relationship between neural network geometric complexity and embedding neural collapse (Section 4). We show through theory and verify empirically that mechanisms which regularize the GC in turn put measurable pressure on the neural collapse (Section 4.1: Prop 4.3 and Fig. 1). Second, we show that pre-trained networks with lower GC promote lower NC values on new unseen target classes, which enables improved transfer accuracy during fine-tuning (Section 5: Prop 5.1 and Fig. 3). We tried our very best below to satisfy your requests and answer your questions within the allotted time frame. **Please find attached a 1-page document containing figures referenced in our responses to individual reviewers' comments.** Please consider raising your scores accordingly if your requests have been met and your questions answered. Pdf: /pdf/4d656e41f0ede804abd782cc6cd6652ac3397099.pdf
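For reference, the Poincaré inequality invoked throughout this thread can be stated in a standard textbook form (this restates the rebuttal's claims about Gaussians; it is not necessarily the paper's exact Proposition 4.1 notation):

```latex
% Poincaré inequality: a distribution $Q$ satisfies it with constant $c$ if,
% for all sufficiently smooth $f$,
\operatorname{Var}_{x \sim Q}\!\big[f(x)\big]
  \;\le\; c \,\mathbb{E}_{x \sim Q}\!\big[\|\nabla f(x)\|^{2}\big].
% Gaussian example, consistent with the rebuttal above:
% for $Q = \mathcal{N}(\mu, \Sigma)$ the optimal constant is
% $c = \lambda_{\max}(\Sigma)$, so $c = 1$ for a standard normal.
```

Stretching the space in any direction inflates $\lambda_{\max}(\Sigma)$, matching the rebuttal's intuition that the constant grows with the spread of the distribution.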
NeurIPS_2024_submissions_huggingface
2024
Relationship Prompt Learning is Enough for Open-Vocabulary Semantic Segmentation
Accept (poster)
Summary: This paper tackles open-vocabulary semantic segmentation, a task aiming at per-pixel predictions of image inputs on any given classes. Existing methods are built on pre-trained vision-language models like CLIP. Different from CLIP, which achieves coarse open-vocabulary classification, open-vocabulary segmentation extends this understanding capability to the pixel level. This paper claims that existing approaches require substantial compute for training. The authors develop a Relationship Prompt Network that features dense image-text correlations across all layers. Their approach also includes several training techniques, including mixture-of-experts, multi-scale input and prompt learning. Experimental results show that the proposed RPN achieves performance gains against published methods like ZegCLIP [19], MaskCLIP [8] and FreeSeg [10]. Strengths: - **Performance.** The proposed RPN outperforms several published methods, such as ZegCLIP, MaskCLIP and FreeSeg. - **Inference efficiency.** Table 1 shows that the proposed RPN is trained with fewer learnable parameters. In inference, RPN requires less computation in FLOPs. Weaknesses: - **Unclear unique contributions.** I do not quite understand the unique contribution of this paper. Does this paper give new insights to the community? The proposed relationship prompt is a combination of powerful tuning methods that have been proven effective, including dense image-text attentions, mixture-of-experts, multi-scale inputs and prompt learning. None of these modules are emphasized and I cannot see the unique value of applying these techniques. Performance gains from these techniques do not surprise me that much. - **Many typos and confusing claims**. Please refer to Questions 1-11 for details. I think these issues should be taken care of before this paper is considered for publication. Technical Quality: 2 Clarity: 1 Questions for Authors: - Line 32, “it is challenging to apply prompt learning solely to VLM for OVSS”. Has this claim been checked? 
Why is applying prompt learning to a VLM for OVSS challenging? Does “challenging” refer to “technically challenging”? - Line 49-51, “we find they can construct an image-text relationship attention map via the attention mechanism”. "Image-text relationship attention map" is a confusing term. This should refer to a well-defined phrase in this field, such as language-aware deep fusion in [A], which is an influential paper in this field. - Figure 1, how are these attention maps plotted? - Line 58-61, "we propose linear projection module comprising merely two individual linear layers, which maps the image and text feature into a shared space, and then computes their matrix product to produce the results." What should "matrix product" refer to? Element-wise or column-wise? - Typos. Line 67 and 69, RPN to RPM. There is potential confusion between RPM and RPN. - Line 113-114, "reducing parameter consumption". I do not think this is a correct claim. Adding dense cross-attentions will surely increase parameters for inference. I do not see techniques in this paper that reduce parameters, like quantization. - Line 113-125, referring to a figure here would be better for illustration purposes. - Figure 2, what are the M2oE, ITP and APG blocks? A bit confusing. - Table 4, what does the first line mean and what is the difference between the first and the second line? - Figure 7, only legends of heads 1 and 2 are given, while others are missing. In this case, I cannot understand what is presented in this figure. - Table 1, instead of "params", this should be "learnable params", right? [A] Grounded Language-Image Pre-training. CVPR 2022. Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Authors have included limitations of proposed RPN in Appendix D. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Rvjz Thank you so much for acknowledging the strengths of our method. We have carefully considered your constructive and insightful comments, and here are the answers to your concerns. **Q1. Unclear unique contributions.** Please refer to **General Response-Q1**. **Q2. The confusing claim that *it is challenging to apply prompt learning solely to VLM for OVSS* should be argued.** We would like to re-clarify the claim. We point out two reasons for it (in L.32-41): (1) Current prompts provide only image-level granularity, which prevents VLMs from performing tasks related to pixel-level visual localization. (2) A lack of an image-text relationship hinders the exploration of VLMs’ potential for open-vocabulary scene understanding. To demonstrate this, we compare seven current PEFT methods applied to the baseline (in L.618-638), and show the comparison in Tab.9 and 10. We show part of Tab.9 as:
|(VOC)|#Params(M)|pAcc|mIoU$_s$|mIoU$_u$|hIoU|
|:-|:-:|:-:|:-:|:-:|:-:|
|Baseline|154.5|84.1|83.5|31.2|45.4|
|VPT|4.0|90.9|81.0|52.9|64.0|
|LoRA|4.0|91.3|82.2|53.1|64.5|
|**RPN(ours)**|**3.2**|**95.8**|**93.1**|**84.6**|**88.6**|
We would like to re-emphasize the conclusion: it is challenging to apply prompt learning solely to VLM for OVSS, and our method shows significant improvement (at least 20% mIoU improvement on VOC). **Q3. Concerns about the difference between the image-text relationship attention map and language-aware deep fusion in [1\*].** We analyzed their differences from three perspectives: (1) Motivation: We aim to directly output pixel-level predictions using only the frozen VLM, while [1*] aims to propose a grounded language-image pre-training method. (2) Structure: We use a training-free method to construct our map, while [1*] uses a trainable cross-attention module to construct it; we only need to introduce our map for the image encoder, while [1*] also introduces it for the text encoder. 
Thus, our method is more concise and direct. (3) Training cost: we do not need additional training sets, a parameter-intensive adapter, or large-scale pre-training, all of which are required by [1*]. In summary, our method achieves OVSS with lower training cost and a simpler structure. [1*] Grounded Language-Image Pre-training. **Q4. Fig.1, how are these attention maps plotted?** We directly visualize the results of eq.2, ${{m}}^i = {p}^i \cdot (t \odot {g}^i)^\top$, as a heat map. **Q5. What should matrix product refer to? Element-wise or column-wise?** Let the image and text features be denoted $p \in \mathbb{R}^{n\times d}$ and $t\in \mathbb{R}^{c\times d}$. The matrix product between them is $O = pt^\top \in \mathbb{R}^{n\times c}$, i.e. $O_{ij} = \sum_{k=1}^d p_{ik}t_{jk}$. **Q6. Typos. Line 67 and 69, RPN to RPM. There is potential confusion between RPM and RPN.** In fact, RPM and RPN refer to the **R**elationship **P**rompt **M**odule and the **R**elationship **P**rompt **N**etwork, respectively. As illustrated in L.61-62, RPN consists of RPM and the VLM. **Q7. Line 113-114, concerns about reducing parameter consumption.** We clarify the sentence in L.113-114: our method only adds 3M trainable parameters to the frozen VLM to achieve OVSS, so our method is low-cost during training. As illustrated in Tab.1, we conducted an efficiency experiment to demonstrate that our method indeed consumes only a very small amount of additional trainable parameters to achieve OVSS. In addition, we add the efficiency for ADE20K and Context as:
||#Params(M)|FLOPs(G)|FPS|
|:-|:-:|:-:|:-:|
|ADE20K|3.2|117.5|10.4|
|Context|3.2|95.5|10.7|
**Q8. L.113-125, referring to a figure here would be better for illustration purposes.** Thanks for your suggestion. **Q9. Fig.2, what are the M2oE, ITP and APG blocks? A bit confusing.** Actually, Fig.2 has nothing to do with these blocks. Are you referring to Fig.3? If so, we have mentioned these blocks in the legend of Fig.3. RPM consists of the M2oE, ITP and APG blocks. 
M2oE refers to the multi-scale mixture-of-experts, which aims to produce multi-scale vision knowledge (L.134-150). ITP refers to image-to-pixel semantic attention, which aims to enable the image encoder to learn open-vocabulary semantics from text features (L.151-168). APG refers to adaptive prompt generation, which aims to construct the adaptive image-text relationship for each pixel (L.169-179). **Q10. Tab.4, what does the first line mean and what is the difference between the first and the second line?** RPM (the first line) denotes the combination of M2oE, ITP and APG. Therefore, RPM without M2oE (the second line) denotes the ablation of APG and ITP. **Q11. Concerns about Fig.7. Only legends of heads 1 and 2 are given, while others are missing.** First, we show all heads in Fig.7 (MAD evaluation), not only heads 1 and 2, and use differently colored points to denote different heads; because the number of heads is large, only the symbols for two heads are given in the legend, and the remaining heads are indicated by ellipses (note the legend at the bottom right). Second, MAD can be used to explore the range of attention of each attention head, similar to the receptive field in convolutional neural networks (CNNs) (e.g., Fig.7 in ViT [1*] also uses MAD). It is a common metric. A higher point indicates a larger receptive field, and greater point spacing signifies richer feature diversity. Fig.7 shows that with the guidance of the relationship prompt, the deep features of the VLM gradually have a wider MAD value range, which indicates fine-grained semantic properties. [1*] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. **Q12. Tab.1, instead of params, this should be learnable params, right?** Yes, params denotes learnable params. --- Rebuttal Comment 1.1: Comment: Thanks to the authors for their detailed responses to my concerns. 
Most typo concerns have been addressed, but the future manuscript surely needs better clarification and clearer presentation. I do have a few previous concerns regarding the contributions and novelty of this paper, which I think are more important and have not been addressed by this rebuttal, and also a few new ones which need better clarification. ### Reply to Q1 (General Response-Q1) 1. I am not quite sure I clearly understand “additional training sets”. Why are these training sets “additional”? 2. Another thing is, performing parameter-efficient segmentation on top of a frozen VLM (or fine-grained understanding) is very straightforward [A,B]. I do not think this is a novel motivation. ### Reply to Q2 1. Authors claim that "current prompt … a lack of an image-text relationship, which hinders the exploration of VLM potential for open-vocabulary scene understanding." This is not an accurate claim, considering paper [C]. All in all, I still do not see the unique value of this paper. There are many positive factors in this paper that could lead to performance gains, as I noted in **Weaknesses 1**. I will adjust my score based on responses from other reviewers and on how the authors clarify their unique contributions. [A] F-VLM: Open-Vocabulary Object Detection upon Frozen Vision and Language Models. ICLR’23. [B] Convolutions Die Hard: Open-Vocabulary Segmentation with Single Frozen Convolutional CLIP. NeurIPS’23. [C] MaPLe: Multi-modal Prompt Learning. CVPR’23. --- Reply to Comment 1.1.1: Comment: Thanks for your exhaustive reply and suggestions. Here we have three more clarifications: **Q1: I am not quite sure if I clearly understand “additional training sets”. 
Why are these training sets “additional”?** There are two main ways to achieve OVSS using a VLM: (1) Training the VLM with pixel-level supervision from scratch allows it to directly output pixel-level predictions, which means that the process needs to introduce “additional” pixel-level supervised training sets; for example, SegGPT[1*] uses a mixture of ADE20K, COCO, PASCAL VOC, Cityscapes and so on to train a ViT to directly output segmentation results. (2) Unlike training from scratch, adopting knowledge distillation or feature adaptation to transfer the frozen VLM's semantic knowledge to segmentation-specific networks does not introduce “additional” pixel-level supervised training sets, except COCO under the OVSS setting requirements. Thus, the former usually needs “additional” training sets, and the latter needs additional segmentation-specific networks. [1*] SegGPT: Towards Segmenting Everything In Context **Q2: Performing parameter-efficient segmentation on top of a frozen VLM (or fine-grained understanding) is very straightforward. I do not think this is a novel motivation.** As answered in **General Response-Q1**, training from scratch relies on additional training sets and large-scale pre-training, but needs only a simple VLM; adopting knowledge distillation or feature adaptation relies on additional segmentation-specific networks built on the VLM, but needs only small-scale fine-tuning. Thus, is it possible to combine the advantages of both, using small-scale fine-tuning while requiring only a simple VLM (discarding the complex segmentation-specific networks)? This is exactly what we do. We would like to re-emphasize our contribution: **we only use a VLM with extremely low training cost (almost 3M trainable parameters) to directly achieve OVSS. (Please note that we do not rely on segmentation-specific networks and only need parameter-efficient fine-tuning.)** 
We would like to use the table in **General Response-Q1** again to illustrate the difference between our approach and the methods you listed.
||Additional Training Sets|Parameter-intensive Decoder Adapter|Large-scale Pixel-level Pre-training|
|:-|:-:|:-:|:-:|
|F-VLM|❌|✔|❌|
|FC-CLIP|❌|✔|❌|
|RPN(ours)|❌|❌|❌|
**Q3: Authors claim that "current prompt … a lack of an image-text relationship, which hinders the exploration of VLM potential for open-vocabulary scene understanding." This is not an accurate claim, considering paper[1\*].** As illustrated in L.107-108, MaPLe adopts trainable prompts to guide both visual and textual embeddings and proposes a coupling function as a bridge to build a multi-modal prompt. The coupling function is implemented as a linear layer, which maps the text prompt to the image prompt. Note that this text-to-image prompt mapping relies entirely on the text prompt, without involving the image embeddings of the VLM. In addition, the generation of the text prompt also relies solely on a randomly initialized learnable vector that is independent of the text embeddings of the VLM. Therefore, in essence, MaPLe does not construct a prompt with an image-text relationship, but only generates a text prompt from a randomly initialized learnable vector and maps it to an image prompt with the help of a linear layer. [1*] MaPLe: Multi-modal Prompt Learning. --- Rebuttal 2: Comment: Thanks to the authors for their prompt and active replies. For **Q2**, the presented results convince me a bit. However, in spite of the performance gains presented in this paper and many rounds of debate, I am still unclear about the unique contributions of this paper and retain doubts about its novelty. The authors seem to claim very high-level/vague contributions (which could be claimed by most multi-modal learning papers) without theoretical or empirical justification and unique insights, which does not convince me that much. 
So far I am inclined to reject this paper, and I am waiting for the other reviewers' responses for a second opinion. --- Rebuttal Comment 2.1: Comment: Thanks for your patient response and comments. We would like to reiterate our contribution: we only use an image-level supervised VLM (note that no segmentation-specific network is introduced) and achieve OVSS with extremely low training cost. We are the first to do so. Although we did not reach a consensus in the end, we thank you for the discussion and wish you all the best in your future research.
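For concreteness, the matrix product clarified in Q5 above and the eq.2 relationship attention map from Q4 can be sketched as below. Shapes follow the rebuttal's $p \in \mathbb{R}^{n\times d}$ and $t \in \mathbb{R}^{c\times d}$; the gate $g^i$ is assumed here to share $t$'s shape, and all values are random placeholders, not the paper's features.

```python
import numpy as np

rng = np.random.default_rng(0)
n, c, d = 6, 3, 4                  # n pixels, c classes, d shared feature dim
p = rng.normal(size=(n, d))        # projected image features (Q5's p)
t = rng.normal(size=(c, d))        # projected text features  (Q5's t)
g = rng.normal(size=(c, d))        # layer-i gate g^i from eq.2 (shape assumed)

# Q5: ordinary matrix product O = p t^T, i.e. O_ij = sum_k p_ik * t_jk.
O = p @ t.T                        # (n, c) pixel-class score map

# Q4/eq.2: relationship attention map m^i = p^i (t ⊙ g^i)^T, where ⊙ is the
# element-wise (Hadamard) product; each (n, c) map is rendered as a heat map.
m = p @ (t * g).T                  # (n, c)
```

So "matrix product" here is neither element-wise nor column-wise: it is the ordinary inner-product contraction over the shared dimension $d$, producing one score per (pixel, class) pair.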
Summary: The paper proposes a training and inference-efficient Relationship Prompt Network (RPN). This network leverages a layer-wise Relationship Prompt Module (RPM) utilizing tuning methods similar to VLM LoRA and an improved Linear Projection Module (LPM) without relying on a segmentation model. The authors conduct extensive experiments, demonstrating the solid efficiency and effectiveness of their approach. Strengths: The paper proposes a method for continuously injecting pixel-level image-text relationships into the layers of the image encoder using efficient tuning techniques. The experiments conducted are very comprehensive. Weaknesses: The paper lacks a discussion on region-text relationships, focusing instead on directly leveraging pixel-text relationships. Additionally, there should be a discussion and experiments with other visual prompt tuning methods applied to the baseline. Technical Quality: 4 Clarity: 3 Questions for Authors: See the weakness. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Please provide a more detailed analysis on the correlation between the relational abilities of the model and its performance in open-vocabulary tasks (separating seen and unseen classes). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer Rcdd Thank you so much for acknowledging the strengths of our method. We have carefully considered your constructive and insightful comments, and here are the answers to your concerns. **Q1. Lack of a discussion on region-text relationships, focusing instead on directly leveraging pixel-text relationships.** Thanks for your valuable suggestion. A discussion of the region-text relationship is worthwhile. First, we clarify the difference between the pixel-text and region-text relationships: the former links image pixels with text descriptions; the latter links image masks with text descriptions. Since an image mask contains more information, it can represent things or stuff more intuitively, whereas individual pixels cannot. Thus, an image mask can be more easily associated with a text description. For example, a region-text relationship can be established by directly matching an image mask with a text description, while a pixel-text relationship can only be established after extracting features from both. Second, we consider the impact of the region-text relationship on OVSS. Although constructing the region-text relationship seems simpler, is it more effective for OVSS? As illustrated in MaskFormer[1*], pixel-level classification is not necessarily better than mask-level classification for semantic segmentation, which seems to show that the region-text relationship may enable the VLM to achieve OVSS better than the pixel-text relationship. However, adopting the region-text relationship means that the VLM needs to output mask proposals, which are usually produced by a DETR-like[1*,2*] decoder. This means the VLM would need a DETR-like decoder, which goes against our intended goal of achieving OVSS using only the VLM without any segmentation-specific networks. Third, we consider the impact of the region-text relationship on open-vocabulary segmentation. 
A VLM with region-text relationships can output class-agnostic masks, thus making instance segmentation and object detection feasible. This is a very interesting direction worth exploring, and it gives us some inspiration for future work. [1*] Per-Pixel Classification is Not All You Need for Semantic Segmentation. [2*] End-to-End Object Detection with Transformers. **Q2. There should be a discussion and experiments with other visual prompt tuning methods applied to the baseline.** Thanks for your valuable suggestion. In fact, we have discussed seven current PEFT methods (including other visual prompt tuning methods) applied to the baseline, as illustrated in L.618-638, and show the comparison in Tab.9 and 10. We show part of Tab.9 as:
|(VOC)|#Params(M)|pAcc|mIoU$_s$|mIoU$_u$|hIoU|
|:-|:-:|:-:|:-:|:-:|:-:|
|Baseline|154.5|84.1|83.5|31.2|45.4|
|VPT|4.0|90.9|81.0|52.9|64.0|
|LoRA|4.0|91.3|82.2|53.1|64.5|
|**RPN(ours)**|**3.2**|**95.8**|**93.1**|**84.6**|**88.6**|
We would like to re-emphasize the conclusion: our method shows significant improvement (at least 20% mIoU improvement on VOC), because existing PEFT methods mainly focus on fine-tuning the image-supervised VLM to improve classification performance, whereas our method enables the VLM to directly achieve OVSS. **Q3. Please provide a more detailed analysis on the correlation between the relational abilities of the model and its performance in open-vocabulary tasks (separating seen and unseen classes).** Thanks for your valuable suggestion. We provide our analysis as follows: The relational capability of the model lies in guiding it to make pixel-level predictions for unseen classes by establishing relationship attention between text class information and pixel-level visual information, bringing its performance close to that on seen classes. We visualize the relationship attention map in Fig.1. 
The first row denotes the relationship attention maps for a seen class across each layer; the last two rows are for unseen classes. For a seen class, the model has prior pixel-level semantic information, so the relationship attention map only needs to focus on a small number of pixels to guide the model to make class predictions for these pixels (e.g., the relationship attention map for "airplane" has fewer highlighted areas). For an unseen class, the model lacks corresponding pixel-level semantic information, so the relationship attention map needs to focus on more pixels to provide the model with more sufficient pixel-level semantic information (e.g., the shallow-layer relationship attention map for "sheep" highlights the complete semantic information). In addition, to demonstrate the guidance mechanism, we conducted related interpretability experiments (i.e., MAD evaluation), as illustrated in L.271-282. The MAD results show that with the guidance of the relationship prompt, the deep features of the VLM gradually have a wider MAD value range, which indicates fine-grained semantic properties. We also conduct system-level comparisons (separating seen and unseen classes) as shown in Tab.2 and 7. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for reviewing our manuscript. We have tried our best to address your questions, and have revised our paper following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
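The MAD (mean attention distance) evaluation discussed in this thread can be sketched as below. This is a toy computation on random attention weights; the head count and patch-grid size are illustrative, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(0)
H, S = 4, 16                                              # heads, tokens (4x4 patch grid)
grid = np.stack(np.meshgrid(np.arange(4), np.arange(4),
                            indexing="ij"), axis=-1).reshape(S, 2)
D = np.linalg.norm(grid[:, None] - grid[None], axis=-1)   # pairwise patch distances

logits = rng.normal(size=(H, S, S))
A = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)  # row-softmax attention

# MAD per head: attention-weighted distance from each query patch to all key
# patches, averaged over queries. A larger MAD means that head attends over a
# wider spatial range -- the "receptive field" analogy used in the rebuttal.
mad = (A * D[None]).sum(axis=-1).mean(axis=-1)            # shape (H,)
```

Plotting `mad` per head per layer yields the kind of scatter shown in Fig.7: a wider spread of MAD values across heads indicates more diverse attention ranges.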
Summary: This paper proposes relationship prompt module (RPM), which generates relationship prompt that directs VLM to extract pixel-level semantic embeddings suitable for OVSS. Moreover, RPM integrates with VLM to construct relationship prompt network (RPN), achieving OVSS without segmentation-specific networks. RPN attains state-of-the-art performance with merely about 3M trainable parameters (2% of total parameters). Strengths: 1. This paper proposes RPM, which generates pixel-level relationship prompt to guide VLM to transform image-level embeddings to pixel-level ones suitable for OVSS. 2. This paper proposes a simple yet parameter-efficient OVSS method, i.e., RPN, employing relationship prompt learning solely to perform OVSS without any segmentation-specific networks. 3. RPN attains state-of-the-art results on four public benchmarks by optimizing about 3M trainable parameters (2% of total parameters). Weaknesses: 1. In the experiment tables, there is no Efficiency comparison for ADE20K and Context dataset. 2. The ablation studies in Table 4 are insufficient; they only include M2oE and LPM ablation studies, and do not cover ITP and APG. 3. The performance is not state-of-the-art compared to some previous works, such as CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to the concerns and issues raised in the "Weaknesses". Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Please refer to the concerns and issues raised in the "Weaknesses". Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer 2wFX Thank you so much for acknowledging the strengths of our method. We have carefully considered your constructive and insightful comments, and here are the answers to your concerns. **Q1. No efficiency comparison for the ADE20K and Context datasets.** We have verified the efficiency on two benchmarks as illustrated in L.245-249. Following your comment, we add the efficiency for ADE20K and Context as:
| | #Params(M) | FLOPs(G) | FPS |
| :------- | :------------: | :-------: |:-------: |
|ADE20K|3.2|117.5|10.4|
|Context|3.2|95.5|10.7|
**Q2. The ablation study in Tab.4 does not cover ITP and APG.** We would like to re-clarify the ablation study in Tab.4. Tab.4 mainly shows the impact of M2oE, ITP, APG and LPM (we use RPM to denote the combination of M2oE, ITP and APG). Although we do not directly mark APG and ITP in Tab.4, "RPM without M2oE" denotes the ablation of APG and ITP. Therefore, we have covered the ablation of the APG and ITP components. In addition, because APG and ITP together form our relationship prompt (ablating either one alone breaks the pipeline), they cannot be separated for ablation. **Q3. The performance is not SOTA compared to some previous works.** Thanks for your suggestion; please refer to **General Response-Q2**. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for reviewing our manuscript. We have tried our best to address your questions, and have revised our paper following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
Summary: This paper primarily studies Open-Vocabulary Semantic Segmentation. The main contribution of this paper is the proposal of RPN, which employs relationship prompt learning solely to perform OVSS without any segmentation-specific networks. The authors claim that RPN attains state-of-the-art results on four public benchmarks. Strengths: The strengths of this paper can be listed as follows, - The paper clearly expresses its main research content and the proposed algorithm. - The proposed algorithm achieved good results on multiple benchmark datasets. Weaknesses: The weaknesses of this paper can be summarized as follows, - The authors claim that their method achieved state-of-the-art results, but the results in Table 3 and Table 8 do not support this claim. Additionally, the methods listed in Tables 2, 3, 7 and 8 are somewhat limited. More recent works should be included to substantiate the claim that the proposed method achieves state-of-the-art results. (refer to https://paperswithcode.com/task/open-vocabulary-semantic-segmentation) - The authors should provide a more detailed explanation of the content in Figure 2, either in the caption or in the introduction section. (Or, they could reference this figure in a more suitable location, such as the first paragraph of the introduction section. Or, the authors could mention that a more detailed explanation is provided in the supplementary materials.) - According to the authors' definition, there have already been many works that achieve open vocabulary semantic segmentation without segmentation-specific networks, e.g., "Image Segmentation Using Text and Image Prompts", "GroupViT: Semantic Segmentation Emerges from Text Supervision", "CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation", "SegGPT: Towards Segmenting Everything In Context", to name a few. These works, to some extent, diminish the novelty of the authors' contribution. 
(To be clear, I am not saying that the authors' work lacks novelty entirely, but its novelty is not as strong as claimed in the paper.) - Did the authors conduct any ablation studies on the hyper-parameters that appear in Equations 9 to 11? - Why do the authors use class embeddings from only text layer 12, while using image embeddings from various image layers? Can the authors provide some analysis or experimental support for this choice? - When visualizing the attention map in Figure 1, the authors could also include a color legend to represent the response intensity. - Figures 4 to 6 should include annotations to explain the meanings of the various symbols used in the images. - Did the authors conduct any ablation studies on how to select the scale and the number of experts used in M2oE? - Is the Hadamard product in Equation 2 significant? Would using a matrix product between $p^i$​ and $g^i$ instead affect the results? - Why is pixel-level information used instead of patch-level information? - Does the proposed algorithm produce block artifacts when generating segmentation maps for high-resolution images? Overall, the paper is quite dull and does not stand out in the field of open vocabulary semantic segmentation. However, for a conference paper, the amount of work presented seems sufficient. Technical Quality: 2 Clarity: 2 Questions for Authors: I have placed all the questions I want to ask in the weaknesses box. Overall, the paper doesn't seem to have any major issues, but personally, I find the work uninteresting and believe it offers limited insights to the field of open vocabulary semantic segmentation. However, if other reviewers feel that this work is worth accepting, I wouldn't strongly oppose it. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors did not discuss the limitations of their proposed algorithm in either the main text or the supplementary materials. 
I have highlighted some limitations of the paper in the weaknesses box. The authors can refer to these points to further improve the quality of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to Reviewer DSMk Thank you so much for acknowledging the strengths of our method. We have carefully considered your constructive and insightful comments; here are the answers to your concerns. **Q1. The claim that our method achieved SOTA results lacks persuasiveness.** Thanks for your suggestions, please refer to **General Response-Q2**. **Q2. More detailed explanation of the content in Fig.2.** Thanks for your suggestion. We will modify the description in L.63 as: *Fig.2 shows the key difference between RPN and other OVSS methods. RPN makes the VLM directly output pixel-level predictions through prompt learning, while other methods use the VLM to assist a complex segmentation-specific network, in which the VLM either transfers its rich knowledge to a mask proposal network via knowledge distillation or makes a semantic decoder output segmentation masks via feature adaptation.* **Q3. Some works that achieve OVSS without segmentation-specific networks diminish the novelty of our contribution.** Thanks for your suggestions, please refer to **General Response-Q1**. **Q4. Ablation studies about the hyper-parameters $\lambda_1, \lambda_2$ in Eq.11.** $\lambda_1, \lambda_2$ control the weighting between $\mathcal{L}\_{focal}$ and $\mathcal{L}_{dice}$. We explored three common combinations; the results are: |$\lambda_1$|$\lambda_2$|A-847|PC-459|A-150|PC-59|PAS-20| |:-|:-:|:-:|:-:|:-:|:-:|:-:| |20|1|11.0|16.9|31.1|56.7|95.1| |1|1|10.4|16.7|29.6|55.4|94.8| |**100**|**1**|**11.4**|**17.3**|**31.5**|**57.1**|**95.2**| The results show that $\lambda_1=100$ and $\lambda_2=1$ is the best combination. **Q5.
Why use class embeddings from only text layer 12 rather than other layers?** To explore which text layer's class embeddings to use, we conducted the following study: ||pAcc|mIoU$_s$|mIoU$_u$|hIoU| |:-|:-:|:-:|:-:|:-:| |one-to-one|69.7|65.4|17.3|27.4| |only last|95.8|93.1|84.6|88.6| *One-to-one* denotes using same-layer embeddings from the image and text encoders, and *only last* denotes using the class embeddings from the last text layer, i.e., layer 12. We find that shallow text embeddings cannot effectively guide the VLM to achieve OVSS. The reason is that shallow embeddings cannot accurately represent the class information and therefore construct an incorrect relationship prompt. **Q6. A color legend is needed in Fig.1.** Thanks for your suggestion. We will add the color legend in a future version. The degree of attention from low to high is mapped from dark blue to red: the stronger the attention, the redder the color; the weaker, the bluer. **Q7. Explanation of some symbols in Fig. 4 to 6 is needed.** We will add these explanations to the captions as: *$\odot$, $\otimes$ and $\oplus$ denote element-wise product, matrix product and addition. The symbols Expand, Einsum and Mul denote expanding the class dimension, element-wise product and matrix product.* **Q8. Ablation studies about the scale and the number of experts used in M2oE.** Note that the scale and the number of experts are governed by the same parameter, which controls the size of the feature maps. Because we crop images to $512\times512$, the scale can be at most 1/8 of the input feature map. Thus the scale is $s_i = \frac{1}{2^{i-1}}$, where $i \in [1,4]$, i.e., the maximum number of experts is 4. The ablation results are: |$i$ |pAcc|mIoU$_s$|mIoU$_u$|hIoU| |:-|:--:|:-:|:-:|:-:| |(1,2,3,4)|95.8|93.1|84.6|88.6| |(1,2,3)|95.6|92.9|83.8|88.1| |(1,2)|95.6|92.8|83.5|87.9| The results show that embeddings with more diverse scales improve OVSS performance. **Q9. Is the Hadamard product in Eq.2 significant?
Would using a matrix product between $p^i$ and $g^i$ instead affect the results?** We rewrite Eq.2 as ${{m}}^i = {p}^i \cdot (t \odot {g}^i)^\top$, where $p^i \in \mathbb{R}^{n\times d}$, $t\in \mathbb{R}^{c\times d}$ and $g^i \in \mathbb{R}^{1\times d}$. Please see the calculation-process figure in our top-level PDF for more details. The Hadamard product plays an important role in Eq.2. Eq.2 (combining the Hadamard product and the matrix product) constructs a pixel-text relationship attention map that guides the image encoder to transform image-level semantics into pixel-level ones. First, the Hadamard product assigns class weights to each image in a batch by fusing $t$, which identifies the different classes, with $g^i$, which identifies each image in the batch, thereby obtaining the image-text relationship. Based on this relationship, the matrix product yields the class distribution of each pixel, normalized to sum to one. Because $p^i$ contains pixel-level visual information, Eq.2 captures the pixel-text relationship. Fig.1 visualizes the guidance process from the image-text to the pixel-text relationship. To demonstrate the image-to-pixel guiding scheme, we also conducted related interpretability experiments (i.e., the MAD evaluation, L.271-282). The MAD results show that, with the guidance of the relationship prompt, the deep features of the VLM gradually acquire a wider MAD value range, which indicates fine-grained semantic properties. Since we need a relationship map $m^i \in \mathbb{R}^{n\times c}$, a matrix product between $p^i$ and $g^i$ cannot output a map of this shape. In fact, we explored a matrix product between $p^i$ and $t$ in L.291-311 and Tab.5. The results show that removing either of the product operations significantly hurts performance. **Q10. Why is pixel-level information used instead of patch-level information?** In fact, pixel-level information on the feature maps corresponds to patch-level information on the original images.
Because a ViT divides the image into multiple patches for encoding, each pixel of a feature map describes a patch of the image. **Q11. Does the proposed algorithm produce block artifacts when generating segmentation maps for high-resolution images?** No. For high-resolution images, we adopt *slide inference* mode, i.e., a stride of $341\times 341$ and a crop size of $512\times 512$. --- Rebuttal Comment 1.1: Comment: Dear Reviewer, Thank you again for reviewing our manuscript. We have tried our best to address your questions and have revised our paper following suggestions from all reviewers. Please kindly let us know if you have any follow-up questions or areas needing further clarification. Your insights are valuable to us, and we stand ready to provide any additional information that could be helpful.
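The rewritten Eq. 2 from the Q9 answer, $m^i = p^i \cdot (t \odot g^i)^\top$, can be sketched in a few lines. The softmax used below for "normalized to one" is our assumption, since the rebuttal does not name the normalizer; shapes follow the answer ($p^i \in \mathbb{R}^{n\times d}$, $t \in \mathbb{R}^{c\times d}$, $g^i \in \mathbb{R}^{1\times d}$).

```python
import numpy as np

def relationship_map(p, t, g):
    """Pixel-text relationship map m = p @ (t * g).T, normalized per pixel.

    p: (n, d) pixel-level visual embeddings
    t: (c, d) class (text) embeddings
    g: (1, d) image-level embedding
    returns: (n, c) class distribution per pixel (rows sum to 1)
    """
    weighted = t * g                             # Hadamard product: image-text relationship
    logits = p @ weighted.T                      # matrix product: pixel-text relationship
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    e = np.exp(logits)
    return e / e.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
m = relationship_map(rng.normal(size=(16, 8)),
                     rng.normal(size=(4, 8)),
                     rng.normal(size=(1, 8)))
```

This also makes the shape argument from the answer concrete: a matrix product between $p^i$ (n×d) and $g^i$ (1×d) alone could only yield an n×1 map, so the Hadamard product with $t$ is what provides the n×c shape.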
Rebuttal 1: Rebuttal: # General Response We would like to thank all reviewers for providing constructive feedback that helped us improve the paper. We are encouraged that reviewers find our paper has: * Clear expression: *The paper clearly expresses its main research content and the proposed algorithm.* (Reviewer DSMk) * Good performance: *The proposed algorithm achieved good results on multiple benchmark datasets.* (Reviewer DSMk) *RPN attains state-of-the-art results on four public benchmarks.* (Reviewer 2wFX) *The proposed RPN outperforms several published methods.* (Reviewer Rvjz) *RPN shows less computation in FLOPs.* (Reviewer Rvjz) * Comprehensive experiments: *The experiments conducted are very comprehensive.* (Reviewer Rcdd) We have been working diligently on improving the paper on several fronts, addressing your critiques. Below, we summarize the changes we have made in the updated draft. * More explanation of the diagrams. (Please refer to Response to Reviewer DSMk-Q2,6-7) * More discussion of the region-text relationship. (Please refer to Response to Reviewer Rcdd-Q1) * More analysis of the correlation between the relational abilities of the model and its performance. (Please refer to Response to Reviewer Rcdd-Q3) We respond to each of the reviewers in detail, and to some common issues below. **(Reviewer DSMk and Rvjz) Q1. Concerns about contributions.** We would like to re-emphasize our novelty as follows: (1) Motivation: There are two main ways to achieve OVSS with an image-level supervised VLM: pre-training the VLM from scratch with pixel-level supervision, or using knowledge distillation or feature adaptation to help train parameter-intensive segmentation-specific networks alongside the VLM. However, the training cost of these methods is high (the former needs additional training sets and large-scale pixel-level pre-training; the latter needs additional parameter-intensive segmentation-specific networks).
We instead aim to freeze the VLM and achieve OVSS through prompt learning alone, so training is low-cost. (2) Difference: Although there exist OVSS methods that do not use segmentation-specific networks, they do not rely solely on a frozen VLM to achieve OVSS. For example, CLIPSeg [1*], SED [3*] and CAT-Seg [2*] all rely on an additional parameter-intensive decoder adapter, while GroupViT [4*] and SegGPT [5*] both need additional segmentation training sets. We would like to re-emphasize our contribution: **we use only a VLM, with extremely low training cost (about 3M trainable parameters), to directly achieve OVSS.** A comparison of RPN with the other OVSS methods without segmentation-specific networks, in terms of additional training sets, decoder adapters and pixel-level pre-training, is shown below: ||Additional Training Sets|Parameter-intensive Decoder Adapter|Large-scale Pixel-level Pre-training| |:-|:-:|:-:|:-:| |CLIPSeg|❌|✔|❌| |SED|❌|✔|❌| |CAT-Seg|❌|✔|❌| |GroupViT|✔|❌|✔| |SegGPT|✔|❌|✔| |RPN(ours)|❌|❌|❌| [1*] Image Segmentation Using Text and Image Prompts [2*] CAT-Seg: Cost Aggregation for Open-Vocabulary Semantic Segmentation [3*] SED: A Simple Encoder-Decoder for Open-Vocabulary Semantic Segmentation [4*] GroupViT: Semantic Segmentation Emerges from Text Supervision [5*] SegGPT: Towards Segmenting Everything In Context **(Reviewer DSMk and 2wFX) Q2. The claim that our method achieved SOTA results lacks persuasiveness.** There are usually two experimental settings for OVSS (the zero-shot and open-vocabulary settings), involving multiple benchmarks. To fully evaluate performance, we conducted experiments in both settings. Firstly, in the zero-shot setting, our method outperforms existing methods on all benchmarks in Tab.2 and 7.
Secondly, in the open-vocabulary setting, our method outperforms CAT-Seg [2*] (the latest OVSS method) on three benchmarks (A-847, PC-459 and A-150), and achieves the highest results on three benchmarks (A-847, PAS-20 and A-150) in Tab.3 and 8. Thirdly, considering both settings together, the comparison is: ||A-847|PC-459|A-150|PC-59|PAS-20|VOC(mIoU$_u$)|COCO(mIoU$_u$)|Context(mIoU$_u$)| |:-|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:| |SPNet|-|-|-|24.3|18.3|15.6|8.7|-| |ZS3Net|-|-|-|19.4|38.3|17.7|9.5|12.7| |ZegFormer|5.6|10.4|18.0|45.5|89.5|63.6|33.2|-| |FreeSeg|7.1|6.4|17.9|34.4|85.6|78.6|42.2|-| |RPN(ours)|**11.4**|**17.3**|**31.5**|**57.1**|**95.2**|**84.6**|**42.8**|**58.7**| The results show that our method achieves SOTA on all benchmarks in both settings. Pdf: /pdf/96b882035a560726ecaa4e4b06fd9e40a5335cb0.pdf
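The hIoU column in the zero-shot tables above is, as is conventional in zero-shot segmentation (an assumption on our part, since the rebuttal does not define it), the harmonic mean of seen- and unseen-class mIoU. A one-line check reproduces the values quoted in the ablation tables of the responses above:

```python
def h_iou(miou_seen, miou_unseen):
    """Harmonic mean of seen- and unseen-class mIoU (hIoU)."""
    return 2 * miou_seen * miou_unseen / (miou_seen + miou_unseen)

# reproduces the hIoU column of the ablation tables above
assert round(h_iou(93.1, 84.6), 1) == 88.6   # "only last" / experts (1,2,3,4)
assert round(h_iou(92.9, 83.8), 1) == 88.1   # experts (1,2,3)
```

The harmonic mean penalizes methods that trade unseen-class accuracy for seen-class accuracy, which is why it is preferred over a plain average in this setting.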
NeurIPS_2024_submissions_huggingface
2024
Implicit Causal Representation Learning via Switchable Mechanisms
Reject
Summary: This paper studies weakly supervised causal representation learning with soft interventions. It assumes that pairs of observations are provided that differ by a soft intervention on one variable in the latent space. Additionally, the intervention target (and the total number of latent variables) are given. Identifiability up to element-wise transformations assuming linear mixing and known intervention targets (among other assumptions) is shown. Experiments on synthetic data compare the proposed method to ILCM, beta-VAE and D-VAE. An improved D and C score in the DCI framework is shown. An experiment on semi-synthetic image data is shown. Strengths: - The method extends previous work on weakly supervised CRL and generalizes it from hard to soft interventions, which is an important step for bringing CRL closer to real-world applications, since CRL methods often suffer from restrictive assumptions. - Also, I appreciate the effort of testing this CRL approach to datasets which are more similar to real-world image datasets. Weaknesses: - The mathematical notation is poorly defined and difficult to follow: objects aren't well-defined and notation inconsistently or unusually used. See questions below. - L136-137: "a diffeomorphic solution function [...] deterministically maps a value for exogenous variable [...] to a value for causal variable [...]" That's incorrect. There can be such a map from the set of all exogenous to all endogenous, but not per variable. Consider $X:= U_X; Y:= X+U_Y$, where $U_X$, $U_Y$ are exogenous. In general, there is no deterministic map from $U_Y$ to $Y$. - Even though the DCI framework for evaluation of the learned representation is cited, only two of the metrics (D and C) are used, whereas Informativeness (I) is omitted without comment. That seems suspicious to me. - The synthetic experiments test a rather simple setting on few data generating processes. 
The observation dimension is 4 (same as the latent dimension) and the mixing is linear (as far as I can tell). We have seen in previous work (e.g. [2]) that the complexity of the mixing function is a crucial element in the recoverability of the latents. For example, the more nonlinear the function, the harder it is to recover the latents. Furthermore, only 10 data generating processes are tested. That seems quite limited, given that the dimensions of the data are so small. **Minor:** - L133: "decoder function" is a bit of an odd choice for something that relates to the ground-truth data generating process. Usually, this is a part of a model. "Mixing function" would be more appropriate. - There is a period missing in L251. - L345: There is a follow-up version of DCI that takes more aspects of the learned representations into account [1]. It could be considered for the next iteration if time permits. - L363: "As mentioned in [27, 25], causal graphs are sparse and in the G5 case, where the graph is fully connected, the proposed method cannot identify the causal variables well." The two references **do not say that causal graphs are sparse**. You may refer to the sparse mechanism shift hypothesis, which says that changes between environments are assumed to stem from changes in few mechanisms of the causal graph. This is a statement about sparse changes, not sparse graphs. Technical Quality: 2 Clarity: 2 Questions for Authors: - L136: $i$ isn't defined; I suppose it's the index of the causal variables. However, later it's defined as an element of the intervention targets. Which one is it? - L138: What is $\mathcal{S}$: is it a set of functions $\mathcal{E}_i\to\mathcal{Z}_i$ (as suggested by the first statement in the sentence) or is it a map $\mathcal{E}\to\mathcal{Z}$? - L187: Previously, $s_i$ was introduced as a function $\mathcal{E}_i\to\mathcal{Z}_i$ (L136), now it seems to be a function $\mathcal{E}\to\mathcal{Z}_i$. Which one is it? - L204: What does this equation mean?
How come you are using the same function but with a different set of arguments? What does it mean when one argument disappears? - Equation 2: Is the Taylor expansion necessary here, or is the expansion used elsewhere? To me, the LHS can be transformed to the RHS in any case, since $R_i$ is just some variable that is introduced for the difference. If it's not used elsewhere, I would suggest removing it, since it's only a distraction and doesn't add to the method or understanding. - Assumption 3: I'm confused by the approximate relation. Either $g$ is linear and has no intercept term ("truly linear"), in which case $g(\tilde{z}) - g(z)=g(\tilde{z} - z)$; or it has an intercept term (affine), in which case the difference between LHS and RHS can be arbitrarily large, and thus writing approximately equal is inappropriate. - Assumption 3: What exactly is the assumption? Is it that $g$ is linear? The rest seems to (more or less) follow from linearity and the statements made before. - L325: "loc and scale networks are changed in post intervention", but from Equation (7) it seems only loc is changed? - Equation 7: Are the loc parameters during data generation generated by random neural networks? Do they have the same architecture as the network that estimates those parameters? - The experiment code seems to indicate that the following seeds were selected for synthetic experiments: "nature_seed=5 # 1, 2, 5, 6, 7, 8, 9, 10, 11, 12". Why were seeds 3 and 4 omitted? - I struggle to understand the evaluation of the experiments in 5.3. Why weren't the same DCI metrics compared here? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - As far as I can tell, no latents were uncovered (theoretically or experimentally) for either nonlinear mixing or unobserved intervention targets. Therefore, it seems to me, it should be mentioned that the results are for the linear case and weakly supervised.
The title and abstract give the impression that the more general CRL problem with paired observations is solved. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
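The D and C scores of the DCI framework discussed in this review can be computed from a code-by-factor importance matrix. The sketch below is a simplified reading of Eastwood & Williams' definitions; the entropy base and the completeness averaging are common conventions, not taken from this paper.

```python
import numpy as np

def dci_d_c(R, eps=1e-12):
    """Disentanglement (D) and Completeness (C) from an importance matrix.

    R: (codes, factors) non-negative importance of each code for each factor.
    D: 1 - entropy of each code's factor distribution, importance-weighted.
    C: 1 - entropy of each factor's code distribution, averaged over factors.
    """
    def one_minus_entropy(P, axis):
        P = P / (P.sum(axis=axis, keepdims=True) + eps)
        H = -(P * np.log(P + eps)).sum(axis=axis) / np.log(P.shape[axis])
        return 1.0 - H

    d_per_code = one_minus_entropy(R, axis=1)
    c_per_factor = one_minus_entropy(R, axis=0)
    weights = R.sum(axis=1) / R.sum()   # weight codes by overall importance
    return float(weights @ d_per_code), float(c_per_factor.mean())

# a perfect one-to-one code/factor assignment scores D = C = 1
d, c = dci_d_c(np.eye(4))
```

The Informativeness (I) score the review asks about cannot be read off the importance matrix alone: it additionally requires the prediction error of the per-factor regressors, which is consistent with the rebuttal treating it separately from D and C.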
Rebuttal 1: Rebuttal: The authors appreciate the reviewer's valuable comments and provide the responses below: **Diffeomorphic solution function deterministic mapping** - In general, if we do not account for **uncertainties** in a function, that function will not be a **deterministic map from one variable to another**. - We would like to clarify that in the paper we discuss the implementation of the solution functions (see Equation 6). These functions also incorporate the other exogenous variables ($e_{/i}$), which are treated as parameters of the mapping $\tilde{e}_i \rightarrow \tilde{z}_i$. Consider a linear regression model $y = w_1 \cdot x + b = f(x)$. In this example, $f: x \rightarrow y$, and $w_1$ and $b$ are parameters of $f$. Similarly, in Equation 6, $loc(e_{/i})$ and $scale(e_{/i})$ are parameters of the solution function $\tilde{s}_i$. Therefore, there is **no unmodeled uncertainty** in the solution function, making it a **deterministic mapping**. - We will clarify this further in the paper. **Informativeness (I)** - According to Cian Eastwood et al. [10], D and C are sufficient to evaluate the correspondence between learnt latents and true causal variables. Therefore, we have not calculated I in our experiments. - *While the disentanglement score quantifies the number of generative factors captured by a given code variable, the completeness score quantifies the number of code variables which capture a given generative factor. Together, these scores quantify the deviation from the ideal one-to-one mapping between z and K of the dimensions in c [10].* **Synthetic experiments with a simple setting** - We evaluated our model on the **real-world datasets (Causal-Triplet: Epic-Kitchen and ProcTHOR)**, where some of our assumptions, such as the linearity of the decoder and the observability of $V$, are violated.
- We performed experiments on the synthetic dataset to evaluate the performance of the model when all assumptions are satisfied and to better study the effects of each assumption, demonstrating its validity as a *proof of concept*. - Furthermore, we have additional results with a larger number of variables in the **Appendix (see Figure A6)**. **Sparse causal graphs** - The **Sparse Mechanism Shift hypothesis** itself does not directly state that the causal graph is sparse. However, if changes are sparse, this often correlates with the causal graph being sparse, because fewer connections make it easier to have fewer changes. **Comments** - We will replace *decoder function* with *mixing function*. - L136: definition of $i$: - We use $i$ for indices. When we explicitly state that $i \in \mathcal{I}$, it indicates that $i$ represents an intervention target. - L138: is $S$ a set of functions $\mathcal{E}_i \rightarrow \mathcal{Z}_i$ or a map $\mathcal{E} \rightarrow \mathcal{Z}$? - $S$ is a set of functions $s_i: \mathcal{E}_i \rightarrow \mathcal{Z}_i$, one for each causal variable. - L187: is $s_i$ a map $\mathcal{E}_i \rightarrow \mathcal{Z}_i$ or $\mathcal{E} \rightarrow \mathcal{Z}_i$? - We use the other exogenous variables in the solution functions as well ($e_{/i}$); they are treated as parameters of the mapping $\tilde{e}_i \rightarrow \tilde{z}_i$. - For example, in Equation 6, $loc(e_{/i})$ and $scale(e_{/i})$ are the parameters of the solution function $\tilde{s}_i$, just as in a linear regression model $y=w_1\cdot x + b=f(x)$, where $f: x \rightarrow y$ and $w_1$ and $b$ are the parameters of $f$. - L204: solution function arguments: - Here $h_i(v_0)$ is a parameter of the solution function, again as in the linear regression model above. When $h_i(v_0)$ disappears, it means that we are ignoring that parameter.
This is the same as ignoring the parameter $b$ in the linear regression example. - Equation 2: Taylor expansion: - The Taylor expansion is mainly used in the proof of the theorem in the Appendix. We included it in the main paper to elaborate on how the soft-intervention effect can be modelled using the switch variable. - Assumption 3: - **Approximation:** We wanted to present the assumption for more *general* cases in which the approximation would hold. In the *linear case*, the approximation becomes an *equality*. - The assumption concerns observing the effects of soft interventions, which we model using the switch variable $V$. Specifically, we have discussed the linear case, in which the subtraction of post-intervention and pre-intervention images can be utilized in the encoder of $V$. Depending on the complexity of the decoder, other augmentations that exclusively capture information about soft-intervention effects may be employed. - L325: loc and scale networks: - Only $loc$ is changed. Later in that paragraph we mention that the scale is a **constant 1** for pre-intervention and post-intervention samples (L327). - *loc* in data generation: - The parameters of the loc neural networks are **initialized randomly** in the data generating process (see L326). - The architectures are the same; however, the number of layers might differ. For example, in data generation the *scale* is a constant 1, whereas in the model it is a fully connected neural network; the *loc* network has one layer in data generation and two layers in the model. - Why were seeds 3 and 4 omitted? - We selected the seeds that generated different graphs. Seed 3 generated the same graph as G2 and seed 4 the same graph as G5. Only the loc network parameters were initialized differently under these seeds, which we examine in detail in the ablation studies in Section 5.2. - DCI metrics in 5.3: - DCI metrics require **ground-truth causal variables**.
The causal-triplet datasets do not contain the ground-truth causal variables. --- Rebuttal Comment 1.1: Comment: Thank you very much for providing clarifications in the detailed rebuttal. I will try to answer the open points below. ## Diffeomorphic solution function deterministic mapping Do I understand the rebuttal correctly that you do use all exogenous variables in the solution function of the learned model? From that it follows that L136 is incorrect, but the implementation in the model is correct (since it uses all exogenous variables)? ## Sparse causal graphs Dense DAGs (ones with a full upper-triangular adjacency matrix) are perfectly consistent with the sparse mechanism shift (SMS) hypothesis. Sparse graphs therefore do not follow from it. The "sparse" in SMS refers to mechanisms, not edges. Hence, writing "As mentioned in [27, 25], causal graphs are sparse" is incorrect and implies a scientific consensus which is not backed up by the literature. You may argue or hypothesise that causal graphs are sparse, and that may be reasonable. But then I think it should be clearly stated that this is an assumption or hypothesis that you make. Or, if some literature also makes this assumption, the relevant literature should be cited. ## Questions - L138: Then writing it as a map $\mathcal{E}\to\mathcal{Z}$ is a bit sloppy. This may be minor, but to help the clarity of the mathematical formalism, things like this should be cleaned up. - L187: Why are they treated as parameters? This adds to the confusion. $s_i$ is a map that takes in all exogenous variables; omitting them from the set that it maps from makes it less clear. - L204: Ignoring in the linear regression example means setting it to zero; it wouldn't just disappear if you introduced it as a parameter argument of the function. Having consistent notation would help clarity. As a reader, it would really help if the signature/number of arguments of a function doesn't change from one use to the next.
- Assumption 3: Either you assume the decoder is linear, in which case it should be an equality; or you don't assume the decoder is linear, in which case the approximation needs justification (it can be an arbitrarily bad approximation in general). You can't have it both ways. --- Reply to Comment 1.1.1: Comment: - Do we use **all exogenous variables** in the solution function of the learned model? - Yes. We use all exogenous variables in the solution function of the learned model. - We will add this information to L136 to clarify. - **Sparsity of Causal Graphs** - The reviewer's comment about the sparse mechanism shift (SMS) hypothesis is sound and correct. We will instead cite the relevant references [1], [2], which assume sparsity of causal graphs. [1] Jiaqi Zhang, Kristjan H. Greenewald, Chandler Squires, Akash Srivastava, Karthikeyan Shanmugam, and Caroline Uhler. Identifiability guarantees for causal disentanglement from soft interventions. In Advances in Neural Information Processing Systems 36 (NeurIPS 2023), New Orleans, LA, USA, December 10-16, 2023. [2] Sébastien Lachapelle, Pau Rodríguez, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, and Simon Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In 1st Conference on Causal Learning and Reasoning (CLeaR), volume 177 of Proceedings of Machine Learning Research, pages 428-484. PMLR, 2022. - L138: writing: - We will clearly state that $\mathcal{S}$ is the set of functions $s_i$, $i=1, \dots, n$, which are defined in L136. - L187: Why are they treated as parameters? - The other exogenous variables in $s_i$ ($e_{/i}$) are used as inputs to the $loc$ and $scale$ networks (see Equation 6), which output a scalar. They can be seen as learnable parameters, just like $b$ and $w_1$ in the linear regression example. Nevertheless, they are in the set of inputs that are mapped to $z_i$.
- We will correct our definition in L136 to make this clearer. - L204: Ignoring in the linear regression example means setting it to zero ... - In the example in L205-L206, we explained that ignoring $h_i(v_0)$ means setting it to zero. - We will revise the notation in L204 for more clarity. - Assumption 3: if the decoder is linear, then it should be an equality: - We will replace the approximation with an equality in Assumption 3 for the linear-decoder case to avoid confusion. - Assumption 3 is about the observability of the switch variable $V$. As a special case, we explained that in the linear-decoder case the subtraction of $\tilde{X}$ and $x$ can contain information about $V$. For other, nonlinear decoders, other augmentations may be used to obtain information about $V$.
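The linear-regression analogy used repeatedly in this exchange can be made concrete with a toy version of the Eq. 6 solution function. The one-layer loc network and the scale fixed to 1 follow the data-generation description in the rebuttal (L325-327); the specific weights and inputs below are our own illustrative choices.

```python
import numpy as np

def solution_fn(e_i, e_rest, loc_net, scale_net=lambda e: 1.0):
    """z_i = loc(e_{/i}) + scale(e_{/i}) * e_i  (Eq. 6 style).

    e_rest enters only through the loc/scale networks, so for a fixed
    e_rest this is a deterministic map e_i -> z_i, with loc(e_rest) and
    scale(e_rest) acting as its parameters (like w_1 and b in y = w_1*x + b).
    """
    return loc_net(e_rest) + scale_net(e_rest) * e_i

rng = np.random.default_rng(0)
w = rng.normal(size=3)                         # toy one-layer "loc network"
loc = lambda e_rest: float(w @ e_rest)

e_rest = np.array([1.0, -1.0, 2.0])
z1 = solution_fn(0.5, e_rest, loc)
z2 = solution_fn(0.5, e_rest, loc)             # same inputs -> same output
```

This illustrates the rebuttal's point: once all exogenous variables are supplied (here via `e_rest`), there is no unmodeled uncertainty and the map is deterministic; with a nonzero scale it is also invertible in `e_i`.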
Summary: This paper proposes a new approach, **ICRL-SM**, that performs implicit causal representation learning (mapping from noise to latent variables) by using a causal-mechanism switch variable to model soft-intervention effects. &nbsp; ### References [1] Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Abhishek Kumar, and Ali Tajer. Score-based causal representation learning with interventions. arXiv:2301.08230, 2023. [2] Burak Varıcı, Emre Acartürk, Karthikeyan Shanmugam, Abhishek Kumar, and Ali Tajer. Score-based causal representation learning: Linear and general transformations. arXiv:2402.00849, 2024. [3] Julius von Kügelgen, Michel Besserve, Wendong Liang, Luigi Gresele, Armin Kekić, Elias Bareinboim, David M. Blei, and Bernhard Schölkopf. Nonparametric identifiability of causal representations from unknown interventions. In Proc. Advances in Neural Information Processing Systems, New Orleans, LA, December 2023. [4] J. Brehmer, P. De Haan, P. Lippe, and T. Cohen. Weakly supervised causal representation learning. In Advances in Neural Information Processing Systems, volume 35, pages 38319-38331, 2022. Strengths: The idea of implicitly modeling causal effects using switch variables is very interesting. The experiments have shown promising performance on synthetic and high-dimensional image datasets compared to several baselines. Weaknesses: 1. My most significant concern is that this paper is generally not well written and thus pretty hard to follow. For example, the calligraphic letter $\mathcal{Z}$ is used throughout to denote both the causal variables (e.g., line 137) and their domain (e.g., line 139). 2. The assumption of a diffeomorphic causal mechanism is pretty strong. I am aware that [4] made a similar assumption. Yet, it is not a very common assumption for latent causal models in other causal representation learning literature (e.g., [1, 2, 3]). 3.
Line 278: The Gaussianity assumption on the causal and exogenous variables might be hard to satisfy in realistic settings.

Technical Quality: 2 Clarity: 1

Questions for Authors:
1. Line 159: (**Terminology**) As far as I know from the causality literature, a **soft intervention** is defined on a distributional level; from this perspective, it is not necessarily correct to assume the values are strictly different pre- and post-intervention for the intervened causal variables. Also, since the intervention is performed on a distributional level, one does not observe such ``paired data`` $(x, \tilde{x}, i)$. Instead, one observes the data population before and after intervention, $p(x), p(\tilde{x})$, without sample-level correspondence (see [1, 2, 3]).
2. Line 162: Is there also a formal mathematical definition for **"sufficient variability"**?
3. Line 181: I would appreciate the authors properly defining the **"switch variables"** upon their first occurrence, especially because the work is based on these "switch variables."
4. Line 244: "Latent Causal Model" is neither defined nor cited.

Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: The main paper did not discuss limitations. The authors pointed them out in the checklist, but I would have appreciated it more if they had adequately addressed the limitations in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors appreciate the reviewer’s valuable comments and provide the responses below:

**Assumptions of a diffeomorphic causal mechanism** and **Gaussianity assumption of the causal and exogenous variables**
- We would like to add that CauCA [1] and dVAE [2] also assume *diffeomorphic decoders* in their theorems (see Table 2).
- Furthermore, we evaluated our model on the **real-world datasets (Causal-Triplet: Epic-Kitchen and ProcTHOR)**, where some of our assumptions, including the *diffeomorphic causal mechanism* and the *Gaussianity assumption*, could be violated.
- Additionally, in our theory (synthetic experiments), we have examined the cases where the distribution of causal variables follows a multivariate normal distribution, demonstrating its validity as a *proof of concept*.
- We will examine the other cases in future work.

**Terminology**
- The reviewer's explanation of the terminology is sound and correct. However, according to Correa and Bareinboim [3], soft interventions refer to scenarios where the changes to the population are more *subtle*, as opposed to hard interventions, where the causal mechanisms are set to a *constant*. These subtle changes can manifest themselves in:
  - exogenous variables,
  - the set of parents, and
  - causal mechanisms.
- Therefore, for example, any change to the causal mechanisms in a system, including setting them to a constant (in the case of hard interventions), will give us post-intervention samples.

**Formal mathematical definition for "sufficient variability"**
- Please refer to [4].

**Switch Variables Definition**
- Switch variables are first introduced in **line 180** and not before, and in that paragraph we have fully defined the switch variable. Would the reviewer clarify their comment?

**Latent Causal Model Definition**
- We have defined Latent Causal Models in **Definition 3.4**.
- We will revise the paper by referencing and citing it properly.
**References**
- [1] Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. In Proceedings of the 37th International Conference on Machine Learning, ICML, volume 119 of Proceedings of Machine Learning Research, pages 6348–6359. PMLR, 2020.
- [2] Liang Wendong, Armin Kekić, Julius von Kügelgen, Simon Buchholz, Michel Besserve, Luigi Gresele, and Bernhard Schölkopf. Causal component analysis, 2023.
- [3] Juan D. Correa and Elias Bareinboim. General transportability of soft interventions: Completeness results. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems, NeurIPS, 2020.
- [4] Sébastien Lachapelle, Pau Rodríguez, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, and Simon Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In 1st Conference on Causal Learning and Reasoning, CLeaR, volume 177 of Proceedings of Machine Learning Research, pages 428–484. PMLR, 2022.

---

Rebuttal Comment 1.1: Title: Thank you for the response

Comment: I thank the authors for the rebuttal.

**Regarding Assumptions of a Diffeomorphic Causal Mechanism and Gaussianity Assumption of the Causal and Exogenous Variables**

> We would like to add that CauCA [1] and dVAE [2] also assume diffeomorphic decoders in their theorems (see Table 2).

1. To clarify, my inquiry was not about "diffeomorphic decoders" but rather about the "diffeomorphic causal mechanisms." I recognize that diffeomorphic decoders are a common assumption in identifiability within causal representation learning, whereas diffeomorphic causal mechanisms are not. A thorough discussion on this distinction would significantly strengthen the paper.
2. 
Additionally, I noticed that the references for the two papers mentioned appear to be swapped in your rebuttal. Correcting this will help ensure clarity and accuracy in your citations. 3. As far as I am aware, Locatello et al. (2020) named their approach "Adaptive-Group-VAE (Ada-GVAE) and Adaptive-ML-VAE (Ada-ML-VAE)," rather than "dVAE." Accurately citing and referring to existing approaches is important for maintaining scholarly rigor. **Formal Mathematical Definition for "Sufficient Variability"** > Please refer to [4]. The reference to [4] for "sufficient variability" is somewhat unclear. First, [4] addresses a different setting within causal representation learning, involving both action and time, and second, there are multiple types of sufficient variability discussed within that work (e.g., sufficient time variability or sufficient action variability). Even when assumptions are similar to those in existing work, it is vital to clearly state them within the main theorem for improved readability and understanding. **Switch Variables Definition** > Switch variables are first introduced in line 180 and not before. In that paragraph, we have fully defined the switch variable. Would the reviewer clarify their comment? By "formal mathematical definition," I was looking for more detailed information, such as: Is this variable discrete or continuous? What is its domain? If it is a random variable, how is it distributed? Unfortunately, these details are not provided in the main text or the rebuttal. The authors only cited Schölkopf et al. (2019) upon first mentioning the causal switch variable. However, given the broad scope of that paper, a direct citation without further specifics does not adequately support the reader’s understanding of the key concept of causal switching variables. Overall, I believe that the mathematical formulation of the current manuscript lacks the necessary rigor and clarity required for publication at this time. 
These issues are fundamental and would require significant revision.

---

Reply to Comment 1.1.1: Comment:

- **Diffeomorphic causal mechanisms**
  - We have responded regarding the diffeomorphic decoder.
  - The same diffeomorphic causal mechanism assumption is made in [1], and in our paper we aimed to relax one of the assumptions of [1] (the availability of hard interventions). Therefore, a **thorough treatment of diffeomorphic causal mechanisms** falls outside the scope of the current draft; however, we agree with the reviewer that it is an important research topic, and we will study it in detail in future work.
- **Naming references**
  - [2] applied an averaging function to the posterior of their causal variables as a substitute for one of their constraints, which is not applicable in our setting. The naming used in [2] is based on this averaging function. In [2], the authors assumed independent causal variables. Following [1], we have also used the name disentangled VAE (dVAE).
  - Note that the code implementation of the dVAE is available through our submitted files.
- **Sufficient variability**
  - Since in our work we only have actions/interventions, we did not distinguish between time and action sufficient variability. For more clarity, we will say that our assumption is similar to "sufficient action-variability" in [3].
- **Formal mathematical definition of the switch variable**
  - In **line 192 (Definition 3.2)**, we clearly defined the domain of the switch variable as well as whether it is discrete or continuous. **We have mentioned that it is real-valued, which shows it is a continuous variable.**
  - We do not make any assumption on the distribution of the switch variable; therefore, we have not discussed it in line 180.
  - In **Section 3.4, Training Objectives (L301-302)**, we clearly stated an assumption on the switch variable for our experimental setup.
- **Citing Schölkopf et al. (2019)**
  - We took inspiration for the switch variable from Schölkopf et al.
(2019), and we made our design and implementation based on that concept.
  - We would like to know what exactly the reviewer means by *further specifics*. For example, please take a look at Equations 6 and 7 of Schölkopf et al. (2019), where it mentions **"a random selector variable choosing from among a set of functions ..."**. Note that our usage of the concept is novel and specific to our paper.

**References**
- [1] J. Brehmer, P. De Haan, P. Lippe, and T. Cohen. Weakly supervised causal representation learning. In Advances in Neural Information Processing Systems, volume 35, pages 38319–38331, 2022.
- [2] Francesco Locatello, Ben Poole, Gunnar Rätsch, Bernhard Schölkopf, Olivier Bachem, and Michael Tschannen. Weakly-supervised disentanglement without compromises. In Proceedings of the 37th International Conference on Machine Learning, ICML, volume 119 of Proceedings of Machine Learning Research, pages 6348–6359. PMLR, 2020.
- [3] Sébastien Lachapelle, Pau Rodríguez, Yash Sharma, Katie Everett, Rémi Le Priol, Alexandre Lacoste, and Simon Lacoste-Julien. Disentanglement via mechanism sparsity regularization: A new principle for nonlinear ICA. In 1st Conference on Causal Learning and Reasoning, CLeaR, volume 177 of Proceedings of Machine Learning Research, pages 428–484. PMLR, 2022.
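Stepping back from the exchange above: the switch-variable concept under discussion can be illustrated with a minimal numerical sketch. This is an editorial illustration, not the authors' implementation; the two mechanisms and the convex blending rule are hypothetical choices, in the spirit of "a random selector variable choosing from among a set of functions" (Schölkopf et al., 2019).

```python
import numpy as np

def mechanism_pre(z_parent, e):
    # hypothetical pre-intervention causal mechanism
    return 2.0 * z_parent + e

def mechanism_post(z_parent, e):
    # hypothetical post-intervention mechanism: a soft intervention
    # changes the mechanism but keeps the parental dependence
    return 0.5 * z_parent + e

def switched_mechanism(z_parent, e, v):
    # a real-valued switch v in [0, 1] selects (or blends) between the
    # two mechanisms: v = 0 gives the observational mechanism,
    # v = 1 the fully intervened one
    return (1.0 - v) * mechanism_pre(z_parent, e) + v * mechanism_post(z_parent, e)

rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=2)
z1 = e1                                       # root causal variable
z2_obs = switched_mechanism(z1, e2, v=0.0)    # pre-intervention sample
z2_int = switched_mechanism(z1, e2, v=1.0)    # post-intervention sample
```

With a single map taking both the noise and the switch value as input, one model covers the observational and interventional regimes, which is, loosely, the role the switch variable plays inside the paper's augmented solution functions.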
Summary: This paper presents a novel approach for learning implicit causal representations through switchable mechanisms, specifically designed to handle soft interventions which are more realistic but challenging compared to hard interventions. The authors introduce a causal mechanism switch variable to model the subtle effects of soft interventions and establish the identifiability of causal models under certain assumptions. Their proposed method, ICRL-SM, demonstrates improved performance in learning identifiable causal representations over baseline methods through experiments on synthetic and real-world datasets. The paper contributes a new perspective to causal representation learning with potential applications in various domains where understanding causal relationships from observational and interventional data is crucial. Strengths: 1. A standout feature of this paper is the introduction of a switchable mechanism to model the nuanced effects of soft interventions. Traditional causal inference methods often focus on hard interventions, which are impractical in many real-world scenarios due to the need for stringent control. The paper's approach to incorporate soft interventions through a causal mechanism switch variable is groundbreaking. It allows the model to adapt to changes in causal relationships post-intervention, providing a more realistic and flexible framework for causal representation learning. 2. This paper proposes Augmented Implicit Causal Models, which is an innovative concept that extends the scope of implicit causal representation learning. By integrating the causal mechanism switch variable into the model's solution functions, AICMs can capture the intrinsic characteristics of each causal variable while accounting for intervention effects. This approach sidesteps the need for explicit parameterization of the causal graph, which is a complex and often intractable task. 
The innovation here lies in the model's ability to implicitly learn the causal structure while directly modeling the effects of interventions. Weaknesses: 1. The method's effectiveness relies on several key assumptions, including knowing the targets for intervention, interventions being atomic (indivisible), and variables following a multivariate normal distribution. These assumptions might not be realistic in many real-world settings where interventions can be complex, and data may not be normally distributed. 2. While the paper's experiments on synthetic and specific real-world (Causal-Triplet) datasets demonstrate the method's potential, it's unclear how well it generalizes to other data types or different domains. Technical Quality: 3 Clarity: 3 Questions for Authors: As the paper itself mentions, the identifiability of causal models can become more challenging with an increase in the number of variables and complexity of causal graphs. The paper could benefit from a deeper analysis of scalability, including how the method performs as the size and complexity of the causal system grow. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors greatly appreciate the reviewer’s valuable comments and provide the responses below:

**Assumptions**
- We evaluated our model on the **real-world datasets (Causal-Triplet: Epic-Kitchen and ProcTHOR)**, where some of our assumptions, such as linearity of the decoder and observability of $V$, are violated. Additionally, in our synthetic experiments, we assessed the performance of our proposed model under conditions where all assumptions are satisfied, demonstrating its validity as a *proof of concept*.
- The assumptions used in our model have already been used in the SOTA baseline model; e.g., the **atomic intervention** assumption is used in [3].
- **Known intervention targets**, lines 167-170: *The known-targets assumption can be relaxed in applications where such data is not available, and the same procedure as in [3] can be used to infer the intervention targets. In fact, in our real-world experiments, intervention targets are not available, and based on the nature of the datasets, we hypothesize our causal variables to be object attributes and actions to be intervention targets.*
- **Multivariate normal distribution**: In our theory, we have examined cases where the distribution of causal variables follows a multivariate normal distribution as a *proof of concept*. Exploring other distributions will be part of our future work, as this falls outside the primary scope of the current paper.

**Scalability**
- We have tried to address this issue in our real-world experiments, where the number of causal variables is larger than in our synthetic dataset (7 in ProcTHOR and 20 in Epic-Kitchen).
- Additionally, we will carry out a thorough theoretical analysis of this problem in future work.

**Generalization to other data types or different domains**
- In this paper, we propose a **novel theory** and provide a **rigorous proof** for it.
- We validate the theory using synthetic datasets, demonstrating its **theoretical soundness**.
- Furthermore, we showcase the **practical performance** of the concept on **real-world datasets**.
- Future work will focus on exploring the generalization of our approach across different data types and domains.

[3] Johann Brehmer, Pim de Haan, Phillip Lippe, and Taco S. Cohen. *Weakly supervised causal representation learning*. In NeurIPS, 2022.

---

Rebuttal Comment 1.1: Comment: I thank the authors for their rebuttal. To further enhance the persuasiveness of the paper, I suggest the following improvements:
1. Strengthen the experimental design: conduct more comprehensive experiments, including tests on various datasets, with different numbers of variables and different data distributions, to evaluate the robustness of the proposed method.
2. Deepen the theoretical analysis: conduct a more in-depth theoretical analysis of the scalability issue, deriving the computational complexity and performance upper bound of the method under different conditions.

I will review the other discussion thread and re-evaluate the score upon the conclusion of the discussion period.

---

Reply to Comment 1.1.1: Comment: We thank the reviewer for their suggestions. We would like to mention the following points:

**Experiments**
- In the current paper, we ran experiments on three datasets: a synthetic dataset and two realistic datasets (Causal-Triplet: Epic-Kitchen and ProcTHOR).
- In our experiments, we tried different numbers of causal variables: 4 for the synthetic dataset, 7 for ProcTHOR, and 20 for Epic-Kitchen.
- Additionally, for the synthetic dataset, we varied the number of causal variables from 4 to 10 (please see Figure A6).
- Using different datasets results in different data distributions. The data distributions of real-world datasets are unknown, and therefore we did not make any assumption on them.

**Scalability issue**
- In this paper, we aim to relax one of the assumptions made in [1] regarding the availability of hard interventions.
Our paper focuses on developing the theory and providing its proof. As also noted in [1], **scalability** remains a challenge (see Figure 8 in [1]).

**Computational complexity and performance upper bound**
- In our ablation studies (*Section 5.2: Factors Affecting Causal Disentanglement*), we have studied the impact of various factors affecting the upper bound of causal disentanglement.

**More in-depth theoretical analysis**
- In this paper, we have tried to bring implicit causal representation learning [1] one step closer to real-world applications by relaxing the hard-intervention assumption, and we proved an identifiability theorem. However, as shown by the results in [1], other challenges remain in this context, including various factors affecting performance and scalability. We agree with the reviewer that these are important challenges and aim to address them in future work.

**References**
- [1] J. Brehmer, P. De Haan, P. Lippe, and T. Cohen. Weakly supervised causal representation learning. In Advances in Neural Information Processing Systems, volume 35, pages 38319–38331, 2022.
Summary: This work is in causal representation learning that utilizes interventional data. It involves two common types of interventions: hard interventions and soft interventions. It is known that soft interventions are more general, since they subsume hard interventions, but they are also more challenging, since parental relations remain. This work proposes identifiability results using a causal mechanism switch variable designed to change between different causal mechanisms via component-wise transformations. Adequate experiments were provided to verify the claims.

Strengths: Pros:
1. Overall, the writing is clear and provides good context.
2. Adequate experiments were provided, including real-data experiments. The authors also offer good observations in the comparison with baselines. This is a big plus to their contribution.

Weaknesses: Cons:
1. It is unclear what the unique theoretical contribution of this switch variable is, since both types of interventions were proposed before. Could the authors clarify the theoretical contribution, especially the original technical hardness compared to [1] and also [38]?
2. In the synthetic experiment, it seems that you use a linear decoder, which defeats the purpose of training a neural network to handle nonlinearity. Did you try nonlinear functions and see how it goes? Besides the switch variable utilizing interventional data, what is the difference between this setting and classical linear unmixing methods (like linear ICA or its variants under other assumptions)?

Some minor issues/recommendations:
- Line 161: $e_i$ is not defined; if you mean a causal variable, it has already been defined as $Z_i$.
- Line 187: $s_i$ is not defined before this point, but only later in line 195.

Technical Quality: 2 Clarity: 2 Questions for Authors: I put my questions in the previous section. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The authors appreciate the reviewer’s valuable comments and provide the responses below:

**Unique theoretical contribution of the switch variable**
- We have proposed the switch variable to obtain **identifiability** in **implicit models** using **soft interventions**. Neither [38] nor [1] uses implicit models for causal representation learning.

**Nonlinear decoder functions**
- We evaluated our model on the **real-world datasets (Causal-Triplet: Epic-Kitchen and ProcTHOR)**, where some of the assumptions, such as linearity of the decoder, are violated. For these evaluations, we utilized a *ResNet-based* architecture for the decoder. Additionally, in our synthetic experiments, we assessed the performance of our proposed model under conditions where all assumptions are satisfied, demonstrating its validity as a *proof of concept*.

**Difference from classical methods**
- In Independent Component Analysis (ICA), whether linear or nonlinear, the latent variables, which represent the underlying causal factors, are assumed to be **statistically independent, without direct connections to each other**.
- In contrast, causal representation learning models the latent variables as **interconnected**, with their relationships explicitly represented by a Directed Acyclic Graph (DAG).

**Notation**
- In line 137, we defined the exogenous *variables* as $\mathcal{E}_i$, so we have denoted their *values* using lowercase letters throughout the paper.
- Lines 159-161 mention: *The **exogenous variables’ values** change only for the corresponding intervened causal variable, while the others maintain their pre-intervention values ...*
- We have defined $s_i$ in line 136: *...a diffeomorphic solution function, $s_i : \mathcal{E}_i \rightarrow \mathcal{Z}_i$*

---

Rebuttal Comment 1.1: Title: Response to the authors
Comment: The reviewer thanks the authors for the clarification about the contribution regarding the implicit model.
For the implicit model, does that mean you have a solution function to map exogenous variables to the latent $Z$, but you do not have a learned, identifiable causal graph?

The difference from classical methods: I agree that CRL is different in the sense that the latent $Z$ is causally related, and that is the main difference in the first place. I should have been clearer about ICA and its variants. What I tried to ask is: in the linear case, can your model be considered a certain linear factor analysis model with an extra constraint for the DAG (a triangular-matrix constraint in matrix factorization)? After all, you want the triangular matrix to be unique, and also the mapping from $Z$ to $X$ to be unique.

---

Reply to Comment 1.1.1: Comment:
- Do we have a solution function to map exogenous variables to the latent $Z$, but no learned, identifiable causal graph?
  - Yes, in the implicit model we have a solution function to map exogenous variables to the latent $Z$. We **do not learn a causal graph during training**, but the causal graph's edges can be **inferred from the solution functions after training**, utilizing the methods explained in [1].
  - [1] Johann Brehmer, Pim de Haan, Phillip Lippe, and Taco S. Cohen. Weakly supervised causal representation learning. In NeurIPS, 2022.
- In the linear case, can our model be considered a certain linear factor analysis model with an extra constraint for the DAG (a triangular-matrix constraint in matrix factorization)?
  - As the reviewer also confirmed, in CRL the latent $Z$ is causally related.
  - Our model cannot be considered a certain linear factor analysis model, because in linear factor analysis models the factors are assumed to be independent, whereas in **causal representation learning the causal variables are dependent** and we must learn the causal relations as well.
- For learning the causal relations, we **did not use a Directed Acyclic Graph (DAG)**; rather, we used **solution functions in implicit causal representation learning**.
  - As opposed to explicit models, in implicit models we **do not use an adjacency matrix to learn causal relations**; hence, we **do not apply constraints on matrices, such as an acyclicity constraint**. We use additional data, namely **interventional data**, and other assumptions, including the **diffeomorphism of the solution functions**, to learn the causal relations.
- We wonder whether by the **linear case** the reviewer means the decoder function (mixing function). Please clarify otherwise.
  - Since in CRL we also need to take into account the causal relations, in the linear-decoder case our model has solution functions to learn these relations, whereas linear factor analysis has no such component.
- In fact, **we want our latent causal models to be identifiable up to reparametrization**, which we proved in Theorem 3.5.

---

Rebuttal Comment 1.2: Title: Response to the authors
Comment: The reviewer thanks the authors for the clarification. The reviewer is monitoring the other discussion thread and will re-consider the score at the end of the discussion period.
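To illustrate the implicit-versus-explicit distinction in this thread, here is a minimal sketch with a hypothetical three-variable linear SCM (an editorial illustration, not the paper's model): a single solution map from exogenous noise to the latent $Z$ encodes the causal relations without any explicit adjacency-matrix parameterization, and graph information can be read off from the map after the fact.

```python
import numpy as np

# Hypothetical linear SCM z = A z + e with chain graph z1 -> z2 -> z3.
A = np.array([[0.0,  0.0, 0.0],
              [0.8,  0.0, 0.0],
              [0.0, -0.5, 0.0]])

def solution(e):
    # implicit representation: one map from exogenous noise e to latent z;
    # the causal structure lives inside the map, not in a stored DAG
    return np.linalg.solve(np.eye(3) - A, e)

rng = np.random.default_rng(1)
e = rng.normal(size=3)
z = solution(e)

# Causal influence can be inferred from the solution map afterwards: its
# Jacobian is (I - A)^{-1}, whose off-diagonal nonzeros mark which noise
# terms (and hence which ancestors) affect each latent variable.
J = np.linalg.inv(np.eye(3) - A)
```

Separating ancestors from direct edges needs extra care in general; the point here is only that no adjacency matrix or acyclicity constraint appears in the learned object.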
Rebuttal 1: Rebuttal: Our paper solves the more **general case** of a problem previously introduced in [1], namely **implicit** causal representation learning using **soft interventions**. We have used assumptions similar to those in [1] and relaxed the **hard intervention assumption**. Three new assumptions were added:

1- **Known intervention targets:** In our real-world experiments, **the targets are unknown**, and we have hypothesized them. Furthermore, the results in [1] indicate that inferring intervention targets is a relatively easier task; therefore, we have focused on other aspects of the problem. A procedure similar to that in [1] can be used to infer intervention targets in applications where they are unknown. Please see L167-L170 in the paper.

2- **Multivariate normal distribution:** We have proven our theory for cases where the causal variables and exogenous variables follow a Gaussian distribution. It is also worth noting that in our experiments with real-world datasets, the distributions are **unknown**. We have left further investigation to future work.

3- **Observability of $V$:** As a proof of concept, we have examined a simple case in our **synthetic dataset** where the mixing function in the data-generation process is linear, and $V$ can be observed from the subtraction of post-intervention and pre-intervention samples (see Assumption 3.3). Note that in our theory **we only require the mixing function to be diffeomorphic**; linearity is not a requirement. This assumption implies that in nonlinear cases, other augmentations may be used to obtain information about $V$. In the experiments with the real-world dataset, the mixing function is **nonlinear**, and our method outperforms the baseline models. As the ground-truth causal variables were not available in the real-world dataset, we used **action accuracy and object accuracy** as proxies to determine the degree of **uncovering of the causal variables** compared to other methods.
**References** - [1] Johann Brehmer, Pim de Haan, Phillip Lippe, and Taco S. Cohen. Weakly supervised causal representation learning. In NeurIPS, 2022.
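The linear special case of point 3 (observability of $V$) admits a short numerical sketch. All numbers and the least-squares recovery step below are a hypothetical editorial illustration, not the paper's construction: with a linear mixing function, subtracting pre- and post-intervention observations isolates the change in the intervened latent coordinate.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(5, 3))      # hypothetical linear decoder (mixing function)

z = rng.normal(size=3)           # pre-intervention causal variables
z_tilde = z.copy()
z_tilde[1] += 0.7                # a soft intervention shifts only the second one

x, x_tilde = W @ z, W @ z_tilde
diff = x_tilde - x               # equals W @ (z_tilde - z)

# Since a generic Gaussian W has full column rank, the latent change, and
# hence the intervention target, can be recovered from the difference:
delta = np.linalg.lstsq(W, diff, rcond=None)[0]
target = int(np.argmax(np.abs(delta)))
```

This is why, in the linear case, the pair of samples carries information about $V$; for nonlinear decoders, as the rebuttal notes, other augmentations would be needed.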
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Transfer Learning for Latent Variable Network Models
Accept (poster)
Summary: This paper studies the problem of transfer learning for estimating the edge probabilities of random graphs under latent position models. In particular, given a fully observed graph on $n$ nodes generated from an independent-edge random graph model with edge probability matrix $P$, and an $n_Q \times n_Q$ submatrix of a graph generated from an independent-edge random graph with edge probability matrix $Q$, the authors consider the task of estimating $Q$ by leveraging information from both graphs. In order for transfer learning to be possible, the underlying graph models must share some structure. The authors assume that both $P$ and $Q$ are given by latent position models, i.e. $$ P_{ij} = f_P(x_i,x_j),\qquad Q_{ij} = f_Q(x_i, x_j) $$ where $x_1,\ldots,x_n \in \mathcal{X} \subset \mathbb{R}^d$ are a common set of latent positions, drawn i.i.d. from a uniform distribution on $\mathcal{X}$, and that $f_P, f_Q$ are $\alpha$-Hölder-smooth symmetric functions. The authors quantify the similarity between two nodes in a graph model using an interpretable distance based on their expected number of common neighbours (Definition 1.2), and measure the distance between two graph models on the same node set by comparing the ranks of these distances for each node in the graphs. In Section 2, a simple algorithm (Algorithm 1) is proposed which estimates $Q_{ij}$ by averaging entries of $A_Q$ which are similar in terms of graph distance to $(i,j)$ in $A_P$, and under some reasonable conditions, a high-probability tail bound for the Frobenius-norm difference between $\hat Q$ and $Q$ is derived (Theorem 2.2). In Section 3, information-theoretic lower bounds for the problem are derived under the special case that $A_P$ and $A_Q$ are stochastic block model (SBM) graphs (Theorem 3.2), which are proved using Fano's lemma.
An alternative algorithm (Algorithm 2) for estimating $Q$ under the SBM modelling assumption is proposed, based on the same ideas as Algorithm 1, and a high-probability tail bound (Proposition 3.4), analogous to Theorem 2.2, is derived which matches the information-theoretic lower bound up to a log factor. In Section 4, Algorithms 1 and 2 are compared with an oracle method on a selection of simulated and real-world datasets. Complete proofs of all theoretical results are provided in the appendix.

Strengths:
- This paper was a joy to read. The problem of transfer learning on graphs is well motivated with real-world applications, and the relevant literature is reviewed extensively. The reader is guided through each aspect of the problem; every definition feels motivated and mathematical statements are followed by intuition and illustrative examples. The paper is well organised, the writing is exemplary, and the proofs are complete, accurate and clear.
- This paper addresses a problem which is of great relevance to the machine-learning community, and yet provides what appears to be the first provably consistent algorithm for this problem. The proposed solution is simple and elegant, and the derived high-probability tail bound and its proof provide a lens into the fundamentally important aspects of this problem. The lower bounds derived in Section 3 show that, despite the simplicity of the method, it is optimal (in the minimax sense) under the SBM, albeit under fairly strong signal-to-noise assumptions (see weaknesses).
- The simulated and real-data experiments in Section 4 are informative and demonstrate the effectiveness of the approach.

Weaknesses:
- One weakness of this paper is that its setup only considers graphs whose expected node degrees grow linearly in the number of nodes in the graph. In practice, many networks are sparse, and it is common in the literature to consider asymptotic regimes in which expected node degrees grow sub-linearly.
However, there is sufficient novelty in this paper that I don't see that this can be held against it. The authors claim in their conclusion that they believe their algorithm will work when node degrees are of order $\Theta(n^{1/2})$, but not down to $\Theta(\log(n))$ (see Questions), but it is not clear *why* they believe this.
- It is not clear to me what is being shown in Figure 1. I was left quite confused. Some additional explanation here would be helpful.

Technical Quality: 4 Clarity: 4

Questions for Authors: My only questions to the authors relate to addressing the points raised in the Weaknesses section. I should mention that the first point is a curiosity, and I believe it is only the second point that needs addressing before this work is ready for publication.

**Typos:**
- I believe there is a typo in the conclusion, and that the big-Os relating to edge densities should be big-$\Omega$s. Is this correct?
- The first letter on line 111 should be capitalised.

Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors are upfront about the main limitations of their work, although I think it would be helpful to mention the first weakness I raise explicitly. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
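The setup in the summary above can be sketched numerically. The graphons, the sampling, and the neighbor-averaging rule below are loose hypothetical stand-ins, not the paper's exact Algorithm 1; the sketch only illustrates the two ingredients the review describes: a common-neighbors similarity computed on $A_P$, and averaging of $A_Q$ entries over node pairs deemed similar.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300

# Shared latent positions, as in the latent position model of the summary
x = rng.uniform(size=n)

# Hypothetical smooth link functions for the source P and target Q
f_P = lambda a, b: 0.5 + 0.4 * np.cos(np.pi * (a - b))
f_Q = lambda a, b: 0.3 + 0.3 * a * b

P = f_P(x[:, None], x[None, :])
Q = f_Q(x[:, None], x[None, :])

def sample_graph(prob):
    # symmetric adjacency matrix with independent edges, no self-loops
    upper = np.triu((rng.uniform(size=prob.shape) < prob).astype(float), 1)
    return upper + upper.T

A_P, A_Q = sample_graph(P), sample_graph(Q)

def estimate_Q_entry(i, j, A_P_mat, A_Q_mat, k=10):
    # Algorithm-1-flavored estimate (a loose sketch, not the paper's exact
    # procedure): average A_Q over pairs of nodes that look similar to i
    # and j under the empirical common-neighbors similarity computed on A_P
    C = A_P_mat @ A_P_mat / len(A_P_mat)   # normalized common-neighbor counts
    d_i = np.abs(C - C[i]).max(axis=1)     # distance of every node to i
    d_j = np.abs(C - C[j]).max(axis=1)
    nbrs_i, nbrs_j = np.argsort(d_i)[:k], np.argsort(d_j)[:k]
    return A_Q_mat[np.ix_(nbrs_i, nbrs_j)].mean()

q_hat = estimate_Q_entry(3, 7, A_P, A_Q)
```

Here `C` is, roughly, an empirical counterpart of the common-neighbors distance of Definition 1.2; when expected degrees grow linearly, its rows concentrate, which is the concentration property the authors' rebuttal discusses.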
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their very encouraging comments concerning the novelty of our theoretical guarantees, the great relevance of the problem we consider, and the quality of our writing. We address their comments in detail below. ### **Why we believe our Algorithm 1 will work for expected node degree $\tilde\Theta(n^{1/2})$.** 1) _Graph distance concentration_. The proof of Theorem 2.2 requires that the empirical graph distance we use in Algorithm 1 concentrates around the population graph distance (Proposition A.13, line 602). We show that the concentration is strong enough for our purposes when the expected node degree grows linearly in the number of nodes. However, the common neighbors metric has also been used for network-assisted covariate estimation (Mao et al. 2021) with expected node degree $\tilde\Omega(n^{1/2})$. Therefore, we expect that the analysis of Algorithm 1 would still go through for expected node degree $\tilde\Theta(n^{1/2})$. 2) _Empirical results on sparse real-world graphs_. We test our Algorithm 1 on sparse real-world graphs in Section 4, lines 271-287. The metabolic networks have median node degree $\approx 0.06 n$ and the email networks have median node degree $\approx 0.007 n$ (see Table 2). The performance of Algorithm 1 on these graphs suggests that the assumption of $\Theta(n)$ node degree is perhaps not necessary. ### **Why we believe our Algorithm 1 will not work for expected node degree $\Theta(\log(n))$.** For expected degree $o(n^{1/2})$, the common neighbors graph distance (Definition 1.2) fails to concentrate. However, one can modify the common neighbors distance so that it concentrates for expected node degree $\tilde\Omega(n^{1/3})$ (Mao et al. 2021). While this is still far from the $\Theta(\log(n))$ regime, it suggests that variations on the graph distance might ensure our Algorithm 1 works for sparser graphs. 
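For readers following this exchange, here is a minimal sketch of a common-neighbors-style graph distance: it compares rows of $A^2$ (i.e. 2-hop neighborhood structure), which is the flavor of distance at issue in the discussion above. This is an illustrative variant of our own, not necessarily the paper's exact Definition 1.2.

```python
import numpy as np

def common_neighbors_distance(A):
    """Pairwise distances based on counts of common neighbors.

    A is a symmetric 0/1 adjacency matrix. (A @ A)[i, k] counts the
    common neighbors of nodes i and k, so comparing rows of A @ A
    compares 2-hop neighborhood structure. Illustrative variant only.
    """
    n = A.shape[0]
    C = A @ A                               # C[i, k] = # common neighbors
    diff = C[:, None, :] - C[None, :, :]    # (n, n, n) row differences
    return np.linalg.norm(diff, axis=2) / n  # normalized l2 between rows

rng = np.random.default_rng(0)
# Toy two-block SBM: nodes 0-9 and 10-19, dense within blocks
p_in, p_out, n = 0.8, 0.1, 20
blocks = np.repeat([0, 1], n // 2)
probs = np.where(blocks[:, None] == blocks[None, :], p_in, p_out)
A = (rng.random((n, n)) < probs).astype(int)
A = np.triu(A, 1)
A = A + A.T                                 # symmetric, no self-loops

D = common_neighbors_distance(A)
# Same-block pairs are typically closer than cross-block pairs
print(D[:10, :10][np.triu_indices(10, 1)].mean(), D[:10, 10:].mean())
```

On such a block-structured toy graph, within-block pairs share many common neighbors and so end up close under this distance, which is the local structure the rebuttal describes.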
Note that for a Stochastic Block Model with two communities, there are information-theoretic limitations to exact recovery when the expected node degree of the SBM is $o(\log n)$ (see [1], Theorem 13). Latent variable models are much more general than SBMs, so it is unlikely that we can give a consistent algorithm for expected node degree $o(\log n)$ in our setting. ### **Clarification of Figure 1.** The intention of Figure 1 is to give a visual idea of the inputs and outputs for the Algorithms we consider in our paper. It is styled after a similar figure in Zhang et al. (Biometrika, 2017). The Figure intends to show, at a glance, that Algorithms 1 and 2 both work well on Stochastic Block Models, that only Algorithm 1 works well on graphons, and that the Oracle performs well in all cases. To clarify, each row of the figure corresponds to a different source/target pair $(P, Q)$. For a fixed row, the upper triangular part of each of columns 2, 3, 4 shows the $\hat Q$ produced by a different algorithm. The upper triangular part of column 1 shows the true $P$. The lower triangular part of columns 1, 2, 3, and 4 is identical for a fixed row, and shows the true $Q$ for comparison. For example, the first row of Figure 1 shows a pair $(P, Q)$ of Stochastic Block Models. From the left-most cell, we can see that $Q$ has $2$ communities while $P$ has $4$. The upper triangles of columns 2, 3, 4 show $\hat Q$ for Algorithms 1, 2 and the Oracle respectively. We will rewrite the Figure caption in the revision to avoid confusion. ### **Typographical errors.** The reviewer is correct that the occurrences of $O(\cdot)$ should be replaced with $\Omega(\cdot)$ in the Conclusion, lines 293 and 294. We will fix this and all other typographical errors in the revision. [1] Abbe, Emmanuel. 
"Community detection and stochastic block models: recent developments." Journal of Machine Learning Research 18.177 (2018): 1-86. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response and I continue to strongly support this paper. The discussion on the concentration of different graph distances is very interesting and I would welcome seeing it in the paper. This work could motivate others to develop graph distances which work at lower sparsity levels. --- Reply to Comment 1.1.1: Comment: Dear reviewer TFsn, Thank you very much for your encouraging words and your strong support. We will include additional discussion on concentration of graph distances in the revision.
Summary: In this paper, the authors address two topics in random network/graph models: 1. Transfer learning in latent variable network models: the paper proposes an estimate of the distribution of the target network from the source network using a defined graph distance. 2. It also proves a minimax lower bound for Stochastic Block Models and shows that a simple algorithm achieves this rate. Strengths: This is a theoretical paper that addresses transfer learning from a source to a target network in latent variable network models. Weaknesses: 1. Theorem 1.1 is an informal form of Theorem 2.2; it seems there is no need to repeat it. 2. The authors repeatedly use the vague words "suitable" and "suitably" in the paper. Many phrases (containing these words) need to be clarified. Technical Quality: 3 Clarity: 3 Questions for Authors: On Page 3, line 105, the authors mention: relative, not absolute graph distances; it seems to me this is not defined. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: In the authors' reply to the question set, in line 950, the authors mentioned that they discussed "the need for a different graph distance ...". It would benefit the readers if the authors explained further in the paper the selection of the distance, and explained why they prefer the graph distance they are currently using. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for their feedback and for assessing our paper to have high impact. We address their comments in detail below. ### **Clarification of relative versus absolute graph distances.** On line 105, we note that our rankings assumption (Definition 1.3) concerns relative, not absolute graph distances. The reviewer is right to point out that the terms "relative" and "absolute" can be clarified. Our point is that Definition 1.3 involves quantiles of graph distances. This is a "relative" condition, because it depends on a rank-ordering within both graphs $P, Q$ before comparison. On the other hand, an "absolute" condition would require that for nodes $i, j \in [n]$, if e.g. $d_P(i,j) < 100$ then $d_Q(i,j) < C \cdot 100$. Our condition is more flexible and will hold for a larger set of graph pairs $(P, Q)$, such as pairs where one graph is much more dense than the other. In the revision, we will rewrite line 105 to clarify this point. ### **What we mean by "the need for a different graph distance" in line 950.** In line 950 we mention our discussion of extending to sparse graphs in the Conclusion (lines 292-295). The concentration of the empirical graph distance (Algorithm 1, line 3) requires expected degree $\tilde\Omega(n^{1/2})$ (Mao et al. 2021). In the same paper, they show that a modification of this graph distance concentrates when expected degree is $\tilde\Omega(n^{1/3})$. While this is still far from the $\Theta(\log n)$ degree regime, it suggests that variations on the graph distance might ensure our Algorithm 1 works for sparser graphs. We will clarify this point in the revision. ### **Why we select our graph distance.** We use the common neighbors graph distance (Definition 1.2) to capture local graph structure, as discussed in lines 90-100. Other graph distances have been used in the literature as well (Zhang et al. 2017, Mukherjee and Chakrabarti 2019, Mao et al. 2021). 
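Returning to the relative-versus-absolute distinction discussed above, here is a toy numerical sketch (the distance values are made up by us purely for illustration) of why a quantile-based condition survives a monotone rescaling, e.g. when one graph is much denser than the other:

```python
import numpy as np

def quantile_ranks(d):
    """Empirical quantile in [0, 1] of each entry of d (no ties assumed)."""
    ranks = np.argsort(np.argsort(d))  # rank of each entry within d
    return ranks / (len(d) - 1)

# Hypothetical graph distances from one fixed node to the others,
# in a source graph P and in a target graph Q whose distances are a
# monotone rescaling of P's (as when Q is much denser).
d_P = np.array([0.10, 0.40, 0.25, 0.80, 0.55])
d_Q = 3.0 * d_P + 0.2

# An "absolute" comparison fails: raw distances differ substantially.
print(np.max(np.abs(d_P - d_Q)))   # 1.8

# A "relative" comparison succeeds: quantile ranks are unchanged,
# since any monotone transform preserves the rank-ordering.
print(quantile_ranks(d_P))
print(quantile_ranks(d_Q))         # identical to the line above
```

The point of the sketch: a rank-ordering condition holds for pairs $(P, Q)$ with very different raw distance scales, which is what makes the quantile-based assumption more flexible than an absolute threshold.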
The reason we use this specific graph distance is technical. We upper-bound part of the smoothing error of Algorithm 1 in terms of the common neighbors graph distance (Section A.2.3). Roughly speaking, there is a relationship between the $\ell_2$ (Euclidean) distance between rows of the common neighbors matrix $Q^2$ (coming from our graph distance), and the $\ell_2$ distance between rows of $\hat Q - Q$ (coming from the mean-squared error $\frac{1}{n^2} \| \hat Q - Q \|_F^2$). See Lemma A.17, lines 641-646, for a precise statement. To minimize mean-squared error, we use a graph distance based on $\ell_2$ distance. Previous works indicate that bounding other kinds of error, such as $\| \hat Q - Q \|_{2 \to \infty}$ (Zhang et al. 2017), requires graph distances tailored for those errors. We will include this point in the revision. ### **Writing clarity and typographical errors.** We will remove all uses of vague language and fix all typographical errors in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I read your rebuttal and the other reviewers' comments and still want to keep my rating as 7: Accept. --- Reply to Comment 1.1.1: Comment: Dear reviewer BaBd, Thank you very much for your support.
Summary: The work explores transfer learning in latent variable network models. In particular, the work focuses on the setting of observing a sample from an $n \times n$ probability matrix of a source $P$ and a submatrix of the adjacency of a target $Q$. The goal is then to estimate $Q$ using information from $P$. The authors propose an algorithm with vanishing error under certain conditions and a simpler algorithm for the special case of stochastic block models. Strengths: The paper is well-written. I particularly enjoyed the introductory sections that help motivate the work. I believe that the problem the authors tackle has value, as it stands mostly as a theoretical contribution. I am unfortunately unable to judge the strength of the contribution of this work, as the submission is far from my area of research. Weaknesses: The experimental part of the paper focuses on purely synthetic tasks. While I understand that, as the authors mention, they believe there aren't direct baselines or datasets for this, I still see this as a potential weakness. A way to greatly strengthen the work would be to have experiments on domains mentioned in the introduction (i.e. metabolic networks). Technical Quality: 2 Clarity: 3 Questions for Authors: Would the authors be able to explain why they think there are no direct baselines? I am not familiar with the surrounding literature, but the problem of estimating a full matrix from a sub-matrix should be a well-studied area. It might be useful to include baselines that do this, even if they are not operating in the transfer learning regime. Would the authors be able to provide experiments on a real-world task, or if not, provide a justification? Confidence: 1 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: None that I am aware of. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for highlighting the quality of our writing and the value of the problem we tackle. We address their comments in detail below. ### **We test our algorithms on two real-world transfer tasks.** In Section 4, lines 271-287, we test our algorithms on two real-world transfer tasks, using the BiGG dataset from the biological networks literature (King et al. 2016) and the Email-EU dataset from the social networks literature (Leskovec and Krevl 2014). ### **We perform experiments on the domains mentioned in the introduction (metabolic network estimation).** In Section 4, lines 273-279, we use our algorithms to estimate the metabolic network of _Pseudomonas putida_, a gram-negative bacterium that is studied for its applications to industrial biocatalysis [1] and bioremediation [2]. The full metabolic network for _Pseudomonas putida_ is not known [3]. We use our transfer learning algorithms to estimate its metabolic network for different choices of source organism (Figure 2, left). For a good choice of source organism, our Algorithm 1 achieves mean-squared error comparable to the Oracle with flip probability $p=0.1$. ### **Comparisons against a new baseline adapted from Levin et al. (JMLR 2022).** In the global rebuttal, we implement a new baseline transfer method from the literature to compare against our algorithms. See the global rebuttal and attached PDF for results. At a high level, our algorithm is better in most parameter regimes that we test, which we will discuss more below. We introduce a new model of transfer learning on networks (lines 69-71). To our knowledge, there are no published algorithms with provable guarantees that can be directly applied to our setting. Two key differences of our setting compared to existing works are: * _No node data for most nodes in the target network $Q$_. Most works on transfer learning for graphs assume access to some edge data for every node in the target graph (Wang et al. 
2018, Levin et al. 2022). Similarly, matrix completion papers typically assume that at least one entry from each row/column of the target matrix is observed (Chatterjee 2015, Simchowitz et al. 2023 and references therein), or otherwise assume low rank/nuclear norm (see Xiang et al., JMLR 2023, and references therein). In our setting, we only observe an $n_Q \times n_Q$ subgraph for $n_Q \ll n$. Hence for most target nodes we observe none of their edges. This is comparable to the MNAR (Missing Not at Random) model in matrix completion (Ma and Chen, NeurIPS 2019). * _No node labels in any network_. We consider a latent variable model in which the relevant features of nodes are not observed. This excludes approaches that rely on observed node labels or features (Tang et al. 2016, Zou et al. 2021, Qiao et al. 2023, Wu et al. 2024). Nevertheless, for the sake of comparison we implement a new baseline based on a modification of Levin et al. (JMLR 2022). Specifically, we modify the estimator from their Section 3.3. Their method assumes that full edge data from both $P$ and $Q$ are observed, and that they have the same expectation ($P = Q$). Since this is not true for us, we instead compute the modified MLE: $$\tilde Q_{ij} = \begin{cases} \frac{w_P}{w_P + w_Q} A_{P;ij} + \frac{w_Q}{w_P + w_Q} A_{Q;ij} & i, j \in S \\ A_{P;ij} & \text{otherwise} \end{cases}$$ where the weights $w_P, w_Q$ are computed as in their paper, based on estimated sub-gamma parameters of the noise for $A_P, A_Q$. This is the only modification we make to their algorithm. Note that Levin et al. only give theoretical guarantees on the spectral norm $\| \hat Q - Q \|_2$ of their estimator $\hat Q$. Analyzing the stronger metric of mean-squared error $\frac{1}{n^2} \| \hat Q - Q \|_F^2$ would require different techniques than their paper. ### **Matrix completion without transfer should perform poorly in our setting.** In our setting, we only observe target data on an $n_Q \times n_Q$ submatrix of the adjacency matrix, for $n_Q \ll n$. 
To apply a matrix completion algorithm, we can certainly zero-pad the target data to obtain some $A_Q \in \{0,1\}^{n \times n}$, and apply a standard matrix completion algorithm to this $A_Q$. However, in this case the matrix $A_Q$ would have mostly all-zero rows and columns, corresponding to nodes $i \not\in S$. Up to permutation, the zero-padded input matrix would have block structure: $$A_Q = \begin{bmatrix} A_Q[S, S] & 0 \\ 0 & 0 \end{bmatrix}$$ where $A_Q[S, S] \in \{0,1\}^{n_Q \times n_Q}$ contains the observed target data. Note that $n_Q \ll n$, so the matrix is almost all zeroes. Therefore, any left/right singular vector of $A_Q$ will be of the form $v = \begin{bmatrix} v_S \\ 0 \end{bmatrix}$, where $v_S \in \mathbb{R}^{n_Q}$ is a singular vector of $A_Q[S,S]$. The singular vectors will contain no information outside of $S$, and therefore the matrix completion algorithm will do poorly. [1] Nikel, Pablo I., and Víctor de Lorenzo. "Pseudomonas putida as a functional chassis for industrial biocatalysis: from native biochemistry to trans-metabolism." Metabolic Engineering 50 (2018): 142-155. [2] Ward, Patrick G., et al. "A two step chemo-biotechnological conversion of polystyrene to a biodegradable thermoplastic." Environmental Science & Technology 40.7 (2006): 2433-2437. [3] Yuan, Qianqian, et al. "Pathway-consensus approach to metabolic network reconstruction for Pseudomonas putida KT2440 by systematic comparison of published models." PLoS ONE 12.1 (2017): e0169437. --- Rebuttal Comment 1.1: Comment: I thank the authors for the detailed response. After having read the discussions with the other reviewers, I would like to increase my score to a 6. --- Reply to Comment 1.1.1: Comment: Dear reviewer VhmF, Thank you very much for taking our rebuttal into account and for raising your score.
Summary: This paper investigates transfer learning in the context of estimating latent variable network models. Specifically, the goal is to estimate the edge probability matrix $Q$ of the target graph using (1) edge data from a source graph $P$ given by its adjacency matrix, and (2) edge data from a vanishingly small subgraph of $Q$ that consists of an $o(1)$ fraction of the nodes in $Q$. The authors propose a transfer learning algorithm (Algorithm 1), which is based on matching quantiles, and demonstrate that it is possible to accurately estimate $Q$ if $P$ and $Q$ have similar quantile/ranking profiles (per Definition 1.3) and smooth latent variable representations (Assumption 2.1). Furthermore, in Section 3, the authors focus on Stochastic Block Models, stating the minimax rate under the setup and providing an estimation algorithm (Algorithm 2) that achieves this rate (Proposition 3.4). The proposed algorithms are then tested with numerical experiments on synthetic and real-world datasets. Strengths: This work addresses the significant problem of learning graphs/networks from a small subset of edge data by leveraging knowledge from a source graph. This problem is fundamental and has numerous potential practical applications, such as in biological network estimation and social sciences. The paper considers a simple yet expressive mathematical model and presents key findings and supporting arguments clearly. Although primarily theoretical, the results are complemented by numerical experiments that substantiate the potential of the proposed approach. Weaknesses: While this manuscript makes substantial contributions, there are some areas for improvement to enhance its impact: **1. Model Assumptions:** The models and assumptions in this work (latent variable models, Hölder smoothness of the latent functions, and the ranking assumptions between source $P$ and target $Q$) are quite standard in the theoretical literature. 
Nevertheless, including in-depth discussions and ablation studies to examine their relevance and applicability in real-world scenarios would help motivate and convince practically oriented readers. **2. Further Numerical Experiments:** Additional numerical experiments to verify and examine the theoretical results (Theorem 2.2 and Proposition 3.4) would be valuable. This would help validate whether the expected dependence on parameters such as $d, \beta, n_Q$ is sharp. Additionally, comparing the performance of the proposed transfer learning algorithms against a true oracle with direct access to the full edge data from $Q$ (which would correspond to what the authors call "oracle" with $p_{flip} = 0$) would properly quantify the cost of transfer and evaluate the effectiveness of the proposed algorithms. It would also be beneficial to compare the proposed algorithms to other existing algorithms in the literature to highlight the claimed advantages of the proposals. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Based on the description of the Oracle algorithm in lines 246-252, it seems that its performance should be independent of $n_Q$. However, in Figure 2, the Oracle algorithm's performance appears to improve as $n_Q$ increases in the first two plots. Could the authors clarify if there is something I am missing? 2. For the Email networks, Algorithm 1 seems to outperform the Oracle (with $p=0.01$ and $p=0.05$). Can the authors provide any insights on how a transfer learning algorithm (i.e., Algorithm 1) can outperform an Oracle algorithm that has full access to the edge data from $Q$? *Minor suggestions/typos:* - Line 71: The order of $|S|$ and $n_Q$ should be swapped, i.e., $n_Q := |S| = o(n)$. - Lines 102 - 104: The quantifier for $i$ seems missing, e.g., "... *for $i$,* and for all $j \neq i$ ..." - Line 111: Something seems to be off here. 
- Lines 183 - 184: In the second sentence of Theorem 2.2, it is unclear which parameter corresponds to which function. I would suggest the authors write for example "Let $f_P$ be $\alpha$-H\"older-smooth and $f_Q$ be $\beta$-H\"older-smooth for $\beta \geq \alpha > 0$, ..." - Lines 238 - 239: It seems a line break is needed before "(1) Algorithm 2." - Lines 244 - 245: This line break should be removed. - Section 4: Please use $p_{flip}$ instead of $p$ for consistency (or vice versa). Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: This is primarily a theoretical work, and the authors discussed the potential limitations of the work and potential future research directions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank the reviewer for highlighting the fundamental nature of our problem, and its numerous potential practical applications. We will fix all typographical errors in the revision. We address their feedback in detail below. ### **Relevance and applicability of latent variable models in real-world scenarios**. Latent variable models are widely used in applied fields such as neuroscience [1], ecology [2], international relations [3], political pscyhology [4], and education research [5]. To help motivate practically oriented readers, we will include more in-depth discussion of these applications in the revision. ### **Relevance and applicability of the rankings assumption for biological networks.** Previous works require some form of similarity between networks to enable transfer (Sen et al. 2018, Fan et al. 2019, Baranwal et al. 2020). For example, Kshirsagar et al. 2013 require a _commonality hypothesis_: if pathogens A, B target the same neighborhoods in a protein interaction network, one can transfer from A to B. Our rankings assumption similarly posits that to transfer knowledge from A to B, A and B have similar 2-hop neighborhood structures. ### **New ablation experiments in the global rebuttal validate our algorithms' error rates.** We test dependence of Algorithm 1 on $n_Q, d, \beta$ and Algorithm 2 on $n_Q, k_Q$. See the global rebuttal and the attached PDF. * Experiments include comparisons to a true oracle with access to full edge data from $Q$. See also the discussion in the next heading. * Experiments include new comparisons to a proposed algorithm in the literature by Levin et al (JMLR 2022). See global rebuttal for a full description. * Performance of Algorithm 1 and Algorithm 2 both match the trend lines given by theoretical upper bounds (Theorem 2.2 and Proposition 3.4 respectively). Note that we plot theoretical upper bounds assuming each one has a constant of $1.0$, but these are likely to be larger. 
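To make the oracle comparison above concrete, here is a minimal, hypothetical sketch of the flip-probability oracle's observation model as we understand it from this discussion: edges with both endpoints in the observed node set $S$ are seen exactly, while every other edge is flipped independently with probability $p$. The function name and toy setup are our own, not the authors' code.

```python
import numpy as np

def oracle_observations(Y, S, p, rng):
    """Simulate the flip-probability oracle discussed in the rebuttal.

    Y: symmetric 0/1 matrix of true target edges.
    S: indices of observed target nodes (these pairs are never flipped).
    p: probability of flipping each edge with an endpoint outside S.
    Illustrative sketch only.
    """
    n = Y.shape[0]
    in_S = np.zeros(n, dtype=bool)
    in_S[S] = True
    # A pair (i, j) is protected only if both endpoints lie in S
    protected = in_S[:, None] & in_S[None, :]
    flips = (rng.random((n, n)) < p) & ~protected
    flips = np.triu(flips, 1)
    flips = flips | flips.T            # flip edges symmetrically
    return np.where(flips, 1 - Y, Y)

rng = np.random.default_rng(0)
n, n_Q, p = 200, 20, 0.1
Y = np.zeros((n, n), dtype=int)        # toy target graph with no edges
S = np.arange(n_Q)
obs = oracle_observations(Y, S, p, rng)

# Entries inside S x S are exact; outside, roughly a fraction p is flipped
print(obs[:n_Q, :n_Q].sum())
print(obs[n_Q:, n_Q:].mean())
```

Under this model it is easy to see why the oracle improves as $n_Q$ grows (more pairs are protected from flips) and why $n_Q$ is irrelevant when $p = 0$ (nothing is ever flipped).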
### **Oracle with $p = 0.0$ outperforms transfer algorithms in our new experiments.** An oracle with $p = 0.0$ corresponds to a non-transfer setting where all edges from the target graph $Q$ are observed. Note that in this case the value of $n_Q$ does not matter, because no edges are flipped, including those incident to nodes outside of $S$. In lines 251-252, we noted that when $p=0.0$, the Oracle error for $\beta$-smooth graphons on $d$-dimensional latent variables will be $O(n^{-\frac{2 \beta}{2\beta + d}})$ (Xu 2018), which is less than the error bound of Theorem 2.2. In our new experiments we implement this oracle and indeed find that it outperforms our transfer algorithms in the regimes where its theoretical upper bound is better than ours. In particular, our upper bounds scale with $n_Q$, rather than $n$, so for $n_Q \ll n$ transfer should generally be worse. Note that for our Algorithm 1 and for this version of the oracle, lower bounds are unknown (see our discussion in lines 193-196). Still, these experimental results should help quantify the cost of transfer and evaluate the effectiveness of our proposed algorithms. ### **The Oracle depends on $n_Q$ if $p > 0$.** Following the equation below line 249, the oracle observes the unbiased $Y_{ij} \sim \mathrm{Bernoulli}(Q(x_i, x_j))$ if $i, j \in S$ are both part of the set of observed target nodes $S$, where $\lvert S \rvert = n_Q$. If $i \not\in S$ or $j \not\in S$, then the oracle observes a possibly flipped version of $Y_{ij}$ when $p > 0$. Therefore, as $n_Q$ grows, the oracle should improve, because edges between nodes in $S$ are never flipped. We will clarify this point in the revision. ### **The Oracle performs much better on the Email-EU task with $p = 0.0$ than with $p=0.01$.** Rerunning our Email-EU experiment with a flip probability of $p=0.0$, and comparing to our existing results from Figure 2, we obtain the following. We use ($\dagger$) to denote a new experiment. 
| $n_Q$ | Method | Target | MSE (Median, 50 Trials) |
| ---- | ---- | ---- | ---- |
| 20 | Oracle, $p=0.0$ ($\dagger$) | Email-EU Days 81-160 | 0.003797 |
| 20 | Oracle, $p=0.01$ | Email-EU Days 81-160 | 0.007312 |
| 20 | Our Alg. 1 | Email-EU Days 81-160 | 0.007240 |
| 20 | Oracle, $p=0.0$ ($\dagger$) | Email-EU Days 561-640 | 0.004100 |
| 20 | Oracle, $p=0.01$ | Email-EU Days 561-640 | 0.007620 |
| 20 | Our Alg. 1 | Email-EU Days 561-640 | 0.007591 |

For this dataset, a flip probability of $p = 0.0$ versus $p = 0.01$ makes a substantial difference. Note that all MSE values on the right-hand side of Figure 2 are within $[0.007, 0.008]$, whereas the Oracle with $p=0.0$ has MSE $\approx 0.004$. Note that the Oracle with $p=0.0$ does not depend on $n_Q$, since it accesses the full, unbiased edge data from $Q$. We believe this difference arises because the email networks in Figure 2 are quite sparse, with median degree $\leq 0.007n$ (see Table 2). Therefore, even introducing a $0.01$ probability of edge flips makes the Oracle substantially worse on these networks. [1] Ren, Mingyang, et al. "Consistent estimation of the number of communities via regularized network embedding." Biometrics 79.3 (2023): 2404-2416. [2] Trifonova, Neda, et al. "Spatio-temporal Bayesian network models with latent variables for revealing trophic dynamics and functional networks in fisheries ecology." Ecological Informatics 30 (2015): 142-158. [3] Cao, Xun, and Michael D. Ward. "Do democracies attract portfolio investment? Transnational portfolio investments modeled as dynamic network." International Interactions 40.2 (2014): 216-245. [4] Barberá, Pablo, et al. "Tweeting from left to right: Is online political communication more than an echo chamber?" Psychological Science 26.10 (2015): 1531-1542. [5] Sweet, Tracy M., et al. "Hierarchical network models for education research: Hierarchical latent space models." Journal of Educational and Behavioral Statistics 38.3 (2013): 295-318. 
--- Rebuttal Comment 1.1: Title: Response to the Authors' Rebuttal Comment: I thank the authors for addressing my questions and concerns. I trust that the authors will incorporate the new experiment and additional explanations into their revision. With that understanding, I am inclined to raise my rating to a 7. --- Reply to Comment 1.1.1: Comment: Dear reviewer ajJ3, Thank you very much for taking our rebuttal into account and for raising your score. We will incorporate the new experiments and additional explanations into the revision.
Rebuttal 1: Rebuttal: We sincerely thank all the reviewers for their encouraging comments and for acknowledging the significance of our work. We will fix all typographical errors in the revision. In this global rebuttal, we will mainly discuss the new experiments, attached as PDF. ### **New experiments quantify the dependence of Algorithms 1 and 2 on all parameters [Reviewer ajJ3].** Each experiment tests the dependence of one of our algorithms on a particular parameter (see the attached PDF). Algorithm 1 is for general latent variable models. The error (Theorem 2.2) depends on the smoothness $\beta$ of the target graph, the number of observed target nodes $n_Q$, and the dimension of the latent variables $d$. Algorithm 2 is for Stochastic Block Models. The error (Proposition 3.4) depends on the number of communities $k_Q$ in the target graph, and the number of observed target nodes $n_Q$. Note that Proposition 3.4 also depends logarithmically on the minimum community size of $Q$, but this is less significant. ### **Comparisons against a new baseline adapted from Levin et al. (JMLR 2022) [Reviewers ajJ3 and VhmF].** We introduce a new model of transfer learning on networks (lines 69-71). To our knowledge, there are no published algorithms with provable guarantees that can be directly applied to our setting. Two key differences of our setting with existing works are: * _No node data for most nodes in the target network $Q$_. Most works on transfer learning for graphs assume access to some edge data for every node in the target graph (Wang et al. 2018, Levin et al. 2022). Similarly, matrix completion papers typically assume that at least one entry from each row/column of the target matrix is observed (Chatterjee 2015, Simchowitz et al. 2023 and references therein), or otherwise assume low rank/nuclear norm (see [1] and references therein). In our setting, we only observe an $n_Q \times n_Q$ subgraph for $n_Q \ll n$. 
Hence for most target nodes we observe none of their edges. This is comparable to the MNAR (Missing Not at Random) model in matrix completion [2]. * _No node labels in any network_. We consider a latent variable model in which the relevant features of nodes are not observed. This excludes approaches that rely on observed node labels or features (Tang et al. 2016, Zou et al. 2021, Qiao et al. 2023, Wu et al. 2024). Nevertheless, we implement a new baseline based on a modification of the estimator in Section 3.3 of Levin et al. (JMLR 2022). They assume that full edge data from both $P$ and $Q$ are observed, and that $P=Q$. Since this is not true for us, we instead compute the modified MLE: $$\tilde Q_{ij} = \begin{cases} \frac{w_P}{w_P + w_Q} A_{P;ij} + \frac{w_Q}{w_P + w_Q} A_{Q;ij} & i, j \in S \\ A_{P;ij} & \text{otherwise} \end{cases}$$ where $w_P, w_Q$ are computed as in their paper, based on estimated sub-gamma parameters of the noise for $A_P, A_Q$. Akin to their adjacency spectral embedding, which assumes known rank of $Q$, we use Universal Singular Value Thresholding to obtain $\hat Q$ from $\tilde Q$. Note that while we can plot theoretical guarantees for the mean-squared error $\frac{1}{n^2} \| \hat Q - Q \|_F^2$ of both our algorithms' $\hat Q$ and the oracle's $\hat Q$, Levin et al. only give theoretical guarantees on the spectral norm $\| \hat Q - Q \|_2$ for their estimator $\hat Q$. Analyzing the stronger metric of mean-squared error would require different techniques than their paper. ### **Comparisons against the Oracle with $p = 0.0$ [Reviewer ajJ3].** As suggested by reviewer ajJ3, we compare against the Oracle with flip probability $p = 0.0$. This corresponds to a non-transfer setting in which full data from the target graph $Q$ are observed. In this case, the value of $n_Q$ does not matter, since no edges are flipped. In lines 251-252, we note that such an oracle should be better than Algorithm 1 for smooth graphons. 
This is supported by our experiments. ### **Takeaways from our new experiments (see attached PDF) [Reviewers ajJ3 and VhmF].** Our main takeaway is that the performance of Algorithms 1 and 2 matches the trend of our theoretical upper bounds (modulo constants). These results validate our analyses and show the applicability of our methods in a wide variety of regimes, complementing our existing experiments on real-world data. By comparing against the Oracle with full edge data from $Q$, we help quantify the cost of transfer. As we would expect, the Oracle with $p = 0$ consistently attains lower mean-squared error than our Algorithms, since our transfer algorithms do not access the full target data from $Q$. The baseline adapted from Levin et al. is worse than our Algorithms. This is not surprising, because it is not designed for a setting in which $P \neq Q$ and only a vanishing fraction of $Q$ is observed. ### **Clarifying the effect of sparsity on the choice of graph distance [Reviewers BaBd and TFsn].** We use the common neighbors graph distance (Definition 1.2) to capture local graph structure in Algorithm 1. Other graph distances have been used in the literature as well (Zhang et al. 2017, Mukherjee and Chakrabarti 2019, Mao et al. 2021), but ours is useful for technical reasons (see our response to Reviewer BaBd). For expected degree $\tilde o(n^{1/2})$, our graph distance (Definition 1.2) fails to concentrate (Mao et al. 2021). However, the same authors show that a modified common neighbors distance concentrates for expected degree $\tilde \Omega(n^{1/3})$. This suggests that variations on the graph distance might ensure our Algorithm 1 works for sparser graphs. [1] Xiang, Yunhua, et al. "On the optimality of nuclear-norm-based matrix completion for problems with smooth non-linear structure." JMLR 2023. [2] Ma, Wei, and George H. Chen. 
"Missing not at random in matrix completion: The effectiveness of estimating missingness probabilities under a low nuclear norm assumption." NeurIPS 2019. Pdf: /pdf/c17d89813377a2d3e7cbc10f1312807dfd3511b2.pdf
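For concreteness, the modified MLE and the USVT step described in this rebuttal can be sketched in a few lines of NumPy. This is a hypothetical reimplementation, not the authors' code: the function names are ours, and we assume Chatterjee-style universal thresholding at $(2+\eta)\sqrt{n}$ with a final clip of the entries to $[0, 1]$.

```python
import numpy as np

def modified_mle(A_P, A_Q, S, w_P, w_Q):
    """Weighted combination of the two adjacency matrices on the jointly
    observed block S x S; plain A_P entries everywhere else."""
    Q_tilde = A_P.astype(float).copy()
    block = np.ix_(S, S)
    Q_tilde[block] = (w_P * A_P[block] + w_Q * A_Q[block]) / (w_P + w_Q)
    return Q_tilde

def usvt(Q_tilde, eta=0.01):
    """Universal Singular Value Thresholding: keep singular values above
    (2 + eta) * sqrt(n), then clip the entries back to [0, 1]."""
    n = Q_tilde.shape[0]
    U, s, Vt = np.linalg.svd(Q_tilde)
    keep = s >= (2 + eta) * np.sqrt(n)
    Q_hat = (U[:, keep] * s[keep]) @ Vt[keep, :]
    return np.clip(Q_hat, 0.0, 1.0)
```

In a full pipeline, $w_P$ and $w_Q$ would come from the estimated sub-gamma noise parameters of $A_P$ and $A_Q$, as in Levin et al.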
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits
Accept (poster)
Summary: The paper focuses on stochastic combinatorial semi-bandit problems where a player selects from a power set of actions comprising subsets of d base items. It underscores the importance of adapting to the problem structure to achieve optimal regret bounds, emphasizing the use of covariance matrix estimation to enhance performance. The proposed "optimistic" algorithms, OLS-UCB-C and COS-V, employ online covariance estimation. Inspired by Thompson Sampling, COS-V trades a slight loss of optimality for improved computational efficiency and marks the first sampling-based algorithm to achieve a $\sqrt{T}$ gap-free regret bound. Strengths: The paper is well-written. The worst-case guarantees and instance-dependent guarantees represent a significant enhancement over existing algorithms that adapt to covariance. Two different and relevant algorithms with their analyses are proposed. The comparison of regret rates and computational complexity between the different algorithms is clear. Weaknesses: The log(T) term has a power 3 in the regret bound, which can be improved. The approach of estimating the covariance matrix online and using it for confidence bounds is not new. No empirical evaluation. Technical Quality: 4 Clarity: 3 Questions for Authors: Do you think the log(T)^3 factor can be reduced? Why is there such a factor, whereas usually there is simply a log(T) factor? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: A sharp dependence on the horizon $T$ is more important for the instance-dependent guarantees. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ___Concerning the dependence on $T$___ > The log(T) term has a power 3 in the regret bound, which can be improved. > Do you think the $\log(T)^3$ factor can be reduced? Why is there such a factor, whereas usually there is simply a $\log(T)$ factor? > A sharp dependence on the horizon $T$ is more important for the instance-dependent guarantees. The $\log(T)$ factor actually has power $2$ for OLS-UCB-C and $3$ for COS-V. The first two exponents come from the online nature of the problem (Laplace trick) and the covariance estimations. The last one, only present in COS-V, comes from the union bounds made on the sampling distribution. ESCB-C manages a $\log(T)$ with power $1$, but suffers from more computational complexity than OLS-UCB-C and does not satisfy a $\tilde{O}(\sqrt{T})$ gap-free regret bound because it incurs a $1/\Delta_{min}^2$ term in the gap-dependent bound. For randomized algorithms, there exist $\log(T)$ gap-dependent bounds for Combinatorial Thompson Sampling. However, they also suffer from additional terms that can be up to $1/\Delta^d$, which prevent the $\tilde{O}(\sqrt{T})$ gap-free bound. All in all, it is indeed possible to get better $\log(T)$ exponents for the gap-dependent bounds. However, known approaches do not satisfy a $\tilde{O}(\sqrt{T})$ gap-free bound at the same time. It means that on “difficult" instances, our algorithms will incur $\tilde{O}(\sqrt{T})$ regret, while others may suffer regrets that are a lot worse (imagine $\Delta \sim 1/T$, for example). We are not aware of any straightforward way to reduce the $\log(T)$ exponents and keep the desired gap-free bounds at the same time. ___ ___Concerning the covariance matrix___ > The approach of estimating the covariance matrix online and using it for confidence bounds is not new. The approach is indeed not new (it is even natural) and we do not claim to be the first to use it. However, the way we incorporate those estimators in our analysis is rather new.
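For concreteness, a generic online covariance estimator of the kind discussed here can be sketched as follows. This is our own illustrative sketch of running pairwise moment estimates from semi-bandit feedback, not the exact update used by OLS-UCB-C or COS-V; all names are hypothetical.

```python
import numpy as np

class OnlineCovariance:
    """Running mean and covariance estimates over observed base items,
    updated one round at a time from semi-bandit feedback (a generic
    sketch, not the paper's exact OLS-UCB-C update)."""
    def __init__(self, d):
        self.n = np.zeros((d, d))         # pairwise observation counts
        self.cnt = np.zeros(d)            # per-item observation counts
        self.sum_y = np.zeros(d)          # per-item reward sums
        self.sum_prod = np.zeros((d, d))  # pairwise product sums

    def update(self, action, y):
        """action: boolean mask of chosen items; y: observed rewards."""
        idx = np.flatnonzero(action)
        self.cnt[idx] += 1
        self.sum_y[idx] += y[idx]
        self.n[np.ix_(idx, idx)] += 1
        self.sum_prod[np.ix_(idx, idx)] += np.outer(y[idx], y[idx])

    def mean(self):
        return self.sum_y / np.maximum(self.cnt, 1)

    def cov(self):
        """Naive plug-in covariance estimate from the running moments."""
        m = self.mean()
        return self.sum_prod / np.maximum(self.n, 1) - np.outer(m, m)
```

A confidence-bound algorithm would then size its exploration bonus or sampling distribution using `cov()` together with the counts `n`.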
--- Rebuttal 2: Comment: I read the authors’ rebuttal and the comments of other reviewers. I would like to keep my score unchanged.
Summary: The paper "Towards Efficient and Optimal Covariance-Adaptive Algorithms for Combinatorial Semi-Bandits" addresses the challenge of designing algorithms that adapt to covariance in the context of stochastic combinatorial semi-bandits, where decision-makers face a set of actions exponentially large in the number of base items. Strengths: 1. The paper presents new algorithms that improve on both the computational efficiency and regret bounds by adapting to the covariance structure of the environment, which is crucial for applications where the rewards are not independent across actions. 2. It provides the first gap-free regret bounds (regret bounds that do not depend on the minimum sub-optimality gap between the best and second-best actions) for these types of problems, extending the theory beyond independent rewards assumptions. Weaknesses: 1. The assumption (Lines 45-46) that the reward for each base item \( Y_{t,i} \) is bounded by \( \frac{B_i}{2} \) may be strong in applications where reward distributions can exhibit significant variability and are not tightly bounded. 2. While the algorithms' theoretical framework and mathematical formulation are well-articulated, the lack of experimental results or simulations leaves a crucial gap. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Concerns regarding the role of `f_t,δ` and `g_t,δ` functions in Eq. 6 and 7. The algorithms utilize the functions `f_t,δ` and `g_t,δ` to dynamically adjust the exploration strategy. The paper implies an increase in `f_t,δ` and `g_t,δ` as \( t \) increases, which leads to an increasing exploration bonus and variance over time. This design choice raises several concerns and questions, especially from an intuitive standpoint: a) Increasing Exploration Over Time: Typically, one might expect that as an algorithm learns more about the environment, the need for exploration would decrease, reducing the exploration bonus and variance.
The paper’s approach where `f_t,δ` increases with \( t \) seems counterintuitive. An explanation or justification for why increased exploration is necessary or beneficial as the algorithm progresses would be crucial. b) Design Justification for `g_t,δ` and `f_t,δ`: The paper lacks a detailed explanation for the design of `g_t,δ` and `f_t,δ`. Are there specific characteristics of the problem domain or empirical observations that suggest this approach? c) Impact on Algorithm’s Performance: How does the increasing trend of `f_t,δ` and `g_t,δ` impact the algorithm's overall performance across different learning stages? Some insights into how this approach compares to more traditional methods where exploration decreases over time would be beneficial. 2. The paper specifies using a Gaussian distribution in Eq. 7 for parameter sampling within the COS-V algorithm. Could the authors provide further justification for the choice of a Gaussian distribution in this context? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: As described in the paper, the results and theorems are discussed with comments made especially about some limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ___Concerning the weaknesses___ > The assumption (Lines 45-46) that the reward for each base item ($Y_{t,i}$) is bounded by ($\frac{B_i}{2}$) may be strong in applications where reward distributions can exhibit significant variability and are not tightly bounded. In many realistic settings, a player would be able to tell what scale of values for $Y_{t, i}$ is normal or not. If they can identify an upper bound $B$, then they could use the algorithms with $\min(\max(Y_{t, i}, -B), B)$ instead of the "true" $Y_{t, i}$ observed. Otherwise, one could try a Doubling Trick. We initialise $B$ at a certain power of $2$, and then we restart the algorithm each time an observation surpasses it, after doubling the current value of $B$. This would add a multiplicative factor to the regret bounds, of magnitude $\log(B^*)$, where $B^*$ is a “true" upper bound. ___ ___Concerning the main questions___ > Concerns regarding the role of $f_{t,\delta}$ and $g_{t,\delta}$ functions in Eq. 6 and 7. The algorithms utilize the functions $f_{t,\delta}$ and $g_{t,\delta}$ to dynamically adjust the exploration strategy. The paper implies an increase in $f_{t,\delta}$ and $g_{t,\delta}$ as $t$ increases, which leads to an increasing exploration bonus and variance over time. [...] >> a) Increasing Exploration Over Time [...]. b) Design Justification for $g_{t,\delta}$ and $f_{t,\delta}$ [...]. c) Impact on Algorithm’s Performance [...]. We acknowledge that the $\log(t)$ increase can be counterintuitive, and we may add an explanation in subsequent versions to clarify it. The exploration is actually not increasing over time. The decay of the exploration bonus comes more from the number of times each item/couple has been observed $(n_{i,j})$ than from $\log(t)$ (see Eq. 4 and Eq. 6 for example). As for all the main variants of UCB, the exploration bonuses look like a sum of terms $\sqrt{\frac{\log(1/\delta)}{n_{i,j}}}$, which decreases as the number of times each item is chosen increases.
Using $\log(t)$ (instead of $\log(T)$) ensures adaptivity to the time horizon and has been commonly used in several other papers. The justification for $f_{t, \delta}$ is explained in Section 4: it originates from the way the ellipsoids are designed. $f_{t, \delta}$ is involved in the definition of event $G_t$ in Eq.9 and its expression ensures Prop.1 is satisfied. $g_{t, \delta}$ controls the exploration made by COS-V. Its expression is derived from union bounds which enable Lemma 5 to be satisfied. > The paper specifies using a Gaussian distribution in Eq. 7 for parameter sampling within the COS-V algorithm. Could the authors provide further justification for the choice of a Gaussian distribution in this context? We think that other distributions could actually be used. Using Gaussian distributions was a natural direction as they are standard distributions, we know how to sample them efficiently, especially for given covariance matrices, and we know how to control their tails with usual tools. --- Rebuttal 2: Comment: Thank you for the response from the authors. 1. I have reviewed the experiments conducted by the authors, and I am particularly curious about the regret pattern of OLS_UCBV, which shows a distinct zigzag shape: it starts with a slow increase in regret, followed by a sharp rise, and then it tends to converge. This has also piqued my curiosity about the specific settings of the randomly generated synthetic environment, and it appears that its results are inferior to those of C_UCBV and ESCB_C_approx. 2. Additionally, if it is Covariance-Adaptive, I would like to know how the fluctuations or error bars of the actual performance of the algorithm proposed by the authors compare to those of other algorithms across multiple random seeds. --- Rebuttal Comment 2.1: Comment: We thank the reviewer for responding to our rebuttal and hope that we have adequately addressed their concerns regarding the design of our algorithms and our results. 
We kindly ask the reviewer to take this into consideration in their final assessment. We would like to emphasize again that our contributions are focused on the theoretical aspects of combinatorial bandits. Our paper is already dense with significant theoretical contributions, including new gap-free regret bounds, improved computational complexity, and a novel stochastic algorithm with analysis that combines the computational efficiency of Thompson Sampling with the analytical strengths of optimistic algorithms, all of which contribute to a deeper understanding of combinatorial bandits. The empirical study we provided in the rebuttal primarily serves to demonstrate that our algorithms can be implemented, achieve the logarithmic regret predicted by the theory, and perform well in practice compared to their competitors. We acknowledge that this study alone is not sufficient, and we will strive to conduct more thorough experiments in the final version (although we do not expect it to be considered as a contribution of our current work). Regarding the additional questions, due to time constraints during the rebuttal phase and the high computational cost of existing combinatorial bandit algorithms, our experiments were conducted on a single synthetic dataset and should be interpreted with caution. > I have reviewed the experiments [...]. Note that due to the log-log scale, the early shape of the regret curves should be interpreted with caution, as they correspond to only a few rounds $(\approx 30)$ compared to the total number of rounds $(\approx 10\,000)$. Furthermore, the curves are only valid after the measurement at $t=10$; prior to that, they mistakenly appear flat. The early behavior is primarily influenced by how the algorithms perform initial exploration and may be affected by hyper-parameter calibrations, which were not thoroughly optimized in this case.
What is more important to observe in these experiments is the behavior at the end of the regret curves: - Both OLS-UCB-C and ESCB-C-approx appear to have converged and entered the logarithmic regret regime. The difference between them seems to be due to constant terms related to their exploration strategies, but it does not seem to increase over time. - Towards the end, the per-round regret of C-UCBV is higher than that of OLS-UCBV, indicating that its cumulative regret will eventually be higher. This synthetic instance was "easy" enough for both algorithms to reach the logarithmic regime. However, it's important to note that in more challenging instances, ESCB-C only guarantees a worst-case regret of $O(T^{2/3})$, while our algorithm provides a stronger guarantee of $O(\sqrt{T})$. > Covariance-Adaptation and error bars. Indeed, having error bars is important, and we will include them in the final version. Unfortunately, we did not have time to include them in this preliminary experiment presented in the rebuttal. The adaptivity of our algorithms (and ESCB-C) to the covariance structure of the base items can be inferred from the per-round regret (i.e., the derivative) at the end of the plot. We observe that ESCB-C and OLS-UCB-C maintain a relatively similar distance asymptotically. In contrast, C-UCBV (C-UCB with plugged-in variance estimators) has also converged, but its final per-round regret is slightly worse than both. This is likely because C-UCBV does not account for covariance but instead assumes worst-case correlations between items. However, we must emphasize that this could be an artifact, and these experiments are not thorough enough to draw definitive conclusions.
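The per-round regret invoked above, i.e. the discrete derivative of the cumulative regret curve, can be read off numerically with a small smoothing helper. This is our own illustration, not code from the paper; the name and window size are arbitrary.

```python
import numpy as np

def per_round_regret(cum_regret, window=100):
    """Smoothed per-round regret: the discrete derivative of the
    cumulative regret curve, averaged over a sliding window. Useful for
    comparing asymptotic regimes at the end of a run."""
    inst = np.diff(cum_regret, prepend=0.0)     # instantaneous regret
    kernel = np.ones(window) / window           # moving-average smoothing
    return np.convolve(inst, kernel, mode="same")
```

On a curve in the logarithmic regime, the smoothed per-round regret decays roughly like $1/t$ towards the end of the horizon.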
Summary: This paper tackles the combinatorial semi-bandits problem and provides many theoretical results, including gap-free variance-dependent upper bounds for both deterministic and stochastic sampling strategies; for the latter, they also provide a corresponding lower bound. Strengths: The paper is well-written in a reader-friendly manner. It contributes a variety of highly non-trivial theoretical results that improve upon the current state-of-the-art, with extensive proofs in the appendix. Weaknesses: The paper does not provide any empirical evaluation; it would have been interesting to see how the presented algorithms perform empirically against their competitors. But given the theoretical contributions and nature of this paper, this is okay for me. Apart from that I have only the following rather minor issues and typos/suggestions: - Be more precise in the formulation of the lower bound (Thm. 2): In which sense does the inequality hold, in expectation? Also, is the policy restricted to deterministic policies here? - Some of the notations may be clear for most readers, but should for the sake of completeness still be properly introduced. E.g., $\mathbb{N}^\ast$, $\mathbf{I}$ and which logarithm (base) $\log$ refers to. - In Algo 2, line 3, does it have to be $\leq 2$ instead of $\leq 1$? - 8: yield[s] - 10: $\mathcal{O}(\sqrt{T})$ instead of $\sqrt{T}$? - 28: At each round - 108: adaptative - 139: positive semi-definite - 269: $\mathbb{P}(\mathcal{C}^c)$ - 272: sketch of the proof - You oftentimes write $A\bigcap B$ instead of $A\cap B$, why? Technical Quality: 4 Clarity: 4 Questions for Authors: - Do you have a feeling how sensitive the regret is w.r.t. the vector $B$? You assume this to be known, but I could imagine that in practice it's oftentimes unknown. - In Table 1, do you know how $C_{1/T}^{opt}$ scales in comparison to $d^2$? That is, do you know that $OLS-UCB-C$ outperforms $ESCB-C$ regarding time complexity?
- Both your algorithms have a parameter $\delta > 0$, which is not contained in the upper bounds. Could you say how the regret behaves w.r.t. this parameter? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately addressed the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ___Concerning the minor questions and remarks___ We first wish to thank the reviewer for pointing out our typos and inaccuracies. > Be more precise in the formulation of the lower bound (Thm. 2): In which sense does the inequality hold, in expectation? Also, is the policy restricted to deterministic policies here? The theorem holds in expectation, and is applicable both for deterministic and for stochastic policies. Besides, it seems that there is a mistake in our formulation. We inverted a “for all" and a “there exists" statement. We should have written: “For any policy $\pi$, there exists a stochastic combinatorial semi-bandit such that $\dots$". We are very grateful to the reviewer for pointing out this inaccuracy, which allowed us to correct a mistake. > In Algo. 2, line 3, does it have to be $\leq 2$ instead of $\leq 1$? It should be either $\leq 1$ or $<2$. All the coefficients indexed by $(i,j)$ are correctly defined as soon as the corresponding interaction has been observed at least twice. > You oftentimes write $A\bigcap B$ instead of $A\cap B$, why? The reason is purely aesthetic; we will uniformize the notation. ___ ___Concerning the main questions___ > Do you have a feeling how sensitive the regret is w.r.t. the vector $B$? You assume this to be known, but I could imagine that in practice it's oftentimes unknown. In our analysis, $B$ appears in the negligible additive terms (w.r.t. the dependence on $T$). Using a “looser" $B$ may make the convergence a little slower, but would not impact the asymptotic regret. In practice, knowing a plausible upper bound on the rewards is sufficient. In realistic settings, we can assume that a player knows such an upper bound $B_i$. In that case, they can replace $Y_{t, i}$ with $\min(\max(Y_{t, i}, -B_i), B_i)$ and use our algorithms. Otherwise, one could try a Doubling Trick.
We initialise $B$ at a certain power of $2$, and then we restart the algorithm each time an observation surpasses it, after doubling the current value of $B$. This would add a multiplicative factor to the regret bounds, of magnitude $\log(B^*)$, where $B^*$ is the true upper bound. > In Table 1, do you know how $C^{opt}_{1/T}$ scales in comparison to $d^2$? That is, do you know that OLS-UCB-C outperforms ESCB-C regarding time complexity? $C^{opt}_{1/T}$ scales with the horizon $T$, contrary to our per-round complexities. Ultimately, our complexity will then be better for large enough horizons. > Both your algorithms have a parameter $\delta>0$, which is not contained in the upper bounds. Could you say how the regret behaves w.r.t. this parameter? We have formulated $f_{t, \delta}$ and $g_{t, \delta}$ so as to isolate $\delta$ from the dependence on time and dimension. Its influence shows through $\log(1/\delta)$ factors in negligible additive terms. $\delta$ is fixed at the beginning and characterizes a tradeoff when sizing the ellipsoids. It influences how quickly we reach the asymptotic regret rate, but not the rate itself. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your rebuttal, I have no further questions.
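The clipping and Doubling Trick described in this thread can be sketched as follows. This is a toy illustration with hypothetical names; a real learner would also reset its internal statistics at each restart.

```python
import numpy as np

def clip_rewards(y, B):
    """With a plausible per-item bound B, run the algorithm on clipped
    observations min(max(y, -B), B) instead of the raw rewards."""
    return np.clip(y, -B, B)

def run_with_doubling(observations, init_exp=0):
    """Doubling Trick sketch: start with B = 2**init_exp, then double B
    (and, in a real learner, restart the algorithm) whenever an
    observation exceeds the current bound. Returns the final bound and
    the number of restarts."""
    B = 2.0 ** init_exp
    restarts = 0
    for y in observations:
        while abs(y) > B:  # bound violated: double until it holds
            B *= 2.0
            restarts += 1  # a real learner would reset its statistics here
    return B, restarts
```

Since $B$ only ever doubles, the number of restarts is logarithmic in the true bound $B^*$, matching the $\log(B^*)$ multiplicative factor mentioned above.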
null
null
Rebuttal 1: Rebuttal: We would like to sincerely thank the reviewers, ACs, SACs and PCs for their expertise and the time they are devoting to our submission. Their feedback and suggestions are very valuable and will be taken into account. ___General comments___ We were provided 3 high quality reviews that acknowledge the strengths of our submission, namely, the improvement over existing results by providing gap-free bounds, the clear presentation and the mathematical soundness. We answer the specific questions of each reviewer in the per-review rebuttal. Concerns over (the lack of) empirical evaluations have been raised by several reviewers. Our contributions being theoretical, we did not consider a priority to provide such evaluations and consider our contributions self-sufficient as is. We answer the concerns in more details in the following. ___ ___Concerning empirical evaluations___ Although our contributions are mainly theoretical, we agree that some empirical evaluations could be interesting. However, we consider our submission self-sufficient as is. We detailed clear and complete proofs and strove to make the presentation clear. Their quality has been unanimously recognized as a main strength of our submission. We are considering adding an experimental part in the Appendix for a new version, but we wish to show insightful results and illustrations to match the quality of our current main document. If the reviewers have specific demands and/or suggestions, we can try to produce them as our algorithms (and some competitors) can be easily implemented (although they all still need to be tuned). For the sake of illustration, we provide regret curves in a PDF file, for a randomly generated synthetic environment, for OLS-UCB-C, where we see that it is comparable to other combinatorial algorithms. Pdf: /pdf/03653c140ca0bd876746be5fdad31ecb86dacceb.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Scaling Laws in Linear Regression: Compute, Parameters, and Data
Accept (poster)
Summary: Large models often empirically satisfy the neural scaling law, i.e., the error decreases as the amount of data and the model size increase; however, this contradicts the widely accepted belief in learning theory that the variance error (one of the decomposed errors) should increase with the model size. The authors investigate this in the infinite-dimensional linear regression setting and show that the variance error is dominated by the other errors because of the implicit regularization of SGD. The authors also support their claim with numerical simulations. Strengths: - The authors reconcile the empirically observed scaling law and classical statistical learning theory, and offer a novel viewpoint to understand and explain their apparent conflict, which is novel and should be interesting for both the learning theory and LLM communities. - The authors offer a general framework in Section 6 to incorporate the general spectrum case of the data covariance matrix, and it provides novel technical innovations to sharply control the effect of the data sketch. - The paper is very well written and easy to understand. Weaknesses: - The assumptions of the paper might be too strong with the linear regression setting and Gaussian design assumption; is it possible to extend to the kernel setting with relaxed assumptions on features [1]? - While the current analysis focuses on one-pass SGD, which is not often used in LLM training, can the result be extended to Adam? - The result is not very surprising. [1] Barzilai et al., Generalization in Kernel Regression Under Realistic Assumptions Technical Quality: 4 Clarity: 4 Questions for Authors: - I am confused about the sketch matrix setting.
Is the fixed sketch matrix analogous to a neural net of size M, and why can the teacher-student model in previous work be viewed as a sketched linear regression model? - I am also confused about the Hypercontractivity assumption in Assumption 5; what is its intuition and why is it required in the proof? - What is the purpose of using a geometrically decaying stepsize? What happens if it is fixed or polynomially decaying? - Is the regularization effect of SGD determined by the choice of stepsize, where the optimal choice controls the strength of the implicit bias of SGD? Confidence: 2 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: See weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for supporting our paper! We address your comments as follows. > Q1. The assumptions of the paper might be too strong with the linear regression setting and Gaussian design assumption; is it possible to extend to the kernel setting with relaxed assumptions on features [1]? A1. The Gaussian design assumption enables us to derive matching lower bounds for linear regression under data sketch. Note that our upper bound does not require Gaussian design. We will clarify this in the revision. Extending our results to kernel settings under the relaxed assumptions in [1] is an interesting future direction. However, we think the obtained guarantee might be weaker. For instance, Theorem 2 in [1] only provides constant-probability guarantees, while our results, albeit using stronger assumptions, hold with high probability. --- > Q2. While the current analysis focuses on one-pass SGD, which is not often used in LLM training, can the result be extended to Adam? A2. Our SGD analysis cannot be directly extended to Adam because the structure of the randomness becomes convoluted in Adam. Nonetheless, we think this is an important question and we will comment on this in the revision. --- > Q3. The result is not very surprising. A3. We respectfully disagree. Typical risk bounds only involve upper bounds, but an upper bound is insufficient to characterize scaling laws, since an upper bound can be loose and the implied scaling law might be just an artifact. Our results, by establishing matching upper and lower excess risk bounds, rigorously characterize scaling laws. Results of this kind are quite rare to the best of our knowledge. Thus we believe our results are significant. --- > Q4. I am confused about the sketch matrix setting. Is the fixed sketch matrix analogous to a neural net of size M, and why can the teacher-student model in previous work be viewed as a sketched linear regression model? A4.
You are correct that our sketched data model is an analog of a two-layer network of size $M$ with a fixed inner layer and linear activation. Although prior works formulated their problem as a teacher-student model, they (taking [Bordelon et al., 2024] as an example) assumed that the teacher model is an infinite-dimensional linear model and the student model is another linear model using features projected by a random matrix $A$ (see their equations 1 and 3). Therefore their teacher-student model is effectively a sketched data model. --- > Q5. I am also confused about the Hypercontractivity assumption in Assumption 5; what is its intuition and why is it required in the proof? A5. The hypercontractivity condition requires the fourth moment tensor to be controlled by the second moment tensor. This is known to be weaker than sub-Gaussianity while still allowing a sharp SGD analysis [Zou et al., 2023]. The intuition is that the expected risk of the SGD output involves at most the fourth moment information of the data; therefore, some fourth moment condition is necessary to derive an expected risk bound. --- > Q6. What is the purpose of using a geometrically decaying stepsize? What happens if it is fixed or polynomially decaying? A6. The geometrically decaying stepsize scheduler is known to let the last iterate of SGD achieve a nearly (up to logarithmic factors) minimax optimal excess risk [Ge et al., 2019] in linear regression. Therefore we focus on the geometrically decaying stepsize scheduler. The last iterate of SGD would suffer from a constant variance error if used with a constant stepsize. Moreover, a polynomially decaying stepsize scheduler leads to a worse risk bound compared to a geometrically decaying stepsize scheduler, as shown in [Wu et al., 2022a]. --- > Q7. Is the regularization effect of SGD determined by the choice of stepsize, where the optimal choice controls the strength of the implicit bias of SGD? A7.
You are correct that the regularization of SGD is determined by the choice of stepsize. The optimal stepsize can be solved from the instance-wise sharp risk bounds. However, this requires certain knowledge of the task parameter and the spectrum of the data covariance. Additionally, $1/(\gamma n)$ in the SGD risk bound is comparable to $\lambda$ in the known ridge regression risk bound. This is explained in depth in [Zou et al., 2021]. **References** - Blake Bordelon, Alexander Atanasov, and Cengiz Pehlevan. A dynamical model of neural scaling laws. arXiv preprint arXiv:2402.01092, 2024. - Rong Ge, Sham M. Kakade, Rahul Kidambi, and Praneeth Netrapalli. The step decay schedule: A near optimal, geometrically decaying learning rate procedure for least squares. Advances in Neural Information Processing Systems, 32, 2019. - Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Last iterate risk bounds of SGD with decaying stepsize for overparameterized linear regression. The 39th International Conference on Machine Learning, 2022a. - Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, Dean P. Foster, and Sham Kakade. The benefits of implicit regularization from SGD in least squares problems. Advances in Neural Information Processing Systems, 34:5456–5468, 2021. - Difan Zou, Jingfeng Wu, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Benign overfitting of constant-stepsize SGD for linear regression. Journal of Machine Learning Research, 24(326):1–58, 2023. --- Rebuttal Comment 1.1: Comment: Thanks for your response and I will keep my positive score
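For intuition, one-pass SGD for least squares with a geometrically decaying stepsize can be sketched as follows. This is a simplified toy in the spirit of the step-decay scheduler of Ge et al. (2019); the phase lengths and names are our own choices, not the paper's exact procedure.

```python
import numpy as np

def sgd_geometric(X, y, gamma0, phases=None):
    """One-pass SGD for least squares: split the N samples into roughly
    log2(N) phases and halve the stepsize at each phase boundary
    (a simplified sketch of a geometrically decaying scheduler)."""
    N, d = X.shape
    if phases is None:
        phases = max(int(np.log2(N)), 1)
    per_phase = max(N // phases, 1)
    w = np.zeros(d)
    gamma = gamma0
    for t in range(N):
        if t > 0 and t % per_phase == 0:
            gamma /= 2.0                      # geometric decay
        g = (X[t] @ w - y[t]) * X[t]          # single-sample gradient
        w -= gamma * g
    return w
```

Each sample is used exactly once, so the variance of the last iterate is controlled by the decayed stepsize rather than by explicit regularization.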
Summary: This work examines neural scaling laws in the simplified setting of linear regression trained by one-pass SGD. In particular, it attempts to explain the apparent mismatch between statistical theory on one hand, which predicts that variance error increases with model size, and neural scaling laws on the other, which state that predictive performance should increase with model size. By considering sketched covariates of an infinite-dimensional linear regression model with Gaussian prior under mild spectral assumptions, scaling laws are derived while keeping track of a variance term that increases with model complexity and corrects for a mismatch between the best-in-class predictor and the best possible predictor. However, this term is of higher order than the usual approximation and bias terms appearing in scaling laws, accounting for the empirical accuracy of scaling laws in practice. Numerical examples are provided for synthetic data. Strengths: This is a good paper that addresses an important theoretical inconsistency, and provides progress towards more general results. The paper is generally well written and communicated. Weaknesses: It is not clear when the variance term is first given on page 2 that it is of higher order than the other terms. The reader has to find the more precise presentation in Theorem 4.1 in order to verify this for themselves. Clarification of this when the term is first introduced would be helpful and improve the paper. While the paper's contribution is primarily theoretical, the only examples given are synthetic. Validation of the theory on a large benchmark dataset, even if modified to make it appropriate to the assumptions, would strengthen the paper significantly. Technical Quality: 3 Clarity: 3 Questions for Authors: Can you please address the weaknesses outlined above? How do you think this work can be generalized to other settings of interest? Under which settings should the variance term be accounted for?
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Some discussion of the assumptions of their work and their limitations would be helpful. Broader societal impact is mostly not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their helpful feedback and comments. > Q1. It is not clear when the variance term is first given on page 2 that it is of higher order than the other terms. The reader has to find the more precise presentation in Theorem 4.1 in order to verify this for themselves. Clarification of this when the term is first introduced would be helpful and improve the paper. A1. We note that the display on page 2 already suggests the variance error is of higher order. Notice that $\gamma = O(1)$, so we have $$ \min \{ M, (N \gamma)^{1/a} \} / N \le \min \{ M, (N \gamma)^{1/a} \} / (N\gamma) \le \begin{cases} \frac{M}{N \gamma} \le M^{1-a} & M < (N \gamma)^{1/a}, \\ (N \gamma)^{1/a-1} & M \ge (N \gamma)^{1/a}. \end{cases} $$ This implies that the variance error is of higher order compared to the sum of the other terms. We will make this more clear in the revision. --- > Q2. While the paper's contribution is primarily theoretical, the only examples given are synthetic. Validation of the theory on a large benchmark dataset, even if modified to make it appropriate to the assumptions, would strengthen the paper significantly. A2. The power-law relations predicted by our theory are aligned with empirical findings in many other works [e.g., Hoffmann et al., 2022]. However, the hyper-parameters $a, b$ in our work are oracle properties of the data distribution, and it is a challenging problem to estimate them in practice. Nevertheless, we believe this is an interesting problem to study. --- > Q3. How do you think this work can be generalized to other settings of interest? Under which settings should the variance term be accounted for? A3. We believe the intuition from our theory, namely that the scaling laws are due to the disappearance of the variance error under implicit regularization, can be generalized to other settings. 
For instance, practical LLM training often makes only one pass over the data, so less noise is fitted, which implicitly regularizes the variance error. We believe this is an important reason for the observed scaling law. --- **References** - Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, et al. Training compute-optimal large language models. arXiv preprint arXiv:2203.15556, 2022. --- Rebuttal Comment 1.1: Title: Response Comment: Thanks for your response and clarification. I will maintain my score.
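The setting discussed in this rebuttal (one-pass SGD on a Gaussian sketch of covariates with a power-law spectrum) can be simulated in a few lines. The following is an editorial sketch, not the paper's code; the dimensions, exponents and stepsize are illustrative choices. It shows the population risk decreasing (then saturating) as the sketch size $M$ grows, consistent with the implicit-regularization picture described above:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 400, 2000          # ambient dimension and number of one-pass samples (illustrative)
a, b = 2.0, 2.5           # power-law exponents for the spectrum and the signal (illustrative)
lam = np.arange(1, d + 1, dtype=float) ** -a           # covariance spectrum lambda_i = i^{-a}
w_star = np.arange(1, d + 1, dtype=float) ** (-b / 2)  # target with power-law coefficients

def one_pass_sgd_risk(M, gamma=0.1):
    """Population risk of one-pass SGD run on an M-dimensional Gaussian sketch of x."""
    S = rng.standard_normal((M, d)) / np.sqrt(M)  # sketch matrix; M plays the role of model size
    u = np.zeros(M)
    for _ in range(N):
        x = rng.standard_normal(d) * np.sqrt(lam) # x ~ N(0, diag(lam))
        y = x @ w_star                            # noiseless label <w*, x>
        v = S @ x                                 # sketched covariate
        u += gamma * (y - v @ u) * v              # one SGD step per fresh sample
    r = w_star - S.T @ u                          # residual in the ambient space
    return float(r @ (lam * r))                   # E[(<w*, x> - <u, S x>)^2]

risks = {M: one_pass_sgd_risk(M) for M in (5, 20, 80, 320)}
print(risks)  # risk shrinks as the sketch size M grows, despite the larger "model"
```

The variance term never blows up with $M$ here because each sample is used once; this is the one-pass effect the rebuttal appeals to.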
Summary: This paper sheds new light on the bounds of linear regression by framing them in terms of scaling laws. Strengths: - The set-up is very clear, the story is well-narrated, and the model, including a sketch matrix to play the role of the model size, is well thought out. - The authors try to be exhaustive with the assumptions, subsequently including (i) the assumptions that make everything work (Assumptions A and B) and that allow going beyond Gaussian data; (ii) the source condition; (iii) some other spectrum decays. Weaknesses: It is difficult to talk about a weakness: the paper is self-contained, the results are solid and the context very well explained. Perhaps the authors should put emphasis on the fact that there is **absolutely no technical novelty and that the contribution lies in re-framing known bounds under the perspective of scaling laws**. Technical Quality: 4 Clarity: 4 Questions for Authors: No question Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 1 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the feedback. Below is our response to your question. > Q1. Perhaps the authors should put emphasis on the fact that there is absolutely no technical novelty and that the contribution lies in re-framing known bounds under the perspective of scaling laws. A1. We respectfully disagree with the claim that our results are a re-framing of known bounds. As discussed in lines 129-133, compared to the existing SGD analysis, we consider an additional data sketch, which leads to a new approximation error. Moreover, a direct application of existing SGD analysis only leads to sketch-dependent bias and variance errors, whereas we establish matching upper and lower bounds on these errors that also control the effects of the data sketch. To achieve these, we develop several new concentration results (such as Lemma 6.2) and make novel use of von Neumann's inequality (see for example the proof of Lemma C.2). Therefore, we believe we have made novel technical contributions. --- Rebuttal Comment 1.1: Title: After Rebuttal Comment: Let me be clearer: obviously, nothing is completely black and white, and there is a continuum between a simple reformulation and new contributions. What I mean is that I totally share the feeling of **Reviewer v1rx** that we do not understand what the *real* novelties of the approach are, beyond the storytelling. Hence, if Lemma 6.2 or Lemma C.2 are really new achievements, I feel that they could be promoted. I sincerely feel it is not the case, but it is only a question of perception and I am happy to be wrong. Furthermore, convergence rates on SGD and concentration rates on sketching or random features have been widely studied; in the present form, it seems hard to really understand **where** the authors stand. Yet, I maintain my score as I have nothing against a nice re-formulation. --- Reply to Comment 1.1.1: Comment: Thanks for the comment. Below we provide a roadmap of our proof to address some of your concerns. 
We agree that convergence rates on SGD and concentration rates on sketching or random features have been widely studied in many contexts. However, obtaining the main results (Theorems 4.1-4.3) in our work requires both the application of existing results and non-trivial developments on top of the existing bounds. More specifically, our Theorem 6.1 is mainly built upon existing results in [Wu et al., 2022a,b]. However, in order to obtain Theorems 4.1-4.3 from Theorem 6.1, we first develop novel upper/lower bounds (Theorems 6.3, 6.4) of the approximation, bias and variance terms. These bounds decompose $\mathsf{Approx}$, $\mathsf{Bias}$ and $\mathsf{Var}$ into the head and tail components (i.e. $\|w^*_{1:k}\|_2^2$, $\|w^*_{k:\infty}\|^2_{H_{k:\infty}}$) and involve the ratio of eigenvalues of the sketched covariance matrix. The bounds are carefully built so that the upper and lower bounds match up to constant factors in our examples. Theorems 4.1-4.3 are then obtained by instantiating Theorems 6.3 and 6.4 under the power-law (or log-power-law) spectrum. The role of Lemmas 6.2 and C.2: they are intermediate results developed in our proof. Lemma C.2 proves the first part of Theorem 6.4; Lemma 6.2 characterizes the eigenvalues of the sketched covariance matrix and enables us to obtain Theorems 4.1 and 4.2 from Theorems 6.3 and 6.4. While we appreciate the discussion and understand the perspective shared, we respectfully believe that some technical contributions have been made (e.g., novel use of von Neumann's inequality) in proving these lemmas and other results.
Summary: Motivated by the recent neural scaling law literature, this work investigates the generalization error rate of a sketched linear regression model trained on Gaussian covariates with power-law spectrum under one-pass SGD. The main results are lower and upper bounds with matching rates, providing a detailed characterization of the different contributions to the generalization error. Strengths: The paper is well-written and easy to parse. The topic is of interest to the NeurIPS community, and the contribution is timely. Weaknesses: In my reading, the two main weaknesses are: 1. The lack of a wider context. While it makes precise connections to other recent works on the "theory of neural scaling laws" such as [Bahri et al., 2021, Maloney et al., 2022, Bordelon et al., 2024], it ignores that the study of generalization error rates in least-squares / ridge regression under source and capacity conditions is a classical topic with an extensive literature. For instance: - [1,2,3] have studied both single and multiple pass SGD for least squares regression on Hilbert spaces under source and capacity conditions. This corresponds to the kernel limit $M\gg N$ in this work. Borrowing the notation from [3] for concreteness, after a quick comparison with the identification of the capacity $\alpha = a$ and source $r=\frac{b-1}{2a}$, I believe the "hard" and "easy" rates from Theorems 4.1 and 4.2 can be retrieved from the one-pass rates in [3]. - [4] extends the discussion of [1-3] to the random features approximation of kernels. - The rates from one-pass SGD on least squares can be identified with the rates of ridge regression with an appropriately chosen regularization, which has been exactly characterized in [7]. For example, the blue and orange regions in Figure 1 from [7] correspond to rates in Theorem 4.2 with $\ell = 1$. - A recent extension of the above to the RF case appeared in [6] (though note some of these rates were known from [7]). 
From the discussion above, it is not really clear what rates are new in this submission. I am willing to change my evaluation if this can be clarified with an in-depth comparison with these works. 2. The organization of the manuscript in Theorems with "settings of increasing generality" can come across as deceptive. Aren't Theorems 4.3, 4.2 and 4.1 corollaries of Theorems 6.3 and 6.4? **References** - [1] Yao, Y., Rosasco, L., & Caponnetto, A. (2007). On early stopping in gradient descent learning. Constructive Approximation, 26, 289-315. https://doi.org/10.1007/s00365-006-0663-2 - [2] Ying, Y., & Pontil, M. (2008). Online gradient descent learning algorithms. Foundations of Computational Mathematics, 8, 561-596. - [3] Pillaud-Vivien, L., Rudi, A., & Bach, F. (2018). Statistical optimality of stochastic gradient descent on hard learning problems through multiple passes. Advances in Neural Information Processing Systems, 31. - [4] Carratino, L., Rudi, A., & Rosasco, L. (2018). Learning with SGD and random features. Advances in Neural Information Processing Systems, 31. - [5] Cui, H., Loureiro, B., Krzakala, F., & Zdeborová, L. (2021). Generalization error rates in kernel regression: The crossover from the noiseless to noisy regime. Advances in Neural Information Processing Systems, 34. - [6] Defilippis, L., Loureiro, B., & Misiakiewicz, T. Dimension-free deterministic equivalents for random feature regression. arXiv:2405.15699. - [7] Rudi, A., & Rosasco, L. (2017). Generalization properties of learning with random features. Advances in Neural Information Processing Systems, 3215-3225. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - L43-48: > "*This difference must be reconciled; otherwise, the statistical learning theory and the empirical scaling law make conflicting predictions: as the model size increases, the theoretical bound (2) predicts an increase of variance error that eventually causes an increase of the population risk, but the neural scaling law (1) predicts a decrease of the population risk. In other words, it remains unclear when to follow the prediction of the empirical scaling law (1) and when to follow that of the statistical learning bound (2).*" Given our current understanding of random features regression, this claim is inflated. Several of the works mentioned above [4,6,7], as well as many other works in the random features literature, have made the observation that increasing the "width" $M$ does not necessarily lead to an increase of variance / overfitting, thanks to the implicit regularization. - L273-275: > "*It is noteworthy that our simulations demonstrate stronger observations than the theoretical results in Theorem 4.1, which only establishes matching upper and lower bounds up to a constant factor.*" What exactly do you mean by this sentence? What in Figure 2 is stronger than the exponents reported in Theorem 4.1? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The assumptions under which the Theorems hold are clearly stated, but I miss an explicit discussion of the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We will cite and discuss the relationship between the mentioned papers and our work in the revision. However, there are several potential misunderstandings about our results, which we would like to clarify below. > Q1. [1,2,3] have studied both single and multiple pass SGD for least squares regression on Hilbert spaces under source and capacity conditions. This corresponds to the kernel limit $M\gg N$ in this work. A1. There are three major differences that clearly separate our work on scaling laws from the prior works. First, as you mentioned, [1,2,3] focused on the kernel limit case where $M \gg N$, while our work treats both $M$ and $N$ as free variables; this is a key feature that allows studying scaling laws, which are functions of both $M$ and $N$. Second, we establish matching upper and lower bounds for SGD while [1,2,3] only focused on the upper bound. Without a matching lower bound, it is unclear whether the derived scaling law is the true behavior, or just a mathematical artifact of a loose upper bound. Finally, our work also establishes scaling laws under logarithmic power-law spectra (see Section 4.2), which is beyond the scope of the capacity condition. --- > Q2. Borrowing the notation from [3] for concreteness, after a quick comparison with the identification of the capacity $\alpha=a$ and source $r=\frac{b-1}{2a}$, I believe the "hard" and "easy" rates from Theorems 4.1 and 4.2 can be retrieved from the one-pass rates in [3]. A2. Our results cannot be recovered from [3]. As we discussed above, our Theorems 4.1 and 4.2 prove instance-wise matching upper and lower bounds as functions of both $M$ and $N$, while Theorem 1 in [3] only covers upper bounds as a function of $N$. --- > Q3. [4] extends the discussion of [1-3] to the random features approximation of kernels. A3. We hope our A1 and A2 have clarified the relationship between our work and [1,2,3], and hence also clarify this question. --- > Q4. 
The rates from one-pass SGD on least squares can be identified with the rates of ridge regression with an appropriately chosen regularization, which has been exactly characterized in [7]. For example, the blue and orange regions in Figure 1 from [7] correspond to rates in Theorem 4.2 with $\ell=1$. A recent extension of the above to the RF case appeared in [6] (though note some of these rates were known from [7]). A4. We hope our A1 and A2 have clarified the relationship between our work and [1,2,3], and hence also clarify this question. Moreover, we would like to point out that SGD is a more practical algorithm than ridge regression. --- > Q5. The organization of the manuscript in Theorems with "settings of increasing generality" can come across as deceptive. Aren't Theorems 4.3, 4.2 and 4.1 corollaries of Theorems 6.3 and 6.4? A5. Theorems 4.3, 4.2 and 4.1 are derived based on Theorems 6.3 and 6.4; however, we believe the derivations are non-trivial, and Theorems 6.3 and 6.4 can be viewed as the building blocks of the proof of our main results. Therefore, we put Theorems 6.3 and 6.4 after Theorems 4.3, 4.2 and 4.1. --- > Q6. L43-48… Given our current understanding of random features regression, this claim is inflated. Several of the works mentioned above [4,6,7], as well as many other works in the random features literature have made the observation that increasing the "width" does not necessarily lead to an increase of variance / overfitting, thanks to the implicit regularization. A6. This is a nice comment. We will rewrite this part to emphasize the differences between our results and prior results as discussed before. --- > Q7. L273-275… What exactly do you mean by this sentence? What in Figure 2 is stronger than the exponents reported in Theorem 4.1? A7. While our theorems provide upper and lower bounds of matching rates, the exact constant factors do not match. 
Our experiments suggest that the constants should also match, as indicated by the linear trend and small standard deviation in e.g. Figure 2, which is stronger than our theoretical prediction. --- > Q8. The assumptions under which the Theorems hold are clearly stated, but I miss an explicit discussion of the limitations. A8. Thanks for the suggestion. Most of our assumptions are standard in the literature; however, we will discuss the limitations of our assumptions (e.g., Gaussian design, source condition) in the revision. --- Rebuttal Comment 1.1: Title: Clarification Comment: I thank the authors for their rebuttal, which addressed some of my questions, in particular Q5-Q8. However, I find that your reply to Q1-Q4 misses my point. Let me try to be clearer. - I don't mean to say your results lack technical novelty. I understand [1,2,3] provide only upper bounds for kernel methods ($M\gg N$), while you also provide lower bounds with matching rates. - I also understand that [5,6,7] are for ridge, not SGD. My point is also not about which one is a more practical algorithm. In Q1-Q4 I tried to convey two points: 1. First, how do your exact rates compare with the upper bounds of [1,2,3]? Do they match in the regime where they are comparable? Are they tighter? 2. Second, how do your rates compare with the ridge rates from [5,6,7] for kernels and RF? We know that running SGD to convergence with decreasing learning rate recovers the least-squares rates, which are not the optimally regularized ridge rates. I want to understand how close to optimal early stopping is in terms of implicit regularisation. See for example Section 4 of [6] for a summary of the rates in [5,7]. It is not just about citing papers, but about how your work fits within the classical source and capacity literature, and the conclusions this allows us to make about the optimality of early stopping with SGD in the different scaling regimes. 
--- Reply to Comment 1.1.1: Title: Reply to Reviewer v1rx Comment: Thanks Reviewer v1rx for the clarification. We show below that our rate matches the prior SGD and ridge rates in the comparable regimes. For simplicity, we will ignore logarithmic factors by replacing $N_{\mathrm{eff}}$ with $N$ in our Theorem 4.2 in the following discussions. Note that the logarithmic factor can be removed by considering averaged SGD instead of last-iterate SGD. - Comparison with the prior SGD rate. We use [3] as an example. Aligning the source and capacity conditions, we have $\alpha = a$ and $r = (b-1)/(2a)$. Then the rate on their page 4 becomes $O(1/(N\gamma)^{(b-1)/a} + (N\gamma)^{1/a}/N)$, which exactly matches ours in Theorem 4.2 when $M$ is large in the comparable regime. - Comparison with the prior ridge rate. We use [7] as an example. Aligning the source and capacity conditions, we have $\gamma=1/a$ and $r = (b-1)/(2a)$. Then their optimal rate in Theorem 2 becomes $O(N^{-(b-1)/b})$, which matches our Theorem 4.2 when $M$ is large and $\gamma$ is tuned to be $\gamma^* = \Theta(N^{a/b-1})$ in the comparable regime. Finally, we would like to point out that [3] requires $\mathrm{tr}(\Sigma^{1/\alpha})$ to be finite. However, we do not require such a condition; that is, our bounds allow $\mathrm{tr}(H^{1/a}) = \infty$ in our notation. We will add these discussions in the revision.
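The substitutions in the two comparisons above can be verified numerically. The following editorial sketch (with illustrative exponents $a$, $b$ satisfying $a < b < a+1$) checks that plugging the tuned stepsize $\gamma^* = N^{a/b-1}$ into the large-$M$ SGD rate $O(1/(N\gamma)^{(b-1)/a} + (N\gamma)^{1/a}/N)$ reproduces the ridge rate $N^{-(b-1)/b}$ up to a constant:

```python
import numpy as np

a, b = 2.0, 3.0                       # illustrative capacity/source exponents, a < b < a + 1
N = np.array([1e4, 1e6, 1e8])         # several sample sizes

gamma = N ** (a / b - 1)              # tuned stepsize gamma* = Theta(N^{a/b - 1})
sgd_rate = (N * gamma) ** (-(b - 1) / a) + (N * gamma) ** (1 / a) / N  # large-M SGD rate
ridge_rate = N ** (-(b - 1) / b)      # optimal ridge rate

print(sgd_rate / ridge_rate)          # ratio is the constant 2 for every N: exponents match
```

With $N\gamma = N^{a/b}$, both terms of the SGD rate equal $N^{-(b-1)/b}$, so the ratio to the ridge rate is exactly 2 independently of $N$.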
null
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper proves that the scaling laws that occur in deep learning also occur when solving linear regression with SGD. Namely, suppose: * that covariates x are Gaussian with a power-law spectrum * the true labels are given by <w^*, x> for unknown w^* * we run one-pass SGD to do linear regression on an M-dimensional sketch of the covariates x * we have N data points Then the risk is upper-bounded by an expression that depends only on the "bias" and "approximation" terms, with inverse polynomial dependence as observed in neural scaling laws. The "variance" term is of lower order (provided the appropriate large step sizes are used). Extensions to when the data follows a logarithmic power spectrum are also explored. Strengths: The result is timely and I think will be of high interest to the community, since it sheds insight on why neural scaling laws might occur in practice. The paper is well-presented, and the proofs are neatly written and easy to follow. As opposed to previous works, this paper provides risk bounds for any network width M, and any number of data points N, instead of taking one of the parameters to infinity and deriving a scaling law in the other parameter. Weaknesses: Typo: "optimally tunned" Technical Quality: 4 Clarity: 4 Questions for Authors: 1. How dependent are the results on a geometric step-size schedule? What if a constant schedule, or schedule decreasing as gamma_t = t^{-c} is used instead? 2. In line 232, it is stated that when $1 \leq b \leq a$ the tasks are "relatively hard", and in line 236 that when $a < b < a+1$ the tasks are "relatively easy". Is what is meant here that these tasks are "relatively harder/easier" than when a = b? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for supporting our work! We will fix the typo. We address your concerns as follows. > Q1. How dependent are the results on a geometric step-size schedule? What if a constant schedule, or schedule decreasing as $\gamma_t = t^{-c}$ is used instead? A1. The geometrically decaying stepsize scheduler is known for the last iterate of SGD to achieve a nearly (up to logarithmic factors) minimax optimal excess risk [Ge et al., 2019] in linear regression. Therefore we focus on the geometrically decaying stepsize scheduler. The last iterate of SGD would suffer from a constant variance error if used with a constant stepsize. Moreover, a polynomially decaying stepsize scheduler leads to a worse risk bound compared to a geometrically decaying stepsize scheduler as shown in [Wu et al., 2022a]. We will clarify this in the revision. --- > Q2. In line 232, it is stated that when $1 \leq b \leq a$ the tasks are "relatively hard", and in line 236 that when $a < b < a+1$ the tasks are "relatively easy". Is what is meant here that these tasks are "relatively harder/easier" than when a = b? A2. You are correct. We will clarify this in the revision. --- **References:** - Rong Ge, Sham M Kakade, Rahul Kidambi, and Praneeth Netrapalli. The step decay schedule: A near optimal, geometrically decaying learning rate procedure for least squares. Advances in neural information processing systems, 32, 2019. - Jingfeng Wu, Difan Zou, Vladimir Braverman, Quanquan Gu, and Sham M. Kakade. Last iterate risk bounds of sgd with decaying stepsize for overparameterized linear regression. The 39th International Conference on Machine Learning, 2022. --- Rebuttal Comment 1.1: Title: Response Comment: Thank you, I will keep my score.
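The claims about stepsize schedules in A1 can be checked on a toy problem. The following editorial sketch (all dimensions, noise levels and stepsizes are illustrative choices, not the paper's setup) compares the last-iterate risk of one-pass SGD under a constant stepsize against a geometrically decaying step-decay schedule on noisy linear regression:

```python
import numpy as np

d, N, sigma = 20, 4096, 0.5                      # dimension, samples, label-noise level
lam = np.arange(1, d + 1, dtype=float) ** -2.0   # power-law covariance spectrum
w_star = np.ones(d)

def last_iterate_risk(schedule, seed=0):
    """Excess population risk of the last iterate of one-pass SGD."""
    rng = np.random.default_rng(seed)
    w = np.zeros(d)
    for t in range(N):
        x = rng.standard_normal(d) * np.sqrt(lam)
        y = x @ w_star + sigma * rng.standard_normal()   # noisy linear label
        w += schedule(t) * (y - x @ w) * x               # one SGD step per fresh sample
    r = w - w_star
    return float(r @ (lam * r))

gamma0, K = 0.3, int(np.log2(N))
constant = lambda t: gamma0
geometric = lambda t: gamma0 * 0.5 ** (t // (N // K))    # halve the stepsize every N/log2(N) steps

risk_const = last_iterate_risk(constant)
risk_geom = last_iterate_risk(geometric)
print(risk_const, risk_geom)   # the constant schedule is stuck at a gamma*sigma^2-level variance
```

The constant-stepsize last iterate retains a variance floor of order $\gamma \sigma^2$, while the step-decay schedule anneals it away, matching the rebuttal's point.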
null
null
null
null
null
null
Thompson Sampling For Combinatorial Bandits: Polynomial Regret and Mismatched Sampling Paradox
Accept (spotlight)
Summary: The paper explores CTS for the linear combinatorial semi-bandit problem with subgaussian rewards. It introduces a novel TS algorithm that avoids exponential regret scaling with problem dimensionality. Theoretical bounds and experiments are provided. Strengths: The paper addresses a significant limitation in existing CTS algorithms by avoiding exponential regret scaling with problem dimensionality, which is a notable contribution to the field. It establishes new regret bounds that improve upon previous results in terms of finite-time performance. This is further shown experimentally. The paper is well-organized and clearly presents its ideas. Weaknesses: The paper places significant emphasis on the "mismatched sampling paradox," showcasing situations where a divergence between assumed and actual reward distributions can unexpectedly improve performance. While intriguing, this phenomenon isn't entirely novel, as it's known that Thompson Sampling (TS) can adapt to various posterior distributions. The paper's focus on this paradox, including its prominence in the title, may overshadow what could be considered the core contribution: a small boost to the exploration of CTS that better controls the constant regret term. The paper does not sufficiently clarify how the exploration boost is integrated into the analysis. The proof strategy is not well synthesized. [minor] Your main term scales with log(m)^2, but it is now possible to get a log(m) term instead. For example, use Lemma 4 in https://arxiv.org/pdf/2302.11182. It would be good to at least cite this paper, mentioning that this could be done. Lines 203-204: I think the event D_t should contain the infinity norm. Technical Quality: 3 Clarity: 3 Questions for Authors: Can you explain better what exactly makes your analysis work? What are the proof techniques that you used, and how do they differ from [18]? I put a good score for this paper as I think it is worth it. 
But I may consider giving a lower score if an unsatisfying answer is given to the questions. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The analysis only works for linear rewards. Can you discuss why this is the case and how it would be possible to extend it to more general cases? Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Lemma 4 of (https://arxiv.org/pdf/2302.11182) is very interesting. It could replace Lemma 10 from [9] and provide us with a better bound with a $\ln(m)$ instead of a $\ln(m)^2$. We will make sure to correct and add that in the final version of the paper. $U^\star(s)$ is a scalar; we do not believe an infinity norm is needed. (Or maybe you meant an infinity norm over the vector $(U^\star(s))_{s\in[t]}$?) The novelty of our work is to target the exponential term in the regret of [18], and mainly their step 4, which we replaced completely by intersecting the event $\mathfrak{G}$ ($\mathfrak{C}$ in their notation) with the event of the clean run $\mathfrak{A}$ that we introduced. Our proof relies on a maximal concentration inequality in the same spirit as Lemma 3 of [9] to bound the maximal deviation of the estimate of the reward of the best action (event $\mathfrak{D}$). This result was not used in [18] and is a key element in our proof. Some more basic concentration inequalities are used to control the probability of events $\mathfrak{B}, \mathfrak{C}$, and for the event $\mathfrak{F}$ as in [18]. The way the event $\bar{\mathfrak{H}}$ is handled is not new and is already covered by the literature: [18], [9] and now (https://arxiv.org/pdf/2302.11182). We are not convinced that this technique of added exploration could be used in other reward settings to alleviate the exponential regret. Even for 1-Lipschitz reward functions, the task seems rather tedious. First, we would need to find a way to extend the definition of $f$ to $\mathbb{R}^d$ and not just $[a,b]^d$. For instance, in the example of Theorem 6 of [22], $f$ is Lipschitz on $[0,1]^d$ but not on $\mathbb{R}^d$. Furthermore, suppose one tries to apply SG-CTS to that example. In that case, we believe that in order to explore the optimal decision, all the Thompson samples would need to be simultaneously greater than a certain threshold. This is a very low-probability event in high dimensions. 
We believe that, in general, the structure of Lipschitz functions could induce a lot of confusing sets of parameters on which the initial prior would put a very small probability. If that were to happen, Thompson sampling would need an exponentially long time to explore all those zones. This is what happens in step 4 of [18]. Our method works for linear functions because there are essentially only two sets to be explored, and our posterior still puts enough probability on both of them. --- Rebuttal Comment 1.1: Comment: I read the authors' rebuttal and skimmed the comments of other reviewers. I would like to keep my score unchanged.
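The mismatched-sampling idea discussed in this thread can be illustrated on a toy top-$m$ semi-bandit. The sketch below is an editorial illustration, not the paper's SG-CTS: it runs Gaussian per-arm Thompson sampling with an adjustable `variance_boost` knob standing in for the inflated sampling variance, with an exact oracle (sorting) over the top-$m$ super-arms; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m, T = 8, 3, 3000                       # base arms, super-arm size, horizon (illustrative)
mu = np.linspace(0.9, 0.2, d)              # true mean rewards of the d base arms
best = np.sort(mu)[-m:].sum()              # value of the optimal super-arm (top-m arms)

def gaussian_cts(variance_boost):
    """Gaussian CTS for the top-m semi-bandit: play m arms, observe each chosen arm."""
    n = np.ones(d)                         # pull counts (after one forced pull of each arm)
    s = mu + rng.standard_normal(d)        # reward sums from the forced initial pulls
    regret = 0.0
    for _ in range(T):
        theta = s / n + np.sqrt(variance_boost / n) * rng.standard_normal(d)
        A = np.argsort(-theta)[:m]         # exact oracle: best super-arm under the samples
        regret += best - mu[A].sum()
        obs = mu[A] + rng.standard_normal(m)   # semi-bandit feedback with unit-variance noise
        s[A] += obs
        n[A] += 1
    return regret

reg_matched = gaussian_cts(1.0)            # sampling variance matched to the noise
reg_boosted = gaussian_cts(4.0)            # inflated ("mismatched") sampling variance
print(reg_matched, reg_boosted)
```

On such an easy instance the boost mostly adds exploration cost; the paper's point is about worst-case instances where the extra exploration prevents the exponential waiting-time term, which this toy does not reproduce.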
Summary: This paper addresses Thompson Sampling for stochastic combinatorial bandits with sub-Gaussian rewards. In this area, previous work has identified the interesting phenomenon that some versions of Thompson sampling incur a per-instance regret which is exponential in the (maximum) number of arms pulled per round, m. This is not experienced by UCB-style algorithms, although those tend to have a worse computational complexity. This paper furthers the discussion around the performance of TS in combinatorial bandits by identifying a variant which incurs only polynomial dependence on m. This is achieved by using a distribution with inflated variance to draw Thompson samples, rather than the 'true' posterior. The paper contains non-trivial theoretical work to derive this bound, and contains an experiment demonstrating the superiority of the new TS approach over 'natural' variants and its comparable performance to ESCB, a state-of-the-art UCB-based algorithm. Strengths: I find the research questions considered in this paper very interesting, and think that the multi-armed bandits community at NeurIPS will value this work. Thompson Sampling is a very popular algorithm for all kinds of bandit problems, and the fact that its most natural extension to combinatorial bandits has poor performance is an important issue. The theoretical results in this paper meaningfully contribute to the understanding of where 'natural' Thompson sampling is necessary and where it is only 'some notion of adding randomness' that is needed. The theoretical work is based on state-of-the-art tools, does include some novel steps and is likely to be of its own interest. It does not appear to be a routine translation of existing tools. The paper is mostly well-written and clear, and for someone acquainted with combinatorial bandits it is quite easy to follow. Weaknesses: The main weakness of the paper is that more could be done to articulate exactly where the results improve over existing bounds. 
The exponential and polynomial dependence on m is, as the paper states, in the constant order (wrt T) term. The main comparison between the results of the present paper and [18] is in terms of order results, and some constants are potentially large. It leaves the reader uncertain as to where in the space of T, m and d, the bound of this paper improves upon the bound of [18]. Adding some clarity around this would improve the paper, and help to establish the extent of its contribution. There are some further points where I feel clarification could be made around theoretical and experimental results. I ask questions about these in the section below, and they are also central to me assessing the extent of the contribution. There are some parts where the explanations are probably a bit too brief to be accessible to those without expert knowledge of the field - e.g. lines 52-64 discussing related literature use undefined technical terminology and discuss papers very quickly. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. For which values of m and T specifically does the bound of this paper improve upon the bound in [18]? I appreciate this is problem dependent, but if this could at least be answered for the example considered in the experiments, that would be beneficial. 2. The situation in the experiments seems as though it could also be tackled by a 2-armed bandit policy that receives full-bandit feedback on [0,d/2] and essentially ignores the combinatorial structure. If those rewards were scaled to [0,1] would the combinatorial algorithms be outperformed by algorithms for 2-armed bandits? 3. When introducing B-CTS and G-CTS could you make clear if these are the variants to which the results of [18] and [22] apply? 4. 
The conclusion that using a mismatched variance term outperforms the natural variance is an interesting one, and seems to mirror some findings in the bandit literature that randomisation strategies that are not TS-like or based on a posterior distribution (explicitly) can perform well in bandit problems (e.g. Perturbed History Exploration and bootstrapping based approaches, Kveton et al. (2019) and subsequent work). Can you comment on whether this polynomial dependence relies on using something close to the posterior, or whether other randomisation strategies would seem to be sensible candidates? 5. Can you add to section 5 some clarity on what steps are novel and what are inspired by previous theoretical work? Some of the construction of a clear run seems to bear resemblance to (now) classic proofs for Thompson sampling (Agrawal and Goyal (2012), Kaufmann et al (2012), etc.) and the idea of using sample paths may have roots in other work too? (I'm not certain on that) Agrawal and Goyal (2012) http://proceedings.mlr.press/v23/agrawal12/agrawal12.pdf Kaufmann et al (2012) https://arxiv.org/pdf/1205.4217 Kveton, Szepesvari, et al. (2019) https://arxiv.org/abs/1902.10089 Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: 1. + From a theoretical point of view: We share the same leading term in $\log{T}$. However, for the exponential [18] vs. polynomial term, you need to compare $m^{8m}$ and $m^{20} \times d^{10}$. So, one can say that for $m>10$, our bound can be better. This does not take into account the term in $\frac{1}{\Delta_{\min}}$. To be sure, if $m>20$, our bound is better. + From an experimental point of view: There is no way to reach the asymptotic bound of [18] in practice (for some parameters) because the logarithmic term in $T$ is too small compared to the exponential term. In theory, it is also impossible to reach our bound because $\log(T)$ is very small compared to our polynomial term for all practical $T$. However, it happens that in the simulation, the asymptotic regime is reached much faster than the theory predicts. Our simulations run for $m$ in a range $[5,65]$. One can already see the exponential gap happening around $m=13$. 2. Yes. This is indeed the case! However, one could imagine a slightly more complex set where one could not easily separate the arms into separate actions. (A set with many decisions with many arms but which share a few of them.) This would also create exponential regret in the Semi-bandit setting. 3. For B-CTS and G-CTS, these are the versions to which [18] and [22] apply in the linear setting. However, in [18], they also provide a G-CTS with a slight modification to handle correlated subgaussian noise, but it still shows exponential regret. 4. If we understand the reviewer correctly, we believe that the randomization that Kveton et al. (2019) use could indeed help an algorithm like Thompson sampling (B-CTS) to work a bit better without alleviating the exponential regret. In fact, those constant terms (polynomial or exponential) come from a kind of waiting time for the best action to be played enough times. 
As far as we know, randomization tries to lower the probability that the reward of the best action is underestimated. However, if the best action is not played enough, like in the two-action example, this would not help much. This randomization could help, for instance (in our case), so that the events $\mathfrak{D}$ and $\mathfrak{F}$ happen with a smaller probability or so that we can have tighter bounds. This is indeed very interesting! 5. The idea that, to control the behaviour of TS, one should lower-bound the number of times the optimal decision is selected is undoubtedly not new. It was indeed exploited in Agrawal and Goyal (2012), Kaufmann et al (2012) for the multi-armed bandit setting. We also believe those ideas inspired [18], [21] in their proofs for the combinatorial setting. However, some of the more intricate proof techniques required to handle the exponential term are new. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: Thanks very much for your response to my comments; all of these responses are convincing. I would just like to know if and how/where you intend to incorporate these responses into the updated paper? --- Reply to Comment 1.1.1: Comment: Yes, we will incorporate those responses into our paper. We will emphasise the heuristic of the proof and especially the comparison with [18], Agrawal and Goyal (2012), Kaufmann et al (2012). We will also try to incorporate the exponential vs. the polynomial scaling if the space constraints allow.
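As a rough numerical sanity check on the rebuttal's comparison of the constant-order terms ($m^{8m}$ from [18] vs. $m^{20} \times d^{10}$ here), the two can be compared on a log scale. The values of $m$ and $d$ below are illustrative only, not taken from the paper's experiments:

```python
import math

def log10_exponential_term(m: int) -> float:
    # log10 of the m^(8m) constant-order term from [18]
    return 8 * m * math.log10(m)

def log10_polynomial_term(m: int, d: int) -> float:
    # log10 of the m^20 * d^10 constant-order term from this paper
    return 20 * math.log10(m) + 10 * math.log10(d)

# Illustrative values: m = 20 arms per round, ambient dimension d = 100
print(log10_exponential_term(20))      # ~208, i.e. a 209-digit constant
print(log10_polynomial_term(20, 100))  # ~46, i.e. a 47-digit constant
```

Even at moderate $m$, the gap between the two constants spans well over a hundred orders of magnitude, which is consistent with the exponential separation the simulations observe from $m \approx 13$ onward.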
Summary: The authors present a new Thompson Sampling algorithm for linear combinatorial stochastic semi-bandits. The algorithm provably achieves a better finite-time regret than previous works, and specifically without an exponential dependency on the dimension of the problem. The authors also present a "paradox" that shows using posterior knowledge is not always beneficial. Strengths: * The paper is presented clearly and the theory is sound. * The experiments suggest the algorithm is useful in practice. * The presented paradox is original and interesting. Weaknesses: * The main result improves the known regret only for the term that does not depend on the time horizon, which is usually less interesting. * Code is not provided for the numerical experiments. Technical Quality: 4 Clarity: 3 Questions for Authors: * In line 26, do you mean $X(t)$ is uniformly distributed? * Can you provide a summary of the algorithms as figures? It is hard to follow in the text what the exact algorithm is. * Can you provide any intuitive explanation for the presented "paradox"? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: The authors address the limitations of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We will provide the code in an open-source repository after the review process. * In line 26, $X(t)$ could be any random variable that is bounded in $[a,b]^{d}$. However, the one that maximizes its variance is the half-Dirac at $a$ and $b$. The latter gives us the most deconcentrated random variable, with a subgaussian parameter $(b-a)^2/4$. And any bounded random variable is subgaussian with parameter $(b-a)^2/4$. * Due to space constraints, the algorithms as figures were removed. They will be added back in the final version of the paper. You can find them in the pdf attached to the general answer. * One of the reasons behind the paradox is the fact that if the unknown parameter $\theta$ lies in $[a,b]^d$ and we attempt to use TS to select decisions by assigning (for instance) a uniform prior over that set, then in many cases the algorithm will put too little probability mass on the set around the actual parameter $\theta$, so that TS will never explore enough to discover $\theta$ and will be stuck in an infinite loop of sampling suboptimal decisions. Indeed, when $d$ is large, any region that only contains vectors $\theta$ such that $(1/d) \sum_{i \in [d]} \theta_i$ is not close to $(a+b)/2$ will be assigned very little probability mass, from the law of large numbers. In the two-action counterexample, if the worst action is sampled first and gives a reward greater than the mean of the sum of the priors of the other action, then, because of the prior, Thompson sampling will firmly believe it is the optimal action (again due to concentration). Therefore, it will not explore the other action, which is the best, for a very long time. [22] exploited this to create the example (see their Section 3.2). --- Rebuttal Comment 1.1: Comment: Thank you for the response, I will keep my score
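The concentration argument behind the paradox can be checked with a quick Monte Carlo sketch (the dimension, deviation threshold, and sample count below are illustrative choices, not values from the paper): for a uniform prior on $[0,1]^d$ with large $d$, essentially all prior mass has coordinate average close to $1/2$, so any region whose coordinate average is far from it gets almost no mass.

```python
import random

random.seed(0)
d = 1000         # ambient dimension (illustrative)
n_samples = 2000
# Hoeffding: P(|mean - 1/2| > 0.1) <= 2*exp(-2*d*0.1**2) ~ 4e-9 per draw,
# so across 2000 uniform draws from [0,1]^d we expect to see no deviation.
deviations = sum(
    1 for _ in range(n_samples)
    if abs(sum(random.random() for _ in range(d)) / d - 0.5) > 0.1
)
print(deviations)  # → 0
```

No draw lands in the deviating region, which mirrors why a TS posterior built from such a prior can take a very long time to place mass near an atypical true parameter.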
Summary: This paper proposes a modified version of posterior sampling that achieves an optimal asymptotic regret bound, providing an algorithm through the methodology of Thompson Sampling that achieves such a bound. Strengths: This is a technical paper, and the message is clear and intriguing. This paper validates the methodology of Thompson Sampling in the context of combinatorial bandits in terms of achieving polynomial dependence on the number of dimensions. The technique is novel, to my knowledge. Weaknesses: I am not exactly in this line of research, so I am not able to follow all the proof details. The paper is notation-heavy; for instance, there are nine events defined, eight of which are some sort of deviations. It is hard to parse and keep track of for a person outside the exact line of research. I appreciate the authors' effort to explain, but I do believe that in the main paper, the authors should write a proof sketch that highlights the key techniques and their novelty, rather than a semi-rigorous proof: for instance, out of the eight deviation events, which are standard and bounded by conventional results? Which are novel contributions? Of all the techniques involved, which is the key to this new result? All in all, I feel it would be of great help if a clear logical line for the proof were demonstrated for people outside this line of research. In its current state, it is rather hard for me to evaluate the technical contribution. Technical Quality: 3 Clarity: 3 Questions for Authors: What is the guarantee for G-CTS? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: The decomposition under a clean run (event $\mathfrak{A}$) is original. The decomposition of event $\mathfrak{A}$ into events $\mathfrak{B},\mathfrak{C},\mathfrak{D}, \mathfrak{E}$ is new, and how we handle event $\mathfrak{E}$ is original. However, the handling of events $\mathfrak{B},\mathfrak{C},\mathfrak{D}$ is similar to prior work. The events $\mathfrak{Z}, \mathfrak{G}, \mathfrak{F}, \mathfrak{H}$ are not new and come from [18]. However, the way we handle $\bar{\mathfrak{G}} \cap \mathfrak{A}$ is. The regret upper bound for G-CTS (not SG-CTS) is given in [18], Theorem 3. One needs to go to Appendix D, page 21, to see the exponential term. It states that under linear rewards and subgaussian noise on the rewards of the items (note that the noise can be correlated but known), the regret scales as $\frac{d\ln(m)^{2}\ln(T)}{\Delta_{\min}}$. However, there is an exponential constant term. This is why we decided to add a little more exploration and to rework the analysis. --- Rebuttal Comment 1.1: Title: Reply to rebuttal Comment: The authors have addressed my concern. I would like to maintain my score.
Rebuttal 1: Rebuttal: We want to thank the reviewers for their questions and remarks; we will take them into account to improve the paper's clarity. Here is a general answer to all the questions asked by the reviewers. The main contribution of our paper was to find a way to circumvent the exponential term in the work of [18], in Step 4 of their analysis, page 13. We replace it with a polynomial term in Theorem 1. The exponential dependency is due to the fact that they control some of the regret by the expected time necessary for **all** of the Thompson samples to be simultaneously greater than a threshold, and this expected time scales exponentially in the ambient dimension: in high dimension, this is an event of exponentially small probability. In fact, for the best action to be played, we only need the mean of the Thompson samples to be greater than a certain threshold. Furthermore, we needed to add more variance to them to control some deviation of the estimate of the rewards. Below are some of the original proof elements we came up with to circumvent the problem: 1. First, we show that what we name a clean run happens with high probability (Proposition 2). The way we decompose ${A^{\star}}^\top \theta(s)$ is original, with the introduction of the random part of the Thompson sample $Z(t)$. The treatment of the event $\mathfrak{E}$ is original. It is a mix between a deconcentration inequality (i.e. a lower bound on the probability that a random variable is far from its expectation), a study of the function $f(t)/g(t)$, and a multiplicative Azuma-Chernoff bound. The multiplicative Azuma-Chernoff bound is, we think, classic, but we did not manage to find a reference for our direction of the inequality, which we proved in the appendix. 2. Second, under the clean run event, we show that the optimal action is played at least a certain number of times. This is done in Proposition 3. It combines a concentration result from [9] and our last result. 3. 
Last, Subsubsection 5.3.2 is new and uses the last two results. It replaces Step 4, page 13 of [18]. It was discussed that a graph representing the events and the regret decomposition, highlighting the novel contributions, would be added to the paper's appendices. If the reviewers believe that it facilitates the understanding of the proof, we propose adding it to the paper's final version. We hope this answer will help the reviewers understand our work's novelty. Please do not hesitate to reach out for more details if needed. Here are the algorithms in figure form. Pdf: /pdf/3f79259e641989fbc8faebfa77faaf71767f9dff.pdf
NeurIPS_2024_submissions_huggingface
2024
SDP4Bit: Toward 4-bit Communication Quantization in Sharded Data Parallelism for LLM Training
Accept (poster)
Summary: This paper presents a novel scheme for reducing the communication cost of training LLMs using FSDP approach. To this end, the authors suggest to quantize the weights differences (instead of the weights themselves) and applying Hadamard transformations for making the gradients smooth. They show up to ~4x training speedup on 128 GPUs. Strengths: 1. The paper studies a highly important problem (communication cost of llm training). 2. Although the ideas of quantizing the weights difference and Hadamard smoothing are not new, using them in this direction is interesting. 3. The scheme is evaluated on a solid setting (16x4 A100 and 16x8 H100). 4. The theoretical convergence analysis of difference quantization is nice. 5. The paper is well-written and easy to follow. Weaknesses: 1. Although, the idea is interesting, I think the authors should provide more ablations on various aspects of their design choice (see questions section) to elaborate the approach better. 2. To motivate the problem more, I would like to see some practical analysis of the communication volume based on the number of nodes. For example, how much data (in GB) will be sent between gpus in a single node (and between different nodes) when we scale to a high number of gpus. Technical Quality: 3 Clarity: 3 Questions for Authors: I have some questions/ideas about the paper. I would be happy if the authors answer them. 1. The paper applies symmetric quantization for all schemes. However, in the communication reduction techniques, it is natural to use asymmetric quantization (which is a bit more accurate). Also, stochastic rounding could be another option to have unbiased gradients. Have you tried those schemes to see the accuracy changes? Maybe you don't need to apply hadamard in that case and the accuracy will be recovered! 2. The main motivation behind the hadamard smoothing is to remove the outliers. This is studied in several works [1, 2, 3, 4] (which are not cited here also). 
My main question is why we do not see an analysis of the gradient distribution in the paper, so we can make sure that the Hadamard transform is important. 3. There isn't any study on the overhead of applying the Hadamard scheme. I agree that there are fast Hadamard kernels when the input size is a power of two. However, it is not clear to me what happens if your input size is not a power of two. Also, there isn't an ablation on the size of the Hadamard in that case. References: 1. Training Transformers with 4-bit Integers (https://arxiv.org/abs/2306.11987) 2. QuaRot: Outlier-Free 4-Bit Inference in Rotated LLMs (https://arxiv.org/abs/2404.00456) 3. QuIP#: Even Better LLM Quantization with Hadamard Incoherence and Lattice Codebooks (https://arxiv.org/pdf/2402.04396) 4. SpinQuant: LLM quantization with learned rotations (https://arxiv.org/abs/2405.16406) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. The authors discuss that the work focuses only on communication reduction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses **W2: To motivate the problem more, I would like to see some practical analysis of the communication volume based on the number of nodes. For example, how much data ...** A: The practical communication volume for data parallelism is primarily determined by factors such as model size, the number of GPUs, and the quantization strategy employed. We present the practical volume results for different models below. Note that for larger models (6.7B and above), tensor parallelism is employed, so the intra-node data-parallel size may not equal the number of GPUs within a node. #### Notation | Model Size | GPUs per Node | Nodes | Gradient Quant Group | Weight Quant Group | |--|----|--|--|--| | M | g | N | G | W | #### Number of Bytes sent by a single GPU (when data parallel size = number of total GPUs, g > 1, N > 1) | | Baseline &nbsp;&nbsp;&nbsp;&nbsp; &nbsp;&nbsp;&nbsp;&nbsp; | SDP4Bit | |-----|----|----| | **Weight** | | | | Intra: | $$2M\frac{gN - 1}{gN}$$ | $$(0.5M + 4\lceil M/W\rceil)\frac{gN - 1}{gN}$$ | | Inter: | $$2M\frac{gN - 1}{gN}$$ | $$(0.5M + 4\lceil M/W\rceil)\frac{gN - 1}{gN}$$ | | **Gradient** | | | | Intra: | $$4M\frac{gN - 1}{gN}$$ | $$(M + 4\lceil M/G\rceil)\frac{g - 1}{g}$$ | | Inter: | $$4M\frac{gN - 1}{gN}$$ | $$(0.5\frac{M}{g} + 4\lceil\frac{M}{gG}\rceil)\frac{N - 1}{N}g$$ | #### Number of Bytes sent per GPU per Iteration (32 A100 GPUs) | Model Size | Baseline Weight | Baseline Gradient | SDP4Bit Weight | SDP4Bit Gradient | |----|------|------|-------|------| | GPT-1.3B | Intra: 2.37 GB, Inter: 2.37 GB | Intra: 4.75 GB, Inter: 4.75 GB | Intra: 0.60 GB, Inter: 0.60 GB | Intra: 0.95 GB, Inter: 1.15 GB | | GPT-2.7B | Intra: 4.79 GB, Inter: 4.79 GB | Intra: 9.57 GB, Inter: 9.57 GB | Intra: 1.20 GB, Inter: 1.20 GB | Intra: 1.91 GB, Inter: 1.15 GB | | GPT-6.7B | Intra: 0 GB, Inter: 2.72 GB | Intra: 0 GB, Inter: 5.45 GB | Intra: 0 GB, Inter: 0.68 GB | Intra: 0 GB, Inter: 0.72 GB | | GPT-13B | Intra: 0 GB, Inter: 2.30 GB | Intra: 0 GB, Inter: 4.61 GB | Intra: 0 GB, Inter: 0.58 GB | Intra: 0 GB, Inter: 0.61 GB | | GPT-18B | Intra: 0 GB, Inter: 3.29 GB | Intra: 0 GB, Inter: 6.58 GB | Intra: 0 GB, Inter: 0.83 GB | Intra: 0 GB, Inter: 0.87 GB | This data illustrates the communication volume involved in different model sizes and quantization strategies. Notably, larger models like GPT-6.7B and above also enable tensor parallelism, which decreases the intra-node dp size, eliminating intra-node dp communication. ### Questions: **Q1** **A:** We appreciate the insightful question regarding the use of asymmetric quantization and stochastic rounding. In our experiments, we've applied symmetric quantization with stochastic rounding across all methods to ensure a fair comparison. Our choice aligns with the baseline papers such as ZeRO++ for consistent comparison. While we acknowledge that different compressors can have varying impacts on performance, our main contributions lie in the areas of weight difference quantization and two-level mixed-precision quantization. The exploration of different compressor types, such as asymmetric quantization or even sparsification, and their potential benefits, is orthogonal to our work. Although investigating different compressors is valuable, it's beyond the scope of our current study. Also note that one of our main contributions in gradient quantization is the system optimization (kernel fusion, etc.), which could also be applied to asymmetric quantization. **Q2** **A:** We appreciate the suggestion to analyze the gradient distribution to justify the use of the Hadamard Transform. We've added an additional analysis, illustrated in **Figure 3 of our rebuttal PDF**, for the gradient distribution. This analysis demonstrates the necessity of the Hadamard Transform in handling gradient outliers, thereby justifying its use in our methodology. 
We will also include the additional relevant works ([2, 3, 4]) in our revised manuscript, as well as the analysis of gradient distribution. **Q3** **A:** The overhead reduction is two-fold: the choice of a small Hadamard matrix size and kernel fusion. The experiments in Table 4 (TLq-HS vs. Int4-Int4) demonstrate that there is nearly no additional overhead due to these optimizations. As described in Section 3.3, our proposed TLq-HS uses a Hadamard matrix size of 32x32, which we consider the best trade-off between accuracy and performance. Although larger Hadamard matrices can better smooth outliers, the improvement is modest (**illustrated in the table below with different Hadamard sizes**). Additionally, larger matrices increase computational complexity, which can become a bottleneck, especially on older GPU versions, and can result in inefficient memory access patterns due to larger tiling sizes. Given the small size of the Hadamard matrix (32x32), the Hadamard transformation is primarily a memory-bound operation on contemporary GPUs. This characteristic allows us to seamlessly integrate the Hadamard transform with the quantization kernel, resulting in nearly zero additional overhead. To further illustrate the impact of the Hadamard transformation on performance, we provide a (de)quantization throughput experiment in **Table 2 of our rebuttal PDF**, which is tested on an A100 GPU. If the input size is not divisible by 32, we pad the input with zeros accordingly. As requested, the ablation study of **validation loss with different Hadamard sizes** is provided below. We will also include these results in the revised version. | GPT-125M | Val Loss | |-|-| | Baseline | 2.29392 | | TLq-HS (Hadamard Size=32) | 2.29528 | | TLq-HS (Hadamard Size=64) | 2.29597 | | TLq-HS (Hadamard Size=128) | 2.29612 | --- Rebuttal Comment 1.1: Title: Reply Comment: Thanks for your helpful answers and explanation. I will raise my score to 7. 
--- Reply to Comment 1.1.1: Comment: Thank you for your insightful review and valuable advice, which have been very helpful in improving our paper.
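The outlier-smoothing effect of the 32x32 Hadamard transform discussed in the rebuttal above can be sketched in a few lines. This is a toy illustration using an orthonormal Sylvester-construction Hadamard matrix, not the authors' fused quantization kernel:

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Sylvester construction; n must be a power of two."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

n = 32
H = hadamard(n) / np.sqrt(n)  # orthonormal, so the L2 norm is preserved
g = np.zeros(n)
g[7] = 100.0                  # a gradient chunk with one large outlier
g_rot = H @ g
# The spike is spread evenly: every entry now has magnitude 100/sqrt(32) ~ 17.7,
# so a symmetric quantizer's scale (set by the max) shrinks by a factor sqrt(32).
print(np.abs(g_rot).max())
```

Because the transform is orthonormal, quantization error introduced in the rotated domain is not amplified when rotating back, which is the standard argument for Hadamard-based outlier suppression.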
Summary: Thank you for submitting your paper to NeurIPS 2024. The paper proposes SDP4Bit, a communication quantization method that mitigates accuracy loss by weight difference quantization and 8-bit intra-node and 4-bit inter-node quantization. The authors provide convergence analysis to show that SDP4Bit is compatible with both biased and unbiased compressors and experimental results to show that SDP4Bit outperforms existing solutions. Strengths: 1. The authors provide convergence analysis to show that weight difference quantization is compatible with both biased and unbiased compressors. 2. The two-level quantization method is suitable for the existing GPU cluster network topology. 3. The authors' implementation uses several system optimizations. Weaknesses: 1. Missing key ablation studies. The two proposed strategies in this paper are weight difference quantization and two-level int8-int4 all-to-all gradient averaging. So I suggest the authors add the following two experiments to compare the accuracies. - Running ZeRO++ with two-level int8-int4 all-to-all gradient averaging and its original quantization method - Running ZeRO++ with its original int4-int4 all-to-all gradient averaging and the weight difference quantization. 2. The figures are too small to read. 3. Related work could add AG-SGD, which compresses changes in activations rather than the activations themselves, so it does not rely on the assumption that gradients are unbiased. [1] Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Under what assumptions are weight differences generally easier to quantize? I've read several papers based on this intuition, but none that provide a comprehensive analysis of this intuition. 2. If you only need to assume that the magnitude of the weight difference is smaller than the weight, how do you ensure this? 3. 
If you ensure this via the learning rate, I think it will potentially lead to higher wall clock convergence time. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses **W1: Missing key ablation studies. The two proposed strategies in this paper are weight difference quantization and two-level int8-int4 all-to-all gradient averaging. So I suggest the authors add the following two experiments to compare the accuracies.** - *Running ZeRO++ with two-level int8-int4 all-to-all gradient averaging and its original quantization method.* - *Running ZeRO++ with its original int4-int4 all-to-all gradient averaging and the weight difference quantization.* **A:** We do have ablation studies for weight-only quantization and gradient-only quantization in Table 1 and Figure 5 (and we apologize for any potential confusion caused by the abbreviations). By separating the evaluation of gradient and weight quantization strategies, we avoid the influence of quantization on other parts of communication, and provide a clear understanding of the effectiveness of our proposed method. Instead of the two suggested experiments, we have the following ones: - *Running with full-precision weight communication: compare the proposed two-level int8-int4-Hadamard all-to-all gradient averaging (denoted as TLq-HS) with the original ZeRO++ two-level int4-int4 all-to-all gradient averaging (denoted as Int4-Int4)* (**Figure 5 after Line 313**) - *Running with full-precision gradient communication: compare the proposed int4 weight difference quantization (denoted as qWD) with the original ZeRO++ int4 weight quantization (denoted as quant-W4)* (**Table 1 at the top of Page 8**) We also acknowledge that there was no direct comparison between our proposed gradient quantization and the original ZeRO++ gradient quantization without the influence of the Hadamard transform. To address this, we include the Int8-Int4 all-to-all gradient averaging (denoted as TLq) results in **Figure 2 of the rebuttal PDF**, which will also be included in the revised manuscript. 
**W2: The figures are too small to read.** **A:** We acknowledge the issue with the figures being too small to read. We will increase the size of the figures in the revised version to enhance readability. **W3: Related work could add AG-SGD, which compresses changes in activations rather than the activations themselves, so it does not rely on the assumption that gradients are unbiased. [1] Fine-tuning Language Models over Slow Networks using Activation Quantization with Guarantees** **A:** Thank you for pointing out the relevant work on AG-SGD, which compresses changes in activations rather than the activations themselves. We will include this work in our related work section. ### Questions **Q1: Under what assumptions are weight differences generally easier to quantize? I've read several papers based on this intuition, but none that provide a comprehensive analysis of this intuition.** **A:** In our paper, "easier to quantize" refers to achieving lower relative reconstruction error during quantization. This intuition is based on observations on real data, as illustrated in Figure 4 on Page 4. The data distribution in Figure 4 shows that: 1) the quantization on weight differences achieves a finer granularity (smaller gaps between quantization levels) compared to quantization on weights themselves; 2) the weight differences are smaller than weights in magnitude or range; thus, combined with the informal analysis in Lines 136-143 of Page 4, we intuitively argue that weight differences potentially have lower reconstruction error relative to the weights themselves, which motivates our algorithm design. 
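The "easier to quantize" intuition from Q1 can be illustrated with a toy symmetric round-to-nearest quantizer on synthetic data (the weight and update magnitudes below are illustrative assumptions, not the paper's measurements): because the weight difference has a much smaller dynamic range, its quantization grid is finer and the reconstruction error relative to the weights is smaller.

```python
import numpy as np

def sym_quant_dequant(x: np.ndarray, bits: int = 4) -> np.ndarray:
    """Symmetric round-to-nearest quantization with one per-tensor scale."""
    levels = 2 ** (bits - 1) - 1          # 7 positive levels for int4
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=10_000)     # weights
diff = rng.normal(0.0, 2e-4, size=10_000)  # one optimizer step, ~100x smaller
w_new = w + diff

err_weight = np.abs(sym_quant_dequant(w_new) - w_new).max()   # quantize weights
err_diff = np.abs(w + sym_quant_dequant(diff) - w_new).max()  # quantize the diff
print(err_weight, err_diff)
```

With the ~100x smaller dynamic range of the difference, the worst-case reconstruction error drops by roughly the same factor, matching the finer-granularity argument above.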
**Q2: If you only need to assume that the magnitude of the weight difference is smaller than the weight, how do you ensure this?** **Q3: If you ensure this via the learning rate, I think it will potentially lead to higher wall clock convergence time.** **A (for Q2 and Q3 together):** We neither assume nor ensure "the magnitude of the weight difference is smaller than the weight" at all, either in practice or in theoretical analysis. In fact, we use exactly the same learning rate as the non-compression baselines in the experiments. Note that statements such as "magnitudes of weight differences are smaller than those of weights themselves" in Section 3.1 are simply an intuition and observation on real data that motivates weight difference quantization. It is not an assumption required in the actual theoretical analysis. We further explain such an intuition here: Any optimizer could be summarized as the pseudo-code $w \leftarrow w - lr * update$, where $w$ is the weight parameter and $lr * update$ is exactly the weight difference. We typically expect that $lr * update$ is much smaller than $w$ itself; otherwise, the optimizer update would override the weight parameter entirely, which doesn't make sense. Also, note that we don't really need the assumption of relatively smaller weight differences to prove that weight difference compression is better than weight compression in theory. In fact, the gap between them is whether to converge or not. In Counterexample 4.1, Line 217, we show that under standard settings (smooth function and unbiased gradients) weight compression + SGD may not even converge, no matter how small the learning rate is. In contrast, we provide rigorous proof in Theorem 4.1 (Appendix A) showing that our proposed weight difference compression + SGD converges at the same rate as ordinary SGD. --- Rebuttal Comment 1.1: Comment: Thank you for adding ablation studies and the explanation of "easier to quantize". I will maintain my score.
Summary: This paper proposes a 4-bit quantization framework for sharded data parallelism training. It proposes to quantize the weight differences between iterations as the first method to reduce accuracy degradation. It also proposes mixed-precision quantization for intra- and inter-node communication. Experiment results show that the proposed system can achieve 2x to 4x speedup on training GPT models with sizes from millions to a few billion parameters. Strengths: 1. The problem of the communication bottleneck in FSDP, especially multi-node training, is important. The paper aims to solve this problem, and the results seem to support this approach. 2. The paper implements the framework in a real hardware environment and achieves 2-4x speedup by reducing the communication. Weaknesses: 1. The experiment results show training/validation loss on Pile for the proposed method, the baseline (FP), QSDP, and other methods. The loss is an auxiliary variable for the performance of the model. A direct comparison on real datasets would be more convincing. For example, what is the accuracy of the model trained by the proposed methods and ZeRO/QSDP on MMLU and HellaSwag/PIQA/Winogrande/ARC/ARE, etc., reasoning tasks, and GSM8K, etc., math tasks? This is critical to show the accuracy/precision improvement over other 4-bit communication methods. 2. The paper shows the speedup of the proposed methods, but I would like to know which results in Table 4 are the training throughput for ZeRO++ and QSDP (if applicable)? I would like to see a comparison of accuracy vs. training speed between the proposed method and ZeRO/QSDP. Furthermore, are there any results on the training throughput (tokens/sec) at a certain sequence length (instead of training TFLOPs)? 3. The paper introduces a few methods, including qWD, two-level quantization, Hadamard transforms, etc. I would like to know which ones are the invention of the paper? For example, using the Hadamard transform to handle outliers has been known for a while. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the section "Weaknesses". Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses **W1: The experiment results show training/validation loss on Pile for the proposed method and baseline (FP), QSDP, and other methods. The loss is an auxiliary variable to the performance of the model. A direct comparison on real datasets would be more convincing. For example, what is the accuracy of the model trained by the proposed methods and ZeRO/QSDP on MMLU and Hellaswag/PIQA/Winogrande/ARC/ARE, etc., reasoning tasks, and GSK8K, etc., math tasks? This is critical to show the accuracy/precision improvement over other 4-bit communication methods.** **A:** We thank the reviewer for suggesting these evaluation methods for a more comprehensive comparison. We use validation loss as a measurement to align with our baseline papers ZeRO++/QSDP. Furthermore, we believe that the loss, as first-hand information, is one of the best metrics to reflect the influence of communication compression on the training procedure. **W2: The paper shows the speedup of the proposed methods, but I would like to know which results in Table 4 represent the training throughput for ZeRO++ and QSDP (if applicable)? ...** **A:** - **ZeRO++ Results in Table 4:** We thank the reviewer for highlighting this point. We acknowledge that the use of numerous abbreviations might have caused some confusion. The term "Int4-Int4" in Table 4, as defined on lines 277 and 292, represents the configuration used in ZeRO++ when gradient quantization is enabled. Specifically, "Int4-Int4" denotes 4-bit gradient compression for both intra-node and inter-node communication, which aligns with ZeRO++'s quantization strategy. - We did not include the throughput results for ZeRO++ with both weight and gradient quantization enabled because the resulting accuracy degradation was substantial. 
As shown in Figure 1 and lines 291-292, the configuration with W4 (4-bit weight quantization) combined with Int4-Int4 (4-bit 2-level gradient quantization) exhibits a significant accuracy gap compared to the baseline, making it impractical for pre-training tasks where accuracy is critical.
- **Accuracy-Training Speed Comparison:** The comparison of accuracy versus wall-clock time between our proposed SDP4Bit method and ZeRO++ (W4&Int4-Int4) for GPT-6.7B training on 128 H800 GPUs is presented in **Figure 1** of the rebuttal document.
- **Training Throughput in Tokens/Sec:** While we primarily report training throughput in TFLOP/s, these results can also be presented in tokens/sec. The detailed corresponding results are provided below:

| Model Size | 4xA100 Baseline (tokens/sec) | 4xA100 SDP4Bit (tokens/sec) | 4xA100 Speedup | 8xH800 Baseline (tokens/sec) | 8xH800 SDP4Bit (tokens/sec) | 8xH800 Speedup |
|------------|---------|----------|---|---------|-----------|-----|
| 1.3B | 169,723.8 | 406,228.9 | 2.39$\times$ | 974,432.5 | 1,498,441.4 | 1.54$\times$ |
| 2.7B | 85,886.9 | 209,101.9 | 2.43$\times$ | 514,489.8 | 836,600.1 | 1.63$\times$ |
| 6.7B | 63,950.4 | 220,223.3 | 3.44$\times$ | 310,965.6 | 923,946.0 | 2.97$\times$ |
| 13B | 60,795.3 | 161,877.3 | 2.66$\times$ | 172,901.7 | 666,543.1 | 3.86$\times$ |
| 18B | 44,843.3 | 130,724.9 | 2.92$\times$ | 126,892.3 | 519,622.9 | 4.09$\times$ |

**W3: The paper introduces a few methods, including qWD, two-level quantization, Hadamard transforms, etc. I would like to know which ones are the invention of the paper? ...** **A:** Our proposed SDP4Bit is a system-algorithm co-design where the qWD (quantization on Weight Difference) mainly contributes to the algorithm part, while the TLq-HS (Two-level gradient quantization and Hadamard Smoother) contributes to the system part. **Our primary innovation** is the introduction of the **qWD method**, which involves quantizing weight differences instead of the weights themselves.
To the best of our knowledge, this approach is novel and has not been proposed in prior works. We have provided theoretical proof of its convergence. The qWD method significantly reduces the weight communication overhead from the state-of-the-art 8-bit level (used in QSDP and ZeRO++) to a 4-bit level without compromising accuracy. **For TLq**, we introduced a mixed-precision approach (Int8-Int4) to achieve an optimal trade-off between accuracy and speed. While ZeRO++ utilizes Int4-Int4 quantization, our method combines 8-bit and 4-bit quantization to enhance accuracy. **Hadamard transform:** We acknowledge that the Hadamard Transform is a known technique for handling outliers and does not constitute a novel contribution in itself. However, our work makes significant contributions in minimizing its overhead. We achieved this by: 1. Eliminating the need for two consecutive Hadamard Transforms, as discussed on page 5, lines 183-186. 2. Integrating a Hadamard Transform with nearly-zero overhead within the quantization kernel. This optimization is detailed in Section 3.3, lines 190-210, and is evidenced by the similar overheads reported for the Int4-Int4 and TLq-HS configurations in Table 4. We also include detailed throughput results tested on an A100 GPU in **Table 2 of our rebuttal PDF**. **Comprehensive compression strategy for shardedDP:** Combining all three methods, we established a unique system-algorithm co-design communication pattern for sharded data parallelism (shardedDP). Our design is crucial for the efficient and scalable training of large language models (LLMs) and is the first to reduce the communication overhead of both parameter weights and gradients in shardedDP to nearly the 4-bit level without compromising accuracy.
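As a rough illustration of the Int8-vs-Int4 trade-off behind the TLq design described above, the NumPy sketch below (our simplification, using plain group-wise uniform quantization rather than the paper's fused kernels) compares the reconstruction error of 8-bit and 4-bit group-wise gradient quantization:

```python
import numpy as np

def groupwise_quantize(x, bits, group_size=128):
    # Group-wise uniform symmetric quantizer: one scale per group.
    # A simplified model of gradient compression, not the paper's kernel.
    lim = 2 ** (bits - 1) - 1
    out = np.empty_like(x)
    for start in range(0, len(x), group_size):
        g = x[start:start + group_size]
        m = np.max(np.abs(g))
        scale = m / lim if m > 0 else 1.0
        out[start:start + group_size] = np.clip(np.round(g / scale), -lim, lim) * scale
    return out

rng = np.random.default_rng(1)
grad = rng.standard_normal(4096)
err_int8 = np.abs(groupwise_quantize(grad, bits=8) - grad).mean()
err_int4 = np.abs(groupwise_quantize(grad, bits=4) - grad).mean()
assert err_int8 < err_int4  # Int8 is far more accurate than Int4
```

Because intra-node links (NVLink/PCIe) are fast, spending 8 bits there costs little time, while the slow inter-node hop receives the aggressive 4-bit compression; this mirrors the Int8-Int4 split described above.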
Summary: The paper introduces a novel approach to reduce communication overhead in Sharded Data Parallelism (ShardedDP) for training large language models (LLMs). It proposes two key techniques: quantization on weight differences and two-level gradient smooth quantization, which effectively compress weights and gradients to nearly 4 bits without compromising training accuracy. The method is implemented within the Megatron-LM framework and achieves up to 4.08× speedup in end-to-end throughput on 128 GPUs. The paper also provides theoretical guarantees of convergence and demonstrates negligible impact on training loss for models with up to 6.7 billion parameters. Strengths: 1. Weight difference and two-level gradient quantization are more effective at retaining accuracy and significantly improving throughput than baseline approaches 2. The authors present a complete story with a new quantization algorithm, accompanying GPU kernel implementations, and detailed evaluation/ablation over a range of models and baselines which together clearly illustrate all the benefits of each component of their approach. Weaknesses: 1. The connection between the convergence analysis and the quantization schemes is not entirely clear. Specifically, it is not clear if the quantizers correspond to the biased/unbiased compressors in Algorithm 4 and if so which of them is biased and which is unbiased. It appears from lines 5 and 7 of algorithm 4 that TLq is unbiased and qWD is biased but I couldn't find any mention/proof of that, so I am not sure if that is the case or if I am missing the point entirely. 2. Most of the empirical analysis is limited to the older generation of GPT models, and it is possible that may not extend as well to modern models like Mistral series, Phi series, Orca series, Llama series etc as well as the newer generation of GPT models Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why is 4-bit quantization needed? 
Is 8-bit quantization not sufficient in practice, especially since we don't expect foundation models to be trained too frequently? 2. Why is inter node communication quantized to 4 bits while intra node communication is quantized to 8 bits and not the other way around? 3. I think the Hadamard transform matrix might need to be post multiplied by a diagonal matrix with randomly chosen +1/-1 values on its diagonal for it to be effective at removing outliers and flattening the data (https://arxiv.org/abs/1011.1595). Is that what has been done here? If not, why? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ### Weaknesses **W1: The connection between the convergence analysis and the quantization schemes is not entirely clear. Specifically, it is not clear if the quantizers correspond to the biased/unbiased compressors in Algorithm 4 and if so which of them is biased and which is unbiased. It appears from lines 5 and 7 of Algorithm 4 that TLq is unbiased and qWD is biased but I couldn't find any mention/proof of that, so I am not sure if that is the case or if I am missing the point entirely.** **A:** The definition (notation) of the compressors $\mathcal{U}$ and $\mathcal{C}$ can be found in Definitions 4.1 and 4.2 on Page 6, right below Algorithm 4. The compressor $\mathcal{U}$ in Algorithm 4 Line 5 and Definition 4.1 is unbiased, assuming $\mathbb{E}[\mathcal{U}(v)] = v$. The compressor $\mathcal{C}$ in Algorithm 4 Line 7 and Definition 4.2 is (potentially) biased, as it does not assume unbiasedness. In the proof, the usage of Definition 4.1 can be found in Lines 476 and 484 of Appendix A, while the usage of Definition 4.2 is found in Line 483 of Appendix A. **W2: Most of the empirical analysis is limited to the older generation of GPT models, and it is possible that may not extend as well to modern models like Mistral series, Phi series, Orca series, Llama series, etc., as well as the newer generation of GPT models.** **A:** The reason that we chose to use the open-source GPT-2 is that the main purpose of this paper is to accelerate the pretraining of LLMs, which is more time-consuming and thus benefits more from communication compression compared to fine-tuning tasks. The hyperparameters of a GPT-2 baseline pretrained on the Pile dataset are readily available in open-sourced repositories such as GPT-Neo. However, models like Mistral and Llama are typically used for fine-tuning rather than being pretrained from scratch on public datasets.
The corresponding recipes (hyperparameters) and datasets for pretraining these models are not publicly available, making it challenging to establish a fair baseline. ### Questions **Q1: Why is 4-bit quantization needed? Is 8-bit quantization not sufficient in practice, especially since we don't expect foundation models to be trained too frequently?** **A:** The necessity for 4-bit quantization arises primarily from the need to reduce communication overhead, especially in scenarios with limited bandwidth. The choice of quantization level is influenced by the physical hardware and network infrastructure available: - **Weak Hardware:** For example, some NVIDIA GPUs, particularly the more affordable variants, lack high-speed NVLink and instead rely on PCIe for intra-node communication. This setup can significantly benefit from higher compression rates to mitigate slower communication channels. - **Heavy Communication Overhead:** In our communication overhead breakdown experiment shown in Figure 7, we observed that communication time (1142ms) constituted 76.7% of the total overhead compared to computation time (347ms) when training the GPT-2.7B model on 32 A100 GPUs equipped with 100Gbps slingshot10 inter-node bandwidth. Even with 8-bit quantization, the communication overhead remains substantial, as detailed in the communication overhead results **provided in the table below**. Hence, further compression to 4-bit levels can provide additional reductions in overhead. Additionally, while full retraining of foundation models may not be frequent, there are scenarios of continual training where incremental improvements or adaptations to new data are necessary. In these cases, efficient communication remains critical, making 4-bit quantization a practical solution. 
| Quantization Scheme (Weight/Gradient) | Computation | Communication | Communication Overhead |
|---------------------------------------|-------------|---------------|------------------------|
| Baseline (BF16/FP32) | 347 ms | 1142 ms | 76.7% |
| Quantize to 8-bit (Int8/Int8) | 347 ms | 420 ms | 54.8% |
| SDP4Bit (Int4/Int4) | 347 ms | 241 ms | 41.0% |

**Q2: Why is inter-node communication quantized to 4 bits while intra-node communication is quantized to 8 bits and not the other way around?** **A:** Intra-node communication typically uses high-speed communication media such as NVLink or at least PCIe, which is much faster than inter-node networking. Since inter-node communication is slower, it naturally becomes the major bottleneck and thus requires a higher compression ratio. **Q3: I think the Hadamard transform matrix might need to be post-multiplied by a diagonal matrix with randomly chosen +1/-1 values on its diagonal for it to be effective at removing outliers and flattening the data ([source](https://arxiv.org/abs/1011.1595)). Is that what has been done here? If not, why?** **A:** We didn't multiply the Hadamard matrix by a random +1/-1 diagonal matrix in this paper. The flattening power mainly comes from the original Hadamard matrix itself, which mixes the neighboring elements of the input matrix. The diagonal matrix only adds some randomness to the transformation, which may potentially enhance the flattening ability slightly. However, it also results in additional communication overhead because the receiver of the compressed message needs the same +1/-1 diagonal matrix as the sender for dequantization. Furthermore, the additional random diagonal matrix complicates the optimization of the fused CUDA kernel for the Hadamard transform and quantization/dequantization. Thus, we chose to use the simplest form of the Hadamard matrix, without randomness. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for addressing my concerns.
As I had already recommended accepting the paper, I will keep my score.
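The distinction between the unbiased compressor $\mathcal{U}$ (Definition 4.1, $\mathbb{E}[\mathcal{U}(v)] = v$) and the potentially biased compressor $\mathcal{C}$ (Definition 4.2) discussed in W1 above can be illustrated with a toy one-dimensional sketch; the two rounding schemes here are generic examples of each class, not necessarily the paper's exact quantizers:

```python
import numpy as np

def stochastic_round(v, scale, rng):
    # Round up with probability equal to the fractional part:
    # E[stochastic_round(v)] = v, i.e. an *unbiased* compressor.
    y = v / scale
    lo = np.floor(y)
    return (lo + (rng.random() < (y - lo))) * scale

def nearest_round(v, scale):
    # Deterministic round-to-nearest: in general a *biased* compressor.
    return np.round(v / scale) * scale

rng = np.random.default_rng(2)
v, scale = 0.3, 1.0
samples = [stochastic_round(v, scale, rng) for _ in range(100_000)]
mean_est = float(np.mean(samples))
# mean_est is close to 0.3, while nearest_round(0.3, 1.0) is always 0.0:
# the stochastic scheme is correct on average, the deterministic one is not.
```

Convergence proofs typically require only the unbiased property (plus bounded variance) for the gradient compressor, while the biased one must satisfy a contraction-style error bound, which matches how the two definitions are used in Appendix A.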
Rebuttal 1: Rebuttal: We appreciate the reviewers' critical assessment of our work. Below, we provide the relevant figures and results to address the questions and concerns raised. **Contents in Rebuttal PDF** 1. Table of notations that explains the abbreviations of algorithms such as W4, Int4-Int4, TLq, etc. 2. Comparison of Accuracy versus Wall-Clock Time Results 3. Additional Ablation Study: Direct comparison between our proposed Int8-Int4 gradient quantization (denoted as TLq) and the original ZeRO++ gradient quantization (denoted as Int4-Int4) 4. Analysis of Gradient Distribution Before and After Hadamard Transformation 5. Comparison of (De)Quantization Kernel Speed with and without Hadamard Transformation Pdf: /pdf/f892a87c316ce3688032e0a50df6e61bb40fa46c.pdf
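For intuition on item 4 above (gradient distribution before and after the Hadamard transform), the sketch below applies a normalized fast Walsh-Hadamard transform to a gradient vector with injected outliers; the max/std ratio, which drives the quantization scale, typically drops sharply. This is an illustrative NumPy implementation under our own assumptions, not the fused CUDA kernel from the paper.

```python
import numpy as np

def fwht(x):
    # Normalized fast Walsh-Hadamard transform; len(x) must be a power of 2.
    # The normalized transform is its own inverse.
    y = x.astype(float)
    n, h = len(y), 1
    while h < n:
        for i in range(0, n, 2 * h):
            a = y[i:i + h].copy()
            b = y[i + h:i + 2 * h].copy()
            y[i:i + h] = a + b
            y[i + h:i + 2 * h] = a - b
        h *= 2
    return y / np.sqrt(n)

rng = np.random.default_rng(3)
grad = rng.standard_normal(1024)
grad[::128] *= 50.0                       # inject a few large outliers

outlier_ratio = lambda v: np.max(np.abs(v)) / np.std(v)
before, after = outlier_ratio(grad), outlier_ratio(fwht(grad))
# `after` is typically several times smaller than `before`: the transform
# spreads outlier energy across all coordinates, flattening the distribution.
```

A flatter distribution means the uniform quantizer's scale is no longer dominated by a handful of outliers, so per-element quantization error shrinks at the same bit width.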
NeurIPS_2024_submissions_huggingface
2,024
Credal Deep Ensembles for Uncertainty Quantification
Accept (poster)
Summary: This paper presents Credal Deep Ensembles (CreDEs), an ensemble framework of Credal-Set Neural Networks (CreNets) that produces high-quality epistemic uncertainty estimates by predicting a credal set with the Distributionally Robust Optimization (DRO) technique. Contributions are: - A rigorous CreNet final layer in Section 2.1, where the number of nodes is doubled to represent confidence intervals with lower and upper probability bounds for each class. The final set of probability intervals is shown to be proper by satisfying convexity conditions. - A training procedure with the CreNet loss in Section 2.2, where the standard cross-entropy loss is applied to the upper class-probability bounds and an adversarially reweighted DRO loss to the lower ones. - Experimental results show CreDEs are better than Deep Ensembles (DEs), with higher test accuracy and lower ECE in IID classification and higher AUROC/AUPRC in OOD detection tasks. Strengths: - The paper is well-written and clear to understand. Sections are well organized, and the proposed method and its components are clear. - I like the proposed CreNet final layer. I also like the motivation of this paper; the topic of improving uncertainty and robustness performance in Neural Networks (NNs) is important and has broad applications in the AI safety field. - The empirical evaluation covers different task performances in multiple experimental settings. Weaknesses: 1. The theoretical contributions of this paper are weak. There are no theorems showing that the proposed method and the loss function improve uncertainty or robustness quality. The only theoretically sound result is showing that the results in Eq. 4 satisfy the convexity conditions in Eq. 2, but this can be derived straightforwardly by using the sum of standard exponential functions for the output vector. 2. Regarding the practical novelty, I also raise a concern about the computational limitation of the proposed method.
Because DEs have already been criticized as computationally inefficient in the NN uncertainty estimation field [1, 3, 4, 5], CreDEs are even less efficient than DEs. In particular, for each ensemble member, training is more complex than for a Standard Neural Network (SNN) due to the sorting in Alg. 1 and the training objective function. At test time, the model also requires double the number of output nodes compared to SNNs and needs to solve two optimization problems in Eq. 13, causing higher latency. 3. The proposed method only works for classification tasks. Additionally, it is also heuristic and depends on hyperparameter tuning. In particular, choosing the adversarial weights $\displaystyle w$ for DRO is non-trivial and can significantly change the training objective function. Furthermore, choosing the number of classes $K$ in Alg. 2 is also heuristic, causing a trade-off between uncertainty estimation quality and computational efficiency. 4. Lack of empirical comparison with many related works in uncertainty baselines [1]. I would be happy to raise my initial score if important baselines are added, including MC Dropout [2], BatchEnsemble [3], Rank-1 BNN [4], MIMO [5], SNGP [6], and NatPN [7]. 5. Lack of uncertainty evaluations and comparisons (see Questions). Better performance in OOD detection or active learning tasks is only a necessary condition, not a sufficient one, for a high-quality epistemic uncertainty algorithm. 6. The training complexity of CreNet is considerably higher than that of an SNN (L-712), causing CreDEs' complexity to be much higher than DEs' as the ensemble size grows. The model inference complexity in Tab. 6 is not evaluated on different model architectures across different hardware settings, e.g., GPU, CPU, etc. Technical Quality: 3 Clarity: 3 Questions for Authors: 1.
Can you qualitatively evaluate the uncertainty quality by using the reliability diagrams [8] and the PDF plot of predictive entropy $\mathrm{H}(p(y|x))$ like Fig. 3 in [9]? 2. Regarding the quantitative evaluation of the epistemic uncertainty in Table 5: How about the performance of DEs? Can you add an evaluation by calculating $Var_i[\mu_i(x)]$ in [10]? 3. Regarding the robustness performance, can you compare your method with DEs regarding the Negative log-likelihood, test accuracy, and ECE on OOD data (e.g., train on CIFAR-10, test on CIFAR-10-C)? 4. Regarding Table 1: What is the number of $K$ used for this setting? Could you please specify why there are significant differences in your reported ECE with [1] on ImageNet about DEs? 5. I am also curious about the results of different settings of DEs. For instance, the DEs used in your paper is the mixture of experts (or operation), how about the product of experts (and operation)? References: [1] Nado et al., Uncertainty Baselines: Benchmarks for Uncertainty & Robustness in Deep Learning, arXiv preprint arXiv:2106.04015, 2021. [2] Gal et al., Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ICML, 2016. [3] Wen et al., J. Batchensemble: an alternative approach to efficient ensemble and lifelong learning, ICLR, 2020. [4] Dusenberry et al., Efficient and scalable Bayesian neural nets with rank-1 factors, ICML, 2020. [5] Havasi et al., Training independent subnetworks for robust prediction, ICLR, 2021. [6] Liu et al., Simple and principled uncertainty estimation with deterministic deep learning via distance awareness, NeurIPS, 2020. [7] Charpentier et al., Natural posterior network: Deep bayesian predictive uncertainty for exponential family distributions, ICLR, 2022. [8] Guo et al., On Calibration of Modern Neural Networks, ICML, 2017. [9] Lakshminarayanan et al., Simple and scalable predictive uncertainty estimation using deep ensembles, NeurIPS, 2017. 
[10] Valdenegro-Toro et al., A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement, CVPR Workshop, 2022. --- **After rebuttal.** The authors provide detailed additional results, and I believe that they are valuable to support the claim of the main paper. So, I increased my initial rating for this paper. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Please see my comments on the Weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
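To make the interval construction from the summary concrete, here is a toy NumPy stand-in for a final layer with doubled outputs: the first C logits produce a base prediction and the second C produce per-class interval widths. This is our own illustrative construction, not the paper's Eq. (4), but it satisfies the same properness (convexity) conditions: elementwise $q_L \le q_U$ and $\sum_i q_L(i) \le 1 \le \sum_i q_U(i)$.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def credal_head(logits_2c):
    # Toy CreNet-style head (hypothetical construction, not the paper's Eq. (4)):
    # first C logits -> base prediction, second C logits -> interval widths.
    c = len(logits_2c) // 2
    p = softmax(logits_2c[:c])
    w = 1.0 / (1.0 + np.exp(-logits_2c[c:]))   # per-class widths in (0, 1)
    q_lower = p * (1.0 - w)                    # shrink towards 0
    q_upper = p + w * (1.0 - p)                # widen towards 1
    return q_lower, q_upper

q_l, q_u = credal_head(np.array([2.0, 0.5, -1.0, 1.0, 0.0, -2.0]))
# Properness holds by construction:
#   q_l <= q_u elementwise, and sum(q_l) <= 1 <= sum(q_u)
```

A pointwise prediction can then be taken anywhere inside the intervals (e.g. decision rules like the $\hat{i}_{max}$ / $\hat{i}_{min}$ variants discussed in the rebuttal tables), while the width of the intervals serves as an epistemic-uncertainty signal.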
Rebuttal 1: Rebuttal: Dear Reviewer, We thank you for your valuable feedback. **Response to W1:** We believe that lines 172-175 in the original paper are also an important result, showing that the cross-entropy loss (CE) can be applied to upper probability vectors as representatives of part of the boundary of the predicted credal set. However, in the original paper, indeed, the only validation of our hypothesis that credal prediction leads to better accuracy, calibration, and OOD detection is empirical. Below we outline a more rigorous argument. Epistemic uncertainty in predictions is induced by the fact that the probability distribution(s) that generated the training data may be very different from the distribution that generates a test point. It is well known that using CE as a loss leads to the maximum likelihood estimator of the model parameters, under the assumption that all training points are drawn i.i.d. from a single, unknown distribution $P$: $$ \hat{\theta} = \arg \min_\theta\text{CE} = \theta_{MLE}. $$ What does Distributionally Robust Optimization (DRO) do? It takes a cautious stance, assumes training points are generated by one of many potential distributions in $\mathcal{U}$, and minimizes the upper bound of the expected risk (Eq. (6) of the paper). In practice, this is approximated using Adversarially Reweighted Learning as in Eq. (9). Therefore, it learns $\hat{\theta} = \arg \min_\theta \max_{P \in \mathcal{U}} \mathbb{E}_P[\text{Loss}]$. However, if we apply DRO to a traditional network, the network will learn a single parameter value (Eq. (9) of the paper); for a new test data point $x_t$ it will then produce a single predicted probability vector, the one produced by the model that, in the long run, minimizes the worst-case risk.
This, though, does not express at all the (epistemic) uncertainty about *which* of the distributions in $\mathcal{U}$ has generated the particular test data point $x_t$; as a result, it does not express how uncertain the prediction may be for that particular data point. Allowing a network to output lower and upper probability vectors for each input, instead, allows this epistemic uncertainty to be explicitly modeled. What is the best-case scenario? That in which all data (including the test point $x_t$) are indeed generated by the same distribution – in this case, $\theta_{MLE}$ is the best choice, in a maximum likelihood sense. What is the worst case? That in which $x_t$ is generated by a distribution that is far away from the groups of distributions $P_g$ (see lines 140-141) that generated the training data. In this case, the DRO solution will still be far from perfect, but still the best one can do, given the evidence in the training data. Therefore, if we minimize the CE versus $\boldsymbol{q}_U$ (first component of Eq. (10)), we get a credal set $\mathbb{Q}$ of predictions whose upper probability is such that, for each class $i = 1,\dots,C$, there exists a probability vector $\boldsymbol{q}\in \mathbb{Q}$ such that $q(i)$ is the MLE prediction for that class (recall the discussion in lines 172-175). Similarly, if we minimize the DRO loss versus $\boldsymbol{q}_L$ (the second component of Eq. (10)), for each class $i = 1,\dots,C$, there exists a probability vector $\boldsymbol{q} \in \mathbb{Q}$ such that $q(i)$ is the DRO prediction for that class. Minimizing the two components together, as in Eq. (10), thus encourages the predicted credal set to extend from the best-case to the worst-case prediction. **Response to W2, W4, and W6**: We kindly invite you to follow the detailed response in our global rebuttal.
Based on your suggestions, we further reported an additional inference complexity comparison in Table 1 of the rebuttal file and included MC Dropout and two TensorFlow-standardized BNNs. **Response to W3**: Yes, our method is for classification, and we provide a roadmap for extending it to regression tasks in Appendix E.3. Our training includes a hyperparameter $\delta$. We performed an ablation study (lines 287-312) showing that our CreDEs are not particularly sensitive to $\delta$. $K$ is a design choice by the end user. The ablation study in Appendix B.2 shows that increasing the value of $K$ improves performance and increases cost. In addition, the reduction algorithm has a strong mathematical background in probability intervals [17], which rigorously guarantees a valid credal set. **Response to W5, Q1, Q2**: Based on your advice, we reported the entropy plots and reliability diagrams in the rebuttal file. Regarding Table 5, the EU estimates of DEs-5 are 0.0996 (CIFAR10), 0.4758 (SVHN), and 0.4753 (Tiny-ImageNet). Although the EU estimates of DEs-5 and CreDEs-5 are not comparable in value due to their different representations, CreDEs-5 exhibits much higher EU estimates on OOD samples than on ID data and, qualitatively, more reliable predictions on ID samples. We could not report $Var_i[\mu_i(x)]$ in our case because the calculation is for models with a custom softmax layer, according to Eq. (6-8) in [10]. **Response to Q3**: In our setting, corrupted CIFAR10 samples are detected as OOD data and abstained on. Therefore, we did not report accuracy and ECE values on CIFAR10-C. For completeness, we reported the numbers in Table 3 of the rebuttal file, showing the visibly improved performance of CreDEs compared to DEs. **Response to Q4**: $K$ is used to reduce the complexity of uncertainty estimation. We do not employ Alg. 2 for accuracy and ECE calculation. The difference is most likely because the training settings are significantly different.
As the ImageNet experiments are conducted fairly for DEs and CreDEs (shown in Appendix C), we believe the results properly reflect the performance improvements. **Response to Q5**: We randomly selected five members from the 15 models to construct each ensemble. The member list of each ensemble was strictly different. Could you please specify the setting that you are curious about? We are eager to continue the discussion. --- Rebuttal Comment 1.1: Title: Clarification for Question 5 Comment: Thanks for your response. Regarding Q5, I am curious about the ensemble setting with the "product of experts" [1], not the "mixture of experts" setting that is currently used in your paper. References: [1] Hinton, G.E., Products of experts, 9th International Conference on Artificial Neural Networks, 1999. --- Reply to Comment 1.1.1: Title: Further discussion on Question 5 Comment: Dear Reviewer, Thank you for your engagement in the discussion. Following your suggestion, we conducted an ablation study over deep ensembles using the product-of-experts strategy, denoted as DEs-5 (POE). Table 1 reports the test accuracy (ACC) and ECE values on the CIFAR10 dataset of different models across ResNet50, VGG16, and ViT Base architectures. Table 1. Test accuracy (ACC) (%) and ECE comparison.
| | ResNet50: ACC | ResNet50: ECE | VGG16: ACC | VGG16: ECE | ViT Base: ACC | ViT Base: ECE |
|:--------------------------:|:--------------:|:-----------------:|:--------------:|:-----------------:|:--------------:|:-----------------:|
| CreDEs-5 ($\hat{i}_{max}$) | **93.74±0.11** | **0.0109±0.0017** | **87.92±0.11** | **0.0611±0.0012** | **93.59±0.39** | **0.0104±0.0012** |
| CreDEs-5 ($\hat{i}_{min}$) | **93.75±0.11** | **0.0092±0.0016** | **87.94±0.11** | **0.0203±0.0014** | **93.60±0.40** | **0.0107±0.0014** |
| DEs-5 | 93.32±0.13 | 0.0131±0.0010 | 85.53±0.10 | 0.0815±0.0011 | 90.43±0.97 | 0.0181±0.0019 |
| DEs-5 (POE) | 93.47±0.11 | 0.0610±0.0011 | 85.55±0.08 | 0.1368±0.0008 | 90.56±0.90 | 0.0894±0.0087 |

*Table 1 shows that DEs-5 (POE) can improve the test accuracy of DEs-5 but significantly degrades its calibration performance (larger ECE values). In these comparisons, our CreDEs-5 is the strongest method.* In addition, we also present the reliability diagram in table format in Table 2. Table 2 conveys the same results for CreDEs-5 and DEs-5 as Figure 3 of the rebuttal file. Table 2. Test accuracy (ACC) of the ResNet50-based models in each confidence bin of the reliability diagram.
| Confidence Bin | [0.0, 0.1] | [0.1, 0.2] | [0.2, 0.3] | [0.3, 0.4] | [0.4, 0.5] | [0.5, 0.6] | [0.6, 0.7] | [0.7, 0.8] | [0.8, 0.9] | [0.9, 1.0] |
|:--------------------------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|:----------:|
| CreDEs-5 ($\hat{i}_{max}$) | 0.0 | 0.0 | 0.3333 | 0.1951 | 0.4206 | 0.5492 | 0.6361 | 0.7532 | 0.8849 | 0.9863 |
| CreDEs-5 ($\hat{i}_{min}$) | 0.0 | 0.0 | 0.3333 | 0.2759 | 0.4688 | 0.5701 | 0.6444 | 0.7673 | 0.8899 | 0.9872 |
| DEs-5 | 0.0 | 0.0 | 0.5 | 0.2857 | 0.4031 | 0.5035 | 0.6569 | 0.7092 | 0.8583 | 0.9839 |
| DEs-5 (POE) | 0.0 | 0.0 | 0.0 | 0.0 | 0.0 | 0.3 | 0.3947 | 0.5 | 0.3529 | 0.9396 |

*Table 2 demonstrates the worse performance of DEs-5 (POE) from the reliability perspective. Our model performs best out of the three.* Furthermore, we also study the uncertainty estimation performance via OOD detection. Here, we computed the entropy of the final prediction of DEs-5 (POE) as the total uncertainty. In the cases of CreDEs-5 and DEs-5, we use the upper entropy $\overline{H}(\mathbb{Q})$ and $H(\tilde{\boldsymbol{q}})$ as in our paper, respectively. *The results in Tables 3, 4, and 5 consistently show the outperformance of our method.*

Table 3: OOD detection performance comparison (%) on the ResNet50 architecture.

| | CIFAR10 vs. SVHN | CIFAR10 vs. SVHN | CIFAR10 vs. Tiny-ImageNet | CIFAR10 vs. Tiny-ImageNet |
|:-----------:|:----------------:|:----------------:|:------------------------:|:------------------------:|
| | AUROC | AUPRC | AUROC | AUPRC |
| CreDEs-5 | **95.71±0.42** | **97.73±0.27** | **89.02±0.10** | **88.02±0.15** |
| DEs-5 | 94.80±0.43 | 97.26±0.29 | 88.80±0.19 | 87.21±0.29 |
| DEs-5 (POE) | 93.90±0.24 | 96.10±0.21 | 88.03±0.20 | 84.11±0.32 |

Table 4: OOD detection performance comparison (%) on the VGG16 architecture.

| | CIFAR10 vs. SVHN | CIFAR10 vs. SVHN | CIFAR10 vs. Tiny-ImageNet | CIFAR10 vs. Tiny-ImageNet |
|:-----------:|:----------------:|:----------------:|:------------------------:|:------------------------:|
| | AUROC | AUPRC | AUROC | AUPRC |
| CreDEs-5 | **87.05±0.80** | **93.36±0.42** | **82.14±0.14** | **80.81±0.16** |
| DEs-5 | 84.50±0.49 | 90.78±0.35 | 79.40±0.10 | 75.91±0.14 |
| DEs-5 (POE) | 84.10±0.22 | 89.83±0.16 | 78.11±0.08 | 72.23±0.16 |

--- Rebuttal 2: Title: Further responses to remaining questions Comment: Dear Reviewer, We express our sincere appreciation for your recognition of our additional results and your effort in reviewing our paper. Please allow us to address your remaining concerns. **Lack of a generalization bound or robustness certificate from a theoretical perspective** Thank you for clarifying your question. It is true that CreDEs currently do not provide coverage guarantees/robustness certificates. However, the extensive experiments across different datasets, architectures, and tasks do empirically show that our CreDEs generally achieve consistent improvements over DEs. We have acknowledged the lack of theoretical guarantees for our CreDEs in the conclusion (lines 349-350) and detailed our future research plan in Appendix E.2. Incorporating your comments, we will expand our emphasis on this point in the main body of the revised paper. **Computational inefficiency compared to other baselines** *Regarding more hardware settings to evaluate the inference cost.* Following your comments, we report the inference cost measured on a single AMD EPYC 7643 48-core CPU in Table R1. From the results in Table R1, we did not observe any significant overhead. In addition, we can also observe that using VGG16 (a lighter model compared to ResNet50) visibly reduces the inference cost of DEs and CreDEs. Table R1: Inference cost comparison between SNN and CreNet per single CIFAR10 input for different architectures.
| | VGG16 (ms) | ResNet50 (ms) |
|:------:|:----------:|:-------------:|
| SNN | 19.2 ± 3.8 | 148.2 ± 49.0 |
| CreNet | 23.1 ± 5.2 | 163.3 ± 39.4 |

Since our main evaluation scope focuses on image classification with different deep neural network architectures across various tasks, high-performance GPUs are a favorable choice for implementation. We appreciate your consideration of the complexity of applying our models on smaller hardware such as mobile and IoT devices. To our knowledge, in the context of smaller hardware, lightweight neural networks are generally used, and there are also techniques such as quantization to further improve computational efficiency. We believe that these factors, as well as an optimized code implementation of our CreDEs, could help mitigate the added complexity compared to DEs. *If practitioners are already using or planning to use DEs in their applications, we believe it is worth considering our CreDEs to improve performance with an acceptable increase in complexity.*

*Regarding further comparisons.*

We fully agree that there are uncertainty-aware models that are more computationally efficient than DEs and CreDEs. As you rightly pointed out, achieving comparable or slightly lower performance than DEs while being computationally cheaper is the main objective of this line of research. However, since DEs are strong, active baselines [1, 2] that are widely applied in practice [3, 4, 5], we aim to improve the performance of DEs (e.g., the quality of uncertainty estimation) to benefit safety-critical applications in particular. Following your comments, we tried to implement Rank-1 BNNs in our environment to fairly compare the performance of Rank-1 BNNs and CreDEs. Implementing Rank-1 BNNs requires installing a package of Edward2's Bayesian Layers from their official GitHub repository.
Unfortunately, although we followed the instructions, we could not import the edward2 package correctly due to dependency conflicts, software version incompatibility, and limited time. The same unresolved issues have been reported by others on the official GitHub repository. Nevertheless, from the main results of Tables 1-4 in the original Rank-1 BNNs paper, we can observe that Rank-1 BNNs achieve comparable or slightly lower performance than DE baselines while being computationally efficient. Combined with the observations in our paper that CreDEs generally outperform DEs, we believe there is a high probability that our CreDEs could achieve better performance than Rank-1 BNNs. The practical message we can deliver to the community is that our CreDEs have a high potential to improve the performance of DEs in real-world applications with an acceptable increase in complexity. If DEs are already out of consideration in practice due to computational constraints, our CreDEs would indeed not be an ideal option. As promised, we will extend our original discussion on the computational limitations of CreDEs (lines 347-348) in the revised paper.

[1] Benchmarking uncertainty disentanglement: Specialized uncertainties for specialized tasks. 2024.
[2] Deep deterministic uncertainty: A new simple baseline. CVPR, 2023.
[3] DE-TGN: Uncertainty-aware human motion forecasting using deep ensembles. IEEE Robotics and Automation Letters, 2024.
[4] Flood uncertainty estimation using deep ensembles. Water, 2022.
[5] Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise. Medical Image Analysis, 2023.
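As a side note on Table R1, per-input CPU latency of this kind can be measured with a small timing harness along the following lines (a generic sketch, not our actual benchmarking code; the timed callable is a stand-in for a model's single-input forward pass):

```python
import statistics
import time

def latency_ms(fn, x, warmup=5, runs=30):
    """Mean and standard deviation of per-call latency of fn(x), in milliseconds."""
    for _ in range(warmup):          # warm-up calls, excluded from timing
        fn(x)
    samples = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn(x)
        samples.append((time.perf_counter() - t0) * 1e3)
    return statistics.mean(samples), statistics.stdev(samples)

# Stand-in workload in place of a real single-input forward pass.
mean_ms, std_ms = latency_ms(lambda n: sum(i * i for i in range(n)), 50_000)
```

Warm-up iterations matter in practice, since the first calls often include cache and allocator effects that would otherwise inflate the reported mean.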
---

Rebuttal 3:

Title: Further responses for remaining questions

Comment: **hyperparameter $\delta$**

Throughout the main experimental evaluations (except the ablation study on $\delta$), we set the default $\delta=0.5$ to reflect a balanced assessment of the train-test divergence and show how such a value allows our model to outperform the baselines. With this setting of $\delta$, the extensive comparisons consistently show that our CreDEs outperform DEs. The ablation study on the hyperparameter $\delta$ (lines 287-312) verifies the robustness of CreDEs across hyperparameter setups in terms of test accuracy, ECE, and OOD detection performance, which remain consistently better than those of DEs. For ease of reference, we copy the results of Table 4 of the paper into Table R2 below.

Table R2: Performance comparison of CreDEs trained on the CIFAR10 dataset under different settings of $\delta$.

| | | 0.5 | 0.625 | 0.75 | 0.875 | 0.9375 | 0.96875 |
|:-------------------:|:---------------:|:------:|:------:|:------:|:------:|:------:|:-------:|
| Test ACC (%) | $\hat{i}_{max}$ | 93.74 | 94.54 | 94.47 | 94.57 | 93.88 | 93.99 |
| | $\hat{i}_{min}$ | 93.75 | 94.55 | 94.47 | 94.56 | 93.87 | 93.99 |
| ECE Values | $\hat{i}_{max}$ | 0.0092 | 0.0059 | 0.0053 | 0.0056 | 0.0087 | 0.0086 |
| | $\hat{i}_{min}$ | 0.0109 | 0.0087 | 0.0089 | 0.0093 | 0.0087 | 0.0087 |
| SVHN (OOD) | AUROC | 97.44 | 97.44 | 97.92 | 97.95 | 97.42 | 97.51 |
| | AUPRC | 93.07 | 96.34 | 97.00 | 96.92 | 98.79 | 98.82 |
| Tiny-ImageNet (OOD) | AUROC | 88.28 | 89.01 | 89.10 | 89.18 | 89.95 | 89.24 |
| | AUPRC | 88.13 | 89.81 | 89.76 | 89.72 | 89.18 | 89.26 |

Table 5 of the paper reports the averaged EU estimate under different settings of $\delta$. You are correct, we can see significant differences between $\delta=0.5$ and $\delta=0.96875$ (the EU estimates using $\delta=0.96875$ are much smaller than those using $\delta=0.5$).
From Table 5, we can also observe that increasing the value of $\delta$ (i.e., giving less importance to the divergence between test and training distributions) leads to a decreasing trend in the average EU estimates per dataset (particularly for ID CIFAR10 samples). This aligns with the intuition that, if the model is more uncertain about the divergence of the distributions (smaller $\delta$), it should express a larger EU. Despite smaller uncertainty values at high $\delta$'s, the difference between in-domain (ID) and OOD samples remains noticeable. This explains why a $\delta$ closer to 1 does not always lead to poor OOD detection performance and why our model's OOD detection performance is robust to the choice of $\delta$. The ablation study also suggests that the performance of CreDEs could be further improved by finding the 'best' $\delta$. One possible way is to conduct standard cross-validation on specific test scenarios. Considering that the performance of CreDEs is robust to $\delta$, an interesting alternative, in the presence of multiple datasets (e.g., acquired over time in a continual learning setting), could be to apply the DRO loss component to different portions of the training set and assess the results to robustly select $\delta$ (lines 294-300).

**Regarding the trade-off parameter $K$**

The parameter $K$ represents a trade-off between uncertainty estimation quality and computational efficiency. As demonstrated in the ablation study in Appendix B.2, $K \leq 10$ is a practical range for measuring epistemic uncertainty using the Generalized Hartley (GH) measure.
As shown in Tables 7 and 8 and Figure 7 of Appendix B.2, applying $K=4$ for OOD detection involving the CIFAR10 dataset (reducing the dimension from 10 to 4 classes and costing 0.02 ms per input sample) and $K=10$ for OOD detection involving the CIFAR100 dataset (reducing the dimension from 100 to 10 classes and costing 17 ms per input sample) ensures that CreDEs outperform DEs. Choosing $K$ for computing the upper and lower entropy measures is more flexible. For instance, in Tables 2 and 7 and Figure 8 of the paper, $K=20$ for the ImageNet experiments (reducing the original 1,000 classes to 20 and costing 6.1 ms per sample) shows improved OOD detection performance compared to DEs.

We appreciate your efforts in reviewing our work and your active engagement in this discussion. We have benefited from the process. We hope that the increased clarity of our responses has addressed your concerns.

---

Rebuttal Comment 3.1:

Comment: Thank you for your explanation. I also thank the authors for clarifying the ablation study of the hyperparameter $\delta$ regarding test accuracy. Most of my concerns are addressed. Because of the theoretical contribution and computational limitations, I updated my final score to 5. Good luck!

---

Reply to Comment 3.1.1:

Comment: Dear Reviewer, Thank you for your positive feedback. We appreciate the time and effort you have dedicated to reviewing our paper and the discussions!
Summary: The paper proposes credal neural networks, which output probability intervals for each class as opposed to a single probability estimate. The authors also propose to use ensembles of these models, averaging the outputs of the members. Their ensembles of credal neural networks show higher performance than traditional ensembles in various experiments, from epistemic uncertainty estimation (OOD detection) to standard accuracy on ImageNet.

Strengths: The method is benchmarked across an impressive range of tasks and datasets in both the main paper and appendix and shows consistent improvements over traditional deep ensembles. The method has many hidden details, e.g., trivial solutions it could collapse to. The paper uncovers many of these potential problems and explains how they are solved by neat implementation tweaks. These tweaks can be relevant to other credal approaches. Reproducible code is provided, along with pseudo code in the paper.

Weaknesses: In all experiments, there is only one baseline, namely traditional deep ensembles. I would encourage the authors to pull results for the standard ensemble with DRO loss (called DEs*-5) from the appendix to the main paper and to also implement other methods, like Mahalanobis (https://arxiv.org/abs/1807.03888), which recently showed SOTA performance on EU estimation (https://arxiv.org/pdf/2402.19460). The resampling setup is internally correlated, as the authors construct their 15 ensembles by drawing 5 members out of 15 trained models repeatedly. The 15 ensembles thus overlap, which will likely lead to underestimated error bars in the results.

Technical Quality: 4
Clarity: 4

Questions for Authors: To calculate accuracy, did you use i_max or i_min? Do you have some insights into how big the credal intervals usually are, etc.? I’m asking because I want to rule out the alternate hypothesis that your method is producing almost-Dirac credal sets.
I understand your motivation for applying the vanilla cross-entropy loss to the upper-bound estimate of the credal set $q_U$ as opposed to the midpoint $m$, but do you have ablations quantitatively supporting this choice? Do the computational complexity studies in Table 6 include the constrained optimization search for maximum/minimum entropy? In Appendix C it seems like this could be a burden, where training a single credal ensemble member takes five times as long as training a standard cross-entropy loss member. There’s a small typo in line 284, “AUORC”.

Confidence: 4
Soundness: 4
Presentation: 4
Contribution: 4

Limitations: The authors acknowledge that the runtime and RAM costs for ensembles are a hindering aspect of their work.

Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Dear Reviewer, We sincerely appreciate your high recognition of our work and your valuable feedback and suggestions. Below we address your concerns.

**Regarding W1**

**Response:** We fully agree that moving the additional results of DEs$^*$-5 (standard ensemble with DRO loss) from the appendix to the main body is a better presentation. Following your suggestion, we include the MC Dropout model in Table 2 of the rebuttal file, which is claimed to be a good choice for uncertainty quantification across the board in the evaluation of [A]. The additional comparison further demonstrates that our CreDEs outperform these baselines. We did not report results for the Mahalanobis method in the rebuttal because it is a distance-based method for uncertainty quantification and was outside the original scope, which was to address uncertainty quantification using "second-order" representations. However, we will properly extend the above discussion in the limitations section of the revised paper. For a more detailed response to the baseline choice, we invite you to read our global rebuttal (part 2).

**Regarding W2**

**Response:** In our experiments, we randomly selected five members from the 15 trained models to construct each ensemble. The member lists of the ensembles were strictly guaranteed to be different. Therefore, we believe that this setting reasonably reflects the error bars of the results. Based on your comments, we will highlight this setting to improve clarity.

**Regarding Q1**

**Response:** Yes, we reported the accuracy of the models using $i_{max}$ and $i_{min}$ in Table 1 and Table 3 of the paper. Both show improved test accuracy.

**Regarding Q2**

**Response:** Due to the high dimensionality, it is hard to visualize and compute the size of the credal set when there are $C>3$ elements.
However, we can indirectly judge whether a generated credal set resembles a Dirac credal set by calculating the maximum reachable upper-bound probability for each sample, namely $\max{(q^*_{U_1}, ..., q^*_{U_C})}$. The closer this value is to 1, the closer the set is to an almost-Dirac credal set. We show the results of ResNet50-based CreDEs-5 on the CIFAR10, SVHN, and Tiny-ImageNet datasets in *Figure 1 of the rebuttal file*. We can conclude that our method does not always generate almost-Dirac credal sets, especially on OOD samples. For CIFAR10 samples, there are many quasi-Dirac credal sets (but not all), which is reasonable considering the high test accuracy and low ECE of CreDEs (shown in Table 1 of the paper).

**Regarding Q3**

**Response:** If we first calculate the probability vector based on the midpoint $\boldsymbol{m}$ and then calculate the cross-entropy loss, we observed that the model could not reasonably learn to generate the half-length $\boldsymbol{h}$ that defines the probability interval, as $\boldsymbol{h}$ is not incorporated in the training. Alternatively, we could not use the midpoint of the generated probability intervals, $(\boldsymbol{q}_U+\boldsymbol{q}_L)/2$, for the cross-entropy calculation, as it is not normalized. In Appendix E.1, we discuss another sensible way of generalizing cross-entropy to lower/upper probabilities as future exploration.

**Regarding Q4**

**Response:** No, the inference cost in Table 6 does not include the uncertainty calculation cost. We reported the cost of calculating the generalized Hartley (GH) measure and the upper entropy in Figures 7 and 8 of the paper. For instance, the time cost of the GH calculation for CIFAR10 without approximation is 17 ms (0.02 ms in the reduced case considering 4 out of 10 classes), while calculating EU in deep ensembles for CIFAR10 takes $4.1\times 10^{-4}$ ms, measured on the same single CPU.
Though higher, this cost keeps CreDEs practical in settings without hard computational constraints. Besides, the numbers reported are without code-efficiency optimization: a more efficient code implementation could significantly reduce the cost. Regarding the training complexity, CreNet uses a custom training loop, unlike the standardized TensorFlow training of standard neural networks, which precludes a fair comparison. Given the evidence that CreNets only marginally increase the inference time (single forward pass) in Table 6, we are optimistic that by standardizing and optimizing the custom training loop and adopting a more efficient code implementation of Algorithm 1, the training effort could be significantly reduced.

**Regarding Q5**

**Response:** Thank you for pointing out the typo. We will revise it accordingly.

[A] Benchmarking uncertainty disentanglement: Specialized uncertainties for specialized tasks. 2024.

---

Rebuttal Comment 1.1:

Comment: Thank you for your responses, especially Figure 1 of the rebuttal PDF. Given that this will be included in the revised paper and that the runtime analysis may get an improvement (like testing the traditional training in your custom training loop, for a fair comparison), I remain with my score of 7.

---

Reply to Comment 1.1.1:

Comment: Dear Reviewer, We are grateful for your efforts and valuable comments and suggestions in this review process. We promise to include the additional results (e.g., Figure 1 of the PDF) as suggested by you in the revised paper. Once again, we thank you for recognizing the quality of our work and supporting the acceptance of our paper.
Summary: This paper introduces Credal Deep Ensembles, which are ensembles of Credal-Set Neural Networks designed to predict lower and upper probability bounds for each class, representing epistemic uncertainty.

Strengths: Novel approach to uncertainty quantification in deep learning. Empirically, the proposed approach seems to perform not too badly, though I am not entirely sure about the interpretation of the results (a lot of the results seem to indicate only marginal improvement).

Weaknesses: Empirical results are not very convincing, since the improvements (if there are any) seem to be of a marginal nature. While the paper introduces novel methods and validates them empirically, it could benefit from a deeper theoretical analysis of why the proposed methods improve uncertainty quantification. E.g., the paper introduces novel components like the Interval SoftMax function and the application of DRO, but it lacks a detailed theoretical analysis that justifies their effectiveness. Lack of baseline comparisons (some model classes were excluded).

Technical Quality: 2
Clarity: 3

Questions for Authors: Why were Dirichlet-based approaches and other recent uncertainty quantification methods excluded from the comparison? (See also the comment on baseline comparisons.) Please elaborate on the loss function (10). Why is this a sensible choice? It is not immediately clear to me from the corresponding explanations in the paper. How does the Interval SoftMax function handle cases where the intervals for different classes overlap significantly? What impact does this have on the final predictions and uncertainty estimates? Instead of learning upper/lower probabilities, what is the difference compared to obtaining the upper/lower probabilities, e.g., from an ensemble of NNs? How do you guarantee that the computation of the lower and upper probabilities within the credal set is scalable for high-dimensional classification tasks?
Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: There is a very short discussion of potential limitations at the end of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Dear Reviewer, We appreciate your efforts in reviewing our work and your recognition of our novelty. Below we address your concerns.

**Response to W1:** We believe that we have provided a comprehensive evaluation of our approach, which consistently shows significantly improved performance, uncertainty quality, OOD detection, and model calibration in several popular and important setups. For instance, in Table 3 of our paper, we can observe OOD detection improvements of about 5% and 11% for CIFAR10 vs SVHN (AUROC) on VGG16 and ViT base architectures, respectively. Such visible improvements can be consistently observed in, for example, Table 2, Figures 5 and 6, and Tables 8-12 of the paper.

**Response to W2 and Q2:** We value the opportunity to provide a more rigorous argument for the theoretical analysis and loss design. Epistemic uncertainty in predictions is induced by the fact that the probability distribution(s) that generated the training data may be very different from the distribution that generates a test point. It is well known that using CE as a loss leads to the maximum likelihood estimator of the model parameters, under the assumption that all training points are drawn i.i.d. from a single, unknown distribution $P$:
$$
\hat{\theta} = \arg \min_\theta \text{CE} = \theta_{MLE}.
$$
What does Distributionally Robust Optimization (DRO) do? It takes a cautious stance and assumes training points are generated by one of a set of potential distributions $\mathcal{U}$, and minimizes the upper bound of the expected risk (Eq. (6) of the paper). In practice, this is approximated using Adversarially Reweighted Learning as in Eq. (9). Therefore, it learns
$$
\hat{\theta} = \arg \min_\theta \sup_{P \in \mathcal{U}} \mathbb{E}_P[\text{Loss}].
$$
However, if we apply DRO to a traditional network, the network will learn a single parameter value (Eq. (9) of the paper); for a new test data point $x_t$ it will then produce a single predicted probability vector, the one produced by the model that, in the long run, minimizes the worst-case risk. This, though, does not express at all the (epistemic) uncertainty about *which* of the distributions in $\mathcal{U}$ has generated the particular test data point $x_t$; as a result, it does not express how uncertain the prediction may be for that particular data point. Allowing a network to output lower and upper probability vectors for each input, instead, allows this epistemic uncertainty to be explicitly modeled.

What is the best-case scenario? That in which all data (including the test point $x_t$) are indeed generated by the same distribution – in this case, $\theta_{MLE}$ is the best choice, in a maximum likelihood sense. What is the worst case? That in which $x_t$ is generated by a distribution that is far away from the groups of distributions $P_g$ (see lines 140-141) that generated the training data. In this case, the DRO solution will still be far from perfect, but is still the best one can do given the evidence in the training data.

Therefore, if we minimize the CE against $\boldsymbol{q}_U$ (the first component of Eq. (10)), we get a credal set $\mathbb{Q}$ of predictions whose upper probability is such that, for each class $i = 1,\ldots,C$, there exists a probability vector $\boldsymbol{q}\in \mathbb{Q}$ such that $q(i)$ is the MLE prediction for that class (please recall the discussion of lines 172-175). Similarly, if we minimize the DRO loss against $\boldsymbol{q}_L$ (the second component of Eq. (10)), for each class $i = 1,\ldots,C$, there exists a probability vector $\boldsymbol{q} \in \mathbb{Q}$ such that $q(i)$ is the DRO prediction for that class. Minimizing the two components together, as in Eq. (10), thus encourages the predicted credal set to extend from the best-case to the worst-case prediction.
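As an illustration only (not the paper's actual Adversarially Reweighted Learning procedure), the two-component objective described above can be sketched in NumPy as a vanilla cross-entropy on the upper probabilities plus a worst-case-weighted cross-entropy on the lower probabilities; here the DRO term is approximated by exponentially up-weighting high-loss samples (exponential tilting), and all arrays are toy placeholders:

```python
import numpy as np

def per_sample_ce(probs, labels, eps=1e-12):
    # Per-sample cross-entropy -log q(y) on the (upper or lower) probability vectors.
    return -np.log(probs[np.arange(len(labels)), labels] + eps)

def credal_loss(q_upper, q_lower, labels, temperature=1.0):
    # Optimistic component: vanilla CE against the upper probabilities q_U.
    ce_up = per_sample_ce(q_upper, labels)
    # Pessimistic component: CE against the lower probabilities q_L, reweighted
    # toward high-loss samples; exponential tilting is a simple stand-in for
    # the adversarial reweighting used to approximate the DRO objective.
    ce_low = per_sample_ce(q_lower, labels)
    w = np.exp(ce_low / temperature)
    w = w / w.sum()
    return ce_up.mean() + float(np.dot(w, ce_low))

# Toy batch: 3 samples, 3 classes; each upper bound dominates its lower bound.
q_u = np.array([[0.7, 0.5, 0.4], [0.6, 0.7, 0.3], [0.5, 0.4, 0.6]])
q_l = np.array([[0.3, 0.1, 0.1], [0.2, 0.3, 0.1], [0.1, 0.1, 0.3]])
y = np.array([0, 1, 2])
loss = credal_loss(q_u, q_l, y)  # a positive scalar
```

Because the weights increase with the per-sample loss, the reweighted term is never smaller than the plain average of the lower-bound losses, capturing the pessimistic (worst-case) flavor of the DRO component.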
**Response to W3 and Q1:** We kindly invite you to read the detailed response on the comparison baselines in our global rebuttal (Part 2).

**Response to Q3:** Let us consider a numerical example in a three-element case. If we have three intervals such as $[1.1, 3.1]$, $[1, 3]$, and $[1, 2]$ as the inputs of the Interval SoftMax, the Interval SoftMax returns the following probability intervals as the outcomes: $[0.15612615, 0.71893494]$, $[0.12732467, 0.70053512]$, $[0.09810377, 0.43282862]$. We can observe that the Interval SoftMax generates overlapping probability intervals that correctly reflect the overlap in the input. In extreme cases, if the probability intervals over all classes overlap significantly, the resulting credal set could include the uniform distribution. However, this does not prevent us from using the maximin and maximax criteria in Eq. (11) to derive a class prediction according to the conservativeness of the decision maker. Uncertainty estimation methods for credal sets also remain valid.

**Response to Q4:** The main difference is that getting upper/lower probabilities, e.g., from an ensemble of NNs, would be a post-processing procedure only in the prediction space; no training process is involved. In our CreDEs, the model learns the probability intervals from data, which encourages the predicted credal set to extend from the best-case to the worst-case prediction.

**Response to Q5:** The Interval SoftMax (Eq. (4)) ensures a valid credal set regardless of the dimensionality of the task. To reduce the computational complexity for high-dimensional classification, we apply Alg. 2 to reduce the dimension from a high $C$ to $K$. Alg. 2 is grounded in the theory of probability intervals [17] (Eq. (12) of the paper) and rigorously guarantees a valid credal set.
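The validity claim can also be checked numerically: a set of probability intervals defines a non-empty credal set when the lower bounds sum to at most 1 and the upper bounds to at least 1, and the standard reachability conditions for probability intervals (cf. [17]) ensure every bound is attained. A minimal sketch applying these checks to the Interval SoftMax outputs quoted above (the check functions are our own illustration, not code from the paper):

```python
def is_proper(lowers, uppers, tol=1e-9):
    # Non-empty credal set: sum of lower bounds <= 1 <= sum of upper bounds.
    return sum(lowers) <= 1 + tol and sum(uppers) >= 1 - tol

def is_reachable(lowers, uppers, tol=1e-9):
    # Reachability: each bound is attained by some distribution in the credal
    # set, i.e. u_k + sum_{j != k} l_j <= 1 and l_k + sum_{j != k} u_j >= 1.
    total_l, total_u = sum(lowers), sum(uppers)
    for l_k, u_k in zip(lowers, uppers):
        if u_k + (total_l - l_k) > 1 + tol:
            return False
        if l_k + (total_u - u_k) < 1 - tol:
            return False
    return True

# The probability intervals returned by Interval SoftMax in the example above.
lowers = [0.15612615, 0.12732467, 0.09810377]
uppers = [0.71893494, 0.70053512, 0.43282862]
valid = is_proper(lowers, uppers) and is_reachable(lowers, uppers)  # True
```

Running the checks on the example intervals confirms they define a proper, reachable credal set.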
In addition, the ablation study in Appendix B.2 and the OOD detection cases for ImageNet in Table 1 show the effectiveness of the reduction in uncertainty quantification of credal sets. **Response to L1:** Following your comments, we will duly extend the discussion on the computational complexity and comparison baselines in the revised version, as we discussed in the global rebuttal. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I thank the authors for their effort responding to my questions. > Epistemic uncertainty in predictions is induced by the probability distribution(s) that generated the training data may be very different from the distribution that generates a test point. Can you please elaborate on this comment? If the test data is generated from a distribution different from the training data, the model may be uncertain because it hasn't been trained on similar data. While this is true in the OoD case, it's not the only factor contributing to epistemic uncertainty. If I am not wrong, the motivation for this paper is not only the OoD case, correct? > Minimizing the two components together, as in Eq. (10), thus encourages the predicted credal set to extend from the best-case to the worst-case prediction. This does not fully address my question. For clarification, let me ask a follow-up question, which will help me understand why Eq. (10) is a sensible choice: Why does Eq. (10) give incentive to learn suitable interval predictions? Are we not minimizing the first term in the sum in (10) by setting all upper probabilities to 1? > Response to Q4: The main difference is that getting upper/lower probabilities e.g. from an ensemble of NNs would be a post-processing procedure only in the prediction space, no training process is involved. In our CreDEs, the model learns the probability interval from data and could encourage the predicted credal set to extend from the best-case to the worst-case prediction. 
I apologize if my question was ambiguously stated. I see your point, that your approach involves a learning process instead of “post-processing” of the ensemble predictions. The question would be, can’t we achieve similarly good results with this naïve post-processing approach? At least it would be interesting to look at. I remain skeptical about the loss itself, but given the results in the global rebuttal and taking into account the responses to my questions, I increased my initial score.

---

Rebuttal 2:

Title: Further response to the remaining questions

Comment: Dear reviewer, We deeply appreciate your recognition of our response and your engagement in the discussion with us. In the following, we address your remaining questions.

**Interpretation of the epistemic uncertainty**

In supervised learning, we assume an underlying data-generating process in the form of an unknown probability distribution $P$ on the input space $\mathbb{X}$ and the target space $\mathbb{Y}$. The lack of knowledge regarding the underlying process is the source of the epistemic uncertainty [A, B, C]. In an ideal scenario, if the model could learn the exact distribution $P$ from the training data, there would be no epistemic uncertainty about a prediction given a new input. Nevertheless, in practice, given that the training dataset represents only a subset of the full space and that the test data may differ from the training data, it is not possible to guarantee that the distribution $\hat{P}$ learned from the training data will be identical to the true distribution $P$. Therefore, epistemic uncertainty is induced. In our paper, we proposed CreDEs that output lower and upper probability vectors for each input to express this epistemic uncertainty in predictions. OOD data are an extreme example of such train-test divergence, but epistemic uncertainty is not limited to the OOD case.
For example, active learning aims to efficiently train models using minimal data by selectively acquiring additional samples, which are then labeled by experts. In this context, a larger EU indicates that such samples are significantly different from those used in the previous active learning iteration. Using an effective epistemic uncertainty (EU) measure as the acquisition function to obtain samples with a higher EU allows for an efficient active learning procedure [D]. The motivation of our paper is to improve uncertainty estimation using Credal Deep Ensembles and is not limited to the OOD case. We chose OOD detection benchmarks as the main evaluation because they are widely used to assess the quality of epistemic uncertainty estimation. We also performed an active learning case study in Appendix B.5 of the paper, which also shows that our CreDEs provide improved uncertainty quantification.

If we consider only the in-domain samples, effective uncertainty estimation can also improve classification performance through a rejection operation [C, E]. For example, when processing a batch of input data, the samples with the highest epistemic uncertainty are rejected first, and the accuracy of the remaining test samples is then calculated. If the estimated prediction uncertainty is valid, the accuracy increases monotonically with the rejection rate. To illustrate, we show the results of the CreDEs and DEs on the CIFAR100 dataset in Table 1. *The result shows that our CreDEs provide valid EU estimates and better accuracy than DEs.*

Table 1: Test accuracy on the CIFAR100 dataset using EU estimates as rejection metrics under different rejection rates.
| Rejection Rate | 0 | 0.1 | 0.2 | 0.3 | 0.4 | 0.5 | 0.6 | 0.7 | 0.8 | 0.9 | 1.0 |
|:--------------------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:----------------:|:-------------:|
| DEs-5 | 75.80 ± 0.28 | 80.48 ± 0.29 | 84.80 ± 0.29 | 88.37 ± 0.16 | 92.07 ± 0.15 | 95.28 ± 0.16 | 97.44 ± 0.23 | 98.73 ± 0.15 | 99.49 ± 0.10 | 99.87 ± 0.06 | 100.00 ± 0.00 |
| CreDEs-5 ($\hat{i}_{min}$) | **79.54 ± 0.21** | **84.26 ± 0.23** | **87.84 ± 0.18** | **90.90 ± 0.22** | **93.80 ± 0.13** | **96.63 ± 0.15** | **98.32 ± 0.13** | **99.10 ± 0.10** | **99.62 ± 0.15** | **99.90 ± 0.09** | 100.00 ± 0.00 |
| CreDEs-5 ($\hat{i}_{max}$) | **79.65 ± 0.19** | **84.31 ± 0.22** | **87.86 ± 0.18** | **90.90 ± 0.22** | **93.80 ± 0.13** | **96.63 ± 0.15** | **98.32 ± 0.13** | **99.10 ± 0.10** | **99.62 ± 0.15** | **99.90 ± 0.09** | 100.00 ± 0.00 |

[A] A review of uncertainty quantification in deep learning: Techniques, applications and challenges. Information Fusion, 2021.
[B] Aleatoric and epistemic uncertainty in machine learning: An introduction to concepts and methods. Machine Learning, 2021.
[C] Quantification of credal uncertainty in machine learning: A critical analysis and empirical comparison. UAI, 2022.
[D] Deep deterministic uncertainty: A new simple baseline. CVPR, 2023.
[E] Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise. Medical Image Analysis, 2023.

---

Rebuttal 3:

Title: Further response to the remaining questions

Comment: **Ablation study comparing our CreDEs and the post-processing approach**

Thank you for this insightful suggestion. Inspired by your comments, we conducted an ablation study to compare the methods.
Regarding the post-processing approach, we compute the upper and the lower bound per class $k$ from the five single predictions of DEs-5, as follows:
$$
q_{U_k} = \max_{n=1,\ldots,5} q_{n,k}; \qquad q_{L_k} = \min_{n=1,\ldots,5} q_{n,k}.
$$
Then, we form a credal set by post-processing for uncertainty representation. Table 2 reports the performance comparison on the CIFAR10 vs SVHN and Tiny-ImageNet benchmarks.

Table 2: OOD performance comparison on CIFAR10 vs SVHN and Tiny-ImageNet benchmarks.

| | SVHN: AUROC | SVHN: AUPRC | Tiny-ImageNet: AUROC | Tiny-ImageNet: AUPRC |
|:-------------------------:|:-----------:|:-----------:|:--------------------:|:--------------------:|
| DEs-5 | 89.58 | 92.29 | 86.87 | 83.02 |
| DEs-5 (post processing) | 93.77 | 96.06 | 88.78 | 86.83 |
| CreDEs-5 ($\delta=0.5$) | **96.55** | **98.17** | 88.10 | **87.85** |
| CreDEs-5 ($\delta=0.875$) | **97.95** | **96.92** | **89.18** | **89.72** |

*From Table 2, we can observe that the post-processing approach can improve the performance of classical DEs. Our method still performs better than the others. In addition, since the post-processing approach merely re-represents the DEs predictions as credal sets, our CreDEs also achieve better test accuracy and ECE values. For instance, test accuracy: **93.75%** ($\delta=0.5$), **93.87%** ($\delta=0.875$) vs 93.32% (DEs-5), and ECE: **0.0092** ($\delta=0.5$), **0.0093** ($\delta=0.875$) vs 0.0131.*

We deeply appreciate your comments and inspiration; we believe incorporating more extensive comparisons, especially in real-world applications such as medical image analysis, could strengthen our future work.

**Following questions about the loss Eq. (10)**

For a training batch, our CreNet first forward-propagates via the Interval SoftMax activation (Eq. (4) of the paper) to generate meaningful upper and lower probability vectors that define the credal sets (lines 105-166).
Because of the functional structure of the Interval SoftMax activation, the upper and lower probability vectors are not computed independently but are correlated. In backward propagation, we apply the upper and lower probability vectors to the vanilla and DRO components, respectively. Following the rationale of the design choices (as we explained in lines 157-166 of the paper and the previous responses), the learned prediction interval can represent the boundary cases of the credal set by considering the optimistic scenario (Vanilla component) and the pessimistic case (DRO component), respectively. Namely, assuming that the true class of the label is index $j^*$, the Vanilla and DRO components in Eq. (10) minimize the Cross-Entropy losses $-\log_2 q_U(j^*)$ and $-\log_2 q_L(j^*)$, respectively, in a mini-batch optimization process (lines 167-171). Thanks to Interval SoftMax, the trained probability intervals are valid for generating credal sets. In addition, because of the correlated computation between the upper and lower probability vectors in Interval SoftMax, minimizing the DRO component also affects the upper probability, driving the solution away from the all-one upper probability vectors. Once again, thank you very much for your effort and your engagement in this discussion! We hope the increased clarity in our responses has addressed your concerns.
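For concreteness, the post-processing baseline compared in Table 2 above (per-class upper and lower bounds taken over the ensemble members' softmax outputs) can be sketched as follows. This is an illustrative numpy sketch, not the authors' implementation; array shapes and names are assumptions.

```python
import numpy as np

def credal_from_ensemble(probs: np.ndarray):
    """Post-process M ensemble softmax outputs (shape M x C) into per-class
    probability intervals, as in the post-processing baseline:
    q_U[k] = max_n q_{n,k} and q_L[k] = min_n q_{n,k}."""
    q_U = probs.max(axis=0)  # per-class upper bound over members
    q_L = probs.min(axis=0)  # per-class lower bound over members
    return q_L, q_U

# Toy example: 3 ensemble members over 3 classes.
probs = np.array([
    [0.7, 0.2, 0.1],
    [0.5, 0.3, 0.2],
    [0.6, 0.1, 0.3],
])
q_L, q_U = credal_from_ensemble(probs)
# The interval widths (q_U - q_L) give a rough per-class disagreement signal.
```

The resulting intervals define a credal set (a subset of the probability simplex) that can then be scored with interval-based uncertainty measures.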
Summary: In this paper, the authors propose a novel method for uncertainty estimation in deep networks called Credal Deep Ensembles, which combines credal inference and ensembling approaches. During inference, the model predicts intervals (lower and upper probability values) for each class, resulting in a final output that is a simplex subset defined by these intervals. The ensembling technique predicts N such intervals and averages them out. Based on this final prediction, the authors derive several uncertainty measures (both aleatoric and epistemic). The authors validate their method through extensive experiments on several image classification tasks (CIFARs and ImageNets) and various model architectures (ResNets, VGGs, ViTs), demonstrating significant improvements over ensembles in terms of model accuracy, quality of uncertainty estimations, OOD detection, and model calibration. Strengths: * The paper is clearly written and easy to follow. The idea is intuitive and easy to grasp. The related work section provides an adequate discussion of existing approaches to credal learning and nicely describes the proposed method in detail. * The proposed methodology is interesting and provides a novel perspective on second-order approaches as evidential learning by combining credal inference with ensembling techniques. * The authors provided an extensive evaluation of their approach, showing improved performance, uncertainty quality, OOD detection, and model's calibration in several popular and important setups. Weaknesses: * The approach involves too many parameters and procedures for training and inference, making it more complex compared to other methods such as ensembles. This added complexity could limit its practicality (and applicability) and ease of implementation. * The paper does not provide a comparison with second-order methods such as evidential learning, which leaves a gap in the evaluation of the proposed approach. 
Such comparisons could be important since the proposed credal set approach is deeply connected to second-order distribution prediction methods, and including them would strengthen the validation of the proposed approach while offering a more comprehensive assessment of its performance against established techniques. * The output size is doubled, which, for tasks with a high number of classes, could significantly increase the number of weights. This could result in higher computational costs and memory requirements, potentially limiting the scalability and efficiency of the approach. * It is not entirely clear where the improvements over ensembles are coming from or why credal inference brings additional benefits. More explanation is needed to understand the source of these enhancements. * From the perspective of the experimental evaluation, I would be curious to see evidence that the behavior demonstrated in the paper would hold in other domains, such as texts, graphs, and more complicated vision tasks (e.g., segmentation), rather than being limited to image classification. I tend to assess the paper positively and would be happy to increase my score after the discussion. Technical Quality: 3 Clarity: 3 Questions for Authors: * Given the increased complexity and number of parameters in the proposed approach compared to other methods like ensembles, how do the authors justify its practicality and ease of implementation? Are there any strategies to mitigate these complexities (for example, how to choose the $\delta$ parameters for training or the $K$ parameter for the optimized inference)? * I believe that comparisons with and discussions of second-order methods such as evidential learning in the context of credal inference are important. How does the proposed approach perform relative to these techniques? * The paper mentions significant improvements over ensembles but does not clearly explain why credal inference brings additional benefits.
Can the authors provide a more detailed explanation of the source of these improvements? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors provide an adequate discussion of limitations in the last section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear Reviewer, We express our sincere gratitude for acknowledging our work and your valuable feedback. In the following, we address your concerns. **Regarding W4 and Q3: A more detailed explanation of the source of the improvements** **Response:** We would like to make a clearer explanation below. Epistemic uncertainty in predictions arises because the probability distribution(s) that generated the training data may be very different from the distribution that generates a test point. It is well known that using CE as a loss leads to the maximum likelihood estimator of the model parameters, under the assumption that all training points are drawn i.i.d. from a single, unknown distribution $P$: $$ \hat{\theta} = \arg \min_\theta\text{CE} = \theta_{MLE}, $$ where $\hat{\theta}$ denotes the learned parameters. What does Distributionally Robust Optimization (DRO) do? It takes a cautious stance: it assumes training points are generated by one of many potential distributions in a set $\mathcal{U}$, and minimizes the upper bound of the expected risk (Eq. (6) of the paper). In practice, this is approximated using Adversarially Reweighted Learning as in Eq. (9). Therefore, it learns using the following equation: $$\hat{\theta} = \arg \min_\theta \max_{P \in \mathcal{U}} \mathbb{E}_P[\text{Loss}].$$ However, if we apply DRO to a traditional network, the network will learn a single parameter value (Eq. (9) of the paper); for a new test data point $x_t$ it will then produce a single predicted probability vector, the one produced by the model that, in the long run, minimizes the worst-case risk. This, though, does not express at all the (epistemic) uncertainty about *which* of the distributions in $\mathcal{U}$ has generated the particular test datapoint $x_t$; as a result, it does not express how uncertain the prediction may be for that particular data point.
Allowing a network to output lower and upper probability vectors for each input, instead, allows this epistemic uncertainty to be explicitly modelled. What is the best-case scenario? That in which all data (including the test point $x_t$) are indeed generated by the same distribution – in this case, $\theta_{MLE}$ is the best choice, in a maximum likelihood sense. What is the worst case? That in which $x_t$ is generated by a distribution that is far away from the groups of distributions $P_g$ (see lines 140-141) that generated the training data. In this case, the DRO solution will still be far from perfect, but it is still the best one can do, given the evidence in the training data. Therefore, if we minimize the CE versus $\boldsymbol{q}_U$ (first component of Eq. (10)), we get a credal set $\mathbb{Q}$ of predictions whose upper probability is such that, for each class $i = 1,…,C$, there exists a probability vector $\boldsymbol{q}\in \mathbb{Q}$ such that $q(i)$ is the MLE prediction for that class (please remember the discussion of lines 172-175). Similarly, if we minimize the DRO loss versus $\boldsymbol{q}_L$ (the second component of Eq. (10)), for each class $i = 1,…,C$, there exists a probability vector $\boldsymbol{q} \in \mathbb{Q}$ such that $q(i)$ is the DRO prediction for that class. Minimizing the two components together, as in Eq. (10), thus encourages the predicted credal set to extend from the best-case to the worst-case prediction. **Regarding Q1: any strategies to mitigate these complexities ($\delta$ parameters for training or $K$ parameter for the optimized inference)** **Response:** In our main evaluation, we set by default $\delta=0.5$ to reflect a balanced assessment of the train-test divergence and show how such a value allows our model to outperform the baselines. Most importantly, the ablation study on $\delta$ (Lines 287-312) demonstrates that our CreDEs are not particularly sensitive to $\delta$.
One possible way to find the best $\delta$ in practice is to conduct standard cross-validation on specific test scenarios. Looking ahead, an interesting option, in the presence of multiple datasets (e.g., acquired over time in a continual learning setting), could be applying the DRO loss component to different subsets of the training set, and assessing the results to robustly select $\delta$. $K$ is a trade-off parameter between uncertainty estimation and computational efficiency. The ablation study in Appendix B.2 shows that increasing the value of $K$ improves the OOD detection performance, but leads to an increase in execution time. Applying $K \leq 10$ to the Generalized Hartley (GH) measure for EU estimation is more practical. Choosing $K$ for upper and lower entropy measures is more flexible. For instance, in Table 2 of the paper, $K=20$ for the ImageNet dataset (original 1k classes) shows an improved OOD detection performance compared to DEs. **Regarding concerns about the complexity (W1 and W3) and the comparison baselines (W2 and Q2)** **Response:** We kindly invite you to follow the detailed responses in parts 1 and 2 of our global rebuttal, respectively. **Regarding W5: Expecting evidence in other domains** **Response:** We fully agree with your comments that showing more evidence of performance improvement of our CreDEs in other domains can further strengthen our work. Given the consistent performance improvements of CreDEs on a wide range of image classification tasks (which you acknowledged as our strength), we are optimistic about successfully extending our work to other domains. We will include your suggestion in the discussion of our future work and find its proper place within the expanded scope of that work. --- Rebuttal Comment 1.1: Comment: Thank you for thoroughly addressing my concerns in your rebuttal.
I find the proposed method both novel and compelling, especially with the detailed explanations you provided regarding the source of the improvements and the strategies to manage the complexity of the approach. Additionally, your acknowledgment of the potential for extending this work to other domains strengthens the paper's broader applicability. Given these considerations, I find the contributions of this paper to be very interesting and impactful, and I would like to upgrade my score by 1 point. --- Reply to Comment 1.1.1: Comment: Dear Reviewer, Thank you for recognizing the value of our work and supporting the publication of our paper. We are glad that your concerns are fully addressed. Once again, we appreciate your time and effort in reviewing our work and providing valuable comments.
Rebuttal 1: Rebuttal: Dear Reviewers, We appreciate your efforts in reviewing our paper and recognizing the novelty and strengths of our work. In the following, we would like to address your concerns regarding the complexity of our Credal Deep Ensembles (CreDEs) method and the comparison baselines. **Part 1: added complexity compared to classical Deep Ensembles** **Response:** We fully acknowledge the practical limitations of our method when there are strict computational resources (Lines 347-348). However, the extensive experiments show the scalability of our CreDEs on larger datasets (e.g., ImageNet) and larger network architectures (e.g., vision transformer). In addition, CreDEs consistently demonstrate visible improvement in a wide range of tasks and datasets, compared to deep ensembles (DEs). Therefore, CreDEs have great practical potential to enhance the performance of classical DEs in diverse real-world applications, such as medical image analysis [42], flood uncertainty estimation [10], structural health monitoring [70], or uncertainty-aware human motion forecasting [E]. Regarding inference time, doubling the final layer nodes would slightly increase the inference time. For instance, the inference time per sample for a ResNet50 architecture on the ImageNet dataset is 5.5 ms for a single standard neural network, vs 5.7 ms for a single CreNet (a marginal increase). The inference cost on the CIFAR10/100 dataset reported in *Table 1 of the rebuttal file* further demonstrates a slight complexity increase in our method. Regarding the uncertainty estimation cost, we reported the cost of calculating the Generalized Hartley (GH) measure and the upper entropy in Figures 7 and 8 of the paper, respectively. 
For instance, the time cost for GH calculation for CIFAR10 without approximation is 17 ms (0.02 ms in the reduced case considering 4 out of 10 classes) while calculating the Epistemic Uncertainty (EU) in deep ensembles for CIFAR10 takes $4.1\times 10^{-4}$ ms, measured on the same single CPU. Though higher, these costs remain practical in the absence of strict computational constraints. Besides, the numbers reported are without code efficiency optimization: a more efficient code implementation could significantly reduce the cost. Regarding the training complexity, the CreNet uses a custom training loop, unlike the TensorFlow-standardized training of standard neural networks, precluding a fair comparison. Given the evidence that CreNets only marginally increase the inference time (single forward pass) in Table 6, we are optimistic that by standardizing and optimizing the custom training loop and adopting a more efficient code implementation of Algorithm 1, the training effort could be significantly reduced. **Part 2: Comparison limited to DEs** **Response:** We chose DEs for comparison, as they serve as a strong baseline for uncertainty estimation, e.g., shown in studies [A, B, C]. More recently, study [A] shows that DEs are good choices across the board for uncertainty estimation in their evaluation. Through extensive evaluation, we draw our main conclusion: CreDEs show significantly and consistently improved performance, uncertainty quality, OOD detection, and model calibration in several popular and important setups. We greatly appreciate the reviewers' acknowledgment of the extensive evaluation as one of the strengths of our paper. The main reason for excluding Bayesian neural network (BNN) approaches in our original paper is that they generally have difficulty scaling to large datasets and complex network architectures [C].
Inspired by the suggestions, we conducted an additional comparison between CreDEs and DEs, MCDropout [F], and two TensorFlow-standardized BNNs (BNN-R [G] and BNN-F [H]). All the models are trained with a ResNet50 on the CIFAR10 dataset from scratch. The Adam optimizer is applied with a learning rate scheduler, initialized at 0.001. The learning rate is reduced by a factor of 0.1 at epochs 80 and 120. For BNNs, 10 forward passes are used for uncertainty estimation. The uncertainty evaluation via OOD detection on the CIFAR10 vs SVHN/Tiny-ImageNet datasets is reported in *Table 2 of the rebuttal file*. The results consistently demonstrate the significant improvements of our methods. We excluded the Dirichlet-based approaches (lines 69-76) (another branch of second-order models) mainly because they are generally trained to predict Dirichlet distributions from only one-hot label data. A criticism of these methods is the lack of Dirichlet distribution labels in the training process, and the performance of the models often deviates from theoretical EU assumptions [69]. In addition, a recent study [D] showed that epistemic uncertainty is generally not faithfully represented in these methods and the resulting measures of epistemic uncertainty cannot be interpreted quantitatively. We will duly extend the above discussion in the Limitations and Future Work section in the revised version. One of the essential objectives of our future work is to build a universal uncertainty evaluation framework, and comprehensively assess our methods together with various other models. We believe that the kind of broader comparison you suggest will find its right place within the expanded scope of that work. [A] Benchmarking uncertainty disentanglement: Specialized uncertainties for specialized tasks. 2024. [B] Deep ensembles work, but are they necessary? NeurIPS 2022 [C] Deep deterministic uncertainty: A new simple baseline.
CVPR 2023 [D] Is Epistemic Uncertainty Faithfully Represented by Evidential Deep Learning Methods? ICML 2024 [E] DE-TGN: Uncertainty-aware human motion forecasting using deep ensembles. IEEE Robotics and Automation Letters 2024 [F] Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning, ICML 2016 [G] Variational dropout sparsifies deep neural networks. ICML 2017 [H] Flipout: Efficient pseudo-independent weight perturbations on mini-batches. ICLR 2018 Pdf: /pdf/ded344f4ec17d2fad1932432b16fb4dd61122fa1.pdf
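As background for the deep-ensemble and BNN baselines compared in this rebuttal (all of which estimate uncertainty from multiple forward passes), the standard entropy-based decomposition into total, aleatoric, and epistemic uncertainty can be sketched as follows. This is the common DE practice (mutual-information decomposition), not the paper's credal measures; names are illustrative.

```python
import numpy as np

def ensemble_uncertainties(probs, eps=1e-12):
    """probs: (M, C) member softmax outputs for one input.
    Returns (total, aleatoric, epistemic) in bits, where
    epistemic = H(mean prediction) - mean member entropy,
    the standard mutual-information decomposition for ensembles."""
    mean = probs.mean(axis=0)
    total = -(mean * np.log2(mean + eps)).sum()
    aleatoric = -(probs * np.log2(probs + eps)).sum(axis=1).mean()
    return total, aleatoric, total - aleatoric

# Members that agree -> near-zero epistemic uncertainty.
t_agree, a_agree, e_agree = ensemble_uncertainties(
    np.array([[0.9, 0.1], [0.9, 0.1]]))
# Members that disagree -> large epistemic uncertainty.
t_dis, a_dis, e_dis = ensemble_uncertainties(
    np.array([[0.9, 0.1], [0.1, 0.9]]))
```

Thresholding such an epistemic score is one common way to produce the AUROC/AUPRC numbers used in OOD detection comparisons like those above.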
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Linear Causal Bandits: Unknown Graph and Soft Interventions
Accept (poster)
Summary: The paper considers a specific setting of the causal bandits problem where 1) the graph is unknown, 2) the causal model is linear, and 3) the action set consists of $2^{N}$ soft interventions. To tackle the problem, the authors propose an algorithm that first learns the causal structure and then uses UCB-based approaches to find the best action and minimize the regret. The regret of the algorithm is analyzed for both cases: with and without knowledge of the graph. Strengths: 1- The paper considers a novel setting where the causal graph is unknown and the SEM is linear. 2- The flow of the paper is clear, and each part of the proposed algorithm is thoroughly discussed. 3- The upper bound of GA-LCB-ID, when the graph is known, demonstrates an improvement compared to related work. Also, it is close to the minimax lower bound. 4- GA-LCB-ID, in the main case when the causal graph is unknown, achieves a reasonable upper bound. 5- The related work is comprehensively discussed in the introduction. Weaknesses: 1- The algorithm assumes that the causal graph does not contain latent variables. 2- The algorithm requires access to identifiability parameters, which is an unrealistic assumption. 3- The set of interventions is limited: for each node, there is only one soft intervention. Technical Quality: 4 Clarity: 4 Questions for Authors: Can you provide a discussion for the above points? 1- I expected that in the regret bound, the identifiability parameter would appear, similar to previous work in the unknown setting. What is the intuition behind excluding it in this setting? 2- I didn’t understand why you took an expectation in Theorem 2. The theorem should show that with high probability, the regret is less than a certain bound. Why is it necessary to take an expectation? Note that you defined regret with implicit expectation over the randomness of rewards in Equation 5. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment:** The algorithm assumes that the causal graph does not contain latent variables. The algorithm requires access to identifiability parameters, which is an unrealistic assumption. **Response:** We thank the reviewer for the thoughtful comment. In principle, we agree with the reviewer about the importance of relaxing these assumptions. However, we note that these are standard assumptions in the current causal bandit literature (e.g., [14,17,19,22]). The primary reason is that causal bandits are not fully understood even under such assumptions. Causal bandit literature has been evolving towards removing as many assumptions as possible about graph topology and interventional distribution. This is the first paper that has dropped both under general structures and interventional models (with the remaining piece being extending linear SEMs to non-linear SEMs). However, we certainly agree that next, it is also important to dispense with assumptions on latent variables and identifiable parameters. **Comment:** The set of interventions is limited, for each node, we only have one soft intervention. **Response:** Thanks for the note. For presentation clarity, we have focused on the binary intervention model. However, the algorithms can be readily extended beyond binary interventions where we can show that the regret will scale as $\sqrt{KT}$ when $K$ is the number of interventions per node. In the final version we can use the additional one page to provide a brief discussion on the extension to $K$ interventions per node. Importantly, we note that expanding the intervention space will not impose additional costs for identifying the causal structure, as one soft intervention is sufficient. We also note that under $do$ intervention, the regret scales with $\sqrt{KT}$ where $K$ is the number of possible $do$ interventions. 
This means that under a significantly more relaxed intervention model, we can achieve the same regret that $do$ interventions achieve. Questions: **Comment:** I expected in the regret bound, the identifiability parameter would take part similar to previous work in an unknown setting. What is the intuition behind excluding it in this setting? **Response:** We thank the reviewer for this question. If we define $R=(m / \eta)^2$ and incorporate it into the regret, the term $N$ in the regret order will appear as $RN$. We will include this in the updated version. The $RN$ term arises from the step of identifying the ancestor relationship. Notably, a similar term is observed in related literature [19,22], where the term $R$ is coupled with the degree as they only need to identify $pa(N)$. Besides, in previous work, $K$ represents the number of distinct interventions, and we will have similar behavior when generalized to multiple soft interventions. **Comment:** I didn’t understand why you took an expectation in Theorem 2. The theorem should show that with high probability, the regret is less than a certain bound. Why is it necessary to take an expectation? Note that you defined regret with implicit expectation over the randomness of rewards in Equation 5. **Response**: We thank the reviewer for the sharp observation. Having the expectation is a typo, and we will remove it. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I also have another question. How do you compare your algorithm and Corollary 1 with the Naive approaches (e.g., the ucb-based algorithm in classic bandit) on a specific graph like the following graph. X1 -> X2 -> .... -> Xn. Also for all $i$, Xi is the parent of Xn. It's a chain graph where all variables are parent of Xn. --- Rebuttal 2: Comment: We thank the reviewer for this comparison question. A chain graph is a **special** simple graph with a maximum in-degree of $d=1$ and a maximum causal depth of $L=N-1$. 
**Regret Comparison:** For UCB-based algorithms in the classic bandit setting **without knowing** the graph, the regret scales as $\tilde{\mathcal{O}}(\sqrt{|\mathcal{A}| T}) = \tilde{\mathcal{O}}(\sqrt{2^N T})$. In contrast, when the graph is known, the regret is reduced **significantly** and scales as $\tilde{\mathcal{O}}(\sqrt{T})$ (e.g., Corollary 1). The reason for the higher regret of the UCB-based algorithm without knowing the graph is that, due to the lack of graph structure, the algorithm treats each set of interventions as an independent arm. This means that there are $2^N$ possible intervention sets, leading to the regret mentioned above. **Computational Cost:** Here we compare the computational cost of three algorithms: UCB-1 (a UCB-based algorithm for the classic bandit problem), LinSEM-UCB [14] (a UCB-based algorithm for causal bandits), and our proposed algorithm. The UCB-1 algorithm incurs a computational cost of $\mathcal{O}(2^N)$ because it needs to select the best intervention from $\mathcal{A}$ with $|\mathcal{A}|=2^N$. The LinSEM-UCB algorithm, designed for causal bandits, has a computational cost of $\mathcal{O}(N \cdot 2^{2N})$, as each UCB iteration requires maximization over the vertices of an $N$-dimensional hypercube, with each vertex having a computational cost of $\mathcal{O}(N)$. Additionally, it selects the best intervention from $\mathcal{A}$. In contrast, our algorithm has a computational cost of $\mathcal{O}(N |\mathcal{A}_s|)$, where $\mathcal{A}_s$ represents the set of possible optimal interventions at time $t$ after elimination, with $|\mathcal{A}_s| \leq 2^N$ and $|\mathcal{A}_s| \rightarrow 1$ as $t \rightarrow \infty$. This is because the computational cost of the confidence width calculation in equation (19) is $\mathcal{O}(1)$ (recall that $d=1$), and the UCB only requires the estimated mean value (of order $\mathcal{O}(N)$) and $N$ confidence widths.
Consequently, our algorithm has a slightly higher computational cost at the beginning but becomes more efficient as $|\mathcal{A}_s|$ decreases to $\frac{2^N}{N}$, which can be achieved in a few elimination steps. Overall, our approach achieves a substantial reduction in regret while maintaining a comparable computational cost to UCB-1 in the context of this simple graph. Finally, we emphasize that as the graph topology becomes more complex, the UCB-based algorithms further degrade. For example, in hierarchical graphs, the intervention set size is $2^{dL+1}$, leading to a regret of $\tilde{\mathcal{O}}(\sqrt{2^{dL+1} T})$, compared to our algorithm’s $\tilde{\mathcal{O}}(\sqrt{d^{L-\frac{1}{2}} T})$. --- Rebuttal Comment 2.1: Comment: Sorry, it seems you misunderstood my example. My example was a chain graph where all variables also are parents of Xn (reward node). So, the graph has $2(n-1)$ edges (if I'm not wrong, $d = N -1$ and $L = N-1$) --- Reply to Comment 2.1.1: Comment: Thank you for the clarification. To lay some context for discussion, we would like to address this in the dichotomy of instance-dependent versus class-level regret analysis. The former analyzes the performance of a policy on a specific bandit instance while the latter captures the performance over a class of instances. In this context, for any causal bandit algorithm (including ours), one can discuss - Instance-dependent regret analysis, which involves analyzing a policy for a specific causal bandit instance (e.g., with a specific graph topology). - Class-level analysis, which involves analyzing a policy designed for a class of causal bandits (e.g., causal bandits with maximum degree $d$ and causal depth $L$, which is standard in causal bandit literature). 
Given this context, we address the regret of the example proposed by the reviewer in both settings: **Class-level regret for the class of graphs with maximum in-degree $d$ and causal depth $L$:** The UCB-1 algorithm can have a class-level regret that scales as $\tilde{\mathcal{O}} \big(2^{\frac{d^L}{2}} \sqrt{T}\big)$ (e.g., in the reverse tree graph). On the other hand, our class-level regret scales as $\tilde{\mathcal{O}}\Big( d^{L - \frac{1}{2}} \sqrt{T}\Big)$. **Instance-dependent regret for the chain graph:** As the UCB-1 algorithm does not take the causal structure into account, its regret remains $\tilde{\mathcal{O}}(2^{N/2} \sqrt{T})$. On the other hand, we can readily analyze the instance-dependent regret of our algorithm, which scales as $(N-1)\sqrt{T}$. To get the instance-level regret of our algorithm, let's examine the source of the $d^{L-1/2}$ term in our regret: if we define the maximum in-degree at each causal depth as $d_{(\ell)} = \max_{i\in [N], L_i=\ell} d_i$ for $\ell \in [L]$, then the $d^{L-1/2}$ term can be refined to the instance-dependent term $\sqrt{d} \prod_{\ell=1}^{N-1} d_{(\ell)}$ (in the proofs we need to replace $d$ with $d_{(\ell)}$). In the term $\sqrt{d} \prod_{\ell=1}^{N-1} d_{(\ell)}$, $\sqrt{d}$ reflects the complexity of the linear function, and the product accounts for the compounding effect where each causal depth contributes to $d_{(\ell)}$. --- Rebuttal 3: Comment: Thanks for your response. Based on the current result, the upper bound is $\mathcal{O} \left ( (N-1)^{N-3/2}\sqrt{T} \right)$? If so, it seems that, compared to the classic UCB for this instance, the regret bound is not particularly advantageous. It may be beneficial to discuss the improved instance-dependent regret, as you mentioned, in the revised version. I intend to maintain my current score; however, I have concerns regarding soft interventions.
In the title, abstract, and introduction, the authors claim to propose an algorithm for soft intervention cases. However, in the problem formulation, they define only a single soft intervention per node, which was unexpected. --- Rebuttal Comment 3.1: Comment: As the reviewer points out, we have consistently mentioned that we focus on soft interventions. This is the most general form of intervention and subsumes hard and $do$ interventions. If the reviewer is asking about one versus multiple interventions per node, we re-emphasize that using one is standard in the causal bandit literature for notational simplicity. Extending from one to multiple interventions is trivial: the constant 2 will be replaced by $K$ (the number of interventions) in the cardinality of the intervention space. We kindly ask the reviewer to base their judgment on the theoretical contribution of this paper: this paper significantly extends the scope of the causal bandit literature by entirely removing the assumption of a known graph topology and by allowing general soft interventions. All our other assumptions are either in line with or more relaxed compared to those in the existing literature. We are glad that the reviewers have not pointed to any technical flaws in the analysis. --- Rebuttal 4: Comment: Just for clarification regarding soft interventions. I expected that you would define them in the following way: by a soft intervention on node $i$, we can change any entries of $B_i$ to any values (with some restriction). But in your setting, by a soft intervention, we can only change $B_i$ to $B^*_i$. Could you discuss the complexity of the former setting? Is it not learnable? I understand the theoretical contribution of the paper, and my judgment is based on that. --- Rebuttal Comment 4.1: Comment: Thanks for giving us a chance to clarify this. Please note $B_i$ is *not* the interventional value of the random variable $X_i$ generated by node $i$. Rather, it specifies the mechanism for generating it.
**Post-interventional random variables:** Specifically, upon intervening on node $i$, the random variable $X_i$ can take any arbitrary real value. Its value is not limited to only one or a finite number of choices. That is, by applying a soft intervention on node $i$, we are changing its conditional distribution $\mathbb{P}(X_i \ | \ X_{{\sf pa}(i)})$ to a distinct conditional distribution. The post-intervention random variable $X_i$ will be generated according to the post-intervention conditional distribution. This is in contrast to $do$ interventions that specify a choice for $X_i$ (which essentially means placing the entire post-interventional conditional probability on a singular value). **Post-interventional SEMs:** Since we are working with linear SEMs, both pre- and post-interventional conditional distributions are fully specified by matrices $B$ and $B^*$ as well as the exogenous noise model. We are assuming that all these (i.e., $B$, $B^*$, and the noise distribution) are fully *unknown*. This means that the pre- and post-interventional distributions are fully *unknown*. Our algorithm is learning all these, which includes learning $B$ and $B^*$ (their support and their real-valued entries). The complexity of learning all $2N$ such vectors $\{B_i, B^*_i : i \in [N]\}$ is already included in the regret. We have considered one post-intervention SEM $B^*$. This can be readily extended to any finite number of post-intervention SEMs. Extending to an infinite number of post-intervention models is a potential future direction. That necessitates learning an *infinite* number of interventional distributions, which to the best of our knowledge even for simpler settings (e.g., knowing the topology or even the interventional distributions) is an open question. So, if we are understanding the reviewer’s concern correctly, having one single post-intervention $B_i^*$ does *not* mean that we are forcing $X_i$ to take only one value.
Rather, we are changing the conditional distribution that generates $X_i$, and it can still take any real value generated by the interventional distribution (this is a generalization of $do$ interventions, which specify a deterministic value for $X_i$ and, in the context of causal bandits, admit only a finite number of possibilities).
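To make the mechanism-versus-value distinction above concrete, here is a minimal numpy sketch (illustrative only; the matrices, noise level, and intervened node are our own choices, not taken from the paper) that samples from a linear SEM before and after a soft intervention swapping one column of $B$ for the corresponding column of $B^*$:

```python
import numpy as np

def sample_sem(B, noise_std, n_samples, rng):
    """Sample from a linear SEM X_i = B[:, i]^T X + eps_i, with B
    strictly upper triangular (nodes already in topological order)."""
    N = B.shape[0]
    X = np.zeros((n_samples, N))
    eps = rng.normal(0.0, noise_std, size=(n_samples, N))
    for i in range(N):
        # X_i depends only on earlier nodes (its parents) plus noise.
        X[:, i] = X[:, :i] @ B[:i, i] + eps[:, i]
    return X

# Observational weights B and interventional weights B_star share the
# same support; only the mechanism (edge weights) of node 2 differs.
rng = np.random.default_rng(0)
B = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 2.0],
              [0.0, 0.0, 0.0]])
B_star = B.copy()
B_star[1, 2] = -2.0

# A soft intervention on node 2 swaps in column 2 of B_star.
B_post = B.copy()
B_post[:, 2] = B_star[:, 2]

X_obs = sample_sem(B, 0.1, 5, rng)
X_int = sample_sem(B_post, 0.1, 5, rng)
```

Note that `X_int[:, 2]` is still a random variable: the intervention changed only the weights that generate it, not its value.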
Summary: This paper studies the linear SCM setting for causal bandits. In particular, there are two vectors associated with the linear response of every node, and the learner may independently choose which of the two vectors to use. The values as well as the graph are unknown. Hence, the action space has size $2^N$ for $N$ nodes. The authors provide an algorithm with a regret upper bound that nearly matches a lower bound, also presented. Strengths: The problem formulation is natural and seems to be "the right size": its added generality compared to the previous state of the art is enough to be interesting but still allows for a solvable problem. The presentation is generally clear and polished. Weaknesses: There are some technical concerns; please see the questions below. Technical Quality: 2 Clarity: 3 Questions for Authors: What does assuming that $B_i$ and $B_i^*$ have the same support buy you? Is the Lasso calibrated for the max number of parents? The bound you have on $\kappa$ in (27) is very big, $O(m^2)$. Doesn't this make Theorem 1, part 2 trivial? Is Assumption 5 necessary? Why can't you argue that soft interventions that would break Assumption 5 would have a minimal effect on the expected loss and therefore only contribute a small amount to the regret? One might even be able to have a bound that is adaptive to $\eta$. Or do you need it for graph discovery? What is $c$, or at least what does it depend on? Why use ridge regression for estimating $B$, $B^*$? They are assumed sparse, after all. Why do you need to solve an optimization problem over $2^{|A|}$? Won't the solution be on a corner of the $|A|$-dimensional hypercube, so a continuous relaxation would suffice? Can you provide some intuition for what the lower bound construction looks like? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: No limitations discussed in the main body. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Support of the weight matrices:** We clarify that $B_i$ and $B_i^*$ always have the same support under soft interventions, as such interventions only change the conditional distribution of node $i$ without affecting the topology. The causal topology remains intact, leading to $B_i$ and $B_i^*$ having the same support. As a side note, we also remark that under more restrictive types of interventions ($do$ and hard), an intervention on node $i$ removes its causal dependence on its parents, i.e., $pa(i)$. In such cases, the support of $B_i^*$ becomes a subset of that of $B_i$. However, even in such cases, when the topology is unknown, assuming that the support of $B_i^*$ is a subset of that of $B_i$ does not provide any gain. Overall, since we are assuming soft interventions, $B_i$ and $B_i^*$ have the same support. **Lasso calibrated for the max number of parents:** Yes, the algorithm is designed for the entire class of causal models with maximum in-degree bound $d$. If more node-level information is available, the Lasso regression can be accordingly modified. We remark that the interest of the community working on causal bandits has been towards removing as many assumptions as possible about the graph structure and interventions. Our setting is in line with extending the state of the art by removing topology information. **Loose bounds on kappa:** We thank the reviewer for the sharp observation. The reviewer is right that this is not a tight bound. Nevertheless, we note that for achieving a tight regret, such a loose bound is inevitable, and further improving it compromises the regret. Here's the reason: the regret is only a function of the topology information, and the topology should be learned only to the extent that it facilitates identifying the best intervention.
In other words, learning the topology up to some level improves regret and, after that, starts compromising it, since learning the topology requires collecting more samples, which implies less sample efficiency. Therefore, any algorithm that learns the topology accurately is over-learning, and our conjecture is that it will compromise the regret. Please note that the purpose of Theorem 1 is to provide the guarantees needed only for characterizing the regret, and it is not intended to claim that the topology is learned accurately. **Necessity of Assumption 5:** As the reviewer points out, this assumption is needed for graph recovery, and it is standard when the graph skeleton is unknown [19,22,R1]. When a node acts as a confounder, intervening on that node might not impact certain intermediary nodes but still influences the reward. Without Assumption 5, the confounder relationships cannot be identified, leading to an incorrect graph topology recovery. The reviewer's point is correct: the interventions on some nodes might have minimal effect on the reward. However, when the topology is unknown, these nodes need to be identified and treated properly. The reason is that even a minimal per-round effect can still accumulate to linear regret if such a node is chosen linearly often over the horizon $T$. So the algorithm needs to also properly identify these nodes to ensure they are selected sublinearly. [R1]: Elahi et al. Partial Structure Discovery is Sufficient for No-regret Learning in Causal Bandits, ICML 2024 Workshop RLControl Theory. **Definition of $c$:** Thanks for the sharp observation. This constant is $c=\kappa$, where $\kappa$ is defined in (27). We recognize this notation is redundant and will change it to $\kappa$. **The weight matrices are assumed sparse:** Thank you for this question. We are not assuming sparsity for $B$ and $B^*$. The maximum in-degree $d$ can be arbitrarily large, i.e., it can be as large as $N-1$.
This means the columns of $B$ and $B^*$ are $d$-sparse, where $d$ can be $N-1$. Nevertheless, the reviewer is correct that if we limit the scope of the problem to assume that $B$ and $B^*$ are sparse, then sparse-recovery methods can be more effective than ridge regression. **Continuous relaxation for optimization:** We recognize that the reviewer might have an interesting point about solving a combinatorial bandit problem via continuous relaxation. We are unaware of such a relaxation in the bandit literature. One challenge in adopting such an approach is that the utility to be optimized (i.e., the UCB) is not defined at non-discrete points. Specifically, $UCB_{a}$ is not defined for $a_i\notin\{0,1\}$. As such, while we appreciate the reviewer's thoughts, we do not see an immediate way of using continuous relaxation in combinatorial bandit decisions. **Lower bound construction:** In this paper, we provide a minimax lower bound. This is done by constructing two bandit instances that are highly similar, so that distinguishing them is difficult. Specifically, we construct two bandit instances that share the same hierarchical graph structure. These instances differ only in the parameters that switch the observational and interventional weights for the nodes with causal depth 1, resulting in distinct optimal interventions. We identify the minimum number of samples that any algorithm will need to distinguish between the two bandit instances and the minimum regret it will incur on one of the instances. This serves as a lower bound on the minimax regret. **Final note about "soundness":** We hope we have addressed the reviewer's concerns about "soundness" (rate 2: fair), especially why $B_i$ and $B_i^{\star}$ have the same support, why the second part of Theorem 1 is sufficient for our regret minimization purposes, the lack of sparsity in $B$ and $B^{\star}$, and continuous relaxation.
We remark that this paper significantly extends the scope of the causal bandit literature by entirely removing the assumption of a known graph topology (a common assumption in extensive recent publications in JMLR, NeurIPS, ICML, and AISTATS). This is a major leap in causal bandits, and we kindly request that the reviewer consider re-evaluating their rating of the manuscript. --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response, and I think many of my concerns are answered, concerning Theorem 1. On a related note, how does the regret bound scale with $m$? --- Reply to Comment 1.1.1: Comment: **Scale with $m$:** We appreciate the reviewer for raising this important question. The regret bounds scale linearly in $m$. An intuitive reason is that if we multiply all the random variables of the system by a constant $M$, then the final reward will be scaled up by $M$, and the regret of any algorithm scales up by the same constant $M$. As discussed at the end of Section 2, the dependence of the regret on $m$ is linear even in simpler linear bandit problems (e.g., [25]). The exact same linear dependence has also been reported in other causal bandit settings [14,17]. We note that in some literature such dependence might not appear explicitly since they normalize the range of the random variables or rewards to fall in the range $[0,1]$, i.e., by setting $m=1$ (e.g., causal bandits with $do$ interventions [1,6] and linear bandits [R2]). [R2]: Abbasi-Yadkori et al. Improved algorithms for linear stochastic bandits, NeurIPS 2011. **Other concerns:** We are glad that the reviewer's concerns about Theorem 1 are addressed. We will be happy to also elaborate more on other concerns if there are still remaining ones.
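The Lasso-based parent recovery discussed earlier in this rebuttal can be illustrated with a minimal sketch. Note the simplifying assumptions: a known topological order (which the actual algorithm must learn), and ordinary least squares with hard thresholding as a crude stand-in for the Lasso step; all names, constants, and the synthetic graph are our own choices.

```python
import numpy as np

def estimate_parents(X, order, threshold):
    """Estimate each node's parent set by regressing it on the nodes
    preceding it in a topological order, then hard-thresholding small
    coefficients. OLS plus thresholding is a crude stand-in for the
    Lasso step; the ordering is assumed known here."""
    parents = {i: set() for i in range(X.shape[1])}
    for pos, i in enumerate(order):
        preds = order[:pos]
        if not preds:
            continue  # a root node has no candidate parents
        coef, *_ = np.linalg.lstsq(X[:, preds], X[:, i], rcond=None)
        parents[i] = {j for j, c in zip(preds, coef) if abs(c) > threshold}
    return parents

# Synthetic chain 0 -> 1 -> 2 with small exogenous noise.
rng = np.random.default_rng(1)
n = 2000
X = np.zeros((n, 3))
X[:, 0] = rng.normal(size=n)
X[:, 1] = 1.5 * X[:, 0] + 0.1 * rng.normal(size=n)
X[:, 2] = -2.0 * X[:, 1] + 0.1 * rng.normal(size=n)

print(estimate_parents(X, order=[0, 1, 2], threshold=0.2))
```

With enough samples, the spurious coefficient of node 0 in the regression for node 2 stays well below the threshold, so only the true edges survive.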
Summary: The authors consider a stationary causal bandit problem for an unknown linear model with weight matrix $B \in \mathbb{R}^{N \times N}$ and noise vector $\epsilon \in \mathbb{R}^N$. In their setup, an intervention on node $X_i$ replaces all of the weights into $X_i$ with those from another unknown matrix $B^* \in \mathbb{R}^{N \times N}$. The authors propose a two-phase approach: learning the causal structure by (1a) learning a valid topological ordering and (1b) learning the causal DAG, and then (2) applying a phase elimination algorithm to learn the best pulls over a time horizon $T$. The authors also include a minimax lower bound for their problem setup. For either bound, the authors argue that the maximum causal depth $L$ and a known upper bound (denoted here as $d^+$ for clarity) for the maximum in-degree $d$ are the relevant topological parameters. It is assumed throughout that interventions do not affect the causal structure: $\mathrm{Supp}(B)=\mathrm{Supp}(B^*)$. Strengths: The authors identify a novel problem setup in the understudied field of causal bandits with soft interventions and unknown causal structure. Their results are significant to the causal bandit literature and have clear applications for real-world modelling. Weaknesses: The authors make an informal claim near the beginning of the paper, which I suspect affects the tightness of their bounds. They argue that all conditional independencies must be learned for regret minimisation with an unknown graph. However, it should be clear that only the nodes in $\mathrm{An}(Y)$ are relevant. In stage (1b), beginning with the reward node $N$ and iteratively learning the parent sets $\mathrm{pa}(N)$ then $\mathrm{pa}(\mathrm{pa}(N))$ should be a more efficient approach.
This suggests to me that the relevant causal path depth in their bounds is actually $L_N$, while the relevant in-degrees in stages (1) and (2) of their algorithm are $d^+$ and $d^* := \mathrm{max}_{i \in an(N) \cup N} (d_i)$ respectively. The authors also seem to ignore the parameter $R := (m/\eta)^2$ in their regret bounds. While it is not a topological property of the population DAG, it is certainly an important parameter in the problem setting and could be thought of as a kind of guaranteed minimal signal-to-noise ratio (SNR). From the definition of $T_1$ there should be a linear scaling of the regret bounds with $R$, up to a poly-log factor. This SNR is important for real-world modelling and could also offer insights into relaxing Assumptions 3, 4 and/or 5. Putting the above together, my estimation of the same upper regret bound is $\tilde{O}((cd^*)^{L_N - 1/2} \sqrt{T} + R + d^+ + N)$. Please correct me otherwise and/or update the manuscript. I will revise my score and evaluation of the paper's soundness following the authors' rebuttal. The authors provide a very minimal experiment, relegated to the appendix. It would have been informative to see parameter sweeps and experiments with random DAGs to compare their bounds with empirical scalings. The authors make no mention of the possibility of hidden confounders or whether their model can incorporate them. In particular, the claim that only $\mathrm{pa}(N)$ are possibly optimal with hard interventions is false; see [1] for details. [1] Sanghack Lee and Elias Bareinboim, *Structural Causal Bandits: Where to Intervene?*, NeurIPS 2018. Technical Quality: 3 Clarity: 2 Questions for Authors: Does your approach assume causal sufficiency? In particular, are noise terms assumed to be mutually independent (i.e., $\mathrm{Corr}(\epsilon_i, \epsilon_j) = \delta_{ij}$)?
I do not understand the first case of the estimator for the ancestors of $i$ in equation (9): it appears to read "if $i$ is not the reward node but has descendants then it can't have any ancestors". Is this a typo? In Table 1, the meaning of $K$ in the benchmark approaches should be indicated. Is this the number of possible hard interventions on a node for categorical data or something else? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 4 Limitations: Yes, the authors provide some key underlying assumptions. Whether causal sufficiency is assumed should be indicated either way. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the thoughtful questions, especially those pertinent to the instance-dependent regret and hidden confounders. We provide more discussion below and hope it clarifies the reviewer's technical concerns. **Dependence on $d$ and $L$:** The reviewer raises a good point about also characterizing instance-dependent regret bounds. This falls into the general dichotomy of class-level versus instance-level regret analysis. We have provided regret bounds for a class of bandits with a maximum in-degree $d$ and maximum causal length $L$. As the reviewer correctly points out, this can be further fine-tuned to recover instance-dependent regret bounds that use the instance-level information $L_N$ and $d^*$. Characterizing these bounds follows our analysis essentially verbatim, and we will add a remark in the paper to state these instance-dependent regret bounds. The changes in the instance-dependent bounds can be made as follows: - **The effect of $L$ and $L_N$.** We note that neither Algorithm 1 nor Algorithm 2 requires the information of $L$. Algorithm 1 implicitly learns $L$ by learning the parent sets, which is then fed to Algorithm 2. For the regret bound proof, we use recursion along the longest causal path. Each layer in the causal path contributes a term $cd$ to the regret; the compound effect of all layers along the causal path becomes $(cd)^L$ in the class-level regret. For the instance-dependent regret on the reward node, it is sufficient to perform this process along the causal path on the subgraph that only contains $an(N)$, in which case the regret will depend on $L_N$. This subgraph is learned by Algorithm 1. - **The effect of $d$ and $d^{\star}$.** Similar to the previous point, when we evaluate the regret compounding along the causal paths formed by $an(N)$, the maximum in-degree will be $d^*$. Thus, the contribution of each layer to the regret will be $cd^*$ instead of $cd$.
- Algorithm 1 requires knowing $d$, and the iterative approach suggested by the reviewer can save some computational cost by applying Lasso estimators only on $an(i)$. But this approach does not tighten the regret since the same amount of observational data $T_2$ is needed to learn any parent sets. **Parameter $R=(m / \eta)^2$:** We certainly agree with explicitly incorporating the effect of $R$ on the regret bounds. We agree with the reviewer that introducing this notion of SNR is useful for relaxing our conditions: Assumptions 3-5 could be restated equivalently as assumptions on the SNR term $R$. We further elaborate on the dependence on $R$ in the answer to the next comment. We also note our objective has been primarily focused on the fundamental question of how the regret depends on (i) the graph topology and (ii) the interventional distribution. The causal bandit literature has been evolving towards removing as many assumptions as possible about graph topology and interventional distributions. This is the first paper that has dropped both under general structures and interventional models (with the remaining piece being extending linear SEMs to non-linear SEMs). However, we certainly agree that it is also important to understand the dependence of regret on other model parameters. **Upper regret bound:** Based on the previous point, the correct instance-dependent regret bound is $\tilde{\mathcal{O}}((c d^*)^{L_N - 1/2} \sqrt{T} + d^{+} + RN)$. The $RN$ term arises from the step of identifying the ancestor relationships. We note that a similar term appears in related literature [19,22], where the term $R$ is coupled with the degree as they only need to identify $pa(N)$. **Additional experiments:** Please refer to the global response, where we provide more experiments. **Hidden confounders:** Thank you for the comment. We note that all our statements are in the context of our specified setting (no hidden confounders).
Under this setting (i.e., no hidden confounders and full intervention capability), only $pa(N)$ are optimal. We will re-emphasize this as a remark in the paper to avoid confusion. We highlight that having no hidden confounders is a standard assumption in the existing literature on causal bandits (even under more restrictive $do$ interventions). The primary reason is that causal bandits are not fully understood even when there are no hidden confounders. Nevertheless, we certainly agree that understanding the effect of unobservable confounders is also important. Including hidden confounders will introduce non-trivial challenges. For instance, it brings in bidirected edges, as a result of which one cannot express the reward as a linear function of observable variables. **Causal sufficiency:** We thank the reviewer for this question. Yes, our approach does assume causal sufficiency, and we will include this in Assumption 4. **First case in equation (9):** Thanks for the sharp observation. Yes, based on Assumption 5, the condition "$i<N$ and $|\widehat{\operatorname{de}}(i)|>0$" is indeed a typo and should be removed. **$K$ in Table 1:** Yes, $K$ represents the number of possible $do$ interventions on one node. We will add a comment to specify it. **Final note about "soundness":** We hope we have addressed the reviewer's concerns about "soundness" (rate 2: fair), especially about having hidden confounders and class-level versus instance-level regret (which can be readily recovered from our result). We remark that this paper significantly extends the scope of the causal bandit literature by entirely removing the assumption of a known graph topology (a common assumption in extensive recent publications in JMLR, NeurIPS, ICML, and AISTATS). This is a major leap in causal bandits, and we kindly request that the reviewer consider re-evaluating their rating of the manuscript.
--- Rebuttal Comment 1.1: Comment: Many thanks to the authors for their thoughtful and detailed responses, which have addressed my major concerns and questions. In light of this, I will upgrade my overall rating to a 7: Accept, and my soundness rating to a 3: Good. My high overall score is based on the significance and novelty of the authors' contribution. I suspect correlated noise variables (i.e., latent confounders) could be incorporated into the authors' analysis, which presents one exciting avenue for future work. --- Reply to Comment 1.1.1: Comment: We are grateful to the reviewer for the great suggestion. Indeed, we agree that modeling latent confounders via modifying the noise model is an effective way of integrating them into the causal bandit model. We certainly agree that this is an important next line of research in the causal bandit literature.
Rebuttal 1: Rebuttal: We thank the reviewers for their comments on our assumptions and empirical evaluations. We would like to clarify the following: **Theoretical contributions:** We remark that this paper significantly extends the scope of the causal bandit literature by entirely removing the assumption of a known graph topology while accommodating general soft interventions. All our other assumptions are either in line with or more relaxed than those in the existing literature. We have provided more details on these. We are glad that the reviewers have not pointed to any technical flaws in the analysis. We believe this paper provides a major contribution in the trajectory of development in causal bandits, which has been moving away from stylized assumptions. We certainly agree that there are still important issues to address (e.g., hidden confounders) that remain for future work. **Additional empirical evaluations:** With the additional results, overall we have provided evaluations to assess the key theoretical findings of the paper. These include: (i) scaling behavior with respect to the key parameters $d$ and $L$ in the results; (ii) sublinear regret and fast convergence; and (iii) computational advantage, which is a bottleneck for almost all existing approaches to causal bandits, even those with stronger assumptions. We remark that due to computational challenges, causal bandit algorithms are generally focused on small graph sizes. Additionally, the hierarchical graph is the structure that mimics the worst-case regret for algorithms with the given parameters.
The hierarchical graphical models that we are evaluating have $L=9$ layers and $N=19$ nodes, which is the largest model considered in the literature (for instance, the JMLR paper [14], the AISTATS paper [17], and the JSAIT paper [24] consider hierarchical graphs with no more than 3 layers; JMLR [14] and JSAIT [24] also consider simpler graphs with at most $N=10$ nodes; and [22], which focuses on $do$ interventions with an unknown graph, provides experiments for a simple binary tree with $N = 20$ nodes). **Scaling with $L$:** Scaling of the regret with respect to the causal depth $L$ is depicted in Figure 1 in the attached document for the setting of a hierarchical graph with $d=2$, with $L$ varying in the range $1$ to $9$. The theoretical results (regret upper and lower bounds) predict that the regret grows at the rate $(2c)^L$ (i.e., exponentially in $L$). The empirical results in Figure 1 corroborate that the cumulative regret scales exponentially with length $L$, and the actual regret closely tracks the upper bound's trend. **Scaling with $d$:** Scaling of the regret with respect to the maximum in-degree $d$ is depicted in Figure 2 in the attached document, based on a hierarchical graph with $L=2$. We note that we increase the exploration parameters $T_1$ and $T_2$ to ensure sufficient exploration for all the degrees. Theoretical predictions suggest that our algorithm's regret scales as $d^{3/2}$ (i.e., polynomial growth in $d$). For this choice of $L$, the lower bound scales linearly in $d$. Figure 2 demonstrates that our regret is super-linear and tracks the polynomial trend of the regret upper bound (i.e., our achievable regret). **Sublinear regret:** Figure 3 of the attached document shows the cumulative regret for a larger graph with $L=6$ and $d=2$. In this scenario, GA-LCB both with and without graph information exhibits sublinear regret.
However, GA-LCB with unknown graph information incurs higher cumulative regret due to the additional exploration required for estimating the topology. **Computational cost:** As discussed in Section 3.3, a key contribution of our algorithm is its significant computational advantage compared to the existing causal bandit algorithms (which also rely on stronger assumptions). First, our algorithm circumvents the computational complexity of computing the upper confidence bounds (which generally requires solving an optimization problem) by iteratively computing the value through the causal depth. To highlight the advantage, we note that the UCB-based causal bandit algorithms are computationally viable only for $L\leq 2$ or for special graphs such as chain graphs [24]. In contrast, the computational complexity of our algorithm scales linearly with $L$ (specifically, it scales as $\mathcal{O}(Ld^3)$). This has allowed us to easily implement algorithms for causal paths as long as $L=9$. Additionally, our algorithm includes a phase elimination step that removes sub-optimal arms from the potential best intervention set. Since we need to calculate UCBs for each possible intervention, the phase elimination step prevents unnecessary UCB computation. This results in significant computational savings as the time horizon increases. We have also provided more empirical analysis of the computational complexity in Appendix A. Figure 4 (Appendix A) illustrates the computational differences between various algorithms. It is evident that our algorithm significantly reduces computation time, even for small graphs. In contrast, the other algorithms are not scalable with respect to either the degree $d$ or the length $L$. Pdf: /pdf/0963a61a2a38dcc4c66f1fa48de29a7874b53f0e.pdf
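The phase-elimination idea described above can be illustrated with a generic routine on a finite arm set. This is a simplified stand-in, not GA-LCB itself: the Gaussian unit-variance rewards, the doubling pull schedule, and the confidence radius are all our own illustrative choices.

```python
import numpy as np

def phased_elimination(means, T, rng, delta=0.05):
    """Generic phased (successive) elimination on a finite arm set:
    in each phase, pull every surviving arm equally often, then drop
    arms whose upper confidence bound falls below the best lower
    confidence bound. Rewards are Gaussian with unit variance."""
    K = len(means)
    active = list(range(K))
    sums = np.zeros(K)
    counts = np.zeros(K)
    t, phase = 0, 1
    while t < T:
        pulls = 2 ** phase  # pulls per surviving arm this phase
        for a in active:
            n = min(pulls, T - t)
            if n == 0:
                break
            sums[a] += rng.normal(means[a], 1.0, size=n).sum()
            counts[a] += n
            t += n
        mu = sums[active] / counts[active]
        rad = np.sqrt(2 * np.log(1.0 / delta) / counts[active])
        best_lcb = (mu - rad).max()
        # Keep only arms whose UCB is still competitive.
        active = [a for a, u in zip(active, mu + rad) if u >= best_lcb]
        phase += 1
    return active

rng = np.random.default_rng(3)
print(phased_elimination(np.array([0.0, 0.1, 1.0]), T=20000, rng=rng))
```

Eliminated arms never consume further pulls or UCB computations, which is the source of the computational savings mentioned above.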
NeurIPS_2024_submissions_huggingface
2024
Functionally Constrained Algorithm Solves Convex Simple Bilevel Problem
Accept (poster)
Summary: The paper first shows the difficulties of obtaining absolute optimal solutions to simple convex bilevel problems. The authors also present lower bounds for first-order methods for solving simple convex bilevel problems. Moreover, the authors propose a novel framework based on a functionally constrained reformulation. Based on the framework, they develop near-optimal algorithms for solving simple convex bilevel problems under smooth and non-smooth settings. Strengths: 1. The paper shows the difficulties of obtaining absolute optimal solutions to simple convex bilevel problems under common assumptions. 2. The authors present lower bounds for first-order methods for solving simple convex bilevel problems, which are $\Omega(1/\sqrt{\epsilon})$ and $\Omega(1/\epsilon^{2})$ for smooth and non-smooth settings, respectively. 3. The authors also present algorithms for solving these types of problems, which match the lower bounds ignoring logarithmic factors. Weaknesses: 1. Many lemmas in this paper are directly quoted from other works, but some, such as Lemmas 5.4 and 5.5, are not identical to the original ones. It would be better to provide proofs for these lemmas. 2. In this paper, the authors only consider the unconstrained case, while many other works about simple bilevel optimization consider the constrained case. 3. In the numerical experiments, the authors should count the time of finding the initial $u$ and $l$ and show it on the plots. Otherwise, it is not a fair comparison, as finding the initial $u$ and $l$ has the same complexity as the main stage. 4. The desired accuracy ($10^{-2}$ in Figures 1 and 2) for the experiments is not small enough. It would be better to set it smaller than $10^{-6}$ and examine the performance of the algorithms in the late stage. 5. The proposed algorithm requires a predetermined $T$, whereas similar studies do not have this requirement. 6.
The novelty of this work is limited, as the main idea is borrowed from [1] with some extensions, and many of the technical proofs are identical to [2] and [3]. References: [1]. Jiulin Wang, Xu Shi, and Rujun Jiang. Near-optimal convex simple bilevel optimization with a bisection method. [2]. Yurii Nesterov. Lectures on convex optimization [3]. Sébastien Bubeck et al. Convex optimization: Algorithms and complexity. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: No negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for dedicating their valuable time and effort to evaluating our manuscript. Below are our responses to the reviewer's concerns: ### W1 Thanks for the suggestion! We will provide self-contained proofs for these lemmas in the revised version. ### W2 Actually, our method can be extended to the constrained case where the feasible set is convex and compact (as in many previous works, e.g. [1][2]). In our manuscript, we focus on the unconstrained case so as to provide a clearer discussion. Specifically, if the feasible set of the problem is a convex and compact set $\mathcal X$, one only needs to replace the Euclidean ball $\mathcal B(\mathbf x_0,D)$ in the original problem with $\mathcal X$. For Lipschitz problems, computing a projection onto $\mathcal X$ is needed in each gradient step, which is standard. For smooth problems, one needs to compute the projection onto the intersection of $\mathcal X$ and a hyperplane (see Proposition 5.2), which is also needed by previous work [1]. ### W3 As presented in Appendix E, the initialization time is already taken into account in the numerical experiments and is shown in the plots. In the attached pdf in the general response, the initialization stage is plotted with dashed lines in Figure 1 and Figure 2. ### W4 In the first experiment (Figure 1), we actually set the desired accuracy $\epsilon_f=\epsilon_g = 10^{-6}$. As we noted in Lines 628-629, an $(\epsilon_f,\epsilon_g)$-weak optimal solution is indeed found by $\texttt{FCB-BiO}^{\texttt{sm}}$, since the approximate solution $\hat {\mathbf x}$ found by the algorithm satisfies $g(\hat {\mathbf x })\leq g^*+\epsilon_g, f(\hat {\mathbf x})\leq f^*\leq f^*+\epsilon_f$. However, to present the results more clearly, Figure 1 shows the **absolute optimality gap** $|f(\hat {\mathbf x})-f^*|$, instead of the weak optimality gap $f(\hat{\mathbf x})-f^*$.
Since the aim is to find a **weak optimal solution**, the absolute optimality gap does not converge to a very small level. The vertical axis of Figure 2 does not indicate the accuracy. As we remarked in Lines 634-636, to the best of our knowledge, no existing solver can compute the exact optimal values $f^*, g^*$ of the second experiment. Thus we plot the function values $f(x),g(x)$, instead of the suboptimality gaps $f(x)-f^*$, $g(x)-g^*$, in Figure 2. To better demonstrate the performance of the algorithms in the late stage, we conducted another numerical experiment. The setup and results of the experiment are presented in the general response and the attached pdf. In this experiment, we set $\epsilon_g=10^{-9}$ and $\epsilon_f=10^{-6}$. The results again show the superior performance of our method compared to existing methods at both the upper level and the lower level in the late stage. ### W5 FCB-BiO is a double-loop algorithm (bisection as the first level and SGM/generalized AGM as the second level). We need to predetermine a $T$ (which is equivalent to predetermining a desired accuracy $\epsilon$) to stop the subprocess (SGM/generalized AGM) when it runs for enough time and the approximate solution is accurate enough. Many previous double-loop methods also need to tackle such issues, such as Catalyst [5]. We would like to remark that some previous works also need to predetermine the number of iterations $T$ to set the hyperparameters, e.g. the learning rate, such as Theorem 4.4 in [1] and Proposition 1 in [3]. ### W6 We appreciate the reviewer's feedback and would like to clarify the distinctions and contributions of our work in comparison to previous works. Firstly, while the bisection process in our proposed algorithm $\texttt{FCB-BiO}$ might appear similar to the approach in Wang et al. (2024) [4], the underlying motivations and objectives are fundamentally different. In Wang et al.
(2024) [4], the bisection is employed to transform the original problem $\min f(x) \text{ s.t.}\ g(x)\leq g^*$ into a series of problems $\min \ g(x) \text{ s.t.}\ f(x)\leq c$ for different $c$. In contrast, in $\texttt{FCB-BiO}$, the original problem is reformulated as finding the smallest root of a univariate monotone auxiliary function $\psi^*(t)$. For such one-dimensional root-finding problems, the bisection method is the most standard and straightforward approach. Additionally, our reformulation improves the results in Wang et al. (2024) [4]. Specifically, [4] requires an oracle that efficiently projects $x$ onto the sublevel set of $f$, i.e., $\{x \mid f(x)\leq t\}$. This assumption only holds for a limited class of functions and would otherwise incur additional gradient evaluations. On the other hand, $\texttt{FCB-BiO}$ does not need this assumption, making it applicable to a broader range of problems. In summary, we view the main contribution of our work as **introducing the functionally constrained reformulation** to simple bilevel problems and proposing a near-optimal method which is simple and effective. We also prove the intractability of finding absolute optimal solutions for zero-respecting first-order algorithms, and establish lower bounds for finding weak optimal solutions, ensuring the comprehensiveness of our study. The bisection search is only a subprocedure in our algorithm. [1] Cao, Jincheng, et al. An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem. [2] Jiang, Ruichen, et al. A conditional gradient-based method for simple bilevel optimization with convex lower-level problem. [3] Samadi, S., et al. Achieving optimal complexity guarantees for a class of bilevel convex optimization problems. [4] Jiulin Wang, et al. Near-optimal convex simple bilevel optimization with a bisection method. [5] Lin, H., et al. A universal catalyst for first-order optimization.
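As an illustrative aside for readers (a hypothetical toy sketch, not the authors' implementation), the root-finding reformulation can be mimicked in one dimension: define $\psi^*(t)=\min_x \max(f(x)-t,\, g(x)-\hat g)$ for a relaxed lower-level target $\hat g$; $\psi^*$ is nonincreasing in $t$, and an outer bisection locates its smallest root, which tracks the bilevel optimal value. The brute-force grid minimization below stands in for the SGM/AGM inner solver, and all objectives and names are invented for the example.

```python
import numpy as np

# Toy 1-D instance (invented for illustration): lower level g(x) = (x-1)^2
# with unique minimizer x* = 1, upper level f(x) = x^2, so the bilevel
# optimal value is f(x*) = 1.
f = lambda x: x ** 2
g = lambda x: (x - 1.0) ** 2
g_hat = 1e-6  # relaxed lower-level target: accept g(x) <= g* + eps_g

GRID = np.linspace(-3.0, 3.0, 60001)

def psi_star(t):
    """psi*(t) = min_x max(f(x) - t, g(x) - g_hat), approximated on a grid.

    The brute-force grid minimization stands in for the (sub)gradient
    inner solver; psi* is nonincreasing in t.
    """
    return float(np.min(np.maximum(f(GRID) - t, g(GRID) - g_hat)))

# Outer bisection for the smallest root of psi*; the bracket maintains
# psi*(lo) > 0 >= psi*(hi).
lo, hi = 0.0, 4.0
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if psi_star(mid) > 0.0:
        lo = mid
    else:
        hi = mid

# hi now approximates the bilevel optimal value f(x*) = 1 (up to the
# slack introduced by g_hat and the grid resolution).
assert abs(hi - 1.0) < 1e-2
```

Note that the outer loop only queries the sign of $\psi^*$, so any inner solver that returns a sufficiently accurate approximate minimum can be plugged in; this is the sense in which the bisection is "only a subprocedure".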
We thank the reviewer once again for the effort and time invested in the review. We would love to provide further clarifications if the reviewer has any additional questions. --- Rebuttal 2: Title: Could you kindly let us know if your concerns have been addressed? Comment: Dear Reviewer EaJT: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. As the discussion period draws to a close, we kindly ask if you could let us know whether your concerns have been satisfactorily addressed. If you have any additional questions, we will be more than willing to provide further explanations. We would highly value any additional feedback you may provide. --- Rebuttal Comment 2.1: Comment: Thank you for the rebuttal. I still believe the novelty of this paper is not significant. Both this paper and [Wang et al. (2024)] use an oracle of solving single-level problems and apply a bisection approach to tackle an associated 1-D problem. The only distinction is that this paper uses bisection to solve a different 1-D problem. Thus, I will maintain my score. --- Reply to Comment 2.1.1: Comment: Dear Reviewer EaJT, Thank you for your response. We greatly appreciate the time and effort you put into providing constructive feedback. We think your main concern is still the comparison to [1]. We want to highlight that the sublevel set oracle (i.e. projecting onto the sublevel set of $f(\mathbf x)$: $\text{prox}_{f,c}(\mathbf x)=\arg \min _{f(\mathbf y)\leq c}\Vert\mathbf y-\mathbf x\Vert^2$) used in [1] is a **strong assumption**. For example, when $f(x)$ is the mean square loss or logistic loss as in our Experiment 2, projecting onto the sublevel set cannot be computed efficiently. In another example, we consider the single-level optimization of $f$ using such an oracle. 
In this case, one can simply solve this optimization problem using a binary search to identify $f^*$, which only requires $\mathcal{O}(\log(1/\epsilon))$ complexity to achieve $\vert f(x) - f^* \vert \le \epsilon$ (see also Remark 3 in \[1\]), even breaking the standard lower complexity bound $\Omega(1/\sqrt{\epsilon})$. In contrast, our method only uses the **standard gradient oracle** to achieve the $\tilde {\mathcal{O}}(1/ \sqrt{\epsilon})$ rate. We think this is a significant improvement over \[1\], and we hope that the reviewer can acknowledge our contributions. \[1\] Jiulin Wang, Xu Shi, and Rujun Jiang. Near-optimal convex simple bilevel optimization with a bisection method. International Conference on Artificial Intelligence and Statistics, 2024.
Summary: This work studies simple bilevel problems with convex upper and lower level objectives. The paper studies the problem for both smooth and Lipschitz functions. The contribution is twofold: 1- First, it shows that no zero-respecting algorithm can achieve an $(\epsilon_f,\epsilon_g)$-absolute optimal solution within a limited number of iterations. 2- Second, it establishes an iteration complexity lower bound for first-order zero-respecting algorithms and proposes a method that achieves this lower bound up to a logarithmic factor. The proposed method is achieved through relaxing Problem 1 to Problem 4 and solving this problem through a bisection procedure and Lemma 2.3.4 from [20]. Numerical experiments were conducted to study and compare the proposed method. Strengths: The paper has a good literature review and good structure. I enjoyed reading it. Weaknesses: **Writing:** 1- Paragraph starting from line 24: the first and second sentences oppose each other, meaning that the second sentence should bring something in contrast to what is expressed in the first sentence. This is expected due to the use of "Yet". However, the comparison is not correct since the first sentence assumes an explicitly given constraint set while the second sentence mentions Problem 1, which does not have an explicit constraint set (which should be the case for bilevel programs). 2- In (5) it feels like a constraint over $x$ is missing in the minimization. Please clarify, thanks! 3- Line 205, the sentence starts with "And", but this sentence is connected to the previous sentence. Please fix! 4- Line 309, "It is open whether ..." sounds a bit informal, please revise, thanks! 5- Statements of Theorems 4.1, 4.2: "there exists an (1,1)..." should be "there exists a (1,1)..." to help readability.
**Technical:** 1- The discussion below Theorem 4.2 on its proof states that the failure of the algorithms happens when $n\geq 2T$, where $n$ is the problem dimension and $T$ is the number of calls to the first-order oracle. This, however, was not mentioned anywhere else (at least I did not see it). This means that if you go beyond a certain number of calls to the oracle, then you can actually reach the approximate absolute optimal solutions, and your first claim/contribution is falsified. I'd like to add that the paper is studying the zero-respecting property, which enforces coordinate-wise updates at each iteration. A convergence rate of $\mathcal O(n/T)$ is typical for coordinate methods when applied to convex problems. Thus, the dependency on $n$ (problem dimension) is naturally expected. 2- As a part of the algorithm, the initialization should be, though briefly, explained in the main body of the paper. I kindly suggest considering it. 3- The Bisec-BiO method showed a close behaviour in the first experiment (Figure 1). Due to the similar performance (not the same) between this method and the proposed method, it is expected to see a more detailed comparison between these two in the second experiment. Unfortunately, this was skipped. A more careful design of the second experiment might be helpful to increase the impact of the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you please explain your reasoning and ideas regarding the points raised in the "Weaknesses" section? Thanks! Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for dedicating their valuable time and effort to evaluate our manuscript. However, there are some misunderstandings in the review, and we respond to your concerns one by one. > Writing Thanks for the valuable suggestions! We will incorporate your advice in the revised version. > Technical Weakness 1: Theorem 4.1/4.2 We respectfully disagree with your opinion. Throughout the paper, we adopt the standard oracle complexity framework for first-order methods established by Nesterov and Nemirovski [1], [4]. A crucial aspect of this framework is that the function class is defined in a **dimension-free** manner (i.e., the dimension of the function can be arbitrarily large). More specifically, in this framework, all **upper bounds** developed for gradient methods are dimension-free (which is in the nature of gradient methods such as GD, SGD, and AGM), and the **hardness results**, including lower complexity bounds (e.g. Theorem 5.1/5.2) and intractability results (e.g. Theorem 4.1/4.2), should also be dimension-free. Below we list some such hardness results established in previous works [1-3]. Therefore, when proving hardness results, it is reasonable to state that for any fixed $T>0$, there exists a "hard problem", whose dimension can be large and depend on $T$, that is unsolvable in $T$ steps. In other words, we showed that no zero-respecting first-order method can solve absolute optimal solutions to simple bilevel problems dimension-freely. We want to clarify that our hardness result applies to zero-respecting algorithms, which update all $n$ coordinates simultaneously. This is different from coordinate methods that only update one coordinate at each iteration. We also want to highlight that, similar to our proof, most existing hardness results for first-order methods involve constructing "hard problems" whose dimension is larger than $T$.
For example: * The standard lower bound for convex minimization (Theorem 2.1.7 in [1]). * The standard lower bound for nonconvex minimization (Theorem 1 in [2]). * A recent intractability result for nonconvex-convex bilevel optimization (Theorem 3.2 in [3]). They show that finding small hyper-gradients for nonconvex-convex bilevel optimization is hard for a fixed budget of $T$ iterations. [1] Nesterov, Yurii. Lectures on convex optimization. Vol. 137. Berlin: Springer, 2018. [2] Carmon, Y., Duchi, J. C., Hinder, O., & Sidford, A. (2020). Lower bounds for finding stationary points I. Mathematical Programming, 184(1), 71-120. [3] Chen, L., Xu, J., & Zhang, J. On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis. In COLT, 2024. [4] Nemirovskij, A. S., Yudin, D. B. Problem complexity and method efficiency in optimization. 1983. > Technical Weakness 2: As a part of the algorithm, the initialization should be, though briefly, explained in the main body of the paper. Thank you for the suggestion! We will add a concise explanation in the main body of the paper in the revised version. > Technical Weakness 3: Comparison with the Bisec-BiO method in the experiment. The Bisec-BiO method has the same convergence rate as our proposed method FCB-BiO, and thus shows similar behavior in the first experiment. However, we highlight that the advantage of FCB-BiO over Bisec-BiO is its **applicability to a broader range of problems** (instead of a faster convergence on problems that both can solve). This is because Bisec-BiO requires a strong assumption that the projection onto the sublevel set of the upper-level objective $\{x \mid f(x)\leq a\}$ is easy to compute, while FCB-BiO **does not** rely on such an assumption. In the second experiment, the projection onto the sublevel set of the upper-level objective cannot be computed efficiently, rendering Bisec-BiO inapplicable.
In summary, the first experiment was designed to illustrate a scenario where both methods are applicable and show comparable performance as expected. The second experiment, on the other hand, highlights a scenario where Bisec-BiO fails to apply, yet our method is still applicable and shows near-optimal performance. We thank the reviewer once again for the effort and time invested in the review. We would love to provide further clarifications if the reviewer has any additional questions. --- Rebuttal 2: Title: Could you kindly let us know if your concerns have been addressed? Comment: Dear Reviewer iPzZ: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. As the discussion period draws to a close, we kindly ask if you could let us know whether your concerns, particularly regarding Theorem 4.1/4.2, have been satisfactorily addressed. If you have any additional questions, we will be more than willing to provide further explanations. We would highly value any additional feedback you may provide. --- Rebuttal Comment 2.1: Comment: Thanks for your responses. Considering your responses to my comment and to Reviewer RLu3, I think the concerns are addressed. The score was adjusted accordingly.
Summary: This paper provides a theoretical proof that first-order zero-respecting algorithms are incapable of approximating the optimal solution for a simple bilevel optimization problem where both the upper-level and lower-level functions are convex. Then they propose a functional constrained reformulation to solve the simple bilevel problem. The method achieves the near-optimal convergence rate and shows good experimental results. Strengths: - This paper provides the theoretical proof to show that zero-respecting first-order methods cannot find absolute optimal solutions. - They proposed a novel algorithm FCB-BiO to find $(\epsilon_f, \epsilon_g)$-weak optimal solutions for non-smooth and smooth cases, and this method achieves the near-optimal rates. - Compared to [1], this paper gets rid of the weak sharp minima condition. [1] Cao, Jincheng, et al. "An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem." arXiv preprint arXiv:2402.08097 (2024). Weaknesses: - Can other baselines like [1], [2] find $(\epsilon_f, \epsilon_g)$-weak optimal solutions? Actually, they can find absolute solutions with additional conditions, but it is hard to say they cannot work well for weak optimal solutions. - The experiments are a little bit weak, and they simply specify $f(\cdot)$ and $g(\cdot)$ as quadratic functions. Can it be applied to neural networks? For example, $x$ is the model parameters of a neural network. - It would be better if the authors could show results on meta-learning or some other typical bilevel optimization problems. - For Figures 1 and 2, do they take the warm-up time and initialization time into consideration? [1] Cao, Jincheng, et al. "An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem." arXiv preprint arXiv:2402.08097 (2024). [2] Jiang, Ruichen, et al. "A conditional gradient-based method for simple bilevel optimization with convex lower-level problem."
International Conference on Artificial Intelligence and Statistics. PMLR, 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for dedicating their valuable time and effort to evaluate our manuscript. Below are our responses to the reviewer’s concerns. > W1: Can other baselines like [1], [2] find $(\epsilon_f,\epsilon_g)$-weak optimal solutions? Actually, while some previous works have developed algorithms to find absolute solutions under additional assumptions, most previous methods, including [1], [2], aim to find weak optimal solutions. So yes, they can find weak optimal solutions. However, as we discussed in the "Related work" section, previous methods either do not achieve the optimal convergence rate (e.g. [2], which only achieves a suboptimal $O(1/\epsilon)$ rate for smooth problems), or require additional assumptions such as the weak sharp minima condition (e.g. [1]). In contrast, our proposed method achieves near-optimal rates without needing such assumptions. [1] Cao, Jincheng, et al. "An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem." arXiv preprint arXiv:2402.08097 (2024). [2] Jiang, Ruichen, et al. "A conditional gradient-based method for simple bilevel optimization with convex lower-level problem." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. > W2: The experiments are a little bit weak, and they simply specify $f$ and $g$ as quadratic functions. Can it be applied to neural networks? We would like to clarify that in our second experiment, the objective functions $f,g$ are the loss functions of logistic regression, rather than quadratic functions. We acknowledge that our method is designed for convex simple bilevel optimization problems, as in many previous works, and thus is not directly applicable to neural networks, which are nonconvex. However, we would like to highlight that there is **no existing theory** for simple bilevel problems with nonconvex lower-level objectives. We plan to study this in future work.
> W3: It would be better if authors can show the results on meta-learning or some other typical bilevel optimization problems. Meta-learning takes the following form: $$ \begin{align*} \min_{x,y}\ & f(y)\\\\ {\rm s.t.} \quad y\in & \arg\min_z g(z)+\frac \lambda 2 \|x-z\|^2 \end{align*} $$ which is a more general bilevel setting where the lower-level problem is parameterized by the upper-level variable $x$. See Equation 4 in [3]. This is different from our setup of simple bilevel optimization (Equation 1 in the manuscript). [3] Rajeswaran, A., Finn, C., Kakade, S. M., & Levine, S. Meta-learning with implicit gradients. In NeurIPS, 2019. > W4: For Figure 1 and 2, do they take the warm-up time and initialization time into consideration? Yes, as discussed in Appendix E, the initialization time is also taken into account and is plotted in Figure 1 and Figure 2 of the manuscript. In the attached pdf in the general response, the initialization stage is plotted with dashed lines in Figure 1 and Figure 2. We thank the reviewer once again for the effort and time invested in the review. We would love to provide further clarifications if the reviewer has any additional questions. --- Rebuttal 2: Title: Could you kindly let us know if your concerns have been addressed? Comment: Dear Reviewer KpB7: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. As the discussion period draws to a close, we kindly ask if you could let us know whether your concerns have been satisfactorily addressed. If you have any additional questions, we will be more than willing to provide further explanations. We would highly value any additional feedback you may provide. --- Rebuttal Comment 2.1: Comment: Thanks for your response. I would like to keep my rating.
Summary: This work proposes a novel and near-optimal method to solve convex simple bilevel problems by finding weak optimal solutions. The authors also provide theoretical and numerical guarantees of the convergence of this algorithm. Strengths: 1. This paper is easy to follow, with techniques that are rigorously proven and thoroughly explained. 2. This work is proven without the Hölderian error bound condition, and the lower bounds clearly demonstrate near-optimality in both the Lipschitz and smooth settings. Weaknesses: 1. Since this work focuses only on weak optimal solutions, the title *Near-Optimal Methods for Convex Simple Bilevel Problems* seems somewhat boastful. 2. There are some minor typos, like a missing space before "When" in lines 238 and 261. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. What will happen if one of $f$ and $g$ is Lipschitz continuous and the other one is smooth? 2. Why is it "," in eq. (5) but ";" in eq. (6)? 3. Assumption 3.1.2 does not appear in previous works and seems very strong. Could the authors further explain why this assumption is necessary and what the difficulties would be without it? 4. How do the authors determine the initial interval $[l, u]$, and how does this interval affect the convergence? I would like to change my grade based on the response from the authors. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: No limitation Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for dedicating their valuable time and effort to evaluate our manuscript. Below are our responses to the reviewer’s concerns. > W1: Since this work focuses only on weak optimal solutions, the title Near-Optimal Methods for Convex Simple Bilevel Problems seems somewhat boastful. In this paper, we study two solution concepts, absolute optimal solutions and weak optimal solutions. However, when we say "near-optimal", we refer specifically to weak optimal solutions, as we have proven that absolute optimal solutions are intractable. We aim to have a concise title that accurately reflects the focus of our work, and we are open to suggestions from the reviewers to further refine and improve the title. > W2: minor typos. Thanks for pointing out the typos. We will carefully review and correct the typos in the revised version. > Q1: What will happen if one of $f$ and $g$ is Lipschitz continuous and the other one is smooth? This is an interesting and valuable question. A similar intractability result of finding absolute optimal solutions as presented in Section 4 still holds in this case. The proof would be similar to the one of Theorem 4.1/4.2, by letting the lower-level objective g be a Lipschitz/smooth zero-chain and letting the upper-level function f be a quadratic function that only depends on the last half of the components. A similar lower bound for finding weak optimal solutions as in Section 5.1 can also be established, as the argument of decoupling the optimization processes for $f$ and $g$ still applies. More specifically, the lower bound would be $\Omega \left(\max\left(\frac{L_f}{\epsilon_f}D, \frac{C_g^2}{\epsilon_g^2}D^2\right)\right)$ if $f$ is $L_f$-smooth and $g$ is $C_g$-Lipschitz, or $\Omega \left(\max\left(\frac{C_f^2}{\epsilon_f^2}D^2,\frac{L_g}{\epsilon_g}D\right)\right)$ if $f$ is $C_f$-Lipschitz and $g$ is $L_g$-smooth. 
For the upper bound, a smooth function is also Lipschitz-continuous in $\mathcal B (\mathbf x_0,D)$, thus applying $\texttt{FCB-BiO}^{\texttt{Lip}}$ directly gives the complexity of $\tilde O(\max\{\frac{1}{\epsilon_f^2},\frac 1 {\epsilon_g^2}\})$. If $\epsilon_f\asymp \epsilon_g$, then the dependency on $\epsilon$ is near-optimal. However, achieving near-optimal dependency on other parameters remains an open question and would be an interesting direction for future research. > Q2: Why is it "," in eq. (5) but ";" in eq. (6)? This is a typo, and we apologize for any confusion it caused. The notation in (5) and (6) should be consistent, and both signify the maximum of $f(\mathbf x)-t$ and $\tilde g(\mathbf x)$. > Q3: Assumption 3.1.2 does not appear in previous works and seems very strong. Could the authors further explain why this assumption is necessary and what the difficulties would be without it? The assumption that $\|\mathbf x-\mathbf x_0\|\leq D$ is a technical assumption due to the bisection procedure. This procedure involves solving a series of subproblems $\min_{\mathbf x} \max(f(\mathbf x)-t,\tilde g(\mathbf x))$ (Equation 6); the total complexity depends on $\max_t \|\mathbf x^*_{(t)}-\bar {\mathbf x}\|$, where $\bar {\mathbf x}$ is the starting point of the subprocess, and $\mathbf x_{(t)}^*$ is the optimal solution of the subproblem. The term $\|\mathbf x^*_{(t)}-\bar {\mathbf x}\|$ cannot be bounded by the initial distance $\|\mathbf x^*-\mathbf x_0\|$ because $\mathbf x^*_{(t)}$ might be far away from $\mathbf x^*$. To address this, we restrict the domain to $\mathcal B(\mathbf x_0,D)$, ensuring that the distance term $\|\mathbf x^*_{(t)}-\bar {\mathbf x}\|$ is always bounded by $D$ in each bisection step. > Q4: How to determine the initial interval $[\ell, u]$, and how does this interval affect the convergence? As mentioned in Line 201, we discuss how to determine the initial interval $[\ell,u]$ in Appendix B.1.
Specifically, we apply the single-level optimization methods SGM/AGM to $f$ and $g$ respectively and obtain approximate solutions $\hat {\mathbf x}_f,\hat {\mathbf x}_g$ such that $$ p^*\leq f(\hat {\mathbf x} _f)\leq p^*+\frac \epsilon 2,\quad g^*\leq g(\hat {\mathbf x} _g)\leq g^*+\frac \epsilon 2, $$ where $p^*,g^*$ are the global minima of $f$ and $g$ respectively. Then setting $\ell = f(\hat {\mathbf x} _f)-\frac \epsilon 2\leq p^*\leq \hat f^*$ gives a valid lower bound of $\hat f ^*$, and setting $u=f(\hat {\mathbf x}_g)$ gives a valid upper bound. As presented in Theorem 5.3 and Theorem 5.4, the initial interval length $u-\ell$ affects the convergence rate only up to a logarithmic factor. If $f$ is bounded, then $u-\ell$ is also bounded. We thank the reviewer once again for the valuable and helpful suggestions. We would love to provide further clarifications if the reviewer has any additional questions. --- Rebuttal Comment 1.1: Comment: Thank you for the authors' response and efforts. I'd prefer to maintain my current score. Additionally, it would be great to see a relaxed assumption in place of Assumption 3.1.2. --- Rebuttal 2: Title: Could you kindly let us know if your concerns have been addressed? Comment: Dear Reviewer 213i: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. As the discussion period draws to a close, we kindly ask if you could let us know whether your concerns have been satisfactorily addressed. If you have any additional questions, we will be more than willing to provide further explanations. We would highly value any additional feedback you may provide.
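To make the bracketing step concrete, here is a hypothetical toy sketch (not the paper's code) of the initialization described above, with plain gradient descent standing in for the SGM/AGM subroutine; the two quadratic objectives and all names are invented for illustration.

```python
import numpy as np

def grad_descent(grad, x0, lr, steps):
    """Plain gradient descent, standing in for the SGM/AGM subroutine."""
    x = x0.copy()
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Toy smooth objectives (invented for illustration):
# f(x) = ||x - a||^2 (upper level), g(x) = ||x - b||^2 (lower level).
a, b = np.array([1.0, 0.0]), np.array([0.0, 2.0])
f = lambda x: float(np.sum((x - a) ** 2))
grad_f = lambda x: 2.0 * (x - a)
grad_g = lambda x: 2.0 * (x - b)

eps = 1e-6
x_f = grad_descent(grad_f, np.zeros(2), lr=0.25, steps=200)  # ~ argmin f
x_g = grad_descent(grad_g, np.zeros(2), lr=0.25, steps=200)  # ~ argmin g

# Bracket for the bisection: ell <= p* <= f-hat* <= u, where f-hat* is the
# bilevel optimal value (here, f evaluated at the unique minimizer of g).
ell = f(x_f) - eps / 2.0  # valid lower bound, since f(x_f) <= p* + eps/2
u = f(x_g)                # valid upper bound
assert ell <= u
```

In this toy instance the unique lower-level minimizer is $b$, so the bilevel value is $f(b)=5$, which indeed lies inside the computed bracket $[\ell, u]$.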
Rebuttal 1: Rebuttal: We are deeply grateful for the efforts and valuable feedback of the reviewers and area chairs in reviewing our manuscript. Combining the suggestions of the reviewers, we conduct an additional numerical experiment to better compare the performance of $\texttt{FCB-BiO}$ with other methods. Following previous works [1], [2], we consider the over-parameterized linear regression problem. We use the Wikipedia Math Essential dataset [24], which contains $1068$ instances with $730$ attributes. We use $50\%$ of the dataset as the training set $(A^{tr},\mathbf b^{tr})$ and the remaining $50\%$ as the validation set $(A^{val},\mathbf b^{val})$. The problem can be formulated as: $$ \begin{aligned} \min f(\mathbf x)&:=\frac 1{2m^{val}} \|A^{val}\mathbf x-\mathbf b^{val}\|^2,\\\\ \text{s.t.}\quad \mathbf x&\in \arg\min_{\mathbf z}g(\mathbf z):=\frac 1 {2m^{tr}}\|A^{tr}\mathbf z -\mathbf b^{tr}\|^2. \end{aligned} $$ The result is shown in Figure 3 in the uploaded pdf. The initialization time is plotted with dashed lines. In this experiment, we set $\epsilon_g=10^{-9}$ and $\epsilon_f=10^{-6}$. The optimal solution of this problem can be computed via the method of Lagrange multipliers. The result again shows the superior performance of our method compared to existing methods in both the upper- and lower-level objectives. We remark that in this experiment, an $(\epsilon_f,\epsilon_g)$-**absolute optimal solution** is found by $\texttt{FCB-BiO}$. This does not contradict our hardness result in Section 4: for this problem, the lower-level objective satisfies the second-order Hölderian error bound condition. As we discussed in Appendix D, $\texttt{FCB-BiO}$ can find the absolute optimal solution at the best known rate when the lower-level objective satisfies this additional Hölderian error bound condition. In this experiment, we set the initial point $\mathbf x_0$ to be a random vector of unit length.
For $\texttt{AGM-BiO}$, we set $\gamma = 1 /(\frac {2L_g}{L_f}T^{2/3}+2)$ as in Theorem 4.4 of [1]. For $\texttt{PB-APG}$, we set $\gamma = 10^5$. For $\texttt{a-IRG}$, we set $\eta_0=1$ and $\gamma_0=10^{-3}$. For $\texttt{CG-BiO}$, we set $\gamma_k = \frac 2 {k+1}$. [1] Cao, Jincheng, et al. "An Accelerated Gradient Method for Simple Bilevel Optimization with Convex Lower-level Problem." arXiv preprint arXiv:2402.08097 (2024). [2] Jiang, Ruichen, et al. "A conditional gradient-based method for simple bilevel optimization with convex lower-level problem." International Conference on Artificial Intelligence and Statistics. PMLR, 2023. Pdf: /pdf/41952e033c1283ee7a6005ecc37ee7511bfac127.pdf
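For intuition about this experiment's structure, the following is a hypothetical sketch using small synthetic matrices in place of the Wikipedia Math Essential data; it only checks the over-parameterized geometry (the lower level is interpolated, so the bilevel problem selects among training-loss minimizers by validation loss), not the authors' solver or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 40, 100  # over-parameterized: more attributes than instances per split
A_tr, b_tr = rng.standard_normal((n, d)), rng.standard_normal(n)
A_val, b_val = rng.standard_normal((n, d)), rng.standard_normal(n)

def f(x):  # upper level: validation loss
    return 0.5 / n * float(np.sum((A_val @ x - b_val) ** 2))

def g(x):  # lower level: training loss
    return 0.5 / n * float(np.sum((A_tr @ x - b_tr) ** 2))

# With d > n (and A_tr of full row rank), the training loss is interpolated:
# the minimum-norm least-squares solution attains g* = 0.
x_mn = np.linalg.pinv(A_tr) @ b_tr
assert g(x_mn) < 1e-12

# The lower-level solution set is x_mn + null(A_tr); moving along the null
# space keeps g at g* while changing the upper-level objective f, which is
# exactly the degree of freedom the bilevel formulation exploits.
v = rng.standard_normal(d)
v_null = v - np.linalg.pinv(A_tr) @ (A_tr @ v)  # project onto null(A_tr)
assert g(x_mn + v_null) < 1e-12
assert abs(f(x_mn + v_null) - f(x_mn)) > 1e-6  # f varies over the solution set
```

Because the lower-level solution set is an affine subspace here, the bilevel optimum can be written down in closed form with Lagrange multipliers, which is how the reference value in the experiment above can be obtained.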
NeurIPS 2024
Summary: The paper proposes a method to solve bilevel problems where both upper- and lower-level objective functions are convex. The new algorithm, $\texttt{FCB-BiO}^{\texttt{sm}}$, combines bisection with sub-gradient or gradient methods to solve a reformulated problem and find a weak optimal solution. Convergence analyses show that the proposed method achieves the optimal rate up to a log factor. Numerical experiments illustrate the performance of $\texttt{FCB-BiO}^{\texttt{sm}}$ compared to existing works. Strengths: - Different from other related work, the paper focuses on the reformulated minimax problem of the original bilevel optimization. - To solve the reformulated minimax problem, the paper uses bisection followed by gradient or sub-gradient methods. - The assumption in the paper is more relaxed than in existing works that use the Hölderian error bound condition. Weaknesses: - Theorem 4.1 and Theorem 4.2 are not well-motivated and are unclear to me. The theorems state that for any algorithm that runs for a fixed budget of $T$ iterations, there exists a problem instance such that we cannot achieve the absolute optimal solution. By showing this, the authors claim that finding absolute optimal solutions for the class of bilevel problems (1) is hard, with which I disagree. The key point is that, since we only run for $T$ iterations, what if we did not run enough iterations for the absolute optimal solution to be achieved? As shown in the proofs of Theorem 4.1 and Theorem 4.2, the authors come up with a problem of dimension $2T$ where the algorithms only run up to $T$ iterations and in each iteration, only 1 component becomes non-zero. What if for the same problem, we allocate a larger budget of $\hat{T}$ iterations, where $\hat{T} \gg T$? - Also, showing that there exists one instance such that running an algorithm for $T$ iterations cannot lead to an absolute optimal solution within those $T$ iterations does not mean that the problem cannot be solved as $T\to\infty$.
- Due to the above concerns, the paper does not contain much novelty, as the bisection technique has already been proposed in Wang et al. (2024) [30], and the convergence results of the gradient/sub-gradient methods are known. Minor comments: - Typo at line 279: "minimmum" $\to$ "minimum"? Technical Quality: 2 Clarity: 2 Questions for Authors: 1. To come up with an initial interval $[\ell ,u]$, we need to run SGM or AGM applied to $g(x)$. However, to do so, we require knowledge of the Lipschitz constant, which might not always be available. How can we come up with the interval in such cases? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper does discuss its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for dedicating their valuable time and effort to evaluate our manuscript. However, there are some misunderstandings in the review that we wish to clarify. > W1 & W2: In the proof of Theorem 4.1/4.2, what if for the same problem, we allocate a larger budget of iterations, so that the absolute optimal solution is achieved? We respectfully disagree with your opinion. Throughout the paper, we adopt the standard oracle complexity framework for first-order methods established by Nesterov and Nemirovski [1], [4]. A crucial aspect of this framework is that the function class is defined in a **dimension-free** manner (i.e., the dimension of the function can be arbitrarily large). More specifically, in this framework, all **upper bounds** developed for gradient methods are dimension-free (which is in the nature of gradient methods such as GD, SGD, and AGM), and the **hardness results**, including lower complexity bounds (e.g. Theorem 5.1/5.2) and intractability results (e.g. Theorem 4.1/4.2), should also be dimension-free. Below we list some such hardness results established in previous works [1-3]. Therefore, when proving hardness results, it is reasonable to state that for any fixed $T>0$, there exists a "hard problem", whose dimension can be large and depend on $T$, that is unsolvable in $T$ steps. In other words, we showed that no zero-respecting first-order method can solve absolute optimal solutions to simple bilevel problems dimension-freely. We also want to highlight that, similar to our proof, most existing hardness results for first-order methods involve constructing "hard problems" whose dimension is larger than $T$. For example: * The standard lower bound for convex minimization (Theorem 2.1.7 in [1]). * The standard lower bound for nonconvex minimization (Theorem 1 in [2]). * A recent intractability result for nonconvex-convex bilevel optimization (Theorem 3.2 in [3]).
They show that finding small hyper-gradients for nonconvex-convex bilevel optimization is hard for a fixed budget of $T$ iterations. [1] Nesterov, Yurii. Lectures on convex optimization. Vol. 137. Berlin: Springer, 2018. [2] Carmon, Y., Duchi, J. C., Hinder, O., & Sidford, A. (2020). Lower bounds for finding stationary points I. Mathematical Programming, 184(1), 71-120. [3] Chen, L., Xu, J., & Zhang, J. On Finding Small Hyper-Gradients in Bilevel Optimization: Hardness Results and Improved Analysis. In COLT, 2024. [4] Nemirovskij A. S., Yudin D. B. Problem complexity and method efficiency in optimization. 1983. > W3: Novelty; Similarity with Wang et al. (2024) [1] We appreciate the reviewer's feedback. We have addressed the rationality of Theorem 4.1/4.2 in our previous discussion. Below we would like to clarify the distinctions and contributions of our work in comparison to previous works. Firstly, while the bisection process in our proposed algorithm $\texttt{FCB-BiO}$ might appear similar to the approach in Wang et al. (2024) [1], the underlying motivations and objectives are fundamentally different. In Wang et al. (2024) [1], the bisection is employed to transform the original problem $\min f(x) \ \text{s.t.}\ g(x)\leq g^*$ into a series of problems $\min \ g(x) \ \text{s.t.}\ f(x)\leq c$ for different $c$. In contrast, in $\texttt{FCB-BiO}$, the original problem is reformulated as finding the smallest root of a univariate monotone auxiliary function $\psi^*(t)$. For such one-dimensional root-finding problems, the bisection method is the most standard and straightforward approach. Additionally, our reformulation improves the results in Wang et al. (2024) [1]. Specifically, [1] requires an oracle that efficiently projects $x$ onto the sublevel set of $f$, i.e., $\{x \mid f(x)\leq t\}$. This assumption only holds for a limited class of functions and would otherwise incur additional gradient evaluations. 
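To make the one-dimensional root-finding step concrete, here is a generic bisection sketch (illustrative only, not the authors' FCB-BiO implementation; `psi` stands in for the auxiliary function $\psi^*(t)$, which the actual algorithm could only evaluate approximately via an inner first-order method):

```python
def bisect_smallest_root(psi, lo, hi, tol=1e-8):
    """Approximate the smallest root of a nondecreasing univariate
    function psi on [lo, hi], assuming psi(lo) <= 0 <= psi(hi).
    Each iteration halves the interval, so the cost is
    O(log((hi - lo) / tol)) evaluations of psi."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if psi(mid) < 0:
            lo = mid  # the smallest root lies strictly to the right
        else:
            hi = mid  # shrink toward the smallest root
    return 0.5 * (lo + hi)
```

For instance, `bisect_smallest_root(lambda t: t - 2.0, 0.0, 10.0)` converges to roughly `2.0`; the logarithmic iteration count is why bisection adds only a mild overhead on top of the inner solves.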
On the other hand, $\texttt{FCB-BiO}$ does not need this assumption, making it applicable to a broader range of problems. In summary, we view the main contribution of our work as **introducing the functionally constrained reformulation** to simple bilevel problems and proposing a near-optimal method that is simple and effective. We also prove the intractability of finding absolute optimal solutions for zero-respecting first-order algorithms, and establish lower bounds for finding weak optimal solutions, ensuring the comprehensiveness of our study. The bisection search is only a subprocedure in our algorithm. [1] Jiulin Wang, Xu Shi, and Rujun Jiang. Near-optimal convex simple bilevel optimization with a bisection method. International Conference on Artificial Intelligence and Statistics, 2024. > Minor Comments Thanks for pointing out the typo. We will fix it in the revised version. > Q1: To come up with an initial interval $[\ell, u]$, we need to run SGM or AGM applied to $g(x)$. This requires knowledge of the Lipschitz constant, which might not always be available. There are two approaches we can take to tackle this problem: First, tune the Lipschitz constant as a hyperparameter, which is also equivalent to tuning the learning rates. Second, apply a parameter-free method to $g(x)$, such as [1,2]. [1] Guanghui Lan, Yuyuan Ouyang, and Zhe Zhang. "Optimal and parameter-free gradient minimization methods for smooth optimization." arXiv preprint arXiv:2310.12139 (2023). [2] Itai Kreisler, Maor Ivgi, Oliver Hinder, and Yair Carmon. "Accelerated Parameter-Free Stochastic Optimization." In COLT, 2024. We thank the reviewer once again for the effort and time invested in the review. We would love to provide further clarifications if the reviewer has any additional questions. --- Rebuttal 2: Title: Could you kindly let us know if your concerns have been addressed? 
Comment: Dear Reviewer RLu3: We sincerely appreciate the time and effort you have dedicated to reviewing our manuscript. As the discussion period draws to a close, we kindly ask if you could let us know whether your concerns, particularly regarding Theorem 4.1/4.2, have been satisfactorily addressed. If you have any additional questions, we will be more than willing to provide further explanations. We would highly value any additional feedback you may provide. --- Rebuttal Comment 2.1: Title: Response to authors Comment: Dear authors, Thank you for providing responses to my comments. I have additional comments below: - I believe that the hardness results in [3] are the closest setting to this paper, where they show that under the zero-chain assumption, the solution stays at 0 at any iteration. - Lines 149-154 in the paper confused me at the beginning, mainly because of the statement "Due to the zero-respecting property, only one component of $x_k$ is activated (i.e. becomes nonzero) per iteration." If this happens, then at a large enough iteration, we would be able to arrive at the optimal solution. However, in the proof the authors actually prove that the solution stays at 0 if the initial point is 0. I suggest clarifying this in the paper. As my concerns have been mostly addressed, I have adjusted the score accordingly. --- Reply to Comment 2.1.1: Comment: Dear Reviewer RLu3, Thank you for your thoughtful feedback and for adjusting the score. We appreciate your additional comments and will clarify the points you mentioned in the revised version.
Membership Inference Attacks against Fine-tuned Large Language Models via Self-prompt Calibration
Accept (poster)
Summary: In this work, the authors propose an MIA for LLMs that utilizes prompt calibration to measure variation in model behavior for neighboring inputs. The authors also make a connection with the neighborhood attack from Mattern et al. and show how their framework encapsulates such neighborhood-based attacks. Performance on multiple domains demonstrates a clear bump in performance using this attack. Strengths: - Attempts to improve black-box privacy auditing are useful and needed, especially with more and more model trainers where access is available only via wrappers or APIs. - Figures and diagrams in the paper are very well made and helpful, and complement the writing well. - Assessing the method's robustness for different sources of prompts is useful and helps understand worst-case performance when prompt data is completely unrelated to the member distribution. Weaknesses: - The core idea here is not very different from distillation-based stealing [1], followed by a slightly modified neighborhood attack. - L240: There is no clear relationship between "modest" paraphrasing in token space, and a corresponding $h$ in Equation 10, let alone plus/minus. For a language model, $x$ is in the embedding space, so small perturbations to the embeddings will probably not correspond to any actual tokens. Moreover, inputs are sequences, and thus perturbations to multiple tokens make the assumption around a small $h$ even less plausible. Replacing 20% of all tokens is in no way a "modest" replacement. - In Table 1: why not include a version in the baseline that uses the reference models that you train but does not use the neighborhood logic. This would help better understand the contribution of both these steps, serving as an ablation study. - L631-633: This is not a grounded statement. Making the connection not only seems unnecessary, but incorrect. While it is nice to have theoretical connections, it is by no means necessary for a good paper. 
I would urge the authors to rethink the use of theory to make connections in places where they do not seem valid without handwavy justifications. ## Minor comments - L32: '[44]' is merely a presentation abstract. [2], for instance, provides an actual systematization of memorization in LLMs - Figure 1b - what is "memorization"? Figures should be labeled clearly. - The authors seem to be confused about which works introduced LiRA. The authors seem to suggest that Mireshghallah et al. introduced LiRA (L45, L102), whereas it was proposed by Carlini et al. [3]. - L49-50: While overfitting means higher leakage, it is by no means an "assumption" for MIAs. - L59: "Exponential" is a bold claim here; there are only 3 datapoints. It is not surprising that it declines, as it is explained by theory [4] - L81: Computing a second-order derivative in an LLM is just not feasible practically. - L84: The neighborhood-based MIA requires another masking model, which need not be an LLM. - L163: If it is over-represented in the dataset, it is needless to say a member. The statement here feels vacuous. - L164: Please also cite appropriate work that introduces this formal dependency on reference models for difficulty calibration [4]. - L168: Missing citation - The dot on top of $\theta$ in Equation 5 (and other related equations) is barely visible; please use better notation. - L182: Missing citation - L229: Why is a negative-definite Hessian problematic? In practice, the (empirical) Hessian is very low-rank and most of its entries are close to zero. - L235: "..can be interpreted as kind of" - this is very handwavy - Table 1: Please also include TPR@FPRs (like 1% and 0.1% FPR). The table is also missing a lot of attacks like Min-K%++ [5] and MoPE [6] - Figure 3: Why is the Y axis going above AUC = 1.0? ### References - [1] Galichin, Andrey V., et al. "GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation." arXiv preprint arXiv:2405.07562 (2024). - [2] Hartmann, Valentin, et al. 
"Sok: Memorization in general-purpose large language models." arXiv preprint arXiv:2310.18362 (2023). - [3] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (SP). IEEE, 2022. - [4] Sablayrolles, Alexandre, et al. "White-box vs black-box: Bayes optimal strategies for membership inference." International Conference on Machine Learning. PMLR, 2019. - [5] Zhang, Jingyang, et al. "Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models." arXiv preprint arXiv:2404.02936 (2024). - [6] Li, Marvin, et al. "Mope: Model perturbation-based privacy attacks on language models." arXiv preprint arXiv:2310.14369 (2023). Technical Quality: 3 Clarity: 1 Questions for Authors: - In the codebase, at line 65 of `ft_llms/refer_data_generate.py`, the prompt data is sampled directly from the 10K-20K range of samples from the train data, so there is a very clear bias here (is it supposed to be from the same "distribution"?) If prompt data is indeed sampled from train data, completions would likely be the actual data itself, so the reference model would behave similarly to the target model. Scores would thus be $m(x) - m(\hat{x}) \approx 0$ for train data, and non-zero for validation data. If this is indeed true, this is a **big empirical flaw**. I would urge the authors to clarify, and also plot the distributions of actual scores corresponding to $m(x)$. I may consider increasing my score if the authors can look into this. - Why is there no mention of any studies related to membership inference for LLMs in Section 2 (Large Language Models)? There have been plenty specifically looking at LLMs [1,2,3,4]. - L147: "adversary has no prior information" - Does it have access to the model's initial weights? - Figure 5b: Why is there a drop in performance for the domain-specific case? ### References - [1] Maini, Pratyush, et al. "LLM Dataset Inference: Did you train on my dataset?" arXiv preprint arXiv:2406.06443 (2024). 
- [2] Mozaffari, Hamid, and Virendra J. Marathe. "Semantic Membership Inference Attack against Large Language Models." arXiv preprint arXiv:2406.10218 (2024). - [3] Meeus, Matthieu, et al. "Inherent Challenges of Post-Hoc Membership Inference for Large Language Models." arXiv preprint arXiv:2406.17975 (2024). - [4] Duan, Michael, et al. "Do membership inference attacks work on large language models?" arXiv preprint arXiv:2402.07841 (2024). - [5] Das, Debeshee, Jie Zhang, and Florian Tramèr. "Blind Baselines Beat Membership Inference Attacks for Foundation Models." arXiv preprint arXiv:2406.16201 (2024). Confidence: 5 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: There is no proper discussion about limitations, apart from 1-2 sentences in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
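The TPR@FPR metric requested in the minor comments above is easy to compute generically; a minimal sketch (function and variable names are my own, not from the paper or its codebase):

```python
import numpy as np

def tpr_at_fpr(member_scores, nonmember_scores, fpr_target):
    """True-positive rate at a fixed false-positive rate: pick the score
    threshold whose FPR on non-members is at most fpr_target, then report
    the fraction of members scoring above that threshold."""
    nonmembers = np.asarray(nonmember_scores, dtype=float)
    members = np.asarray(member_scores, dtype=float)
    # Threshold at the (1 - fpr_target) quantile of non-member scores.
    thr = np.quantile(nonmembers, 1.0 - fpr_target)
    return float(np.mean(members > thr))
```

Evaluating at very low FPRs (0.1%, 0.01%) requires enough non-member samples for the quantile to be meaningful, which is part of why these metrics are more demanding than AUC.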
Rebuttal 1: Rebuttal: Dear reviewer i2LA, We deeply appreciate the time you took to review our work and your meticulous comments for improvement. Below we address the questions raised in your review; the responses to the weaknesses and minor comments, as well as the references, can be found in subsequent comments: **Q1: In the codebase, is the prompt data sampled directly from the fine-tuning data? If this is indeed true, this is a big empirical flaw.** We want to clarify that we did not directly extract prompt data from the fine-tuning data; this is simply a part of the code that is easily misunderstood. Specifically, the `train_dataset` variable in line 65 of the `ft_llms/refer_data_generate.py` file is not the actual training set used to fine-tune the target model. We chose to extract prompt data from the `train_dataset` variable because, in some datasets, the number of samples in the `valid_dataset` is small and may lead to out-of-range errors. In fact, when preparing the fine-tuning dataset for the target model, we manually skip the 10k-20k range: see line 226 in the `ft_llms/llms_finetune.py` file: `226: train_dataset = Dataset.from_dict(train_dataset[args.train_sta_idx:args.train_end_idx])` Moreover, we have experimentally demonstrated that the source of the prompt data has almost no impact on the performance of our proposed method (Section 5.4, Fig. 4). Even prompt data from arbitrary unrelated sources can achieve excellent performance. Furthermore, to fully address your concern, we have followed your guidance and plotted the distributions of the metric score $\Delta m(x)$ (Figure 2 in the PDF file of the global comment) for the training data (members) and validation data (non-members). The results clearly demonstrate that $\Delta m(x) = m(x) - \widetilde{m}(x)$ is not approximately zero for the training data. 
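The calibrated metric $\Delta m(x) = m(x) - \widetilde{m}(x)$ being discussed reduces to a per-record subtraction; a toy sketch, under the assumption that per-record scores from the target and the self-prompted reference model are already computed (names are illustrative, not from the codebase):

```python
import numpy as np

def calibrated_scores(target_scores, reference_scores):
    """Difficulty calibration: subtract the reference model's score from
    the target model's score on the same records, so that records that
    are merely 'easy' for any model do not dominate the membership
    signal."""
    return (np.asarray(target_scores, dtype=float)
            - np.asarray(reference_scores, dtype=float))
```

Under the rebuttal's claim, members would then show clearly positive $\Delta m(x)$ while non-members cluster near zero, which is exactly what a histogram of these values can verify.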
Additionally, the training data metric score is generally higher than that of the validation data, indicating that the reference model behaves very differently from the target model on the target model's training data. All these results demonstrate that we did not directly extract prompt data from the fine-tuning data. **Q2: Why is there no mention of any studies related to membership inference for LLMs in Section 2 (Large Language Models)? There have been plenty specifically looking at LLMs [1,2,3,4,5]** We have reviewed the current membership inference attacks on LLMs in Section 2 (Membership Inference Attack) and therefore did not replicate this content in Section 2 (Large Language Models). Moreover, since most of the literature [1,2,3,5] you recommended was submitted to arXiv in June this year (after the NeurIPS deadline), we were not able to discuss these papers in our submitted version. Thank you for the reminder; we will add a comprehensive discussion of the literature you recommended to Section 2 (Membership Inference Attack). **Q3: L147: "adversary has no prior information" - Does it have access to the model's initial weights?** > Explanation for L147: "adversary has no prior information about which data records are utilized for fine-tuning." On line 147, we emphasize that our method has no prior information about the fine-tuning dataset of the target model. Existing reference-based methods heavily rely on the strong assumption that the adversary can obtain a dataset that is from the same distribution as, but does not intersect with, the fine-tuning dataset. Our proposed self-prompt method avoids this assumption. > Answer to the question: Does it have access to the model's initial weights? The proposed method does not necessarily require the initial parameters of the model. An API that allows the adversary to upload a customized dataset for fine-tuning would also meet the conditions. 
Therefore, our proposed method is applicable not only to open-source models with publicly available parameters but also to closed-source models that provide fine-tuning APIs (such as ChatGPT [6]). Thank you for this question; we will include this discussion in the camera-ready version. Furthermore, to further allay your concerns, we consider a stricter scenario where there is no access (not even the fine-tuning API) to the pre-trained model corresponding to the target model. We employ LLaMA-7B fine-tuned on the AG News dataset as the target model, and let the adversary fine-tune different pre-trained models (GPT-J, Falcon, and LLaMA) to prepare the reference model. The results are shown in the table below. From the experimental results in the table, we find that the proposed method still maintains a large performance margin over the baselines that can fine-tune the pre-trained model of the target model. We will include these results in the camera-ready version. |Metric|AUC|TPR@1\%FPR|TPR@0.1\%FPR| |----------------------|-----|---------|-----------| |SPV-MIA (GPT-J)|0.832|19.3\%|7.5\%| |SPV-MIA (Falcon)|0.812|21.7\%|8.7\%| |SPV-MIA |0.903|39.5\%|23.5\%| |LiRA-Base |0.657|8.7\%|1.4\%| |LiRA-Candidate |0.714|11.5\%|1.9\%| **Q4: Figure 5b: Why is there a drop in performance for the domain-specific case?** This phenomenon occurs because the performance of reference-based MIA is highly dependent on the quality (source) of the reference dataset (Section 5.3, Figure 3). Although a domain-specific dataset pertains to the same domain as the target model's training set, there can still be significant distributional differences, even if these differences are smaller compared to an irrelevant dataset. Moreover, we have conducted experiments demonstrating that the quality of the domain-specific dataset is relatively low (Section 5.3, Figure 3). 
Therefore, using excessively long prompt text lengths will cause the reference dataset extracted from the target model via the self-prompt method to be more similar in data distribution to the domain-specific dataset (low quality) than to the target model's training dataset (high quality). --- Rebuttal 2: Title: (Optional for reviewer to read) The responses to the weaknesses -- Part 1 Comment: To answer the detailed questions of i2LA, we provide more information in the form of comments. We understand reviewers are not obligated to read comments; we keep them just in case i2LA is interested. **W1: The core idea here is not very different from distillation-based stealing [7], followed by a slightly modified neighborhood attack.** > **W1.1: The core idea here is not very different from distillation-based stealing (GLiRA) [7]** GLiRA [7] is a related work that also focuses on MIA. Although the ideas are slightly similar, the idea of SPV-MIA originated independently and was proposed earlier. A direct piece of evidence is that GLiRA is a recent publication, submitted to arXiv shortly before the NeurIPS deadline (May 13). Moreover, our proposed method significantly diverges from it in multiple aspects: 1. **The Target Model**: GLiRA is an MIA method specifically tailored for classification models. Thus, certain specific designs are only applicable to classification models and cannot be applied to the LLMs we focus on. For example, GLiRA requires training a large number of shadow models (128 in their experiments), a level of overhead that is not negligible when the target model is an LLM. 2. **The Adversary Knowledge**: GLiRA assumes that the adversary can directly sample data points from the same data distribution as the training dataset of the target model. This is a widely acknowledged unrealistic assumption, especially in the context of LLMs [8,9,10], as their training datasets are typically not publicly available. 
In contrast, our proposed method has no prior knowledge about the data distribution utilized for fine-tuning. We have avoided this assumption with the proposed self-prompt method. 3. **The Essential Motivation**: Although GLiRA and the self-prompt method we proposed share the commonality of mimicking the target model, the motivation driving each approach is different. Specifically, GLiRA essentially relies on hypothesis testing with a large number of shadow models, and they hypothesize that using knowledge distillation to mimic the target model can significantly improve the accuracy of MIA. In contrast, our self-prompt method fine-tunes a high-quality reference model without sampling data from the same training data distribution as the target model. Therefore, it is difficult to say that our proposed method is not very different from GLiRA. Thanks for the reminder; we will discuss your suggested paper in the camera-ready version. > **W1.2: Followed by a slightly modified neighborhood attack** Our method and the neighbour attack differ significantly in concept, although both employ text paraphrasing for attacking. The neighbour attack is motivated by exploring a substitution for reference models. In this work, we provide another motivation based on model memorization: member text memorized by the model tends to be located at a local maximum of the generative distribution [11]. Therefore, this study may inspire subsequent work to design more sophisticated methods for inferring membership by characterizing model memorization. For example, a recently published work (released after the NeurIPS deadline) that you also recommended [9] proposed an MIA against pre-trained LLMs based on the concept of local maximum detection. **W2: L240: There is no clear relationship between "modest" paraphrasing in token space, and a corresponding $h$ in Equation 10, let alone plus/minus. 
For a language model, $x$ is in the embedding space, so small perturbations to the embeddings will probably not correspond to any actual tokens. Moreover, inputs are sequences, and thus perturbations to multiple tokens make the assumption around a small $h$ even less plausible. Replacing 20% of all tokens is in no way a "modest" replacement.** Equation 10 is approximately derived from Equation 9, requiring $h \to 0$, which corresponds to conducting modest perturbations in the high-dimensional semantic space of sentence $\boldsymbol{x}$. Thus, the perturbation is conducted at the sentence level rather than the token level. Since the mask-filling model samples sentences similar to $\boldsymbol{x}$ with minimal changes to semantic meaning, we can think of the mask-filling model as first sampling a similar semantic embedding $\boldsymbol{\widetilde{e}}$ and then mapping this to a partially paraphrased token sequence ($\boldsymbol{\widetilde{e}} \to \boldsymbol{\widetilde{x}}$). Thus, the modest perturbation in the sentence space can lead to partial tokens in this sentence being paraphrased. We chose a 20% perturbation rate over a smaller one because a reduced perturbation rate would render the numerical variances between the original and paraphrased sentences imperceptible in the score metrics. This scenario could lead to the metrics we intend to assess having numerically insignificant values, potentially overshadowed by the noise introduced through numerical computations. --- Rebuttal 3: Title: (Optional for reviewer to read) The responses to the weaknesses -- Part 2 Comment: To answer the detailed questions of i2LA, we provide more information in the form of comments. We understand reviewers are not obligated to read comments; we keep them just in case i2LA is interested. **W3: In Table 1: why not include a version in the baseline that uses the reference models that you train but does not use the neighborhood logic. 
This would help better understand the contribution of both these steps, serving as an ablation study.** We have presented the ablation study in Appendix A.5.1, auditing the contributions of the proposed Practical Difficulty Calibration (PDC) and probabilistic variation assessment (PVA, i.e., the neighborhood logic) separately. Furthermore, we have also conducted an extensive ablation study in Appendix A.4 to investigate the contributions of our proposed two symmetrical paraphrasing methods compared with the neighbour attack, in which we adopt the two symmetrical paraphrasing methods and the neighbour attack as the PVA module, respectively. Thank you for the reminder; we will add SPV-MIA w/o PVA to Table 1 in the camera-ready version. In terms of these ablation studies shown in the Appendix, we quote the results and merge them into one table as follows: Table: Results of Ablation Study on LLaMA across three datasets. |Dataset|Wiki|AG News|XSum|Avg.| |--------------------------------------|---------|---------|---------|---------| |SPV-MIA (Embedding-based Paraphrasing)|0.956|0.926|0.949|0.944| |SPV-MIA (Semantic-based Paraphrasing)|0.951|0.903|0.937|0.930| |SPV-MIA (Neighbour Attack)|0.934|0.893|0.928|0.918| |**w/o PVA**|**0.913**|**0.885**|**0.919**|**0.906**| |w/o PDC|0.653|0.641|0.661|0.652| The results demonstrate that the PDC approach seems to play a more critical role, and it can still serve as a valid adversary without the PVA. Thus, in practical scenarios, we can consider removing the PVA to reduce the frequency of accessing public APIs. **W4: L631-633: This is not a grounded statement. Making the connection not only seems unnecessary, but incorrect. While it is nice to have theoretical connections, it is by no means necessary for a good paper. 
I would urge the authors to rethink the use of theory to make connections in places where they do not seem valid without handwavy justifications.** Lines 631-633 in Appendix "A.4 Reformulate the Neighbour Attack" only provide a new perspective for understanding paraphrasing-based methods. Specifically, we want to convey that the proposed PVA and the neighbour attack may share the same underlying motivation, which could inspire subsequent work to identify member samples by detecting local maxima. For example, a recent paper you recommended also employs this idea [9]. Considering that the content here is not sufficient to support a compact theoretical framework, we will change the title to "A.4 Rethinking of the Neighbour Attack" and replace the declarative tone with a conjectural tone. --- Rebuttal 4: Title: (Optional for reviewer to read) The responses to the minor comments Comment: To answer the detailed questions of i2LA, we provide more information in the form of comments. We understand reviewers are not obligated to read comments; we keep them just in case i2LA is interested. **Minor Comments: For M1, M2, M3, M4, M5, M7, M9, M10, M11, M12, M14** We are very grateful for your careful review and detailed comments on our paper. We will follow your instructions to revise these minor issues in the camera-ready version. **M6: L81: Computing a second-order derivative in an LLM is just not feasible practically.** We did not directly calculate the second-order derivatives in LLMs; instead, we estimate features of partial second-order derivatives by calculating the score differences between the target text and its paraphrased versions. Similarly, the paper you recommended [9] also used the concept of estimating second-order derivatives on LLMs for membership inference and utilized statistical methods to characterize the features of the second-order derivatives. We will revise the corresponding wording in the camera-ready version. 
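The estimation just described can be mimicked with a finite-difference analogue; the following is my own simplified sketch (hypothetical names, not the paper's exact probabilistic variation metric), where scores on symmetric paraphrase pairs stand in for evaluations at $x \pm h$:

```python
import numpy as np

def probabilistic_variation(score_x, scores_plus, scores_minus):
    """Second-order 'variation' of a model score around a record x,
    estimated from symmetric paraphrase pairs, in analogy with the
    central difference f(x+h) + f(x-h) - 2 f(x) ~= h^2 f''(x).
    A strongly negative value suggests x sits near a local maximum of
    the model's distribution, the claimed signature of memorization."""
    plus = np.asarray(scores_plus, dtype=float)
    minus = np.asarray(scores_minus, dtype=float)
    return float(np.mean(plus + minus - 2.0 * score_x))
```

Averaging over several paraphrase pairs (rather than a single pair) is what keeps the estimate above the numerical-noise floor mentioned in the W2 response.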
**M8: L163: If it is over-represented in the dataset, it is needless to say a member. The statement here feels vacuous.** We want to clarify that an over-represented record in the data distribution is not necessarily a member. This is a widely recognized statement claimed by a highly influential paper [12] in the field of MIA. Specifically, a non-member sample may have a high membership score simply because it is over-represented in the data distribution. Consequently, an attack that determines a sample is likely to be a member due to having a high score will inevitably fail on these over-represented samples. This is the reason why difficulty calibration via the reference model is proposed. **M13: L229: Why is a negative-definite Hessian problematic? In practice, the (empirical) Hessian is very low-rank and most of its entries are close to zero.** This is because existing research has shown that member samples fall at local maxima of the probability distribution, where the Hessian matrix is negative definite [13]. Although we derived our method through the negative definiteness of the Hessian matrix, we did not directly calculate the Hessian matrix, because this is impractical in the LLM scenario. Therefore, we translated it into an estimation of probabilistic variation metrics, which can be calculated via our proposed paraphrasing method. **M15: Table 1: Please also include TPR@FPRs (like 1% and 0.1% FPR). The table is also missing a lot of attacks like Min-K%++ [9] and MoPE [14]** > M15.1: Table 1: Please also include TPR@FPRs (like 1% and 0.1% FPR). We have followed your suggestion and added two new evaluation metrics: TPR@1%FPR and TPR@0.1%FPR. All additional results can be found in the PDF file of the global response. Thank you for your valuable suggestion for improvement. We will update these results in the camera-ready version. We quote partial results in the tables below. 
Experimental results show that, compared to AUC, SPV-MIA achieves a larger performance margin under the new metrics. This indicates that, at lower FPRs, SPV-MIA maintains a higher TPR compared to the baselines. > The table is also missing a lot of attacks like Min-K\%++ [9] and MoPE [14] We have added Min-K\%++ [9] as a baseline based on your suggestion. Since MoPE [14] does not provide open-source code, we have attempted to contact the authors to obtain the code and have committed to supplementing the relevant experiments upon receiving it. As an alternative, we suggest adding Min-K\% [8] as a baseline, as it can achieve better performance in some cases. The supplementary experiments are shown in the PDF file of the global response. In the experimental results across 3 datasets and 4 LLMs, Min-K\% and Min-K\%++ achieve the best or second-best performance among the 5 reference-free baselines. We will update these results in the camera-ready version. We quote the partial results in the table below. Table. Evaluation of all baselines and SPV-MIA on LLaMA@AG News Dataset using TPR@1\%FPR and TPR@0.1\%FPR |Methods|Loss Attack|Neighbour Attack|DetectGPT|Min-K\%|Min-K\%++|LiRA-Base|LiRA-Candidate|SPV-MIA| |------------|-----------|----------------|---------|-------|---------|---------|--------------|-------| |AUC|0.580|0.610|0.603|0.619|0.631|0.657|0.714|0.903| |TPR@1\%FPR|1.2\%|3.1\%|2.7\%|3.6\%|4.1\%|8.7\%|11.5\%|39.5\%| |TPR@0.1\%FPR|0.2\%|0.4\%|0.4\%|0.6\%|1.0\%|1.4\%|1.9\%|23.5\%| **M17: Figure 3: Why is the Y axis going above AUC = 1.0?** The values of the data points within the histogram in Fig. 3 do not exceed AUC = 1. The y-axis range was drawn larger to accommodate the legend and to make the figure clearer. --- Rebuttal 5: Title: (Optional for reviewer to read) The reference of rebuttal Comment: # Reference [1] Maini, et al. "LLM Dataset Inference: Did you train on my dataset?" arXiv 2024. [2] Mozaffari, Marathe. 
"Semantic Membership Inference Attack against Large Language Models." arXiv 2024. [3] Meeus, et al. "Inherent Challenges of Post-Hoc Membership Inference for Large Language Models." arXiv 2024. [4] Duan, et al. "Do membership inference attacks work on large language models?" arXiv 2024. [5] Das, Zhang, Tramèr. "Blind Baselines Beat Membership Inference Attacks for Foundation Models." arXiv 2024. [6] Peng, et al. "Gpt-3.5 turbo finetuning and api updates." 2023. [7] Galichin, et al. "GLiRA: Black-Box Membership Inference Attack via Knowledge Distillation." arXiv 2024. [8] Shi, et al. "Detecting Pretraining Data from Large Language Models." ICLR 2024. [9] Zhang, et al. "Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models." arXiv 2024. [10] Mattern, et al. "Membership Inference Attacks against Language Models via Neighbourhood Comparison." ACL 2023. [11] van den Burg, Williams. "On memorization in probabilistic deep generative models." NeurIPS 2021. [12] Watson, et al. "On the Importance of Difficulty Calibration in Membership Inference Attacks." ICLR 2021. [13] Boyd, Vandenberghe. "Convex optimization." CUP 2004. --- Rebuttal Comment 5.1: Comment: Thank you to the authors for their responses. I appreciate all the effort in responding to my questions and concerns so thoroughly. I do not have any more questions, and am increasing the score to 7. Good luck! --- Reply to Comment 5.1.1: Comment: Dear Reviewer i2LA, Thank you for taking the time to review our responses and for increasing the score. We are glad that our responses addressed all your questions and concerns. Thank you once again for all your efforts. Best regards, The Authors
Summary: This paper presents a membership inference attack against causal language models, addressing the limitations of previous attacks, such as the inaccessibility of appropriate reference datasets and heavy reliance on overfitting. To overcome these limitations, the authors propose a self-prompt approach to extract reference datasets from LLMs and introduce a membership signal based on memorization. They compare their method with other methods using the AUC metric. Strengths: - The paper effectively identifies two key limitations of previous membership inference attacks against LLMs, providing a clear motivation for the proposed targeted solutions. - The proposed probabilistic variation assessment is interesting and has potential applications beyond LLMs. - The paper is well-written with a clear logical structure, making it easy to follow. Weaknesses: - The main evaluation metrics need to be revised. As established by Carlini et al. [R1], membership inference attacks should be evaluated by computing their true-positive rate at low (e.g., ≤ 0.1%) false-positive rates, rather than using average-case metrics like AUC. Such evaluation metrics have become the de facto standard in evaluating membership inference attacks and are used by many existing works [R2] [R3]. I suggest using two metrics: the Full Log-scale Receiver Operating Characteristic (ROC) Curve to highlight low false-positive rates, and the TPR at a low FPR, which measures attack performance at specific FPRs (e.g., 0.1%, 0.01%). - The experimental settings are not clearly described. For example, the paper uses a self-prompt approach to extract reference datasets, but it does not specify how many datasets are extracted for each case, which directly relates to the attack costs. This is especially important for commercial LLMs since attacking commercial LLMs will incur high costs in collecting such datasets. 
Additionally, details on dataset splitting (e.g., how many datasets are used for training the target model) and whether the approach requires training shadow models are missing. If it does not require shadow models, how is the threshold τ in equation (2) determined? - The paper argues that the proposed probabilistic variation assessment is based on memorization rather than overfitting, but it does not clearly explain the key differences between these two concepts or how the approach improves from the perspective of memorization. - The reference-based baselines used in the paper are limited and somewhat outdated. Including more advanced baselines, such as [R4], would strengthen the comparison. - An ablation study to investigate the contributions of the proposed self-prompt approach and probabilistic variation assessment separately would highlight the individual contributions of the paper. [R1] Carlini, Nicholas, et al. "Membership inference attacks from first principles." 2022 IEEE Symposium on Security and Privacy (2022). [R2] Bertran, Martin, et al. "Scalable membership inference attacks via quantile regression." Advances in Neural Information Processing Systems (2023). [R3] Wen, Yuxin, et al. "Canary in a Coalmine: Better Membership Inference with Ensembled Adversarial Queries." International Conference on Learning Representations (2023). [R4] Shi, Haonan, et al. “Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks”. IEEE European Symposium on Security and Privacy (2024). Technical Quality: 2 Clarity: 3 Questions for Authors: - Does the proposed approach require training shadow models? If not, how is the threshold τ in equation (2) determined? - How many datasets are extracted for each case in training the reference models? - What are the key differences between memorization and overfitting, and how does the proposed approach improve from the perspective of memorization? 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Please see my above comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer saYA, Thank you so much for your thoughtful review and your suggestions for improvement. Below we address the questions raised in your review; the responses to the weaknesses and the references can be found in subsequent comments: **Q1,W2: Does the proposed approach require training shadow models? If not, how is the threshold τ in equation (2) determined?** This is an interesting and meaningful topic, since current MIAs designed for LLMs mostly do not discuss how to select an appropriate threshold [1,2,3]. Following a similar idea to [4], the proposed method (SPV-MIA) can determine the threshold without a shadow model by picking a quantile of the distribution of confidence scores induced by SPV-MIA on texts that are not used in fine-tuning. In particular, we collected a dataset $\\{\boldsymbol{x}_i\\}_1^n$ from an unrelated task, known not to have been used in fine-tuning. We evaluate SPV-MIA on each text $\boldsymbol{x}_i$ and record the confidence score $s(\boldsymbol{x}_i)$ that SPV-MIA assigns to it as a member text. We then pick the $1-\alpha$ quantile of the score distribution. Intuitively, a score $s(\boldsymbol{x}_i)$ larger than the $1-\alpha$ quantile indicates that SPV-MIA assigns the text a higher confidence of being a member than a $1-\alpha$ fraction of the texts not used in training. Thus, we can select an expected false positive rate of $\alpha$ and then determine the corresponding threshold $\tau$ by measuring the $1-\alpha$ quantile of the score distribution. To further support our statement, we visualize the score distributions of member and non-member texts evaluated by SPV-MIA (see Figure 2 in the PDF file of the global rebuttal). **Q2,W2: How many datasets are extracted for each case in training the reference models? 
This is especially important for commercial LLMs.** As we have summarized in Appendix A.6.4, the number of samples extracted for fine-tuning reference models is set to 10,000 (16 prompt tokens and 112 completion tokens each). We used the price of GPT-3.5-turbo-0125 as a reference (input: \\$0.50 / 1M tokens, output: \\$1.50 / 1M tokens), and extracting such a dataset from commercial LLMs would cost only approximately \$1.76. Thus, we believe this is an acceptable cost even for attacking commercial LLMs. Additionally, we have explored the impact of smaller reference datasets (in Section 5.4, Fig. 5(a)), and we quote the relevant results in the table below. This result demonstrates that even with an extracted reference dataset of only 1,000 samples, SPV-MIA achieves performance comparable to that with 10,000 samples. Table. The performance of SPV-MIA on LLaMA@AG News while utilizing different scales and sources of extracted reference datasets. |Extracted Dataset Scale|1,000|2,000|5,000|10,000| |-|-|-|-|-| |Irrelevant|0.879|0.882|0.886|0.897| |Domain-specific|0.890|0.892|0.895|0.903| |Identical-distribution|0.896|0.912|0.915|0.922| **Q3,W3: Questions about memorization.** > **Q3.1: What are the key differences between memorization and overfitting?** Although memorization is associated with overfitting, overfitting by itself cannot completely explain some properties of memorization [5,6,7]. The key differences between memorization and overfitting can be summarized in the following three points: * **Occurrence Time** Existing research defines the onset of overfitting as the first epoch at which the LLM's perplexity (ppl) on the validation set starts to rise [5]. In contrast, memorization begins early [5,6] and persists throughout almost the entire training phase [6,7]. * **Harm Level** Overfitting is almost universally acknowledged as a detrimental phenomenon in machine learning. 
However, memorization is not exclusively harmful, and can be crucial for certain types of generalization (e.g., on QA tasks) [8,9]. * **Avoidance Difficulty** Since memorization occurs much earlier, even if we use early stopping to prevent the model from overfitting, the model will still exhibit significant memorization [6]. Moreover, since memorization is crucial for certain tasks [8,9], selectively mitigating specific unintended memorization (e.g., verbatim memorization [10]) is a non-trivial task. > **Q3.2: How does the proposed approach improve from the perspective of memorization?** Based on the aforementioned discussion, memorization is a more common and robust phenomenon in LLMs than overfitting. Therefore, we believe that identifying the memorization footprint left by LLMs on member texts can help improve MIA. Moreover, memorization in generative models [11] or LLMs [2] increases the concentration of generative probability density around member records. Thus, we can translate the problem of detecting member texts into identifying texts that lie close to a local maximum. Subsequently, we designed a novel metric, probabilistic variation, as an indicator of local maxima and proposed two symmetrical paraphrasing methods to estimate it (Section 4.3). Consequently, our proposed method can enhance the identification of memorization. We have conducted ablation experiments (Appendix A.4, Table 5) to show that our proposed probabilistic variation assessment (PVA) approach can further improve the success rate of MIA compared with the neighbour attack. We quote the existing experimental results in the table below. Table: Results of Ablation Study on LLaMA across three datasets. 
|Dataset|Wiki|AG News|XSum|Avg.| |-|-|-|-|-| |SPV-MIA (Embedding-based Paraphrasing)|0.956|0.926|0.949|0.944| |SPV-MIA (Semantic-based Paraphrasing)|0.951|0.903|0.937|0.930| |SPV-MIA (Neighbour Attack)|0.934|0.893|0.928|0.918| Thanks for your valuable questions for improvement, we will include these discussions in the camera-ready version. --- Rebuttal Comment 1.1: Comment: I thank the authors for their detailed response, which has addressed most of my concerns. I do believe the main evaluation should focus on the True Positive Rate (TPR) in the low False Positive Rate (FPR) regime. Please consider including all related experiments in the revised manuscript. I will raise my score accordingly. --- Reply to Comment 1.1.1: Title: Additional Feedback to Confirm Suggestions Raised by Reviewer saYA Comment: Dear reviewer saYA, Thank you for your further suggestions. We have organized the TPR@1%FPR and TPR@0.1%FPR corresponding to all baselines and SPV-MIA into tables formatted the same as the current main evaluation (Table 1 in the initial submission). After reviewing the papers you recommended, we agree that TPR in the low FPR should indeed be highlighted as the main evaluation. Therefore, we will follow your advice and replace Table 1 in the initial manuscript with `"Table 2: Evaluation of all baselines and SPV-MIA using TPR@1%FPR"` in the PDF file (see the global comment). Additionally, all related experiments mentioned in this rebuttal will be included in the camera-ready version. We would appreciate it if you could raise the score accordingly. Best regards, Authors --- Reply to Comment 1.1.2: Comment: Dear reviewer saYA, Thank you for providing prompt feedback. We will definitely **highlight and focus** on TPR@Low FPR in the main evaluation and add all related experiments into the revised version. If you have further suggestions or comments, we would like to address them before the end of the discussion period. Thank you for your time and valuable comments. 
Best regards, Authors --- Rebuttal 2: Title: (Optional for reviewer to read) The responses to the weaknesses -- Part 1 Comment: To answer the detailed questions of Reviewer saYA, we provide more information in the form of comments. We understand reviewers are not obligated to read comments; we keep them just in case Reviewer saYA is interested. **W1: The main evaluation metrics need to be revised. I suggest using two metrics: the Full Log-scale Receiver Operating Characteristic (ROC) Curve to highlight low false-positive rates, and the TPR at a low FPR.** We have followed your suggestion and added three new evaluation metrics: the Full Log-scale Receiver Operating Characteristic (ROC) Curve, TPR@1%FPR, and TPR@0.1%FPR. All additional results can be found in the PDF file of the global response. Thank you for your valuable suggestion for improvement. We will update these results in the camera-ready version. We quote partial results in the tables below. Experimental results show that, compared to AUC, SPV-MIA achieves a larger performance margin under the new metrics. This indicates that, at lower FPRs, SPV-MIA maintains a higher TPR than the baselines. Table. Evaluation of all baselines and SPV-MIA on LLaMA@AG News Dataset using TPR@1\%FPR and TPR@0.1\%FPR |Methods|Loss Attack|Neighbour Attack|DetectGPT|Min-K\%|Min-K\%++|LiRA-Base|LiRA-Candidate|SPV-MIA| |------------|-----------|----------------|---------|-------|---------|---------|--------------|-------| |TPR@1\%FPR|1.2\%|3.1\%|2.7\%|3.6\%|4.1\%|8.7\%|11.5\%|39.5\%| |TPR@0.1\%FPR|0.2\%|0.4\%|0.4\%|0.6\%|1.0\%|1.4\%|1.9\%|23.5\%| **W4: Including more advanced baselines, such as LDC-MIA [12], would strengthen the comparison.** Thank you for your suggestions regarding the incorporation of more up-to-date baselines. We have carefully reviewed the literature you recommended and believe that LDC-MIA is related work but not suitable as a new baseline for the following two reasons. 
However, we will include a thorough discussion of it in the camera-ready version. 1. LDC-MIA is designed for classification models, contains modules specifically tailored to classification tasks, and cannot be directly employed for LLMs. For example, the MIA classifier in LDC-MIA requires the class label of the target sample, which does not exist in text generation tasks. Therefore, adapting LDC-MIA to LLMs is beyond the scope of this work. 2. LDC-MIA requires a large-scale auxiliary dataset that has the same distribution as the data used for training the target model. The dataset is split into three parts: the training and held-out datasets of the shadow model, and the training dataset of the reference model. This assumption is widely acknowledged to be unrealistic, especially in the context of LLMs [1,2,3], as their training datasets are typically not publicly available. Our proposed method avoids this assumption through the self-prompt method. Therefore, a direct comparison with our algorithm would be unfair given the significant disparity in prior knowledge. However, following your suggestion, we have added two state-of-the-art algorithms (Min-K\% [1] and Min-K\%++ [2]) proposed in 2024 and specifically designed for LLMs. The supplementary experiments are shown in the PDF file of the global response. We will update these results in the camera-ready version. 
We quote the partial results in the table below: Table: Evaluation of representative baselines and SPV-MIA on LLaMA@AG News Dataset using AUC, TPR@1\%FPR, and TPR@0.1\%FPR |Metric|AUC|TPR@1%FPR|TPR@0.1%FPR| |----------------|-----|---------|-----------| |Neighbour Attack|0.610|3.1\%|0.4\%| |Min-K\%|0.619|3.6\%|0.6\%| |Min-K\%++|0.631|4.1\%|1.0\%| |LiRA-Base|0.657|8.7\%|1.4\%| |LiRA-Candidate|0.714|11.5\%|1.9\%| |SPV-MIA|0.903|39.5\%|23.5\%| --- Rebuttal 3: Title: (Optional for reviewer to read) The responses to the weaknesses -- Part 2 Comment: To answer the detailed questions of Reviewer saYA, we provide more information in the form of comments. We understand reviewers are not obligated to read comments; we keep them just in case Reviewer saYA is interested. **W5: Conducting an ablation study to investigate the contributions of each proposed component (PVA and PDC) would be beneficial.** We have shown the ablation study in Appendix A.5.1 for assessing the contributions of the proposed Practical Difficulty Calibration (PDC) and probabilistic variation assessment (PVA) separately. Furthermore, we have also conducted an extensive ablation study in Appendix A.4 to investigate the contributions of our two proposed symmetrical paraphrasing methods compared with the neighbour attack, in which we adopt the two symmetrical paraphrasing methods and the neighbour attack as the PVA module, respectively. Thank you for the reminder; we will emphasize the ablation study in the main body of the camera-ready version. From the ablation studies shown in the Appendix, we quote the experimental results and merge them into one table as follows: Table: Results of Ablation Study on LLaMA across three datasets. 
|Dataset|Wiki|AG News|XSum|Avg.| |--------------------------------------|-----|-------|-----|-----| |**SPV-MIA (Embedding-based Paraphrasing)**|**0.956**|**0.926**|**0.949**|**0.944**| |SPV-MIA (Semantic-based Paraphrasing)|0.951|0.903|0.937|0.930| |SPV-MIA (Neighbour Attack)|0.934|0.893|0.928|0.918| |w/o PVA|0.913|0.885|0.919|0.906| |w/o PDC|0.653|0.641|0.661|0.652| The results demonstrate that both PDC and PVA contribute measurable improvements to our proposed method. However, the PDC approach plays a more critical role and can still serve as a valid adversary without the PVA. Thus, in practical scenarios, we can consider removing the PVA to reduce the frequency of accessing public APIs. Additionally, the two proposed paraphrasing methods both yield considerable performance gains compared to the neighbour attack. # Reference [1] Shi, et al. "Detecting Pretraining Data from Large Language Models." ICLR 2024. [2] Zhang, et al. "Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models." arXiv 2024. [3] Mattern, et al. "Membership Inference Attacks against Language Models via Neighbourhood Comparison." ACL 2023. [4] Bertran, et al. "Scalable membership inference attacks via quantile regression." NeurIPS 2023. [5] Tirumala, et al. "Memorization without overfitting: Analyzing the training dynamics of large language models." NeurIPS 2022. [6] Mireshghallah, et al. "An empirical analysis of memorization in fine-tuned autoregressive language models." EMNLP 2022. [7] Zhang, et al. "Counterfactual memorization in neural language models." NeurIPS 2023. [8] Tay, et al. "Transformer memory as a differentiable search index." NeurIPS 2022. [9] Borgeaud, et al. "Improving language models by retrieving from trillions of tokens." ICML 2022. [10] Ippolito, et al. "Preventing Generation of Verbatim Memorization in Language Models Gives a False Sense of Privacy." INLG 2023. [11] van den Burg, Williams. 
"On memorization in probabilistic deep generative models." NeurIPS 2021. [12] Shi, Haonan, et al. “Learning-Based Difficulty Calibration for Enhanced Membership Inference Attacks”. IEEE European Symposium on Security and Privacy (2024).
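The quantile-based threshold selection described in the reply to Q1 earlier in this thread can be sketched as follows. This is illustrative only: `select_threshold` is our naming and the calibration scores are synthetic, not the paper's implementation.

```python
import numpy as np

def select_threshold(nonmember_scores, alpha=0.01):
    # Pick tau as the (1 - alpha) quantile of confidence scores on
    # texts known not to be members; roughly an alpha fraction of
    # non-members exceed tau, so the expected FPR is about alpha.
    return float(np.quantile(np.asarray(nonmember_scores, dtype=float),
                             1.0 - alpha))

# Usage: calibrate on held-out non-member texts, then classify any
# text whose attack score exceeds tau as a member.
calib_scores = np.linspace(0.0, 1.0, 1001)  # synthetic non-member scores
tau = select_threshold(calib_scores, alpha=0.05)
print(round(tau, 2))  # 0.95
```

No shadow model is needed: only attack scores on texts known to be non-members, matching the quantile argument in the rebuttal.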
Summary: This paper proposes self-calibrated probabilistic variation (SPV)-MIA, a membership inference attack. SPV-MIA introduces two novel ideas in the space of LLMs: 1) using paraphrasing to obtain samples around the target sample text in the sample domain, and using the paraphrased texts to compute the probabilistic variation of the target model around the target, and 2) using self-prompting to generate reference data that SPV-MIA uses to train a reference model, which is then used to calibrate the probabilistic variation of the target model on the target samples. Experimental evaluation on three benchmark datasets shows that SPV-MIA outperforms existing MIAs. Strengths: - Multiple interesting ideas that result in a strong, practical MIA on LLMs - A clear and easy-to-follow MIA Weaknesses: - Some parts of the paper need support, e.g., how PVA is more general than the neighborhood attack. - The datasets evaluated with are tiny; larger datasets should be used - Writing can be improved for better comprehension Technical Quality: 3 Clarity: 3 Questions for Authors: - The idea of PVA is similar to the neighborhood attack. The paper claims to generalize the notion of the neighborhood attack but ends up using the neighborhood attack. This is alright, but how does this generalization help? Is the paper merely depicting the neighborhood attack using a set of equations, or can the generalization somehow help improve attack success? - Experiments use small benchmark datasets for fine-tuning, which are probably also part of the training data of the pre-trained base model; both of these can lead to overestimation of the power of the proposed MIA. Why do you use small datasets? How does the proposed MIA work for larger fine-tuning datasets? - About the DP results: - Can you explain how you perform DP training? - How do the baseline attacks perform under the same DP guarantees? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Writing of the paper is poor; there are too many grammatical mistakes (even in abstract) and many sentences are not properly constructed making it difficult to read the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer DLpC, We deeply appreciate the time you took to review our work and your comments for improvement. Below we address the concerns raised in your review: **Q1,W1: Questions about the PVA generalization.** > **Q1.1: The paper claims the generalization of PVA but ends up using the neighborhood attack.** The default paraphrasing method used is not the neighbour attack but the proposed semantic-based symmetrical paraphrasing method. Unlike the neighbour attack, each paraphrasing operation of the proposed methods generates a pair of symmetrical neighbouring samples. In contrast, paraphrasing operations in the neighbour attack produce independent neighbouring samples. > **Q1.2: How does the generalization of PVA help? Can the generalization somehow help improve attack success?** The proposed PVA algorithm not only uses a series of generalized equations to reformulate the neighbour attack but also empirically improves attack success. Therefore, PVA contributes both theoretically and empirically: * **Empirical contribution** (improves attack success) We derive two symmetrical paraphrasing methods (Semantic-based and Embedding-based) from PVA. We have conducted an extensive ablation study in Appendix A.4, where the results demonstrate that both proposed methods achieve considerable performance gains compared with the neighbour attack. Thus, the proposed PVA does improve attack performance. Table: Results of Ablation Study on LLaMA across three datasets. |Dataset|Wiki|AG News|XSum|Avg.| |-|-|-|-|-| |SPV-MIA (Semantic-based Paraphrasing)|0.951|0.903|0.937|0.930| |SPV-MIA (Embedding-based Paraphrasing)|0.956|0.926|0.949|0.944| |SPV-MIA (Neighbour Comparing)|0.934|0.893|0.928|0.918| * **Theoretical contribution** (motivates paraphrasing-based attacks from another perspective) Existing paraphrasing-based attacks (e.g., the Neighbour Attack) are motivated by finding a substitute for reference models. 
In this work, we provide another motivation based on model memorization: member texts memorized by the model tend to be located at local maxima of the generative distribution [1]. Therefore, this study may inspire follow-up studies to design more sophisticated methods for inferring membership by characterizing model memorization. Coincidentally, a recently published work (after the NeurIPS deadline) recommended by reviewer i2LA [2] also proposed an MIA against pre-trained LLMs based on the concept of local maximum detection. Thank you for the reminder; we will emphasize the contributions of PVA in the main body of the camera-ready version. **Q2,W2: Questions about small datasets.** > **Q2.1: Why do you use small datasets? The small datasets may be part of the pre-training data, which can lead to overestimation of MIA performance.** In fact, we employed two datasets (AG News, Wiki) that are the same as those used in existing studies [3,4], as well as one larger dataset (XSum). Overlap between the fine-tuning and pre-training datasets would in fact lead to an underestimation of MIA performance: if the fine-tuning dataset is part of the pre-training data, then both the training and evaluation splits of the fine-tuning dataset are likely included in the pre-training data. In this scenario, we not only need to detect texts used during the fine-tuning stage but also need to avoid falsely identifying as member texts those that were used only during pre-training and did not appear in fine-tuning. > **Q2.2: How does the proposed MIA work for larger fine-tuning datasets?** Although we cannot use larger datasets as the default setting for the overall experiments, we agree that exploring larger fine-tuning datasets would further enhance the quality of our work. Therefore, we used a dataset nearly 10x larger than the current ones, Newsroom, to fine-tune LLaMA as the target model. 
The results in the table below demonstrate that SPV-MIA still maintains relatively good performance on the larger dataset. |Metric|SPV-MIA|Neighbour Attack|LIRA-Candidate| |-|-|-|-| |AUC|0.862|0.545|0.665| **Q3: Can you explain how you perform DP training? How do the baseline attacks perform under the same DP guarantees?** We follow private-transformers [5], using ghost clipping to perform DP training in a memory-saving manner. We have conducted three representative baseline attacks against the DP-SGD model (LLaMA@AG News) and compared them with our method (SPV-MIA). The results, provided in the table below, demonstrate that SPV-MIA consistently maintains the best MIA performance across different settings of the privacy budget. Thank you for your suggestion to improve our work further. We will update these results in the camera-ready version. |Privacy Budget $\epsilon$|15|30|60|+inf| |-|-|-|-|-| |Loss Attack|0.523|0.551|0.568|0.580| |Neighbour Attack|0.542|0.564|0.587|0.610| |LIRA-Candidate|0.611|0.655|0.684|0.714| |SPV-MIA|0.766|0.814|0.852|0.903| # Reference [1] van den Burg, Williams. "On memorization in probabilistic deep generative models." NeurIPS 2021. [2] Zhang, et al. "Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models." arXiv 2024. [3] Mattern, et al. "Membership Inference Attacks against Language Models via Neighbourhood Comparison." ACL 2023. [4] Mireshghallah, et al. "An empirical analysis of memorization in fine-tuned autoregressive language models." EMNLP 2022. [5] Li, et al. "Large Language Models Can Be Strong Differentially Private Learners." ICLR 2021. --- Rebuttal 2: Comment: Dear Reviewer DLpC, Thank you for your positive evaluation of our work and for increasing your score from 6 to 7. We appreciate your support and the time you've taken to assess our submission. Your feedback has been invaluable in helping us refine our paper. Best regards, The Authors
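The local-maximum idea behind PVA discussed in this thread can be roughly illustrated as follows. This is a hedged sketch only: the function name and the exact symmetric-difference form are our assumptions for illustration, not the paper's estimator.

```python
import numpy as np

def probabilistic_variation(logp_x, logp_paraphrase_pairs):
    # Symmetric second-difference estimate: for each paraphrase pair
    # (x + z, x - z), compare the target text's log-probability with
    # the pair's mean. Large positive values suggest x sits near a
    # local maximum of the generative distribution, i.e. that it was
    # likely memorized during fine-tuning.
    pairs = np.asarray(logp_paraphrase_pairs, dtype=float)  # shape (N, 2)
    return float(np.mean(logp_x - pairs.mean(axis=1)))

# A text whose symmetric neighbours are all less probable yields a
# positive variation; one on a flat region yields roughly zero.
print(probabilistic_variation(-1.0, [[-2.0, -2.0], [-1.5, -2.5]]))  # 1.0
```

The pairwise averaging is what distinguishes this from the neighbour attack's independent paraphrases: noise that shifts probability mass symmetrically around x cancels within each pair.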
Summary: This paper studies membership inference attacks on large language models fine-tuned on private data. Instead of reusing other pre-trained public large language models, the paper proposes a way to generate a reference dataset and fine-tune on this dataset to obtain a reference model. With this reference model, the paper further calculates the score by probabilistic variation assessment: symmetrically rephrasing the target data and calculating the average score as the final score. As evaluated in the experiments, the proposed attack outperforms the baselines by a large margin. The paper also conducted ablation studies to understand the design of important components and demonstrate the robustness of the algorithm. Strengths: 1. The algorithm is novel. It proposes a way to generate the reference dataset via self-prompting, which leads to a high-quality reference model for a better membership inference attack. 2. The results seem very promising. The performance of the proposed attack has a large margin over the baselines. 3. The algorithm is robust to the assumption of domain-specific public data. Even if it is relaxed to irrelevant data, the proposed attack still achieves good performance. Weaknesses: 1. It is not fully in the black-box setting, because it assumes knowledge of the same pre-trained model and access to its parameters. 2. The algorithm has two novel parts: a way to generate the reference dataset for learning a reference model, and a novel score based on probabilistic variation assessment. It might be important to study how much each component contributes to the final improvements. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper has discussed its limitations well. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Dear reviewer v2Kc, Thank you so much for your thoughtful review and overall positive comments. Below we address the concerns raised in your review: **W1: It is not fully in the black-box setting, since it assumes knowledge of the same pre-trained model and access to its parameters.** In fact, SPV-MIA does not require access to the parameters of the target model, indicating that we do not use a white-box setting. However, we are not using a pure black-box setting either, since SPV-MIA requires the following adversary knowledge (two APIs): * Access to the query API with response results (generated texts and logits) of the target model. * [+] Access to the fine-tuning API of the pre-trained version of the target model. Here, we only add a fine-tuning API compared with existing works [1,2,3,4]. Overall, we do not require white-box access to the parameters of this pre-trained model; an API that allows the adversary to upload a customized dataset for fine-tuning would also meet the conditions. Therefore, our proposed method is applicable not only to open-source models with publicly available parameters but also to closed-source models that provide fine-tuning APIs (such as ChatGPT [5]). We will include the above clarification in the camera-ready version. Furthermore, to further allay your concerns, we consider a truly black-box scenario in which the adversary is constrained to fine-tune a pre-trained model different from the target model as the reference model. We employ LLaMA-7B fine-tuned over the AG News dataset as the target model, and let the adversary fine-tune different pre-trained models (GPT-J, Falcon, and LLaMA) to prepare the reference model. The results are shown in the table below. From these experimental results, we find that the proposed method still maintains a large performance margin over the baselines (LiRA-Base, LiRA-Candidate) even without access to the same pre-trained model. 
We will include these results in the camera-ready version. |Metric|AUC|TPR@1%FPR|TPR@0.1\%FPR| |----------------------|-----|---------|-----------| |SPV-MIA (GPT-J)|0.832|19.3\%|7.5\%| |SPV-MIA (Falcon)|0.812|21.7\%|8.7\%| |SPV-MIA |0.903|39.5\%|23.5\%| |LiRA-Base |0.657|8.7\%|1.4\%| |LiRA-Candidate |0.714|11.5\%|1.9\%| **W2: Conducting an ablation study to investigate the contributions of each proposed component would be beneficial.** In Appendix A.5.1, we have shown the ablation study evaluating the contributions of the proposed Practical Difficulty Calibration (PDC) and probabilistic variation assessment (PVA) separately. Furthermore, we have also conducted a comprehensive ablation study in Appendix A.4 to investigate the contributions of our two proposed symmetrical paraphrasing methods compared with the neighbour attack, in which we adopt the two symmetrical paraphrasing methods and the neighbour attack as the PVA module, respectively. Thank you for the reminder; we will emphasize the ablation study in the main body of the camera-ready version. From the ablation studies shown in the Appendix, we quote the experimental results and merge them into one table as follows: Table: Results of Ablation Study on LLaMA across three datasets. |Dataset|Wiki|AG News|XSum|Avg.| |--------------------------------------|-----|-------|-----|-----| |**SPV-MIA (Embedding-based Paraphrasing)**|**0.956**|**0.926**|**0.949**|**0.944**| |SPV-MIA (Semantic-based Paraphrasing)|0.951|0.903|0.937|0.930| |SPV-MIA (Neighbour Attack)|0.934|0.893|0.928|0.918| |w/o PVA|0.913|0.885|0.919|0.906| |w/o PDC|0.653|0.641|0.661|0.652| The results demonstrate that both PDC and PVA contribute measurable improvements to our proposed method. However, the PDC approach plays a more critical role and can still serve as a valid adversary without the PVA. Thus, in practical scenarios, we can consider removing the PVA to reduce the frequency of accessing public APIs. 
Additionally, the two proposed paraphrasing methods both yield considerable performance gains compared to the neighbour attack. # Reference [1] Shi, Weijia, et al. "Detecting Pretraining Data from Large Language Models." The Twelfth International Conference on Learning Representations (2024). [2] Duan, Michael, et al. "Do membership inference attacks work on large language models?." Conference on Language Modeling (COLM) (2024). [3] Mattern, Justus, et al. "Membership Inference Attacks against Language Models via Neighbourhood Comparison." Findings of the Association for Computational Linguistics: ACL 2023. 2023. [4] Zhang, Jingyang, et al. "Min-K%++: Improved Baseline for Detecting Pre-Training Data from Large Language Models." arXiv preprint arXiv:2404.02936 (2024). [5] Andrew Peng, Michael Wu, John Allard, Logan Kilpatrick, and Steven Heidel. GPT-3.5 Turbo fine-tuning and API updates, August 2023. --- Rebuttal Comment 1.1: Title: Official Comment by Reviewer v2Kc Comment: Thank the authors for the responses, which have addressed my questions. I will keep my positive rating. I would recommend that the authors better clarify the setting and its assumptions in the revision by adding the discussions above. --- Reply to Comment 1.1.1: Comment: Dear Reviewer v2Kc, Thank you for taking the time to review our responses and for maintaining your positive rating of our manuscript. We appreciate your constructive feedback and will ensure that the revised version of our paper includes clearer explanations of the setting and assumptions, incorporating the discussions we've had. Best regards, The Authors
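As a side note on the evaluation protocol: the AUC and TPR-at-low-FPR numbers quoted in the tables above can be computed directly from per-example attack scores. The sketch below is our own hedged illustration (not the authors' code); `member_scores` and `nonmember_scores` are hypothetical arrays in which higher scores indicate stronger membership evidence.

```python
import numpy as np

def auc_and_tpr_at_fpr(member_scores, nonmember_scores, fpr_target=0.01):
    """Compute AUC and TPR at a fixed false-positive rate for an MIA."""
    members = np.asarray(member_scores, dtype=float)
    nonmembers = np.asarray(nonmember_scores, dtype=float)
    # AUC = P(member score > non-member score), counting ties as 1/2.
    diff = members[:, None] - nonmembers[None, :]
    auc = np.mean(diff > 0) + 0.5 * np.mean(diff == 0)
    # TPR@FPR: pick the threshold that only `fpr_target` of non-members exceed.
    threshold = np.quantile(nonmembers, 1.0 - fpr_target)
    tpr = np.mean(members > threshold)
    return auc, tpr
```

With well-separated score distributions, both numbers rise together; the TPR@0.1%FPR column simply uses `fpr_target=0.001`.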
Rebuttal 1: Rebuttal: We have included the experimental results requested by the reviewers in the attached PDF file, which contains the following contents. 1. Table 1: Evaluation of all baselines and SPV-MIA using AUC scores. 2. Table 2: Evaluation of all baselines and SPV-MIA using TPR@1%FPR. 3. Table 3: Evaluation of all baselines and SPV-MIA using TPR@0.1%FPR. 4. Figure 1: Full log-scale ROC curves of SPV-MIA and the three representative baselines on LLaMAs fine-tuned over three datasets. 5. Figure 2: The distributions of member and non-member records w.r.t. the MIA metric score ∆m(x). Pdf: /pdf/85777b0e1ee15093aa0ee472d92789f21cfc36d5.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Memory-Efficient LLM Training with Online Subspace Descent
Accept (poster)
Summary: The paper considers memory-efficient optimization for large language models. In particular, the focus is on optimizers leveraging low-rank projection. The paper first derives asymptotic convergence for such methods with an arbitrary choice of projection matrix. Then the paper identifies the inefficiency of using SVD to construct the projection matrix and proposes the online subspace descent method, which updates the projection matrix with online PCA. Experiments show that this method improves the efficiency of training LLMs while achieving comparable performance to the low-rank baseline with SVD projection. Strengths: 1. The paper derives the first convergence result for optimizers with low-rank projection. 2. The paper addresses the inefficiency of using SVD to construct the projection matrix via online subspace descent. 3. Experiments show the benefits of online subspace descent over the SVD baseline. Weaknesses: 1. The main downside of the proposed method is that it could potentially increase the memory overhead. It seems to me that the idea of online updating $P_t$ trades off computation with memory. Although it reduces the computational cost by not computing the SVD, it nevertheless increases the memory, especially when using an Adam-type optimizer for $P_t$. In Line 216, the paper states the memory overhead is 9.01GB versus 8.64GB. However, it is not clear what model these numbers refer to. I suspect that with increases in rank and model size, the memory overhead could become larger. 2. The use of Hamiltonian descent and Lyapunov analysis for showing convergence is not properly motivated. Because the main result (Theorem 4.5) only shows asymptotic convergence to a stationary point of $L(W)$, it is not clear why such a result should be derived in the continuous-time limit. Can such a result also be derived in the discrete case, and can a non-asymptotic result be derived? 3. Figure 2 is not a fair comparison because, in practice, SVD is not used at every update.
Technical Quality: 2 Clarity: 3 Questions for Authors: 1. In Line 233, what does it mean for tokens to be high-frequency versus low-frequency? And is there any verification or justification that high-frequency tokens are learned with low-rank training while learning low-frequency tokens requires higher ranks? 2. In Line 252, can you comment further on why AdamW is more suitable for online PCA? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The paper has discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
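To make the memory concern in Weakness 1 concrete, here is a back-of-the-envelope sketch (our own illustration with assumed shapes, not numbers from the paper): Adam keeps two full moment buffers per $m \times n$ weight, a rank-$r$ projected optimizer keeps the moments in the subspace plus the projection matrix, and updating $P_t$ itself with an Adam-type optimizer adds two more $m \times r$ buffers.

```python
def adam_state_floats(m, n):
    # First and second moment buffers for a full-rank m x n weight.
    return 2 * m * n

def projected_state_floats(m, n, r, adam_on_P=False):
    # Moment buffers in the r-dimensional subspace plus the projection matrix P;
    # optionally two extra moment buffers if P itself is updated with Adam.
    state = 2 * r * n + m * r
    if adam_on_P:
        state += 2 * m * r
    return state

full = adam_state_floats(4096, 4096)
low = projected_state_floats(4096, 4096, 128)
low_adamP = projected_state_floats(4096, 4096, 128, adam_on_P=True)
```

For $m = n = 4096$ and $r = 128$, the projected state is roughly 21x smaller than the full Adam state, while the extra Adam state for $P_t$ grows with both rank and model size, consistent with the reviewer's suspicion.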
Rebuttal 1: Rebuttal: ## Re: “Memory and efficiency” Updating $P_t$ with AdamW admittedly increases memory consumption. However, in practice, we observe that the increase in peak memory is minuscule compared to the overall size of the model. Meanwhile, we can enjoy the nice properties of online subspace descent for its flexibility and system efficiency when implemented in parallel (as we discussed with reviewers NLbB and RBsr). In addition, our subsequent experiment shows that SGD can perform similarly to AdamW when updating $P_t$, in which case there is no additional memory cost over GaLore, since no momentum state needs to be saved. Here is a comparison between GaLore and our method on a 7B LLaMA trained for 10K steps on C4 (updating $P_t$ with SGD): | Method | Perplexity | Wall Clock Time (hours) | |--------|------------|-------------------------| | Galore | 51.21 | 9.7439 | | Ours | 43.72 | 7.1428 | The hardware setups are completely identical, both using a single node with 8 H800 GPUs. It is clear that our method is not only faster in wall clock time, but also reaches lower perplexity after seeing the same number of tokens. ## Re: "Discrete case" The derivation in the discrete case is highly similar to the continuous example we have shown in the paper. **Example 4.1.** Momentum + Online Subspace Descent is $$ \frac{d}{dt} W_t = -P_t \hat{M}_t, \quad \hat{G}_t = P_t^\top \nabla L(W_t), \quad \frac{d}{dt} \hat{M}_t = a(\hat{G}_t - \hat{M}_t), $$ with Lyapunov function $H(W, \hat{M}) = L(W) + \frac{\|\hat{M}\|^2}{2a}$, for which $$ \frac{d}{dt} H(W_t, \hat{M}_t) = - \nabla L(W_t)^\top P_t \hat{M}_t + \hat{M}_t^\top (P_t^\top \nabla L(W_t) - \hat{M}_t) = -\|\hat{M}_t\|_2^2 \leq 0.
$$ Following this example as a demo, we have the following discrete derivation: $$W_{t+1} = W_t - \epsilon P_t \hat{M}_t$$ $$\hat{M}_{t+1} = \hat{M}_t + \alpha\epsilon(\hat{G}_t - \hat{M}_t)$$ $$ \hat{G}_t = P_t^\top \nabla L(W_t)$$ where $\hat{M}_{t+1} = \hat{M}_t + \alpha\epsilon(\hat{G}_t - \hat{M}_t) = (1-\alpha \epsilon)\hat{M}_t + \alpha \epsilon \hat{G}_t = \beta \hat{M}_t + (1-\beta)\hat{G}_t$ with $\beta = 1 - \alpha\epsilon$. Assuming $L(\cdot)$ is $L$-smooth, we have $$ H(W_{t+1}, \hat{M}_{t+1}) - H(W_t, \hat{M}_t) \leq -\epsilon \|\hat{M}_t\|_2^2 + \alpha \epsilon^2 \|\hat{G}_t - \hat{M}_t\|_2^2 + \frac{L}{2}\epsilon^2\|P_t\hat{M}_t\|_2^2. $$ Hence, a telescoping sum yields $$ \frac{1}{T}\sum_{t=0}^{T-1} \|\hat{M}_t\|_2^2 \leq \frac{H(W_0, \hat{M}_0) - H(W_T, \hat{M}_T)}{\epsilon T} + \epsilon B_t, $$ where $B_t = \frac{1}{T} \sum_{t=0}^{T-1} \alpha \|\hat{G}_t - \hat{M}_t\|_2^2 + \frac{L}{2}\|\hat{M}_t\|_2^2$. ## Re: “high-frequency versus low-frequency tokens” This is our attempted explanation for the phenomenon we observe in our ablation study of different ranks: the loss of higher-rank runs tends to drop faster and converge to a lower value after 10K steps. We suspect that this is due to a unique property of language modeling tasks, where, intuitively, rarer (less common) tokens require a higher gradient rank to be learned. However, we would like to defer the justification of this hypothesis to future study, since it is not the main focus of our paper. ## Re: "AdamW is more suitable for online PCA" This is what we find empirically when trying different optimizers such as Lion and Adafactor. However, it is not unfathomable that SGD can in principle perform similarly, which we show above. It is just that, for the default learning rates and schedules, AdamW seems to work the most robustly and reliably, and is the least sensitive. --- Rebuttal Comment 1.1: Comment: Thank you for the responses.
For the discrete-time convergence analysis: first, the convergence is measured in terms of the gradient momentum, which is non-standard. Second, there seems to be an extra term $\epsilon B_t$, which depends on the entire past history as well as the gradient momentum, which is also on the left-hand side of the final inequality. Can the authors clarify? --- Rebuttal 2: Comment: The derivation presented above provides a foundational outline for our convergence analysis. Building on this, we can conduct a more detailed analysis following the established classical approach. Taking a step further from the bound above, we have $(1 - \frac{L \epsilon}{2}) \frac{1}{T} \sum_{t=0}^{T-1} ||\hat{M}_t||_2^2 \leq \frac{H(W_0, \hat{M}_0) - H(W_T, \hat{M}_T)}{\epsilon T}$ $+\epsilon \frac{1}{T} \sum_{t=0}^{T-1} \alpha ||\hat{G}_t - \hat{M}_t||_2^2$ One can use standard methods to bound the last term $||\hat{G}_t - \hat{M}_t||_2^2$ (for example, pages 11-12 of the supplementary material of [Sun et al. 2023]). ### References Tao Sun, Qingsong Wang, Dongsheng Li, and Bao Wang. Momentum ensures convergence of signsgd under weaker assumptions. In *International Conference on Machine Learning*, pages 33077–33099. PMLR, 2023.
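For concreteness, the discrete recursion discussed in this thread can be simulated on a toy quadratic loss. The sketch below is our own illustrative assumption, not the paper's implementation: the Oja-style matmul step and the QR re-orthonormalization stand in for the online PCA update of $P_t$, and all names and hyperparameters are hypothetical.

```python
import numpy as np

def subspace_momentum_step(W, M_hat, P, grad_fn, eps=0.01, a=0.1, lr_P=0.01):
    """One step of momentum descent restricted to the subspace spanned by P."""
    G = grad_fn(W)                         # full gradient, shape (m, n)
    G_hat = P.T @ G                        # projected gradient, shape (r, n)
    W = W - eps * (P @ M_hat)              # W_{t+1} = W_t - eps * P_t M_hat_t
    M_hat = M_hat + a * (G_hat - M_hat)    # momentum update in the subspace
    # Oja-style online PCA step toward the top-r eigenvectors of G G^T,
    # followed by QR re-orthonormalization for numerical stability.
    P = P + lr_P * (G @ (G.T @ P))
    P, _ = np.linalg.qr(P)
    return W, M_hat, P

# Toy quadratic: L(W) = 0.5 * ||W - W_star||_F^2, so grad L(W) = W - W_star.
rng = np.random.default_rng(0)
m, n, r = 8, 4, 2
W_star = rng.standard_normal((m, n))
W = np.zeros((m, n))
M_hat = np.zeros((r, n))
P, _ = np.linalg.qr(rng.standard_normal((m, r)))
loss0 = 0.5 * np.linalg.norm(W - W_star) ** 2
for _ in range(1000):
    W, M_hat, P = subspace_momentum_step(W, M_hat, P, lambda X: X - W_star)
loss1 = 0.5 * np.linalg.norm(W - W_star) ** 2
```

On this toy problem the loss keeps shrinking as $P$ rotates to track the dominant directions of the current gradient; with a frozen $P$, only the covered subspace would converge.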
Summary: The authors propose a variation of GaLore that doesn't require a fixed SVD to achieve memory-efficient LLM training. Instead, the authors project the gradients dynamically into a small subspace using online PCA, which depends on the evolving trace of the gradient landscape. The authors tested their method against GaLore using perplexity as a metric while training on the C4 dataset. Strengths: - the paper is well written and easy to follow - having a dynamically changing projection matrix based on the gradient landscape makes total sense - the theoretical explanation shows an interesting perspective on why this proposed method of using online PCA for matrix projection could work - the results on C4 are promising, as the memory efficiency is more or less similar to GaLore with similar perplexity. Weaknesses: - the perplexity results are not enough to show that the method works well on standard NLP tasks like GLUE (which consists of the datasets MRPC, CoLA, STS-B, RTE, SST-2, MNLI, QNLI, QQP), i.e., to see if the actual downstream task performance is still retained with the proposed projection method. - while the theory is interesting, it seems to include many strong assumptions that do not reflect real-world datasets - the memory savings and the perplexity are very similar to GaLore, and it is not clear that the difference is significant. The authors need to do multiple runs and compute the standard deviation to make sure the results are not due to noise. Technical Quality: 3 Clarity: 3 Questions for Authors: please answer how you could address the weaknesses above Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Re: "Downstream tasks" We performed standardized GLUE evaluation for the above two 7B checkpoints with eval-harness. | Method | MRPC | RTE | SST2 | MNLI | QNLI | QQP | AVG | |--------|-------|-------|-------|-------|-------|-------|--------| | GaLore | 0.6838| **0.5018**| 0.5183| 0.3506| 0.4946| 0.3682| 0.4862 | | **Ours** | **0.6982**| 0.4901| **0.5233**| **0.3654**| **0.5142**| **0.3795**| **0.4951**| Both CoLA and STS-B require further fine-tuning, which brings in more confounding factors (even Llama-2 7B trained by Meta doesn't get meaningful results), so we exclude them from the comparison. ## Re: “Many strong assumptions that do not reflect real world datasets” We respectfully argue that our assumptions (Assumption 4.4) are, in fact, very much minimal. 1) The model needs to be continuously differentiable. (pretty much all modern neural nets) 2) The optimization stops when the projected update hits 0. (this is a fact rather than an assumption) 3) We should pick $P$ such that the projected gradient $P^\top G$ is not always 0. Otherwise, the model weights are never updated at all, which defeats the purpose. Every theoretical work has to make some assumptions. We want to assure our reviewers that it will be hard to find papers with the same level of theoretical rigor that make fewer assumptions than we do: we did not assume convex losses; we did not assume specific forms of the model (linear models or MLPs); we did not even assume specific optimizers (Adam, momentum, Lion, etc.)... ## Re: “The memory savings and the perplexity is very similar to GaLore and it is not clear that it is significant” We agree that this does have very similar memory savings to GaLore. However, we argue that our contributions lie mainly in our analysis, which introduces a new family of optimizers that are guaranteed to converge.
Among them, online subspace descent not only possesses the same memory-saving properties, but can also be implemented more efficiently in a parallel manner, whereas GaLore does not share the same flexibility. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal and for addressing most of my concerns about the assumptions and memory savings. I have increased my score by 1.
Summary: Utilizing low-rank structure has become a popular way for memory-efficient LLM training. The authors are the first to provide convergence guarantees for general low-rank updating rules. Furthermore, based on their theoretical result, they propose a family of optimizers called online subspace descent. The empirical result shows that their performance is better than existing low-rank training methods. Strengths: This work provides a general theoretical guarantee for a range of low-rank updating rules, which can guide and help future research in this direction. Weaknesses: While the authors provide mathematical proofs, it could be more beneficial to the readers if they can provide more high-level intuitions why these methods could converge. As stated by the authors, the convergence result is surprising under the complications. Technical Quality: 3 Clarity: 3 Questions for Authors: Could you provide more high-level intuitions on why the methods could converge under the complications? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## Re: “Intuitions” The rough intuition is that $P_t$ serves as a kind of preconditioning matrix in the Hamiltonian system. But to arrive at the precise mathematical conclusion, we find that the best and quickest way to understand it is through the derivation in Eq (8), together with physical understandings of Hamiltonian systems (e.g., that the Hamiltonian consists of potential energy and kinetic energy, and the kinetic energy term differs across optimizers). More specifically, as long as the projected subspace defined by $P_t$ keeps changing and is not always orthogonal to the gradient, as we stated in Assumption 4.4, the iterate will always make some progress towards the fixed point. Given enough time, it will eventually get there. Hence convergence does not depend on the particular choice of $P_t$. --- Rebuttal Comment 1.1: Comment: Thank you for the response to my question!
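The Lyapunov intuition above is easy to probe numerically. The following sketch (our own illustration, with assumed toy settings and hypothetical names) Euler-discretizes the continuous-time dynamics of Example 4.1 with a small step size and an arbitrary fixed orthonormal $P$, tracking $H(W, \hat{M}) = L(W) + \|\hat{M}\|^2/(2a)$, which should be (up to discretization error) non-increasing.

```python
import numpy as np

# Toy loss: L(W) = 0.5 * ||W - W_star||_F^2, so grad L(W) = W - W_star.
rng = np.random.default_rng(0)
m, n, r, a, dt = 8, 4, 6, 5.0, 1e-3
W_star = rng.standard_normal((m, n))
P, _ = np.linalg.qr(rng.standard_normal((m, r)))  # arbitrary fixed projection
W = np.zeros((m, n))
M_hat = np.zeros((r, n))

H_vals = []
for _ in range(3000):
    G = W - W_star                       # grad L(W)
    H_vals.append(0.5 * np.linalg.norm(G) ** 2
                  + np.linalg.norm(M_hat) ** 2 / (2 * a))
    G_hat = P.T @ G
    W = W - dt * (P @ M_hat)             # Euler step of d/dt W = -P M_hat
    M_hat = M_hat + dt * a * (G_hat - M_hat)  # d/dt M_hat = a (G_hat - M_hat)
H_vals = np.array(H_vals)
```

Note that with a fixed $P$, the residual in the orthogonal complement of its column span never moves; $H$ still decreases, but only toward the energy left in the uncovered subspace, which is exactly why Assumption 4.4 asks the subspace to keep changing.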
Summary: The paper presents Online Subspace Descent, a memory-efficient modification applicable to a wide class of gradient-descent-based algorithms where low-rank projections of gradients can be employed to reduce the memory overhead. Contrary to recent techniques such as GaLore, which require infrequent but costly updates of the projection matrix, the proposed method updates the projection matrix in an online manner, and proves the convergence of the resulting algorithm. Strengths: Memory-efficient optimizers are crucial to enable LLM training/fine-tuning outside of huge datacenters; as such, this is a potential high-significance paper. While I think the idea of gradually updating the projection matrix is pretty much "the obvious thing to do", I really like that the authors didn't just propose "Adam[Add-some-random-letter-here]", but instead took a step back and showed that the same principle applies to a wide class of optimization algorithms. Weaknesses: * There are some grammatical errors and typos, I think (not a native speaker), but not to the degree that they hamper the understanding of the text (e.g., I think it should be: update rules of _the_ projection matrix; in the subspaces that dynamically _change_, How Should P be _updated_, is not a problem for optimizers that _yield_) l.106: `Note that the update of P_t can be done in parallel with that of (W_t, Ŝ_t), and incurs no slow down once it is fast enough to not cause a speed bottleneck` Generally, Adam updates are memory-bound operations, so I'm not sure if you can schedule a second operation concurrently without slowing down the update. Also, the same could be said for GaLore: you run your SVD in parallel to the main network; if it takes multiple steps, you could just wait till it is done, and update only afterwards, so GaLore can be made as parallel as the algorithm that is proposed here, I think.
l.119: `In this work, we propose to update P_t in a continuous online fashion that incorporates the most recent gradient information in a timely fashion, without calling linear algebra routines.` Matrix multiplications are linear algebra routines, so I think most P updates will involve some linear algebra. I think the description around l. 119-128 could do with some rewriting. The sentence `we propose to update P_t at time t by minimizing the following PCA objective` is misleading, because P_t is not actually updated like that; instead P_t is one step along some optimizer that decreases this objective. Second, `Instead of minimizing [...] exactly, to retain computational efficiency, we propose` seems to be misleading, too; in an earlier paragraph, GaLore was criticized for using only a single gradient for its update. This sentence suggests that if you had enough compute, you would do the same here, and only due to computational constraints do you end up with something that integrates multiple gradients, which I think goes against the message you want to make in the paper. l. 170: `There is no requirement on Γ here, besides that the derivative in (8) should exist` Γ does not appear in (8), so it isn't clear which derivative should exist. l. 223: `The typical Pytorch implementation of SVD can be up to 142 times slower than running a single step online PCA on representative weight tensors from LLaMA architectures.` Given that GaLore runs SVD only every 300 steps, this would make GaLore computationally more efficient than the proposed method. Technical Quality: 3 Clarity: 3 Questions for Authors: It has been observed that scaling up LLMs is nontrivial also from a numerical stability point of view; e.g., in quantization, starting with 7B models, outliers become much more of a problem than they are for smaller models. As such, it would be great to see this method applied to larger models, at least to 7B, ideally even larger.
If compute is a problem, maybe running fewer training steps than would be optimal could help, to at least have some proof-of-concept for the method at larger model scales. l.145: `this is a mild condition that can be satisfied e.g., when P_t is updated by (online) PCA on G_t` Is this really true, in the generality that is suggested here? If so, I'd like to see a formal argument. The paper only fixes the objective eq. (6), but leaves the choice of optimizer to the user, and my intuition says it should be possible to construct an example where minimizing (6) with a momentum-based optimizer could lead to $\langle G_t, P_t \rangle$ being zero without $G_t$ being zero. l 158, I'm not sure what "once" is supposed to mean here. Should it just be "if"? Regarding the convergence guarantee for arbitrary $P_t$, I think it would be illuminating to consider what happens in the case of an adversarial choice; for example, for SGD with momentum, choose $P_t = \arg\min_P \langle G_t, P P^\top M_t \rangle$, i.e., choose P so that it projects the momentum buffer _against_ the direction of the current gradient. Why does this choice of P not break convergence? l. 235 > In conclusion, given sufficient time and resources, higher ranks yield better performance for Online Subspace Descent. It is recommended to select the highest rank until the perplexity reduction saturates. Can this reliably be selected a priori, i.e., does picking a rank optimal for running optimization for, e.g., 10k steps, also result in the optimal rank for training until "convergence"? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Evaluations for production-size LLMs are missing. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: ## l.106 “Generally, Adam updates are memory-bound operations” - This is precisely the bottleneck that this family of online subspace descent optimizers aims to tackle. By projecting into a subspace, the memory footprint of Adam is greatly reduced, allowing us to schedule another operation concurrently. - In addition, since we run tensor-wise operations, the incremental memory consumption over running AdamW in the subspace is simply the backward pass of $P$ for a single tensor. This is reflected in our experiment on Llama 1B, where the peak memory increase compared with GaLore AdamW is only 0.4 GB, and that counts fragmentation and redundant intermediate saving in practice. ## l.106 “the same could be said for GaLore, you run your SVD in parallel to the main network” - It can definitely run SVD in parallel synchronously, but it won't benefit from doing so. Instead, it will slow down the specific training step the SVD is parallelized with, if the SVD of the latest gradient is computed. SVD uses Jacobi iterations until the principal components converge; in some cases it fails and has to restart. Overall, on the weight matrix sizes we care about, we observe that it is 20x-150x slower than running a single backward step + update, which makes it impossible to hide behind the AdamW operations. Waiting for the SVD to finish slows down training, whereas we essentially propose to distribute and amortize that workload into every step of training instead of doing it all at once and waiting at a single step. In our latest experiment on the 7B model, we also show that our method is 1.37x faster than baseline GaLore in wall clock time. - In another scenario, it is also possible to run SVD completely asynchronously and use $P_t$ computed from some arbitrarily old gradients to perform projections and updates. However, this becomes a very different class of algorithms from baseline GaLore.
It overly complicates the implementation in practice, since it needs to manage schedules for every tensor in the network. The SVD finish time can also vary unpredictably. Hence, if it runs in a non-blocking fashion, it is possible to have part of the network using a $P_t$ calculated from a different timestep. The implications of this are unknown, and it is a very interesting direction for exploration. Yet, given the limited span of this work, we would like to defer this discussion to future work. ## l.119 "Matrix multiplications are linear algebra routines" We meant to refer to the decomposition routines in torch.linalg. We will make this more specific in the revision. ## Re: "119-128 could do with some rewriting" - We agree that we can rewrite it to highlight the goal of doing online PCA, not just the saving of computational cost. ## l. 170 "There is no requirement on Γ here, besides that the derivative in (8) should exist. Γ does not appear in (8), so it isn't clear which derivative should exist." - The derivatives in (8) refer to the partial derivatives of $H_t$ that appear in Eq (8). We will make this specific in the revision. ## l. 223 "Given that GaLore runs SVD only every 300 steps, this would make GaLore computationally more efficient than the proposed method." Please refer to the above explanation on system efficiency/parallel implementations. Even when running GaLore's SVD only every 300 steps, it is still an unmitigable slowdown that no parallelization trick can help with. And as Figure 2 shows, the latency gap grows as the tensor shape grows. Meanwhile, the cost of online subspace descent can be hidden with a smart parallelization trick. Our latest experiment on the 7B model shows that our method is 1.37x faster than baseline GaLore in terms of wall clock time with exactly the same hardware setup. ## Re: "Larger model" We pretrained 7B LLaMA models from scratch on C4 for 10K steps (perplexity: lower is better). The results are consistent with our claims at production model scale.
| Method | Perplexity | Wall Clock Time (hours) | |--------|------------|-------------------------| | Galore | 51.21 | 9.7439 | | **Ours** | **43.72** | **7.1428** | ## l.145 "Adversarial projections" The loss $L_{G_t}(P)$ attains its global optimum only if $P$ spans the linear space of the top-$k$ eigenvectors of $G_t$ (the other eigenvectors are saddle points). Hence, if an optimizer minimizes $L_{G_t}(P)$ correctly, it should find the top-$k$ eigenvectors of $G_t$, which are zero iff $G_t = 0$. Regardless of the choice of $P_t$, the system yields a given Hamiltonian function and converges to some invariant set. But the invariant set coincides with the local optima of the objective function $f$ only when Assumption 4.4 is satisfied. The choice you provide may not satisfy Assumption 4.4. More concretely, the solution of $P_t = \arg\min_P \langle G_t, P P^\top M_t \rangle$ gives the opposite direction of $G_t$. The negative direction does no harm: it is still the same subspace even if the gradient has the opposite sign, and when it is projected back, it is still "gradient descent"; the two negative signs simply cancel out. The real adversarial scenario is $P_t = \arg\min_P |\langle G_t, P P^\top M_t \rangle|$, which projects into the complementary orthogonal subspace, i.e., $\langle G_t, P_t P_t^\top M_t \rangle = 0$. This goes against ii) in our Assumption 4.4, which states that we cannot pick $P$ such that the projected gradient $P^\top G$ is always 0. Otherwise, it just doesn't update the model weights at all, which defeats the purpose. ## l 158 "I'm not sure what "once" is supposed to mean here. Should it just be "if"?" Yes, it should be "if". We will correct that in the revision. ## l. 235 "choices of rank" Just in terms of speed of convergence, in our ablation study within 10K steps, the higher the rank, the faster it goes. Eventually, if it is run long enough, our theory indicates that it will always converge. However, in practice, it might be difficult to observe that due to resource constraints.
So as a safe recommendation, if it can be afforded, choose a higher rank. --- Rebuttal Comment 1.1: Comment: Thank you for the response, which adequately addresses the concerns raised in my review. With the additional clarifications as described, and an additional experiment in the regime where memory-efficient optimizers start to become very important and useful, I think this is a good paper that, were I to see it for the first time at the conference, I probably would point out to my colleagues. I will raise my score to an "Accept" rating.
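As a rough illustration of the SVD-versus-online-PCA tradeoff discussed in this thread: the subspace reached by repeated cheap matmul-plus-QR steps (a power-iteration-style stand-in for the online PCA rule, under assumed shapes and names of our own) captures essentially the same gradient energy as the top-$r$ left singular subspace that a GaLore-style full SVD computes, while amortizing the cost across many inexpensive updates.

```python
import numpy as np

rng = np.random.default_rng(0)
# A gradient-like matrix with a decaying spectrum, shape (m, n).
m, n, r = 64, 32, 4
G = rng.standard_normal((m, 16)) @ rng.standard_normal((16, n))

# GaLore-style projection: top-r left singular vectors from a full SVD.
U, _, _ = np.linalg.svd(G, full_matrices=False)
P_svd = U[:, :r]

# Online alternative: repeated cheap matmul + QR steps (power iteration).
P, _ = np.linalg.qr(rng.standard_normal((m, r)))
for _ in range(200):
    P, _ = np.linalg.qr(G @ (G.T @ P))

# Energy captured by each projection; a ratio near 1 means the subspaces agree.
energy_svd = np.linalg.norm(P_svd.T @ G) ** 2
energy_online = np.linalg.norm(P.T @ G) ** 2
ratio = energy_online / energy_svd
```

Each iterative step here costs only a few matmuls on small matrices, which is the kind of workload that can be overlapped with the rest of training, in contrast to a blocking SVD call.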
Rebuttal 1: Rebuttal: We really appreciate the reviewers' constructive reviews and suggestions. Here we summarize and highlight our responses to a few main points of concern. We hope this helps address the majority of the questions. ## Re: “System efficiency compared to Galore” (NLbB, FtMV, RBsr) In terms of speed, online subspace descent's additional cost can be easily hidden by simple parallelization, whereas GaLore's cannot. As model size increases, the gap in latency between our method and GaLore becomes larger. An additional experiment shows that, on a 7B model with the same hardware setup, our method is up to **1.37x** faster than GaLore. In terms of memory, we have flexibility in choosing different optimizers for the updates of $P_t$. Choosing AdamW admittedly increases memory consumption slightly over GaLore; we chose to start with AdamW due to its robustness to hyperparameters. However, an additional experiment on the 7B model shows that SGD also shares similar properties and can indeed be used for $P_t$ updates, incurring no extra memory cost over GaLore. ## Re: “Larger scale experiment” (NLbB) We pretrained 7B LLaMA models from scratch on C4 for 10K steps (perplexity: lower is better). The results are still consistent with our claims at production model scale. $P_t$ is updated by SGD. | Method | Perplexity | Wall Clock Time (hours) | |--------|------------|-------------------------| | Galore | 51.21 | 9.7439 | | **Ours** | **43.72** | **7.1428** | ## Re: “Downstream tasks” (RBsr) We performed standardized GLUE evaluation for the above two 7B checkpoints with eval-harness.
| Method | MRPC | RTE | SST2 | MNLI | QNLI | QQP | AVG | |--------|-------|-------|-------|-------|-------|-------|--------| | GaLore | 0.6838| **0.5018**| 0.5183| 0.3506| 0.4946| 0.3682| 0.4862 | | **Ours** | **0.6982**| 0.4901| **0.5233**| **0.3654**| **0.5142**| **0.3795**| **0.4951**| Both CoLA and STS-B require further fine-tuning (even Meta's Llama-2 7B does not get meaningful results), so we exclude them from the comparison. ## Re: “Motivation for the Lyapunov Analysis” (FtMV, 6zai) The rough intuition is that $P_t$ serves as a kind of preconditioning matrix in the Hamiltonian system. But to arrive at the precise mathematical conclusion, we find that the best and quickest way to understand it is through the derivation in Eq (8), together with physical understandings of Hamiltonian systems (e.g., that the Hamiltonian consists of potential energy and kinetic energy, and the kinetic energy term differs across optimizers).
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Are Uncertainty Quantification Capabilities of Evidential Deep Learning a Mirage?
Accept (poster)
Summary: This paper highlights problems with the Evidential Deep Learning (EDL) framework, analyzes these problems, and proposes possible solutions for some of them. The paper provides a new taxonomy and a unifying objective function for a wide range of EDL methods. The authors identify that aleatoric and epistemic uncertainty estimates of EDL methods do not conform to their textbook definitions and provide a bagging distillation technique that improves the learned epistemic uncertainty. Strengths: - The range of EDL methods considered in theoretical arguments and empirical experiments is extensive which allows for convincing conclusions. - Proving properties of a novel unified objective function is a nice way to make general claims about multiple popular EDL methods. - The introduced taxonomy of EDL methods is easy to understand and extend. - The OOD datasets considered in the OOD detection benchmarks are diverse and lead to OOD tasks with varying characteristics. The average score describes OOD detection capabilities well. - The mathematical argumentations are substantial and clear. Weaknesses: - The paper has many typos and grammatical errors. - "Second, the VI loss in Eq. (3) and UCE loss Eq. (4) turn out to be equivalent to the RPriorNet objective [...]" While the RPriorNet method uses a held-out out-of-distribution (OOD) dataset and incorporates a positive $\gamma_\text{ood}$ scalar, the VI and regularized UCE objectives only utilize an in-distribution (ID) dataset during training. This is an essential difference between the two method classes that does not allow such a conclusion. However, all three methods are, indeed, special cases of the unified objective function in Eq. (6), utilizing different parts of it. - The distinguishing features (2-4) discussed in L180-181 are already discussed in the referenced survey of Ulmer et al. (2023; TMLR) (e.g., in Tables 1-2 for the use of OOD data). 
- The fact that epistemic uncertainty does not vanish in EDL methods is not surprising, as, considering a constant regularization factor $\lambda$, their explicit regularizer that drives them to a flat Dirichlet distribution trades off with the (deterministic) training signal from the labels. Therefore, the global minimizer will not be Bayes optimal even in the limit of infinite data. However, considering a regularization factor $\lambda$ (or, equivalently, a scaling factor $\nu = \lambda^{-1}$) that depends on the dataset size readily alleviates this problem. It also fixes the misalignment with the "dictionary definition" of aleatoric uncertainty. Based on these observations, the proof shows that keeping the regularization strength constant is a bad idea, but does not identify problems with the EDL framework. This conclusion is well known, however: in Bayesian learning, the effect of the prior also vanishes as more data is observed, and the prior can be viewed as a regularizer. - Similarly, the fact that in the empirical results the aleatoric uncertainty is not constant is expected: estimates derived from the model are always inherently model- and objective-dependent. This holds not only for EDL methods but also for any learned uncertainty estimator. - For many practical tasks (such as OOD detection, a widely used proxy task for epistemic uncertainty, or correctness prediction, a predictive uncertainty benchmark), the absolute scale of the epistemic uncertainty value does not matter. In particular, an estimator can be performant without being constrained to vanish in the limit of infinite data, e.g., if it reaches a strictly positive minimum in this limit. The observation of non-vanishing epistemic uncertainty does not affect the practical use of these uncertainty estimates. The authors do not discuss such practical implications, even though they use the AUROC metric for OOD detection evaluation which is scale- and shift-variant by design. 
- L242 contradicts L207. According to L207, the objective enforces $p_\psi(y \mid x)$ to fit $\frac{\mathbf{1}_C + \nu\boldsymbol{\eta}(x)}{C + \nu\mathbf{1}_C^\top\boldsymbol{\eta}(x)} \ne \boldsymbol{\eta}(x)$. This, in turn, makes the correspondence between L250 and L207 weaker. - In the abstract, the following is stated: "[we] reveal that the EDL methods can be better interpreted as an out-of-distribution detection algorithm based on energy-based-models". This claim is unsubstantiated, as the parallel with energy-based models is only drawn for EDL approaches that utilize ID + OOD mixture datasets for training. Having access to different training signals for ID and OOD samples naturally makes the learned models better OOD detectors (cf. Figure 5). However, as Charpentier et al. (2020; NeurIPS) point out, this assumption is unrealistic and fails to cover _all_ meaningful OOD regions. In particular, eleven out of the nineteen methods Ulmer et al. (2023; TMLR) consider do not require any form of OOD data. - The paragraph at L304 suggests that incorporating model uncertainty would result in the "dictionary definition" of distributional uncertainty (which remains to be provided). This claim is not backed theoretically and is only supported empirically via distillation. The success of distillation might be mostly explained by the use of model ensembles (or MC-Dropout) and not the framework of Eq. (6) used for theoretical reasoning by the authors. The bridge between the theoretical arguments and the empirical results L304 foreshadows is missing. - The improvement upon ensembling by further bootstrapping the training datasets is marginal. Due to the lack of error bars, Figure 3 can only be used to judge the result of a single seed, but the error bars of ensembles and bootstrapped ensembles might overlap significantly. The novelty of the use of bootstrapping is only in the context of EDL: Lakshminarayanan et al. 
(2017; NIPS) already considered bagged ensembles (and found that non-bagged variants performed better). - In the empirical evaluation, several standard evaluation metrics are missing, e.g., the ECE score and (strictly) proper scoring rules. While total uncertainty and predictive uncertainty are evaluated, the aleatoric uncertainty estimate is not benchmarked separately. This would be key to back the claim that EDL methods are (mostly) OOD detectors: perhaps they are just as aligned with aleatoric uncertainty as epistemic, which would refute this claim. - The authors do not benchmark against a baseline, e.g., a cross-entropy-trained deterministic network. This makes it hard to connect the results with methods outside the EDL framework. **Minor issues:** - "underlying distribution" appears twice in L79, making the sentence hard to parse. - $D(p\ \Vert\ q)$ is an ambiguous notation for the Kullback-Leibler divergence. For example, it is often used for general Bregman divergences. Consider using the notation $D_\text{KL}(p\ \Vert\ q)$. - $h(\cdot)$ is also an unusual notation for entropy. $\mathbb{H}(\cdot)$ or $\mathrm{H}(\cdot)$ are the commonly used notations. - The UCE loss was not proposed by Charpentier et al. (2020, NeurIPS). Their contribution was the Regularized UCE. UCE was proposed by Biloš et al. (2019, NeurIPS). Therefore, the notation $\ell_\text{UCE}$ is also questionable. - "Other choices of prior were also proposed to promote some other desired property." Such sentences are too ambiguous for an academic text. Consider naming (at least some of) the desired properties before pointing to further resources. Technical Quality: 2 Clarity: 2 Questions for Authors: - "We use validation accuracy as the criterion to save the optimal model, as we observe using validation loss leads to unstable results." What does "unstable results" mean here? 
Additionally, have the authors tried to choose the optimal model based on their evaluation criteria (e.g., the selective classification performance)? - Does bootstrapping lead to a qualitatively different uncertainty quantification behavior compared to regular ensemble training? This is hard to judge, as Fig. 10 only features Bootstrap. - The extrapolated epistemic uncertainties of methods not utilizing OOD data are seemingly random based on Fig. 5. What might be the reason for the high AUROC score for most of these methods in Fig. 3? In particular, might it be the case that the OOD datasets are so different from their ID variants that the task becomes easy even with ill-posed epistemic uncertainties? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors do not adequately discuss their work's limitations, as they only acknowledge that their unified objective function does not cover all EDL methods and that their Bootstrap Distillation method is costly. - Several standard evaluation metrics are missing from the empirical evaluation. - The aleatoric uncertainty estimates are not evaluated quantitatively. - The paper does not present the performance of a simple baseline (e.g., a deterministic neural network's predictive entropy), making the difficulty of the tasks hard to estimate. - The authors do not formally argue why incorporating model uncertainty would alleviate the problems with aleatoric and epistemic uncertainty estimation outlined in Section 5.1. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
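As an illustrative aside on the decomposition debated in this review (a sketch of the standard closed form, not code from the paper; the function name and example concentrations are made up), the total/aleatoric/epistemic split for a Dirichlet second-order distribution can be computed as:

```python
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainties(alpha):
    """Total, aleatoric, and epistemic uncertainty of Dir(pi; alpha)."""
    alpha = np.asarray(alpha, dtype=float)
    a0 = alpha.sum()
    p_bar = alpha / a0                       # mean predictive distribution
    total = -np.sum(p_bar * np.log(p_bar))   # entropy of the mean
    # Closed form for E_{pi ~ Dir(alpha)}[ H(Cat(pi)) ] via digamma
    aleatoric = -np.sum(p_bar * (digamma(alpha + 1.0) - digamma(a0 + 1.0)))
    epistemic = total - aleatoric            # mutual information I(y; pi)
    return total, aleatoric, epistemic

# Scaling the concentrations up ("more evidence") keeps the mean prediction
# fixed but shrinks the epistemic part toward zero.
_, _, ep_small = dirichlet_uncertainties([2.0, 1.0, 1.0])
_, _, ep_large = dirichlet_uncertainties([20.0, 10.0, 10.0])
print(ep_large < ep_small)  # True
```

The review's point is precisely that, for the criticized objectives, the learned concentrations do not grow with the dataset size, so the epistemic term computed this way does not vanish.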
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments and address them in detail below. **1. The difference between RPriorNet and VI, UCE cannot support the claim " the VI loss in Eq. (3) and UCE loss Eq. (4) turn out to be equivalent to the RPriorNet objective [...]"** We acknowledge that this sentence could be misleading, and we will revise it accordingly. What we want to convey is the resemblance in the in-distribution (ID) part of the objectives. Specifically, although VI loss and UCE loss were originally derived from different motivations, they are equivalent to the ID part of the RPriorNet objective in disguise. To our knowledge, this observation has not been made before, and we believe it provides a unified way to reason about a class of EDL methods in the literature. **2. EDL’s non-vanishing epistemic uncertainty is not surprising, and a data size dependent $\lambda$ can resolve this issue.** We agree that epistemic uncertainty would vanish with a choice of data size dependent $\lambda$’s. However, we would like to point out a few key considerations. First, the classical Bayesian approach does not require a data size-dependent prior. As long as the data size goes to infinity, the epistemic uncertainty will eventually vanish regardless of the choice of prior. Second, it remains unclear how to properly define the data size-dependent $\lambda$, in particular, its decay rate. Third, while it is true that one can always propose heuristic tricks to make the uncertainty of any algorithm align with the dictionary definition, this does not imply that such an algorithm is a faithful uncertainty quantifier. **3. 
EDL’s non-constant aleatoric uncertainty is expected, this holds for any uncertainty estimator.** Although it might be correct to say that aleatoric uncertainty is implicitly dependent on the model's capacity, randomness in training, data sampling, and other artifacts, what we are trying to argue is that, even under ideal scenarios (infinite sample size, overparameterized model), aleatoric uncertainty quantified by EDL methods can still be dependent on hyper-parameters ($\lambda$). **4. EDL’s non-vanishing epistemic uncertainty does not affect the practical use of this uncertainty in downstream tasks.** First, in this paper, we do not claim that EDL methods are ineffective for downstream tasks. Instead, we attempt to answer the following question: “Why and how do EDL methods work in practice?” As a first step, we characterize the global optimizer of several EDL methods and highlight several consequences, including the non-vanishing epistemic uncertainty. We believe that a clear understanding of the objectives used for "uncertainty quantification" is crucial to understanding how EDL methods actually behave, how to further improve these methods, and how to lead to a more reliable and grounded UQ approach. Second, while we agree that non-vanishing epistemic uncertainty might not be directly related to common UQ downstream tasks such as OOD detection, there are practical applications that require accurate quantification of epistemic uncertainty. For example, in active learning, we cannot rely on the EDL model’s epistemic uncertainty as a signal to indicate the model’s actual knowledge at a certain stage because it never vanishes. The $\lambda$-dependent aleatoric uncertainty also may have issues in certain applications. For example, suppose an application relies on multiple pre-trained models to make predictions and identify ambiguous samples. 
If the learned aleatoric uncertainty of each model is trained with different $\lambda$'s in an EDL framework, then they might not be on a comparable scale. **5. L242 contradicts L207.** In the paragraph, we consider the standard regime where $\nu$ is sufficiently large, so that $\alpha_0+\nu\eta(x)$ is approximately $\nu\eta(x)$, and the correspondence approximately holds in that regime. **6. The claim “the EDL methods can be better interpreted as an out-of-distribution detection algorithm based on energy-based-models” is unsubstantiated, as multiple EDL methods do not require any form of OOD data.** We acknowledge that such resemblance does not apply to all EDL methods, but only to those covered by the unified objective (6). The main claim of this section is that the unified objective (6) closely resembles the EBM-based OOD detection objective. When no OOD data is available, the resemblance still holds if we ignore the OOD regularization term. In that case, both objectives simply promote very large outputs for ID data. The empirical success of EDL methods that do not assume OOD data, such as PostNet, implies that, for some reason, OOD data may not be required for a detector to distinguish OOD inputs. We conjecture that this success is an artifact due to a combination of certain optimization phenomena and neural networks’ extrapolation ability. In fact, similar to the eleven out of the nineteen methods in Ulmer et al., there are also many effective OOD detection algorithms in the literature that do not require any OOD data during training. We acknowledge that this point was not made clear in the submission, and we will revise the section accordingly, especially in the case when no OOD data is available. **7. The claim about the benefit of incorporating model uncertainty is not backed theoretically.** Please refer to global response 1. **8. Missing standard evaluation metrics such as ECE.** Please refer to global response 2.(1) and Figure 13. **9. 
Missing aleatoric uncertainty benchmark.** Please refer to global response 2.(2) and Figures 14 and 15. **10. Missing benchmark against a cross-entropy-trained deterministic network.** Please refer to global response 2.(3). --- Rebuttal 2: Title: Rebuttal by Authors (Part II) Comment: **11. L180-181 are already discussed in the referenced survey of Ulmer et al. (2023; TMLR).** We acknowledge that lines 180-181 are also discussed in Ulmer et al., but these were not used to classify the EDL methods in Ulmer et al. We argue that the features mentioned in lines 180-181 can better classify the EDL methods with our unified view on the objective functions, and the dichotomy between “PriorNet-type methods” and “PostNet-type methods” in Ulmer et al. may not effectively contrast different EDL methods compared to our taxonomy. We will clarify our arguments accordingly in the revision. **12. The improvement upon ensemble distillation is marginal; The novelty of the use of bootstrapping is only in the context of EDL.** First, the detailed numbers with standard error are provided in Tables 2, 3, and 4 in the Appendix, which show that the improvement is statistically significant. Second, we did not claim that bootstrap distillation is superior to ensemble distillation. Instead, we aim to advocate for distillation-based methods in general: ensemble distillation is the existing algorithm, and we propose bootstrap distillation as yet another alternative. Third, the main contribution of this work is to offer an analysis to better understand how EDL methods behave and provide insights on how to improve them. Proposing a novel EDL algorithm that achieves SOTA downstream task performance is beyond the scope of this work and could be considered a separate publication. **13. Question 1.** For RPriorNet, which utilizes OOD data for training, both the training loss and validation loss can fluctuate significantly due to the conflict between in-distribution and OOD data optimization. 
In this case, validation accuracy is a better signal to ensure the saved model is well-optimized. For other EDL methods, we rely on validation loss and never use any downstream task performance to choose the optimal model. **14. Question 2.** We note that bootstrap distillation and ensemble distillation only differ in terms of the underlying stochastic algorithm, and the induced behavior depends on the choice of stochasticity. **15. Question 3.** First, as we mentioned in response 8, we conjecture that the success of EDL methods without OOD data is an artifact due to a combination of certain optimization phenomena and the extrapolation ability of neural networks. Second, the success is relative to other EDL methods, which actually suggests that other EDL methods relying on auxiliary techniques are not reliable for real data tasks (see Section 5.3). If these clarifications satisfactorily address the reviewer's concerns, we kindly ask if the reviewer would consider updating the score to reflect what we believe is a paper with noteworthy contributions to the community. --- Rebuttal 3: Comment: Dear Authors, thank you for your rebuttal. **The difference between RPriorNet and VI, UCE cannot support the claim " the VI loss in Eq. (3) and UCE loss Eq. (4) turn out to be equivalent to the RPriorNet objective [...]"** I understand the aim to highlight the resemblance in the ID part of the objectives. However, the paper neglects the importance of including or excluding the OOD data from the training objective. This choice is a key characteristic of EDL methods; thus, saying that the "VI loss [...] and UCE loss [...] turn out to be equivalent to the RPriorNet objective [...]" is misleading. **EDL’s non-vanishing epistemic uncertainty is not surprising, and a data size dependent λ can resolve this issue.** > the classical Bayesian approach does not require a data size-dependent prior Consider a Bayesian maximum a posteriori learning task using a Gaussian isotropic prior. 
Then $- \log p(\theta \mid \mathcal{D}) \equiv -\sum_{i=1}^n\log p(y_i \mid x_i, \theta) + \frac{\tau}{2}\Vert \theta \Vert_2^2$ where $\tau$ is the prior precision. Further dividing by $n$ results in an equivalent objective: this now matches Eq. (6) with the first term being an expectation and the second term being $\frac{\tau}{2n}\Vert \theta \Vert_2^2$, which depends on $n$. > As long as the data size goes to infinity, the epistemic uncertainty will eventually vanish regardless of the choice of prior. The same holds for the EDL objective with a data-dependent $\lambda$. The cited challenges of scheduling the decay of $\lambda$ only affect the theoretical argumentation for the well-behavedness of the objective. **EDL’s non-vanishing epistemic uncertainty does not affect the practical use of this uncertainty in downstream tasks.** The paper does not sufficiently investigate the question “Why and how do EDL methods work in practice?”. Investigating the global minimizer of a unified objective is indeed useful, but the consequences the authors enumerate do not notably affect the practical usability of EDL methods. In particular, active learning also requires an effective _ranking_ of inputs based on the importance of acquiring supervision on them. Shifting the uncertainty values by a constant does not affect this ranking. **The claim “the EDL methods can be better interpreted as an out-of-distribution detection algorithm based on energy-based-models” is unsubstantiated, as multiple EDL methods do not require any form of OOD data.** > We acknowledge that such resemblance applies to not all EDL methods, but to the methods covered by the unified objective (6). The resemblance applies only to EDL methods that leverage OOD data during learning. There is no such resemblance to EDL methods that only utilize ID data: "if we ignore the OOD regularization term", the learning dynamics become substantially different. Please refer to the first point of this reply. 
**The improvement upon ensemble distillation is marginal; The novelty of the use of bootstrapping is only in the context of EDL.** Consider including the error bars in the bar plots as well. **Additional experiments.** Thank you for adding the requested experiments. They clarify the behavior of different EDL methods and allow for easier comparability. **Conclusion.** The authors have addressed some of my concerns. In light of their clarifications, I am willing to increase my score from 4 to 5. However, due to the remaining points discussed above and the work's limited impact, I do not propose a higher score. --- Rebuttal Comment 3.1: Title: Request for updating the visibility Comment: We apologize for the inconvenience, but **could you update the visibility of your response to include `Reviewers Submitted`?** We suspect that, as our Rebuttal Part II was posted as "official comment" and the system didn't allow it to be visible to other reviewers at the time of submission, your comment inherited the limited visibility. We have updated the visibility of our Part II, so you should be able to update it too. We believe that making our discussion visible to other reviewers would help them as well. Sorry again for the inconvenience. --- Rebuttal 4: Comment: $$\newcommand{\piv}{\boldsymbol{\pi}}\newcommand{\av}{\boldsymbol{\alpha}}\newcommand{\bv}{\boldsymbol{\beta}}\newcommand{\E}{\mathbb{E}}$$ We thank the reviewer for engaging in the discussion and sharing additional comments. We are glad that some of the reviewer's concerns have been addressed, and we appreciate the reviewer raising the score. Here, we wish to clarify concerns in some of the additional comments as follows. **1. Regarding the resemblance between EDL methods without OOD data and EBM-based OOD detection algorithms.** The reviewer is correct that EDL methods without OOD data are not exactly equivalent to the specific method proposed by [24], which explicitly assumes OOD data during training. 
However, we wish to clarify that **we can still argue a close resemblance if we ignore the OOD regularization in both objectives** as follows: If we *remove* the OOD-data-dependent term in the EBM-based OOD detector’s objective, the objective becomes $$-\E_{p(x,y)}[\log p_\phi(y|x)] + \tau \E_{p(x)}[\max(0,E_\phi(x)-m_{\textsf{id}})^2],$$ where $E_\phi(x)= -\log \boldsymbol{1}_C^\intercal\bv_\phi(x)$. This still has a similar effect to the ID part of the unified objective in eqn. (6), which is in turn simplified in eqn. (8) as $$\E_{p(x)}[D(\mathsf{Dir}(\piv;\av_\psi(x)) \,\|\, \mathsf{Dir}(\piv; \av_0+\nu\boldsymbol{\eta}(x)))].$$ That is, both objectives promote $\bv_\phi(x)$ and $\av_\psi(x)$ to output large values for ID data, while being aligned with the underlying label distribution $\boldsymbol{\eta}(x)=(p(y|x))_{y=1,\ldots,C}$ when normalized, provided that $\nu \gg 1$. We will clarify these points in the revision of the manuscript. **2. Data size-dependent prior in the Bayesian framework.** The example provided by the reviewer, in fact, suggests that the $\frac{1}{N}$ term in the Bayesian framework is naturally induced, rather than heuristically crafted. Here, $\theta$ (the model parameter) in the Bayesian framework is a “global” random variable dependent on all the samples in dataset $\mathcal{D}$; when the model observes more samples, the prior belief (over the model parameter) is naturally dominated by the likelihood, and the posterior becomes more concentrated. However, EDL methods do not have such a nice property. The reason is straightforward: In the EDL framework, $\pi$ is a “local” variable dependent on each sample $x$ instead of shared by all samples in the dataset, and its uncertainty is induced by its own second-order distribution $p_{\psi}(\piv|x)$ (parameterized by network $\psi$). 
To make the same analogy as the Bayesian example provided by the reviewer, we need multiple labels for each $x$, i.e., $\mathcal{D}_x = \{(x, y_1), (x, y_2), \ldots, (x, y_{N_x})\}$, where $y_1, y_2, \dots, y_{N_x}$ are $N_x$ samples from the underlying first-order label distribution $p(y|x)$. Then, we could end up with a similar objective of the form $$-\log p(\piv|\mathcal{D}_x) = - \sum_{i=1}^{N_x} \log p(y_i|\piv) + \lambda\,(\text{prior term}).$$ In practice, however, for each $x$, we only observe a single label $y$, as $x$ is continuous and/or extremely high-dimensional like images, and thus, in the EDL methods, the effect of the prior does not naturally vanish as the sample size increases. A similar intuition has also been offered in some recent works, e.g., see the third paragraph of Section 3.3 in [4] and the second paragraph of Section 6 in [5]. - **On the practical usage of EDL methods** We agree that `active learning also requires an effective ranking of inputs based on the importance of acquiring supervision on them. Shifting the uncertainty values by a constant does not affect this ranking`. We wish to remark, however, that beyond the query strategy (such as ranking), another important problem in active learning is to decide when to stop querying new labeled samples to minimize the labeling budget. It is unclear whether a heuristic trick such as "stopping data acquisition once the uncertainty becomes constant" is reliable, because if the model queried bad samples in previous iterations or the model is not well-trained, its uncertainty can also be roughly constant compared to previous iterations without improvement. Stopping the model training in such scenarios is not desirable. Uncertainty learned on a relative scale would render such a decision even more difficult. 
Ideally, we believe that uncertainty accurately quantified on an absolute scale can better reflect how a machine learning system is truly lacking knowledge, especially for high-stakes applications, such as medical diagnosis and finance, where users need to rely on uncertainty measures to make important decisions. **References**: - [24] Energy-based Out-of-distribution Detection, Liu et al., NeurIPS 2020. - [4] Pitfalls of Epistemic Uncertainty Quantification through Loss Minimisation, Bengs et al., NeurIPS 2022. - [5] On Second-Order Scoring Rules for Epistemic Uncertainty Quantification, Bengs et al., ICML 2023. --- Rebuttal Comment 4.1: Comment: Dear Authors, Thank you for your answer. Your arguments have not resolved my remaining concerns; therefore, I retain my score. --- Reply to Comment 4.1.1: Comment: We appreciate your reply and engagement in the discussion. Could you kindly elaborate on the concerns that remain and how we can address them? We are eager to improve the quality of the paper, and your feedback would be greatly valuable.
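The resemblance argued in point 1 of the rebuttal above can be illustrated numerically. A minimal sketch (the toy logits are made-up values, not from the paper): with exponential evidence $\beta(x) = \exp(\text{logits})$ and temperature $T = 1$, the negative log total evidence $-\log \mathbf{1}_C^\intercal \boldsymbol{\beta}(x)$ coincides with the energy score of [24], so both scores rank a confident ID input below a small-output OOD input:

```python
import numpy as np

def energy_score(logits, T=1.0):
    # E(x) = -T * logsumexp(logits / T); lower energy <=> more in-distribution
    z = logits / T
    m = z.max()
    return -T * (m + np.log(np.sum(np.exp(z - m))))

def evidence_score(beta):
    # E_phi(x) = -log(1^T beta(x)) for non-negative evidence outputs beta
    return -np.log(np.sum(beta))

id_logits  = np.array([6.0, 1.0, 0.5])   # confident in-distribution output
ood_logits = np.array([0.2, 0.1, 0.3])   # small outputs on an OOD input
print(energy_score(id_logits) < energy_score(ood_logits))    # True

id_beta, ood_beta = np.exp(id_logits), np.exp(ood_logits)    # exp evidence
print(evidence_score(id_beta) < evidence_score(ood_beta))    # True
```

With this particular evidence parameterization the two scores are identical functions of the logits, which is the sense in which the rebuttal's "close resemblance" holds even without any OOD regularization term.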
Summary: The presented paper offers a (further) critique of EDL. They show that a range of EDL objective functions are largely equivalent by presenting a unified objective function that subsumes many existing objective functions. With this unified objective function, some problematic properties are shown that prove that these EDL methods cannot properly represent aleatoric and epistemic uncertainty. Following this, they show various failure modes of EDL. Additionally, they show that the observed good performance on OOD detection is not robust and may not work well with even moderate dimensionality. Lastly, they say that distillation-based methods are a better alternative and demonstrate an adaptation to a distillation-based method that outperforms existing methods. Strengths: - The authors expand upon an existing critique by showing that many seemingly different EDL methods are actually very similar, collapsing multiple different methods into a single conceptual understanding (originality). - Some of the practical weaknesses of EDL shown also appear to be novel, such as unchanged epistemic uncertainty under different dataset sizes (significance). - The claims surrounding the critique of EDL appear to be well supported and sound (quality). - The observation that EDL methods for OOD detection can be sensitive to changes in architecture and that they may fail in moderately high dimensionality gives suggestions on how OOD detection using density estimation generally may or may not work. This may have implications outside of EDL methods, and generally says something about how neural networks learn. (significance) Weaknesses: - The central point of critiquing EDL is not novel. The authors acknowledge that and do progress the critique, but I do think doubling down on the critique may have limited impact (originality). - The investigation into the newly proposed Bootstrap distillation in Section 7 is lacking. Leave-one-class-out OOD detection is not considered. 
The observation that the predicted epistemic uncertainty is monotonically increasing with accuracy is not sufficient to claim that it is “faithfully representing” epistemic uncertainty. (quality) - Some parts of the critique are about EDL’s failure in estimating epistemic uncertainty. However, the claim that EDL’s “distributional uncertainty” is “a kind of epistemic uncertainty” does not seem to have much support. Theoretically, that claim is not sound (as epistemic uncertainty is the uncertainty in the model parameters), and the authors provide no sources that make this claim. Without a good source of why EDL is expected to represent epistemic uncertainty, the critique of its epistemic uncertainty is meaningless. I was able to find some papers that make this claim: https://proceedings.neurips.cc/paper/2018/file/a981f2b708044d6fb4a71a1463242520-Paper.pdf, https://openreview.net/pdf?id=UI4K-I2ypG. Consider adding some discussion on the (mistaken) belief in the field that EDL's distributional uncertainty represents epistemic uncertainty. (clarity/quality) - The density of the paper (many experiments and findings for only 9 pages) means a large part of the methods and results are pushed to the appendix. As a result, the implementation details are not always clear. How distillation-based methods work is not sufficiently explained. (clarity) - The newly proposed bootstrap-distillation method is unlikely to have a meaningful impact considering its limited theoretical argumentation, and the limitations of that method are minimally explored. (significance) Technical Quality: 3 Clarity: 2 Questions for Authors: - The bootstrap-distillation method seems to have limited relation to the current work. Distillation methods in general seem relevant, but the newly proposed bootstrapping method compared to END^2 seems to be unnecessary and not completely discussed. 
Would the authors consider introducing the bootstrap-distillation method as a separate publication (with space for more theory and more experiments)? I think this would also allow some of the more important parts of the Appendix to be moved into the main body of the current work. - The demonstration in Fig 1.a is a bit confusing. How do you define epistemic uncertainty in EDL? E.g. PriorNet discusses “distributional uncertainty”, but does not consider this to be equivalent to epistemic uncertainty. Can you cite any sources that claim that the distributional uncertainty from EDL should be equivalent to epistemic uncertainty? And ideally that this kind of behavior of being “reducible” with more data should follow? - Line 105 says “distributional uncertainty” is a kind of “epistemic uncertainty”. What is this claim based on? Is there a source for this? It seems that p(psi|D) would still be the epistemic uncertainty in this decomposition (which is reduced to a single point). - It is correct that lambda affecting the predicted aleatoric uncertainty is problematic. However, what practical problems will this cause (and will this not cause) when aleatoric uncertainty is considered for downstream tasks? Identifying difficult, ambiguous or mislabelled samples may still be perfectly functional. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: EDL objective functions that are not subsumed by the unified objective function are discussed. The authors give a thorough critique of the failures of EDL, with the implication that Bayesian NNs would be the better alternative. However, they are not explicitly compared. Adding the behavior of a BNN in Figures 1.a, 4.a and 10 would demonstrate whether EDL methods are also worse than BNN-based methods. This is a limitation, and it is not discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
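The reviewer's question about how bootstrap distillation differs from plain ensemble distillation can be grounded with a toy sketch of the bagging step (hypothetical, not the paper's implementation; `bootstrap_members` and all values are illustrative): each member sees an i.i.d. resample of the data, and the spread of member predictions is the epistemic signal a distillation network would then be fit to.

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.binomial(1, 0.7, size=400)   # toy binary labels with p(y=1) = 0.7

def bootstrap_members(y, n_members, rng):
    """Each member estimates p(y=1) from an i.i.d. resample of its data."""
    n = len(y)
    return np.array([y[rng.integers(0, n, size=n)].mean()
                     for _ in range(n_members)])

members_small = bootstrap_members(y[:40], 50, rng)   # trained on little data
members_large = bootstrap_members(y, 50, rng)        # trained on 10x more data

# The spread across members shrinks roughly like sqrt(p(1-p)/n) as the
# dataset grows -- the vanishing-with-data behaviour the review argues
# EDL's built-in epistemic term lacks.
print(members_small.std(), members_large.std())
```

With ten times more data the member disagreement shrinks toward zero, which is exactly the reducible-with-data property the review asks of an epistemic estimate.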
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. We address them in detail below. **1. The central point of critiquing EDL is not novel.** While novelty is in the eye of the beholder, and critiquing EDL is indeed a primary focus of this work, our contribution is non-trivial compared to existing papers. First, we offer a more advanced understanding of EDL methods compared to existing critiques [4, 5, 6] (see Appendix A). Second, beyond the critique, this work is the first to attempt to remedy the limitations of EDL methods. Third, to the best of our knowledge, this work is also the first to unify multiple representative EDL methods into a single framework for theoretical analysis, supported by empirical justification with real data experiments. **2. Regarding the support and the source of the claim “distributional uncertainty is a kind of epistemic uncertainty”.** We respectfully point out that this is a common claim in the EDL literature. We provide a list of supporting evidence from the literature as follows: - Malinin et al. (NeurIPS 2018) [7] proposed the "forward Prior Network". In this work, they introduced the concept of "distributional uncertainty," which is different from "model uncertainty" in Bayesian methods. However, similar to "model uncertainty," "distributional uncertainty" also measures the model's lack of knowledge, so it is aligned with the definition of epistemic uncertainty. They mentioned, “Distributional uncertainty is an ’unknown-unknown’ - the model is unfamiliar with the test data and thus cannot confidently make predictions.” The follow-up work by the same group of authors, Malinin et al. (NeurIPS 2019) [8], also noted that “Knowledge Uncertainty, also known as epistemic uncertainty or distributional uncertainty, arises due to the model’s lack of understanding or knowledge about the input.” - Malinin et al. (ICLR 2020) [9] proposed the distillation-based method “END^2”. 
Therein, the authors again remarked that “Knowledge uncertainty, also known as epistemic uncertainty, or distributional uncertainty, is uncertainty due to a lack of understanding or knowledge on the part of the model regarding the current input for which the model is making a prediction.” - Charpentier et al. (NeurIPS 2020) [15], where the “Posterior Network” was proposed, uses "epistemic uncertainty" to refer to the "distributional uncertainty." The same usage can be found in its follow-up work (Charpentier et al., ICLR 2022) [20], in which the “Natural Posterior Network” was proposed. There is no mention of “distributional uncertainty” in either work. - Recently, several critiques of the EDL methods such as Bengs et al. (NeurIPS 2022) [4], Bengs et al. (ICML 2022) [5], Jürgens et al. (ICML 2024) [6] have adopted the same convention. In these papers, they all use “epistemic uncertainty” as the terminology to refer to EDL's “distributional uncertainty” and criticize that EDL cannot faithfully quantify epistemic uncertainty. We will include these sources in the manuscript to avoid any confusion. **3. Regarding paper presentation and clarity.** We acknowledge the content is a bit cluttered in the submission due to the page limit. In the revised manuscript, we will do our best to improve clarity and elaborate on the details with the additional page. **4. Lacking theoretical justification and further investigation of proposed Bootstrap distillation.** Please refer to global response 1. **5. Would the authors consider introducing the bootstrap-distillation method as a separate publication?** The main point of this work is to advocate for distillation-based EDL methods in general, and the proposed bootstrap-distillation method serves as another alternative in the ensemble approach to justify this argument. We agree with the reviewer that conducting a comprehensive analysis (both theoretical and empirical) of distillation-based EDL methods can be an interesting future work.
**6. What practical problems will $\lambda$-dependent aleatoric uncertainty cause in downstream tasks?** It is correct that identifying difficult, ambiguous, or mislabelled samples is still possible in practice, as long as the downstream task only relies on a relative scale of uncertainty values. However, there might be practical scenarios where the absolute value of uncertainty also matters. For example, suppose an application relies on multiple pre-trained models to make predictions and identify ambiguous samples. If the learned aleatoric uncertainty of each model is trained with different $\lambda$'s in an EDL framework, then they might not be on a comparable scale, which prohibits any direct comparison. **7. Missing the behavior of a Bayesian approach in Figures 1.a, 4.a and 10.** Please refer to global response 2.4. --- Rebuttal Comment 1.1: Comment: **1. The central point of critiquing EDL is not novel.** > this work is the first to attempt to remedy the limitations of EDL methods The bootstrap-distillation is this first attempt. However, it is rather weakly analysed. Effectively, it is a combination of BNN-based and EDL-based methods, and should therefore be compared extensively to both. > this work is also the first to unify multiple representative EDL methods into a single framework I acknowledge that this is a substantial contribution. I think it is interesting and shows insight into the various EDL methods and how they relate. However, this is a nice insight for a small subfield of EDL. I think this is why a 6 is fitting. **2. Regarding the support and the source of the claim “distributional uncertainty is a kind of epistemic uncertainty”.** I acknowledge that this (confused) claim is indeed somewhat common. I believe adding the proposed references will strengthen the argument. **4.
Lacking theoretical justification and further investigation of proposed Bootstrap distillation.** The additional results are beneficial, but do not constitute a substantial analysis comparing Bootstrap Distillation with BNN-based and EDL-based methods. **7. Missing the behavior of a Bayesian approach in Figures 1.a, 4.a and 10.** Thank you for adding these results. I agree that the behavior of the ensemble has been extensively studied, but I think adding these results to the manuscript gives a clear comparison. ## Conclusion I appreciate the authors engaging in the discussion. I agree that unifying the existing EDL approaches is interesting, but I doubt it is impactful. The same applies to the further investigation into EDL failure cases. The Bootstrap Distillation could be more impactful, but the current analysis remains not sufficient to show this. I stay at a “Weak accept” evaluation. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for appreciating the contribution of this work and providing valuable comments. We will carefully revise the manuscript to incorporate the suggestions.
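The λ-dependence of learned aleatoric uncertainty debated in this thread can be illustrated numerically: two Dirichlet outputs with the same predictive mean but different precisions (the quantity the regularizer effectively pins down) report different expected entropies. A self-contained sketch using the standard closed form for the expected categorical entropy under a Dirichlet (integer concentrations only, so the digamma values are exact harmonic sums):

```python
def digamma_int(n):
    """Digamma at a positive integer: psi(n) = -gamma + H_{n-1}."""
    EULER = 0.5772156649015329
    return -EULER + sum(1.0 / k for k in range(1, n))

def expected_entropy(alpha):
    """E[H(pi)] under Dirichlet(alpha), alpha integer-valued:
    psi(a0 + 1) - sum_k (a_k / a0) * psi(a_k + 1)."""
    a0 = sum(alpha)
    return digamma_int(a0 + 1) - sum(a / a0 * digamma_int(a + 1) for a in alpha)

# Same predictive mean [0.5, 0.5], different precisions 2 vs. 20:
low = expected_entropy([1, 1])     # ~0.50 nats
high = expected_entropy([10, 10])  # ~0.67 nats
```

Since a λ-like regularizer effectively fixes the precision, the reported aleatoric uncertainty shifts with it even when the predictive mean is unchanged, which is why uncertainties from models trained with different λ's are not directly comparable.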
Summary: The paper focuses on Evidential Deep Learning (EDL) models developed for uncertainty quantification in a computationally efficient manner. It identifies key limitations of the EDL methods: their inability to faithfully express both the epistemic and aleatoric uncertainties. The paper then proposes to integrate model uncertainty into the EDL-based models to address the limitations of existing EDL models with a bootstrap-distill method, which trains multiple models, each on a disjoint subset of the training data, to capture the model uncertainty. Experiments are carried out over multiple benchmark datasets to demonstrate the weaknesses of EDL models and the effectiveness of their proposed bootstrap-distill model. Strengths: - The paper is generally well-written and easy to follow - The paper discusses the effectiveness of the evidential deep learning models and highlights their limitations regarding uncertainty quantification. - The relationship of EDL methods with EBM-based OOD detectors further helps better understand the EDL based methods. - Experiments are carried out with benchmark datasets to empirically validate their claims. The experiments justify the claims regarding the weaknesses of EDL models. Weaknesses: - The proposed solution of introducing model uncertainty p(\psi|D) over the point estimate \delta(\psi - \psi^*) of the EDL approach to address the limitations seems to be a heuristic choice. Is the proposed solution theoretically guaranteed to faithfully express the epistemic/aleatoric uncertainties? - In my understanding, the EDL approach presents a computationally efficient Uncertainty Quantification (UQ) alternative to Bayesian approaches, as it only involves one forward pass. However, introducing the model uncertainty p(\psi|D) would incur the same computational expense as Bayesian approaches.
It is not clear if there is any benefit of the p(\psi|D)-based EDL framework in UQ over Bayesian approaches for UQ. - The proposed solution (bootstrap-Distil)'s aleatoric/epistemic uncertainty behavior is not well justified. Maybe, the uncertainty behavior of bootstrap could be added in Figure 1/4 to better justify the proposed model. Also Figure 10: Which dataset is used? How is the epistemic uncertainty quantified for the bootstrap method? How does it compare with standard UQ methods (e.g., Bayesian ensembling and dropout uncertainty)? Also, the range of epistemic uncertainty seems to be small (0.1 - 0.0001). Is there any range of the aleatoric/epistemic uncertainty values? Are the uncertainty values relative, or is there any meaning to the values? - Comparison with SOTA OOD detection methods is missing. Only EDL-based models are considered where the proposed model is superior. - Experiments and results are shown on simple datasets such as CIFAR10 and CIFAR100. It would be interesting to see the developed model's uncertainty results along with baseline models' results for more challenging datasets such as ImageNet. Technical Quality: 3 Clarity: 2 Questions for Authors: → Impact of hyperparameter lambda in EDL (Figure 2): I wonder how the OOD detection performance works for different evidential models for lambda = 0 (It seems as if using no regularization (\lambda = 0) might lead to the best performance.) → The proposed solution: Using the model distribution p(\psi|D) for EDL models instead of the point estimates would lead to three levels of uncertainty: model distribution through p(\psi|D), distributional uncertainty p(\pi|x,\psi), and aleatoric uncertainty through p(y|\pi). I wonder if there is any benefit of such a three-level framework over standard Bayesian approaches (i.e., without any distributional uncertainty)? → Will the proposed solution be theoretically better than the EDL methods, i.e.,
will the method guarantee desired epistemic/aleatoric uncertainty behavior? --> Recent works have shown learning deficiencies in evidential deep learning models [1], [2]. I wonder if the proposed model will be robust to such learning deficiencies. [1] Learn to accumulate evidence from all training samples: theory and practice (ICML 2023) [2] Uncertainty regularized evidential regression (AAAI 2024) → The claim that EDL methods perform strongly for OOD detection is not well justified. Do these EDL-based models outperform the SOTA OOD detection methods? Comparison with SOTA OOD detection methods will further strengthen the work. --> Uncertainty behavior of the developed model (bootstrap): Is the aleatoric/epistemic uncertainty of this model accurate on all datasets and settings? How does the aleatoric uncertainty behave with an increase in sample size for the baseline and the developed model? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The work is towards understanding the limitations of EDL-based models, and it is well presented. The limitations of their proposed model could be further discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
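The distillation step summarized in this review — fitting one Dirichlet head to the softmax outputs of teachers trained on disjoint data subsets — amounts to minimizing the Dirichlet negative log-likelihood of the teachers' outputs. A hedged pure-Python sketch of that per-input objective (the values and concentrations below are hypothetical; this is not the paper's implementation):

```python
from math import lgamma, log

def dirichlet_nll(alpha, teacher_pis):
    """Average negative log-likelihood of teacher softmax vectors under
    Dirichlet(alpha): the per-input distillation loss the student minimizes."""
    a0 = sum(alpha)
    log_norm = lgamma(a0) - sum(lgamma(a) for a in alpha)
    ll = 0.0
    for pi in teacher_pis:
        ll += log_norm + sum((a - 1) * log(max(p, 1e-12))
                             for a, p in zip(alpha, pi))
    return -ll / len(teacher_pis)

# Three bootstrap-trained teachers roughly agree that class 0 is ~70% likely.
teachers = [[0.7, 0.3], [0.72, 0.28], [0.68, 0.32]]
good = dirichlet_nll([7.0, 3.0], teachers)  # concentration matches teachers
bad = dirichlet_nll([3.0, 7.0], teachers)   # mismatched concentration
```

Minimizing this loss over the student's predicted concentrations makes low teacher disagreement map to high precision, which is how the distilled model inherits the ensemble's epistemic signal.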
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. We address them in detail below. **1. Is the proposed solution theoretically guaranteed to faithfully express the epistemic/aleatoric uncertainties?** Please refer to global response 1. **2. It is unclear if there is any benefit of the $p(\psi|D)$-based EDL framework over Bayesian approaches.** The main benefit of distillation-based EDL is its computational efficiency at inference time. Once the model is trained, it does not require multiple inference passes, unlike Bayesian or ensemble methods. **3. The proposed solution (bootstrap-Distil)'s aleatoric/epistemic uncertainty behavior is not well justified.** The bootstrap-distill method does not have any hyper-parameters in its objective, so the aleatoric uncertainty is not model/hyper-parameter dependent. Regarding the behavior of epistemic uncertainty, we provide the epistemic uncertainty vs. sample size curve on the CIFAR10 dataset in Figure 10. The behavior of uncertainty quantified by bootstrap, ensemble, and Bayesian approaches is similar, as the epistemic uncertainty is induced by model uncertainty. All these methods are well studied in the literature, and they are usually considered faithful uncertainty quantifiers. **4. Regarding the range of the aleatoric/epistemic uncertainty values. Are the uncertainty values relative, or is there any meaning to the values?** The range of aleatoric/epistemic uncertainty depends on the uncertainty metric. For example, epistemic uncertainty measured using "mutual information" will be non-negative. For typical downstream tasks such as OOD data detection, only the relative uncertainty matters for identifying OOD samples. However, there might be practical scenarios where the absolute value of uncertainty also matters.
For example, in active learning, we cannot rely on the EDL model’s epistemic uncertainty as a signal to indicate the model’s actual knowledge at a certain stage because it never vanishes. The $\lambda$-dependent aleatoric uncertainty may also have issues in certain applications. For example, suppose an application relies on multiple pre-trained models to make predictions and identify ambiguous samples. If the learned aleatoric uncertainty of each model is trained with different $\lambda$'s in an EDL framework, then they might not be on a comparable scale. **5. Missing comparison with SOTA OOD detection methods.** First, the focus of this work is NOT to propose a novel algorithm that outperforms all SOTA OOD detectors. Instead, the main contribution is to offer an analysis to better understand how a specific type of UQ method (EDL) behaves and to provide insights on how to mitigate its limitations. The bootstrap-distill approach serves as one of many possible solutions to address the issues of existing EDL methods. Furthermore, while comparing EDL methods with algorithms specifically designed for OOD detection might not be fair, we include an additional baseline comparison with a classical cross-entropy-trained network (see global response 2.3). **6. How does the OOD detection performance of EDL methods behave with $\lambda=0$?** We experimented with the $\lambda=0$ case, but observed that this simply encourages the model to output an extremely large value and leads to numerical errors during training. This is consistent with what Theorem 5.1 and Example 5.2 revealed, i.e., $\lambda=0$ corresponds to the target for ID data being $\infty$. **7. What are the benefits of the three-level uncertainty approach over Bayesian approaches?** First, our main claim in this paper is that distributional uncertainty alone, while ignoring model uncertainty, cannot capture uncertainty in a faithful manner.
Second, we suggest that distillation-based methods could be a straightforward solution to remedy these issues. While distillation-based EDL methods have lower computational complexity than Bayesian methods, we do not claim that distillation-based EDL methods are fundamentally superior, and further studies are warranted to compare the two approaches. **8. Will the proposed method be robust to the learning deficiencies issue in the EDL literature [1, 2]?** The learning deficiencies issue is mainly related to the activation function in the EDL model architecture, which might be a tangential problem to this work. **9. How does the aleatoric uncertainty behave with an increase in sample size?** Aleatoric uncertainty does not have asymptotic behavior. In practice, aleatoric uncertainty will converge to a constant when the model is trained with a sufficient number of samples. --- Rebuttal 2: Title: Additional Clarifications on the Rebuttal Comment: I thank the authors for the rebuttal response. However, many of my concerns remain. 2. It is unclear if there is any benefit of the bootstrap-based EDL framework over Bayesian approaches. Clarification regarding faster inference compared to ensembles: My current understanding is that M different models would need to be trained on subsets of the training dataset to obtain the distribution p(\psi|D). I believe that the training is significantly more expensive than EDL approaches, and as expensive as an ensemble of M models. I feel as if the inference would be as expensive as an ensemble of M models. I wonder if my understanding is correct? 3. The proposed solution (bootstrap-Distil)'s aleatoric/epistemic uncertainty behavior is not well justified. a) The claim: The bootstrap-distill method does not have any hyper-parameters in its objective --> Wouldn't the bootstrap-Distil method introduce a hyperparameter: the number of bootstrap samples? b) The clarifying question is: will its uncertainties show reasonable trends? For example,
what is the trend of aleatoric/epistemic uncertainty of bootstrap-Distil for a 2D Gaussian? I think for a model claiming fine-grained uncertainty quantification capabilities, both epistemic and aleatoric uncertainties should show meaningful trends. I may have missed it, but currently I'm not confident in the superiority of bootstrap-Distil's uncertainty (the claim: the aleatoric uncertainty is not model/hyper-parameter dependent, --> what is the uncertainty trend?) 9. How does the aleatoric uncertainty behave with an increase in sample size? Theoretically, aleatoric uncertainty should converge to a constant when the model is trained with a sufficient number of samples. The clarifying question was regarding how the EDL and the proposed model's aleatoric uncertainty behaves with a change in the number of samples. I believe observing aleatoric uncertainty for the EDL model/proposed model across the experiments/datasets should be straightforward and reveal insights into the model's uncertainty behavior. Unanswered claims: Is the aleatoric/epistemic uncertainty trend of this model accurate on all datasets and settings, or only on the simple CIFAR10 dataset (the epistemic uncertainty is shown only for one CIFAR10 experiment)? It is still not clear to me whether the proposed model's uncertainty would be reasonable on realistic practical datasets (e.g., ImageNet/Tiny-ImageNet/CIFAR100). --- Rebuttal Comment 2.1: Comment: We thank the reviewer for engaging in the discussion and providing additional comments. We would like to address the reviewer’s remaining concerns as follows. **1. Clarification regarding faster inference compared to ensembles.** We acknowledge that the distillation-based method incurs a higher runtime compared to classical EDL methods, as mentioned in the last sentence of the abstract. This additional runtime primarily stems from the construction of $p(\psi|\mathcal{D})$, which often requires training multiple models or performing multiple inferences.
However, the distillation-based method remains more computationally efficient than ensemble or Bayesian methods during inference at test time. For example, if $M$ ensemble models are trained on CIFAR-10 and distilled into a single EDL model, only this single EDL model is needed for OOD detection during inference. In contrast, the deep ensemble method would require storing and inferring all $M$ models. Overall, distillation-based EDL methods aim to emulate the desired properties of classical Bayesian or ensemble methods for faithfully quantifying uncertainties while being more computationally efficient (during inference). **2. Regarding hyper-parameters in its objective.** We agree with the reviewer that the number of bootstrap samples $M$ can be viewed as a hyperparameter, i.e., using $M$ samples of $\psi$ to approximate the stochastic algorithm $p(\psi|\mathcal{D})$. We wish to remark, however, that a larger $M$ would always lead to a better approximation of $p(\psi|\mathcal{D})$, and thus a practitioner can choose the largest $M$ within the computational budget. This is qualitatively different from the hyperparameter $\lambda$ in the EDL methods, which has to be tuned based on a certain criterion with an additional validation dataset. **3. Regarding both epistemic/aleatoric uncertainty trends of the bootstrap distillation method.** We appreciate the reviewer’s suggestion. We have conducted experiments per your suggestion and will include the trends of uncertainty measures with respect to sample size, across the different methods including bootstrap distillation, in the revision. Specifically, we will update Figure 4(a) to include bootstrap distillation and add two additional figures like Figure 4(a) and Figure 10 to demonstrate the trend of aleatoric uncertainty with respect to the sample size. As we are not allowed to include a link or pdf file during the discussion period, here we provide a verbal description of the trends to be added in the revision.
- For epistemic uncertainty, the bootstrap distillation exhibits a vanishing trend as the sample size increases even for Gaussian data (which was missing in Figure 4(a)), as in Figure 10. - For aleatoric uncertainty, the bootstrap distillation exhibits a decreasing trend as the sample size increases and converges to a constant on both datasets (Gaussian and CIFAR10), as expected. **4. How does the aleatoric uncertainty behave as sample size increases?** As alluded to above, a reasonable learned aleatoric uncertainty is expected to converge to a constant as the sample size increases. We plotted the aleatoric uncertainty trends for the same experimental settings in Figure 4 (Gaussian) and Figure 10 (CIFAR10), and found that other EDL methods also exhibit similar trends in general, except in a few cases. On Gaussian, all classical EDL methods and bootstrap distillation methods exhibit a decreasing trend of aleatoric uncertainty as sample size increases, but classical EDL methods show some fluctuation after the aleatoric uncertainty converges to a certain level. On CIFAR10, all classical EDL methods and bootstrap distillation methods exhibit a monotonically decreasing trend of aleatoric uncertainty, except RPriorNet. The above observations confirm that the main limitations of classical EDL methods are twofold: (1) non-vanishing epistemic uncertainties and (2) hyper-parameter dependent aleatoric uncertainty. We will add a discussion of these new results in the revision, to clarify in which scenarios EDL methods exhibit expected or undesirable behaviors. **5. Results on larger scale datasets.** We respectfully emphasize that the main focus of our submission is on the theoretical analyses of EDL methods to provide insights into their underlying principles and pitfalls, and to illustrate the practical implications with synthetic and standard real data benchmarks.
We agree with the reviewer, however, that including results on larger-scale datasets would further strengthen our claims. We will incorporate more experimental results in the revision to address this.
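The trend debated in this exchange — bootstrap-induced epistemic uncertainty shrinking as the dataset grows — can be reproduced on a toy estimation problem, with the spread of bootstrap-refit point estimates standing in for model uncertainty. This is a sketch under simplifying assumptions (a Bernoulli mean estimator rather than a neural network), not the paper's actual experiment:

```python
import numpy as np

def bootstrap_spread(data, n_boot, rng):
    """Std. dev. of a point estimate refit on bootstrap resamples:
    a simple proxy for epistemic (model) uncertainty."""
    estimates = [rng.choice(data, size=len(data), replace=True).mean()
                 for _ in range(n_boot)]
    return float(np.std(estimates))

rng = np.random.default_rng(0)
small = rng.binomial(1, 0.3, size=50).astype(float)    # |D| = 50
large = rng.binomial(1, 0.3, size=5000).astype(float)  # |D| = 5000

spread_small = bootstrap_spread(small, 200, rng)
spread_large = bootstrap_spread(large, 200, rng)
```

The spread scales roughly as 1/sqrt(|D|), so the bootstrap-induced epistemic uncertainty vanishes in the sample limit, which is the qualitative behavior the verbal description above reports for Figure 10.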
Summary: In this paper, the authors propose a novel analysis of existing evidential learning approaches, which have recently gained significant attention in the domains of uncertainty estimation and probabilistic modeling. The two major contributions of this work are as follows: First, the authors provide a clearer understanding of the asymptotic behavior of a wide class of EDL methods by unifying various objective functions. Second, they reveal that EDL methods can be better interpreted as out-of-distribution detection algorithms based on energy-based models. Specifically, the first contribution explains why evidential learning might have poor uncertainty quantification capabilities (or probabilistic modeling quality), while the second explains why EDL methods empirically show good performance on a number of downstream tasks. The authors validate their insights through extensive experiments on several image classification tasks (CIFARs and TinyImageNet) and EDL baselines. Strengths: * The paper is clearly written and easy to follow. The idea is intuitive and easy to grasp. The related work section provides an adequate discussion of existing approaches. The analysis narrative, with the presented drawbacks of existing methods, is very clear and easy to understand. * The authors derive a novel way to unify existing evidential learning approaches within Unified EDL Objectives, which provides a new perspective on these types of models and facilitates further analysis. As an example, the authors demonstrate that a typical EDL approach suffers from undesirable asymptotic behavior: its epistemic uncertainty does not vanish with an increasing training set and remains constant for ID data. * The analysis also provides an interesting finding that the EDL models could be considered within the energy-based OOD framework, which partially explains their success in a number of downstream tasks.
Weaknesses: * From the perspective of the experimental evaluation, I would be curious to see evidence that the behavior demonstrated in the paper would hold in other domains, such as text, graphs, and more complicated vision tasks (e.g., segmentation), not limited to the image classification task. Additionally, evidential learning also covers regression tasks, which would also be very beneficial for the paper to discuss. * The authors conducted their evaluation exclusively using ResNet18 and VGG16 models. Incorporating more recent models and state-of-the-art architectures would likely provide a more comprehensive and robust assessment of their approach, ensuring the validity of the results across a wider range of scenarios. The paper does not have any major flaws or weaknesses that would warrant rejection, and I tend to assess the paper positively. I think that the paper would be very interesting for the community since it provides several important insights about a popular family of uncertainty estimation approaches. Technical Quality: 3 Clarity: 3 Questions for Authors: * How do the proposed results translate to the regression case for evidential learning? Is there any way to extend the mentioned insights into this case? For example, it seems that the EBM intuition of OOD provided for classification potentially wouldn't work for the regression case. * The paper highlights a very interesting problem with EDL models, which have gained a lot of attention recently. In Section 6, the authors discuss the reason for these problems and the potential solution, which is stochastic modeling either through ensembling or dropout (plus the proposed bootstrapping). If stochasticity is a solution for EDL drawbacks, could we apply efficient ensembling alternatives [1, 2, 3, 4] to achieve the same results? I think it could be valuable to discuss this. [1] Wen, Yeming, Dustin Tran, and Jimmy Ba. "Batchensemble: an alternative approach to efficient ensemble and lifelong learning."
ICLR 2020 [2] Durasov, Nikita, et al. "Masksembles for uncertainty estimation." CVPR 2021 [3] Laurent, Olivier, et al. "Packed-ensembles for efficient uncertainty estimation." ICLR 2023 [4] Turkoglu, Mehmet Ozgur, et al. "Film-ensemble: Probabilistic deep learning via feature-wise linear modulation." NeurIPS 2022 Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful comments. We address them in detail below. **1. Whether EDL’s behavior demonstrated in the paper would hold in other modalities and domains.** We totally agree with the reviewer that analyzing EDL’s behavior in other domains and modalities is worth exploring. Since the classification setting for the vision domain is standard in the UQ literature, however, we focus on this setup to clearly demonstrate the implication of our analyses. While we leave such extensions as future work, we refer the reviewer to Appendix F.2, where we provide some analysis for the regression case. **2. Incorporating more recent models and state-of-the-art architectures for evaluation.** This is also a good suggestion. In fact, most EDL methods evaluate their performance on classical neural networks and standard benchmark datasets. Conducting a comprehensive empirical study to benchmark all existing EDL methods under a unified framework would also be an interesting future work. **3. How do the proposed results translate to the regression case? The EBM intuition of OOD wouldn't work for the regression case.** As the reviewer noted, the EBM intuition would not hold for the regression case. However, a similar observation can be made: the existing EDL methods for regression only aim to fit to a “fixed” uncertainty target. This is consistent with the observation made by a recent paper (Meinert et al., 2023). We kindly refer the reviewer to Appendix F.2, where we elaborate further on this case. **4. If stochasticity is a solution for EDL drawbacks, could we apply efficient ensembling alternatives to achieve the same results?** In this work, we argue that the EDL framework can be better used as a computational tool to emulate the behavior of classical ensemble approaches, while reducing their computational burden. 
In this context, using more advanced ensembling techniques can indeed boost the UQ performance of distillation-based EDL methods. We will add the relevant discussion in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for your thoughtful engagement in the rebuttal. I appreciate the authors' efforts in addressing my concerns and providing additional insights, particularly in discussing the potential extensions of their work to other domains and modalities. The clarification on the regression case and the consideration of efficient ensembling alternatives also add valuable context to your findings. While the paper focuses on standard classification settings within the vision domain, the discussions provided indicate promising directions for future work. Given these improvements and the solid contributions made, I am increasing my original score by 1 point. --- Reply to Comment 1.1.1: Comment: We thank the reviewer for the positive feedback and increased score. We also appreciate the insightful comments, which have given us valuable ideas for potential extensions of this work.
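As a concrete instance of the "efficient stochasticity" discussed in this thread, MC dropout produces multiple predictive samples from a single network, which could in principle serve as teacher samples from p(\psi|D) for distillation. A toy NumPy sketch with a hypothetical single linear layer (all names and values are illustrative, not from the paper):

```python
import numpy as np

def mc_dropout_passes(x, W, b, p_drop, m, rng):
    """Collect m stochastic softmax outputs from one linear layer with
    input dropout: cheap approximate samples from p(psi | D)."""
    outs = []
    for _ in range(m):
        mask = (rng.random(x.shape) >= p_drop).astype(float)
        h = (x * mask / (1.0 - p_drop)) @ W + b  # inverted-dropout scaling
        e = np.exp(h - h.max())                  # stable softmax
        outs.append(e / e.sum())
    return np.stack(outs)

rng = np.random.default_rng(1)
x = np.array([1.0, -0.5, 2.0, 0.3])
W = rng.normal(size=(4, 3))
b = np.zeros(3)
passes = mc_dropout_passes(x, W, b, p_drop=0.5, m=16, rng=rng)
```

The resulting `passes` array plays the same role as an ensemble's outputs: its per-class spread is the epistemic signal, and it could be fed to a Dirichlet distillation objective in place of bootstrap-trained teachers.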
Rebuttal 1: Rebuttal: # To ALL Reviewers: We thank all the reviewers for their effort in reviewing our paper and providing thoughtful comments. We would like to take this opportunity to further clarify our contribution and resolve some of the common concerns as follows: **1. Theoretical justification of proposed Bootstrap Distillation** As several reviewers asked about a further justification of the proposed bootstrap distillation in Section 6, we provide our response here. - We first emphasize that the main contribution of this paper is to analyze the common limitation of EDL methods with empirical support, show that standard EDL methods simply fit the UQ model to a fixed target (Section 5.1), and reveal that the learned uncertainties bear no statistical meaning. Since this is due to the framework ignoring model uncertainty, we show that incorporating model uncertainty in the EDL framework via distillation can alleviate the characterized issues at the cost of additional computational complexity. We propose the bootstrap procedure in this context as a new mechanism to induce model uncertainty. While we do not claim that bootstrap distillation (BD) is the best approach, our experiments show that BD attains good downstream task performance even compared to the existing distillation-based approaches END2 and S2D; see Tables 2, 3, and 4 in the Appendix. - This suggests that a further study of the distillation-based EDL framework is warranted. We acknowledge that BD lacks a rigorous justification, but conducting a comprehensive theoretical analysis of the method is rather nontrivial. As reviewer gUpP also suggested, such an analysis could be considered as a separate publication in the future.
- While it is relatively easy to argue why the epistemic uncertainty would vanish in the sample limit ($|\mathcal{D}|\to\infty$) with the bootstrap from the intuition (Section 6), we believe that a more sophisticated asymptotic analysis for vanishing epistemic uncertainty can be carried out with overparameterized neural networks, adopting a similar setting in [A]. Succinctly speaking, if the trained model's prediction can be shown to be asymptotically normal in the limit of the sample size, we can argue that the uncertainty captured by bootstrap behaves as Gaussian of vanishing variance in the sample limit. This implies a naturally vanishing (epistemic) uncertainty. We will carry out the analysis and discuss this in more detail in the revision. [A] Huang et al., Efficient Uncertainty Quantification and Reduction for Over-Parameterized Neural Networks, NeurIPS 2023. **2. New experiment Results** As some reviewers suggested, we provide additional experimental results to further strengthen our work. We summarize the results below and the accompanying figures can be found in the attached pdf: - (1) **Calibration Performance.** We measure and compare the Expected Calibration Error (ECE) score of different EDL methods in Figure 13. Calibration measures how well the model's confidence aligns with its prediction accuracy. Good calibration performance also indicates accurate “total uncertainty” quantification. From Figure 13, we observe a behavior similar to the selective classification task in Figure 12. Distillation-based methods achieve lower ECE scores on calibration and better AUROC scores on the selective classification task, which relies on total uncertainty. We also observe that Fisher-EDL achieves the lowest calibration error, aligning with its promising performance on the selective classification task in Figure 12. 
- (2) **Aleatoric Uncertainty Benchmark.** In Section 5.2 and Section H.2 of Appendix, we argue that using a Dirichlet-framework-independent objective that promotes the same behavior as the EBM-based OOD detection algorithm suffices for downstream task performance. We justify this through the epistemic uncertainty benchmark. We further examine the behavior of aleatoric uncertainty quantified by EDL methods. First, we provide a visualization of aleatoric uncertainty on 2D Gaussian data in Figure 14. Second, we evaluate EDL methods’ capability to detect ambiguous data (linear interpolation between two test images) using aleatoric uncertainty in Figure 15. The results in Figures 14 and 15, together with Section 5.2, further support that EDL's UQ capability (on both epistemic and aleatoric uncertainty) is not due to the Dirichlet framework but rather due to other auxiliary techniques. - (3) **Comparison with a cross-entropy-trained deterministic network.** Due to space limitations, we summarize the results in words as follows. On the OOD detection task, the cross-entropy-trained network achieves an average AUROC score of 91.6 (66.8) on CIFAR10 (CIFAR100). On the selective classification task, the cross-entropy-trained network achieves AUROC scores of 90.2 (84.1) on CIFAR10 (CIFAR100). The performance of this baseline is suboptimal compared to most EDL methods, especially on the OOD detection task, as expected. We will add the concrete results (using tables and figures) in the revision. - (4) **Behavior of deep ensemble’s epistemic uncertainty.** Although the behavior of classical Bayesian approaches and ensemble approaches has been well studied in the literature, we provide the epistemic uncertainty vs. sample size curve of the deep ensemble method in Figure 16. Unlike EDL methods, the epistemic uncertainty of the deep ensemble demonstrates a vanishing trend, as expected. Pdf: /pdf/fe2c60d52caf0f9c9d77bd4d7ce1e50fd62b9474.pdf
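The intuition in the rebuttal above, that bootstrap-induced epistemic uncertainty vanishes as $|\mathcal{D}|\to\infty$, can be illustrated with a minimal sketch. This is a toy example (a sample-mean estimator on synthetic Gaussian data, not the paper's neural-network setup); all names and the choice of estimator are illustrative assumptions.

```python
import numpy as np

def bootstrap_epistemic_std(data, n_boot=200, seed=0):
    """Spread across bootstrap estimates of the mean -- a toy stand-in for
    the epistemic uncertainty induced by the bootstrap procedure."""
    rng = np.random.default_rng(seed)
    estimates = [rng.choice(data, size=len(data), replace=True).mean()
                 for _ in range(n_boot)]
    return float(np.std(estimates))

rng = np.random.default_rng(42)
small = rng.normal(size=50)       # |D| = 50
large = rng.normal(size=5000)     # |D| = 5000

# Bootstrap-induced (epistemic) uncertainty shrinks roughly like 1/sqrt(|D|)
# and vanishes in the sample limit, unlike the fixed target of standard EDL.
assert bootstrap_epistemic_std(small) > bootstrap_epistemic_std(large)
```

Under the asymptotic-normality argument sketched in the rebuttal, the spread of the bootstrap estimates behaves like a Gaussian with variance of order $1/|\mathcal{D}|$, which is what the toy assertion checks.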
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
GACL: Exemplar-Free Generalized Analytic Continual Learning
Accept (poster)
Summary: This paper proposes a new exemplar-free GCIL technique named generalized analytic continual learning (GACL). It adopts analytic learning (a gradient-free training technique), and delivers an analytical (i.e., closed-form) solution to the GCIL scenario, which is derived via decomposing the incoming data into exposed and unexposed classes. The GACL attains a weight-invariant property, supporting an equivalence between the incremental learning and its joint training. This method is both theoretically and empirically validated. Strengths: 1. This is a very interesting technique that obtains an equivalence between the GCIL and the joint training. Although a pre-trained network is needed, the weight-invariant property is very valuable in this area. 2. Another key contribution of GACL is that it is both accurate and exemplar-free, and this is powerful and significant. 3. The approach is also well-motivated and clearly explained. 4. The paper provides a comprehensive literature review, effectively contextualizing its contribution and facilitating the reader's understanding. 5. It is theoretically very clear how the algorithm is partitioned into W_unexposed and W_ECLG. Weaknesses: 1. Could you explain why the avg accuracy on CIFAR-100 is much lower than the last accuracy? 2. You need to explain more regarding "weight-invariance" in Theorem 3.1. It is not very easy to follow for those who are not in this area of research. 3. In this paper, the authors assume that the pre-trained backbone is generalizable so that the model can be frozen throughout the learning process. However, there are also some cases where the pre-trained model cannot generalize to downstream tasks. It would also be interesting to see the scenario where the pre-trained backbone yields a large domain gap. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Continual learning has many variations (e.g., domain/task/class -incremental learning). 
Since the experiments only focus on generalized class-incremental learning, it would be better to adjust the title accordingly. 2. Since generalized CIL is also a specific case of class-incremental learning, it would be better to discuss why the proposed method is specifically designed for GCIL. For example, when the data stream only contains new classes, will GCIL still work? 3. As the authors discussed in the main paper, the proposed method is a super case of ACIL and other analytical methods. As many readers would like to know, the authors are encouraged to compare this method to other analytical methods to show how it stands against other analytical works. Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Replies to Reviewer tsV2 Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. ### W1. Why the avg accuracy on CIFAR-100 is much lower than the last accuracy? **Response to W_1**: As mentioned in Appendix G, tasks on CIFAR-100 are notably more complex and intricate, resembling a few-shot learning scenario. Consequently, the accuracy in the initial tasks tends to be lower, leading to a relatively low average accuracy. ### W2. Explain more regarding "weight-invariance" in Theorem 3.1. **Response to W_2**: The weights of the classifier obtained by this recursive update are exactly the same as the weights obtained by training the classifier from scratch on the entire data set. This *weight-invariant property* achieves a near “complete non-forgetting” and makes the GACL outperform all EFCIL methods and most replay-based methods. ### W3. It would also be interesting to see the scenario where the pre-trained backbone yields a large domain gap. **Response to W_3**: As a demonstration, a 5-phase experiment is conducted in the table below, with a DeiT-S/16 backbone pretrained on ImageNet-1k, and CIL with DTD [1]. 
| | Buffer | DTD | | |
|:----------------------:|:------:|:--------:|:-------:|:--------:|
| | | ACC_AUC | ACC_AVG | ACC_LAST |
| EWC++ (ECCV, 2018) | 2000 | 55.66 | 58.97 | 47.32 |
| ER (ICML, 2019) | 2000 | 51.78 | 57.87 | 50.99 |
| MVP (ICCV, 2023) | 2000 | 52.97 | 55.15 | 54.10 |
| EWC++ (ECCV, 2018) | 500 | 54.28 | 57.82 | 45.44 |
| ER (ICML, 2019) | 500 | 51.23 | 57.45 | 49.67 |
| MVP (ICCV, 2023) | 500 | 52.76 | 55.19 | 53.72 |
| LwF (ECCV, 2016) | 0 | 42.67 | 48.41 | 33.42 |
| L2P (CVPR, 2022) | 0 | 43.11 | 51.25 | 39.61 |
| DualPrompt (ECCV, 2022) | 0 | 42.12 | 50.17 | 39.29 |
| MVP (ICCV, 2023) | 0 | 46.88 | 49.98 | 44.41 |
| SLDA (IEEE/CVF, 2020) | 0 | 49.84 | 50.33 | 55.11 |
| **GACL** | **0** | **61.93** | **66.16** | **63.69** |

[1] Mircea Cimpoi, Subhransu Maji, Iasonas Kokkinos, Sammy Mohamed, Andrea Vedaldi; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2014, pp. 3606-3613. ### Q1. Since the experiments only focus on generalized class-incremental learning, it would be better to adjust the title accordingly. **Response to Q_1**: Thank you for pointing that out. We will modify our title to "Exemplar-Free Generalized Analytic Class Incremental Learning". ### Q2. Why is the proposed method specifically designed for generalized class-incremental learning (GCIL)? **Response to Q_2**: As demonstrated in Sections 3.2 and 3.3, we introduce the ECLG module that corresponds to the exposed-class gain, since classes can reappear in the GCIL setting. ### Q3. Will GCIL still work if the data stream contains only new classes? **Response to Q_3**: Yes. We employ Si-Blurry [2] to generate GCIL tasks for our experiments. As demonstrated in Section 4.4, when the disjoint class ratio $r_{\text{D}}$ is 0, the data stream contains only new classes. [2] Jun-Yeong Moon, Keon-Hee Park, Jung Uk Kim, Gyeong-Moon Park; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 
11731-11741 --- Rebuttal 2: Comment: I thank the authors for providing the rebuttal. After reading the rebuttal and other reviewers' comments, all my previous concerns have been adequately addressed. I will keep my positive rating. --- Rebuttal Comment 2.1: Title: Thank you for the response Comment: Thank you for taking the time to read our response! We are glad that our response addressed your concerns!
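The weight-invariant property discussed in the response to W2 can be sketched with generic recursive ridge regression: updating a linear classifier phase by phase via the Woodbury identity recovers exactly the joint (all-data) solution, with no old data revisited. This is a minimal illustration of the underlying analytic-learning idea under assumed toy data, not the GACL implementation (which additionally handles re-exposed classes via the ECLG module); all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, c, lam = 8, 3, 1.0                      # feature dim, classes, ridge term
chunks = [(rng.normal(size=(20, d)), rng.normal(size=(20, c)))
          for _ in range(5)]               # 5 incremental "phases"

# --- joint training: one ridge regression on all data at once ---
X = np.vstack([x for x, _ in chunks])
Y = np.vstack([y for _, y in chunks])
W_joint = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)

# --- recursive phase-by-phase update, never revisiting old data ---
P = np.eye(d) / lam                        # running (X^T X + lam*I)^{-1}
W = np.zeros((d, c))
for Xk, Yk in chunks:
    K = np.linalg.solve(np.eye(len(Xk)) + Xk @ P @ Xk.T, Xk @ P)
    P = P - P @ Xk.T @ K                   # Woodbury identity update
    W = W + P @ Xk.T @ (Yk - Xk @ W)       # classifier correction

assert np.allclose(W, W_joint)             # weight-invariant property
```

The final assertion is the "complete non-forgetting" claim in miniature: the recursively updated weights coincide with the weights of a classifier trained jointly on the entire data set.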
Summary: Class Incremental Learning (CIL) faces the problem of catastrophic forgetting when training a network, i.e., the model loses previous knowledge when learning a new task. Generalized CIL (GCIL) aims to address more realistic scenarios, but existing methods are either ineffective or violate data privacy. This paper proposes a new exemplar-free GCIL technique called Generalized Analytic Continual Learning (GACL), which provides a closed-form solution through analytic learning. GACL achieves equivalence between incremental learning and joint training by disaggregating the input data to keep the weights constant. This approach is theoretically validated and performs well across multiple datasets and settings. Strengths: * The paper is well-written and easy to follow. * The proposed method outperforms other methods on several GCIL benchmarks. * This paper provides a detailed proof of the theorem. Weaknesses: * The experimental setup is not clear. Given that the most recent method MVP [1] compared in this paper is a GCIL method for online CIL, were the experiments in this paper also conducted in an online scenario? * The technical improvements over the previous work on ACL are somewhat incremental. [1] Jun-Yeong Moon, Keon-Hee Park, Jung Uk Kim, and Gyeong-Moon Park. Online class incremental learning on stochastic blurry task boundary via mask and visual prompt tuning. ICCV 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What was the motivation for introducing ACL into GCIL? The claims in this paper seem to be an extended experiment of ACL in the GCIL scenario. It is difficult to generate further inspiration for the reader for solving the GCIL problem. 2. The proposed method utilizes the DeiT-S/16 as its backbone, but previous works typically use ViT-B/16. Regarding the comparison methodology, what backbone network was used to obtain the reported results? 3. 
What are the technical improvements and contributions of this paper over previous ACL approaches such as ACIL [2] and DS-AL [3]? [2] Huiping Zhuang, Zhenyu Weng, Hongxin Wei, RENCHUNZI XIE, Kar-Ann Toh, and Zhiping Lin. ACIL: Analytic class-incremental learning with absolute memorization and privacy protection. NeurIPS 2022 [3] Huiping Zhuang, Run He, Kai Tong, Ziqian Zeng, Cen Chen, and Zhiping Lin. DS-AL: A dual-stream analytic learning for exemplar-free class-incremental learning. AAAI 2024. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: It is hard to foresee any potential negative societal impact of this theoretical work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Replies to Reviewer qU32 Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. ### W1. Were the experiments in this paper also conducted in an online scenario? **Response to W_1**: Yes, we follow the settings in Si-Blurry [1], which is an online setting with blurry task boundaries. [1] Jun-Yeong Moon, Keon-Hee Park, Jung Uk Kim, Gyeong-Moon Park; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 11731-11741 ### W2. "The technical improvements on the previous work of ACL are somewhat incremental." and "What are the technical improvements and contributions of this paper over previous ACL approaches such as ACIL and DS-AL?" **Response to W_2 and Q_3**: We are very sorry that our delivery of GACL gives a "trivial extension" impression. In fact, the GACL is developed with nontrivial and sophisticated manipulations beyond existing ACL techniques. According to the derivation in lines 457-484, the solution **CAN NOT** be trivially extended from ACIL or its variants (the derivation takes **2 whole pages** in the appendix). We **deliberately** show the connection to the existing ACL techniques, to **better illustrate** the weight-invariant property (which has been recognized in ACIL, GKEAL). We will mark additional highlights in this part to avoid such an "incremental" impression. ### Q1. The motivation for introducing ACL into GCIL. **Response to Q_1**: Many real-world datasets follow GCIL tasks, such as the autonomous driving dataset SODA10M [2], the IoT dataset described in [3], and the real-world e-commerce service dataset described by [4]. GCIL simulates real-world incremental learning, where the distributions of data categories and sizes can be unknown in a given task. ACL techniques, emerging as a new CIL branch, have a very appealing weight-invariant property in the CIL community, which **works magically especially for large-phase CIL tasks**. 
GCIL tasks (e.g., the Si-Blurry setting) are usually online (very large-phase), where the ACL technique can thrive. Existing ACL methods are specifically designed for CIL tasks only, and **CAN NOT** process GCIL ones, and the extension is **NOT** trivial (see derivations in lines 457-484), hence the motivation of GACL. [2] Han, Jianhua, et al. "SODA10M: A large-scale 2D self/semi-supervised object detection dataset for autonomous driving." *arXiv preprint arXiv:2106.11118* (2021). [3] Wen, Zhenyu, et al. "Fog orchestration for IoT services: issues, challenges and directions." *IEEE Internet Computing* 21.2 (2017): 16-24. [4] Bang, Jihwan, et al. "Rainbow memory: Continual learning with a memory of diverse samples." *Proceedings of the IEEE/CVF conference on computer vision and pattern recognition*. 2021. ### Q2. What backbone network was used to obtain the reported results? **Response to Q_2**: As mentioned in Section 4.1, all experiments were conducted using the same pre-trained backbone. We utilize the DeiT-S/16 as our backbone and pre-train the backbone on 611 ImageNet classes after excluding 389 classes that overlap with CIFAR and Tiny-ImageNet to prevent data leakage. --- Rebuttal 2: Title: Official Comment by Reviewer Comment: Thanks to the authors for the response. This rebuttal addresses my concerns well. It fully explains the motivation for introducing ACL to address GCIL and the improvements this paper makes to existing ACL methods in response to the GCIL problem. Taking into account the comments of other reviewers and the authors' rebuttal, I decide to increase my rating. --- Rebuttal Comment 2.1: Title: Thank you for the response Comment: Thank you for taking the time to read our response and increasing your score! We are glad to hear that the response addressed your concern.
Summary: This paper proposes a new exemplar-free generalized continual learning (GCIL) technique, named generalized analytic continual learning (GACL). It does not depend on gradient-based training, which avoids the task-recency bias that leads to the forgetting issue. It also delivers a closed-form solution to the GCIL scenario that provides solutions identical to its joint training. Extensive experiments demonstrate that the proposed GACL achieves consistently leading performance. Strengths: - The paper proposes a very effective method that completely avoids forgetting via a gradient-free training technique. - Extensive experiments are included showing that GACL works well overall on the three datasets compared with many studies. - Ablation studies are conducted that clarify the differences among datasets and the reason why GACL is slightly worse than other baselines at early tasks. They also analyzed the contributions of the ECLG module and its robustness. - The paper is well written and easy to read. Weaknesses: - Missing baselines. As described in Section 2, many ACL methods have been proposed. Comparing GACL with those methods (especially RanPAC) would derive more solid results. - Comparison to joint training is not included. Although it is argued that GACL can bridge the gap between continual learning and joint training, such a result is not provided in the experiments. Technical Quality: 3 Clarity: 4 Questions for Authors: - I believe that the proposed gradient-free training scheme is beneficial also from the perspective of training speed. Does GACL run faster than other (ACL) methods? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: - As described, GACL cannot update backbone weights, which can be a problem. However, as new backbones emerge frequently, showing that GACL can work well not only with the used DeiT-S/16 but also with other backbones would make the arguments more convincing. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Replies to Reviewer Xb3R Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. ### 1. Comparing GACL with those methods (especially RanPAC) would derive more solid results. **Response to W_1**: Thank you for pointing out this important reference, as they both adopt closed-form solutions! RanPAC emphasizes random projection while our GACL focuses on the recursive formulation. As RanPAC's code is for CIL tasks, our settings (e.g., Si-Blurry) are very different, and it utilizes a different backbone, we are working on reproducing its results while facing several technical issues. We are sorry that we may miss the rebuttal deadline before providing the comparison. We will try to provide it during the discussion period. Thanks! ### 2. Comparison to joint training. Although it is argued that GACL can bridge the gap between continual learning and joint training, such a result is not provided in the experiments. **Response to W_2**: As suggested, we have included the last-phase accuracy of GACL and its joint-training accuracy on the CIFAR-100 dataset with different buffer sizes. Results in the table below show that the accuracy of GACL is indeed the same as that of joint training (the mild differences are caused by quantization).

| Buffer Size | Joint-training (%) | G-ACL (%) |
| :---------: | :----------------: | :-------: |
| 1000 | $66.83\pm0.14$ | $66.73\pm0.16$ |
| 2000 | $69.22\pm0.14$ | $69.22\pm0.23$ |

### 3. Does GACL run faster than other (ACL) methods? **Response to Q_1**: In general, our GACL gives rather strong performance regarding speed. The table below further records GACL's training time in seconds compared with EFCIL methods and replay-based methods with a memory size of 2000. GACL runs faster than many baselines, except a few candidates such as SLDA, on all three datasets. 
For SLDA, only the classifier and the autocorrelation memory matrix $\mathbf{R}$ are updated, leading to a smaller number of trainable parameters compared with the baselines trained in a back-propagation manner.

| Methods | EFCIL | CIFAR-100 (s) | ImageNet-R (s) | Tiny-ImageNet (s) |
|--------------|-------------|-----------|------------|---------------|
| EM | ❌ | >2 days | >2 days | >2 days |
| MVP-R | ❌ | 717 | 527 | 1597 |
| ER | ❌ | 369 | 330 | 715 |
| EWC++ | ❌ | 650 | 391 | 1356 |
| LwF | ✅ | 334 | 229 | 862 |
| L2P | ✅ | 651 | 285 | 1246 |
| DualPrompt | ✅ | 656 | 332 | 1294 |
| MVP | ✅ | 628 | 300 | 1345 |
| SLDA | ✅ | 401 | 284 | 915 |
| GACL | ✅ | 611 | 321 | 1246 |

--- Rebuttal Comment 1.1: Title: Comparison update Comment: Dear Reviewer, We would like to provide our updates during the discussion period, in which we have still been working on the response regarding the comparison with RanPAC. We have managed to produce the results in comparison with RanPAC as follows. Overall, RanPAC is a very strong-performing counterpart with average and final results comparable to ours. However, it is **NOT designed for GCIL/online tasks**, and hence **gives limited performance in ACC_AUC** (area under the curve of accuracy), which measures the online performance. For instance, on CIFAR-100, GACL obtains 66.79% while RanPAC only has 51.62%.

| | Buffer | CIFAR100 | | | ImageNet-R | | | Tiny-ImageNet | | |
|:----------------------:|:------:|:--------:|:-------:|:-------:|:-------------:|:-------:|:-------:|:-------------:|:--------:|:-------:|
| | | ACC_AUC | ACC_AVG | ACC_LAST | ACC_AUC | ACC_AVG | ACC_LAST | ACC_AUC | ACC_AVG | ACC_LAST |
| RanPAC (NeurIPS, 2023) | 0 | 51.62 | 63.45 | 77.83 | 42.39 | 61.68 | 57.77 | 62.80 | 82.80 | 78.54 |
| GACL | 0 | 66.79 | 63.94 | 77.34 | 57.02 | 62.26 | 57.68 | 77.65 | 82.95 | 77.80 |

Let us know if more clarification is required!
Summary: This paper deals with the generalized CIL (GCIL) problem, where incoming data have mixed data categories and an unknown sample size distribution. The authors propose generalized analytic continual learning (GACL), which adopts a pre-trained, fixed backbone and uses least squares to get a closed-form solution. Experiments verify that GACL achieves better performance compared to other baselines. Strengths: * Generalized class-incremental learning is an important and valuable problem. * This paper is generally clear and easy to follow. * Experiments show that GACL is effective in addressing forgetting in GCIL. Weaknesses: * Main concern. Most of the content on page 4 and page 5 (e.g., Theorem 3.1) in Section 3 is overly similar to existing ACL works [7, 8]. The main difference claimed by the authors, i.e., the ECLG module that corresponds to the exposed-class gain, is a trivial extension of the original ACL [7]. * Experiments. This paper adopts the pre-trained backbone developed in [40, 41]. However, many baselines (Table 2 in [40]) are missing in the experiments. The GACL method is similar to RanPAC [25]. Experiments are needed to compare the performance of GACL with RanPAC. * Comparison of the memory cost of the weights in the buffer layer against other replay-based methods. Technical Quality: 3 Clarity: 3 Questions for Authors: * How and why does the dimensionality of the random projection affect the performance? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Replies to Reviewer vKCe Thank you for your constructive and detailed feedback. We provide detailed responses to your concerns below. ### W1. Most of the content on page 4 and page 5 (e.g., Theorem 3.1) in Section 3 is overly similar to existing ACL works. The main difference claimed by the authors, i.e., the ECLG module that corresponds to the exposed-class gain, is a trivial extension of the original ACL. **Response to W_1**: Thank you for your valuable comment! We acknowledge that there are certain similarities between ACIL and the GACL, mostly in the beginning part. However, this part is to **generate embedding** (e.g., Eq. 2, Eq. 3) and the **joint-learning objective function formulation** (e.g., Eq. 4, Eq. 5), both of which are **NOT** the contributions of this paper. These are more like **preliminary** contents (e.g., like introducing what a loss function is before designing a new one). Our key contributions start from Eq. 6 and Eq. 8, which separate the derivation into an exposed-unexposed pair. The derivation holds the key difference from ACL techniques (see lines 457-484). We apologize that placing these items in the appendix has led to such confusion, and will try to highlight the difference by moving some of them into the main text. Regarding the concern of the ECLG module being a trivial extension, we respectfully disagree. According to the derivation in lines 457-484, the solution **CAN NOT** be trivially extended from ACIL or its variants. We **deliberately extracted the ECLG** to show the connection to the existing ACL techniques, to **better illustrate** the weight-invariant property (which has been recognized in ACIL, GKEAL). We are very sorry to receive the "trivial extension" impression, and will mark additional highlights in this part to avoid such an impression. ### W2. Experiments are needed to compare the performance of GACL with RanPAC. 
**Response to W_2**: Thank you for pointing out this important reference, as they both adopt closed-form solutions! RanPAC emphasizes random projection while our GACL focuses on the recursive formulation. As RanPAC's code is for CIL tasks, our settings (e.g., Si-Blurry) are very different, and it utilizes a different backbone, we are working on reproducing its results while facing several technical issues. We are sorry that we may miss the rebuttal deadline before providing the comparison. We will try to provide it during the discussion period! ### W3. Comparison of the memory cost of the weights in the buffer layer against other replay-based methods. **Response to W_3**: GACL operates without the need for prior knowledge regarding the total number of classes. We denote the size of the features acquired after the feature extractor and buffer layer as "feature_size," and the number of classes learned at phase $k$ as "n_class." During the continual learning process, the classifier can progressively increase the number of classes in the head and simultaneously augment the dimension of $W_k$ (feature_size $\times$ n_class), while maintaining the size of $R_k$ as **feature_size $\times$ feature_size**. Compared with other replay-based methods, such as MVP [1] with a memory size of 500, the memory requirement is 500 $\times$ feature_size $\times$ feature_size $\times$ n_class, which **far exceeds the storage needed by GACL**. This is **without even accounting for model parameters and prompts**. [1] Jun-Yeong Moon, Keon-Hee Park, Jung Uk Kim, Gyeong-Moon Park; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 11731-11741 ### Q1. How and why does the dimensionality of the random projection affect the performance? 
**Response to Q_1**: Based on Cover's theorem [2], a promising approach to enhancing the separability of features from different domains is to project the features extracted by the pre-trained model into a higher-dimensional space using a non-linear projection. This non-linear, higher-dimensional projection can improve performance by increasing the separability of the features. [2] T. M. Cover, "Geometrical and Statistical Properties of Systems of Linear Inequalities with Applications in Pattern Recognition," in IEEE Transactions on Electronic Computers, vol. EC-14, no. 3, pp. 326-334, June 1965, doi: 10.1109/PGEC.1965.264137. --- Rebuttal Comment 1.1: Title: More updates Comment: Dear Reviewer, We would like to provide our updates during the discussion period, in which we have still been working on the response. 1) We have managed to produce the results in comparison with RanPAC as follows. Overall, RanPAC is a very strong-performing counterpart with average and final results comparable to ours. However, it is **NOT designed for GCIL/online tasks**, and hence **gives limited performance in ACC_AUC** (area under the curve of accuracy), which measures the online performance. For instance, on CIFAR-100, GACL obtains 66.79% while RanPAC only has 51.62%.

| | Buffer | CIFAR100 | | | ImageNet-R | | | Tiny-ImageNet | | |
|:----------------------:|:------:|:--------:|:-------:|:-------:|:-------------:|:-------:|:-------:|:-------------:|:--------:|:-------:|
| | | ACC_AUC | ACC_AVG | ACC_LAST | ACC_AUC | ACC_AVG | ACC_LAST | ACC_AUC | ACC_AVG | ACC_LAST |
| RanPAC (NeurIPS, 2023) | 0 | 51.62 | 63.45 | 77.83 | 42.39 | 61.68 | 57.77 | 62.80 | 82.80 | 78.54 |
| GACL | 0 | 66.79 | 63.94 | 77.34 | 57.02 | 62.26 | 57.68 | 77.65 | 82.95 | 77.80 |

2) Regarding your concern about novelty, we have clarified this in "**Response to W_1**" in our rebuttal. Reviewer qU32 also pointed out this issue, and **has acknowledged our clarification**. 
Could you have a look and let us know if more is required to clarify this?
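The Cover's-theorem argument in the response above can be illustrated with a minimal sketch: XOR-labeled points that no linear model can fit in their original 2-D space become perfectly fittable after a random non-linear (ReLU) projection to a higher dimension. The toy data, the projection width of 200, and the ReLU choice are illustrative assumptions, not the paper's actual buffer layer.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR-style labels: not linearly separable in the raw 2-D feature space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])

def fit_acc(feats, y):
    """Least-squares linear fit, then sign-accuracy on the training points."""
    w = np.linalg.lstsq(feats, y, rcond=None)[0]
    return float(np.mean(np.sign(feats @ w) == y))

assert fit_acc(X, y) < 1.0            # no linear fit exists in 2-D

# Random non-linear (ReLU) projection to 200-D, in the spirit of Cover's theorem.
proj, bias = rng.normal(size=(2, 200)), rng.normal(size=200)
H = np.maximum(X @ proj + bias, 0.0)  # random ReLU features
assert fit_acc(H, y) == 1.0           # now separable by a linear classifier
```

The same least-squares classifier fails on the raw features but succeeds after the random high-dimensional projection, which is the separability gain the response attributes to the projection dimensionality.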
Rebuttal 1: Rebuttal: # General Response We thank all the reviewers for their time, insightful suggestions and valuable comments. In summary, Reviewer vKCe, Reviewer Xb3R and Reviewer qU32 all appreciate that our writing is **clear** and **easy to follow**. Reviewer tsV2 appreciates that our method is **clear**, **well-motivated** and **valuable**. We provide point-by-point responses to all reviewers’ comments and concerns. On the other hand, reviewers also point out that our delivery of GACL could resemble existing ACL techniques, giving an unnecessary "trivial extension" impression. In this regard, we will mark the highlights of our GACL in formulation, derivation and contributions to avoid such an impression.
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Monoculture in Matching Markets
Accept (poster)
Summary: The authors are focused on matching markets in which different firms use a single algorithm / evaluation criterion (monoculture) vs. markets where different firms may each have different evaluation algorithms / criterion (polyculture). This can be seen as a substantial generalization of the wonderful work of Kleinberg and Raghavan [35] on monoculture in hiring with two firms. The authors first introduce the continuum matching market model introduced by Azevedo and Leshno [10]. Here, there is a continuum of applicants, and a finite number of firms. The authors make the assumptions that (1) each firm has the same fixed capacity, and (2) not all applicants will eventually be matched with a firm. The authors first review critical results in the existing continuum model. These include the fact that a stable matching corresponds to a particular cutoff vector, and subject to this cutoff vector, applicants always choose their highest preferred firm for which their estimated quality is higher than the cutoff for that firm. Next, the authors introduce their notion of mono and polyculture into this model. Intuitively, monoculture is where each firm has an identical estimate of the value of an applicant of type $\theta(v)$, given by $v + X$ for an $X$ drawn from some noise distribution $D$. This captures, for example, each firm using Chat GPT to evaluate the resumes of all applicants. In polyculture, each firm $i$ may have a different estimate $v + X_i$ for the value of applicants of type $\theta(v)$. The authors begin by proving that the cutoff characterization of stable matching is unique in the mono and polyculture settings (Lemma 2). This follows from the lattice structure of stable matchings. 
Then, in Proposition 3, they show that the probability of an applicant of type $\theta(v)$ being matched under polyculture is related to the maximum of $X_i$ over all firms' noisy estimates $X_i$, whereas under monoculture this probability is related only to $X$ (since all firms have identical estimates). We now move to the main results. In Theorem 1, the authors show that under polyculture, as the number of firms $m \to \infty$, the (firm-) optimal welfare can be achieved by the resulting matching. This does not hold for monoculture. In particular, under monoculture, the probability that an individual of type $\theta(v)$ is matched at all is constant for varying $m$. In Theorem 2, the authors examine applicant welfare. They show that applicants have a higher chance of being matched with their top choice under monoculture, but that for a subset of applicants of positive measure, the variance in whether they are matched or not is higher under monoculture than polyculture. This means that not all applicants are incentivized to prefer monoculture unconditionally. Finally, some extensions under a differential application access setting are provided. Intuitively, the authors show that more applications do not help applicants under monoculture but does under polyculture. Experiments complement most of the theoretical results, and also demonstrate that the uniform preference assumption is not essential to practical relevance of the results. Strengths: The paper is generally extremely well written, motivated, and clear. I also think that the work is already very important in the modern context in which universities and hiring managers may already be using one of only a handful of services to conduct automated applicant filtering. The work examines what this would potentially lead to in terms of macroeconomic market dynamics. 
The theoretical results are presented clearly, and I understood most even though I have not personally worked in the continuum matching model (have only worked in the discrete matching model). I appreciate that the authors also empirically investigate the (strong) assumption that all applicants' preferences over firms are drawn uniformly at random. The empirical results confirm that this is perhaps not a fundamental assumption, even though the (current) proofs critically hinge on it. This work certainly challenged my preconceived notion (“monoculture=bad”) in a fundamental way and may open a more general line of inquiry into monoculture more broadly. This paper was a pleasure to read, and I look forward to additional work from the authors. Weaknesses: Note that I did not carefully check the proofs. (W1) I think the introduction of the continuum model could be made a bit more clear, in particular the definition of applicant types. For example, there seems to be a small typo in lines 141-142: “The realization of θ(v) is their type, which lies in $\Theta \coloneqq \mathcal{R} \times \mathbb{R}^m$ is the set of applicant types,”. Further, I am not sure if this is a typo as well (in line 143): “$\succ^\theta$ is the preference ordering of v over firms…”. Do all applicants of value $v$ have the same type $\theta(v)$? That is, do all have the same preferences over firms? Or, do we draw different preferences uniformly at random for each individual of value $v$? These were not clear from just this introduction on the continuum model. (W2) The model considers identical noise across all applicant “types”. This is certainly a reasonable form of polyculture to analyze; however, in practice we may be more concerned with bias based on different “types” of individuals, i.e., historically underrepresented minorities having a skewed or higher-variance noise distribution. “Types” in this sense is (I believe) not captured by solely preference and firm quality estimates.
This is certainly less of a weakness and more of a direction for future work, but I think it is perhaps important to mention. The authors have some discussion in lines 350-352, but more could certainly be added earlier in the paper. (W3) I think that the assumptions made throughout the work are sprinkled throughout the paper. Having a collection of all assumptions, perhaps in the appendix, may help the reader better understand the limitations of the work. Minor: Most non-theorems from the main paper are referred to incorrectly in the appendix, e.g. Lemma 2 is mistakenly referred to as Proposition 2 in the appendix, and similarly Proposition 10 / Lemma 10, and Corollary 4 / Proposition 4. Technical Quality: 4 Clarity: 4 Questions for Authors: Q1: How does Lemma 2 (Equal Cutoffs Lemma) relate to Theorem 1 part 1 from Azevedo and Leshno [10], which says that if $\eta$ has full support, then there is a unique stable matching? Q2: Do we expect the maximum concentrating distribution assumption to hold in, e.g., the included experiments? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: I do not believe that the authors have a dedicated limitations section in their paper, which I recommend. Some limitations are sprinkled throughout the paper (e.g. lines 350-353), but perhaps more could be mentioned. In particular, how strong/important the technical assumptions are in practice could be explicitly discussed in a formal limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the thoughtful feedback, and are glad you enjoyed the paper. We answer your questions below: **Q1: How does Lemma 2 (Equal Cutoffs Lemma) relate to Theorem 1 part 1 from Azevedo and Leshno [10], which says that if $\eta$ has full support, then there is a unique stable matching?** It is essentially implied (since the existence of unequal cutoffs would then imply multiple stable matchings). We will make note of this. We include a standalone proof, partly because it gives some intuition on how to analyze the noisy model we have, and partly because our proof provides some additional facts (equations 30 and 31) that are important for later results. **Q2: Do we expect the maximum concentrating distribution assumption to hold in, e.g., the included experiments?** We think that in many cases, we might intuitively expect the maximum order statistic to at least partially concentrate (i.e., have diminishing variance); this is the case when the “maximum evaluation” of an applicant is less noisy than a single evaluation, which seems generally plausible. If we think noise is Gaussian, as is often assumed (though perhaps primarily for tractability), then it would fully concentrate. Theoretically, max-concentration does not hold for noise distributions whose tails are “heavier than exponential”, such as the Laplace distribution. Our ML experiments, in which we did not hard-code any notion of noise (in fact, noise is not even i.i.d.), seem to suggest maximum-concentrating behavior holds in that setup, since the market as a whole tends to select higher-value applicants than individual ML models. It would be interesting to better understand what noise looks like in practice. **Other comments.** Thank you for pointing out potential areas of confusion in our model, which we will address. Applicants of the same true value do not have the same estimated value. What we care about is the distribution of outcomes for a student with true value v.
We will make this more clear in the text. Thank you also for the suggestion about a limitations section. We will add in a limitations section that discusses the following: - The technical assumptions (e.g., homogeneous firms, max-concentrating noise), and when they are more/less likely to hold. - A summary of what our computational experiments address (varying correlation structure in applicant preferences, estimates derived from ML models). - Broader limitations (e.g., heterogeneous noise/bias across applicants, notions of welfare beyond utilitarian frameworks, potential degradation of effects due to search frictions). --- Rebuttal Comment 1.1: Comment: I thank the authors for the careful response. In general, I agree with the sentiment that simple proofs are in some sense desirable. Thank you for also explaining the relation (and differences) to Castera et al. I am happy to keep my score as is. --- Reply to Comment 1.1.1: Comment: We appreciate the reply, and are glad that you found our responses helpful. Thanks again for the comments.
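The max-concentration behavior discussed in Q2 above can also be checked numerically. The following sketch (illustrative only; the distributions and sample sizes are choices made here, not tied to the paper's experiments) contrasts Gaussian noise, whose maximum order statistic concentrates, with Laplace noise, whose maximum does not:

```python
import numpy as np

rng = np.random.default_rng(1)

def max_std(sampler, m, trials=20_000):
    # Std of the maximum of m i.i.d. noise draws; a value that shrinks
    # with m means the maximum order statistic concentrates.
    return sampler(size=(trials, m)).max(axis=1).std()

# Gaussian noise: the std of the max shrinks as the number of firms grows.
gauss_few = max_std(rng.standard_normal, 5)
gauss_many = max_std(rng.standard_normal, 500)

# Laplace noise (exponential tails): the std of the max stays near the
# Gumbel limit of pi/sqrt(6) ~ 1.28 and does not shrink.
laplace_few = max_std(rng.laplace, 5)
laplace_many = max_std(rng.laplace, 500)
```

This mirrors the dichotomy in the response: Gaussian-type noise is max-concentrating, while heavier-tailed noise such as Laplace is not.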
Summary: This paper studies the monoculture problem in matching markets from a theoretical perspective. The authors find that on the firms' (colleges') side, monoculture may decrease the quantity of matched applicants, while on the applicants' side, monoculture may help matched applicants match with higher-ranked firms (colleges). Additionally, monoculture may decrease the risk of unfairness when some applicants are born with more opportunities to apply to more firms (colleges). Strengths: 1. The paper studies the monoculture problem with multiple (possibly infinitely many) homogeneous firms and heterogeneous applicants differing in a real-valued type, broadening the modeling within the monoculture literature. 2. The paper provides positive theoretical and empirical results for monoculture, which are inspiring as the results are counter-intuitive and challenge preexisting beliefs about monoculture. Weaknesses: **General Weakness:** 1. The connection between this work and CS/ML conferences is vague, as this paper mainly addresses monoculture, which is intrinsically an economic problem. Additionally, the technical derivation seems quite straightforward. 2. In the model, firms are assumed to be homogeneous, which might be an oversimplification and not realistic. **Corrections for Typos:** 1. In line 203, it should state "the cutoff under monoculture must be lower." Technical Quality: 3 Clarity: 3 Questions for Authors: See above weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: One concern is whether the paper is a good fit for NeurIPS acceptance. The paper focuses more on economics and matching markets than on machine learning and computer science, attempting to analyze the phenomenon of monoculture using economic models. This paper might be more suitable for conferences like EC and WINE. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer's comments, and are glad to hear that the paper challenged preexisting beliefs about monoculture. We address your comments below: **Assumptions/Straightforwardness:** We note in our general response that we believe our computational experiments demonstrate that our conceptual insights extend beyond our theoretical assumptions. We also noted in the general response that we believe the technical straightforwardness is only possible due to key insights. **Fit for NeurIPS:** Regarding the fit for NeurIPS, we note that papers studying monoculture have appeared in several ML venues. For example, Bommasani et al. (NeurIPS 2022), Jagadeesan et al. (NeurIPS 2023), Toups et al. (NeurIPS 2023), Jain et al. (ICML 2024), and Jain et al. (Best Paper at FAccT 2024). Insofar as the ML community is interested in questions related to algorithmic hiring and monoculture, we believe that the market-level approach we present is essential for understanding effects. Indeed, while this prior work exclusively views monoculture as bad for applicants, our work (as you note) demonstrates that market-level effects may reverse (and add nuance to) this view. In fact, we directly seek to connect to this literature in our paper by adapting the setup of Bommasani et al. into our framework in our computational ML-based experiments. More broadly, this paper fits into the “social and economic aspects of machine learning” and “theory” parts of the NeurIPS CFP, the latter of which further specifies Algorithmic Game Theory (which arguably is even more an “EC” topic than our paper). We also note that NeurIPS accepts matching papers that intersect with ML, e.g.: Learning equilibria in matching markets from bandit feedback. M Jagadeesan, A Wei, Y Wang, M Jordan, J Steinhardt (NeurIPS 2021). (Thanks also for pointing out the typo, which we will correct.)
--- Rebuttal Comment 1.1: Comment: I'd like to thank the authors for the response regarding the contributions on techniques and novelties of the paper (though mainly within the response to Reviewer dgPg), as well as the careful clarifications about the fit to venue. I'm not familiar with the monoculture literature, thus I hold a conservative opinion towards the novelty contribution of this paper. I agree with the authors that the straightforward techniques are advantages in the sense of showing the insight. The model contribution seems to be the first to extend the literature to the multiple-firm setting, yet the restrictive assumption of homogeneous firms is still not a strong reason for acceptance. Although the supplemented experiments demonstrate that the paper's results seem to be correct beyond the homogeneous-firms assumption, it seems that for the orientation of this paper, it should be theoretical results rather than experiments that determine the acceptance. Overall, my positive evaluation of this paper remains unchanged. Besides, the meanings of $\beta$ and $\gamma$ are unclear in the attached PDF. --- Reply to Comment 1.1.1: Comment: Thank you for the reply, and we're glad that you found our responses helpful. Thanks also for the note about $\beta$ and $\gamma$. These are defined in lines 743-746 in the paper (in the appendix). $\beta$ controls the level of "global correlation" in applicants' preferences over colleges. $\gamma$ controls the level of "local correlation" (i.e., how much applicants prefer to be close to "nearby" firms).
Summary: The paper considers a matching model with a continuum of students/applicants and m colleges/firms where the firms have a noisy estimate of the candidates' quality, and compares the stable matching outcome in two situations: monoculture (where all firms have the same estimate) vs. polyculture (where each firm has an i.i.d. estimate). It makes two major assumptions: all firms have the same capacity and candidates' preferences are uniform amongst the m firms. Then the unique stable matching is described by a single cutoff for all firms. The paper states three main results: - Thm 1: with polyculture, as m grows large, the stable matching approaches an optimal cutoff mechanism on the true quality, whereas with monoculture it does not - Thm 2: the probability of a first-choice match is higher under monoculture - Thm 3: if candidates can submit variable-length preference lists, the students submitting more benefit from polyculture but not from monoculture Strengths: The topic of the paper is clearly important. From what I see, the paper is not original except in rewording things studied in previous works as monoculture vs. polyculture instead of correlation vs. independence. Perhaps this can contribute to increasing the volume of literature labeled as studying the effect of monoculture (which is an important concern), but other than that I don't see anything fundamental it brings. The paper is clear. The take-aways are interesting, but besides their lack of novelty their significance is diminished a lot by the very strong assumptions made (see below). Weaknesses: There are two main weaknesses to the paper: (i) lack of novelty in the model, the questions and the flavor of the results and (ii) very strong assumptions that make the results mostly trivial, so that I could not identify any strong technical result either. I elaborate on both in the remainder of this box. - The paper claims (l.
38) that the technical contribution is a matching market model that can be used to analyze monoculture. However, the model used is a particular case of that of [13] (because in [13] they can have multiple demographic groups). Even [13] with a single demographic group is more general because it can handle any level of correlation and not just 0-1 (mono-/polyculture). The particular case of latent quality + noise is described in appendix A.4 of [13]. Note that in [13], some results apply only for 2 colleges, but the model applies to any number of colleges. - The paper makes two very strong assumptions: each firm has the same capacity and candidates have uniform preferences. Under these, it shows (actually, it just states, because it is trivial) that at the stable matching each firm uses the same cutoff. This makes the rest of the paper technically straightforward, but this is very unrealistic in practice. So, even if one were to see the paper as an extension of some results of [13] for m firms, this would be only under *extremely* simplifying assumptions that make the extension straightforward and of much diminished significance. Just in contrast, most of the technical difficulty in [13] seems to be in handling the fact that the cutoffs may be different, and hence if one increases, it does not imply that the other does too. The authors in fact show that the property of diminishing cutoffs is no longer always true for >2 firms. Of course, it holds with all equal cutoffs, but as mentioned above, this is too simplifying (and trivial). - Thm 1 gives an interesting message but it is very straightforward under the assumptions mentioned above. - Thm 2 is very similar to [13]; the extension to m firms does not seem significant. See my discussion above. - Thm 3 is very connected to [7]. This is (really) discussed only in the appendix (l. 644-651), but it appears that the additional value of Thm 3 compared to [7] is minimal. - There are some numerical simulations.
Unfortunately, as far as I could see, they do not relax the assumption of same capacity. Overall, even though the paper is well-written and interesting, I cannot identify any strong or novel result that would get close to the NeurIPS bar. Perhaps if the authors focus on the elements that are novel and try to remove the strong assumptions that make the results straightforward, this would improve the paper's contribution. [13] Castera et al. EC'22. I used for this review the latest available version from May 2024 https://hal.science/hal-03672270v6, but I checked and the previous version is extremely similar. [7] Arnosti. MS (2022) Technical Quality: 2 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: I very strongly encourage the authors to be much more upfront about the limitations of their model (discussed at length above). I have not seen them mentioned in abstract or introduction, whereas they are extremely important for the results to hold (in fact, some results do not hold without, see the counter-example at the end of [13] for more than 2 firms). Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the careful feedback, and we discuss your concerns below. **Assumptions:** We give detailed comments on our theoretical assumptions in the general response. We reemphasize here that we will incorporate your feedback in being clearer about assumptions in the introduction. We also believe that our results extend beyond the assumptions, and we hope you find the previous experiments (varying correlation of applicant preferences, considering ML experiments) as well as the new experiments (varying firm capacities, as you suggest) convincing of this point. **Novelty:** We respectfully disagree regarding novelty. We describe three main points of novelty below: **(1) Modeling true applicant values.** The statement “the model used is a particular case of that of [13]” is inaccurate. This can be seen by observing that our model extends that of Kleinberg and Raghavan, while Castera et al. does not. More specifically, our model departs from Castera et al. (and Arnosti) by following Kleinberg and Raghavan in modeling a ground-truth *true value* of each applicant, which firms estimate. This is essential for many of our results: - Theorem 1 reveals that under polyculture, exactly the applicants with the highest true values match. - Theorem 2 shows that applicants with high true value face a trade-off between likelihood of matching to top choice (monoculture) and likelihood of matching overall (polyculture). - Theorem 3 shows that, under polyculture, applicants with higher true values may fare worse than applicants with lower true values who can apply to more places. Castera et al. and our work share a common high-level finding with past work on tie-breaking in school choice (correlation improves applicant welfare); however, our respective novel contributions are fundamentally distinct. In particular, Castera et al. 
certainly identifies interesting and important results that our paper does not touch upon (we discuss these results in lines 110-122 in Section 2); we view our work as studying a complementary set of questions, in studying the effects of monoculture with heterogeneous true applicant values and emergent/strengthening effects as the number of firms grows large. **(2) Bridging literatures.** We also believe that it is a novel contribution to bridge the algorithmic monoculture papers of [Kleinberg and Raghavan (theoretical, firm-side)] and [Bommasani et al. (empirical, applicant-side)] with a matching markets framework. **Prior to our work first being posted online, these literatures did not discuss each other** (the 2022 version of Castera et al. cites Kleinberg and Raghavan only in passing alongside many other algorithmic fairness papers and does not discuss monoculture). As reviewers noted, our work extends and challenges these past ML results. Our ML-based simulations directly show how our framework can be adopted for empirical studies like in Bommasani et al., demonstrating how researchers in ML can utilize the framework. **(3) M > 2 firms/schools and emergent effects.** All the main results of prior relevant work (including K&R and Castera et al., who directly state their models in the 2-school setting) hold in a 2-school model. One technical insight of our work is showing how an Azevedo-Leshno continuum model admits a tractable way to study M > 2 schools (Castera et al. also use the AL model, but only to study 2 schools). **Crucially, this is not just a technical detail: one important conceptual insight of our work is that considering many firms/schools *fully strengthens* the underlying effect—large polyculture markets behave essentially like noiseless markets (i.e., sort according to true value).** This insight about emergent effects is novel in our work, compared to all the prior literature in both matching markets and machine learning.
We also note that, from the proof techniques of Castera et al. and other related work, it is not clear how to derive results for more than 2 schools—doing so seems algebraically intractable. One key insight of our work is to consider maximum-concentrating distributions (which generalize common Gaussian assumptions), which become extremely tractable with a large number of firms/colleges. In other words, studying emergent phenomena with many firms is a novel insight of our work that is distinct from the dimension studied in Castera et al. and other work. To illustrate this, we modify our numerical simulations so that we vary both the number of firms and the amount of correlation between firms (which Castera et al. varies). **Indeed, in the bottom figure in the PDF attached in the main response, effects of variation in correlation become stark only when the number of firms is large.** Finally, we also note that we did not intend to minimize the relationship with Castera et al. Lines 110-120 detail the relationship with Castera et al., and lines 123-133 summarize some of the above points about emergent phenomena with many firms and the novel aspect of assuming that students have heterogeneous true values. We also have a 3.5-page extended related work section in the Appendix that details the relationship with the burgeoning literature around this area, and our novel contributions. To summarize, we highly respect the work of Castera et al. and others in this literature, and we believe that our work is complementary, both conceptually and technically, to this line of work. It is inaccurate to say that our model and assumptions nest within past work: while some assumptions (full correlation vs. no correlation) are stricter than in some past work, other aspects are more general / entirely distinct (especially considering the true values of applicants and M >> 2 firms). In particular, while our model extends Kleinberg and Raghavan, the models in Castera et al., Arnosti, and others do not.
We are happy to modify the writing in the main text further to clarify this important point. --- Rebuttal Comment 1.1: Comment: As a brief point of clarification, Castera et al.'s paper does mention latent value, as a potential generating process for differential correlation. However, they do not then analyze applicant outcomes in terms of latent values, as is one of our focuses. --- Rebuttal Comment 1.2: Comment: I thank the authors for their response. This does not change my view of the paper, for a number of reasons. First, it is not true that [13] does not model true value (which they call latent value), as the added comment of the authors seems to acknowledge. It is correct, as the authors state, that [13] does not focus on this in the results, but still, this (to me) considerably reduces the novelty of the model. Also, regarding m colleges (instead of 2): the model of [13] is stated with 2 colleges; however, they mention in the discussion that the model trivially extends to m colleges; so I was not able to understand the sentence stated in the rebuttal, "One technical insight of our work is showing how an Azevedo-Leshno continuum model admits a tractable way to study M > 2 schools". Regarding the results, some results are certainly different in their statement than past work; however, as I have mentioned in my review, I find the results extremely underwhelming given the extremely strong assumptions. I was not convinced by the argument that strong assumptions are a feature rather than a limitation. Also, I appreciate that the authors made extra simulations with random capacities, but I find the outcome quite unreliable. Indeed, I do not think that it is reasonable to draw any conclusion that "the theoretical results continue to hold" from such simulations. First, random settings are often not the pathological ones that challenge the theoretical results. Second, they also do not occur in practice.
Finally, note that [13] has a counter-example in the discussion showing that with 4 colleges the property of diminishing cutoffs may not hold. This counter-example has capacities (0.05, 0.05, 0.2, 0.5) if I read Fig. 5 correctly, which do not seem to be random. This seems to indicate that pathological things may happen, and showing numerical simulations with random capacities does not rule them out. All in all, I agree that the paper has some results that were not explicitly stated that way in past literature, but I still find that the contribution is insufficient in terms of novelty and technical significance. --- Reply to Comment 1.2.1: Comment: We respond to each point below. Fundamentally, we believe that Castera et al. (a nice paper!!) studies a different question than us, that our results/insights/techniques are conceptually new, and that another paper stating that its model can extend to ours does not equate to **findings** that affect novelty. **“First, it is not true that [13] does not model true value (which they call latent value), as the added comment of the authors seems to acknowledge.”** Again, we disagree with this comment. The model in [13] is only stated in terms of estimated preferences (which can be *motivated* by a latent value model). Consequently, and most importantly, no results imply anything about students in terms of their latent value, as all of our results do. **“Also, regarding m colleges (instead of 2): the model of [13] is stated with 2 colleges, however they mention in the discussion that the model trivially extends to m colleges; so I was not able to understand the sentence stated in the rebuttal, ‘One technical insight of our work is showing how an Azevedo-Leshno continuum model admits a tractable way to study M > 2 schools’.”** While it is of course true that the model in [13] could be stated with many colleges, we believe the relevant factor is whether the **results** are extended. Castera et al.
does not prove many-firm results; we show that some effects *only emerge* when considering many schools (for example, comparing to results in Kleinberg and Raghavan). Moreover, increasing the number of schools *increases theoretical tractability,* especially when combined with studying maximum-concentrating noise (e.g., Gaussian, bounded). **Concerns about experimental setup: “First, random settings are often not the pathological ones that challenge the theoretical results. Second, they are also not happening in practice.”** We believe that (1) our numerical experiments capture the important/realistic types of variations in real markets, and (2) our ML-based experiments (extending those of Bommasani et al.) are more realistic than existing theory/simulations and speak to the particular community of interest. The focus of our experiments is not to identify potential “pathological” counterexamples, but rather to test our predictions in a range of settings that we believe mimic real markets. Our simulation setup focuses on two types of correlation widely noted in the literature: correlation arising from shared vertical preferences, as well as from horizontal preferences (preferences that depend on “proximity”). These are controlled by $\beta$ and $\gamma$ parameters (see lines 740-746), which we vary widely. We also simulate firm preferences generated using ML models—testing our predictions in a more realistic and relevant decision-making setting. More broadly, we think a key role of a model is to convey intuition and provide useful predictions. For example, even though Castera et al. note that “most of our results do not extend to more than two colleges” (page 5), we think that their two-college results are useful to understand broader phenomena (e.g., increasing correlation in one group can help all groups). In fact, Fig. 
5 in their paper, even if it does not *exactly* match their theory, seems to suggest that the theory is a very good approximation in this “pathological” example.
Summary: This paper examines the effects of algorithmic monoculture in a large two-sided matching market, in which participants on both sides compete with each other and outcomes are determined by preferences on both sides. It proposes a matching markets model to study monoculture and produces both expected and surprising results. While under monoculture all firms use a single shared estimate of an applicant's value, under polyculture they obtain separate, independently drawn estimates. - One expected result is that polyculture benefits firm welfare. - More surprisingly, another result shows that applicants are better off under monoculture, yet risk-averse well-qualified applicants would rather prefer the security afforded by polyculture. In other words, and expectedly in this regard, monoculture presents a risk of systemic exclusion to certain more qualified applicants. - A third result is that polyculture benefits applicants who submit more applications, thus allowing differences in the number of applications submitted to harm firm welfare. Strengths: S1. Connects the literatures on monoculture and matching markets. S2. Produces novel results. S3. Computational experiments train models on real data. Weaknesses: W1. The results in Figure 5 seem to contradict the strong claims made about polyculture outperforming monoculture; the related claims need to be qualified. W2. The concept of "positive label", or binary 0-1 outcome, is not defined. W3. In the experiment, applicants have uniformly random preferences, which do not yield competition as suggested. Technical Quality: 3 Clarity: 3 Questions for Authors: What happens when applicants have competitive preferences? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for the helpful comments. We’re glad to see that you found the results interesting, and at times surprising. We address your comments, mostly about the ML experiments, below: “The results in Figure 5 seem to contradict the strong claims made about polyculture outperforming monoculture; the related claims need to be qualified.” - Thanks, we agree that the results in Figure 5 (California data) are less strong than those in Figure 3 (Texas data), and that we should qualify the claims more in the main text. The current paper briefly notes that results are weaker, but we will add an exact description of the results to the main text. Specifically, we will move lines 709-714 in the appendix into the main text. We will also more carefully note that the weaker results could be due to less adherence to our “maximum-concentrating noise” assumption of Theorem 1, and that the accuracy of this assumption needs to be further studied in real-world settings. “The concept of ‘positive label’, or binary 0-1 outcome, is not defined.” - Thanks, we will add more description. First, we will note that the 0-1 label in the ACSIncome dataset is whether or not an applicant's income exceeds a certain threshold. We will also note that this itself is not a realistic task for making hiring decisions, but is a standard ML prediction task similar to those used in hiring. In particular, the ML experiments mimic the setup of Bommasani et al., except with many firms in a market framework. “In the experiment, applicants have uniformly random preferences, which do not yield competition as suggested.” “What happens when applicants have correlated preferences?” - We further tested the ML experiments in a setting where applicants have shared preferences over firms, regenerating Figures 3 and 5. These can be seen in our PDF. The results continue to hold, almost identically. We will make note of this robustness check in the paper.
- More generally, we substantially vary the amount and type of variation in the numerical simulations (as noted in Section 5, and fully detailed in Appendix C). We find that each of our (directional) theoretical predictions continue to hold over all parameterizations. --- Rebuttal Comment 1.1: Comment: Your reply should mention whether there is something to be seen in the current pdf, and if so, where that is to be seen. The statement "these can be seen in our PDF" is not helpful. --- Reply to Comment 1.1.1: Title: PDF details Comment: We apologize. The PDF reference regarding applicants having shared preferences over firms refers to the middle pair of plots, labeled "Texas, Correlated Applicant Prefs" and "California, Correlated Applicant Prefs". Thank you for considering our response!
Rebuttal 1: Rebuttal: We thank all the reviewers for the thoughtful comments. We’re glad to see that reviewers found the paper to be clear and well-written (R2, R3, R4), and the results to be surprising, countering past literature and expectations about algorithmic monoculture (R1, R3, R4). All reviewers noted how the model bridges the ML literature with the matching markets literature. The most common concerns were about the strength of theoretical assumptions (particularly noted by R2, and by others to varying degrees); R2 relatedly noted that this led to simple and straightforward proofs. In this general response, we discuss these concerns about theoretical assumptions (and corresponding simple proofs). We defer more reviewer-specific questions and concerns to individual responses, including concerns about novelty (R2) and fit-to-venue (R3). **Discussion on theoretical assumptions:** We agree that our theoretical model makes a strong assumption of homogeneous firms (equal capacities, and that students have uniformly random preferences). At a high level, we believe that our experiments convincingly demonstrate that the conceptual insights extend beyond these assumptions. Both R2 and R4 note that we should make this assumption clearer earlier in the text. We fully agree, and will add the following after line 46 in the introduction: “Our theoretical results are shown in a setting with a symmetric setup with a large number of homogeneous firms. This allows for tractability, as well as clear intuition. We demonstrate empirically that our theoretical results hold in markets with heterogeneous firms, as well as when firms use ML models to rank applicants.” To address limitations in our theoretical model, we test our theoretical predictions in computational experiments. 
In our paper, we included both (1) simulations of markets with correlated student preferences, following a simulation setup of Ashlagi et al., and (2) simulations of markets with ML models, bringing the setup in Bommasani et al. to a market context. Each of these experiments verifies the predictions of our model. R2 noted that our theoretical results on polyculture only consider fully independent noise. Our ML experiments represent a case in which firms using different algorithms have some but not perfect correlation (since the ML models share data and overlap in features). There were still some settings that our experiments did not cover, and which reviewers noted. To alleviate these concerns, we include a PDF of additional experiments addressing these specific concerns: - R1 noted that our ML experiments only assumed uniformly random preferences (unlike our pure simulation setting). We include the corresponding figures when applicants have heavily-correlated preferences (beta=10 in our simulation setup), showing that the same results hold. - R2 noted that our numerical experiments did not vary capacity. We rerun each numerical experiment when firms have random differing capacities (still also varying correlation in applicant preferences), showing that our results continue to hold. In the paper, we will remark that these further experiments align with our theoretical results. We note that varying capacity has similar effects as non-uniform preferences (which our simulation setting already includes), as they induce heterogeneous cutoffs. **Straightforwardness.** While this theoretical setup makes our proofs relatively straightforward (as R2 notes), we believe this to be a feature, and that our work makes a significant technical contribution in two ways. In particular, our model *generalizes* aspects of models in past work that make the model *more tractable:* we consider many firms rather than two firms (as considered by Kleinberg and Raghavan, and Castera et al.). 
We also consider an interpretable-but-general class of noise distributions (that includes bounded and Gaussian distributions common in other work). The key technical insight is that by considering many firms, we can leverage the well-behaved tail of max-concentrating distributions to tractably (and cleanly) analyze the resulting stable matching. The simple-seeming nature of our proofs is a consequence of these technical insights, contra the (substantial) algebra often needed in prior work. We also want to note the attached pdf, in which we run new computational experiments to supplement our responses, including around assumptions and to make clearer the relationship with prior work. Pdf: /pdf/8d936e31cc6d610e67f65c589b566950c35f5eeb.pdf
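The monoculture/polyculture contrast discussed above can be illustrated with a toy simulation. This is a hypothetical sketch, not the paper's stable-matching model: here every firm has capacity one, firms hire greedily in sequence, and applicants accept any offer; the function name and parameters are illustrative.

```python
import random

def simulate(n_applicants=200, n_firms=20, monoculture=True, noise=1.0, seed=0):
    """Toy hiring market comparing shared vs. independent noisy estimates.

    Under monoculture, all firms rank applicants by one shared noisy
    estimate of true value; under polyculture, each firm draws its own
    independent estimate. Returns average true value per hire.
    """
    rng = random.Random(seed)
    true_value = [rng.gauss(0, 1) for _ in range(n_applicants)]
    if monoculture:
        # One shared noise draw per applicant, reused by every firm.
        shared = [v + rng.gauss(0, noise) for v in true_value]
        estimates = [shared] * n_firms
    else:
        # Independent noise per (firm, applicant) pair.
        estimates = [[v + rng.gauss(0, noise) for v in true_value]
                     for _ in range(n_firms)]
    hired, welfare = set(), 0.0
    for f in range(n_firms):
        # Each firm hires its top-ranked applicant among those remaining.
        best = max((a for a in range(n_applicants) if a not in hired),
                   key=lambda a: estimates[f][a])
        hired.add(best)
        welfare += true_value[best]
    return welfare / n_firms
```

Averaging `simulate(...)` over many seeds lets one compare average hire quality under shared versus independent noise; with `noise=0` the two settings coincide exactly, since both reduce to ranking by true value.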
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
FlexCap: Describe Anything in Images in Controllable Detail
Accept (poster)
Summary: This paper introduces a versatile flexible-captioning vision-language model called FlexCap, capable of generating region-specific descriptions of varying lengths. FlexCap uses caption length to control the information density of the generated sentences. The paper also introduces a large-scale dataset of image-text-box triplets for this flexible region-captioning task. The proposed model outperforms baselines and achieves leading performance in this flexible captioning scenario. Strengths: 1. This paper proposes a novel mechanism to control the complexity of region captioning. 2. It introduces a large-scale image-text-box dataset. 3. It achieves SOTA VQA performance on several benchmarks. Weaknesses: 1. About the dataset building: 1) It seems that there is no human involvement in the data construction process. I worry about the correctness and diversity of the generated sentences. 2) One of the key contributions of the data is the different lengths of descriptions for a region proposal. There should be at least a statistic about the length distribution. 3) Based on Table 1(b) and the cases presented in Figure 5, is the max length 8, and is it enough for a complex region/object? 2. About the baselines: I think it is important to list the backbone types and parameter scaling for each baseline in the result tables. For example, OWL-ViT has ViT-H/14, ViT-L/14, ViT-B/32, etc. The CLIP model could be applied to different backbones. The parameter scaling (especially of the LLMs) will significantly affect performance. 3. About the evaluation: 1) Using CLIP similarity to map captions and objects is plausible but one-sided. Have you considered adopting some rule-based verbalization methods to map captions and class names? 2) Back to the motivation, I wonder how to evaluate the information density of each generated caption? Do the authors have any insights? 4. Overall, the proposed framework seems conventional. 
Can the author discuss what makes the proposed model outperform current similar architectures? How and to what extent does the prefix token influence the performance? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see my weakness comments. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see my weakness comments. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort in reviewing our work. We address the concerns raised below: > About the dataset building: 1) It seems that there is no human involvement in the data construction process. I worry about the correctness and diversity of the generated sentences. We were also curious whether a dataset generated in a fully automated manner would be useful. For correctness, we were interested to know if the produced captions are useful for downstream tasks, and found them useful for a variety of VQA tasks and region classification. Diversity-wise, we choose the n-grams from the alt-text associated with the images instead of pre-selecting the words. The vocabulary is diverse because of the phrases found in the alt-text of images. Please see examples of the dataset in Figure 11. > One of the key contributions of the data is the different lengths of descriptions for a region proposal. There should be at least a statistic about the length distributions. These statistics are given in Figure 10 in the Appendix. We agree that this is important. > Based on Table 1(b) and the cases presented in Figure 5, is the max length 8 and is it enough for a complex region/object? We have empirically found this length to be sufficient for tasks like region classification and VQA across different datasets, where our model outperforms baselines with a max length of 8. We also found that the model pre-trained with a max length of 8 words can be fine-tuned to produce longer descriptions for regions like those present in the Visual Genome dataset. In the Supplementary material (flexcap-spatial.html), we show that the model produces a mix of short and long captions in the style of Visual Genome. There are 50 captions in the 40 images with length longer than 8. Finally, it is possible to use lengths longer than 8 words for pre-training. 
>About the baselines: I think it is important to list the backbone types and parameter scaling for each baseline in the result tables. For example, the OWL-ViT has ViT-H/14, ViT-L/14, ViT-B/32, etc. The CLIPM could be applied to different backbones. The parameter scaling (especially the LLMs) will significantly affect performance. Yes, we agree. We will add this information to the final version of the paper. >About the evaluation: 1) Using the CLIP similarity to map captions and objects is plausible but just one-sided. Have you considered adopting some rule-based verbalization methods to map captions and class names? It is possible to build a more complicated method than using CLIP similarity, but this is an evaluation technique that allows us to show that the captions generated by our model are correct. So we use the text-image embedding matching commonly used in papers for evaluating zero-shot recognition (CLIP). Even with this simplistic approach, we find that we significantly outperform the baselines. We will add an ablation using descriptive classnames in the final version of the paper. > 2) Back to the motivation, I wonder how to evaluate the information density for each generated caption? Do the authors have any insights? We are not directly addressing information density in captions. Instead, we are using caption length as a proxy for information content. We observe that captions incrementally add more information as we increase length, and we give users the ability to control this in the model. If a length-1 caption just mentions the object class name, a length-4 caption would contain information about attributes, and a length-8 caption would contain information about context and attributes. This can be seen in the examples shown in Figure 5 and the Supplementary material (flexcap-length.html). 
It is possible to parse the produced captions to show that length-1 captions are mostly nouns and length 2-4 captions are adjectives and nouns, but for longer captions that include image context, we need a more complex method of measuring information density than just parsing. > Overall, the proposed framework seems conventional. Can the author discuss what makes the proposed model outperform current similar architectures? Our contributions are: endowing vision-language models with a new capability of producing length-controlled localized captions, and producing a large-scale dataset of image-box-caption triplets that can be used to train the model. We are *not* proposing architectural changes to achieve length-controlled localized captions. > How and to what extent does the prefix token influence the performance? The prefix token enables a new capability for the captioning models. The same model can be used for producing short and long captions in a controllable manner. We further show that our dataset creation technique allows us to generate length-controlled captions for both small and large regions in an image. Also, in the *Length Conditioning* subsection in Section 2, we describe how the length token helps when there are many captions for the same box.
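The length-conditioning mechanism described in this exchange — a prefix token announcing the desired caption length, followed by the caption words and an end token — can be sketched as follows. The token spellings (`<len-k>`, `<e>`) and the helper name are illustrative, not the paper's exact vocabulary:

```python
def build_target(caption_tokens, max_len=8,
                 length_token="<len-{}>", end_token="<e>"):
    """Build a length-conditioned decoder target for one caption.

    Prepends a length prefix token (so the model can distinguish
    captions that share a word prefix, e.g. "dog" vs. "dog playing")
    and appends an end-of-caption token. At inference, feeding the
    desired <len-k> token steers the decoder toward k-word captions.
    """
    k = len(caption_tokens)
    if k > max_len:
        raise ValueError(f"caption longer than max_len={max_len}")
    return [length_token.format(k)] + list(caption_tokens) + [end_token]
```

For example, `build_target(["a", "dog", "playing"])` yields `["<len-3>", "a", "dog", "playing", "<e>"]`; the same box's one-word caption gets a different `<len-1>` prefix, so the decoder is no longer torn between emitting `<e>` and continuing the sentence.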
Summary: This paper proposes a vision-language model termed FlexCap, which, given a specific region in the image represented as a bounding box, outputs a description of that region in a length-controllable fashion, where the exact length of the generated description can be controlled via a prefix token. First the authors construct a dataset of region-text pairs using existing large-scale image-caption datasets such as YFCC100M and WebLI. Specifically, they extract n-grams from the caption, apply a filtering process, and each of those n-grams is then fed to OWL-ViT to get a bounding box. This allows for multiple variable-length captions per region. They then train FlexCap on that dataset. Generating a large set of descriptions for many possible regions in the image allows for a complete, detailed understanding of the image, and those generated descriptions can be fed to an LLM to perform VQA by considering and reasoning over them. Experiments are conducted on dense captioning (without localisation), zero-shot VQA (image and video), and other tasks such as image labeling, object attribute recognition, and visual dialog. Strengths: - The idea of using controllable caption generation for specific image regions is interesting and can be beneficial in the case of dense captioning, since some regions are more detailed than others and need more words to be described, while other regions can be described sufficiently by 1-2 words. Thus, having this controllability can be helpful and enhance efficiency, i.e., generating 3 words per region is much faster and more efficient than generating 20 words per region, especially if those descriptions are to be fed to an LLM, where the context length decreases with fewer words per description. - The authors show how to make use of an existing VLM to serve as an annotator. This is very helpful, especially in scenarios where we don't have a specific dataset tailored to our needs. 
- Evaluation is performed on different tasks, with impressive zero-shot performance. Weaknesses: - [W1] The novelty is limited; it only involves adding a length prefix to condition the generation of length-controllable output. However, the idea of generating region proposals from image-text pairs and then learning to describe those regions has been a hot research topic lately. Some related works: 1. Osprey [R1] generates descriptions for fine-grained objects rather than coarse-grained bounding boxes, with the additional capability of instruction-based training on those fine-grained regions. 2. Kosmos-2 [R2], where given an image-caption pair, the text is processed in a similar way to the authors' to extract sentence chunks (of variable length), and each text chunk is then associated with a bounding box using the GLIP grounding model (instead, the authors use OWL-ViT as the grounding model), and an autoregressive transformer is learned to predict the description of each region. Actually, Kosmos-2 is also capable of grounding, and does not require an external model to perform grounding at test time. 3. In the work of [R3], they use MiniGPT to extract a caption and then the nouns from that caption, as well as an OCR detector. They then feed these to an LLM to imagine other possible nouns, as well as to a BLIP model, where they feed in the cropped region, generate a caption for that cropped region, and extract the noun phrases. 4. [R4] takes a step further and not only generates a caption for a region of interest, but also allows instruction-based training such that a user can chat about a given region of interest. 5. There is also work much closer to the authors': [R5]; however, I will not consider this paper in my decision-making process, as it is on arXiv and not yet published in a peer-reviewed venue. 
Therefore, I think this work is an incremental improvement that plugs the length-control idea (which has also been well studied, as the authors mention in the related work) into region-based captioning. - [W2] How should one choose the length of the region descriptions? Isn't there a heuristic to estimate which regions need more words than others? What is the point of having a length-controlled method of generating region descriptions if there still needs to be a manual selection of the length? - [W3] Controllable image captioning itself is not a new concept. The work of [R6] proposes to do this in another way, where the caption (and not the length) can be controlled. Similarly, their method can control the detail and length of the caption. How does the method of the authors compare to this work? - [W4] InstructBLIP [R7] is not compared to, and seems to perform better than the proposed method (see GQA, for example). Also, the authors don't compare on the standard VQAv2 test-dev set benchmark, which makes it hard to assess whether their work outperforms previous works. Most works report on VQAv2. - [W5] The dataset that the authors generate, as well as the performance of FlexCap, is highly limited by the performance of OWL-ViT (since it serves as an annotation tool to extract regions). Any biases and errors in OWL-ViT will also be transferred to the dataset and FlexCap. Consequently, the model and dataset are heavily influenced by the quality of OWL-ViT. - [W6] L178, regarding the zero-shot setting: the authors finetune the model on COCO (according to L168), but the VQA dataset (and its variants) is actually built on COCO. While it is not finetuned on the VQA question-answer annotations, the model is still trained on and sees the image, its caption, and its bounding boxes, all three of which, I think, cover VQA well. 
- [W7] For VQA, the number of captions generated seems huge (128 regions $\times$ number of prefixes used); is this therefore a fair comparison with other works? Couldn't similar or better performance from the compared models be achieved by using more generated captions (e.g., by different samplings)? There is no ablation study on this, or on the performance achieved by varying the number of generated descriptions, which I find important. Minor/Unclear: 1. Table 1a is not cross-referenced. 2. Line 137: for each region, do the authors generate 20 captions for each of the (1,2,3,4) lengths? So that equates to 20x4 = 80 captions per bounding box? Are you then averaging over all 80 captions in the CLIP text encoder? 3. Line 160: what language queries are used here? Is the process similar to how the dataset is built, using n-gram processing? 4. Line 168: Isn't the model natively built to describe regions/boxes rather than localize them? How is the model adapted to localization? References: [R1] Osprey: Pixel Understanding with Visual Instruction Tuning, CVPR 2024\ [R2] Grounding Multimodal Large Language Models to the World, ICLR 2024\ [R3] Towards Panoptic Visual Recognition and Understanding of the Open World, ICLR 2024\ [R4] The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World, ICLR 2024\ [R5] GPT4RoI: Instruction Tuning Large Language Model on Region-of-Interest\ [R6] Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions, CVPR 2019\ [R7] InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning, NeurIPS 2023 Technical Quality: 3 Clarity: 3 Questions for Authors: Regarding [W1], there are some works [R2, R3, R4] published in ICLR 2024. But I also understand that they were officially published and presented shortly before the NeurIPS deadline, and therefore, I will not consider them as grounds for rejection. 
However, the authors should take note of them and cite them, as they are proposing very similar works. However, how does the proposed method compare to [R1]? Moreover, although the length-controllable concept is incremental, I believe it is important in such systems. And the authors address this problem well. I would also like to hear a clarification on [W2, W3, W6 and W7]. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed. No special issues on negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We express our gratitude to the reviewer for their insightful feedback and thorough review. >However, how does the proposed method compare to [R1]? In R1, the training dataset is generated by querying GPT4V with regions of interest found by SAM. While they do train OSPREY with this dataset, the main ability of describing regions was already there in GPT4V. We do not know publicly what the dataset and architecture used for GPT4V are. In our work, we are proposing a ground-up way of creating a large-scale region description dataset from alt-text data with a publicly available object detector (OWL-ViT). Furthermore, R1 shows that a language model (VICUNA) can be trained further with their dataset to learn a visual-instruction-tuned model. We show that for describing objects we do not need a full-fledged language model; training a reasonably sized text decoder from scratch is sufficient. The other difference is length conditioning, but that was not the focus of R1. >Regarding [W1], there are some works [R2, R3, R4] published in ICLR 2024. Thank you for sharing these references. We will add them to related work, as they are all relevant. > [W2] How should one choose the length of the region descriptions?... What is the point of having a length-controlled method of generating region descriptions, if there still needs to be a manual selection of the length? The objective is to have one model that can produce long and short descriptions as needed. For example, if someone is interested in recovering only object names, they can request length-1 captions from the model. However, if one is interested in a longer description of an object, they can ask for length-8 captions. In a way, our model merges the label spaces of object detection datasets and dense captioning datasets by using length conditioning. 
Although there is a common-sense length prior that most people use for describing a region, people can also describe the content with a few or a lot more words when requested. Our model has this flexibility to be queried at a desired level of detail. But by fine-tuning the model with common-sense human captions (e.g. Visual Genome), our model adjusts itself to the average human biases in information density. We show examples of this in *flexcap-spatial.html* in the supplementary material. Although some heuristics can be introduced, we believe fine-tuning with a dataset that captures the desired information density is a more configurable approach to the output information density problem. > [W3] Controllable image captioning itself is not a new concept. ... How does the method of the authors compare to this work? There are several differences with R6. First, R6 only handles image captioning, while we handle objects, regions, and full images using the flexibility of conditioning on a bounding box. Second, their conditioning approach is a specific way of producing different captions by changing the order of detected objects used as conditioning. They detect N objects in an image, and the captioner is shown different orders of the N objects to produce different captions. While this produces grammatically correct captions in different ways, it does not necessitate new information being generated in the captions. The objective in our work is to use length conditioning as a proxy for information content. Both are valid but different objectives of conditional image captioning. Finally, our approach allows for prefix-conditioned captioning, which can be used to extract attributes of interest: color, material, action, function, and text. This capability emerges from pre-training the model on a large-scale dataset, which is missing from R6. >[W6] L178, regarding the zero-shot setting, the authors finetune the model on COCO (according to L168)... 
Therefore, this method cannot be seen as zero-shot. In our study, we used the term "zero-shot" to indicate that the training process did not involve any VQA question-answer annotations. The evaluation was conducted on non-train splits of COCO, ensuring that the evaluation images were not seen in any phase of training. Hence, we maintain that the setting can still be considered zero-shot. To provide greater clarity, we will explicitly mention this in the final version of the paper. >[W7] For VQA, the amount of captions generated seem to be huge (128 regions × number of prefixes used), and therefore is this a fair comparison with other works?... The VQA baseline systems are trained to produce answers using different approaches (mostly end-to-end). They do not have a system similar to ours, where we generate a large chunk of text and then deduce a single answer from it. We will add an ablation on how the number of generated descriptions affects performance. >[W5] The dataset that the authors generate, as well as the performance of FlexCap, is highly limited by the performance of OWL-ViT ... Yes, we mostly agree with this assessment and have mentioned this in the limitations section. However, just to clarify, OWL-ViT does not define what to caption in images. This information comes from alt-text associated with the images in the form of n-grams. OWL-ViT mainly localizes the provided n-grams on the images using the box proposals, which can arguably cover almost all the interesting regions in the image. Moreover, this automated approach allowed us to scale both in size and diversity, which often is the main influencer of performance, as demonstrated with many results in our experiments section. >[W4] InstructBLIP [R7] is not compared to ... Most works report on VQAv2. To clarify any concerns, we will test our method on VQAv2 and provide the results on this dataset in the final version of the paper. 
We will also add InstructBLIP to the list of baselines. --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal and for clarifying my concerns. Since my concerns are clear, I have raised my rating. The authors should make sure to incorporate my comments, as well as other reviewers' comments, into the final version, and especially the experiments that were promised (W4, W7). --- Rebuttal 2: Title: Answers to Minor Questions Comment: >Table 1a is not cross-referenced. Thanks for catching this. We will fix this. >Line 137, for each region, do the authors generate 20 captions for each of the (1,2,3,4) lengths? So that equates to 20x4 = 80 captions per bounding box? Are you then averaging over all the captions in the CLIP text encoder? We report both non-averaged accuracy using the highest-scoring caption and accuracy using the average of the top-20 captions (out of the 80 total captions) in the CLIP text encoder when presenting the results in Table 1. The top-k captions are chosen on the basis of the mean log-likelihood of the generated caption. We find that using the top-20 leads to a significant boost in performance without any additional training. >Line 160, what language queries are used here? Is the process similar to how the dataset is built, using N-gram processing? We do not use language queries for this step. We use the top-128 boxes based on objectness score from OWL-ViTv2, which does not require any text queries to produce an objectness score for a box. Please refer to Figure 15 for details. >Line 168, Isn't the model natively built to describe regions/boxes rather than localize them? How is the model adapted to localization? The FlexCap model does not do localization. In this work, we are exploring a paradigm different from prior work, which has usually followed describe-then-localize. We are looking at localize-then-describe. The localization can be done using object proposal methods. Then FlexCap takes each object of interest and describes it. 
In particular, in Line 168 we use the detection class name as the ground truth caption for fine-tuning.
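The CLIP-similarity evaluation discussed in the rebuttals — assigning each generated caption to the class name with the most similar text embedding — can be sketched with plain cosine similarity. The embedding arrays here stand in for outputs of a real text encoder such as CLIP's; `map_captions_to_classes` is a hypothetical helper, not the authors' code:

```python
import numpy as np

def cosine_sim(a, b):
    """Row-wise cosine similarity between two (n, d) and (m, d) arrays."""
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T

def map_captions_to_classes(caption_emb, class_emb, class_names):
    """Map each caption to the class whose text embedding is most similar.

    caption_emb: (n_captions, d) embeddings of generated captions.
    class_emb:   (n_classes, d) embeddings of the class names.
    Both would come from a text encoder (e.g. CLIP's) in practice.
    """
    sims = cosine_sim(np.asarray(caption_emb, dtype=float),
                      np.asarray(class_emb, dtype=float))
    return [class_names[i] for i in sims.argmax(axis=1)]
```

With toy 2-D embeddings where captions land near their class vectors, `map_captions_to_classes([[0.9, 0.1], [0.2, 0.8]], [[1.0, 0.0], [0.0, 1.0]], ["dog", "cat"])` returns `["dog", "cat"]`.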
Summary: This paper introduces a versatile captioner capable of generating region-specific descriptions with controllable information density. This functionality enables dense captioning tasks and enhances visual question answering (VQA) by integrating with an LLM. The paper also presents a large-scale dataset containing image-text-box triplets, which is valuable for the community to explore region-controllable captioning capabilities. Leveraging this dataset, FlexCap can produce localized visual descriptions with adjustable lengths. By providing various localized textual descriptions of images as input to an LLM, FlexCap-LLM demonstrates strong performance in VQA tasks. Strengths: 1. The proposed dataset can promote community research on the visual-controllable captioning task, which is useful for the development of user-friendly vision-language models. 2. FlexCap is easy to follow, and its controllable captioning capability, with positional information and varying information density, is beneficial for downstream tasks like VQA. 3. The experiments are comprehensive, demonstrating the capabilities of region control and length control (Sec 4.1, Sec. 4.3). The VQA results generated by FlexCap-LLM show a good human-interpretable representation of the generated localized descriptions. Weaknesses: 1. The architecture of FlexCap and its training setup lack novelty, as it is a typical transformer-based captioning model. However, this does not lead me to reject this paper, as the contributions on task and dataset are useful. 2. The authors should carefully consider their statements in the paper. While this paper achieves region and length control, there are many other controllable signals such as mask/point control in visuals and emotion/style control in text (as seen in Caption Anything [45]). Referring to FlexCap as a versatile flexible-captioning vision-language model might be an overstatement. 3. 
In lines 66-68, the authors state that "the next-word prediction loss encourages the model to increase its score for the <e> token and decrease the score for the word playing." This statement is intuitive. I believe the probability of "<e>" and "playing" depends on the frequency of occurrence of "a dog <e>" and "dog playing …" during training, i.e., depending on the training set. It would be beneficial to see the statistical probabilities of "<e>" and "playing" following "dog" to support the authors' statement. Technical Quality: 3 Clarity: 2 Questions for Authors: Q1. The abstract is divided into two paragraphs (Line #6 and Line #7). Q2. Section 4.1 on Correctness lacks a reference of Table 1(a). Q3. Line 249 lacks a reference of Figure 8. Q4. Regarding weakness 1, can the authors provide some statistics to support their statement? Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: It is commendable that the authors discuss the biases present in FlexCap. However, FlexCap is quite basic, and FlexCap-LLM is not an end-to-end model. It would be intriguing to see how these components could be combined into a fully integrated vision-language model. As mentioned in Weakness 3, FlexCap lacks sufficient flexibility; incorporating more visual and textual controls is an important area for further development. But it actually takes a step forward. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback and meticulous attention to detail in reviewing our paper. We address the concerns raised by them below: >Referring to FlexCap as a versatile flexible-captioning vision-language model might be an overstatement. We used this description to highlight the fact that FlexCap can be conditioned using bounding box, desired caption length, and caption prefix to produce diverse captions of objects and regions in images. We show many examples of this flexibility in Fig. 5, Fig. 8 and Fig. 13 in the paper and in the supplementary webpage. That said, we agree that, when implemented effectively, there could be many more spatial and textual controls for captioning, and with FlexCap we take a significant step towards properly exploring this space. We will also soften the claims on versatility in the abstract by focusing more on the specific controls that we introduce (i.e. bounding boxes, length, and caption prefixes) while providing the larger context on potential spatial and textual controls. Also, while theoretically the CaptionAnything system can produce outputs of different lengths, we tested it for length conditioning and it does not consistently return outputs of the desired length. > Regarding the weakness about “statistical probabilities of "<e>" and "playing" following "dog" to support the authors' statement”, can the authors provide some statistics to support their statement? We agree that whether this occurrence will be a problem depends on the dataset statistics. To quantify the prevalence of this problem, we compute a statistic: for each image box, we consider all pairs of captions and measure the fraction sharing prefix words. For instance, one box has three captions: "dog <e>", "dog playing <e>", "dog playing with a frisbee <e>", which share the prefix "dog". Another caption for the same box, "brown dog <e>", does not share a prefix with captions beginning with "dog". 
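The prefix-sharing statistic described in the rebuttal can be sketched in a few lines. This is an illustrative reimplementation, not the authors' code; the helper names are hypothetical, captions are whitespace-tokenized, and the `<e>` end markers are dropped:

```python
from itertools import combinations

def share_prefix(a: str, b: str) -> bool:
    """True if the shorter caption's words are a prefix of the longer caption."""
    ta, tb = a.split(), b.split()
    shorter, longer = (ta, tb) if len(ta) <= len(tb) else (tb, ta)
    return longer[:len(shorter)] == shorter

def prefix_share_fraction(captions):
    """Fraction of caption pairs (for one image box) that share a prefix."""
    pairs = list(combinations(captions, 2))
    if not pairs:
        return 0.0
    return sum(share_prefix(a, b) for a, b in pairs) / len(pairs)

# Captions for the example box above: the three "dog ..." captions form
# 3 prefix-sharing pairs, while the 3 pairs involving "brown dog" do not.
caps = ["dog", "dog playing", "dog playing with a frisbee", "brown dog"]
print(prefix_share_fraction(caps))  # 0.5
```

The dataset-level number (30.8%) would then be the average of this per-box fraction over all boxes.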
After averaging this metric across all images in the dataset, we found that 30.8% of caption pairs share a prefix. The length conditioning token assists in distinguishing between captions with the same prefix while also providing the model with a novel capability during inference. When length conditioning is applied, the probability of prefix similarity decreases from 30.8% to 11.1%. We appreciate the encouragement to explore this issue quantitatively and plan to include this analysis in our paper. We answer the other questions asked by them below: > Q1. The abstract is divided into two paragraphs (Line #6 and Line #7). We will merge the two paragraphs into one. >Q2. Section 4.1 on Correctness lacks a reference to Table 1(a). >Q3. Line 249 lacks a reference to Figure 8. Thanks for catching this. We will add a reference to Table 1(a) in Section 4.1 and to Figure 8 in Line 249. --- Rebuttal Comment 1.1: Comment: The authors address most of my concerns, and the statistics provided in the rebuttal are quite interesting. I look forward to seeing them included in the final paper. After considering the author rebuttal, as well as the feedback from other reviewers and the authors' corresponding responses, I decide to raise my rating. It's good to see how the proposed dataset will influence the community.
NeurIPS_2024_submissions_huggingface
2024
3DGS-Enhancer: Enhancing Unbounded 3D Gaussian Splatting with View-consistent 2D Diffusion Priors
Accept (spotlight)
Summary: The paper tackles the problem of enhancing 3DGS novel views which are far from existing viewpoints (encountered in sparse-view settings), due to insufficient information in under-sampled areas. This work enhances the low-quality views using a video diffusion model to maintain multi-view and temporal consistency. The authors reformulate the task of 3D-consistent image restoration as video restoration, which is again reformulated as a video interpolation task between frames. The high-quality frames along with the restored images are then used to fine-tune the 3DGS. The authors also propose a modified spatial-temporal decoder (STD) to reduce artifacts and introduce confidence awareness at both image level (based on distance of novel view from reference) and pixel level (information gain) to minimize the negative impact of generated images. Strengths: * The paper is well motivated: enhancing low-quality 3DGS rendering results for camera viewpoints which are far away * Reformulates the 3D-consistent image restoration task in 3DGS as temporally consistent video generation to leverage video diffusion models * The results look quite impressive and have been tested on a variety of scenes * can be applied to any existing 3DGS models and more generalized scenes * introduces a dataset for evaluation of artifacts in 3DGS (will this dataset be open sourced?) * Sound experimental setup and adequate comparisons with other methods Weaknesses: * Related work appears to be thin and some of these paragraphs can be more descriptive * Method relies on adjacent views for continuous interpolation * Choice for STD should be quantitatively ablated. Individual parts in STD like color correction should be ablated as well. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
Authors describe in L184-186 "Despite this significant enhancement in the quality of the rendered views, we propose to rely more on the reference views rather than the restored novel views when fine-tuning the 3DGS model, since the 3DGS model is highly sensitive to slight inaccuracies in the restored views." I would appreciate a more thorough discussion on this. 2. "modified baselines denoted with ∗ taken from [36]". However, no baseline seems to be marked with * in table 1. This is possibly an oversight and the authors should correct it. 3. What's the adversarial loss described in L179 for STD? 4. On what data is the Temporal Denoising U-Net fine-tuned? Suggestions: Figure 2 can be more interpretable if the right part shows the output of the enhanced views (input from left part); currently there is no 1:1 parity Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are adequately addressed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **S1. Will this dataset be open sourced?** Our 3DGS Enhancement dataset is generated based on the publicly available DL3DV dataset. We will provide the complete dataset generation code to ensure the community can quickly reproduce our results and explore new research opportunities. **W1. Related work appears to be thin and some of these paragraphs can be more descriptive.** Thank you for the constructive feedback. We will refine our Related Work section in the camera ready version. Specifically, NVS methods dealing with sparse-view inputs will be categorized under "Few-shot novel view synthesis", while more general enhancement efforts, such as super-resolution and artifact removal, will be categorized under "Radiance fields enhancement". More literature under these two categories will be added. These revisions would make the Related Work more descriptive and organized. **W2. Method relies on adjacent views for continuous interpolation.** Our method performs well when there is a significant overlap between two views. Since we use images rendered from a low-quality 3D Gaussian model as conditions for the video diffusion model, even if two views are not close to each other, our method can still achieve decent results when these conditional images contain certain structure or texture information. On the other hand, our method can generate free-trajectory conditional images for continuous interpolation, offering more flexibility in addition to linear interpolation between adjacent views. **W3. Choice for STD should be quantitatively ablated. Individual parts in STD like color correction should be ablated as well.** Thank you for the valuable suggestion, we conducted an ablation study on two STD components, the color correction and the temporal layers, on the DL3DV dataset (9 views). Results are detailed below. 
Both STD components contribute to performance improvement, while temporal layers contribute more as they ensure STD produces consistent image outputs to improve final reconstruction performance. We will include the ablation study on STD components in the supplementary material.

| Video diffusion | STD (temporal layers) | STD (color correction) | PSNR &uarr; | SSIM &uarr; | LPIPS &darr; |
|:------:|:------:|:------:|---|---|---|
| $\checkmark$ | $\times$ | $\times$ | 18.11 | 0.591 | 0.312 |
| $\checkmark$ | $\checkmark$ | $\times$ | 18.44 | 0.625 | 0.306 |
| $\checkmark$ | $\checkmark$ | $\checkmark$ | **18.50** | **0.630** | **0.305** |

**Q1. Authors describe in L184-186 "Despite this significant enhancement in the quality of the rendered views, we propose to rely more on the reference views rather than the restored novel views when fine-tuning the 3DGS model, since the 3DGS model is highly sensitive to slight inaccuracies in the restored views." I would appreciate a more thorough discussion on this.** In the 3DGS training process, we observed that the 3DGS model can fit the training images very well, even if the training images contain noise. Unfortunately, images generated by the diffusion model inevitably introduce noise and inconsistencies, resulting in a performance decrease. To mitigate the negative impact of these generated images on the 3DGS model, a straightforward approach is assigning larger weights to real images and to generated images of higher quality. From our observations, the generated images that are closer to the reference views tend to have higher quality. Motivated by this, we introduce an image-level weighting scheme. Additionally, we observed that smaller 3D Gaussian volumes are more likely to be well-reconstructed. To prevent well-reconstructed regions from being adversely affected by the generated images, we introduce pixel-level confidence scores which are calculated based on the volume of 3D Gaussians. 
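One plausible form of the two-level confidence weighting just described can be sketched as follows. This is purely illustrative: the function name, the multiplicative combination, and the example weights are assumptions, not the paper's implementation:

```python
def weighted_recon_loss(pixel_errors, image_conf, pixel_conf):
    """Confidence-weighted reconstruction loss for one generated view (sketch).

    image_conf: scalar in [0, 1]; higher for generated views closer to a
                reference view (image-level confidence).
    pixel_conf: per-pixel weights; lower in regions covered by small 3D
                Gaussians, i.e. regions that are already well reconstructed
                (pixel-level confidence).
    """
    weighted = sum(e * w for e, w in zip(pixel_errors, pixel_conf))
    return image_conf * weighted / len(pixel_errors)

# A generated view with image-level confidence 0.8 whose second pixel lies
# in an already well-reconstructed region (pixel weight 0.5):
print(weighted_recon_loss([0.2, 0.4], 0.8, [1.0, 0.5]))  # approx. 0.16
```

Real training images would simply get an image-level weight of 1 and uniform pixel weights under this scheme.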
From Table 2 of the paper, we can see the improvement after incorporating image-level confidence and pixel-level confidence. Thank you for the insightful feedback, we will add more discussions in the camera ready version. **Q2. "modified baselines denoted with ∗ taken from [36]". However, no baseline seems to be marked with * in table 1. This is possibly an oversight and the authors should correct it.** Thank you for the helpful comment. This is an oversight, and we will remove it in the camera ready version. **Q3. What's the adversarial loss described in L179 for STD?** The adversarial loss employed in Eq. 6 is the standard adversarial loss, which trains a discriminator $D$ to discriminate between the real image $\hat{I^g}$ and the fake image $I^g$, as $ L_{GAN} = E_{\hat{I^g}} \left[\log D(\hat{I^g})\right] + E_{I^g} \left[\log \left(1 - D(I^g)\right)\right]. $ Thank you for the valuable comment, we will add it into the camera ready version. **Q4. On what data is the Temporal Denoising U-Net fine-tuned?** Our Temporal Denoising U-Net is fine-tuned on the 3DGS Enhancement Dataset created in this work, which includes a large number of image pairs generated from corresponding high-quality and low-quality 3DGS models. More details of the dataset can be found in Section 5.1 of the paper. **Suggestions: Figure 2 can be more interpretable if the right part shows the output of the enhanced views (input from left part); currently there is no 1:1 parity** Thank you for the helpful suggestion, we will revise Figure 2 of the paper to ensure its right part shows output images corresponding to the input images. --- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our work and providing the constructive comments. Please feel free to let us know if you have any further questions by August 13 AoE, we are more than happy to address them. 
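The standard adversarial objective referenced in Q3, $E[\log D(\text{real})] + E[\log(1 - D(\text{fake}))]$, can be illustrated numerically. This is a sketch with scalar stand-ins for the discriminator outputs on single samples, not the authors' training code:

```python
import math

def adversarial_loss(d_real, d_fake):
    """Standard GAN objective for one real/fake sample pair:
    log D(real) + log(1 - D(fake)); the discriminator maximizes this
    (its supremum is 0, reached when D(real)=1 and D(fake)=0)."""
    return math.log(d_real) + math.log(1.0 - d_fake)

# A discriminator that scores the real image 0.9 and the fake 0.1
# is closer to the optimum than one that outputs 0.5 for both:
print(adversarial_loss(0.9, 0.1))
print(adversarial_loss(0.5, 0.5))
```

In practice the generator side is usually trained with the non-saturating variant (maximizing $\log D(\text{fake})$), but the discriminator objective above is the one written in the rebuttal.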
--- Rebuttal Comment 1.2: Title: Reply to the authors Comment: Thank you authors for your detailed reply and conducting the ablation study. I'm happy with your answers and the suggested changes; I shall be increasing my score. --- Reply to Comment 1.2.1: Comment: We are very glad that our response and ablation study have met your expectations. We sincerely appreciate your willingness to increase the score.
Summary: This paper utilizes a video diffusion model to enhance 3DGS rendering results. It proposes a 3DGS-Enhancer pipeline to reformulate 3D-consistent image restoration tasks and leverages it to generate high-quality and 3D-consistent images. They have enough experiments to prove the soundness of their method. Strengths: Instead of depth information, they use a video diffusion prior for the task, which is interesting. They also have the spatial-temporal decoder, which solves the temporal consistency problem. For fine-tuning, they add confidence constraints at both the pixel level and the image level to enhance the result. Extensive comparison with state-of-the-art is done, and the results seem robust. Weaknesses: They can provide more comparisons on ablating the temporal consistency. Technical Quality: 3 Clarity: 3 Questions for Authors: The video diffusion prior may have motion information of moving objects in the scene. How do we eliminate that effect in enhancing 3DGS? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation is definitely from the pre-trained video diffusion model, whether the data is more like novel view synthesis data (static scene) or a sequence of video frames (moving objects). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. They can provide more comparisons on ablating the temporal consistency.** As shown in Fig. 1 of the paper, the images rendered by a 3DGS model trained on sparse view inputs often contain significant artifacts or blank areas. We observed that an image diffusion model trained solely on these images is unable to restore high-quality images without additional information, and the synthesized multi-view images are often inconsistent even for adjacent views. In addition, for the temporal layers in STD, we conducted an ablation study to verify their effectiveness on the DL3DV dataset with 9 views, and the result is detailed below. Temporal layers help STD produce more consistent image outputs to improve final reconstruction performance. Thank you for the helpful suggestion, we will provide more ablation studies on the temporal consistency in the supplementary material.

| Temporal layers in STD | PSNR &uarr; | SSIM &uarr; | LPIPS &darr; |
|:------:|---------|-------|----------|
| $\times$ | 18.11 | 0.591 | 0.312 |
| $\checkmark$ | **18.50** | **0.630** | **0.305** |

**Q1. The video diffusion prior may have motion information of moving objects in the scene. How do we eliminate that effect in enhancing 3DGS?** We took two steps to mitigate this effect of moving objects in the video diffusion prior. First, we created a paired novel view synthesis dataset from DL3DV to fine-tune the video diffusion model from an image-to-video model to a video-to-video model, ensuring that it outputs images more akin to the novel view synthesis of static scenes. Second, we introduced images rendered from a low-quality 3D Gaussian model as the conditional inputs for video diffusion. Using these images as guidance helps the video diffusion model generate images suitable for 3D static scene reconstruction. **L1. 
The limitation is definitely from the pre-trained video diffusion model whether the data is more like novel view synthesis data (static scene) or a sequence of video frames (moving objects).** Your statement of the limitation of this work is correct. Since our approach is based on video diffusion, it is indeed limited by the performance of the pre-trained video diffusion model (e.g., SVD). However, our method will also benefit from advances in video diffusion models, since better pre-trained video diffusion models or larger datasets can enhance the potential of our approach. --- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our work and providing the constructive comments. Please feel free to let us know if you have any further questions by August 13 AoE, we are more than happy to address them.
Summary: This paper presents a novel pipeline aimed at enhancing the quality of 3D Gaussian splatting (3DGS) representations, especially in scenarios with sparse input views. They propose a novel framework, 3DGS-Enhancer, that leverages video LDMs for generating high-quality and 3D-consistent images. Moreover, to mitigate the artifacts caused by temporal inconsistency, they introduce a spatial-temporal decoder and a fine-tuning strategy for the 3DGS optimization process. Extensive experiments are conducted to demonstrate their superior reconstruction performance on large-scale datasets of unbounded scenes. Strengths: 1. Applying video models to scene reconstruction is a novel idea and has achieved excellent results. 2. To mitigate the negative impact of inconsistent generated images, this paper proposes improvements to the decoder and defines a confidence measure to enhance the corresponding 3D Gaussian Splatting optimization process. Weaknesses: 1. The writing of this paper is not very clear, and some typos exist in the submission; for instance, line 147 should be $ \{ I^{ref}_{i-1}, I_1, I_2, ..., I_T, I^{ref}_i \}$, and line 216 should be $I_c$, consistent with equation (8). 2. Training details are missing in the paper; more details about the video diffusion model and fine-tuning process should be provided. 3. More datasets, such as Mip-NeRF 360, should be evaluated to verify the effectiveness and robustness of the proposed method. Technical Quality: 2 Clarity: 2 Questions for Authors: None Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **1. The writing of this paper is not very clear, and some typos exist in the submission; for instance, line 147 should be $\{ I^{ref}_{i-1}, I_1, I_2, ..., I_T, I^{ref}_i \}$, and line 216 should be $I_c$, consistent with equation (8).** Thank you for the helpful comment. You are correct: line 147 should be $\{ I^{ref}_{i-1}, I_1, I_2, ..., I_T, I^{ref}_i \}$. In line 216, $I_w$ should be $I_c$. We will carefully revise the writing and typos in the camera ready version. **2. Training details are missing in the paper, more details about the video diffusion model and fine-tuning process should be provided.** Our video diffusion model includes a pre-trained VAE to encode an image sequence into a latent sequence and decode the latent sequence back into the image sequence. It also includes a U-Net with learnable temporal layers, which employs cross-frame attention modules and 3D CNN modules to ensure frame-consistent outputs. The input of the video diffusion model is an image sequence segment that includes 25 images with different sample steps from the image sequences rendered from the low-quality 3DGS model. The first and the last frames in this segment are replaced with images rendered from the high-quality 3DGS model. During fine-tuning, our video diffusion model is conditioned on these image sequence segments and trained to synthesize the corresponding segments rendered from the high-quality 3DGS model. Our video diffusion model is fine-tuned with a learning rate of 0.0001, incorporating 500 steps for warm-up, followed by a total of 80,000 training steps. The batch size is set to 1 in each GPU, where each batch consists of 25 images at 512x512 resolution. To optimize the training process, the Adam optimizer is employed. Additionally, a dropout rate of 0.1 is applied to the conditions between the first and last frames, and the training process utilizes CFG (classifier-free guidance) to train the diffusion model. 
The training is conducted on 2 NVIDIA A100-80G GPUs over 3 days. The STD is fine-tuned with a learning rate of 0.0005 and 50,000 training steps. The batch size is set to 1 in each GPU, where each batch consists of 5 images at 512x512 resolution. The fine-tuning process is conducted on 2 NVIDIA A100-80G GPUs in 2 days. Thank you for the valuable feedback, more details of the models and fine-tuning process will be included in the supplementary material. **3. More datasets, such as Mip-NeRF 360, should be evaluated to verify the effectiveness and robustness of the proposed method.** Thank you for the constructive feedback, we conducted experiments on the Mip-NeRF 360 dataset, and the results are detailed below and in the attached PDF file in the Author Rebuttal section. Our model is trained on the DL3DV dataset, and evaluated on all 9 scenes of the Mip-NeRF 360 dataset. Our method significantly outperforms the baselines, although it has not been trained on this complex dataset. This indicates that our method has remarkable *cross-dataset generalization capability* for unbounded environments, thanks to the powerful 2D video diffusion priors.

| Method | PSNR &uarr; (6 views) | SSIM &uarr; | LPIPS &darr; | PSNR &uarr; (9 views) | SSIM &uarr; | LPIPS &darr; |
|------------------|-------------|-------------|--------------|-------------|-------------|--------------|
| Mip-NeRF | 13.08 | 0.159 | 0.637 | 13.73 | 0.189 | 0.628 |
| RegNeRF | 12.69 | 0.175 | 0.660 | 13.73 | 0.193 | 0.629 |
| FreeNeRF | 12.56 | 0.182 | 0.646 | 13.20 | 0.198 | 0.635 |
| 3DGS | 11.53 | 0.144 | 0.651 | 12.65 | 0.187 | 0.607 |
| DNGaussian | 11.81 | 0.208 | 0.689 | 12.51 | 0.228 | 0.683 |
| 3DGS-Enhancer (ours) | **13.96** | **0.260** | **0.570** | **16.22** | **0.399** | **0.454** |

--- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our work and providing the constructive comments. 
Please feel free to let us know if you have any further questions by August 13 AoE, we are more than happy to address them.
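As background for the PSNR columns in the tables above: PSNR is a deterministic function of the mean squared error against ground truth. A minimal sketch for images normalized to [0, 1] (illustrative only, not the authors' evaluation code, which likely operates on full image tensors):

```python
import math

def psnr(pred, target, max_val=1.0):
    """Peak signal-to-noise ratio in dB over flattened pixel values.
    PSNR = 10 * log10(max_val^2 / MSE); higher is better."""
    mse = sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform per-pixel error of 0.1 on a [0, 1] image gives roughly 20 dB:
print(psnr([0.5, 0.5, 0.5, 0.5], [0.6, 0.6, 0.6, 0.6]))
```

SSIM and LPIPS, by contrast, are structural and learned perceptual similarity metrics respectively, which is why they move somewhat independently of PSNR in these tables.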
Summary: This paper is working on the problem of novel view synthesis with sparse input views. The authors present 3DGS-Enhancer to enhance the rendering quality and address the 3D view consistency problem using 2D video diffusion priors. The experimental results show that this work has achieved state-of-the-art performance in novel view synthesis enhancement. Strengths: 1. Writing quality is great, clean, and easy to follow. 2. To tackle the sparse view reconstruction problem, the idea of fine-tuning the original 3DGS representation instead of incorporating additional geometric constraints is novel and promising. 3. A new dataset is introduced in this work, which is useful for this research direction and the community. 4. The idea of leveraging video LDM for sparse view reconstruction in unbounded outdoor scenes is novel and effective. Weaknesses: Runtime is not discussed in this paper. I’m wondering how long it will take for one scene compared to the other baselines. Technical Quality: 4 Clarity: 4 Questions for Authors: The experiments are conducted almost entirely on the self-introduced dataset. Can 3DGS-Enhancer also surpass the other baselines on more commonly used outdoor unbounded datasets (e.g. the Mip-NeRF 360 dataset)? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1. Runtime is not discussed in this paper. I’m wondering how long it will take for one scene compared to the other baselines.** As shown in the table below, we estimate the per-scene runtime and rendering FPS of different methods on the DL3DV test set (3 views) with one NVIDIA A100 GPU. Our method takes 24.5 minutes for one scene, including 10.5 minutes for low-quality 3DGS training, 2.0 minutes for SVD inference, and 12.0 minutes for high-quality 3DGS training. Thank you for the helpful suggestion, we will include the efficiency analysis in the supplementary material.

| Method | Per-scene runtime &darr; | Rendering FPS &uarr; |
|------------------|--------|:------:|
| Mip-NeRF | 10.7h | 0.09 |
| RegNeRF | 2.5h | 0.09 |
| FreeNeRF | 3.8h | 0.09 |
| 3DGS | 10.5min | 100 |
| DNGaussian | 3.3min | 100 |
| 3DGS-Enhancer (ours) | 24.5min (LQ-3DGS 10.5min, SVD 2.0min, HQ-3DGS 12.0min) | 100 |

**Q1. The experiments are conducted almost entirely on the self-introduced dataset. Can 3DGS-Enhancer also surpass the other baselines on more commonly used outdoor unbounded datasets (e.g. the Mip-NeRF 360 dataset)?** Thank you for the constructive feedback, we conducted experiments on the Mip-NeRF 360 dataset, and the results are detailed below and in the attached PDF file in the Author Rebuttal section. Our model is trained on the DL3DV dataset, and evaluated on all 9 scenes of the Mip-NeRF 360 dataset. Our method significantly outperforms the baselines, although it has not been trained on this complex dataset. This indicates that our method has remarkable *cross-dataset generalization capability* for unbounded environments, thanks to the powerful 2D video diffusion priors. 
| Method | PSNR &uarr; (6 views) | SSIM &uarr; | LPIPS &darr; | PSNR &uarr; (9 views) | SSIM &uarr; | LPIPS &darr; |
|------------------|-------------|-------------|--------------|-------------|-------------|--------------|
| Mip-NeRF | 13.08 | 0.159 | 0.637 | 13.73 | 0.189 | 0.628 |
| RegNeRF | 12.69 | 0.175 | 0.660 | 13.73 | 0.193 | 0.629 |
| FreeNeRF | 12.56 | 0.182 | 0.646 | 13.20 | 0.198 | 0.635 |
| 3DGS | 11.53 | 0.144 | 0.651 | 12.65 | 0.187 | 0.607 |
| DNGaussian | 11.81 | 0.208 | 0.689 | 12.51 | 0.228 | 0.683 |
| 3DGS-Enhancer (ours) | **13.96** | **0.260** | **0.570** | **16.22** | **0.399** | **0.454** |

--- Rebuttal Comment 1.1: Comment: Thank you again for your time and effort in reviewing our work and providing the constructive comments. Please feel free to let us know if you have any further questions by August 13 AoE, we are more than happy to address them.
Rebuttal 1: Rebuttal: The authors thank all reviewers for the careful review and constructive feedback. We are encouraged that all four reviewers appreciate the novel idea and excellent experimental results of this work. We address all the raised concerns in the corresponding reviewer's rebuttal section. The code and dataset will be publicly available. **Mip-NeRF 360 Result (Cross-dataset Generalization Experiment):** Thanks to constructive feedback from Reviewers 19Ng and KbVQ, we conducted a cross-dataset generalization experiment on the Mip-NeRF 360 dataset, and the results are shown in the table below and the attached PDF file. Our video diffusion and STD models are trained on the DL3DV dataset, and evaluated on all 9 scenes of the Mip-NeRF 360 dataset. Results show that our method significantly outperforms the baselines, indicating it has remarkable cross-dataset generalization capability for unbounded environments, thanks to the powerful 2D video diffusion priors.

| Method | PSNR &uarr; (6 views) | SSIM &uarr; | LPIPS &darr; | PSNR &uarr; (9 views) | SSIM &uarr; | LPIPS &darr; |
|------------------|-------------|-------------|--------------|-------------|-------------|--------------|
| Mip-NeRF | 13.08 | 0.159 | 0.637 | 13.73 | 0.189 | 0.628 |
| RegNeRF | 12.69 | 0.175 | 0.660 | 13.73 | 0.193 | 0.629 |
| FreeNeRF | 12.56 | 0.182 | 0.646 | 13.20 | 0.198 | 0.635 |
| 3DGS | 11.53 | 0.144 | 0.651 | 12.65 | 0.187 | 0.607 |
| DNGaussian | 11.81 | 0.208 | 0.689 | 12.51 | 0.228 | 0.683 |
| 3DGS-Enhancer (ours) | **13.96** | **0.260** | **0.570** | **16.22** | **0.399** | **0.454** |

Pdf: /pdf/76303a8313e1d14adc7142077392ecdaee853416.pdf
QueST: Self-Supervised Skill Abstractions for Learning Continuous Control
Accept (poster)
Summary: The paper introduces an approach utilizing latent variable generative models in conjunction with FSQ quantization techniques to learn good representations of action sequences. It also proposes a prior network for autoregressive modeling at the level of behavioral representations. The superiority of the learned representations is validated across three distinct robotic environments. Additionally, the paper presents intriguing observations, such as the impact of causality on representation learning, and supports these findings with ablation studies. This paper is not substantially different from much of the previous relevant literature, and the innovation is somewhat weak. Strengths: - The overall paper is easy to read, with the description of the methodology being well-written. - The experimental content of the paper is robust, and the demonstrations clearly indicate a significant performance enhancement of the proposed method over the baseline. Weaknesses: - The overall pipeline that first learns abstract representations of behaviors and then models the behavior with an autoregressive model is common. I believe the main difference of this paper contains two points: (1) using FSQ to learn a discrete latent space; (2) implementing the encoder with a causal transformer instead of a non-causal transformer. So the innovations seem insufficient. - A lot of ablation studies are conducted. However, the related analysis is not sufficient. Technical Quality: 2 Clarity: 3 Questions for Authors: - Why is the causality of the encoder transformer so critical? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks much for your time spent reviewing and your thoughtful comments. We’ve uploaded an updated version of the paper to the website in case you’d like to review the changes we mention. > I believe the main difference of this paper contains two points: (1) using FSQ to learn discrete latent space; (2) implement the encoder with a causal transformer instead of a non-causal transformer. So the innovations seem not enough. Many recent works use discrete latent variable models as a mechanism to learn shared abstractions over continuous low-level skills. QueST, VQ-BeT and PRISE all do this and perform two-staged learning. However, in this work we propose several important architecture choices leading to QueST’s strong performance. Specifically, QueST encodes actions to a sequence of $n$ encodings (skill tokens) using a novel autoencoder that captures temporal correlation within an action sequence with causal convolution and masked self-attention layers. Like QueST, VQ-BeT performs temporal abstraction by encoding a sequence of actions into one state-independent latent vector. It encodes these actions to a single output encoding using an MLP which does not explicitly model any temporal correlation, followed by an MLP decoder and continuous offset predictor. Unlike QueST, VQ-BeT’s relatively small latent space limits its expressive capacity, and thus limits its ability to learn sharable representations between tasks, as evidenced by its worse few-shot performance, worse performance with longer action chunk sizes and reliance on continuous offset prediction (whereas QueST can achieve high success rate using only the output from its action decoder). PRISE performs temporal abstraction by learning discrete codes for state-action pairs and using BPE to group common token sequences into higher-level skills. 
However, BPE is known to suffer with evolving language, leading to a suboptimal character-level tokenization, and it might struggle to effectively encode new actions from unseen tasks. QueST gracefully handles this by encoding new actions as combinations of its learned latent codes, and its superiority in this regard is evidenced by its stronger few-shot performance even without finetuning the decoder, as is necessary for PRISE. > A lot of ablation studies are conducted. However, the related analysis is not sufficient. Below we mention detailed analysis which has also been added to the paper. - Replacing FSQ with VQ still outperforms VQ-BeT in a few-shot setting, suggesting that QueST’s superior performance is not only due to a better quantization scheme but also due to its architecture that flexibly maps an input sequence to multiple embeddings and allows for efficient transfer. - It’s tempting to ground the mapping between z-tokens and actions with observation tokens, with the intuition that z-tokens will define a coarse set of actions and observation tokens will aid finer action decoding. But we observe worse performance with this. We hypothesize that the reconstruction objective forces the encoder and decoder toward the most optimal quantization at the bottleneck layer, but with extra observation information the decoder might focus more on observation tokens, in turn hurting the quantization. This observation goes hand-in-hand with a closely related prior work, SPiRL [57], which tried the same ablation and found that a state-conditioned decoder hurts downstream RL. - We observe poorer performance in both multitask and few-shot settings with a conventional stage-1 autoencoder. This validates QueST’s cross-attention architecture that allows for attending to all z-tokens and maintaining causality at the same time. - We discuss the causality ablation in response to the next question. 
- Figures 1b and 1d in the global response PDF illustrate the impact of decoder finetuning in the few-shot IL setting. QueST outperforms all baselines in both benchmarks even without finetuning the decoder. Finetuning the decoder should not be necessary in the LIBERO-LONG setting, as each task is a combination of two tasks from LIBERO-90 (the pretraining set). This highlights QueST’s effectiveness in stitching trajectories using its learned skill space. In MetaWorld, we use the ML45 benchmark that is specifically designed to evaluate the transfer learning capabilities of Meta-RL algorithms, with unseen tasks being only slightly structurally similar to the pretraining tasks. QueST outperforms all baselines, though by a small margin and with a frozen decoder; this demonstrates QueST’s capability to effectively combine learned tokens for unseen action distributions. > Why is the causality of the encoder transformer so critical? We ablate causal masking of the encoder and decoder and observe that a fully-causal stage 1 is most optimal, and that a non-causal decoder does not hurt as much as a non-causal encoder does. This can be explained with a simplistic setting where the inputs to stage 1 are 2D trajectories of a point agent. Consider an anti-clockwise circular trajectory and an S-shaped one where the first half of the latter overlaps with the first half (semi-circle) of the former. When both of these trajectory sequences are fed to stage 1, a non-causal encoder will assign distinct sequences of z-tokens to the two trajectories. But a causal encoder will assign the same sequence of z-tokens to the first half of both trajectories and distinct tokens to the later parts. This allows the model to re-use the z-tokens corresponding to a semi-circle for creating other shaped trajectories that contain a semi-circle, for example C-shaped or infinity-shaped trajectories. Thus causal masking enables better sharing of skill tokens, which leads to improved performance.
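The prefix-sharing property argued above can be illustrated with a minimal NumPy sketch (a single attention head with no learned projections; purely illustrative, not the paper's architecture): two sequences that share a prefix receive identical causal-attention outputs over that prefix, so a causal encoder can assign them the same leading skill tokens.

```python
import numpy as np

def causal_self_attention(x):
    """Single-head self-attention with a causal (upper-triangular) mask.

    x: (T, d) sequence. Position t may only attend to positions <= t,
    mirroring the masked self-attention layers described in the rebuttal.
    Illustrative only: no learned projections, single head.
    """
    T, d = x.shape
    scores = x @ x.T / np.sqrt(d)                       # (T, T) attention logits
    mask = np.triu(np.ones((T, T)), k=1).astype(bool)   # True strictly above diagonal
    scores[mask] = -np.inf                              # block attention to the future
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x

rng = np.random.default_rng(0)
a = rng.normal(size=(6, 4))
b = a.copy()
b[4:] = rng.normal(size=(2, 4))  # two "trajectories" sharing the first 4 steps

out_a = causal_self_attention(a)
out_b = causal_self_attention(b)
# Outputs over the shared prefix are identical, so a causal encoder can
# assign the same skill tokens to the shared first half of both trajectories.
assert np.allclose(out_a[:4], out_b[:4])
```

A non-causal (unmasked) variant would let every position see the differing suffix, making even the prefix outputs diverge, which matches the z-token reuse argument above.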
Summary: - This work introduces a new method called QueST that uses a latent variable model to learn a set of discrete temporal action abstractions / motion primitives / skills for imitation learning. This is done by training an auto-encoder to encode a sequence of actions using a causality-preserving encoder, discretising the embedding using Finite Scalar Quantization (FSQ) to obtain a sequence of discrete latents, and then decoding the latents using a causality-preserving decoder to recover the sequence of actions. - A policy is learned by predicting the latent actions from a history of observations and a language instruction while keeping the latent decoder frozen. An exception to the latter is when fine-tuning a policy in a few-shot setting. At test time, top-k sampling is used to sample an optimal sequence of latents. - The method is validated on the LIBERO and Meta-World environments and compared to several recent approaches: ResNet-T, diffusion policy, ACT, VQ-BeT, and PRISE. Strengths: Originality: - This work is a nice combination of existing methods (latent action abstraction, causal encoders/decoders, and Finite Scalar Quantisation). - The use of causal encoders/decoders for latent abstraction is an interesting idea. Quality: - LIBERO and Meta-World are reasonable benchmarks for validating the proposed method. - The method shows good performance when compared against several recent baselines: diffusion policy, ACT, VQ-BeT, and PRISE. - The experimental section contains ablations for several key design choices: FSQ vs VQ, conditioning/not conditioning the auto-encoder on observations, mirroring the architecture of the encoder for the decoder, causal/non-causal encoders, and a sensitivity analysis for the auto-encoder downsampling factor and codebook size. - Results are reported for three random seeds for the experiments on LIBERO.
- Line 541-544: The paper preempts concerns about the Meta-World results due to issues with the scripted expert policies and updated results are provided at the paper website. Clarity: - The paper is generally well-written. - The videos on the project website provide a nice indication of qualitative behaviour. Significance: - Considering the good performance and the simplicity of the method, it seems reasonable that the proposed method will be adopted or further developed by the community, in particular if the code is released as stated in the NeurIPS paper checklist. Weaknesses: Originality: - Two of the baselines, VQ-BeT and PRISE, also learn temporal action abstractions. It would be nice to have a more detailed and clearer discussion of the differences between these three methods than is currently provided, e.g., differences and similarities could be summarised in a table. Quality: - There are no results for PRISE on Multitask LIBERO-LONG. - The paper website states: “Depending on the dataset, we also tune some key hyperparameters for the baselines”. This is vague and it is unclear why this was not done for every dataset. - It is not clear whether all the baselines are based on new experiments or whether some of the results are shown as reported in previous works. - There is no analysis of inference latency of the proposed method and the baselines, which is important for practical applications. - It might be interesting to have an ablation experiment that uses Residual VQ as used by VQ-BeT instead of FSQ to assess if this choice plays a significant role for the difference in performance. Clarity: - Some information about how the hyperparameters of the baselines were tuned are on the paper website, but important information like this should be in the main body of the paper or the appendix. 
- The paper website states that “To ensure fair comparison of different model architectures, we use same input modalities and same observation & task encoders for all baselines…”. This is important information that should be in the main body or the appendix of the paper. It is also unclear whether this only applies to the Meta-World experiments or also the LIBERO experiments. - It is not specified where the “provided scripted policies” for collecting demonstrations on Meta-World as mentioned on the paper website can be found. - Minor comment: Figure 3b is not described or mentioned in the text. Significance: - The proposed method feels a bit like an incremental evolution of VQ-BeT and PRISE. Even though these have only come out very recently, it would still be good to have a clearer positioning with respect to these two works as already mentioned above. Technical Quality: 2 Clarity: 3 Questions for Authors: - Are the updated results on Meta-World going to replace the current results in the main body of the paper? - Where can the “provided scripted policies” for collecting demonstrations on Meta-World as mentioned on the paper website be found? - Are all results for the baselines based on new experiments or are any of the results shown as reported in previous works? In the latter case, it would be useful for this to be indicated in the result tables. - How were the hyperparameters tuned for the proposed method, for the ablations, and for baselines? - How does the proposed method compare to the baselines in terms of latency? - Is the code going to be released publicly? Updates: - Presentation: 2 -> 3 - Score: 4 -> 6 Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: - It would be nice to have a discussion of latency considerations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time spent reviewing and your thoughtful comments. We’ve uploaded an updated version of the paper to the website in case you’d like to review the changes we mention. > Two of the baselines, VQ-BeT and PRISE, also learn temporal action abstractions. It would be nice to have a more detailed and clearer discussion of the differences between these three methods Many recent works use discrete latent variable models as a mechanism to learn shared abstractions over continuous low-level skills. QueST, VQ-BeT and PRISE all do this and perform two-staged learning. However, in this work we propose several important architecture choices that lead to QueST’s strong performance. Specifically, QueST encodes actions to a sequence of $n$ encodings (skill tokens) using a novel autoencoder that captures temporal correlation within an action sequence with causal convolution and masked self-attention layers. Like QueST, VQ-BeT performs temporal abstraction over state-independent action sequences, but it compresses each sequence into a single output encoding using an MLP that does not explicitly model any temporal correlation, followed by an MLP decoder and a continuous offset predictor. Unlike QueST, VQ-BeT’s relatively small latent space limits its expressive capacity, and thus its ability to learn sharable representations between tasks, as evidenced by its worse few-shot performance, worse performance with longer action chunk sizes, and reliance on continuous offset prediction (whereas QueST achieves a high success rate using only the output of its action decoder). PRISE performs temporal abstraction by learning discrete codes for state-action pairs and using BPE to group common token sequences into higher-level skills. However, BPE is known to struggle as the underlying "language" evolves, degrading to a suboptimal character-level tokenization, and it might struggle to effectively encode new actions from unseen tasks.
QueST gracefully handles this by encoding new actions as combinations of its learned latent codes, and its superiority in this regard is evidenced by its stronger few-shot performance even without finetuning the decoder, as is necessary for PRISE. > There are no results for PRISE on Multitask LIBERO-LONG. We were unable to reproduce the PRISE results when we ran their code, so we were only able to include the results presented in their paper. > The paper website states: “Depending on the dataset, we also tune some key hyperparameters for the baselines”. This is vague… Below we summarize the information about hyperparameter tuning that we’ve added to the appendix: - ResNet-T: We observe optimal performance with a transformer trunk with 6 layers and a hidden dimension of 256 and use those parameters for all results. We use an observation history of 10 timesteps. - Diffusion Policy: We tuned the U-Net hidden dimension across [256, 256, 512, 512, 1024] but did not observe any performance gains. We also tune the prediction and execution horizons across (16, 32) and (8, 16) respectively. - VQ-BeT: We tune the encoder MLP dimension across (128, 256, 512) with (2, 4) layers and observe worse reconstruction loss as capacity increases. We use a residual-VQ configuration giving a codebook of similar size to QueST’s. We use an observation window size of 10 and tune the action window size across (1, 5, 32). - ACT: We tune the model hidden dimension (256, 512), chunk size (16, 32), and KL weight (10, 100). > It is not clear whether all the baselines are based on new experiments or whether some of the results are shown as reported in previous works. We were able to successfully implement Diffusion Policy, VQ-BeT, ResNet-T and ACT on both benchmarks. Thus, we ran and reported those results as shown in the global response PDF. We were unable to successfully recreate the PRISE results, so we report the results from their paper.
> There is no analysis of inference latency of the proposed method and the baselines… How does the proposed method compare to the baselines in terms of latency? Our pipeline runs at 33Hz, which is more than sufficient for vision-based robot control where most camera systems run at 30 fps. For comparison, our implementations of ResNet-T, VQ-BeT, ACT and Diffusion Policy run at 100Hz, 100Hz, 50Hz, and 12Hz respectively. We’ll add this detail to the paper. > It might be interesting to have an ablation experiment that uses Residual VQ as used by VQ-BeT instead of FSQ… We performed an ablation with vanilla VQ of the same codebook size as VQ-BeT’s Residual-VQ and observed full codebook usage throughout training. Hence, we don't expect Residual-VQ performance to differ much from the vanilla-VQ results, as Residual-VQ is mainly used to mitigate codebook collapse, which doesn’t happen in our case. Moreover, the VQ-BeT paper doesn't mention whether their VQ ablation used the same effective codebook size, which might explain the poor performance in their ablation experiments. It would still be interesting to validate this hypothesis, but due to time constraints it might not be viable by the end of the discussion period. > The paper website states that “To ensure fair comparison of different model architectures, we use the same input modalities and same observation & task encoders for all baselines…”. This is important information that should be in the…paper. It is also unclear whether this only applies to the Meta-World experiments or also the LIBERO experiments. We’ve moved this detail to the appendix and will add a line confirming that this is the case for all experiments. > It is not specified where the “provided scripted policies”…can be found. They can be found in the official Metaworld codebase. > Figure 3b is not described or mentioned in the text. Fixed > Are the updated results on Meta-World going to replace the current results in…the paper?
Yes > Is the code going to be released publicly? This will happen alongside the camera-ready version of the paper. --- Rebuttal Comment 1.1: Comment: Thank you for the rebuttal. > “Unlike QueST, VQ-BeT’s relatively small latent space limits its expressive capacity” Have you considered running an experiment to examine how the performance of VQ-BeT changes when increasing the size of the latent space? > “BPE is known to suffer with evolving language leading to a suboptimal character-level tokenization, and it might struggle to effectively encode new actions from unseen tasks” Could you elaborate on to what extent this is an issue? Temporal action abstraction should allow the reuse of existing abstractions for new tasks, i.e., it might not be necessary to encode “new actions”. > “We were unable to successfully recreate PRISE results so we report the results from their paper” Minor comment: I would suggest adding a footnote or similar to indicate that the results shown for PRISE are taken from the original PRISE paper. --- Reply to Comment 1.1.1: Comment: Thank you for your suggestion and further questions. > Have you considered running an experiment to examine how the performance of VQ-BeT changes when increasing the size of the latent space? There is a slight misunderstanding here. If by increasing the latent space you mean trying a larger codebook size, then yes: we do try an effective codebook size of 4096 for VQ-BeT but did not observe any performance gain (<1% variation in LIBERO-90). This follows what the VQ-BeT authors report for their codebook size ablations (Table 12 in the VQ-BeT paper). What we meant by a "smaller latent space" is the fact that VQ-BeT concatenates input actions and encodes the sequence to just one single latent encoding using an MLP. A sampled action sequence might contain multiple motion primitives of variable length and start point, and capturing them with a single encoding is limiting, as it restricts abstraction at different levels of granularity.
This is validated by the poorer (-14%) performance of VQ-BeT with a larger chunk size of 32. QueST flexibly captures this variability within 'n' encodings using its encoder, which is specifically designed to model temporal correlation within input actions. With a codebook size of C, QueST’s effective latent space is C^n while that of VQ-BeT is C, hence a smaller latent space. While we report QueST results for chunk size 32, we observed very little variation (<1% in LIBERO-90) with sizes 16, 48 and 64, indicating robustness to this hyperparameter (a major issue in tuning chunking-based methods like ACT and MT-ACT). > Could you elaborate on to what extent this is an issue? Temporal action abstraction should allow the reuse of existing abstractions for new tasks, i.e., it might not be necessary to encode “new actions”. The reuse of existing abstractions indeed happens, hence PRISE reports a non-zero success rate in the few-shot setting. However, new tasks will certainly contain new action sub-sequences that the model has not seen before (e.g., stitching sequences between two tasks in LIBERO-LONG). BPE relies on frequency statistics from the pretraining data and hence might lead to inefficient tokenization of such new sub-sequences. This is primarily why decoder finetuning is necessary in PRISE. In contrast, QueST is more end-to-end, as it lets the encoder handle temporal abstraction in the encoding phase itself. Our few-shot results without decoder finetuning show that such a unified approach learns more generalized abstractions than the baselines and can more effectively represent new action sequences in unseen tasks. > Minor comment: I would suggest adding a footnote or similar to indicate that the results shown for PRISE are taken from the original PRISE paper. We'll update this in the main paper. We hope we have been able to address your questions in the above clarifications.
Kindly let us know if you have additional questions or concerns that stand between us and a higher score.
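The capacity comparison above (an effective latent space of C for a single-code scheme versus C^n for a sequence of n skill tokens) can be made concrete with a short sketch; the specific values of C and n below are illustrative assumptions, not the configurations reported in either paper.

```python
# Effective latent space of a single-code scheme (one codebook entry per
# action chunk, VQ-BeT-style) vs. a token-sequence scheme (n skill tokens
# per chunk, QueST-style). C and n are illustrative values only.
C = 1024  # codebook size (assumed for illustration)
n = 8     # skill tokens per action chunk (assumed for illustration)

single_code = C          # C distinct abstractions per chunk
token_sequence = C ** n  # C^n distinct token sequences per chunk

assert token_sequence == single_code ** n
print(f"single code: {single_code:,} vs token sequence: {token_sequence:.2e}")
```

Even modest values of n make the token-sequence space astronomically larger, which is the sense in which the single-encoding design is described as having a "smaller latent space".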
Summary: The paper aims to capture skill abstractions through training a latent space through encoding and decoding actions. The resulting latent space is used for training a policy that converts observations into the latent space, and uses the trained decoder to output actions. The paper conducts experiments on manipulation suites (LIBERO and MetaWorld) and demonstrates improvement over existing approaches on few-shot and multitask setting. Strengths: - The idea makes use of the structure of robotic manipulation tasks, where the sequence of actions are often similar for different items and tasks. Notably, the action decoder is not conditioned on the states to learn a latent space that is invariant to the low-level state information. - The paper is mostly well written Weaknesses: I am happy to increase my score if these points are addressed. **Comments** - Should probably include analysis on what the latent $z$'s ended up learning. Do they actually have some temporal abstraction going on? Can we associate them? It appears that this is in the website but I believe this should be in the main paper as one premise is that the latent representation is leveraging the structure of the actions. - This work appears to rely on the fact that the demonstrations have some form of structures---probably a limitation. Perhaps in this experimentation setting it is a fair assumption, but what if we have partially observed setting (e.g. items with dynamic inertial properties). - It is not clear to me how this approach handles multimodality (in the sense of action distributions?) - Ablation on "causality": I feel the word "causality" is misused because this is simply a mask enforcing whether or not to look at the future data. Is this not still modelling correlation only? How is this causal exactly? Technical Quality: 2 Clarity: 3 Questions for Authors: **Questions** - Equation 1, perhaps write out what round_std is. 
- Page 5, lines 179-180: What is the intuition of using cross attention between positional embedding and skill tokens but not self-attention with positional encoding? - Page 5, lines 186-187: Do the authors think that this is because the actions can be invariant (e.g. in velocity control the sequence of action is likely similar if we are reaching downwards to the items)? How would stochasticity of the dynamics play into learning this? - Figure 2: Any intuition why few-shot is better than multi-task for QueST? **Possible typos** - Page 8, line 305: viz. means visualized? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: - The current application is only on robotic manipulation but it will be interesting to extend this to other domains Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time spent reviewing and your thoughtful comments. We’ve uploaded an updated version of the paper to the website in case you’d like to review the changes we mention. > Should probably include analysis on what the latent 𝑧’s ended up learning. Do they actually have some temporal abstraction going on? Can we associate them? Thanks for the suggestion. First, because the autoencoder learns to compress a sequence of T actions into a smaller sequence of n latent codes (n < T), there is automatically some temporal abstraction. Next, the t-SNE plots on the website clearly show how similar motions are aligned with one another. On the website we visualize an example with two tasks for lifting a bowl and placing it on a plate (on the right) and on a drawer (on the left). The z-embeddings align for the motion of approaching and placing the bowl downwards, while deviating for moving leftwards and rightwards, demonstrating how QueST embeds diverse motion primitives into discrete tokens in a semantically meaningful way. We would also like to point out that there is no explicit loss for semantic alignment; it is the reconstruction objective along with the bottleneck layer that guides the learning to extract shared representations. We’ll move these plots and discussion to the appendix. > This work appears to rely on the fact that the demonstrations have some form of structures-probably a limitation. Perhaps in this experimentation setting it is a fair assumption, but what if we have partially observed setting While we are making this assumption of structure as an inductive bias for modeling, we do not train the model on any special datasets or use any explicit labels for primitive skills. We train and evaluate on standard datasets that many recent behavior cloning works use, and our architecture is designed to capture the structure of these datasets.
Motions involved in everyday manipulation tasks naturally have commonalities that can be shared and reused across tasks. This is not an assumption but a property of the data that our model most effectively leverages. Regarding partial observability, you are correct that we didn't have to deal with partial observability in our experimental settings. The main contribution of our work is a method to tokenize continuous actions. That being said, our method can easily be adapted to handle partial observability. For example, one popular way to do so is to simply pass in a stack of historical frames, and it would be very simple to pass several observation vectors to our policy prior as opposed to the one we pass in now. > It is not clear to me how this approach handles multimodality (in the sense of action distributions?) First, the policy prior outputs latent actions as a categorical distribution, and this inherently multimodal distribution over latent actions lends itself well to multimodal data. This phenomenon has been studied in the past in papers such as BeT [56] and VQ-BeT [34]. The Diffusion Policy paper shows how BeT suffers from mode collapse due to a lack of temporal consistency, which both VQ-BeT and Diffusion Policy resolve by probabilistically predicting a chunk of actions. Likewise, QueST also predicts a categorical distribution over skill tokens (this selects a mode according to the training distribution) and decodes a chunk of actions (this brings temporal consistency). This effect is studied in several prior works [34, 56, 15, 72]. > Ablation on "causality": I feel the word "causality" is misused... Is this not still modeling correlation only? How is this causal exactly? You are right that causality here means not looking at future data. There seems to be a namespace collision with the word. In the machine learning literature, the word causality is used to refer to precisely the type of masking we are using in our architecture.
For our convolutions we use the causal convolutions from WaveNet [63], and for the transformer we use the square causal mask defined in PyTorch, TensorFlow and HuggingFace. However, we understand how this might cause confusion, so we’ll replace the word ‘causality’ with ‘causal masking’, emphasizing how this boosts performance. > Equation 1, perhaps write out what round_std is. It is nearest-integer rounding with straight-through gradients. We’ll update the manuscript. > Page 5, lines 179-180: What is the intuition of using cross attention between positional embedding and skill tokens but not self-attention with positional encoding? We follow the original transformer decoder architecture, which consists of alternating masked self-attention and cross-attention layers. Thus we self-attend to positional embeddings with a causal mask, as attending to future positional embeddings won’t provide any extra information for the i-th embedding. > lines 186-187: Do the authors think that this is because the actions can be invariant…? How would stochasticity of the dynamics play into learning this? Yes, state-independent abstractions are generalizable, as many distinct tasks share common motion primitives, especially with velocity control, which offers further invariances. This is more performant than the state-conditioned counterpart, as it forces action information through the bottleneck layer, preventing the decoder from ignoring actions in favor of states, which would hurt the quantization. The closely related prior work SPiRL [57] tried the same ablation with similar results. The stochasticity of the dynamics is accounted for at execution time. The model predicts actions for the next 1-2 seconds, executing them in a receding-horizon fashion and intermittently replanning to account for stochastic dynamics. > Figure 2: Any intuition why few-shot is better than multi-task for QueST?
The few-shot version does better because it is also trained on the data from the LIBERO-90 suite, and it transfers action representations to the tasks in the LIBERO-LONG suite. We'll add details to the paper to make this more clear. --- Rebuttal Comment 1.1: Comment: Thank you for the response. In short, I have raised my score. Regarding the analysis: I feel the main paper should include this because it verifies the claim that the model is able to learn skill abstractions. Even if the "performance" is not as good, I believe it would still deliver the point. Regarding structure: Yes, I meant that tasks like manipulation/physical tasks have inherent structure, and the method itself is able to leverage this through the data. Would this method still work if the dataset is not stored as trajectories? If I understand correctly this method requires temporal data. --- Reply to Comment 1.1.1: Comment: Thanks for the feedback and for raising your score. Your feedback has been a great help in improving the presentation and clarity of the paper. > Regarding the analysis… This is a good point. We’ll move a subset of these plots to the main body of the paper, along with some analysis describing how they demonstrate QueST’s capability to learn shared skills across several tasks. > Regarding structure… Thank you for raising this concern. An example of a dataset without temporal/trajectory data is the ARNOLD [1] benchmark, which has the current observation as state (s) and the next gripper keypoint as action (a) data. While our proposed method does require temporal data, we believe that the same architecture could work for (s, a) pairs. In fact, we initially did test the architecture on ARNOLD, where the input to the encoder was (s, a) pairs and the decoder output was actions (a), and were able to achieve fairly low reconstruction loss. However, we currently leave this application of the architecture to future work.
Since we haven’t evaluated QueST rigorously in such a setting, we’ve added a sentence to the Problem Setting section (3.1) emphasizing this assumption. Additionally, we’d like to point out that this assumption is extremely common in recent behavior cloning literature. All of our SOTA baselines (VQ-BeT, PRISE, ACT, Diffusion Policy) make a similar assumption, and several recent large scale robotics datasets (Open-X Embodiment, DROID, BridgeData, etc) and popular behavior cloning benchmarks (Robomimic, Mimicgen, LIBERO, Metaworld, CALVIN, RLBench, Franka Kitchen, D4RL, etc.) are compatible with this assumption. Thus, while this assumption may limit the method’s applicability to some settings, it enables scalability and still fits in well with the field as a whole. Thanks again for your valuable feedback. We would appreciate hearing about any further limitations we can address in order to further increase the score. [1]: Gong, Ran, et al. "ARNOLD: A benchmark for language-grounded task learning with continuous states in realistic 3D scenes." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2023.
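The round_std operation clarified in the thread above (nearest-integer rounding with straight-through gradients) can be sketched as follows. This is a minimal illustration of the forward/backward behavior, not the paper's implementation; in an autograd framework the same trick is usually written as z + stop_gradient(round(z) - z).

```python
import numpy as np

def round_ste(z):
    """Straight-through rounding: the forward pass rounds to the nearest
    integer; the backward pass treats rounding as the identity, so
    gradients flow through unchanged (the true derivative of round is
    zero almost everywhere, which would block learning).
    Returns the forward value and a manual vector-Jacobian product for
    illustration.
    """
    forward = np.round(z)
    def vjp(grad_out):
        # Backward pass: identity, not d(round)/dz.
        return grad_out
    return forward, vjp

z = np.array([-1.2, 0.4, 2.7])
q, vjp = round_ste(z)
assert np.array_equal(q, np.array([-1.0, 0.0, 3.0]))      # forward: rounded values
assert np.array_equal(vjp(np.ones(3)), np.ones(3))        # backward: pass-through
```

In FSQ, each latent dimension is first bounded (e.g. with a tanh-like squashing) and then rounded this way, so the set of reachable rounded vectors forms an implicit codebook without any learned codebook parameters.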
Summary: This work develops a novel framework for learning generalizable skills from demonstration data. The authors' model uses a quantized discrete latent variable model that compresses skills into a sequence of latent variables and predicts temporal sequences of actions. Their approach decodes skills by cross-attending the sequence of latent tokens against fixed positional encodings, which differs from previous works that typically condition the decoder on states and actions as inputs. In their experimental evaluations on several benchmarks (LIBERO and MetaWorld), the authors demonstrate that their skill learning framework performs better than alternatives, and additional ablations highlight how quantization parameters such as the codebook size affect performance. Strengths: The paper is well-written and explains the authors' framework in precise detail. It was interesting to see that a state-less sequence combined with cross-attention could yield such good performance. Given the empirically solid performance and the authors' attention to ablating relevant factors of their system, the paper addresses the problems described and justifies the proposed skill model. Weaknesses: The only major weakness of the authors' work is the limited evaluations. If the authors can justify using just three seeds across experiments, that would build more confidence. Otherwise, as this system targets robot learning, it would have been good to see results on a real robotic system instead of just in simulation. The authors use large transformer models, so latency concerns could be relevant if their system is computationally slow. We are also concerned with the lack of error bars in Figure 4, which appears to use only a single seed for the ablation experiments, casting doubt on any conclusions the authors draw from these results.
The authors should clarify these details and run additional experiments to show the robustness of their results if they have not. Technical Quality: 3 Clarity: 3 Questions for Authors: - How many seeds were used in the Figure 4 experiments? - If the latent variables are used as key and query values, is it fair to say the generated skills can be imagined as some interpolation between a subspace of the fixed-sized vectors constructed by the positional encodings? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The major limitation we see is the lack of real robot experiments using the authors' system. For real robotic applications, such evaluation is necessary for such methods to be accepted for deployment in the real world. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your time spent reviewing and your thoughtful comments. We’ve uploaded an updated version of the paper to the website in case you’d like to review the changes we mention. > The only major weakness of the author’s work is the limited evaluations. If the authors can justify using just three seeds across experiments, that would build more confidence. For behavior cloning settings, which tend not to show significant variation across random seeds, performing experiments with comparatively low numbers of random seeds is fairly common and accepted in the literature. For example, BAKU and VQ-BeT [34] only use 1 seed per experiment. ACT [72], Diffusion Policy [15] and 3D Diffusion Policy use 3 seeds, while PRISE [73] uses four. For our multitask experiments, we now report 4 seeds for LIBERO and 5 seeds for MetaWorld. As evidenced by the error bars, these results have very low variance (<1%) across seeds for both benchmarks. Hence, such a small number of seeds is sufficient to convincingly demonstrate the statistical significance of our reported results. For our few-shot experiments, which have more variance across random seeds, we’ve run further random seeds for the LIBERO benchmark. Specifically, we run three few-shot training seeds for each of the three pretraining seeds, resulting in a total of 9 seeds. Please see the updated results in the global response PDF; in summary, QueST outperforms the best-performing baselines by 14% (absolute). Since variance across pretraining seeds is very low (following the multitask results) and all stage-2 and decoder parameters are finetuned, the few-shot results have very low reliance on pretraining seeds, and hence our seeding methodology is justified. For MetaWorld, we report results for both settings across 5 seeds and observe all methods to perform comparably, with QueST slightly better than the others.
In order to more rigorously verify these assertions, we've performed a t-test, whose results are contained in the PDF response. In summary, we see statistically significant improvements in performance across all benchmarks except MetaWorld fewshot. > The latency concerns could be relevant if their system is computationally slow. Our pipeline runs at 33Hz, which is more than sufficient for vision-based real-robot control where most camera systems run at 30 fps. For comparison, our implementations of Resnet-T, VQ-BeT, ACT and Diffusion Policy run at 100Hz, 100Hz, 50Hz, and 12Hz respectively. We'll add this detail to the paper under a new subsection in section 5. > It would have been good to see results on a real robotic system instead of just in simulation. We agree! Unfortunately we do not have the capacity to run real robot experiments at this time, but this is an important limitation we'll add to the limitations section of the paper. > Regarding lack of error bars in Figure 4. How many seeds were used in Figure 4 experiments? Unfortunately this is a very computationally demanding experiment to run, and since our university-provided compute is swamped with other authors working on NeurIPS rebuttals, we might not be able to show more seeds for this experiment by the end of the discussion period. That being said, we will add two further random seeds before the camera-ready version is released. While this is an unfortunate circumstance, hopefully you'll agree that the lack of seeds in this experiment doesn't meaningfully detract from our overall claims about the effectiveness of QueST in multitask and fewshot settings. > If the latent variables are used as key and query values, is it fair to say the generated skills can be imagined as some interpolation between a subspace of the fixed-sized vectors constructed by the positional encodings? Not quite.
Since the decoder cross-attends to the latent skill tokens, their Key vectors (K) and Value vectors (V) are used with Query vectors (Q) from the positional encodings. Thus the output actions (skills) lie in the space spanned by the value vectors of the skill tokens. Additionally, the softmax and GeLU non-linearities make the relationship more complex than simple interpolation. A more accurate description might be that the output actions are a weighted combination of transformed vectors derived from the skill tokens, where the weights are contextualized based on positional encodings (for temporal correlation).
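The relationship described above (queries from positional encodings, keys/values from skill tokens) can be sketched in a few lines of numpy. This is an illustrative single-head cross-attention with random weights, not QueST's actual implementation; all shapes and names here are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(pos_enc, skill_tokens, Wq, Wk, Wv):
    """Queries come from positional encodings; keys/values from skill tokens."""
    Q = pos_enc @ Wq        # (T, d) one query per decoded action step
    K = skill_tokens @ Wk   # (S, d)
    V = skill_tokens @ Wv   # (S, d)
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (T, S) attention weights
    # each output row is a convex combination of the rows of V
    return attn @ V

rng = np.random.default_rng(0)
d = 8
pos_enc = rng.normal(size=(4, d))   # T = 4 action steps to decode
skills = rng.normal(size=(3, d))    # S = 3 latent skill tokens
Wq, Wk, Wv = [rng.normal(size=(d, d)) for _ in range(3)]
out = cross_attention(pos_enc, skills, Wq, Wk, Wv)
```

Because each output row is a convex combination of the rows of `V`, the outputs lie in the span of the value-projected skill tokens, with the non-linearity entering only through the softmax weights, matching the "space spanned by the value vectors" point above.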
Rebuttal 1: Rebuttal: We are grateful for the insightful feedback from all reviewers. The reviewers have recognized the novelty of our approach in modeling the inherent structure of manipulation action data through temporal correlation and causal-masking. A particularly noteworthy aspect of our research is the learning and decision-making framework based on state-independent abstractions, which has garnered significant interest. Our comprehensive evaluation, spanning 145 manipulation tasks across 3 diverse benchmarks and 3 distinct settings, against 5 established baselines, demonstrates the robustness, versatility and superiority of our method. We are encouraged by the reviewers' assessment that QueST's exceptional performance and simplicity make it a promising candidate for wider adoption within the community. Below we summarize some common concerns: 1. **Innovation/unique contribution**: QueST introduces a novel autoencoder architecture that flexibly captures diverse, variable-length motion primitives by representing them using a sequence of discrete codebook entries (skill tokens). It does so by modeling temporal correlation in action sequence data, resulting in shareable and transferable abstractions, leading to superior multitask and fewshot performance. 2. **Latency**: Our pipeline runs at 33Hz, which is more than sufficient for vision-based real-robot control where most camera systems run at 30 fps. For comparison, our implementations of Resnet-T, VQ-BeT, ACT and Diffusion Policy run at 100Hz, 100Hz, 50Hz, and 12Hz respectively. 3. **Latent space analysis**: On our project website, we provide t-SNE visualizations of the skill token embeddings. These visualizations demonstrate clear alignment of similar motion primitives (such as approaching, picking, and placing) across diverse tasks throughout the rollouts. Notably, this coherent skill-space emerges during stage-1 training, without any explicit skill or task labels, or semantic loss functions.
The semantic consistency we observe is a direct result of our carefully designed architecture, which inherently promotes and facilitates the sharing of representations across tasks. This emergent structure in the latent space underscores the effectiveness of our approach in flexibly capturing generalized low-level skills without task-specific supervision. 4. **Limited evaluation/statistical significance**: We report LIBERO and MetaWorld multitask results on 4 and 5 seeds respectively. These results show very low variance (as per the error bars), demonstrating strong statistical significance. We've expanded our fewshot evaluation to further support our findings: the few-shot LIBERO results now encompass 9 seeds, while the few-shot MetaWorld results utilize 5 seeds throughout. Moreover, we've performed t-tests across all our results to rigorously validate their statistical significance. Please check out the PDF attached with this response for detailed results. Additionally, we would like to point out that QueST performs almost identically with and without decoder finetuning, with both variants still outperforming all baselines in the fewshot setting on both benchmarks. This suggests that QueST stage-1 can effectively extrapolate in the learned skill space and combine the skill tokens to generalize to unseen action sequences. This highlights the potential for QueST's encoder (decoder) as a universal tokenizer (detokenizer) for continuous actions once trained on a large enough dataset like OpenX Embodiment. This semantically-sound, task-agnostic, temporally abstracted tokenization can better facilitate the learning of large behavior models like RT2 and OpenVLA as compared to their currently used naive discretization schemes. Pdf: /pdf/f3b1b5a0d4da7b81f959de079b01caafd4aab7fe.pdf
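The per-seed significance testing mentioned in the rebuttal can be sketched as follows, assuming an unpaired Welch t-test on per-seed success rates. The numbers below are hypothetical placeholders for illustration, not the paper's results.

```python
import numpy as np

def welch_t(a, b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    va, vb = a.var(ddof=1) / len(a), b.var(ddof=1) / len(b)
    t = (a.mean() - b.mean()) / np.sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

# hypothetical per-seed success rates (%), one value per random seed
quest = [88.1, 87.6, 88.4, 87.9]
baseline = [74.0, 73.5, 74.3, 73.8]
t, df = welch_t(quest, baseline)
```

With low variance across seeds, even a small seed count yields a very large t statistic, which is the argument the rebuttal makes for why few seeds suffice here.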
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
DenoiseRep: Denoising Model for Representation Learning
Accept (oral)
Summary: This paper proposes DenoiseReID to improve feature discriminability via joint feature extraction and denoising, in which FEFDFA is developed to merge parameters of the denoising layers into the embedding layers. Experimental results show the proposed DenoiseReID improves performance. Strengths: 1. The proposed joint representation learning and denoising process. 2. The proposed FEFDFA is a computation-efficient algorithm. Weaknesses: The authors solved all my questions. Technical Quality: 4 Clarity: 4 Questions for Authors: N/A Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: **What constitutes this noise?** A1: We appreciate your detailed review. The experimental results of Table 3 (FEFDUF) empirically demonstrate the hypothesis that "the features obtained by backbone extraction are noisy". FEFDUF includes a well-trained person ReID model and a denoise model, which takes features of the ReID model as input and predicts their noise. During the training stage, the ReID model is always frozen, and the denoise model is trained under the normal DDPM process (i.e. adding Gaussian noise to a feature, taking the noised feature as input and predicting the original Gaussian noise). During the inference stage, given images, we first extract features A with the well-trained ReID model, then put the features into the denoise module to predict their noise A'. Finally, we found that A-A' performs stably better than A. We will add the analysis above to the revised manuscript. &emsp; Q2: **The authors should evaluate the proposed method on these datasets instead of conventional ones.** A2: Thank you for the valuable suggestion. We validate our proposed FEFDFA on Occluded-Duke [1], Occluded-ReID [2] and Partial-ReID [3]. These datasets include person images occluded by miscellaneous obstacles such as vehicles, trees and so on, making them more challenging. We will add the experimental results to the revised manuscript. The experimental results are as follows, using mAP as the evaluation metric:

| Dataset | Partial-ReID | Occluded-Duke | Occluded-ReID |
|:-------|:------------:|:------------:|:------------:|
| TransReID | 72.3% | 59.5% | 71.2% |
| TransReID+FEFDFA | 73.5% (↑1.2%) | 60.4% | 72.0% |

Q3: **The authors lack an analysis of the model's complexity, including the number of parameters and training time.** A3: Thanks for your valuable feedback. Our proposed method takes LITTLE extra parameters and training time. The reason is that the denoising layers employ simple linear layers, so they are very light compared with a heavy vision backbone (e.g. ViT, ResNet).
For example, the parameter count of ViT-Small is 22,313,320, while the sum of the parameters of the denoising modules is only 3,538,944, an additional increase of about 15.8%. The detailed experimental results are as follows. Please note that these are training-time parameters; our proposed method incurs no extra latency during the test stage.

| Backbone | Baseline | Ours | Increase |
|:-------|:------------:|:------------:|:------------:|
| ViT-Small | 22,313,320 | 25,852,264 | 15.8% |
| ViT-Base | 89,781,544 | 103,937,320 | 11.6% |

Q4: **The performance gains are minimal.** A4: Thanks for your valuable feedback. Our proposed method is test-time computation-free (FREE LUNCH!) and scalable to various baselines and downstream tasks, which is proven to be effective on CNN series (e.g. ResNet50), Transformer series (e.g. ViT, Vmamba), Person Re-ID (MSMT, Duke), Vehicle Re-ID (VehicleID), and Image Classification (ImageNet, CUB). The latest experimental results on CLIP-ReID show further improvement on MSMT (CLIP-ReID: 75.8% vs. CLIP-ReID+OURS: 76.5% for mAP, see the table below). The code will be released after the manuscript is accepted. We will add the experimental results to the revised manuscript. &emsp; Q5: **The authors should verify the effectiveness of the proposed method using the visual encoder in CLIP as the backbone.** A5: Thanks for your insightful suggestion. CLIP is a very strong vision-text encoder, which is trained with 400 million data. Beyond CLIP, CLIP-ReID pioneeringly adapts CLIP, a zero-shot classifier, to ReID, a fine-grained image retrieval task. Our proposed FEFDFA is model-free, thus it can be easily applied to CLIP-ReID without any modification. The experimental results on CLIP-ReID show stable improvement on MSMT (CLIP-ReID: 75.8% vs. CLIP-ReID+OURS: 76.5% for mAP). The code will be released if the manuscript is accepted. We will add the experimental results to the revised manuscript.
The experimental results are as follows, with mAP as the evaluation metric:

| Dataset | DukeMTMC | MSMT | Market-1501 |
|:-------|:------------:|:------------:|:------------:|
| CLIP-REID | 83.1% | 75.8% | 90.5% |
| CLIP-REID+FEFDFA | 83.9% (↑0.8%) | 76.5% (↑0.7%) | 91.1% |

[1] Jiaxu Miao, Yu Wu, Ping Liu, Yuhang Ding, and Yi Yang. Pose-guided feature alignment for occluded person re-identification. In ICCV, pages 542–551, 2019. [2] Jiaxuan Zhuo, Zeyu Chen, Jianhuang Lai, and Guangcong Wang. Occluded person re-identification. In ICME, pages 1–6. IEEE, 2018. [3] Wei-Shi Zheng, Xiang Li, Tao Xiang, Shengcai Liao, Jianhuang Lai, and Shaogang Gong. Partial person re-identification. In ICCV, pages 4678–4686, 2015. --- Rebuttal 2: Title: Comment by Authors Comment: Hi, dear reviewer, we hope our responses have solved your concerns. If you still have some concerns, please feel free to raise them here and we will respond as soon as possible. --- Rebuttal 3: Comment: The authors solved all my questions. Overall, the motivation of this paper is clear, the idea is novel, the proposed method is simple yet effective, and the writing is satisfactory. --- Rebuttal Comment 3.1: Title: Responses by Authors Comment: We thank the reviewer for the kind comment. We will keep improving the proposed method and apply it to more downstream tasks in the future.
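The inference-time denoising procedure described in A1 above (extract features A, predict their noise A', retrieve with A-A') can be sketched as follows. Linear stand-ins replace the real frozen ReID backbone and the trained denoise module; all names and shapes here are hypothetical, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 16

# Stand-in for the frozen, well-trained ReID backbone (a hypothetical linear map).
W_feat = rng.normal(size=(d, d))
def extract(x):
    return x @ W_feat

# Stand-in for the DDPM-style denoise module that predicts the noise in a feature.
W_noise = 0.1 * rng.normal(size=(d, d))
def predict_noise(feat):
    return feat @ W_noise

x = rng.normal(size=(1, d))   # a (hypothetical) image embedding
A = extract(x)                # features from the frozen backbone, assumed noisy
A_prime = predict_noise(A)    # predicted noise A'
denoised = A - A_prime        # features used for retrieval: A - A'
```

Because the noise prediction here is linear in the feature, `A - A'` equals `A @ (I - W_noise)`, which is exactly why the denoising step can later be fused into the backbone's own parameters at no inference cost.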
Summary: This manuscript proposes a novel denoising model for representation learning and takes person re-identification as a benchmark. It unifies the frameworks of feature extraction and feature denoising, where the former progressively embeds features from low-level to high-level, and the latter recursively denoises features step-by-step. Besides, a FEFDFA is proposed to fuse feature extraction and denoising in a single backbone without changing its structure or adding extra runtime latency. Experiments on ReID, large-scale and fine-grained classification tasks show its effectiveness. Strengths: 1. The idea of unifying feature extraction and feature denoising in a single backbone without changing its structure is interesting. As far as I know, this is the first time I have seen this idea. 2. The computation-free and label-free characteristics are promising. The theoretical analysis seems precise and correct. 3. Experiments on 3 typical tasks and 9 datasets of representation learning (retrieval, classification, fine-grained classification) are sufficient and extensive. Weaknesses: 1. The proposed "Feature Extraction and Feature Denoising Fusion Algorithm" is somewhat similar to reparameterization; please clarify their difference. 2. Its application to the Transformer series is well analyzed; it would be better if performance on the CNN series were shown. 3. Representation learning is a wide and foundational concept; its applications to more downstream tasks, such as detection, segmentation and even generation, could be analyzed in the future. 4. The hyper-parameter analysis is missing. 5. This method seems to need extra training steps; what if the baseline methods are trained under the same steps? Technical Quality: 3 Clarity: 3 Questions for Authors: For questions, please see the weaknesses in the above section. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of the proposed method.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: **Clarify the difference between the proposed FEFDFA and reparameterization.** A1: Thank you for your insightful comment. We appreciate the opportunity to clarify the differences between our "Feature Extraction and Feature Denoising Fusion Algorithm" (FEFDFA) and reparameterization. - Reparameterization: Reparameterization typically involves restructuring the parameters of a model to facilitate more efficient training. It is widely used in variational autoencoders (VAEs) to allow for gradient-based optimization of stochastic variables. - Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA): Our FEFDFA is designed to integrate feature extraction and denoising within the same backbone. Each embedding layer in the backbone serves a dual purpose: extracting features and performing denoising simultaneously. By deriving the formula for diffusion model sampling, the denoising layer parameters are fused with the feature layer parameters, so that the model has no additional time cost in the inference stage. &emsp; Q2: **It would be better if performance on the CNN series were shown.** A2: Thanks for your valuable feedback. Our proposed method is scalable to various baselines (including CNNs, see Table 5 and line seven of Table 6 for details) and downstream tasks, including CNN series (e.g. ResNet50), Transformer series (e.g. ViT, Vmamba), Person Re-ID (MSMT, Duke), Vehicle Re-ID (VehicleID), and Image Classification (ImageNet, CUB). The experimental results in Section 4.4 of the main text and Appendix C demonstrate that our method significantly improves these tasks. &emsp; Q3: **Representation learning applications on more downstream tasks.** A3: Thanks for your valuable suggestion. Representation learning is a broad and foundational concept with potential applications across various downstream tasks. Due to the limited time, we extend our proposed FEFDFA to the object detection task with Mask-RCNN [2] as a baseline and COCO [1] as a benchmark.
Experimental results show that ours carries stable improvements of 1.1%-1.6%. Please see the table below for details. We will extend ours to more downstream tasks, including segmentation and generation, in the future.

| **MaskRCNN** | **bbox_mAP** | **bbox_mAP_50** | **bbox_mAP_75** | **mask AP** |
|--------------|:--------------:|:-----------------:|:-----------------:|:-------------:|
| Swin-T | 0.427 | 0.652 | 0.468 | 0.393 |
| Swin-T+FEFDFA | 0.443 (↑1.6%) | 0.671 | 0.486 | 0.405 |
| Swin-S | 0.482 | 0.698 | 0.528 | 0.432 |
| Swin-S+FEFDFA | 0.493 (↑1.1%) | 0.709 | 0.540 | 0.439 |

Q4: **The hyper-parameter analysis is missing.** A4: Thanks for your reminder. We analyze hyperparameters in the model, including important parameters such as $\beta_t$ (Line178) and the diffusion step size T (Line73). The results of the hyper-parameter analysis will be included in the revised manuscript, along with discussions on how different settings affect the model's performance. We use ViT-Small as the backbone and conduct experiments on the DukeMTMC dataset with mAP as the evaluation metric. The experimental results are as follows:

| **$\beta_t$** | **[1e-3, 0.02]** | **[1e-4, 0.02]** | **[1e-5, 0.02]** | **[1e-6, 0.02]** |
|:--------------:|:--------------:|:-----------------:|:-----------------:|:-------------:|
| mAP | 81.15 | 81.22 | 81.21 | 81.13 |

| **T** | **100** | **500** | **1000** | **2000** | **5000** |
|:------:|:------:|:-------:|:-------:|:-------:|:-------:|
| mAP | 80.92 | 81.15 | 81.22 | 81.19 | 80.98 |

Q5: **What if the baseline methods are trained under the same steps?** A5: Thank you for your question. We conduct experiments to address this concern. We train the baseline methods using the same number of steps as our proposed method. However, we do not observe any significant performance improvements in the baseline methods. We will include these experimental results and detailed analyses in the revised manuscript to support this explanation.
Thank you for your valuable feedback. The experimental results are as follows:

| epoch | 120 | 160 | 200 | 240 |
|:-------|:------------:|:------------:|:------------:|:----------:|
| Baseline | 80.38% | 80.39% | 80.38% | 80.36% |
| Ours | 80.38% | 80.84% | 81.20% | 81.22% |

where the original baseline was trained for 120 epochs. Therefore, when the epoch is 120, our method did not participate in fine-tuning. When the epoch is 160, our method fine-tunes for another 40 epochs based on the pre-trained parameters of the baseline, and so on. [1] T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollar, and C. L. Zitnick. Microsoft COCO: Common objects in context. In ECCV, 2014. [2] He, Kaiming, et al. "Mask R-CNN." Proceedings of the IEEE international conference on computer vision. 2017. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback. After reading it, all of my questions have been addressed. I also read the comments from other reviewers and noticed an extra point, "minimal gains on TransReID". The authors claim a 1.1-1.6% extra gain based on Mask-RCNN and a 0.6-0.8% extra gain based on the latest state-of-the-art CLIP-ReID. I think this is a satisfying improvement. I like its novelty, concision, and its advantages of no extra latency, stable improvements and scalability to many downstream tasks. In summary, I keep my original rating (strong accept). Looking forward to seeing its application to more tasks. --- Reply to Comment 1.1.1: Title: Response to reviewer's comment Comment: We thank the reviewer for the kind comment. We will keep improving the proposed method and apply it to more downstream tasks in the future.
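The "no additional time cost in the inference stage" fusion described in A1 above can be illustrated with a small numpy sketch. This assumes, for illustration only, that both the embedding layer and the denoising layer are plain affine maps; the exact fusion formula in the paper is derived from the diffusion sampling equations and may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 8
W, b = rng.normal(size=(d, d)), rng.normal(size=d)                  # embedding layer
Wd, bd = 0.1 * rng.normal(size=(d, d)), 0.1 * rng.normal(size=d)    # denoising layer

def two_pass(x):
    """Training-time view: extract a feature, predict its noise, subtract."""
    feat = x @ W + b
    noise = feat @ Wd + bd
    return feat - noise

# Offline fusion: feat - (feat @ Wd + bd) = feat @ (I - Wd) - bd,
# so the composition is itself one affine layer with fused parameters.
W_fused = W @ (np.eye(d) - Wd)
b_fused = b @ (np.eye(d) - Wd) - bd

x = rng.normal(size=(3, d))
fused_out = x @ W_fused + b_fused   # single layer, no extra inference cost
```

The fused single layer reproduces the two-pass extract-then-denoise output exactly, which is the sense in which the denoising branch is "computation-free" at test time.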
Summary: This paper proposes a new method, the Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA), which utilizes the denoising ability of diffusion models to denoise the features in the feature extraction layer, and fuses the parameters of the denoising layer with those of the feature extraction layer through parameter fusion, further improving retrieval accuracy without incurring additional computational costs. The effectiveness of the FEFDFA method has been validated on multiple common image discrimination task datasets. Strengths: 1. The article structure is complete and the writing is generally clear. 2. Some experimental results seem to be good. Weaknesses: 1. In fact, the intermediate layer features of a pre-trained diffusion model can be used directly for downstream tasks such as discrimination [1][2][3]. The authors need to enrich the Related Work. 2. Line144-146, no evidence is provided to support the proposed hypothesis. 3. The authors treat the backbone as a series of denoising layers, so the training loss in Eq. (11) should be the sum of the MSE losses of each denoising layer. 4. The statement is inconsistent. Line172-173, "we freeze the original parameters and only trained the FEFDFA." Line509-511, "the parameters of the FEFDFA and baseline were trained alternately." In addition, unless the baseline and FEFDFA are trained together, Eq. (12) is incorrect. [1] Mukhopadhyay, Soumik, et al. "Diffusion models beat gans on image classification." arXiv preprint arXiv:2307.08702 (2023). [2] Li, Alexander C., et al. "Your diffusion model is secretly a zero-shot classifier." ICCV 2023. [3] Baranchuk, Dmitry, et al. "Label-Efficient Semantic Segmentation with Diffusion Models." ICLR 2022. Technical Quality: 2 Clarity: 3 Questions for Authors: same as the weaknesses. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have addressed the limitations.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: **The authors need to enrich the Related Work.** A1: We thank you for the valuable suggestions. Our proposed DenoiseReID is different from the related works [1-3]. The related works [1-3] apply the intermediate layer features of an existing pre-trained diffusion model to improve a downstream task. Ours applies the denoising algorithm to any existing pre-trained downstream model (e.g. Person ReID in Table 2, Vehicle ReID in Table 5, Image Classification in Table 6). Compared with the related works [1-3], ours is more scalable (suitable for more downstream tasks) and lightweight (computation-free). We will carefully review and add them to the revised manuscript. &emsp; Q2: **Line144-146, no evidence provided to support the proposed hypothesis.** A2: Thanks for the kind comment. The experimental results of Table 3 (FEFDUF) empirically demonstrate the hypothesis that "the features obtained by backbone extraction are noisy". FEFDUF includes a well-trained person ReID model and a denoise model which takes the feature of the ReID model as input and predicts its noise. During the training stage, the ReID model is always frozen, and the denoise model is trained like DDPM (i.e. adding Gaussian noise to a feature, taking the noised feature as input and predicting the original Gaussian noise). During the inference stage, given images, we first extract feature A with the well-trained ReID model, then put the feature into the denoise module to predict its noise A'. Finally, we found that A-A' performs better than A. The training process DOES NOT use any labels because DDPM is unsupervised. This experiment shows that simply denoising features contributes to the improvement, partially supporting the view that "the features obtained by backbone extraction are noisy". We will add the analysis above to the revised manuscript. &emsp; Q3: **The training loss in Eq. (11) should be the sum of the MSE losses of each denoising layer.** A3: Thanks for the valuable reminder.
This is a writing error. In the code implementation, the loss during the training of the denoising module is the sum of the MSE losses of each denoising layer. The code will be released if the manuscript is accepted. We will polish the equation to be: $$ Loss_p = \sum_{i=1}^{N} \left\| \epsilon_i - D_{\theta_i} (X_{t_i}, t_i) \right\| $$ Q4: **The statement is inconsistent.** A4: Thanks for your comments. There are THREE questions. Let's discuss them ONE BY ONE: - (1) "we freeze the original parameters and only train the FEFDFA." This is our basic training setting. It only trains the parameters of FEFDFA with the denoising loss and (optional) ReID loss. In all unsupervised settings, we use this training setting (without ReID loss). It makes significant improvements. Specifically, from the experimental results in the second and first lines of Table 1, it can be observed that the performance of the unsupervised FEFDFA method has significantly improved compared to the baseline. &emsp; - (2) "the parameters of the FEFDFA and baseline were trained alternately." Alternate training is a TRAINING TRICK for the supervised setting. Specifically, we (a) freeze the original parameters and train FEFDFA with denoising and ReID losses, then (b) merge the parameters of FEFDFA into the original parameters and train the latest original parameters with ReID loss only, and (c) repeat (a) and (b). In all supervised settings, we use this trick. It carries more improvement; see the experimental results in the second and third lines of Table 1. Note that step (a) still carries improvements in the supervised setting. &emsp; - (3) "In addition, unless the baseline and FEFDFA are trained together, Eq. (12) is incorrect." Eq. (12) is CORRECT even if only the parameters of FEFDFA are trained and the original parameters are frozen. Both the ReID loss and the denoising losses can be used to optimize the parameters of FEFDFA. This is equivalent to (2)-(a) above.
Similar ideas of "using a task-related loss to supervise the denoising module" can be found in DiffDet [4] and DiffSeg [5]. [1] Mukhopadhyay, Soumik, et al. "Diffusion models beat gans on image classification." arXiv preprint arXiv:2307.08702 (2023). [2] Li, Alexander C., et al. "Your diffusion model is secretly a zero-shot classifier." ICCV 2023. [3] Baranchuk, Dmitry, et al. "Label-Efficient Semantic Segmentation with Diffusion Models." ICLR 2022. [4] Chen, Shoufa, et al. "Diffusiondet: Diffusion model for object detection." Proceedings of the IEEE/CVF international conference on computer vision. 2023. [5] Tian, Junjiao, et al. "Diffuse Attend and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' feedback. I am still concerned about the following issues. For Q1, I think the motivation of this paper is the same as that of [1, 2, 3], all of which apply denoising features to downstream tasks. I think it is more reasonable to directly use the features of the pre-trained diffusion model. For Q2, the results in Table 3 cannot provide strong evidence for the hypothesis stated by the authors in Line144-146 of the paper. Since the ReID loss is used to optimize the parameters of FEFDFA and the parameters of the FEFDFA are fused with the backbone in the end, I suspect that the performance improvement in Table 3 is more like a data augmentation effect brought about by adding noise, rather than denoising. For Q4, unless the authors provide relevant theory or previous works to prove that the denoising loss can be used with other task losses to optimize diffusion models or visual task models, I believe that the training loss Eq.(12) is questionable. Denoising in diffusion is fundamentally different from other downstream visual tasks as their optimization objectives are inconsistent.
The relevant papers listed by the authors cannot provide evidence for the rebuttal. DiffDet [4] applies the idea of denoising to the bbox regression in object detection. DiffSeg [5] directly uses the intermediate attention layer features of Stable Diffusion for the segmentation task, which is similar to the methods [1,2,3] mentioned in Q1 that directly use the intermediate layer features of a pre-trained diffusion model. [1] Mukhopadhyay, Soumik, et al. "Diffusion models beat gans on image classification." arXiv preprint arXiv:2307.08702 (2023). [2] Li, Alexander C., et al. "Your diffusion model is secretly a zero-shot classifier." ICCV 2023. [3] Baranchuk, Dmitry, et al. "Label-Efficient Semantic Segmentation with Diffusion Models." ICLR 2022. [4] Chen, Shoufa, et al. "Diffusiondet: Diffusion model for object detection." ICCV 2023. [5] Tian, Junjiao, et al. "Diffuse Attend and Segment: Unsupervised Zero-Shot Segmentation using Stable Diffusion." CVPR 2024. --- Reply to Comment 1.1.1: Title: Response to Comment by Reviewer 4MBF Comment: We thank the reviewer's detailed comment. It seems that there are still some misunderstandings. Please allow us to explain them again. The responses are listed below; feel free to discuss further. **Q: "For Q2"** A: As we clarified in the rebuttal, we DO NOT use any labels (i.e. the ReID loss is NOT used) in this experiment. The improvement should come from the denoising loss. Based on this observation, we "suppose that in the inference stage, the features obtained by backbone extraction are noisy". **Q: "For Q4"** A: As far as we know, "unifying feature extraction and feature denoising" is pioneeringly proposed in this manuscript. Thus, we cannot provide previous works to prove "why denoising loss and task losses can be used together". However, we show the fake code of our proposed method, hoping this can resolve your misunderstanding.
Please pay attention to the line "denoised_feats = feats - self.denoise_layer(x)", which we think may have caused your misunderstanding.

```
# Fake code of the proposed algorithm.
# Please note: this code targets understanding "why reid_loss and denoise_loss
# can be used together". Only the critical logic is shown; details may be
# missing or inaccurate.

class DenoiseLinear:
    def __init__(self):
        self.linear = nn.Linear()
        self.denoise_layer = nn.Linear()
        set_require_grad_false(self.linear)

    def forward_train(self, x):
        # basic branch
        feats = self.linear(x)
        # denoising branch; this process is the same as in DDPM
        gt_noise = sample_from_gaussian()
        pred_noise = self.denoise_layer(x + gt_noise)
        denoise_loss = l1_or_l2_loss(gt_noise, pred_noise)
        # tip: to optimize the denoise layer with reid_loss,
        # return denoised_feats, NOT feats
        denoised_feats = feats - self.denoise_layer(x)
        return denoised_feats, denoise_loss

    def forward_test(self, x):
        w, b = fuse_weight(self.linear, self.denoise_layer)  # computed offline
        denoised_feats = w * x + b
        return denoised_feats


# A toy ReID model with 2 linear layers
class ReIDModel:
    def __init__(self):
        self.linear1 = DenoiseLinear()
        self.linear2 = DenoiseLinear()

    # unsupervised manner
    def train_without_reid_loss(self, x):
        feat1, denoise_loss1 = self.linear1.forward_train(x)
        feat2, denoise_loss2 = self.linear2.forward_train(feat1)
        return feat2, denoise_loss1 + denoise_loss2

    # supervised manner
    def train_with_reid_loss(self, x, y):
        feat1, denoise_loss1 = self.linear1.forward_train(x)
        feat2, denoise_loss2 = self.linear2.forward_train(feat1)
        reid_loss = compute_reid_loss(feat2, y)
        return feat2, denoise_loss1 + denoise_loss2 + reid_loss

    def forward_test(self, x):
        feat1 = self.linear1.forward_test(x)
        feat2 = self.linear2.forward_test(feat1)
        return feat2
```

**Q: "For Q1"** A: Our proposed method is very different from the related works [1-3].
We summarize them in the table below:

| | Usage | Scalability | Improvement and Latency |
| :-----------------: | ------ | ------ | ------ |
| Ours | Improves existing vision models, which could be from many vision tasks. | [better] ONE method for MANY vision tasks without customizing the implementation and hyper-parameters. | [better] Given a model of a specific vision task (e.g. CLIP-ReID), achieves STABLE improvement with NO extra latency compared to the given model. |
| Related works [1-3] | Customize a model for a specific vision task based on a well-trained, strong diffusion model. | ONE method for ONE vision task. Needs a customized implementation and hyper-parameters for specific tasks. | Given a diffusion model (e.g. StableDiffusionXL), improvement and latency depend on how strong the diffusion model is and the details of the customized implementation for specific tasks. |
Summary: This paper proposes a novel denoising model called DenoiseReID, designed to enhance representation learning in person re-identification (ReID) tasks. This approach combines traditional denoising processes with feature extraction through a feature extraction and denoising fusion algorithm (FEFDFA) that incurs no additional computational cost. It incrementally improves the discriminability of features without the need for labels. Strengths: The experimental results demonstrate that DenoiseReID achieves stable performance improvements across multiple ReID datasets. Furthermore, it can be extended to large-scale image classification tasks such as ImageNet, CUB200, Oxford-Pet, and Flowers datasets. Weaknesses: 1. In the comparative experiments, the authors missed an opportunity to benchmark their method against the latest advancements like CLIP-ReID, restricting comparisons solely to TransReID. 2. When contrasted with TransReID, this approach shows a moderate enhancement in performance. Technical Quality: 3 Clarity: 3 Questions for Authors: I am particularly concerned about the performance comparison in this paper. Is it necessary to use diffusion models for conducting ReID tasks, and what advantages do they offer compared to CNNs or ViTs? Currently, it seems that their performance improvement is rather modest. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: **The authors missed an opportunity to benchmark their method against the latest advancements like CLIP-ReID.**

A1: Thanks for your valuable feedback. CLIP is a very strong vision-text encoder, trained on 400 million image-text pairs. Building on CLIP, CLIP-ReID pioneered the adaptation of CLIP, a zero-shot classifier, to ReID, a fine-grained image retrieval task. Our proposed method is model-free, so it can be easily applied to CLIP-ReID without any modification. The experiments with CLIP-ReID show stable improvement on three datasets (DukeMTMC, MSMT, Market-1501). The code will be released if the manuscript is accepted. We will add the experimental results to the revised manuscript. The results are as follows, with mAP as the evaluation metric:

| Dataset | DukeMTMC | MSMT | Market-1501 |
|:--------|:--------:|:----:|:-----------:|
| CLIP-ReID | 83.1% | 75.8% | 90.5% |
| CLIP-ReID+FEFDFA | 83.9% (↑0.8%) | 76.5% (↑0.7%) | 91.1% (↑0.6%) |

Q2: **When contrasted with TransReID, this approach shows a moderate enhancement in performance.**

A2: Thanks for your kind comment. Our proposed method is test-time computation-free (a FREE LUNCH!) and scalable to various baselines and downstream tasks, proven effective on CNN series (e.g. ResNet50), Transformer series (e.g. ViT, VMamba), Person Re-ID (MSMT, Duke), Vehicle Re-ID (VehicleID), and Image Classification (ImageNet, CUB). For scalability, we DO NOT hack it for particular backbones or tasks. With the same hyper-parameters, ours achieves consistent and stable improvement on ALL the backbones and tasks above. For example, on top of the previous state-of-the-art CLIP-ReID, our proposed method achieves extra improvements of 0.8%, 0.7%, and 0.6% on Duke, MSMT, and Market, respectively.

Q3: **Is it necessary to use diffusion models for conducting ReID tasks, and what advantages do they offer compared to CNNs or ViTs?**

A3: Thanks for your kind comment.
In this work, we use the ViT series for conducting ReID tasks. Our proposed Feature Extraction and Feature Denoising Fusion Algorithm (FEFDFA) fuses the idea of diffusion into the backbone (e.g. ResNet, ViT). We treat each block of the backbone as a denoising layer and utilize the denoising ability of the diffusion model to denoise at different feature extraction levels. During the inference phase, the denoising layer parameters are fused with the backbone network parameters without incurring additional inference time cost. --- Rebuttal Comment 1.1: Comment: After carefully reviewing the author's rebuttal, I believe that the concerns I previously raised have been thoroughly addressed. I have decided to accept this paper for the following main reasons: This paper presents a novel representation learning denoising model for person re-identification, named DenoiseReID. By integrating feature extraction and denoising techniques, the model significantly enhances feature discriminability. The approach is both theoretically elegant and practically feasible. The author has clearly articulated the motivation and design details of the DenoiseReID method in both the paper and the rebuttal. Compared to existing methods, DenoiseReID demonstrates outstanding performance across four benchmark datasets, showing significant advantages. Given these reasons, along with the author's effective response to the concerns raised by other reviewers, I have decided to revise my initial rating to 'Strong Accept'. --- Reply to Comment 1.1.1: Title: Response to reviewer's comment Comment: We thank the reviewer for the kind comment. We will keep improving the proposed method and apply it to more downstream tasks in the future.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
What do Graph Neural Networks learn? Insights from Tropical Geometry
Accept (poster)
Summary: *Disclaimer*: I did not check proofs carefully and did not read the appendix. I am also by no means an expert in tropical geometry. This paper studies expressivity of message passing neural networks (MPNNs) through the lens of tropical geometry. The paper has results on equivalences between classes of piecewise linear polynomials, tropical geometric functions, and MPNNs. They also study the expressivity of MPNNs by counting the number of linear regions and expressing types of tropical functions as MPNNs. Strengths: Proposition 2 and theorems 3 and 4 lay out the main results regarding the expressiveness and the number of linear regions expressible by a MPNN. For theorems 3 and 4, these results have dependence on various quantities that depend on both the graph input and the structure of the MPNN. Nonetheless, a reader can plug these in and get explicit outputs which is nice. Later sections give nice corollaries and extensions of these main results to classification boundaries and expressiveness in the number of parameters. The paper is largely a theoretical work. Proofs are given in the appendix, but I did not check these. Some of the theory is tied to specific architectures like GraphSAGE and GIN. Weaknesses: Overall, the math of this paper is something worth publishing, but I think the paper needs to be placed in context and written better. Especially for someone like me who is not in this particular sub-topic of tropical geometry, I found the paper a rather frustrating read. I wish the authors took a big step back and gave more background into this area and discussed the main results a bit more from a higher level. Many questions exist from reading this that I couldn’t get from just reading the mathematical statements and the main text: - What does understanding geometric complexity tell us? Does it relate to generalization bounds? When do we want to maximize or minimize this quantity? Or is this paper only concerned with expressiveness? 
- How far apart are these bounds from practice? - Do the expressiveness bounds tell us anything new about GNNs? Are there important classes of functions that it now excludes or includes which are not given by e.g. the WL hierarchy? From a rhetorical and presentation point of view, I think the authors could benefit greatly from a clearer presentation of mathematical results, a focusing on the main statements, and shortening of key portions of text. I try to outline this in detail in what follows. Much of this is of course my opinion and I am welcome to criticism and feedback. Proposition 1: This statement seems largely intuitive but is missing details. Can the authors formally define what they mean by equivalence? I.e. the function classes are equal? Do we require the width or number of parameters to be arbitrarily large for this to be true? Proposition 2: - This statement is hard to parse as it requires understanding objects like "the maximum convex degree across restrictions of f to different m′-dimensional affine subspaces…” Can the authors explain the importance of this proposition better? All that is stated about its importance is the recursion like property that lets us relate complexity of subsequent layers. - If this proposition is used to only prove subsequent theorems, I would recommend to remove it here and place in appendix. Again, I have read this subsection two or three times and still cannot parse completely what is going on and why it is necessary to be in the main paper. Theorem 3: - The notation of $\mathcal{N}(\chi)$ is a bit confusing here since it does not state the number of linear regions for a function $\chi$, but instead the maximum number of regions for any possible function $\chi$. Perhaps I am missing something though but I feel this notation should be changed if correct to have a maximum in it to be clearer. - The text before the theorem, especially lines 167 to 169 are hard to parse. 
This seems to be giving proof intuition though I could not really follow. E.g. what is a max-out layer? What do the authors mean by absorbed in? I don’t think it’s helpful to the reader to refer them to an algorithm in the appendix to understand what appears to be one of the main statements in the paper. - Can the authors define the notation $n^{t,l}$ before this and preferably even in the theorem statement itself. - Should the sum in the right hand side in dark green go from $0$ to $d_T$? If correct as is, how does one handle situations where $d_0>d_T$? Also can’t we simplify this part by using the binomial theorem? - This theorem I think could use some discussion and clarification. It seems that the bound depends rather strongly on the input itself (e.g. number of nodes). This is in contrast with standard fully connected nets where I don’t see a bound depending on the input since this is fixed. - Related to the above, how tight is this bound for a random network or one that is found on training? Theorem 4: - The integer weight restriction as the authors point out is largely a technical point and I would just state this as a proof technique rather than a whole paragraph to describe it at the start of the section. It is possible I missed the importance of this point here and perhaps it has actual implications or drawbacks that need to be stressed here (I don’t see anything serious though). - I wish the authors would spend more time explaining the theorems. Similar concerns exist here as in theorem 2. Details are deferred to appendix to explain what is happening in the theorem. Parsing the theorem requires understanding a rather complicated formula. Can we use some big O notation perhaps to clean this up? - See also notational concerns for theorem 3 that also apply here. Section 4.4: - What is PWLM? I assume piecewise linear map? This is not defined though. - From line 222 and on, I was just lost. 
Let me try to explain why and perhaps the authors can clear up this section. To begin, the authors say “our idea is to construct a clique (i.e., a fully-connected graph) with m nodes…” What is the purpose of this and why is it even constructed? Then there is a discussion of local and global comparison which I could simply not follow. Words like compare, selection gadget, etc. are used that I do not know what they mean. - Proposition 6 seems like a simple statement, and I wish the authors would explain more why this is nontrivial or useful to know. Section 4.5: - These results seem to be corollaries of theorem 3 and 4 from earlier? Perhaps I am missing something? Is the important part here that the boundary "is contained in the tropical hypersurface of a specific tropical polynomial”? Conclusion, broader impact, and limitations are considerably short. This paper, especially given it is using tools that may be outside the wheelhouse of the community would really benefit from a richer discussion of limitations and future work. Literature comments: There are other expressivity hierarchies that the authors could look into which appear more relevant than the WL hierarchy. These include equivariant polynomials (see Puny et al. Equivariant Polynomials for Graph Neural Networks) and homomorphism counts (see Chen et al. Can graph neural networks count substructures). Smaller comments: - Proposition 2 is hard to parse without reading the notation in prior sections and before it. I would prefer more that the most important notation be defined within the formal proposition statement so that it is easier to parse. - Notation $\mathcal{N}$ is overloaded as defining both linear regions and neighborhood. - Related to the above, I would request the authors more formally define $\mathcal{N}$ as a number of linear regions in its own definition statement with some discussion about the definition since it seems to be a crucial quantity. - Line 225: what is “noval”? 
Technical Quality: 3 Clarity: 1 Questions for Authors: See prior sections. Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 2 Limitations: See prior sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for a detailed review and several excellent suggestions, all of which we will act on. Please see our response below. **(Geometric complexity (GC) and generalization)** GC characterizes the complexity of a neural network to approximate functions. In particular, a high value indicates high expressivity of the architecture (since, each linear region could potentially be assigned any of the class labels, independent of the other regions). While expressivity is desirable, it can be at odds with generalization [1], whence we must strive for a good tradeoff. GC thus also has implications for generalization; e.g., see [2] for a bound in the context of gradient flows. [1] Garg, Jegelka, and Jaakkola. Generalization and representational limits of graph neural networks, 2020. [2] Safran, Vardi, and Lee. On the effective number of linear regions in shallow univariate reLU networks: Convergence guarantees and implicit bias, 2022. `How far apart are these bounds from practice?` In general, good estimation of GC is non-trivial. That said, we recover the bounds for GCNs [3] as a special case, and [3] had verified empirically that complexity was indeed close to the lower bound. [3] Chen, Wang, and Xiong. Lower and Upper Bounds for Numbers of Linear Regions of Graph Convolutional Networks, 2022. **(New results about GNNs vis-a-vis WL)** Yes, the WL hierarchy implicitly assumes that all updates are injective, which does not hold with ReLU activations. Thus, WL fails to characterize the exact class of functions that ReLU MPNNs can represent, which we show to be TRSMs. Moreover, as we note in Remark 2, we provide a novel insight that max is more expressive than sum wrt GC for ReLU MPNNs (in contrast to a result in [4] where under some injectivity assumptions, sum is more expressive in distinguishing graphs). [4] Xu, Hu, Leskovec, and Jegelka. How powerful are graph neural networks, 2019. 
**(Proposition 1)** Yes, equivalence here means that the function classes are equal. We do not require the width or number of parameters to be arbitrarily large: any TRSM can be realised with explicit ReLU MPNN/FNN architectures with a fixed number of layers, width, and parameters (Table 1). **(Proposition 2)** Indeed, it allows us to analyze each component of each layer, and combine them to get a bound on the overall GC. This is important, as explained in Section 4.3, as the choice of aggregation will have an impact on the bound. Moreover, it makes clear the contribution of each component, and captures how modifying any component will affect the bound. **(Theorem 3)** By $\mathcal{N}(\chi)$ we mean the number of linear regions for a particular ReLU MPNN $\chi$, or equivalently a particular continuous piecewise linear map. We do not take the maximum over all possible functions. A max-out layer is one where the activation function is the max of the inputs [5]. By "absorbed in" we mean that the network can be extended with another layer of $\Phi$. We apologise for having to refer to the Appendix due to space constraints. Indeed, we wanted to give some intuition for the proofs; however, that requires mentioning the big FNNs $\Phi^{(t)}_1, \Phi^{(t)}_2$. We'll sketch an outline in the main text based on your feedback. We will also define $n^{t, l}$ as suggested. We believe the formula is correct as is: it goes from $0$ to $d_0$. It builds on the "hyperplane arrangement" and "space folding" arguments from [5] and Zaslavsky (1975), namely that an arrangement of $n_1$ hyperplanes in $\mathbb{R}^{n_0}$ has at most $\sum_{i=0}^{n_0} \binom{n_1}{i}$ regions. We however need to assume that all $n_{1}^{t, l}, n_{2}^{t, l} \geq d_0$ (in particular, $d_T \geq d_0$). Please note that the bound for fully connected nets does depend on the dimension of the input. In our case the dimension of the initial embedding is $|V|d_0$, which explains why the number of nodes shows up in our result.
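The hyperplane-arrangement argument above can be checked numerically. The following toy sketch (ours, not from the paper) counts the distinct ReLU activation patterns of a random one-hidden-layer network on a dense 2D grid; each pattern corresponds to one linear region, and the count respects Zaslavsky's bound:

```python
import math
import numpy as np

rng = np.random.default_rng(0)
n0, n1 = 2, 4  # input dimension, number of hidden ReLU units

W = rng.normal(size=(n1, n0))
b = rng.normal(size=n1)

# Each distinct ReLU activation pattern of x -> relu(W x + b) corresponds
# to one linear region, so we estimate the region count by enumerating
# patterns over a dense grid of inputs.
axis = np.linspace(-10, 10, 400)
grid = np.stack(np.meshgrid(axis, axis), axis=-1).reshape(-1, n0)
patterns = {tuple(row) for row in (grid @ W.T + b > 0)}

# Zaslavsky (1975): an arrangement of n1 hyperplanes in R^{n0} has at
# most sum_{i=0}^{n0} C(n1, i) regions.
bound = sum(math.comb(n1, i) for i in range(n0 + 1))

assert len(patterns) <= bound
```

Grid sampling can miss very small regions, so the empirical count is itself a lower estimate; for generic hyperplanes in the plane the bound $1 + 4 + 6 = 11$ is attained.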
We do not know the tightness of the bound for random networks - it is an interesting open problem. [5] Montufar, Pascanu, Cho, and Bengio. On the Number of Linear Regions of Deep Neural Networks, 2014. **(Theorem 4)** Indeed the assumption of integer weight is a technical condition and can be relegated to Appendix. This would allow us to accommodate other clarifications based on your suggestions. Apologies - we wanted to be explicit about the role of aggregation and update steps. We could use Stirling's formula to clean up a bit. We will see what we can do about this, and fix any other notational concerns. **(Section 4.4)** Thanks for catching this: it should be CPLM (continuous piecewise linear map) instead. Sorry for the rather rushed description on gadgets and comparisons - we use these to devise new ReLU MPNN architectures that afford different tradeoffs wrt number of layers and/or parameters. Proposition 6 precisely elucidates these tradeoffs using Table 1. **(Section 4.5)** Yes, these are indeed corollaries, and the important part is that the boundary is contained in the tropical hypersurface of a specific tropical polynomial. Tropical hypersurface is a well-studied object in tropical geometry and this allows to study the decision boundary from multiple perspectives. **(Conclusion)** We hear you, and will do as suggested. In particular, we will point out that some of the technical conditions could be possibly relaxed and a detailed empirical study with novel ReLU MPNN architectures for different kinds of graph inputs (e.g., random graphs) would be helpful. **(Literature)** Thank you for drawing our attention to these hierarchies. We will be sure to position the mentioned influential works by Puny et al and Chen et al. We'll take care of the smaller comments about notation and fixing the typo ("noval" should be replaced by novel). We are grateful for your extremely valuable inputs, and will act on them. 
Hope all your comments are satisfactorily addressed - we're happy to engage further as well. --- Rebuttal Comment 1.1: Comment: I am at times left more confused with the authors’ answers; so much so that I worry the authors’ results are incomplete and/or wrong. I think the authors were space limited in their responses so let me ask them to expand their response. Let me give a few examples of my confusion and concerns: > [Theorem 3]: By $\mathcal{N}(\chi)$ we mean the number of linear region for a particular ReLU MPNN , or equivalently a particular continuous piecewise linear map. We do not take the maximum over all possible functions. This does not make sense. E.g., set all the weights of the neural network to 0 and there is only one linear region. The theorem cannot possibly hold then. > any TRSM can be realised with explicit ReLU MPNN/FNN architectures with fixed number of layers, width, and parameters (Table 1). ReLU FNN are universal so this cannot make sense that any ReLU MPNN architecture can be realized with fixed parameters just by a counting argument. There has to be some dependence of these on the problem parameters (e.g. input/output dimension). > GC characterizes the complexity of a neural network to approximate functions. In particular, a high value indicates high expressivity of the architecture (since, each linear region could potentially be assigned any of the class labels, independent of the other regions). While expressivity is desirable, it can be at odds with generalization [1], whence we must strive for a good tradeoff. I do not see the connection of generalization to geometric complexity which I assume refers to $\mathcal{N}$. This notion of complexity counts linear regions. Why should I believe this is anything more than a statement about expressivity? This is a worst-case statement as far as I can tell (bounds say at most how many regions there are) and bounds do not seem tight. 
For example, this quantity can grow exponentially with dimension for example. To argue otherwise for example: is there a nontrivial generalization bound that follows from this notion of complexity? ___ I would ask that the authors do the following both in the comments here (and update paper accordingly): - Give formal definitions of $\mathcal{N}(\chi)$ - the in-line description in the text now at line 154 does not make sense to me as mentioned above - Give formal definition of what the authors mean by equivalence of functions - State explicitly relations of geometric complexity and its relations to generalization ___ Separate from the examples above, in many places, I also could not follow the authors’ rebuttal and its relation to my comments and questions. I understand this may be due to space limitations, but since the comments are not space limited, I would ask the authors to go back and answer the questions/comments in full. --- Rebuttal 2: Title: Detailed comments for clarifications Comment: Thank you for the discussion, and apologies for any confusion. Indeed, we were hampered by the space constraints, which it seems led to some crucial misunderstandings. Below we provide detailed responses to address these. We will also update the paper accordingly. Let's first start with what we mean by the equivalence of functions. **Equivalence.** Consider the following sets of functions. $\mathcal{F}_{\text{ReLU MPNN}}$ : the set of functions represented by all ReLU MPNNs. $\mathcal{F}_{\text{ReLU FNN}}$ : the set of functions represented by all ReLU FNNs. $\mathcal{F}_{\text{CPLM}}$ : the set of all continuous piecewise linear maps. $\mathcal{F}_{\text{TRSM}}$ : the set of all tropical rational signomial maps. 
By equivalence, we mean that $\mathcal{F}_{\text{ReLU MPNN}} = \mathcal{F}_{\text{ReLU FNN}} = \mathcal{F}_{\text{CPLM}} = \mathcal{F}_{\text{TRSM}}$. In other words, $f: \mathbb{R}^{m} \to \mathbb{R}^{p}$ is a CPLM iff $f$ is a tropical rational signomial map, iff $f$ can be represented by a ReLU FNN $\nu: \mathbb{R}^{m} \to \mathbb{R}^{p}$, and iff $f$ can be represented by a ReLU MPNN $\chi: \mathbb{R}^{|V| \times d} \times \mathbb{R}^{|E| \times d'} \to \mathbb{R}^{|V| \times d_\text{out}} \times \mathbb{R}^{|E| \times d'_\text{out}}$, where $m = |V|d + |E|d'$ and $p = |V|d_\text{out} + |E|d'_\text{out}$. Now, with this equivalence, we can proceed to addressing the following: `ReLU FNN are universal so this cannot make sense that any ReLU MPNN architecture can be realized with fixed parameters just by a counting argument. There has to be some dependence of these on the problem parameters (e.g. input/output dimension).` Indeed, as Table 1 in the paper summarises, in order to learn any TRSM with specified input and output dimension, which consists of $r$ monomials, the complexity (in terms of number of layers and parameters) of the different ReLU FNN architectures as well as ReLU MPNN architectures depends on the problem parameters (such as input dimension as well as $r$). By 'fixed' we simply meant that we know the (respective) exact specifications for the proposed ReLU MPNN architectures that realize any TRSM with given input and output dimensions and $r$. In particular, we did not mean that by fixing a particular ReLU MPNN architecture, we can realize all TRSMs independent of input/output dimension or $r$. Please revisit Proposition 6 and Table 1 for the precise claim apropos of this discussion.
We next address the concern about $\mathcal{N}(f)$ and $\mathcal{N}(\chi)$, pertaining to the following comment: `This does not make sense. E.g., set all the weights of the neural network to 0 and there is only one linear region. The theorem cannot possibly hold then.` **Clarifying $\mathcal{N}(f)$ and $\mathcal{N}(\chi)$**: Apologies, the confusion here does seem to stem from our poor choice of (overloaded) notation, which we will fix as you suggested. For now, to make the argument precise, we first define $\mathcal{N}(f)$. For a continuous piecewise linear map (CPLM) $f: \mathbb{R}^{m} \to \mathbb{R}^{p}$, we define $\mathcal{N}(f)$ to be the least number of connected regions $C_i$ of $\mathbb{R}^{m}$ such that $f|_{C_i}$ is linear. Equivalently, following [1], we can also define $K = \mathcal{N}(f)$ as follows: $f: \mathbb{R}^{m} \to \mathbb{R}^{p}$ is a continuous piecewise linear map if $f$ is continuous and there exist a set $\{f_k : k \in \{1, \dots, K\}\}$ of affine functions and nonempty closed subsets $(\Omega_k)_{k=1}^K$ with pairwise disjoint interiors, satisfying $\bigcup_{i=1}^K \Omega_i = \mathbb{R}^m$ and $f|_{\Omega_k} = f_k$. [1] Alexis Goujon, Arian Etemadi and Michael Unser, On the Number of Regions of Piecewise Linear Neural Networks, 2023. Now, as we claimed above (and proved in the paper), any such CPLM $f$ can be realized by some ReLU MPNN $\chi$ with a particular setting of weights (parameters). However, if we change the weights, the function represented by $\chi$, as well as the number of linear regions, also changes correspondingly. The particular result gives a lower bound on the maximal number of regions of functions (obtained with different possible settings of weights) that can be represented by a ReLU MPNN. We state this bound below to make things precise: (Contd...) --- Rebuttal 3: Title: Continuation of the clarification thread... 
Comment: Consider any ReLU MPNN $\mathbb{R}^{|V| \times d_0} \to \mathbb{R}^{|V| \times d_T}$ such that (1) for $t = 1, \dots, T$, $\phi^{(t)}_1$ has $L^{(t)}_1$ layers, with intermediate dimension $n^{(t, l)}_1$ for the $l$-th layer; (2) for $t = 1, \dots, T$, $\phi^{(t)}_2$ has $L^{(t)}_2$ layers, with intermediate dimension $n^{(t, l)}_2$ for the $l$-th layer; (3) $t_0$ MPNN layers have max as the aggregation operator. Then, the maximal number of linear regions of functions computed by any such ReLU MPNN is lower bounded by $S^{t_0} \left ( \prod_{t=1}^{T-1} \left ( \prod_{l=1}^{L_1^{(t)}} \left \lfloor \frac{n^{t, l}_1}{d_0} \right \rfloor^{d_0} \prod_{l=1}^{L_2^{(t)}} \left\lfloor \frac{n^{t, l}_2}{d_0} \right\rfloor^{d_0} \right ) \right ) \left (\prod_{l=1}^{L^{(T)}-1} \left\lfloor \frac{n^{T, l}_1}{d_{0}} \right \rfloor^{d_0} \prod_{l=1}^{L^{(T)}-1} \left\lfloor \frac{n^{T, l}_2}{d_{0}} \right \rfloor^{d_0} \right ) \sum_{j=0}^{d_0} \binom{d_T}{j}$, where $S$ is the maximum degree of the input graph $G$. We will update the description of Theorem 3 as above to avoid any possible misinterpretations. Finally, we sketch below an argument that clarifies the connection of Geometric Complexity (GC) with generalization. We did not include this connection in the original submission, but provide it here based on your comments. **Geometric complexity and Generalization:** We claim that if the geometric complexity is upper bounded by $r$, then the VC-dimension is at most $r$. This immediately translates into a generalization bound (which can be loose, though, as we explain below) using a classical result from statistical learning theory for the binary classification setting. Here's a proof sketch. Since there are (at most) $r$ linear regions, we can select one instance from each linear region to obtain a set of (at most) $r$ instances. 
We can assign a label for each region independently of others, and moreover, this label can be any of the two possibilities. That is we can "shatter" this set of instances. Moreover, we cannot shatter $r+1$ (or more) instances. This follows since, otherwise, by pigeonhole principle, at least one region will have two instances. As a result, we cannot get perfect classification for this region when these two instances have different labels. Thus, the VC-dimension cannot exceed $r$ (it can, however, be lower than $r$ since the upper bound for the geometric complexity can be loose). We will add a discussion in the paper to point this out as well. Thanks for your patience and for your constructive remarks that have helped us elucidate some subtle aspects of this work. We hope that this clarification thread has satisfactorily addressed your concerns. Please let us know if there's something else we can address or elaborate. --- Rebuttal Comment 3.1: Comment: Thank you for your clarifications. I appreciate your attempts to render all these equations in markdown. No problem that they did not all render correctly. I still have concerns about your answers. **On the definition of complexity:** Your definition of $\mathcal{N}(f)$ does not seem to agree with the later statements. You state that the input of $\mathcal{N}$ is a continuous piecewise linear map (CPLM), but later on, your definitions apply to a family of such CPLMs since you take a maximum over all possible weights of the function. So the input to $\mathcal{N}$ should be a family of CPLMs. The way you phrase it here and in the paper is very confusing and simply not correct as far as I can tell. **On VC dimension:** The VC dimension you describe bounded by $r$ is correct but incredibly loose. This bound can grow exponentially with the dimension or number of parameters. In contrast, standard VC bounds for neural networks of bounded width are typically on the order of the number of parameters. E.g. 
see work by Bartlett et al., Nearly-tight VC-dimension and pseudodimension bounds for piecewise linear neural networks. So I am still very unconvinced this is a more informative notion of generalization. --- Rebuttal 4: Comment: Greetings, and thank you for continuing the discussion - we're grateful for your engagement. We're willing to update/qualify notation pertaining to $\mathcal{N}$, as well as appropriately position the work by Bartlett et al. in the context of generalization. **(On the definition of complexity)**: We acknowledge that overloading $\mathcal{N}$ could be confusing. One way to take care of this would be as follows. Define $\mathcal{Q}(\mathcal{F}) = \max_{f \in \mathcal{F}} \mathcal{N}(f)$ for a family of CPLMs $\mathcal{F}$, where each function $f$ corresponds to a different CPLM obtained with a particular choice of the weights. The lower bound in Theorem 3 can then be stated by replacing $\mathcal{N}(\lambda)$ in equation 2 of the paper with $\mathcal{Q}(\mathcal{F_{\lambda}})$, where we define $\mathcal{F}_{\lambda}$ to be the family of functions that can be represented by a specified ReLU MPNN architecture $\lambda$ with all possible settings of the weights. This would also make everything consistent with the rest of our notation that we adopt from [Zhang, Naitzat, and Lim. Tropical geometry of deep neural networks, 2018.] Would you be fine with updating Theorem 3 as above, or do you have any other suggestions to streamline the notation? **(On VC-dim)**: Thank you for drawing our attention to the seminal work by Bartlett et al. We believe that our work has taken an important step by showing equivalence between ReLU MPNNs and ReLU FNNs, both of which exactly represent CPLMs (though with different tradeoffs in terms of number of parameters etc., as summarised in Table 1). 
The generalization analysis by Bartlett et al. applies to piecewise linear networks, so it could potentially be leveraged (applied/adapted/extended) to exploit this connection of ReLU MPNNs with CPLMs. Indeed, we also agree that bounding the VC-dim by $r$ can be very loose. In fact, this was the reason why we sidestepped the question of generalization in the current work, instead choosing to focus on other important aspects where our analysis and results made concrete, non-trivial contributions. We can add a discussion in the paper along these lines if you think that would be helpful? We also welcome any other suggestions you have in this context. We hope our response alleviates your concerns. Please do let us know if there's any further clarification we can provide here or update the paper with. Many thanks! --- Rebuttal Comment 4.1: Comment: The proposed notation is ok with me as long as it is correct. To be clear, this is not about being "willing to update/qualify" the results or "streamline the notation". As the definition is written now, the paper's results are not correct, and this must be corrected. I also do not see the point of a discussion about relations to VC dimension when you do not have formal results and the obvious extension of your results produces far worse bounds on FFNs than existing work. I would instead ask you to qualify your statements about generalization since they are to me not convincing and bordering on wrong. --- Reply to Comment 4.1.1: Comment: Thanks - please see our response below. `The proposed notation is ok with me as long as it is correct.` Yes, it is correct. We will update the notation as described in the previous post. `I would instead ask you to qualify your statements about generalization since they are to me not convincing and bordering on wrong.` Could you specify what is wrong with what we said here, or in the paper, about generalization? 
We mention generalisation at only one place in the main text, in the context of gradient flow [66] (line 91). The referenced paper does use a bound on the number of linear regions to obtain a generalization bound. Do you want us to remove this reference altogether?
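The shattering argument from this thread can be illustrated with a small, purely hypothetical 1-D sketch (not code from the paper): given $r$ instances placed in $r$ distinct linear regions, a continuous piecewise linear interpolant through the points realizes any labeling, so the $r$ instances are shattered.

```python
import itertools
import numpy as np

# Hypothetical 1-D illustration of the shattering argument: place r
# instances, one per linear region (here, one point per unit interval).
# For ANY labeling in {-1, +1}^r, the continuous piecewise linear
# interpolant through (x_i, y_i) realizes that labeling exactly, so this
# set of r instances is shattered by CPLMs with ~r linear regions.

r = 4
xs = np.arange(r, dtype=float)  # one representative point per region

for labels in itertools.product([-1.0, 1.0], repeat=r):
    ys = np.array(labels)
    # np.interp evaluates the piecewise linear interpolant through (xs, ys)
    preds = np.interp(xs, xs, ys)
    assert np.allclose(preds, ys), "labeling not realized"

print("all", 2 ** r, "labelings realized")
```

The converse direction in the thread (at most $r$ shatterable points) follows from the pigeonhole argument above, not from this sketch.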
Summary: This paper uses tropical geometry to understand MPNNs in the broader general context of GNNs. Whereas the WL framework studies the limitations of GNNs, this paper proposes to use the rich and powerful theory of tropical geometry to uncover their potential. This paper makes some important contributions in this direction in terms of analytic counts of the number of linear regions, and studies particular cases of popular architectures. Strengths: The paper is quite well-written and well-motivated. It also lays the ground for further work in this direction, where tropical geometry is a powerful tool with much potential that has so far seen limited use in understanding neural networks. The paper also provides thorough analyses and considerations of the aspects of its theoretical contributions. Weaknesses: I am very familiar with this area of work, and so I was able to understand what the authors meant when claiming to establish an equivalence involving tropical rational signomial maps, but care needs to be taken to phrase this appropriately, because the first equivalence between tropical signomial maps and feedforward neural networks was already established in 2018. A better literature review is needed on the intersection between tropical geometry and machine learning. This is a new and promising direction (to which I believe the submission has the potential to make an important contribution), and not much work has been done in this area, so it should be relatively easy to do a comprehensive literature review; I am surprised that this was not done. In general, the references on tropical geometry theory are quite seriously lacking. 
It's clear that the proofs of some of the main theoretical contributions rely on known and well-established results from tropical geometry theory and build on some of the existing literature at the intersection of tropical geometry and machine learning; I think that it's misleading not to cite these papers and remark explicitly that the approach and ideas for the proofs borrow from these existing resources. It's important that the revision be edited to include this information in order to give appropriate credit where it is due. Technical Quality: 3 Clarity: 3 Questions for Authors: Analytic bounds were given that build on existing theoretical approaches in the literature. How would the same question be approached numerically, for example, under experimental training of the network, where it will likely be very difficult to derive analytic solutions? The ideas for future directions of research in the conclusion are not very convincing and seem to be unfounded – can more details please be provided on how "other aspects of tropical geometry" such as "tropical cycles, the Chow group, and Tropical Jacobian" might reveal further insights about the structure of these MPNNs? Also, this is likely a matter of personal taste, but I found the color-coded theoretical results and presentation quite distracting; is this really necessary? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations were discussed throughout the paper in the form of remarks and a discussion in the concluding remarks. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for such a detailed, constructive and thoughtful review. We're grateful for your acknowledgment of the contributions of this work, and share your enthusiasm for leveraging tropical geometry to better understand successful modern architectures. Below we address all your questions, comments, and suggestions. `I am very familiar with this area of work... but care needs to be taken to phrase this appropriately because the first equivalence between tropical signomial maps and feedforward neural networks was already established in 2018.` Thank you for an excellent suggestion. Indeed, the seminal work of Zhang et al. [1] established the equivalence between ReLU FFNs and tropical rational signomial maps, which we crucially leverage in Proposition 1. We acknowledge this result in the introduction (line 41), at the beginning of Section 3 (line 138), as well as in the proof of Proposition 1 (line 513 in the Appendix). Based on your feedback, we will state this result prominently at the beginning of Section 3 as a Lemma due to Zhang et al. [1], clarifying what the equivalence entails, before we proceed to Proposition 1. We also welcome any other suggestions you might have in this context. [1] Zhang, Naitzat, and Lim. Tropical geometry of deep neural networks, 2018. `A better literature review is needed on the intersection between tropical geometry and machine learning. This is a new and promising direction (which I believe that the submission has the potential to make an important contribution) ... it should be ... easy to do a comprehensive literature review. In general, the references on tropical geometry theory are quite seriously lacking.` Thank you for drawing our attention to this! We will be sure to cite more works at the intersection of tropical geometry and machine learning, and position their contributions. In particular, we will include the following references: [2] Charisopoulos and Maragos. 
A Tropical Approach to Neural Networks with Piecewise Linear Activations, 2018. [3] Maragos, Charisopoulos, and Theodosis. Tropical Geometry and Machine Learning, 2021. [4] Alfarra, Bibi, Hammoud, Gaafar, and Ghanem. On the Decision Boundaries of Neural Networks. A Tropical Geometry Perspective, 2023. [5] Brandenburg, Loho, and Montúfar. The Real Tropical Geometry of Neural Networks, 2024. [6] Smyrnis and Maragos. Tropical Polynomial Division and Neural Networks, 2019. [7] Montufar, Ren, and Zhang. Sharp bounds for the number of regions of maxout networks and vertices of minkowski sums, 2022. [8] Trager, Kohn, and Bruna. Pure and spurious critical points: a geometric study of linear networks, 2019. [9] Mehta, Chen, Tang, and Hauenstein. The loss surface of deep linear networks viewed through the algebraic geometry lens, 2021. [10] Grigsby and Lindsey. On transversality of bent hyperplane arrangements and the topological expressiveness of relu neural networks, 2022. [11] Williams, Trager, Panozzo, Silva, Zorin, and Bruna. Gradient dynamics of shallow univariate relu networks, 2019. In addition, we welcome (with gratitude) and will include any other relevant references that the reviewer might be able to recommend. `...proofs of some... contributions rely on known ... results from tropical geometry ... existing literature at the intersection of tropical geometry and machine learning builds on ... it's misleading not to cite these papers and remark explicitly that the approach and idea for the proofs borrow from these existing resources. It's important ...... to give appropriate credit where it is due.` Thank you for the opportunity to reflect on this. Certainly, as you pointed out, some of our results either invoke directly, or build on, the tools and analysis of influential existing works (e.g., Zhang et al. [1] who laid the foundations for analysis of neural networks via tropical geometry with their pioneering work). 
This work very much stands on the shoulders of such giants, and so while we tried our best to acknowledge the contributions of others (e.g., in the proofs in the Appendix), we apologise for any oversights that could potentially be misconstrued in this regard. We will revisit all the results in this paper, and make sure to acknowledge these contributions in the main text as well when we state the propositions. `Analytic bounds ... build on existing ... literature. How would the same ... be approached numerically, ..., under experimental training of the network where it will be likely very difficult to derive analytic solutions?` This is an interesting topic for further research. For example, in [12], Serra et al. constructed a method for practically counting and analyzing the number of linear regions. Adapting their method to MPNNs will require more work, but we believe that it is a promising direction. [12] Serra, Tjandraatmadja, and Ramalingam. Bounding and Counting Linear Regions of Deep Neural Networks, 2018. `... future directions ... in the conclusion are not very convincing... can more details please be provided on how ... "tropical cycles, the Chow group, and Tropical Jacobian" might reveal further insights about the structure of these MPNNs?` The Chow group can be viewed as an analog of homology in algebraic geometry, so we believe it can serve as a type of "invariant" of a space, and its action might reveal interesting invariants in the setting of ReLU MPNNs. It also entails questions about tropical cycles and the tropical Jacobian. However, absent this context (due to space constraints), this might indeed be confusing and unmotivated, so we'd consider removing these directions from the conclusion. `I found the color-coded theoretical results ... distracting, is this ... necessary?` Thank you. We used color-coding to emphasize different components, but will consider removing it. 
Thank you very much again, and please let us know if we've sufficiently addressed your concerns. We're also happy to engage further. --- Rebuttal Comment 1.1: Title: Acknowledging authors' reply and responding Comment: Dear authors, thank you for your thoughtful and detailed reply to my concerns and comments. I am happy to read that you have considered them carefully and am satisfied with the proposed changes. I additionally have read the other reviewers' reports and am satisfied with the authors' responses to them. As mentioned in my review, I would prefer the color-coded theoretical results to be removed. I also think it is important to remove the directions for future research if more context/details cannot be given (perhaps due to space constraints), because, upon rereading the paper as it stands, these ideas appear to be an ungrounded list of concepts from tropical geometry. I would also add the following references on tropical geometry to the references list: [1] Maclagan, D., & Sturmfels, B. (2021). Introduction to tropical geometry (Vol. 161). American Mathematical Society. [2] Joswig, M. (2021). Essentials of tropical combinatorics (Vol. 219). American Mathematical Society. As long as these changes are implemented, I am happy to keep my positive rating of the paper. --- Reply to Comment 1.1.1: Comment: Greetings, and thank you for your thoughtful and constructive reply. We're glad to hear that your concerns and comments have been addressed. We will remove color-coding and the future direction on Chow group as you kindly suggested. Many thanks for also bringing to our attention the works by (Maclagan, D., & Sturmfels, B, 2021) and (Joswig, M., 2021). We will include, and appropriately position, these references in the updated version. We're grateful for your support for this work.
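The kind of empirical region counting discussed in this thread could be approximated by enumerating distinct ReLU activation patterns over sampled inputs; each linear region corresponds to one activation pattern, so the observed count is a lower bound on the true number of regions. This is an illustrative sketch for a small random feedforward net (not the MILP-based method of Serra et al., and not code from the paper):

```python
import numpy as np

# Hedged sketch: empirically lower-bound the number of linear regions of
# a small ReLU network by counting distinct activation (sign) patterns
# over sampled inputs. Every linear region corresponds to exactly one
# activation pattern, so the number of observed patterns never exceeds
# the true region count.

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
W2, b2 = rng.normal(size=(8, 8)), rng.normal(size=8)

def activation_pattern(x):
    h1 = W1 @ x + b1
    h2 = W2 @ np.maximum(h1, 0.0) + b2
    return tuple((h1 > 0).tolist() + (h2 > 0).tolist())

xs = rng.uniform(-3.0, 3.0, size=(20000, 2))
patterns = {activation_pattern(x) for x in xs}
print("empirical lower bound on #linear regions:", len(patterns))
```

For an MPNN, the same idea would apply with the sign patterns of the ReLU units in the aggregation/update layers, though making the count exhaustive (as Serra et al. do via mixed-integer programming) is the harder part.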
Summary: This paper aims to characterize the class of functions learned by message passing Graph Neural Networks (GNNs) with ReLU activations through the lens of Tropical Geometry. Specifically, it characterizes the functions learned by ReLU-based Message Passing Neural Networks (MPNNs) by establishing their equivalence to ReLU Feedforward Neural Networks (FNNs), Tropical Rational Signomial Maps (TRSMs), and Continuous Piecewise Linear Maps (CPLMs). The paper provides both lower and upper bounds for the number of linear regions and decision boundaries. Additionally, it compares the expressive power of different aggregation operators, revealing that the coordinate-wise max operator has greater geometric complexity than the sum operator. Strengths: 1. The paper is well-structured and clear. 2. The theoretical foundation is solid and thorough. 3. It provides the first lower and upper bounds for the number of linear regions in ReLU MPNNs. 4. It demonstrates that the max aggregation operator is more expressive than the sum operator in terms of geometric complexity, highlighting how expressivity varies with different message aggregation operators and update functions. Weaknesses: 1. The theoretical analysis may not be directly applicable in practice for graph predictions, as it assumes that the ReLU MPNN processes the same graph structure, which is not typically the case in graph classification tasks. 2. The technical condition used in the analysis—that the dimension of the new embedding is at least the sum of the dimensions of the aggregated message and the previous embedding—may be restrictive and not hold for general MPNN architectures. 3. It is unclear how the input graph's spectral properties affect the geometric complexity of ReLU MPNNs within the established bounds. 4. While the theoretical results indicate that ReLU MPNNs are less expressive than, or as expressive as, ReLU FNNs, in practice, ReLU MPNNs significantly outperform FNNs in graph learning tasks. 
This discrepancy suggests the theoretical results may not fully explain the practical successes of ReLU MPNNs. Other Comments: 1. Line 123: $x_d^{a_m}$ should be $x_m^{a_m}$. 2. What does $D$ stand for in Theorem 4? 3. It may be better to use different notations to represent the set of neighboring vertices ($\mathcal{N}(v)$) and the linear degree ($\mathcal{N}(f)$) for a piecewise linear function to avoid ambiguity. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and constructive feedback. We're glad to note your recognition of several contributions and strengths of this paper. We address your comments, concerns, and suggestions below. `The theoretical analysis ... assumes that the ReLU MPNN processes the same graph structure, ....` We agree with the reviewer. This is a good observation: indeed, we assume the same graph structure in our analysis while deriving the upper bound on geometric complexity. We suspect that this is a technical artefact that somewhat simplifies our analysis (previous work on the number of linear regions of GCNs [1] made a similar assumption), but retains the essential aspects of the typical setting (i.e., the more general case of different graph structures) with respect to the role played by the aggregation and update steps. We indeed hope that this assumption can be relaxed, or removed altogether, in future work. [1] Chen et al. Lower and upper bounds for numbers of linear regions of graph convolutional networks, 2022. `The technical condition ... that the dimension of the new embedding is at least the sum of the dimensions of the aggregated message and the previous embedding—may be restrictive ...` We agree with the reviewer on this as well. This restriction generally arises in the analysis of geometric complexity (in particular, it showed up in the analysis of the tropical geometry of ReLU FFNs [2] as well) to ensure an amenable arrangement of hyperplanes, ruling out pathological possibilities. [2] Zhang et al. Tropical geometry of deep neural networks, 2018. `It is unclear how the input graph's spectral properties affect the geometric complexity of ReLU MPNNs within the established bounds.` Indeed, while our bounds reveal a dependence on some input graph properties, such as the maximum degree and the sum of degrees, they do not involve spectral quantities such as the spectrum of the Laplacian. 
It's an interesting future direction to investigate whether such quantities have any effect on the geometric complexity of ReLU MPNNs. `While the theoretical results ... ReLU MPNNs are less expressive or as expressive as ReLU FNNs, in practice, ReLU MPNNs significantly outperform FNNs in graph learning tasks. This discrepancy... may not fully explain the practical successes of ReLU MPNNs.` Thank you for the opportunity to clarify this question, which motivated most of the second half of our work, where we consider several ReLU MPNN architectures to represent continuous piecewise linear maps and compare them to ReLU FFNs. While the first half of our work showed an equivalence between ReLU MPNNs and ReLU FFNs in terms of the class of functions that they can represent, our subsequent analysis (especially Remark 3, Proposition 6, and Table 1 in the paper) suggests why ReLU MPNNs might be more effective in practice. In particular, as Table 1 summarises, ReLU MPNNs outperform ReLU FNNs in terms of both the number of learnable parameters and the number of layers required to represent the same continuous piecewise linear map. That said, we sidestepped the role of the optimisation algorithm (e.g., SGD/Adam), which is another consideration that might add to this discrepancy between ReLU MPNNs and ReLU FNNs in practice. Based on your feedback, we will make this clear. `Other Comments: 1. Line 123: $x_d^{a_m}$ should be $x_m^{a_m}$. 2. What does $D$ stand for in Theorem 4? 3. It may be better to use different notations to represent the set of neighboring vertices ($\mathcal{N}(v)$) and the linear degree ($\mathcal{N}(f)$) for a piecewise linear function to avoid ambiguity.` Many thanks - we'll incorporate all your suggestions. Indeed, in line 123, $x^{a_m}_d$ should be $x^{a_m}_m$. In Theorem 4, $D$ stands for the sum of the degrees of all the vertices. We will also modify the notation to avoid confusion. We're grateful for your review, and hope our response has satisfactorily addressed your questions and concerns. 
Thank you very much! --- Rebuttal Comment 1.1: Comment: Thank you for your response. I think my score still accurately reflects my belief in the paper and I retain the score.
NeurIPS_2024_submissions_huggingface
2024
Improved Sample Complexity Bounds for Diffusion Model Training
Accept (poster)
Summary: This paper investigates theoretical guarantees for the performance of diffusion models. More precisely, while previous works have mainly focused on the iteration complexity by assuming access to an accurate diffusion model, this paper targets the question of the sample complexity of learning an accurate diffusion model (score function). The main contribution is an exponential improvement of the existing bound with respect to the Wasserstein error and depth. Strengths: The main contribution is to improve the existing sample complexity for learning the score function, exponentially with respect to $\Theta$, $\gamma$, and $D$. This may contribute to a better theoretical understanding of diffusion-based models. Weaknesses: - The impact of the results obtained and the scope of the theoretical insights are limited. I am not sure that the results of this paper bring concrete new implications/understandings for diffusion models. - Some parts of the paper are not clear enough (see Questions section) Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the authors develop more why and when the dependence on $\Theta$, $\gamma$, $D$ and $P$ is more important than the dependence on $\epsilon$? Can a numerical estimate be provided here? 2. Can the authors discuss assumption A2 in more detail? Is it true that the assumption implies $\inf_{f} \mathbb{E}_{x \sim q_t} [\|f(x) - s_t(x)\|_2]=0$? Does this assumption implicitly assume that the score function is "simple" enough to belong to the family $\mathcal{F}$? Does it lead to any limitation in practice? 3. What are $\gamma$ and $R$ in line 41? They seem to be undefined up to that point. 4. For clarity, it may be better to replace $z_i$ with $z_{i,t}$ in line 54 to show that such samples are distinct and i.i.d. for each sample and time index. 5. The paper and the considered Setting 1.1 depend strongly on the DDPM algorithm. To make the paper self-contained, it is necessary to describe this algorithm. 6. 
In Setting 1.1, does $P$ denote the number of parameters per layer? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Not Applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thorough review and valuable suggestions on the presentation of the paper, and the time and effort you have invested in providing this feedback. Implications of our work ---- Our work is the first to show a polynomial sample complexity for learning a diffusion model using a deep network under a reasonable assumption -- that the scores of the distribution can be represented using a $P$-parameter neural network. From a theoretical point of view, we believe this is a significant contribution -- it was not clear prior to our work whether minimizing the score-matching objective is sufficient to learn a diffusion model using a small number of samples. More broadly, we hope that this style of assumption will inspire future work in understanding diffusion models and other deep learning models. Responses to specific questions ---- > *Can the authors develop more why and when the dependence on $\Theta$, $\gamma$, $D$ and $P$ is more important than the dependence on $\varepsilon$? Can a numerical estimate be provided here?* To illustrate the dependence on these parameters, we provide some natural choices of parameters: - TV error $\varepsilon = 0.01$. - Wasserstein error $\gamma = 0.01$. - Space dimension $d = 12288 = 64 \times 64 \times 3$. - Total parameter number $P = 270M = 2.7 \times 10^8$. - Network depth $D = 12$. - Parameter weight range $\Theta = 5$. For the parameter number and the space dimension, we used the parameters of Stable Diffusion [1]. Our bound gives $\frac{d^2 PD}{\varepsilon^3} \cdot \log \Theta \cdot \log^3 \left(\frac{1}{\gamma}\right) \approx 7.7 \times 10^{25}$, compared to the previous bound of $\frac{d^{5/2}}{\gamma^3 \varepsilon^2} \left(\Theta^2 P\right)^D \sqrt{D} \approx 5.2 \times 10^{138}$. 
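These order-of-magnitude estimates can be reproduced with a few lines (assuming natural logarithms and ignoring the constants hidden in the asymptotic notation):

```python
import math

# Reproducing the rebuttal's order-of-magnitude estimates (assuming
# natural logs; constants hidden in O(.) are ignored).
eps, gamma = 0.01, 0.01
d, P, D, Theta = 12288, 2.7e8, 12, 5

# New bound: d^2 * P * D / eps^3 * log(Theta) * log^3(1/gamma)
new_bound = d**2 * P * D / eps**3 * math.log(Theta) * math.log(1 / gamma)**3

# Previous bound, computed in log10 to avoid float overflow:
# d^{5/2} / (gamma^3 eps^2) * (Theta^2 P)^D * sqrt(D)
old_log10 = (2.5 * math.log10(d) - 3 * math.log10(gamma) - 2 * math.log10(eps)
             + D * math.log10(Theta**2 * P) + 0.5 * math.log10(D))

print(f"new bound ~ {new_bound:.1e}")      # ~ 7.7e+25
print(f"old bound ~ 10^{old_log10:.1f}")   # ~ 10^138.7
```

The dominant term in the old bound is $(\Theta^2 P)^D \approx 10^{118}$, which is exactly the exponential-in-$D$ dependence the paper removes.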
While we do not claim that $\Theta$, $\gamma$, $D$, and $P$ are universally more important than $\varepsilon$, our results demonstrate a significant *exponential* improvement in $D$ and $\gamma$ with only a minor loss in $\varepsilon$, resulting in a substantially better overall bound. > *Is it true that assumption A2 implies $\inf_f \mathbb{E}_{X \sim q_t}[\| f(x) - s_t(x) \|_2] = 0$? Does this assumption implicitly assume that the score function is "simple" enough to belong to the family?* Actually, we only need the $L^2$ error to be polynomially small, namely $\frac{\delta \cdot \varepsilon^3}{N^2 \sigma^2_{T - t_k} \log \frac{d}{\gamma}}$, as specified in Theorem C.3. We will put this in the main body of the paper. Intuitively, this assumption states that neural networks can effectively represent the score function. This assumption is both reasonable (neural networks have proven to be surprisingly powerful function approximators) and necessary (if the distribution's score isn't well approximated by a neural network, diffusion won't work). > *What are $\gamma$ and $R$ in line 41?* In line 41, $R$ represents the radius of the support of the distribution, and $\gamma$ is a parameter used to specify the Wasserstein error. > *In Setting 1.1, does $P$ denote the number of parameters per layer?* In Setting 1.1, $P$ denotes the total number of parameters, not just per layer. We will clarify this in the final version to avoid any ambiguity. [1] Rombach, Robin, et al. "High-resolution image synthesis with latent diffusion models." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022. --- Rebuttal Comment 1.1: Comment: Thanks for the response and the numerical estimate. Apart from the presentation, which I believe will be improved by the authors, I think the main debate is about the importance of the paper, as I mentioned in the first weakness and discussed in more detail by other reviewers. 
Unfortunately, I am not in a position to make an informed judgment, but I think the paper contains interesting results and I maintain my rating.
Summary: In this paper, the authors analyze the sample complexity of training a score-based diffusion model. They show that, with a sufficiently expressive neural network, $\tilde O(d^2 P D \log \Theta \log^3 \frac{1}{\gamma} / \epsilon^3)$ samples are needed to learn an accurate diffusion model. Compared to the existing result, their bound has exponentially better dependence on $\Theta$, $\gamma$ and $D$, and a better polynomial dependence on $d$ and $P$, at the cost of a worse polynomial dependence on $\epsilon$. Strengths: The paper improves the existing bound for training a diffusion model with a sufficiently expressive neural network. The authors analyze the barrier to using $L_2$ accuracy as a criterion and propose to use a $(1-\delta)$-quantile error in their analysis, which scales polylogarithmically in $1/\gamma$ and suffices for fast sampling via the reverse SDE. The organization of this paper is good, and the proof sketch is clear, though it is hard to check all the details. Weaknesses: 1. Generally, this paper discusses the training sample complexity of a score-based diffusion model. The assumption is somehow strong, e.g., that the neural networks can represent the score. Besides, the authors do not include the optimization error in the analyses. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As the bound in Lemma 1.4 holds with some probability, and in Lemma 1.3 the requirement must hold for all $N$, what happens if we let the number of sampling steps $N \to \infty$ in Theorem 1.2? Is the bound in Lemma 1.4 independent for each $t$, or is it controlled by the same stochasticity? 2. What is the quantitative requirement on $\mathcal{F}$ in Assumption 2? What does "sufficiently small" mean? 3. Why is it required to constrain the functions to those represented by a fully connected neural network with ReLU activations and depth $D$, with $P$ parameters, each bounded by $\Theta$? Is it possible to extend this to other function classes? 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your comments and are glad that you like and support the paper. **The assumption that networks can represent the score is somehow strong.** We agree that it is somewhat strong, but view this more as a statement about data; analogous to "the data is sparse in X basis". Over the past decade neural networks have proven to be surprisingly effective function approximators. We ask: supposing this is true, how many samples do you need to accurately learn a generative model? (And if it's not true, then trying to learn the score with a neural network is doomed to fail regardless of how many samples you use.) **Q1 (Applying Lemma 1.4 for all N):** That's right, Theorem 1.2 applies Lemma 1.4 in a union bound over the N time steps to invoke Lemma 1.3. Since the dependence on probability $\delta_{train}$ is logarithmic, this loses at most a $\log N$ factor. Not good if $N$ is really $\infty$, but fine for more reasonable values of $N$. That said, there is an interesting question here. In our analysis, it doesn't matter if the image samples are chosen independently or jointly across different scales. In practice, they are typically chosen jointly (so each sample image is then used at all noise scales). It seems plausible to us that jointly sampling images could improve the sample complexity in Theorem 1.2 from $d^2$ down to $d$, by correlating the "bad" events across scales. **Q2 (Quantitative Assumption 2):** We give the precise bound in Theorem C.3 in the appendix. It's a polynomial bound, namely $\frac{\delta \varepsilon^3}{N^2 \sigma^2 \log \frac{d}{\gamma}}$. **Q3 (Other function classes):** Our proof actually applies to any Lipschitz activation function, we should have specified this. We think measuring a neural network by its parameter count / weight / depth is a simple way to bound its complexity. 
One can certainly analyze other classes -- Lemma A.2 applies to arbitrary finite function classes, and one can take a net for other classes -- but we're not sure of any more general statement that applies cleanly to neural networks. (Some work (e.g. BMR20) tries to use Rademacher complexity to get a general statement, but the Rademacher complexity of neural networks is exponential in their depth, so this is unsatisfying.) --- Rebuttal Comment 1.1: Comment: Thank you for your reply! 1. I still have doubts about the assumption that the score function can be represented effectively. As the authors constrain the functions to those represented by a fully connected neural network with ReLU activations and depth $D$, with $P$ parameters, each bounded by $\Theta$, the question now becomes whether this family is strong enough to represent the score of the complex dynamics. What is the influence of these network parameters other than on the representational ability? 2. As confirmed by the authors, I am a little confused: if $N$ cannot be too large, how do we bound the discretization error? The authors should discuss this more sufficiently, especially when jumping from the individual bound to the union bound. 3. Note that the authors did not discuss the optimization error, so it is really strange to use the description "trained from". Do the authors assume the global minimum can be achieved? --- Reply to Comment 1.1.1: Comment: Thank you for your reply! We respond to your questions below: 1. Our goal is to understand the sample complexity of using the score matching objective to approximate the score by a neural network for diffusion, because that is the process people are using successfully in practice. There are a couple of questions you might be asking, and we're not sure which: - Why study this particular family of neural networks (relu, fully connected, etc.)? Our techniques are pretty general, and could be applied to any continuous neural network with Lipschitz activation. 
This just seemed like the simplest setting to describe (of course we need some bound on the complexity: bigger, more complex networks need more samples). - Why would we expect neural networks to be able to represent the score accurately? We think this is somewhat orthogonal to the sample complexity question: if they can't represent the score, then diffusion isn't going to work no matter how many samples you have. In practice, neural networks do seem to be able to represent scores accurately. Why that happens is a very interesting question, but we view it as more a question about *data* than about neural networks or optimization. (Analogous to: images tend to be sparse in the wavelet basis.) It's trivial to construct synthetic data that cannot be represented by a neural network (or any other compact representation). 2. **Choice of $N$**. There are competing factors: $N$ needs to be large enough for the discretization error to be small, but the sample complexity and sampling time degrade with increasing $N$. As we specify in Lemma 1.3, the sweet spot is $N = \tilde{O}(\frac{d}{\varepsilon^2 + \delta^2} \log^2 \frac{1}{\gamma})$. 3. **Optimization error.** We did assume that the global minimum can be achieved, as specified in Setting 1.1. This is a standard setting in the line of works that aim to bound sample complexity. While analyzing the optimization process is indeed interesting, it is also extremely challenging. Our focus is on determining how many samples are needed for the minimizer of the neural network to be effective in the diffusion process. In practice, SGD has been shown to approximate the global minimum well. Our analysis can easily extend to handle approximation error in the optimization. Given an approximate minimizer with error comparable to the bound we give on line 712, the same analysis would give the same result. We can clarify the wording "trained from" to specify that we mean the ERM of the score matching objective on the given samples.
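To make the score-matching ERM referenced above concrete, here is a small numerical sketch of our own (not the paper's construction): for Gaussian data, denoising score matching with a *linear* score model standing in for the neural network has a closed-form empirical minimizer, and it recovers the true score of the noise-smoothed distribution. All variable names are illustrative.

```python
import numpy as np

# Our illustrative sketch, not the paper's setup: ERM of the denoising score
# matching objective at a single noise scale sigma, with a linear score model
# s(u) = W u in place of the neural network, so the empirical minimizer is an
# exact least-squares solve. For clean data x ~ N(0, I_d), the sigma-smoothed
# density is N(0, (1 + sigma^2) I_d), whose score is -u / (1 + sigma^2);
# the ERM solution should recover W ~= -I / (1 + sigma^2).

rng = np.random.default_rng(0)
n, d, sigma = 20000, 2, 1.0

x = rng.normal(size=(n, d))        # clean samples
z = rng.normal(size=(n, d))        # Gaussian noise
u = x + sigma * z                  # noisy samples at scale sigma

# Denoising score matching regresses s(u) onto the target -z / sigma.
target = -z / sigma
W_T, *_ = np.linalg.lstsq(u, target, rcond=None)  # minimizes ||u @ W_T - target||_F^2
W = W_T.T

print(np.round(W, 2))              # close to -0.5 * I for sigma = 1
```

With a neural network in place of the linear model and SGD in place of the least-squares solve, this is the ERM of the score matching objective discussed in the reply.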
Summary: The paper studies the sample complexity of training diffusion models. The bound derived in the paper is exponentially better than the previous results. The paper also discusses the difficulty of learning the score function in $L^2$. Strengths: 1. The paper derives better sample complexity results for diffusion models. The bound in this work is exponentially better than the existing results. 2. The paper discusses the difficulties of learning the score in $L^2$. Two examples are provided to show this challenge. Weaknesses: 1. The major concern is the presentation of this paper. I understand the results are important. However, the authors should clearly position this work in the literature. I suggest the authors provide a table that compares the sample complexity to the ones in prior work. In particular, the authors should discuss the improvement in terms of both the TV bound and the Wasserstein results. 2. I notice the authors use BMR20 as a benchmark in Section 1.1. However, there are many better sample complexity results on diffusion models since that work. I suggest the authors carefully state how this work differs from those results. Also, I suggest the authors emphasize the main technical contributions in the proof instead of stating all the steps one by one. In particular, which step and what technique lead to the exponential improvement? 3. The importance of Section 4 is not clear. Recent work has explored the learning score function in $L^2$; see [1-5]. All the work listed shows learning the score in $L^2$ is possible, and some of them show the minimax optimal rate. Specifically, learning the score function for an arbitrary distribution can be hard. However, learning the denoised score function during the forward process can be much easier. Therefore, I suggest the authors clarify why the examples in Section 4 can be interesting. [1] Chen, Minshuo, et al. 
"Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data." International Conference on Machine Learning. PMLR, 2023. [2] Han, Yinbin, Meisam Razaviyayn, and Renyuan Xu. "Neural network-based score estimation in diffusion models: Optimization and generalization." arXiv preprint arXiv:2401.15604 (2024). [3] Wibisono, Andre, Yihong Wu, and Kaylee Yingxi Yang. "Optimal score estimation via empirical bayes smoothing." arXiv preprint arXiv:2402.07747 (2024). [4] Zhang, Kaihong, et al. "Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions." arXiv preprint arXiv:2402.15602 (2024). [5] Oko, Kazusato, Shunta Akiyama, and Taiji Suzuki. "Diffusion models are minimax optimal distribution estimators." International Conference on Machine Learning. PMLR, 2023. Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Comparison to previous work ---- We would like to clarify that most of the prior work in the literature, including the works you mentioned, ask about the *approximation* power of neural networks for representing the score of *arbitrary* distributions, and/or make strong assumptions on the distribution. Typically, for $d$ dimensional distributions, the number of parameters necessary is *exponential in $d$* -- indeed, the bounds shown in all the works you mentioned suffer from this exponential dependence. But in practice, neural networks are surprisingly powerful function approximators of real-world functions, and they seem to be able to represent $d$-dimensional distributions with much less than $\exp(d)$ parameters. To circumvent this issue, we take a different route: we assume that a $P$ parameter neural network can approximate the score of the distribution that we are trying to learn. Note that such an assumption is *necessary*: if our network cannot even represent the scores, there is no hope of learning it. We then ask: if $P$ parameters suffice, how many samples do we need to learn this approximation? We show that the sample complexity for this problem is only *polynomial* in all relevant parameters, and even logarithmic in the Wasserstein error, unlike *all* prior works. We believe that this question and our answer more accurately capture the situation in practice than the one above. We appreciate the feedback to provide a table -- please see the general response. Note that [HRX24], which is focused on the two-layer network case in the NTK regime, actually makes the *assumption* that gradient descent in RKHS using a **sufficient** number of samples is sufficient to learn the score (Assumption 3.11). Our paper *proves* that a small number of samples is sufficient to learn the score under the reasonable assumption that the network can represent it; moreover, we provide a quantitative bound that is polynomial in all relevant parameters. 
Technical contributions --- Here is a brief summary of our technical contributions: * We exploit the fact that recent works on *sampling* only require an error of $\varepsilon^2/\sigma_t^2$ for each $t$ to show a sample complexity bound *independent* of $\sigma_t$ for score estimation with this error. That is, the accuracy required fortuitously matches the accuracy achievable at each scale $\sigma_t$, with a sample complexity independent of $\sigma_t$. * This lets us run until $\sigma_t$ is extremely small; this leads to a sample complexity *logarithmic* in the final Wasserstein error. * To show our score estimation result for a single time $t$, instead of making the strong assumption that the distribution is bounded as in [BMR20], we observe that *with high probability* the score at time $t$ is bounded in terms of $\sigma_t$. This observation, combined with a technical argument, allows us to remove the $R^3$ dependence in the sample complexity in [BMR20]. * Other contributions that we will refrain from describing in detail here in the interest of space -- we show that we learn the score in a new $1-\delta$ robust sense, rather than standard $L^2$, to circumvent barriers in learning in $L^2$, and we make use of a careful net argument to obtain our *exponential* improvements in the dependencies on the depth and range of the neural network relative to [BMR20]. Section 4 --- - As above, Section 4 shows that learning in $L^2$ *requires* $\text{poly}(1/\gamma)$ samples to obtain a final Wasserstein error of $\gamma$. We can circumvent this barrier by learning the score in our weaker $1-\delta$ robust sense, to obtain a final sample complexity only logarithmic in $1/\gamma$. - Minimax optimality: As explained above, prior works show minimax optimality with respect to the class of arbitrary distributions, which results in an exponential in $d$ sample complexity.
This does not accurately capture the situation in practice -- neural networks are surprisingly powerful approximators of real-world functions and seem to be able to represent $d$ dimensional distributions with much less than $\exp(d)$ parameters/samples. We circumvent this by making the reasonable assumption that the scores of the distribution can be represented with a $P$ parameter neural network. [HRX24]: Neural Network-Based Score Estimation in Diffusion Models: Optimization and Generalization. Yinbin Han, Meisam Razaviyayn, Renyuan Xu. https://arxiv.org/abs/2401.15604 --- Rebuttal 2: Comment: Thank you for your detailed response. I have raised the score. However, the presentation should still be improved before publication.
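The gap between learning the score in $L^2$ and in the $1-\delta$ robust sense discussed in the rebuttal above can be illustrated with a toy simulation (ours, purely schematic): an estimate that is accurate except on a rare event has a huge $L^2$ error, yet a tiny error bound holding with probability $1-\delta$.

```python
import numpy as np

# Toy simulation (ours, purely schematic): an estimator whose pointwise score
# error is tiny except on a rare bad event. The L^2 error is dominated by the
# rare event, while the error bound that holds with probability 1 - delta
# (here delta = 0.01) stays tiny.

rng = np.random.default_rng(2)
n = 100000
bad = rng.random(n) < 1e-3                 # rare bad event, probability 1e-3
err = np.where(bad, 1e4, 0.01)             # pointwise error magnitude

l2_error = np.sqrt(np.mean(err ** 2))      # root-mean-square (L^2) error
robust_error = np.quantile(err, 0.99)      # error level holding w.p. 0.99

print(l2_error)        # large, on the order of a few hundred
print(robust_error)    # 0.01
```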
Summary: In this paper, the authors studied the sample complexity of training diffusion models. By using a sufficiently expressive neural network, the authors showed an exponential improvement in the dependence on Wasserstein error and network width, which is expressed as $\tilde{O}(d^2PD\log\Theta \log^3(1/\gamma)/\varepsilon^3)$. This bound has a better polynomial dependence on the dimension $d$ and $P$, a better exponential dependence on $\Theta, \gamma$ and $D$, but a worse polynomial dependence on $\varepsilon$ as a cost. Strengths: This paper is clearly written and the presentation is fairly good. The theorems and lemmas proposed in this paper are sound and solid. Weaknesses: The contribution of this paper does not seem sufficient for acceptance at a top-tier machine learning conference. As we know, the TV or Wasserstein error of learning diffusion models mainly lies in the discretization error caused by solving the backward ODE (or SDE) through the Euler-Maruyama method, as well as the score estimation error. The discretization error is dominated by the latter according to Chen et al.'s work. In this paper, the authors mainly studied the score estimation error. My biggest concern is that inserting a better generalization bound for score estimation (with regard to polynomial dimension dependence) is not novel enough. For more questions, please refer to the next section. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. For assumption A2, I think you can directly put the result of Theorem C.3 into the main text since the function approximation is also important, and a statement like "sufficiently small" does not seem rigorous. 2. Please provide some explanation of the novelty of the sample complexity theory, beyond inserting a polynomial-in-dimension learning error rate for expressive neural networks. 3. The work focuses on DDPM. However, in the original work of Song et al., they used VE (variance exploding) for consistency training.
Can the sample complexity bound in the paper be adapted to the VE case? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: There is no potential societal impact of this work since it is purely theoretical. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments and questions. As you state, prior work like Chen et al. has shown that "sampling is as easy as learning the score." Thus the main open question is: how easy *is* learning the score? We think that question is clearly important enough for top-tier machine learning conferences. We address it by providing the *first* polynomial sample complexity bound, under the assumption that the score can be effectively represented by neural networks. And this assumption is both reasonable (neural networks have proven to be surprisingly powerful function approximators) and necessary (if the distribution's score isn't well approximated by a neural network, diffusion won't work). **Assumption A2**: Thank you for the suggestion, which is the consensus of the reviewers. We will include the precise definition of "sufficiently small" in the main text in the final version of the paper. As stated in Theorem C.3, the required $L^2$ error is $\frac{\delta \varepsilon^3}{N^2 \sigma^2 \log \frac{d}{\gamma}}$. **Technical novelties**: Here is a brief summary of our technical contributions: * We exploit the fact that recent works on *sampling* only require an error of $\varepsilon^2/\sigma_t^2$ for each $t$ to show a sample complexity bound *independent* of $\sigma_t$ for score estimation with this error. That is, the accuracy required fortuitously matches the accuracy achievable at each scale $\sigma_t$, with a sample complexity independent of $\sigma_t$. * This lets us run until $\sigma_t$ is extremely small; this leads to a sample complexity *logarithmic* in the final Wasserstein error. * To show our score estimation result for a single time $t$, instead of making the strong assumption that the distribution is bounded as in [BMR20], we observe that *with high probability* the score at time $t$ is bounded in terms of $\sigma_t$.
This observation, combined with a technical argument allows us to remove the $R^3$ dependence in the sample complexity in [BMR20]. * Other contributions that we will refrain from describing in detail here in the interest of space -- we learn the score in a new $1-\delta$ robust sense, rather than standard $L^2$, to circumvent barriers in learning in $L^2$, and we make use of a careful net argument to obtain our *exponential* improvements in the dependencies on the depth and range of the neural network relative to [BMR20]. **VE vs VP:** The VE and VP processes are simply reparameterizations of each other. That is, if $x_t$ denotes a solution to the VE SDE, then $y_t = e^{-t} x_{e^{2t}-1}$ is a solution to the VP SDE. The reverse process and its discretization can be easily adjusted to match this change of variables. So, these processes are pretty much identical, and the same results hold.
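The VE-to-VP change of variables stated above can be checked numerically. The following sketch (ours, not from the paper) uses the VE schedule $\sigma_s^2 = s$, for which $y_t = e^{-t} x_{e^{2t}-1}$ should have the VP marginal $e^{-t} x_0 + \sqrt{1 - e^{-2t}}\, z$.

```python
import numpy as np

# Numerical sanity check (ours) of the VE <-> VP reparameterization from the
# rebuttal: if x_s solves the VE SDE with marginal x_s = x_0 + sqrt(s) * z
# (i.e. sigma_s^2 = s), then y_t = e^{-t} x_{e^{2t} - 1} has the VP marginal
# y_t = e^{-t} x_0 + sqrt(1 - e^{-2t}) * z.

rng = np.random.default_rng(1)
n = 200000
x0 = 3.0                      # a fixed starting point
t = 0.7                       # a VP time

s = np.exp(2 * t) - 1         # corresponding VE time
z = rng.normal(size=n)
x_s = x0 + np.sqrt(s) * z     # VE marginal at time s
y_t = np.exp(-t) * x_s        # reparameterized process

# VP marginal statistics at time t
vp_mean = np.exp(-t) * x0
vp_var = 1 - np.exp(-2 * t)

print(y_t.mean(), vp_mean)    # means agree (up to Monte Carlo error)
print(y_t.var(), vp_var)      # variances agree (up to Monte Carlo error)
```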
Rebuttal 1: Rebuttal: We thank the reviewers for their feedback. One common request was for a table placing our results in the context of related work. Here it is:

| **Work** | **Sample Complexity** | **Notes** |
|----------|-----------------------|-----------|
| [ZYLL24] | $\widetilde O\left( \frac{1}{\epsilon^2 \gamma^{d/2}}\right)$ | Assuming distribution is $\alpha$-subgaussian, satisfies an LSI. Gaussian kernel estimator, rather than NN. |
| [WWY24] | $\widetilde O\left(\frac{d^{d/2} \alpha^d R^{d+2}}{\gamma^{d+2} \epsilon^{d+4}} \right)$ | Distribution is $\alpha$-subgaussian. Gaussian kernel estimator, rather than NN. |
| [OST23] | $\widetilde O\left( \frac{1}{\epsilon^{O(d)}}\right)$ | Density supported on $[-1, 1]^d$, belongs to a Besov space |
| [CHZ+23] | $\widetilde O\left(\frac{1}{(\epsilon \gamma)^{O(d)}} \right)$ | Assuming density supported on a $d$-dimensional subspace |
| [BMR20] | $\widetilde O\left(\frac{d^{5/2}R^3}{\gamma^3 \epsilon^2} \left(\Theta^2 P \right)^D \sqrt{D} \log \frac{1}{\delta} \right)$ | Assuming NN can represent scores, distribution is bounded |
| Ours | $\widetilde O\left(\frac{d^2}{\epsilon^3} PD \log \Theta \log^3 \frac{1}{\gamma} \right)$ | Assuming NN can represent scores |

Our paper primarily discusses [BMR20] because it is the only related work that is not exponential in the data dimension $d$; but as you can see, it is exponential in the depth $D$ of the neural network. Ours is the only work giving sample complexity polynomial in these parameters.

[WWY24]: Wibisono, Andre, Yihong Wu, and Kaylee Yingxi Yang. "Optimal score estimation via empirical bayes smoothing." arXiv preprint arXiv:2402.07747 (2024). [ZYLL24]: Zhang, Kaihong, et al. "Minimax Optimality of Score-based Diffusion Models: Beyond the Density Lower Bound Assumptions."
arXiv preprint arXiv:2402.15602 (2024). [OST23]: Oko, Kazusato, Shunta Akiyama, and Taiji Suzuki. "Diffusion models are minimax optimal distribution estimators." International Conference on Machine Learning. PMLR, 2023. [CHZ+23]: Chen, Minshuo, et al. "Score approximation, estimation and distribution recovery of diffusion models on low-dimensional data." International Conference on Machine Learning. PMLR, 2023. [BMR20]: Generative Modeling with Denoising Auto-Encoders and Langevin Sampling. Adam Block, Youssef Mroueh, Alexander Rakhlin. https://arxiv.org/abs/2002.00107
NeurIPS_2024_submissions_huggingface
2024
HGDL: Heterogeneous Graph Label Distribution Learning
Accept (poster)
Summary: This paper studies heterogeneous graph label distribution learning with the aim of predicting label distributions of unlabeled nodes in a heterogeneous graph. This paper elaborates on the challenges of generalizing LDL to networked data, and proposes an LDL algorithm, HGDL, to overcome them. Besides, this paper derives the PAC-Bayes error bound for HGDL and conducts experiments to show the superiority of the proposal. Strengths: This paper studies a new problem, i.e., label distribution learning in heterogeneous graphs. Besides, this paper proposes an end-to-end HGDL learning approach to jointly learn the optimal meta-path graph topology and node representation. The effectiveness of the proposed method is studied theoretically and empirically. Weaknesses: The contribution of this paper is unclear. This paper attempts to combine heterogeneous graph learning with label distribution learning. However, the paper addresses challenges about the topology, heterogeneity, and inconsistency in terms of instances, and pays little attention to the challenges of learning label distributions of instances, i.e., the proposed challenges do not arise from label distributions. Therefore, the contributions to LDL are not clear. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why can't we just replace the output layer of an existing heterogeneous node classification model with MSE (Mean Squared Error) or KL loss to learn the label distribution? What challenges arise when replacing categorical labels with label distributions in HGDL? And which of these problems does this paper address? What is the contribution of this paper to the field of label distribution learning? 2. Why use PAC-Bayes theory to analyze the error bound instead of PAC theory? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I don't believe that the paper has a potential negative social impact.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We address the reviewer's concerns below one by one in a Q&A fashion. --- Q1: Why can't we just replace the output layer of an existing heterogeneous node classification model with MSE (Mean Squared Error) or KL loss to learn the label distribution? A1: Simply replacing the output layer with an MSE or KL loss to learn the label distribution, as the reviewer has suggested, will not work well due to two prominent challenges imposed by network heterogeneity. First, in a heterogeneous graph, the label distribution of each node is influenced by its neighboring nodes, which can vary along multiple factors, including node and edge types, nodal contents, and topological features. This complicates the message-passing mechanism, because aligning those varying factors would result in a combinatorial issue. Second, nodes sharing similar contents may frequently be positioned far apart in heterogeneous graphs, separated by nodes of other types, resulting in substantial topological distances between them. During message-passing, the impact of distantly positioned nodes within a graph is substantially diminished, consequently steering the LDL model to prioritize individual nodal vectors and overlook graph topology. Our LDL research aims to resolve these heterogeneous graph challenges with two building blocks. First, we propose to use multiple weighted meta-paths in addition to a regular KL-divergence loss for class distribution learning. The weights are learned, allowing each meta-path to individualize its message-passing paths without being negatively impacted by the different node and edge types. Second, HGDL uses a consistency-aware graph transformer architecture to harmonize local topology and global feature information.
The graph transformer aligns nodal features with the learned optimal topology through weighted meta-paths, capturing both local neighborhood information and global feature similarities. This harmonization is crucial for ensuring that nodes with similar features, even if topologically distant, are represented in a way that reflects their content-based similarities. --- Q2: What is the contribution of this paper to the LDL research? A2: From the label distribution learning perspective, our research conveys four key contributions as follows. (1) Our research is the first to generalize LDL to heterogeneous networks. The practical implications, such as for urban functionality prediction or drug function prediction (presented in Sec 6.1), and the theoretical analysis have, to our knowledge, not yet been explored by any contemporary research. (2) Our research provides a simple yet effective way of modeling message-passing in heterogeneous networks for label distribution learning. It also offers a transparent interpretation of which meta-paths play a more important role in the final outcome. (3) Our theoretical study not only offers assurances of the proposed model's performance, but also paves the way to enrich the theoretical understanding of label distribution learning for networked data. (4) We have created new datasets to validate the algorithm's performance. Both our data and algorithms are published to stimulate future growth of research in label distribution learning for networks, which has thus far been under-explored. --- Q3: Why use PAC-Bayes theory to analyze the error bound instead of PAC theory? A3: We analyzed the algorithm's performance in a PAC-Bayes regime instead of PAC for three main reasons. (1) Traditional PAC learning theory focuses on a single, deterministic model/hypothesis, which often leads to worst-case analysis. In contrast, PAC-Bayes extends it by integrating a Bayesian perspective, offering probabilistic bounds over a distribution of hypotheses.
PAC-Bayes gauges how well a learner can generalize from a prior (model-dependent) to a posterior (data-dependent), thereby providing insights into the learning process that are not captured by PAC theory alone. This often results in tighter and more informative generalization bounds, especially for parameter-rich models such as neural networks [1]. (2) GNNs are inherently complex, modeling graph data with non-linear relationships. The high dimensionality and complexity of GNN models make PAC-Bayes particularly suitable for their analysis, as also indicated by [2]. In particular, PAC-Bayes lends insight into the interplay of model choice (prior) and data observation (posterior), allowing for the integration of specific model and data properties into the resultant bounds. Thanks to this, our analysis can establish a correlation between the generalization error and the maximum degree of the searched meta-path graph $\tilde{A}$ in HGDL, as detailed in Theorem 2. Traditional PAC theory, which mainly relies on an IID assumption on the underlying data, lacks the capacity to incorporate the properties of graph data into the analysis. (3) PAC-Bayes leverages KL-divergence as an information-theoretic metric, which is well-suited for measuring distances in continuous spaces, such as those encountered in our LDL modeling. Our approach to quantifying discrepancies between label distributions relies on KL-divergence, making PAC-Bayes a natural fit for our analysis. This alignment simplifies our derivation and ensures that our theoretical insights align with the empirical results. [1] Neyshabur, Behnam, Srinadh Bhojanapalli, and Nathan Srebro. "A PAC-Bayesian approach to spectrally-normalized margin bounds for neural networks." ICLR 2018. [2] Liao, Renjie, Raquel Urtasun, and Richard Zemel. "A PAC-Bayesian approach to generalization bounds for graph neural networks." arXiv 2020.
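As a minimal illustration of the KL-divergence objective for label distribution learning referenced throughout this rebuttal (our sketch, not the HGDL implementation):

```python
import numpy as np

# Minimal illustration (ours, not HGDL's code): the KL-divergence objective
# for label distribution learning. Each node carries a full label distribution
# y rather than a one-hot class, and the model's softmax output p is trained
# to match it via mean KL(y || p).

def softmax(logits):
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl_loss(y_true, logits, eps=1e-12):
    # mean_i sum_c y_ic * log(y_ic / p_ic)
    p = softmax(logits)
    return float(np.mean(np.sum(y_true * (np.log(y_true + eps) - np.log(p + eps)), axis=-1)))

y = np.array([[0.2, 0.3, 0.5]])        # a target label distribution
perfect = np.log(y)                    # softmax(log y) reproduces y exactly
wrong = np.array([[5.0, 0.0, 0.0]])    # mass concentrated on one wrong class

print(kl_loss(y, perfect))             # ~0
print(kl_loss(y, wrong))               # large
```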
Summary: This paper studies the problem of heterogeneous graph label distribution learning. To deal with it, this paper proposes an HGDL method that optimizes meta-path graph topology and aligns it with nodal features for consistent message-passing, backed by theoretical support. Experimental results on five datasets demonstrate the effectiveness of the proposed model. Strengths: S1. This paper is the first to investigate the LDL problem in heterogeneous graphs, which seems interesting. S2. HGDL effectively combines meta-path aggregation and transformer-based methods to ensure consistent node label distribution learning, validated by both theoretical analysis and empirical studies. S3. The contributions are clearly written, with codes and datasets provided for reproducibility and practical utility. Weaknesses: W1. The intuitions behind the techniques lack explanation. More details can be found in Q1 and Q2. W2. The approach section is somewhat difficult to follow and could benefit from improved presentation. W3. It would be beneficial to include baselines for the LDL problem, such as GLDL [1]. While these methods are not specifically designed for heterogeneous graphs, it is still important to treat heterogeneous graphs as homogeneous ones and apply these methods for comparison. This will provide a more comprehensive evaluation of the proposed approach. W4. This paper compares only three baselines, which are relatively dated for heterogeneous graph learning. For example, GCN was published in 2017, HAN in 2019, and SeHGNN in 2023. This makes it challenging to convincingly demonstrate the advantages of the proposed HGDL. Thus, it is essential to include more recent baselines, such as HINormer [2]. References: [1] Y. Jin, R. Gao, Y. He, and X. Zhu, “Gldl: Graph label distribution learning,” in Proceedings of the 38th Annual AAAI Conference on Artificial Intelligence, 2024. [2] Qiheng Mao, Zemin Liu, Chenghao Liu, and Jianling Sun. 
"Hinormer: Representation learning on heterogeneous information networks with graph transformer", in Proceedings of the ACM Web Conference, 2023. Technical Quality: 2 Clarity: 2 Questions for Authors: Q1. Why only search for meta-paths connecting nodes of the target type rather than aggregating information from neighbors of different types? Q2. Why can the design in Section 4.2 effectively capture global feature consistencies? Could you provide more detailed explanations? Q3. Could you provide an explanation for why HGDL performs worse on the CLD metric? Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We address the reviewer's concerns below one by one in a Q&A fashion. --- Q1: Why only search for meta-paths connecting nodes of the target type rather than aggregating information from neighbors of different types? A1: We justify our meta-path aggregation design as follows. Assume we have types T, A, and B, with T being the target node type. We assume that the reviewer was asking: why do we only consider meta-path types like T-T, T-A-T, T-A-B-A-T, etc., and not consider information aggregation via other paths such as A...A, B...B, T..A..B, etc.? We answer this question from two perspectives. First, in a heterogeneous network with different types of nodes, each node type may have its own nodal feature space. Aggregating information from different node types is a major challenge in this setting. One may think of treating all node types equally by mapping those different feature spaces onto one shared latent subspace, on which the aggregation can be carried out. However, this solution would require a very high-dimensional adjacency matrix that encodes the graph topology consisting of all types of nodes, with dense node connectivity, making it both computationally and memory demanding. In contrast, our proposed solution leverages meta-paths to yield homogeneous graphs of the target type only. This results in sparse meta-path graphs, alleviating computational overhead and preventing oversmoothed node embeddings during message-passing [1]. Second, in our LDL modeling, the main objective is to predict label distributions for target nodes. The meta-paths starting from and ending with the target nodes provide a clear model of how the information within the neighborhood substructures of those target nodes can be aggregated.
Other paths that do not involve the target node, such as ABA, BAB, etc., do not serve this goal directly, and there could be many such meta-paths, resulting in a combinatorial number of paths that would make our model overly complicated. Although exhausting all such meta-path combinations is possible in general, the homogeneous graphs obtained from them would again incur inconsistent feature spaces, e.g., consider two graphs resulting from T-A-T and A-A, whose feature dimensions equate to the dimensions of the T and A node types, respectively. Merging these graphs is still challenging. [1] Kenta Oono, & Taiji Suzuki. Graph Neural Networks Exponentially Lose Expressive Power for Node Classification. ICLR 2020. --- Q2: Why can the design in Section 4.2 effectively capture global feature consistencies? A2: In our paper, global feature consistency refers to a regularization ensuring that nodes with similar original feature inputs, albeit topologically faraway, have similar embedding vectors. To model this explicitly, we draw insights from the attention mechanism in graph transformers to construct a feature adjacency matrix $(ZQ) (ZK)^{\top} \in \mathbb{R}^{n \times n}$, as stated in Eq. (2). Intuitively, the attention scores between nodes of similar content are larger than those between nodes with unrelated contents. The Hadamard product between this feature adjacency matrix and the weighted meta-path graph $\tilde{A}$ balances the (global) feature consistency and the (local) topology consistency. We shall add more justification regarding our design in Section 4.2 in our camera-ready version. --- Q3: Could you provide an explanation for why HGDL performs worse on the CLD metric? A3: This is mainly because of the inconsistency between the KL divergence and Clark distance (CLD) measurements, with the same phenomenon observed and reported in Table 2 of [2]. Specifically, we can give a concrete example that demonstrates this inconsistency.
For instance, let y = [0.01,0.01,0.98], $\hat{y}_{1}$ = [0.05,0.05,0.9], and $\hat{y}_{2}$ = [0.03,0.07,0.9], where $\hat{y}_1$, $\hat{y}_2$ indicate the predicted results from Model 1 and Model 2, respectively. The (KL divergence, CLD) values of Model 1 and Model 2 are (0.4467, 0.0222) and (0.2528, 0.023), respectively, implying a tradeoff region. Generally, there exists a tradeoff region between Clark distance and KL divergence at small probability values. We visualize this tradeoff in Figure~2 in the uploaded PDF file. In the figure, the black and blue curves delineate the loss trends of KL divergence and CLD, respectively. We can observe that the tradeoff region of the two loss functions falls in the range [0.019, 0.1] along the x-axis. This tradeoff explains the cases where HGDL does not attain superior CLD performance but still excels in terms of KL divergence. In general, our HGDL algorithm outperforms the competitors in both KL and CLD in most cases. [2] X. Geng, "Label Distribution Learning," in IEEE Transactions on Knowledge and Data Engineering, vol. 28, no. 7, pp. 1734-1748, 2016. --- Q4: Need a comparison with GLDL. A4: We have completed the comparative study and presented the results in Table~1 of the uploaded PDF file. Please refer to A1 to Reviewer EYc1 for more details. --- Q5: Include more recent baselines such as HINormer. A5: We have carried out experiments on the DRUG and DBLP datasets using HINormer as an additional competitor. Results are presented in Table~1 in the uploaded PDF file. We only use target node features to level the comparison, which has been applied to all baselines. Due to time limitations, we searched over hidden dimensions in [32,64,128,256], $\beta$ in [0.5,1], temperature in [0.5,1], and the number of GNN layers in [2,3]. We can observe that HINormer does not outperform the other baselines. One reason might be the inconsistency of the node feature aggregation and the relation encoding scheme.
According to the original code provided by the HINormer authors, the node vectors other than those of the target nodes are set to zero. Simply replacing BCE with the KL divergence does not adapt HINormer well to LDL. We will include the results in the camera-ready version. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. Most of my concerns have been addressed; I keep my rating intact.
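The KL-vs.-CLD inconsistency discussed in A3 above can be checked numerically. Below is a minimal sketch using the standard definitions from the LDL literature (cf. Geng, 2016); the exact values depend on the log base and distance convention used, so they may differ from the numbers quoted in the rebuttal.

```python
import math

def kl_divergence(y, y_hat):
    """D_KL(y || y_hat) for discrete label distributions (natural log)."""
    return sum(p * math.log(p / q) for p, q in zip(y, y_hat) if p > 0)

def clark_distance(y, y_hat):
    """Clark distance: sqrt of summed squared relative differences."""
    return math.sqrt(sum((p - q) ** 2 / (p + q) ** 2 for p, q in zip(y, y_hat)))

y = [0.01, 0.01, 0.98]    # ground-truth label distribution
y1 = [0.05, 0.05, 0.90]   # prediction of model 1
y2 = [0.03, 0.07, 0.90]   # prediction of model 2

# The two metrics weigh small probability values very differently,
# so they need not agree on which prediction is better.
for name, pred in [("model 1", y1), ("model 2", y2)]:
    print(name, kl_divergence(y, pred), clark_distance(y, pred))
```

Because CLD normalizes each term by the summed probabilities, small-probability labels dominate it, while the KL divergence is dominated by the high-probability label, which is the source of the tradeoff region.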
Summary: This paper introduces a novel approach to Label Distribution Learning (LDL) specifically tailored for heterogeneous graphs, addressing the inherent complexities and challenges associated with this domain. By highlighting the necessity of LDL in heterogeneous settings and outlining the unique challenges involved, the paper lays a foundation for advancing research in this emerging field. The proposed method is underpinned by a robust theoretical framework, providing a coherent rationale for its design and implementation. Experimental validation across five distinct datasets and evaluation using six metrics demonstrate the method's effectiveness over established baselines. Strengths: 1. This paper is the first to identify the necessity of Label Distribution Learning (LDL) on heterogeneous graphs, addressing the unique challenges and complexities associated with this task. 2. The proposed method is well-grounded in theory, providing a solid foundation for its design and implementation. The authors present a clear and thorough theoretical framework that supports the efficacy and rationale behind their approach, enhancing the credibility and robustness of the method. 3. The effectiveness of the proposed method is demonstrated through extensive experiments on five diverse datasets and six different metrics. The results consistently show that the method outperforms the baselines, indicating its superiority and practical applicability across various scenarios and evaluation criteria. Weaknesses: 1. The paper categorizes current LDL methods into three distinct types in the related work section. However, the experimental evaluation only includes three models (GCN, HAN, and SeHGNN) with the KL-divergence loss function as baselines. This limited selection raises questions about whether the experiments sufficiently demonstrate the proposed method's superiority over the broader range of existing LDL methods.
2. The study employs γ as a hyperparameter in the proposed model but does not provide any analysis of how variations in this parameter affect the model's performance. A thorough analysis of the hyperparameter's impact would offer valuable insights into the model's sensitivity and robustness. 3. The paper does not present a detailed comparison of the computational time required for the proposed method versus the baselines. Without this information, it is unclear whether the proposed method is more efficient in terms of runtime. 4. The motivation of this paper is not very convincing. It seems that this work is done simply because LDL has not yet been applied to heterogeneous graphs. It is not clear why LDL can solve key problems in heterogeneous graphs. Technical Quality: 3 Clarity: 3 Questions for Authors: I don't have any questions. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: While the authors provide a complexity analysis and mention that scalability is not the main concern of this paper, they acknowledge that the current model may not scale well with larger datasets. Although a modification to improve scalability is suggested, it is left for future work, indicating that the current version might struggle with large-scale data. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We address the reviewer's concerns below one by one in a Q&A fashion. --- Q1: Limited comparison to the existing LDL methods. A1: We would like to kindly point out that the existing LDL methods were not tailored for handling graphs (networked data). In fact, GLDL is the only method specifically designed for label distribution learning on graph data, yet it has been restricted to homogeneous networks. We have supplemented a comparative study with GLDL on the DBLP and DRUG datasets, with results presented in Table~1 of the uploaded PDF file. The results indicate that GLDL outperforms other baselines but is still inferior to our proposed HGDL method. This is mainly because, although GLDL and HGDL both consider label distributions in their learning processes, HGDL individualizes the learning of weights associated with different meta-paths and integrates them to construct optimal graph topology homogenization. In contrast, GLDL generates only a single meta-path for LDL by heuristics, which may end up with suboptimal solutions. We will include these comparative results with GLDL and supplement a discussion in our camera-ready version. --- Q2: Sensitivity analysis on the hyperparameter $\gamma$. A2: We present a sensitivity analysis on the hyperparameter $\gamma$, which controls the impact of the second term of Eq. (3). We illustrate the trends of the KL divergence between the predicted and true label distributions w.r.t. the value of $\gamma$ on all five datasets in Figure~1. Specifically, different values of $\gamma$ have been applied, including 0, 1e-5, 1e-4, 1e-3, 1e-2, and 0.1. We observe that the lowest KL divergence occurs at various $\gamma$ choices; e.g., the optimal KL divergence occurs at $\gamma = 0.1$ on the DRUG and URBAN datasets, and at 1e-3 and 1e-5 on the YELP and DBLP datasets, respectively. These non-zero optimal values of $\gamma$ justify the design of the second term in Eq. (3).
We will include the sensitivity analysis on $\gamma$ in our camera-ready version. --- Q3: A comparison of the computational time to show the efficiency of the proposed approach against baselines. A3: We have completed a runtime comparison, as presented in Table 2 in the uploaded PDF file. Each method has been trained for 100 epochs. Note that, due to differing neural architecture designs, different models may take a varied number of training epochs to converge. Thus it has become a norm to compare runtime performance over a fixed number of epochs (or per-epoch training time), as also done in [1]. We can observe that our proposed HGDL algorithm enjoys good scalability by attaining runtime results that are comparable to the GCN baseline. Compared to other baselines such as SeHGNN$_{KL}$, HGDL is in general much faster. This indicates that our method is more efficient for learning heterogeneous networks, which can be mainly attributed to the automated learning of weights that control individual meta-paths and their integration to yield optimal graph topology homogenization for label distribution learning. We will add the runtime comparison in our camera-ready version. [1] Hao Yuan, Yajiong Liu, Yanfeng Zhang, Xin Ai, Qiange Wang, Chaoyi Chen, Yu Gu, and Ge Yu. "Comprehensive Evaluation of GNN Training Systems: A Data Management Perspective," VLDB 2024. --- Q4: The motivation of the research should be further clarified. Why can LDL solve the key problems imposed by heterogeneous graphs? A4: Indeed, our research is mainly motivated by the fact that no LDL model has been tailored for heterogeneous graphs. We note two prominent challenges imposed by network heterogeneity that motivate our research, as follows. First, in a heterogeneous graph, the label distribution of each node is influenced by neighboring nodes that can vary through multiple factors, including their node and edge types, nodal contents, and topological features.
This complicates the message-passing mechanism, because aligning those varying factors results in a combinatorial issue. Second, nodes sharing similar contents may frequently be positioned far apart in heterogeneous graphs, separated by nodes of other types, resulting in substantial topological distances between them. During message-passing, the impact of distantly positioned nodes within a graph is substantially diminished, consequently steering the LDL model to prioritize individual nodal vectors while overlooking graph topology. Our LDL research aims to resolve these heterogeneous graph challenges with two building blocks. First, we propose to use multiple weighted meta-paths in addition to the regular KL divergence loss for class distribution learning. The weights are learned, allowing each meta-path to individualize its message-passing paths without being negatively impacted by the different node and edge types. Second, HGDL uses a consistency-aware graph transformer architecture to harmonize local topology and global feature information. The graph transformer aligns nodal features with the optimal topology learned through weighted meta-paths, capturing both local neighborhood information and global feature similarities. This harmonization is crucial for ensuring that nodes with similar features, even if topologically distant, are represented in a way that reflects their content-based similarities. In addition, from a data perspective, the label distribution is often more informative than a single class scalar. Consider our motivating example in the manuscript: delineating a local urban region with a distribution over multiple city functional classes leads to a richer understanding of the composition of that particular region, such as the number of buildings for various civil purposes, including housing, healthcare, education, etc.
Examples of such applications abound, including business networks (YELP), citation networks (ACM), and protein-disease networks (DRUG), whose datasets are all studied in our experiments. --- Rebuttal Comment 1.1: Comment: Thanks for your response. I keep my score.
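To make the consistency-aware design discussed in this thread concrete, here is a toy NumPy sketch of combining a weighted meta-path adjacency with the attention-based feature adjacency $(ZQ)(ZK)^{\top}$ via a Hadamard product, as described in A2 of the rebuttal above. All shapes and names are illustrative assumptions; in the actual HGDL model, the meta-path weights and the projections are learned end-to-end.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, h, k = 6, 8, 4, 3        # toy sizes: nodes, input dim, latent dim, meta-paths

Z = rng.standard_normal((n, m))   # nodal content features
Q = rng.standard_normal((m, h))   # query projection (learned in the real model)
K = rng.standard_normal((m, h))   # key projection (learned in the real model)

# Weighted combination of per-meta-path adjacency matrices; theta plays the
# role of a row of the learned weight matrix Theta, fixed here for illustration.
A_paths = rng.random((k, n, n))
theta = np.array([0.6, 0.3, 0.1])
A_tilde = np.einsum("k,kij->ij", theta, A_paths)

# Feature adjacency from attention scores: (ZQ)(ZK)^T, an n x n matrix.
S = (Z @ Q) @ (Z @ K).T

# Hadamard product balances global feature consistency and local topology.
A_eff = S * A_tilde
print(A_eff.shape)
```

Under this sketch, an entry of `A_eff` is large only when two nodes are both topologically connected through weighted meta-paths and similar in content, which is the harmonization the rebuttal describes.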
Summary: This paper advances Label Distribution Learning (LDL) into the realm of graph domains, specifically addressing the heterogeneous graph label distribution learning (HGDL) problem. The authors highlight that graph heterogeneity, reflected in node types, node attributes, and neighborhood structures, poses significant challenges for generalizing LDL to graphs. To tackle these challenges, the authors propose a new learning framework with two key components: 1. Proactive Graph Topology Homogenization: This component focuses on learning optimal information aggregation between meta-paths to address node heterogeneity before the embedding learning phase. 2. Topology and Content Consistency-aware Graph Transformer: This component uses an attention mechanism to learn the consistency between meta-paths and node attributes, ensuring that network topology and nodal attributes are equally emphasized during label distribution learning. Strengths: 1. The introduction of a framework that proactively addresses graph heterogeneity and incorporates consistency-aware mechanisms is a novel contribution to the field. 2. The use of KL-divergence and additional constraints in an end-to-end learning process provides a robust solution for label distribution learning on graphs. 3. The theoretical and experimental validations are thorough, and the availability of code and datasets promotes transparency and reproducibility. Weaknesses: 1. The paper lacks an analysis of the algorithm's complexity, and the framework diagram in Figure 2 is somewhat cluttered. 2. The authors' motivation is to address the challenges posed by graph heterogeneity, but these challenges are not clearly described in the introduction. 3. For optimization formula (3), the authors need to provide a detailed explanation of the advantages of this design. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We address the reviewer's concerns below one by one in a Q&A fashion. --- Q1: The paper lacks an analysis of the algorithm's complexity. A1: We have conducted the complexity analysis, which has been deferred to Appendix H.3 due to page limits. We briefly summarize the results of our analysis for the reviewer's convenience. Specifically, our method involves learning a new adjacency matrix from all meta-paths. Without considering any decomposition methods, the number of learnable parameters has to be at least $\mathcal{O}(n)$ for the graph topology homogenization stage, where $n$ denotes the number of nodes. With the attention mechanism, our proposed architecture entails $knf+k^{2}f$ parameters, where $k$ is the number of meta-paths used and $f$ is the dimension of the latent space. Omitting constants, this amounts to $\mathcal{O}(n)$ parameters to be learned. To prepare the feature adjacency matrix that mitigates feature-topology inconsistency, $h^2 + mh$ learnable parameters are required, where $h$ and $m$ represent the dimensions of the node embedding and the input nodal content, respectively. This reduces to a constant $\mathcal{O}(1)$ by screening out the factors $h$ and $m$, which are negligible compared to $n$. Overall, the number of learnable parameters of our proposed model is $\mathcal{O}(n)$, which scales linearly with the total number of nodes. However, we note that such linearity is not necessary if the message-passing at each iteration is conducted among a subset of nodes instead of all nodes, such as through sampling as done by GraphSAGE. As such, we can further reduce the computational complexity without being bottlenecked by sizeable graphs with large numbers of nodes. --- Q2: Figure 2 is somewhat cluttered. A2: We have improved the visual clarity of Figure 2 and included it in the PDF file as "Figure 3".
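The parameter counts in A1 can be tallied in a few lines. The helper below is hypothetical (not part of the released code) and simply reproduces the arithmetic from the complexity analysis to illustrate the linear growth in $n$.

```python
def hgdl_param_count(n, k, f, h, m):
    """Learnable-parameter tally following the summary of Appendix H.3:
    topology homogenization contributes k*n*f + k^2*f (O(n) in the node count),
    while the feature adjacency contributes h^2 + m*h (O(1) w.r.t. n)."""
    homogenization = k * n * f + k * k * f
    feature_adjacency = h * h + m * h
    return homogenization + feature_adjacency

# Doubling n roughly doubles the count, illustrating linear growth in n;
# all concrete sizes below are made-up toy values.
small = hgdl_param_count(n=1_000, k=3, f=16, h=64, m=128)
large = hgdl_param_count(n=2_000, k=3, f=16, h=64, m=128)
print(small, large)
```

The difference `large - small` is exactly `k * f * 1_000`, i.e., only the homogenization stage grows with the node count, matching the analysis.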
--- Q3: The challenges posed by graph heterogeneity have not been clearly described in the introduction. A3: We will enrich the two challenges using the motivating example shown in Figure 1 in the manuscript. Specifically, we will correlate the challenges to the URBAN dataset as follows. Challenge (1): Graph heterogeneity complicates the message-passing between nodes of a specific type, as the label distributions of those nodes are influenced by neighboring nodes that may vary in terms of types, content, and topological features. For example, R3 and R4 are two immediately neighboring residence regions but are connected to different functional areas, i.e., R3 connects to Leisure and Service areas whereas R4 connects to a Service region only. Thus, the urban functionality (i.e., label distribution) of R3 and R4 eventually differs. Challenge (2): Graph topology and nodal features may suggest inconsistent label distributions, where nodes sharing similar contents are positioned far apart and separated by various nodes of different types on the graph. Unlike traditional LDL, which focuses on instance vectors only, an effective LDL model on graphs requires harmonizing nodal contents with topological structures for a unified representation. For example, R2 is topologically farther away from R3 than R4 is; however, comparing their nodal contents, local neighborhood structures, and functionality distributions, R2 is more similar to R3 than R4 is. We will add this motivating example in our camera-ready version. --- Q4: For Eq. (3), please explain the advantages of this objective design. A4: The rationale for our design is as follows. The optimization objective in Eq. (3) has two KL divergence terms. (1) The first term minimizes $D_{\text{KL}}(Y \| \hat{Y})$, enforcing the predicted label distributions to be similar to the ground-truth distributions (if they exist).
(2) The second term maximizes the distance between the weight distribution over all meta-paths $\Theta[i, :]$ and a uniform distribution $U[1, k]$. The intuition is to encourage diversity in the learned weights associated with different meta-paths. The reason is that the KL divergence naturally favors uniform distributions over small values, encouraging all weights to be learned similarly. The second term counters this: an ill-defined, noisy meta-path will have its negative impact on model performance diminished, whereas meta-paths resulting in better performance will be promoted. This ensures that the important meta-path information dominates the resultant adjacency matrix, leading to optimal meta-path topology harmonization. In addition to these intuitions, we present a sensitivity analysis on the hyperparameter $\gamma$ as empirical evidence. $\gamma$ controls the impact of the second term of Eq. (3). We illustrate the trends of the KL divergence between the predicted and true label distributions w.r.t. the value of $\gamma$ on all five datasets in Figure~1. Specifically, different values of $\gamma$ have been applied, including 0, 1e-5, 1e-4, 1e-3, 1e-2, and 0.1. We observe that the lowest KL divergence occurs at various $\gamma$ choices; e.g., the optimal KL divergence occurs at $\gamma = 0.1$ on the DRUG and URBAN datasets, and at $\gamma =$ 1e-3 and 1e-5 on the YELP and DBLP datasets, respectively. These non-zero optimal values of $\gamma$ justify the design of the second term in Eq. (3). We will supplement the intuition behind the design of Eq. (3) and include the sensitivity analysis on $\gamma$ in our camera-ready version. --- Rebuttal Comment 1.1: Comment: Thank you for your reply, which solved most of my concerns, and I keep my score.
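A minimal sketch of the two-term objective described in A4 above, under the assumption that Eq. (3) takes the form $D_{\text{KL}}(Y \| \hat{Y}) - \gamma \sum_i D_{\text{KL}}(\Theta[i,:] \| U[1,k])$; the exact parameterization in the paper may differ.

```python
import math

def kl(p, q):
    """D_KL(p || q) for discrete distributions (natural log)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def objective(Y, Y_hat, Theta, gamma):
    """First term fits predictions to ground-truth label distributions; the
    second term is *subtracted*, so maximizing the KL divergence to the uniform
    distribution encourages diverse (non-uniform) meta-path weights."""
    k = len(Theta[0])
    uniform = [1.0 / k] * k
    fit = sum(kl(y, y_hat) for y, y_hat in zip(Y, Y_hat))
    diversity = sum(kl(row, uniform) for row in Theta)
    return fit - gamma * diversity

Y = [[0.2, 0.3, 0.5]]                  # toy ground-truth label distribution
Y_hat = [[0.25, 0.25, 0.5]]            # toy prediction
uniform_weights = [[1 / 3, 1 / 3, 1 / 3]]
peaked_weights = [[0.8, 0.1, 0.1]]

# With gamma > 0, diverse meta-path weights lower the objective.
assert objective(Y, Y_hat, peaked_weights, 0.1) < objective(Y, Y_hat, uniform_weights, 0.1)
```

This matches the intuition in the rebuttal: for uniform weights the diversity term is zero, while peaked weights are rewarded, countering the tendency of the first KL term to flatten the meta-path weights.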
Rebuttal 1: Rebuttal: We thank the reviewers for their positive and constructive comments. Here, we summarize the major concerns and our responses. (1) Add GLDL and HINormer as new rival models (suggested by Reviewers EYc1 and pXs2). - We have added new comparative results with the two models, presented in Table 1 in the uploaded PDF file. From the results, we can observe that our HGDL model still outperforms GLDL and HINormer on our benchmark datasets. (2) Add a sensitivity analysis on the hyperparameter $\gamma$ (suggested by Reviewers FJkr and EYc1). - We have conducted this sensitivity analysis, with results illustrated in Figure~1 in the uploaded PDF file. We note that the optimal choice of $\gamma$ varies across different datasets, which substantiates the impact of the regularization term in the overall objective design. (3) Add a runtime comparison to evaluate algorithm efficiency (suggested by Reviewers FJkr and EYc1). - We have compared the runtime performance of our algorithm and the compared methods, with results documented in Table~2 in the uploaded PDF file. The results show that the efficiency of our HGDL approach is on a par with the GCN and HAN methods and is faster than graph-transformer-based models such as SeHGNN. Below, we provide a rebuttal to each individual reviewer to address their concerns in a Q&A fashion. If there are any points that require further clarification or additional information to facilitate the decision-making process, please do not hesitate to let us know. We are more than willing to provide further explanations or details. Pdf: /pdf/166f1cec98053955da09be99af843bc90a8c9891.pdf
NeurIPS_2024_submissions_huggingface
2024
Cost-aware Bayesian Optimization via the Pandora's Box Gittins Index
Accept (poster)
Summary: The paper proposes the Pandora's Box Gittins Index, a cost-aware acquisition function. Moreover, it questions the behavior of previous cost-aware acquisition functions and motivates the behavior of PBGI through connections to the Pandora's Box problem and the behavior under different, uniform costs. Theoretical results are presented to support the claims that are made. Results under both uniform and cost-varying setups display strong performance. Strengths: Firstly, I am generally positive towards the paper. However, I believe the presentation is in part a substantial strength of the paper, but also its biggest drawback (see "Why does it work well?" and "Constant Cost Experiments"). I will provide the specifics under each section. __Presentation__: The connection to the Pandora's Box problem is handled very nicely. As someone unfamiliar with the problem, the introduction to it, the connection to BO, and the resulting algorithms were clear, pedagogical, and had the right level of detail. I appreciate the effort on the authors' end to do this, specifically in 3.1 and 3.2. __Relevant Problem(s)__: Both cost-aware and budgeted BO are highly relevant and practical problems that are commonly treated as distinct. An acquisition function which accounts for both is enticing. Moreover, the obvious flaws in existing methods enhance the relevance further. __Thorough Experimental Evaluation__: Experimental results are provided on a wide range of tasks, from highly controlled to real-world. The benchmarked algorithms appear to be the relevant ones. Some experiments would do well with additional replications to reduce the error bar widths, but this is a _very_ minor remark. __Theoretical Results__: Meaningful theoretical results on the behavior of the algorithm are presented. __Visuals__: The plots are mostly very well made. Weaknesses: __Why does the algorithm work well?__ This is mostly related to 3.3, and the results section.
After reading up until 3.3, I believe I understand what I need to know up until that point. However, I am still left wondering how the algorithm will behave, and why this is beneficial (specifically for fixed cost). PBGI appears to be a more exploratory variant of EI (which I believe is desperately needed, but only from anecdotal evidence). It should be visualized more precisely what PBGI will do compared to EI for a simple, either 1D or 2D, example to see the differences - ideally under different costs or budgets. __Constant Cost Experiments__: If we are effectively tuning the exploratory behavior of EI or UCB (the latter is perfectly tunable as-is) by making up a constant cost objective on Ackley or Levy, what is there to learn? Are we saying that EI should intrinsically be more exploratory? Can we equally well find costs that are highly detrimental to PBGI? Should we only search locally if costs are consistently high? Do we get drastically different behavior if we multiply all Ackley output by 100, and if so, why is this sensible? __Acquisition Computation Details__: The acquisition computation requires a solve for $\alpha^*$ involving the cost and EI. This seems like a rather tedious exercise, so is it computationally expensive? 3.3 provides explanation, but it would be nice to see empirical evaluation on this, too. Moreover, can gradients still be computed through autodiff? How does this affect acquisition optimization? Since there is no longer a simple forward pass, this should be expanded on. __Normalization of cost__: How are costs and objective value normalized in PBGI? Ackley ranges from 0 to 24 (if in the -32 to 32 range) whereas Rosenbrock ranges from 0 to $10^6$, roughly. Does that mean that PBGI is vastly more exploratory for Rosenbrock, since the cost would be way lower relative to the objective? Since I found this to (unfortunately) be rather important and not clear, I am putting it as a weakness.
With that said, I believe these weaknesses are what separates a currently good paper from a very good one. Technical Quality: 4 Clarity: 2 Questions for Authors: __Details on cost__: For unknown costs, the cost is modeled as a GP. Is the mean cost used for acquisition computation, and why do the authors consider the log cost? While I am aware that $\log c$ is convention, I would have thought that the authors' framework did not need transformations of the metrics to work well. Why is log cost the right metric to consider? __Lack of normalization__: The examples that are considered in the plots do not normalize the input space. Is this meaningful for the behavior of the algorithm? Confidence: 5 Soundness: 4 Presentation: 2 Contribution: 4 Limitations: Limitations have been properly addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Strengths: > [...] > Presentation: [...] > Relevant Problem(s): [...] > Thorough Experimental Evaluation: [...] > Theoretical Results: [...] > Visuals: [...] Thank you for the kind feedback - we're glad you appreciated the paper! > Weaknesses: > Why does the algorithm work well? [...] wondering how the algorithm will behave, and why this is beneficial [...] It should be visualized more precisely what PBGI will do [...] This is a great idea! **Please see Figure A of the PDF attached to the main response, which includes an additional visual** that we will add to the appendix in the next draft. > Constant Cost Experiments: [...] what is there to learn? [...] EI should intrinsically be more exploratory? Can we [...] find costs [...] detrimental to PBGI? [...] Should we only local search [...]? [...] behavior if we multiply all Ackley output by 100 [...]? We largely agree, but the devil is in the details. In our view, a good way to think about this is via a "needles-in-haystacks" metaphor: some haystacks might have better needles than others, but **exploring a large haystack is only worth it if one expects to actually find the needle in it before running out of budget**. In the discrete expected-budget-constrained problem, Proposition 2 tells us that tuning a risk-seeking-vs.-risk-averse parameter is necessary to handle this tradeoff optimally for each budget, and EI includes no such parameter. In the correlated case, we expect such tradeoffs to loosely map to local-vs.-global tradeoffs, but we believe mainly that **it is the "haystack-size" that matters, not locality** in the sense of whether that haystack comes from a nearby or far-away location. We have **not yet been able to find costs that are detrimental to PBGI**, though some objective functions - such as the unimodal Rosenbrock objective - can be detrimental.
We think this is because other acquisition functions' behavior - which otherwise serves as a weakness - counterbalances model mismatch in this specific case. Since **multiplying outputs is equivalent to scaling costs and normalizing**, and our examples do use normalization (see below), we expect this behavior to be **comparable to that shown in Figure 3** which studies the effect of varying $\lambda$: we expect large-$\lambda$ values to reach better objective values faster at first, but eventually lose out to small-$\lambda$ values which start off slow but eventually find a better location. > Acquisition Computation Details: [...] is it computationally expensive? [...] can gradients still be computed through autodiff? [...] Since there is no longer a simple forward pass, this should be expanded on. This involves solving a one-dimensional convex optimization problem: we do so very efficiently using bisection search. Moreover, Appendix A2 shows that **no additional optimization is needed to compute gradients using autodiff - one forward pass is enough**, in contrast with other situations. Finally, **Figure 9 in Appendix C provides a runtime comparison** which shows that the solution is a little bit more expensive than EI, but significantly less expensive than non-myopic and other more-computationally-expensive baselines. > Normalization of cost: [...] This is an extremely important question: we accidentally omitted certain details regarding normalization from our appendix, which we have now added. In short, **we normalize our inputs and outputs using standard practices** in all cases where this makes sense. Specifically: * We **always normalize input values to be in $[0,1]^d$**, except for the illustration of the EIPC counterexample in Figure 2, where this is omitted in order to present easier-to-visualize numbers. There, normalization is equivalent to changing the length scale, so this change does not affect algorithmic behavior.
* We **normalize output values** in all cases except where the objective functions are sampled from the prior (Bayesian regret), for exactly the reasons you mentioned. For synthetic benchmark functions specifically, we scale the Ackley, Levy, and Rosenbrock functions by constants given in the script `synthetic.py`. This results in **objective values roughly in $[0,10]$** which matches the prior's scale in most regions of space, except a small volume of outlier-regions the algorithms learn to avoid. * For Bayesian regret, we omit normalization because **this particular setting by definition has no normalization-mismatch**, as the prior and objective come from the same distribution, with the same kernel, amplitude and length scale parameters. * To empirically test the effect of scaling on Rosenbrock, where our scaling constant was perhaps not ideally chosen, we **include an additional plot in Figure B attached to the PDF of our main referee response**. These show slight improvement for PBGI and PBGI-D, with the latter now competitive with the best baselines. * Very regrettably, we accidentally forgot to document normalization details in the submission manuscript draft, though it can be seen in our code. We have **added the normalization information in full detail in an updated appendix** in the current manuscript draft. > Questions: > Details on cost: [...] why do the authors consider the log cost? [...] We consider log-costs because **this ensures that costs are positive**. If negative costs are allowed, they can create undesired behavior where the algorithm repeatedly evaluates a location with negative costs over and over again, because the costs are effectively interpreted as a deterministic reward. > Lack of normalization: [...] plots do not normalize the input space. [...] 
This is a critically important question: we **do normalize the input space to be $[0,1]^d$** in all cases where it makes sense, which can be seen in the code but was accidentally omitted from the manuscript draft's appendix. This has been explained in detail in the current version. Please see the preceding answer with bullet points for more details on normalization. --- Rebuttal 2: Title: Response to Rebuttal Comment: Thanks to the authors for their effort in producing a nice rebuttal. Having read the other reviewers' reviews, I feel more confident in my own. The algorithm is novel, addresses an important niche, and is well-supported both theoretically and empirically. Moreover, it is carefully presented. While there is certainly room for relevant follow-up (i.e. setting $\lambda$, noise treatment), that in itself suggests that the paper can be impactful. Moreover, I am personally curious about its application in a multi-fidelity context. ____________ ## Why does the algorithm work well? Thank you, this is exactly the type of plot I think the paper needs. To me, it explains what the purpose of the algorithm is and, combined with the haystack argument, captures the essence of it nicely. On this topic, it would seem like PBGI achieves a very similar trade-off to the rarely used improvement threshold in EI, e.g. https://automl.github.io/SMAC3/main/_modules/smac/acquisition/function/expected_improvement.html#EI (see the xi hyperparameter). Are the authors aware of any similarities between the two? __Note:__ I do _not_ think this restricts the novelty of the method. ## Normalization and Constant Cost Thanks for the clarification. I still do not perfectly grasp this, as values in [0, 10] are not exactly conventional, either (rather, standardizing the current set of observations is). As such, it does still seem to me like the algorithm is dependent on the output range of the function, and perhaps this is unavoidable in the cost-aware setting.
______ I strongly feel that this paper should be accepted. As such, I will happily vouch for it. --- Rebuttal 3: Comment: We sincerely appreciate your thoughtful and encouraging feedback! We are delighted to hear that you find our method novel and appreciate your interest in potential follow-up work! Below, we aim to clarify further the points raised in your review. Thank you once again for your valuable feedback and support. # Similarities between PBGI and the rarely-used EI Thanks for bringing up this point. The $\xi$ parameter in the variant of EI you mention is different from but related to the Gittins index. In particular, for the uniform-cost maximization problem, **the Gittins index should be [the value of $\xi$ such that the threshold-adjusted EI equals $\lambda$] - [the current best observed value]**. For the minimization problem which the EI in your reference link is used for, the minus sign should be replaced with a plus sign. However, **$\xi$ and $\lambda$ exhibit similarity in monotonicity when controlling the exploration vs exploitation trade-off** -- both larger $\xi$ and larger $\lambda$ imply higher exploitation. This can be illustrated using a toy Pandora's Box example involving a choice between a closed box and an open box with the current best-observed value inside: as $\xi$ increases, you are more likely to take the open box, which is a sign of exploitation. Similarly, as $\lambda$ (representing the cost to open the closed box in this setting) increases, you are also more likely to take the open box. **Note that this similarity is only qualitative, not quantitative!** # Normalization and Constant Cost You are correct that our algorithm relies on the output range of the objective function when the costs are not scaled. To clarify, we normalize the outputs not merely to adhere to standard practice, but primarily for two reasons. 
First, theoretically, for a fixed GP prior, a fixed budget, and fixed costs, the best value of $\lambda$ should be consistent across problems, assuming a well-matched prior. Second, in practice, normalization allows us to treat different problems, such as Rosenbrock and Ackley, which may have varied output ranges, as if they are drawn from the same fixed GP prior. This approach enables us to apply the same $\lambda$ value across different problems: normalization makes using such a value at least reasonable. The above reasoning also applies to the uniform-cost setting. --- Rebuttal Comment 3.1: Title: Acknowledgement Comment: Thanks to the authors for their further response. The responses make sense to me; I appreciate the clarification.
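The threshold-adjusted-EI characterization of the Gittins index discussed in this thread, i.e. the threshold $g$ at which the expected improvement over $g$ equals $\lambda c(x)$, lends itself to a short numerical sketch. The following is a minimal illustration (not the authors' implementation), assuming Gaussian posterior marginals and solving for $g$ by bisection, which works because expected improvement is strictly decreasing in the threshold:

```python
import math

def ei(mu, sigma, g):
    """Expected improvement of N(mu, sigma^2) over a threshold g:
    E[max(f - g, 0)] = sigma * (z * Phi(z) + phi(z)),  z = (mu - g) / sigma."""
    z = (mu - g) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return sigma * (z * Phi + phi)

def gittins_index(mu, sigma, lam, cost, tol=1e-10):
    """Root of ei(mu, sigma, g) = lam * cost in g, found by bisection;
    ei is strictly decreasing in g, so the root is unique."""
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    while ei(mu, sigma, lo) < lam * cost:  # widen the bracket if needed
        lo -= 10.0 * sigma
    while ei(mu, sigma, hi) > lam * cost:
        hi += 10.0 * sigma
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ei(mu, sigma, mid) > lam * cost:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Consistent with the monotonicity discussed above, shrinking $\lambda$ (or the cost) raises the index, and raises it more for high-variance points, which corresponds to more exploration.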
Summary: This paper draws connections between cost-aware Bayesian Optimization (BO) and a problem from the economics literature, called the Pandora's Box problem. Based on these connections, the paper proposes a new cost-aware acquisition function, called Pandora's Box Gittins Index (PBGI). Numerical experiments show that PBGI achieves competitive performance against usual acquisition functions. Strengths: Viewing cost-aware BO under the lens of the Pandora's box problem is interesting. This work suggests that promising solutions might lie at the intersection between economics and black-box optimization. The paper is also well-written. Weaknesses: **Impact of the lengthscale $\kappa$ (1)**. In the experiments, the hyperparameter $\kappa$ is fixed arbitrarily. Although an ablation study about $\kappa$ is given in Appendix C, it seems unlikely that a GP with an arbitrary lengthscale will properly fit to the observed data. Because it is difficult to compare the performance of optimization algorithms when they use models that are unfit to the observed data, I think this weakens the experimental part of the paper. **Impact of the lengthscale $\kappa$ (2)**. Furthermore, as far as I understand the paper, the performance of PBGI seems to heavily rely on a trade-off between the magnitude of $\kappa$ and the dimensionality of the function domain. Roughly speaking, PBGI should also have good performance on low-dimensional spaces if $\kappa$ is low enough. Conversely, it should perform poorly on high-dimensional spaces if $\kappa$ is large enough. Studying the relation between $\kappa$ and $d$ would strengthen the contribution. **Calibration of $\lambda$ and $\beta$**. In the numerical experiments, $\lambda$ and $\beta$ are also set arbitrarily. I understand that calibrating them is less obvious than calibrating $\kappa$, but it is nevertheless a crucial part of the PBGI. As far as I know, the paper does not provide a principled way to calibrate these parameters. 
I believe providing one would strengthen the contribution. **Noisy setting**. The experiments seem to have been conducted in a noiseless setting. However, in zeroth-order optimization tasks, objective black-boxes often cannot be observed without an observational noise. Therefore, it would have been interesting to evaluate PBGI's performance within a noisy setting. The results, including the ranking of the acquisition functions, may differ. **Notation**. The notation for the posterior distribution $f | y_1, \cdots, y_t$ is incorrect and should be replaced by $f | \mathcal{D}$, with $\mathcal{D} = \\{(x_1, y_1), \cdots, (x_t, y_t)\\}$. Technical Quality: 2 Clarity: 3 Questions for Authors: Here are some questions to spark the discussion with the authors. My reservations are mainly about the experimental part of the paper, I will be happy to increase my score if these are addressed during the discussion period. (1) Why not use MLE to find the appropriate lengthscale for each experiment? (2) Have you studied the relation between $\kappa$ and the dimensionality of the function domain $d$ w.r.t. PBGI performance? (3) Do you have any insights on how to calibrate $\lambda$ and $\beta$? (4) Do you consider noise in your experiments? If not, do you have any insights on the behavior of PBGI in a noisy setting? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes they did, and I believe the discussion period will shed even more light on the limitations of PBGI. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Strengths: > [...] interesting. [...] > [...] well-written. Thank you very much for your review! We are delighted that key strengths - specifically, **novelty, via a brand-new technical perspective on Bayesian optimization** - are recognized. We would like to draw your attention to these points: **we think our strengths have significant merit**, and that they may outweigh some of the weaknesses and points of improvement discussed below. In light of a certain unfortunate typo, we also believe that our manuscript already addresses at least some of your concerns about the length scale mismatch (see below), though we will make additional effort to make this clearer in the next draft. On these bases, we would like to gently ask you to reconsider your score. > Weaknesses: > Impact of the lengthscale $\kappa$ (1). In the experiments, the hyperparameter $\kappa$ is fixed arbitrarily. Although an ablation study about $\kappa$ is given in Appendix C, it seems unlikely that a GP with an arbitrary lengthscale will properly fit to the observed data. [...] There was a typo in the paper on Line 245: please let us apologize and clarify how our experiments treat length scales: * In the Bayesian regret experiments (Sec. 4.1), **the objective function is drawn from the prior, which has a fixed length scale,** and all of the algorithms use a kernel with this same lengthscale. * In other experiments, all kernel hyperparameters - including length scales - are **learned using max marginal likelihood**. We believe that this concern is primarily a result of this typo. In both types of experiments, each algorithm's GP either **has the correct lengthscale from the beginning or learns the correct lengthscale from data**. > Impact of the lengthscale $\kappa$ (2). [...] relation between $\kappa$ and $d$ [...] This is a good point: what seems to matter is the "number of effective boxes" induced by $\kappa$, $d$, and the volume of $X$. 
We include results on this in the appendix. We suspect the best way to get a more precise picture would be to prove a suitable regret bound. While we are **extremely interested in studying this question**, doing so is likely to **involve enough new theoretical machinery for a whole separate paper**, so we prefer to defer it to a follow-up work. > Calibration of $\lambda$ and $\beta$. [...] We agree: studying how to set these parameters automatically is a great research question! In principle, proving a regret bound and setting these parameters to minimize the bound could provide for an avenue by which to do so. For the same reasons as in the prior answer, we believe this would make for **a great follow-up to our work**. > Noisy setting. [...] The primary reason we don't study this is that **in the presence of noise, the analog of the Pandora's Box model and Gittins index machinery is significantly more complicated**. One needs to create a "noisy Pandora's box" model, but this requires one to potentially inspect a noisy Pandora's box multiple times to better estimate its value. There is Gittins index machinery that can handle this type of multi-step inspection in principle, but computation becomes significantly harder because one will not obtain an analytic, expected-improvement-like optimization objective. One must also consider how to translate to the Bayesian optimization setting, for instance by inspecting many nearby points instead of reinspecting the same point many times. We believe this is one of several interesting potential extensions that our work opens the door to. > Notation. [...] Thank you for drawing our attention to this point. 
While it is reasonably-standard in the Gaussian process literature to omit dependence on $x$-values (for example, the paper *Efficiently Sampling Functions from Gaussian Process Posteriors*, Wilson et al., ICML 2020, also does this), in retrospect we agree that it could be made clearer and will examine this in the next manuscript draft. > Questions: > [...] > (1) Why not use MLE to find the appropriate lengthscale [...] **We do this:** our synthetic and empirical experiments include length scale optimization. We apologize again for the typo which incorrectly stated this was absent. > (2) [...] relation between $\kappa$ and the dimensionality $d$ [...] w.r.t. PBGI performance? **In short: yes, our Apdx. C5 and C6** study this, but developing a more principled evaluation would make for a well-motivated follow-up paper. Please see the preceding answers on length scales. > (3) [...] how to calibrate $\lambda$ and $\beta$? **We deliberately did not tune these parameters for each problem in order to make our claims stronger: we get good performance even if they are not set optimally**. Theorem 2 and the right-hand-side plot in Figure 3 tell us that $\lambda$ should be tuned to match the expected budget: a natural way to do so is to (a) run the algorithm, (b) look at whether the regret first quickly goes down, then levels off too soon, and if so, (c) make $\lambda$ smaller. Our PBGI-D variant is a first attempt at doing this automatically, though it introduces the problem of calibrating $\beta$. We believe there is more to explore regarding calibrating the parameters, e.g. through a theoretical regret analysis, but doing so is beyond the scope of this work. > (4) Do you consider noise [...] No, our experiments are all noise-free. 
As we discussed in a previous answer, we have ideas about how one could create a Pandora's box model and Gittins index that handles noise, but it would require enough novel technical development that it could well be a follow-up paper in its own right. --- Rebuttal Comment 1.1: Title: Rebuttal Ack Comment: Thank you for the detailed rebuttal and interesting discussions. I'm glad to see the lengthscales are optimized with MLE. I am positive about the paper, but I still believe finding an automatic way to calibrate $\lambda$ and $\beta$ would strengthen the contribution. Furthermore, taking noise into account is quite important as many (most) applications of BO do involve a noisy objective function. I've increased my score to 5. --- Rebuttal 2: Comment: # Automatically Tuning Lambda Thank you very much for your response! In our response, we had originally included an idea for how to automatically tune $\lambda$ in a setting where we can run Bayesian optimization multiple times, but this got cut due to rebuttal space limitations. We expand on this below. Our automatic tuning idea below **applies in settings where evaluating the true objective function is the key computational bottleneck** (for example, running a large physics simulation, training a large neural network). In this setting, it is (a) important to choose $\lambda$ well to ensure high-quality acquisitions, and (b) worthwhile to spend some degree of computation on choosing $\lambda$. The key to our proposal comes from Figure 3 (which is an 8-dimensional Bayesian regret experiment), from which we can make two observations: * **As one varies $\lambda$, PBGI's regret curve varies in a qualitatively predictable way.** Specifically, large-$\lambda$ regret curves first drop sharply, then flatten out, whereas small-$\lambda$ regret curves drop less-sharply, but eventually reach a better value. 
* **Testing a relatively small number of $\lambda$ values is sufficient to find a good choice.** Figure 3 uses a small number of geometrically spread $\lambda$ values, namely $10^{-n}$ for $n \in $ {$2, 3, 4, 5$}. Motivated by this - and by theoretical considerations given below - we propose the following tuning scheme: every few iterations, **run a Bayesian regret experiment comparing PBGI across a range of geometrically spread $\lambda$ values, then set $\lambda$ to the one with best performance at the desired budget**. For the prior GP in the Bayesian regret experiments, use the current posterior GP from the "real" Bayesian optimization loop, and draw the objective function for the experiment from that same GP. For an expensive-enough true objective function, running such an experiment can be computationally viable, and it outputs a $\lambda$ value that is very likely to be good, if not optimal, for the real experiment. **This approach is theoretically principled**, in the following sense. One of the lemmas used in our analysis of Theorem 2 shows that **the optimal $\lambda$-value in Pandora's Box satisfies an optimization problem whose gradient relates the Gittins index policy's expected costs with the budget.** Setting this gradient to zero, we obtain an equation which balances the two terms, which the scheme above directly targets. Furthermore, **using PBGI-D with initial value set to the tuned $\lambda$-value for the real selections provides an additional layer of safety against getting stuck with a too-large $\lambda$-value**. When PBGI-D decides to decrease $\lambda$, one could use that as a signal that another Bayesian regret experiment to tune $\lambda$ is warranted: this may be more efficient than simply running the Bayesian regret experiments every few iterations. In summary: **the above idea gives a clean and principled way to automatically tune $\lambda$**. 
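For concreteness, the proposed sweep can be sketched with a self-contained toy simulation (an illustration under simplifying assumptions, not the authors' code): boxes are independent Gaussians standing in for the posterior, the policy scores unopened boxes with a closed-form small-$\lambda$ approximation of the Gittins index, $\mu + \sigma\sqrt{2\log(\sigma/(\lambda c))}$, and the sweep picks whichever geometrically spread $\lambda$ attains the best average value at the budget:

```python
import math
import random

def approx_index(mu, sigma, lam, cost=1.0):
    # Small-lambda closed-form approximation of the Gittins index
    # (clipped at mu when the log argument would be non-positive).
    arg = sigma / (lam * cost)
    return mu + sigma * math.sqrt(2.0 * math.log(arg)) if arg > 1.0 else mu

def run_policy(mus, sigmas, lam, budget, rng):
    # Draw one synthetic objective value per box, then repeatedly open
    # the unopened box with the highest (approximate) index.
    truth = [rng.gauss(m, s) for m, s in zip(mus, sigmas)]
    unopened = set(range(len(mus)))
    best = -math.inf
    for _ in range(min(budget, len(mus))):
        i = max(unopened, key=lambda j: approx_index(mus[j], sigmas[j], lam))
        unopened.remove(i)
        best = max(best, truth[i])
    return best

def tune_lambda(mus, sigmas, budget, lams=(1e-2, 1e-3, 1e-4, 1e-5),
                n_rep=200, seed=0):
    # Sweep geometrically spread lambda values; each value sees the same
    # simulated objectives, and the best average final value wins.
    scores = {lam: 0.0 for lam in lams}
    for rep in range(n_rep):
        for lam in lams:
            rng = random.Random(seed * 100003 + rep)  # shared draws per rep
            scores[lam] += run_policy(mus, sigmas, lam, budget, rng)
    return max(scores, key=scores.get)
```

In a real Bayesian optimization loop, `mus` and `sigmas` would come from the current posterior GP and the policy would be the full PBGI acquisition; the toy version only illustrates the structure of the sweep: simulate, score each $\lambda$ at the budget, and keep the winner.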
One can also apply a similar Bayesian-regret-based scheme to automatically tune $\beta$, but **our empirical results reveal that performance is not very sensitive to the $\beta$ parameter**. Thank you for bringing this point back to our attention! We will add these ideas to the next manuscript draft. # Noisy Observations Let us also expand on what happens in the noisy observation case, to provide **justification for why this should be out of scope of the current work**. The optimality claim in Pandora's Box arises by comparing one closed box with one open box. In the case where we have observations with noise, including the ability to repeat observations to reduce the effect of noise, the one-closed-one-open box model now consists of a **Nested Pandora's Box** - a Pandora's Box which contains another Pandora's Box inside of it, repeated recursively. Modulo certain technical details, we think that one could show using abstract Gittins-index-theoretic machinery that (in the discrete case) this setup also admits an optimal Gittins index policy. The key technical question becomes: **what is the objective of the convex optimization problem defining the Gittins index policy?** In the Pandora's Box case, this is analytic, given by expected improvement. In the nested analog of Pandora's Box, **we cannot expect an analytic formula, and would need to rely on novel numerical methods yet to be developed** to compute the respective optimization objective function. We therefore believe that this situation, while - as you say - extremely relevant, important, and interesting, would be **too complex to describe within the 9-page NeurIPS limit and would therefore best be presented in a follow-up paper.**
Summary: The paper introduces the Gittins index, a novel perspective from the Pandora's Box problem, to address the cost-aware optimization problem on unknown rewards. It offers a theoretical justification for adapting the Gittins index as an acquisition function and offers empirical results against previous works, demonstrating the effectiveness of the proposed method. Strengths: 1. A novel perspective connecting previously independent works on different types of cost-aware optimization problems. 2. The figures help illustrate the connection between the Pandora's Box problem and cost-aware BO. 3. The performance seems robust in various tasks. And it makes sense that a myopic cost-aware acquisition behaves well when only aiming at minimizing simple regret. Weaknesses: 1. The justification for the algorithm poses a major concern. The conversion to BO lacks sufficient discussion, especially when the c(x) surrogate could be constantly updated. One potential problem with the adopted conversion is that the original Gittins index ignores posterior dynamics. When the cost function is unknown, the noisy rewards and noisy costs let the posterior converge to the underlying functions at the picked candidate. Then the PBGI at that location could remain superior when the cost function's posterior converges much faster in relative terms, meaning the acquisition is trapped locally. I believe more discussion is crucial for the soundness of the proposed method. 2. There are several problems with clarity. Most outstanding is the availability of the cost function. It seems that most of the paper regards the cost function as given, while in the experiment section and some of the appendix, the cost function is unknown and extracted from another Gaussian process in some cases. This could be confusing, especially given that the unknown cost function typically requires corresponding treatment, and it differs from the original Pandora's Box problem. 
The discussion should highlight the differences from the beginning. Also, the paper lacks discussion over the connection to previous work, especially on how the proposed acquisition function avoids arbitrary performance deterioration, as shown in Astudillo et al. [3]. Another problem lies in the Bayesian optimality. Is that defined with respect to formula (4)? Then how is that connected to the optimality in cost-aware Bayesian optimization? The optimality in cost-aware Bayesian optimization should be agnostic to the choice of $\lambda$. 3. There are also known practical concerns (Wu et al. 2020) that different magnitudes of costs could amplify the misspecification in GP, which has not been discussed, at least in the related work section. Reference: - Wu, Jian, Saul Toscano-Palmerin, Peter I. Frazier, and Andrew Gordon Wilson. "Practical multi-fidelity Bayesian optimization for hyperparameter tuning." In Uncertainty in Artificial Intelligence, pp. 788-798. PMLR, 2020. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. What are the expectations in (4) with respect to? 2. Could the author clarify the discussion in line 104 to line 109. I'm not sure if the context assumes constant cost or not. 3. Could the author clarify the optimality claimed in line 103? I'm not aware that EI bears the tightest BCR or BSR bound. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Discussed above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Strengths: > [...] novel perspective [...] > [...] figures [...] > [...] performance seems robust [...] Thank you very much for your review! We are delighted that these key strengths - including **novelty, specifically a brand-new technical perspective on Bayesian optimization** - are recognized. We would like to draw your attention to these points: **we think our strengths have significant merit**, and that they may outweigh some of the weaknesses and points of improvement discussed below. We would thus like to gently ask you to reconsider your score. > 1. The justification for the algorithm poses a major concern. [...] I believe more discussion is crucial for the soundness of the proposed method. Thank you very much for this insightful comment, which has led us to **carefully think about consistency properties and related phenomena in a new way**. In short: the finite-$\lambda$ version of our algorithm is designed for finite budgets. Asymptotic consistency requires one to think about cases where the budget is infinite. We think PBGI-D may be a better algorithm for this setting, and your comment has led us to make some preliminary theoretical investigations here that we describe in more detail in the **follow-on comment that we will post as soon as this is enabled on OpenReview.** > 2. [...] availability of the cost function. [...] most of the paper regards the cost function as given, while in the experiment section and some of the appendix, the cost function is unknown [...] Thank you for pointing this out! We will clarify the following: * We consider both known and unknown costs. * **The primary focus of our work, and the default assumption throughout, is known costs**. * When needed, we model unknown costs as log-normal processes, and show how this integrates into our algorithms. * The potential limitations of our strategy for handling unknown costs. 
Our motivation for focusing on known deterministic costs is that, given we develop novel techniques, **it makes sense to thoroughly understand the simpler case first**. Thoroughly understanding how to handle unknown costs, as well as theoretical analysis (such as regret bounds), are promising directions for future work that our work opens the door to. > [...] previous work [...] avoids arbitrary performance deterioration [...] Astudillo et al. [3]. **The key is in Figure 2**: in this example, there is a high-variance high-cost point, and many low-variance low-cost points. The **correct decision (in terms of simple regret) is to pick the high-variance high-cost point**: PBGI does this, while EIPC does not. We discuss this in the paragraph titled *Qualitative behavior and comparisons* on lines 225-233. We also discuss a similar example regarding EI in Figure 8, and have added discussion on these points to the appendix. > [...] Bayesian optimality. Is that defined with respect to formula (4)? [...] optimality [...] should be agnostic to the choice of $\lambda$. You're (mostly) correct: Bayesian-optimality in the **cost-per-sample setting, where there is no $\lambda$**, is defined with respect to (4). Bayesian-optimality in the **expected budget-constrained setting**, which requires a choice of $\lambda$ given budget $B$, is defined in Section 2 using the same objective as (4) but with two modifications: (i) costs are not part of the objective, and (ii) only policies whose sum of costs does not exceed some budget $B$ *in expectation* are allowed. The relationship between the two settings, particularly the existence of an optimal $\lambda$, is given by Proposition 2. We have **clarified this relationship in full detail in an updated technical appendix** which has been substantially reworked and expanded in the time since this work's submission. > 3. [...] practical concerns (Wu et al. 
2020) that different magnitudes of costs could amplify the misspecification in GP [...] related work [...] As requested, we'll discuss Wu et al. 2020 in the related work. **The concern in that paper (see their Section 2.4) is specific to the multi-fidelity setting with continuously-varying fidelity**, where one can measure fidelities with near-zero value at near-zero cost. This concern does not typically arise in the single-fidelity setting we study. > Questions: > What are the expectations in (4) with respect to? These are as follows: * The random function values $f(1), \dots, f(N)$ for each of the $N$ points in the discrete domain. * Choices made by the algorithm, namely the stopping time $T$ and the inspected points $x_1, \dots, x_T \in$ {$1, \dots, N$}. These choices can depend on function values observed so far, and they may also depend on external randomness (though, by standard MDP theory, external randomness is not necessary to solve the problem optimally). Specifically, after inspecting at time $t$, the algorithm chooses whether to stop and, if not, the next point to inspect $x_{t + 1}$, based on the data $\mathcal{D}_t = \{(x_1, f(x_1)), \dots, (x_t, f(x_t))\}$. We will clarify these details in the next revision. > [...] clarify the discussion in line 104 to line 109. [...] assumes constant cost or not. This discussion assumes constant cost, but an analogous discussion could be made for heterogeneous costs too. In particular, the discussion also applies to ordinary EI, not just EIPC. We will reorganize the discussion so that its scope is clearer, and expand on appropriate details, such as the idea behind the counterexample of Astudillo et al. (2021). > Could the author clarify the optimality claimed in line 103? [...] We mean EI is the one-step-lookahead greedy algorithm. 
That is, if we consider an arbitrary set of data observed so far $\mathcal{D} = \{(x_1, f(x_1)), \dots, (x_t, f(x_t))\}$, then EI chooses the point $x_{t+1}$ maximizing $\mathbb{E}(\max(f(x_1), \dots, f(x_t), f(x_{t + 1})) \mid \mathcal{D})$. One can thus view EI as optimal for a one-step-truncation of the original problem. We will clarify this in the next revision. --- Rebuttal 2: Comment: # Reviewer iB7T - follow up comment regarding posterior dynamics > The justification for the algorithm poses a major concern. The conversion to BO lacks sufficient discussion, especially when the c(x) surrogate could be constantly updated. One potential problem with the adopted conversion is that the original Gittins index ignores posterior dynamics. When the cost function is unknown, the noisy rewards and noisy costs let the posterior converge to the underlying functions at the picked candidate. Then the PBGI at that location could remain superior when the cost functions posterior converge much faster relatively, meaning the acquisition is trapped locally. I believe more discussion is crucial for the soundness of the proposed method. We wanted to start by thanking Reviewer iB7T again for these insightful questions about soundness. In our view, the most important soundness properties are performance bounds - that is, bounds on simple or cumulative regret. **We view regret bounds as an important direction for follow up work, but we expect it to be a hard enough analysis to be outside the scope of the present paper.** Nevertheless, the comment inspired us to think about **(a) whether there are weaker soundness properties which we could investigate** in our current submission, such as consistency; and **(b) whether or not the type of example you describe could occur**, with PBGI getting "trapped", and how to mitigate if so. 
This led to two outcomes: * **We now have a proof sketch that our PBGI-D algorithm is consistent.** We plan to incorporate this into our next revision - either as a fully fleshed out proof, or as a high-level discussion motivating PBGI-D, or possibly both. * **We now believe that one might be able to achieve performance comparable to the better between our two algorithms** - namely, that of PBGI with constant $\lambda = 1/10000$, and PBGI-D with initial $\lambda = 1/10$ and $\beta = 1/2$ - by simply using PBGI-D with initial $\lambda = 1/10000$ (and still $\beta = 1/2$). We will evaluate this hypothesis as soon as it is feasible for us to do so comprehensively. Below, we discuss these in more detail. We note that **the reasoning throughout is similar for known and unknown costs**, so we focus mainly on the known-cost case of primary interest in our paper. ## Achieving the best of both PBGI and PBGI-D We believe that **for large time horizons, decreasing $\lambda$ as the algorithm proceeds is important**. Otherwise, at time points occurring sufficiently long after the Gittins stopping rule triggers, the algorithm might become too greedy. The **purpose of the fixed-$\lambda$ PBGI is to do well in the setting where we know we will eventually stop**: this setting is by definition finite-time, so one need not expect good asymptotic behavior without modifications such as decay. In our experiments, we examined PBGI-D with large initial values of $\lambda$, while we examined PBGI for a broader range of values. From thinking carefully about your comments, **we suspect PBGI-D with small initial values of $\lambda$ may in some cases outperform PBGI with constant $\lambda$**, particularly if the initial value and constant are set equal. * The intuition here is that, from Figure 3, the regret of PBGI with any constant value of $\lambda$ eventually levels off, causing PBGI-D to later catch up in performance. 
* Since, before the first $\lambda$-decrease, PBGI-D and PBGI make the same decisions and achieve the same regret, it might be possible to **achieve the best of both worlds by mimicking PBGI on short horizons, but removing the eventually-level-off behavior via $\lambda$-decrease** as in PBGI-D. We will run comprehensive experiments to evaluate this possibility for the next manuscript version. ## Consistency of PBGI-D **We believe that PBGI-D is consistent**. Recall that every time the Gittins stopping rule triggers, the algorithm decreases $\lambda$. This causes it to eventually explore more in a manner similar to other acquisition functions. **We believe this holds regardless of whether the costs are known - our main focus - or they are unknown.** Our preliminary calculations show that for small $\lambda$, point $x$'s Gittins index is $\approx \mu(x) + \sigma(x) \sqrt{2\log \frac{\sigma(x)}{\lambda c(x)}}$, where $\mu(x)$ and $\sigma(x)$ are the mean and standard deviation of the objective model, and $c(x)$ is either the known cost or, if unknown, the expected value of the cost model's mean. This should prevent any open neighborhood of the domain from being ignored by PBGI-D, because **for sufficiently small $\lambda$, unexplored regions where $\sigma(x) \gg 0$ will always have a better Gittins index than explored regions where $\sigma(x) \approx 0$, thus implying consistency**. Even unknown costs do not pose a great obstacle, because as $\lambda \to 0$, the dominant term is $\sigma(x) \sqrt{2\log\frac{1}{\lambda}}$, which is independent of the cost. --- Rebuttal Comment 2.1: Comment: I appreciate the authors' comprehensive response, especially in answering my questions about the scope of the paper, focusing on known cost settings and the consistency of the proposed algorithm. The authors actually promise much beyond the current presentation to be available in future revisions, and I place my trust in most of the points. Yet there are several remaining concerns. 
First is the claim that the unknown cost setting would bear similar property to the known cost setting when analyzing the consistency. This is not immediately clear to me, as for an unknown cost, I'm not sure what would happen if the model suggests that specific candidates' costs are arbitrarily close to zero due to observation noise, and the assumption is that cost has to be above 0. It seems necessary to truncate the scope of the model for the cost function with an additional assumption that is not typical in BO. This also resonates with my remaining concern over the problem formulation. The author claims that the extreme values only trouble the multi-fidelity setting, so what differentiates the assumption of cost in multi-fidelity BO and the paper's assumptions on the cost? Second is the experiment; in the uniform cost setting with a finite budget, UCB and TS reduce to their non-asymptotic version, where the $\beta_t$ controlling the exploration-exploitation trade-off could be much smaller than the asymptotic version, achieving the exact cost-awareness as is discussed in line 104 to line 109. Though it is limited to the uniform cost setting, such straightforward solutions to make UCB and TS cost-aware should not be omitted in both the discussion and experiments. Otherwise, it could be partially misleading. --- Rebuttal 3: Comment: # Consistency in the unknown-cost setting > I appreciate the authors' comprehensive response [...] several remaining concerns. Thank you very much for your comments! **While we do promise to discuss consistency of PBGI-D, given the above reasoning and aforementioned sketch - which amounts to reasonably-standard theory - we are confident we can do it.** Let us address your remaining concerns below. > [...] unknown cost [...] similar property to the known cost [..] analyzing the consistency. [...] costs are arbitrarily close to zero due to observation noise [...] extreme values only trouble the multi-fidelity [...] 
what differentiates the assumption of cost in multi-fidelity BO and the paper's assumptions [...]?

The key property of single-fidelity Bayesian optimization is that for practical cost functions, **the infimum of costs of inspecting any point is strictly positive**. This is assumed implicitly in our consistency sketch from the previous reply: we regret that, for reasons of space, we omitted saying this. The specific way this assumption is used is to ensure $\frac{1}{\lambda c(x)}$ approaches $\infty$ uniformly in $x$ as $\lambda \to 0$. In contrast, **the assumption of a strictly positive minimum cost need not hold in the continuous-fidelity variant of the multi-fidelity setting**. There may be a spectrum of fidelity levels whose costs converge to zero at the low-fidelity end of the spectrum - this is studied by Wu et al. (2020).

You raise a good point about unknown costs: here is the extent to which our proof sketch covers them. **Our consistency sketch requires that the $c(x)$ values being plugged into our algorithm have a strictly positive minimum value.** We believe this should hold as long as the true costs have a positive minimum and match the smoothness properties of the lognormal cost model's kernel, but other assumptions might work too. For what it's worth, we believe a global lower bound on the cost of an inspection - which need not be tight - is generally known in practice, arising for instance from the minimum amount of time to run an experiment. Your point about noisy observations of costs possibly complicating this picture is well taken, but **noisy observations - whether of data or of costs - are outside the scope of our work**.
Our work shows that building Gittins index machinery that handles observation noise is a promising future direction, but **handling noise properly requires non-trivial technical development on the Gittins index side (see [this official comment's second heading](https://openreview.net/forum?id=Ouc1F0Sfb7&noteId=la5dFZIeR7))**.

# Cost-awareness of UCB and TS

> Second is the experiment; in the uniform cost setting with a finite budget, UCB and TS reduce to their non-asymptotic version, where the $\beta_t$ controlling the exploration-exploitation trade-off could be much smaller than the asymptotic version, achieving the exact cost-awareness as is discussed in line 104 to line 109. Though it is limited to the uniform cost setting, such straightforward solutions to make UCB and TS cost-aware should not be omitted in both the discussion and experiments. Otherwise, it could be partially misleading.

We appreciate your suggestion to extend the discussion of the limitations of EI and EIPC to UCB and TS, and we will include this in the next version of our manuscript. However, we have two points of confusion that we hope you can clarify:

1. **Thompson sampling is by definition not a cost-aware algorithm, and includes no tuning parameters (and, in particular, no learning rates) that make it naturally extend to a tunable cost-aware algorithm.** Moreover, (a) we are not aware of any papers proposing cost-aware extensions of Thompson sampling, and (b) we are not sure what you mean by the "asymptotic version" of Thompson sampling, as this algorithm has no parameters to tune that could be chosen in an asymptotic or non-asymptotic way. As a consequence, could you please clarify precisely how you are thinking about making Thompson sampling into a cost-aware algorithm, so that the notions above make sense?

2.
On the other hand, since the uniform-cost setting is equivalent to the finite-horizon setting studied in bandits, **UCB can be extended naturally to a uniform-cost-only cost-aware algorithm** by tuning the learning rate appropriately. Using this correspondence, **non-asymptotic tuning (that is, tuning which yields a regret bound valid for any finite $T$) for controlling explore-exploit tradeoffs is given by Srinivas et al. (2009) - we are already using their higher-performing empirical tuning in our experiments**. While this tuning might not be perfect due to gaps in the analysis of Srinivas et al. (2009), **our tuning of $\lambda$ is deliberately set to be imperfect to demonstrate some degree of robustness**, thereby indicating that our improved performance - particularly outside the Bayesian regret setting, where we improve significantly on UCB - is not due to superior tuning.

---

Rebuttal Comment 3.1:

Comment: Thank you very much for your prior responses! We very much appreciate your engagement with our submission and your follow-up questions. We were wondering if our replies have adequately addressed some or all of your most significant concerns, both from your original review and your follow-up comment. If there are major concerns you feel have not been sufficiently addressed, could you please highlight them? And, if there are major concerns we have addressed, we would appreciate it if you were willing to reconsider your score, in light of these clarifications and improvements.

---

Rebuttal Comment 3.2:

Comment: I appreciate the follow-up by the authors regarding the problem formulation and the comparison to existing methods. I'd like to conclude my evaluation with the following. I believe that all reviewers appreciate the idea of this work, which brings a novel perspective to cost-aware BO. And I also think that this one-step optimal algorithm could possibly bear strong performance.
On the other hand, though the high-level idea is conveyed well, I believe the details in the presentation - especially the problem formulation, the scope of the work, and the analysis of the consistency and Bayesian optimality - require non-trivial revision to qualify it for acceptance, from my perspective. Nevertheless, I trust the authors on this part.

I'm also concerned about the experiments. Regarding the UCB and TS questions I raised in my previous comments, I want to bring the paper by Chowdhury and Gopalan (2017) to the authors' attention, as TS should also allow applying a $\beta_t$. My previous comments mentioning the discussion in lines 104 to 109 are intended to say that if there exists theoretical analysis on dealing with different constant costs, the discussion could be misleading. At least to me, the discussion in lines 104 to 109 leaves the impression that there is no such solution in the literature, not limited to EI.

In addition, the information-theoretic method is missing from the baselines. Despite the authors' argument that the formulation differs from the multi-fidelity setting, the known-cost scenarios could naturally be handled by existing SOTA entropy-based (MF)BO algorithms, e.g., MF-MES by Takeno et al. (2020). Such an algorithm could efficiently approximate the acquisition using MC estimation, while the known-cost problem reduces to a special case discussed in the corresponding work. Since the authors argue that the known-cost setting is the primary focus of the paper, missing such baselines could downgrade the significance of the empirical strength, which is one important merit of the paper, given that the regret rate is not available.

***References***

- Chowdhury, Sayak Ray, and Aditya Gopalan. "On kernelized multi-armed bandits." In International Conference on Machine Learning, pp. 844-853. PMLR, 2017.
- Takeno, Shion, Hitoshi Fukuoka, Yuhki Tsukada, Toshiyuki Koyama, Motoki Shiga, Ichiro Takeuchi, and Masayuki Karasuyama.
"Multi-fidelity Bayesian optimization with max-value entropy search and its parallelization." In International Conference on Machine Learning, pp. 9334-9345. PMLR, 2020.

---

Rebuttal 4:

Comment: Thank you for making us aware of this variant of Thompson sampling that allows for an exploration/learning-rate parameter! We will add this to our references, and we will feature UCB and Thompson sampling in our discussion on Lines 104-109 accordingly. We appreciate the suggestion to compare against an information-theoretic acquisition function. In the time since submission, we've benchmarked against predictive entropy search (PES). **Our results indicate that on uniform-cost problems, the performance of PES is significantly worse than other baselines, such as UCB.** We believe this may be in part due to the difficulty of optimizing the PES acquisition function, which requires a Monte Carlo scheme for handling gradients. Given that Takeno et al. (2020) provide a similar acquisition function using max-value entropy search (MES) divided by the cost (i.e., it is "MES per unit cost"), we believe it would suffer from similar difficulties. Moreover, we expect this acquisition function to suffer from issues similar to those of EIPC.

---

Rebuttal 5:

Comment: I appreciate the authors' responses to my concerns over the literature and experiments. From my perspective, MES (and multi-fidelity MES, which is the cost-aware variant) alleviates the problem of optimizing the PES acquisition function and shows better performance than PES, as was reported in the MES paper. They are widely available as part of the BoTorch standard library and are easy to implement. One tutorial is available at the following link: https://botorch.org/tutorials/max_value_entropy. Most importantly, the mutual-information-based method bears better theoretical interpretation than EIPC in both the asymptotic and finite-budget scenarios.
I don't foresee the problem of EIPC being extended to MF-MES unless the authors offer corresponding proof or empirical evidence. Then, I would be pleased to increase my score, and I trust that the authors could amend the corresponding results in the camera-ready version.

---

Rebuttal 6:

Comment: # Theory sketch: counterexample for MES

**Here is a proof sketch showing that MES or MES/cost can have arbitrarily poor performance compared to the optimal policy.** This proof sketch is inspired by Astudillo et al. (2021), though the construction is slightly different and takes specific advantage of the characteristics of MES. While this example is discrete and uses binary-valued arms for simplicity of understanding, **we believe that these ideas can be extended to construct a smooth counterexample where MES variants (with or without cost)** will perform poorly when used with Gaussian processes.

The example has two groups of arms:

* Group 1: $N$ independent arms ("high-value arms")
  * Cost is $1$
  * Value is $1$ with probability $\epsilon$ and $0$ with probability $1-\epsilon$.
* Group 2: $M$ additional independent arms ("lower-value arms")
  * Cost is $1$
  * Value is $1-\delta$ with probability $1/2$ and $0$ with probability $1/2$.

The probability distribution over the maximum is:

* $\max = 1$, with probability $p(N)$, where $p(N) = 1-(1-\epsilon)^N$
* $\max = 1-\delta$, with probability $(1-p(N))(1-2^{-M})$
* $\max = 0$, otherwise

We send $M$ to infinity so that these probabilities become:

- $\max = 1$ has probability $p(N) = 1-(1-\epsilon)^N$
- $\max = 1-\delta$ has probability $1-p(N)$
- $\max = 0$ does not happen

Here, $\epsilon$ and $\delta$ are strictly positive values.
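The distribution over the maximum stated above can be checked numerically. A minimal Monte Carlo sketch - the helper name and the parameter values are ours, chosen purely for illustration:

```python
import random

def max_value_distribution(N, M, eps, trials=200_000, seed=0):
    """Estimate P(max = 1) and P(max = 1 - delta) by simulation for the
    two-group arm construction (illustrative helper, not from the rebuttal)."""
    rng = random.Random(seed)
    hits_one = hits_low = 0
    for _ in range(trials):
        # Group 1: does any high-value arm realize value 1 (prob. eps each)?
        if any(rng.random() < eps for _ in range(N)):
            hits_one += 1
        # Group 2: does any lower-value arm realize value 1 - delta (prob. 1/2 each)?
        elif any(rng.random() < 0.5 for _ in range(M)):
            hits_low += 1
    return hits_one / trials, hits_low / trials

N, M, eps = 10, 20, 0.05
p_one, p_low = max_value_distribution(N, M, eps)
p_N = 1 - (1 - eps) ** N  # closed form for P(max = 1)
assert abs(p_one - p_N) < 0.01
assert abs(p_low - (1 - p_N) * (1 - 2 ** -M)) < 0.01
```

With these example values, $p(N) = 1 - 0.95^{10} \approx 0.40$, and the empirical estimates match the closed-form probabilities.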
The information gain is:

* Strictly positive for pulling a high-value arm (this can be seen via direct computation of the expected entropy, or via an argument based on mutual information)
* Goes to zero as $M$ goes to infinity for pulling a lower-value arm (since pulling the arm does not change the distribution of the max, in the limit as $M$ goes to infinity)

First, observe the following, holding $\epsilon$ and $\delta$ fixed. Suppose we have a budget $B$ less than $N$.

* For sufficiently small $\delta>0$, it is optimal to pull lower-value arms repeatedly until one gives value $1-\delta$, after which it is optimal to pull a high-value arm. This gives value at least $(1-\delta)(1-2^{-B})$, where $B$ is the budget. This problem meets the assumptions of the expected-budget-constrained Pandora's box setting, and so, for an appropriately chosen $\lambda$, by optimality, the Gittins index policy follows this optimal strategy.
* But MES/cost continually pulls high-value arms, and obtains value $1-(1-\epsilon)^B$.

Then, we send $\epsilon$ to $0$ (we must ensure that $M$ goes to $\infty$ at a sufficiently fast rate relative to $\epsilon$). Under this change, MES/cost obtains value converging to $0$, while the value of the optimal policy remains strictly positive and bounded below by $(1-\delta)(1-2^{-B})$. Thus, the value of MES/cost divided by the value of the optimal policy converges to $0$.

**We will incorporate this argument - together with a Bayesian optimization example in the spirit of Figure 2 - in the next manuscript version. We will also add MES and MF-MES as baselines to all our experiments in the next manuscript version.**

---

Rebuttal 7:

Comment: I sincerely appreciate the expedited response by the authors. I apologize, but I find it difficult to fully understand the proof sketch. Here are my questions about this proof.

1. The proof claims that MES/cost favors the high-value arms.
I, in general, understand that the intention is to dilute the mutual information of the high-value arms more slowly, through a slow decrease of $\epsilon$ and a fast increase of $M$. Yet the exact rate is unclear to me. The mutual information for both sets is strictly positive unless $M \rightarrow \infty$ for the lower-value set and $\epsilon \rightarrow 0$ for the higher-value set. I would prefer that the authors offer the exact rate, as the feasibility of directing MES/cost to the high-value arms is not obvious, especially since the $1/2$ distribution for each arm in the lower-value set makes those arms actually attractive to MES. In addition, when $\epsilon \rightarrow 0$, the mutual information also goes to zero for the higher-value set. If $\epsilon$ does not go to $0$, I don't think MES/cost is arbitrarily worse; instead, the higher-value set is arguably more favorable with a sufficient budget, though the expected reward could possibly be lower.

2. The unit cost actually reduces the scenario to standard BO. In this case, the analysis of MES reveals an order-optimal simple regret bound, yet the results seemingly contradict that. Could the authors comment on this?
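For reference when weighing the rate question above, the closed-form policy values from the counterexample in Rebuttal 6 can be tabulated directly. A small sketch - the function names are ours - showing the MES/cost-to-optimal value ratio vanishing as $\epsilon \to 0$ for a fixed budget $B$:

```python
def optimal_value(delta, B):
    # Lower bound on the optimal policy's value: pull lower-value arms
    # until one pays off, then pull a high-value arm.
    return (1 - delta) * (1 - 2 ** -B)

def mes_per_cost_value(eps, B):
    # Value obtained by spending the whole budget on high-value arms.
    return 1 - (1 - eps) ** B

B, delta = 10, 0.01
ratios = [mes_per_cost_value(eps, B) / optimal_value(delta, B)
          for eps in (1e-1, 1e-3, 1e-5)]
print(ratios)  # ratio shrinks roughly linearly in eps once eps * B is small
```

Here the optimal value stays bounded below by $(1-\delta)(1-2^{-B}) \approx 0.989$, while $1-(1-\epsilon)^B \approx \epsilon B$ for small $\epsilon$, so the ratio goes to zero.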
Summary: This work tackles cost-aware Bayesian optimization using the Pandora's box Gittins index. In particular, in this paper, the authors focus on expected budget-constrained cost-aware Bayesian optimization and cost-per-sample cost-aware Bayesian optimization. Then, they provide some evidence of the proposed method in order to show the validity of their method. Finally, several experimental results are demonstrated to show the performance of the proposed method. Strengths: - Cost-aware Bayesian optimization is an important research topic in Bayesian optimization. - The method proposed in this work provides a new perspective on cost-aware Bayesian optimization. Weaknesses: - Some parts of this work are not clear. - Experimental results seem not promising. - Writing can be improved more. Technical Quality: 3 Clarity: 2 Questions for Authors: - For "Thus, EIPC is perhaps best suited to settings where heterogeneity is the main factor at play," what is heterogeneity here? I think it is not clear. - Line 111, for "is in widespread use," can it be a real reason? - For the Pandora's boxes, do you assume that there exist infinite boxes? or, there exist a finite number of boxes? - Line 152, a period is missing. - How did you optimize Equations (5) and (6) in practice? - Figures 2, 3, and 4: which function is used to be optimized for each figure? - Figure 3: I don't understand the effect of $\lambda$. According to Figure 3, it should be as small as possible. - For dynamic decay, it is just constant decay, right? $\beta$ is a constant. - Line 248, how did you choose $\lambda$ and $\beta$? Is there any guidance for these selections? - Line 250: for "even though per-problem tuning could be advantageous," is it guaranteed? Is there any evidence for this? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: I don't think that there are any particular societal limitations of this work. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> Strengths:
> [...] important research topic [...]
> [...] new perspective [...]

Thank you for your review! We are delighted that the key strengths - (a) **the importance of the topic**, and (b) **novelty, specifically a brand-new technical perspective on Bayesian optimization** - are recognized. We would like to draw your attention to these points: **we think our strengths have significant merit**, and that they may outweigh some weaknesses and points of improvement, particularly given the clarifications below. We would like to gently ask you to reconsider your score on this basis.

> Weaknesses:
> Some parts [...] not clear.

We would be delighted to hear pointers to specific paragraphs so that we can improve our work.

> Experimental results seem not promising.

**We respectfully disagree.** Bayesian optimization is reasonably mature, along with the algorithmic ideas underlying state-of-the-art baselines. **Introducing a simple new algorithmic idea that on most benchmarks matches or outperforms these baselines - sometimes significantly - is, we believe, promising progress.** For instance, our results show strong performance on cost-aware Bayesian regret (objective functions sampled from the prior) in $d=16$ and $d=32$, where both Gittins variants are **significantly better than every competitor**, including the non-myopic and significantly more elaborate MSEI/BMSEI acquisition functions. This performance is repeated on synthetic examples (Ackley and Levy) and empirical ones (Pest Control, Lunar Lander), where Gittins variants come out on top, with a small set of exceptions for which we discuss limitations. Finally, _note that our plots display quartiles, not standard error_. One can thus expect overlap in the interquartile regions even for large sample sizes, and cases where ranges do not overlap - such as the cost-aware variants mentioned above - are especially significant.

> Writing [...]
We plan to clarify the answers to your and other reviewers' questions (see below). We'd be delighted to hear other specific suggestions regarding clarity.

> Questions:
> For "Thus, EIPC [...] settings where heterogeneity is the main factor [...]" what is heterogeneity here? [...]

_Heterogeneity_ refers to a non-constant cost function $c(x)$. We have edited this to explicitly state that it refers to heterogeneity of costs.

> Line 111, for "is in widespread use," can it be a real reason?

Could you please clarify this? Here we say EIPC is in widespread use in the cost-aware setting. Given that this acquisition function is the default in BoTorch, we believe this is correct to the best of our understanding.

> [...] do you assume that there exist infinite boxes? [...]

In the Pandora's Box problem, the number of boxes $|X|$ is finite: this is given on line 126. In general cost-aware Bayesian optimization, we allow general compact domains $X$, but no longer refer to them as a "set of boxes".

> Line 152 [...]

Thank you for pointing out this typo!

> How did you optimize Equations (5) and (6) [...]?

This is given in the *Computation* paragraph on Lines 220-224, which also refers to Appendix A.1. In short: we solve the one-dimensional convex optimization problem via bisection search, then apply an analytic gradient formula that does not require solving any additional optimization problems.

> Figures 2, 3, and 4: which function is used [...]?

These are in the captions. For Figures 2 and 3, we intentionally defer most details to the appendix, as these figures serve primarily as visual illustrations. In Figure 2, the objective and cost are scaled squared exponentials, plus a small constant. For Figures 3 and 4, the objectives are drawn from a Fourier-feature approximation of the prior, where Figure 3 is the same as the $d=8$ case of Figure 4.

> Figure 3: I don't understand the effect of $\lambda$. According to Figure 3, it should be as small as possible.
Not necessarily: from the right-hand-side plot of Figure 3, we see that **large $\lambda$-values perform better if the time horizon is small**. For instance, $\lambda = 0.01$ performs best for time horizons less than 50, whereas smaller values perform better on longer horizons. Therefore, we view $\lambda$ as a parameter which controls risk-seeking vs. risk-averse behavior in the algorithm, with larger $\lambda$-values being more suitable for smaller budgets and smaller $\lambda$-values better for larger budgets. This is stated in the caption: "We see that large $\lambda$-values decrease regret sooner, but eventually lose out to smaller $\lambda$-values."

> For dynamic decay, it is just constant decay, right? $\beta$ is a constant.

While the _decay multiplier_ is a constant $\beta$, the _times when the multiplier is applied_ are dynamic. Specifically, the dynamic decay variant updates $\lambda$ to $\beta \lambda$ every time the Gittins index stopping rule triggers - that is, when every point's expected improvement is worse than its inspection cost. This is described formally in the list item on Line 202, which introduces it.

> Line 248, how did you choose $\lambda$ and $\beta$? [...]

In contrast with what is done in many other works, we **deliberately chose a non-optimally-tuned value** in order to demonstrate that our algorithm can achieve good performance _even if these parameters are not explicitly tuned, especially on a per-problem basis_. This is important, since some of our baselines, such as EIPC, contain no hyperparameters. We chose a relatively small $\lambda$ based on the observation from the right-hand-side plot in Figure 3, where PBGI with large $\lambda$-values initially decreases regret faster, but eventually loses out to PBGI with smaller $\lambda$-values.

> Line 250: for "[...] per-problem tuning could be advantageous," [...] evidence for this?
Yes, this is shown in the right-hand side of Figure 3: there, picking $\lambda$ to match the intended time horizon is shown to result in stronger performance.

---

Rebuttal Comment 1.1:

Comment: Thank you for your detailed response.

> We would be delighted to hear pointers to specific paragraphs so that we can improve our work.

This comment is the summary of my review. I have already described specific things in Questions, and you have answered them.

> Could you please clarify this? Here we say EIPC is in widespread use in the cost-aware setting. Given that this acquisition function is default in BoTorch, we believe this correct to our best understanding.

I don't agree with your statement. How does the fact that the default setting of BoTorch is EIPC support the sentence "In spite of this rather negative theoretical outlook, EIPC is in widespread use"? Do the potential users of BoTorch use this algorithm despite the negative theoretical outlook? I believe that scientific writing should be humble and respectful to previous work.

> These are in the captions. For Figures 2 and 3, we intentionally defer most details in the appendix as these figures serve primarily as visual illustrations.

I don't understand these sentences. I think that they are not in the captions (you also mentioned that you intentionally defer most details). I think it is a minor thing. You can point to where the details are described more specifically.

> In the Pandora's Box problem, the number of boxes $|X|$ is finite: this is given on line 126.

Do you update the set of boxes? If so, when is it updated?

> The times when the multiplier is applied are dynamic.

I don't think it is dynamic, because the decay rate is constant. Imagine learning-rate decay: we don't call it dynamic decay if the decay rate is constant. But if you want to call it dynamic decay, you need to clarify this.
> In contrast with what is done in many other works, we deliberately chose a non-optimally-tuned value in order to demonstrate that our algorithm can achieve good performance even if these parameters are not explicitly tuned, especially on a per-problem basis.

I trust the authors' argument that a non-optimally-tuned value is used. However, it should be supported by scientific and numerical evidence.

> Yes, this is shown in the right-hand-side of Figure 3: there, picking $\lambda$ to match the intended time horizon well is shown to result in stronger performance.

It should also be supported by numerical results.

---

Reply to Comment 1.1.1:

Comment: Thank you for your feedback. Let us further clarify the points raised in your comment.

> I don't agree with your statement. How does the fact that the default setting of BoTorch is EIPC support this sentence "In spite of this rather negative theoretical outlook, EIPC is in widespread use"? Do the potential users of BoTorch use this algorithm despite the negative theoretical outlook? I believe that scientific writing should be humble and respectful to previous work.

**This comment took us by surprise.** Let us quote the relevant lines in full rather than in part:

* _"In spite of this rather negative theoretical outlook, EIPC has been shown to work well on many practical problems, is computationally efficient and reliable, and is in widespread use. We therefore ask: can one develop a technically-principled and computationally-straightforward alternative with at-least-comparable empirical performance?"_

We acknowledge that EIPC being the default setting in BoTorch does not _prove_ that it is in widespread use, however much this may suggest it. The theoretical limitations of EIPC are from rather recent work (Astudillo et al., 2021), so it is possible that the community is not yet aware of them. With this said, **proving widespread use of EIPC is not relevant to our contributions**.
Finally, **it is absolutely standard to discuss strengths and weaknesses of a key baseline**, especially in the context of motivating the ideas introduced in a paper. We firmly believe our discussion (quoted above) of EIPC's strengths and weaknesses is fair and respectful.

> I don't understand these sentences. I think that they are not in the captions (you also mentioned that you intentionally defer most details). I think it is a minor thing. You can point where the details are described more specifically.

For Figure 2, the prior distribution from which the objective functions are sampled and the cost function are **explicitly displayed in the left and middle columns of that figure.** For Figure 4, these are **given on lines 261 and 499-501:** these state that the objective functions are random functions sampled from a Gaussian process prior using a Matérn-5/2 kernel. To avoid any omissions, **we will ensure detailed versions of the points below are added to the updated experiment description appendix:**

* Figure 2: The probability density function of the prior distribution and the cost function are both scaled Gaussian densities plus a constant.
* Figure 3: We believe the question refers to the right plot. The objective functions are the same as those for the $d=8$ uniform-cost plot in Figure 4 (see below).
* Figure 4: The objective functions are drawn from the respective Gaussian process prior, drawn in such a way that different baselines with the same random number seed share the same objective, but objectives for different seeds are different draws from the same prior. (To be fully precise: sample paths are computed up to a negligible approximation using 1024 Fourier features - this technique is standard; see Section 4 of "Pathwise Conditioning ..." by Wilson et al., JMLR 2020, for a description.)

> Do you update a set of boxes? If so, when is it updated?
In the Pandora's Box problem, the indices, values, and costs of the boxes are part of the definition and are therefore fixed. **The only change throughout the decision process is the replacement of the reward distribution with the actual value inside once a box is opened.** This is described in full on lines 125-135.

> I don't think it is dynamic because a decay rate is constant. Imagine learning rate decay. We don't call it dynamic decay if a decay rate is constant. But, if you want to call it dynamic decay, you need to clarify this.

We chose to use "dynamic" in the name because **the times at which decay occurs are dynamic:** that is, the decay times depend on the data observed by the algorithm. This is clarified on Lines 205–206, in the same sentence that introduces the decay parameter $\beta$.

> I trust the authors' argument that a non-optimally-tuned value is used. However, it [the claim that problem-specific tuning is advantageous] should be supported by scientific and numerical evidence.

**This claim is supported by Figure 3, which shows direct numerical evidence that tuning $\lambda$ according to the evaluation budget is advantageous**. This is because the color of the curve with the smallest regret value depends on the time point.

> It should be also supported by numerical results.

Please see our response to the previous point.

---

Rebuttal 2:

Comment: Thank you for your response.

> Proving widespread use of EIPC is not relevant to our contributions

This is my point. The authors cannot prove this, and it might not be the real reason. I would like to say that the authors used this sentence in the submission, and I just asked whether it can be a real reason. Then, the authors' answer was that it is because of the default setting of BoTorch, which is not relevant to this proof. Now, the authors state that it does not prove that it is in widespread use. I simply recommend removing this sentence if you cannot prove it.
> Finally, it is absolutely standard to discuss strengths and weaknesses of a key baseline

No, you wrote weaknesses you cannot prove. That is not absolutely standard.

> Answer for "I don't understand these sentences. I think that they are not in the captions (you also mentioned that you intentionally defer most details). I think it is a minor thing. You can point where the details are described more specifically"

I don't understand why the authors explicitly explain them to me. My point is that I (including potential readers) cannot know which functions are exactly used based on the captions. It doesn't matter to me whether their details are described in the appendices or the captions. You need to indicate exactly where they are described when you prepare a revision.

> In the Pandora's Box problem, the indices, values, and costs of the boxes are part of the definition and are therefore fixed. The only change throughout the decision process is the replacement of the reward distribution with the actual value inside once a box is opened. This is described in full on lines 125-135.

No, Lines 125-135 don't explain whether boxes are updated or not. Thus, I was curious about it.

> We chose to use "dynamic" in the name because the times at which decay occurs are dynamic: that is, the decay times depend on the data observed by the algorithm. This is clarified on Lines 205–206 in the same sentence that introduces the decay parameter $\beta$.

In Lines 205-206, there is no description of why it is dynamic.

> This claim is supported by Figure 3, which shows direct numerical evidence that tuning $\lambda$ according to the evaluation budget is advantageous.

It is only for a single function, and there are also no confidence intervals. It is not enough numerical evidence for how we can choose $\lambda$ and $\beta$. I was just curious about the missing details of the submission, which could be added to improve this work.
The authors didn't want to clarify them, and just pointed to the unclear sentences from which the details are missing. I still think that this manuscript should be improved more.

---

Rebuttal Comment 2.1:

Comment: Thank you again! Your specific comments on individual phrases in our manuscript have been very helpful.

**(1)** We have had an extended discussion of the phrase "EIPC ... is in widespread use" on line 111 of our manuscript. We understand your viewpoint. We propose to replace "is in widespread use" with the text below.

* _Several factors point to EIPC's continued use: it is the default in the BoTorch software package, it has been used in recent studies (for instance, Pricopie et al. 2024), and the paper that proposed it continues to receive high citation counts (1588 in 2023 alone)._
* Reference: _Pricopie, Stefan, et al. "Bayesian Optimization with Setup Switching Cost." Proceedings of the Genetic and Evolutionary Computation Conference Companion. 2024._

**(2)** Regarding our discussion about per-problem tuning, your original comment was:

* _"Line 250: for "[...] per-problem tuning could be advantageous," [...] evidence for this?"_

The full sentence to which you referred is:

* _"To ensure that performance differences are not primarily due to tuning, we deliberately use the same $\lambda$-and-$\beta$-values on all problems, even though per-problem tuning could be advantageous."_

On the basis of re-examining your comments, we will simply remove the phrase "even though per-problem tuning could be advantageous." This is not central to our point.

**(3)** You gave several useful pieces of detailed feedback on improving the following: figure captions, clearly explaining the motivation for using the word "dynamic" in the name PBGI-D, and more clearly explaining that the boxes themselves do not change (our updated appendix now includes a formal mathematical presentation, as needed to handle the updated Theorem 2). We will make these updates to the text in our revision.
While we previously offered detailed answers to your questions, we now understand that you were not trying to tell us that you were confused about those points, but were simply drawing attention to opportunities to improve the clarity of the manuscript. Thank you for these suggestions. --- Rebuttal 3: Comment: Thank you for your prompt response. Indeed I was confused about some points, which have now been resolved. Because most of my concerns have been answered, I will raise my score to 5. Please update your manuscript considering all the comments provided by reviewers.
Rebuttal 1: Rebuttal: # Summary

We thank all reviewers and the area chair for their time and thoughtful feedback in evaluating this work! We are delighted that **all four reviewers recognized the work's key strengths**, including:

* **Importance of the topic (Reviewer ykvm)**.
* **Novelty in the form of a brand-new technical perspective on Bayesian optimization (all reviewers)**.
* **Clarity of presentation (Reviewer 7Tfe)**, with some exceptions that we have identified and fixed thanks to the feedback.
* **Thorough experimental evaluation (Reviewer q648)**.

We believe these strengths rank favorably relative to the standard for accepted papers at NeurIPS. We would also like to draw all readers' attention to the text content of the reviews, which was quite positive: based on the text alone, we would have predicted scores of weak accept. Many of the reviewer concerns focus on small suggestions for improving clarity. Other questions about the technical aspects of our paper and its significance were smaller in scope, and we believe they are largely addressed by our detailed response. We look forward to engaging with the reviewers in discussion. Once the reviewers have had a chance to review our response and discuss any remaining concerns they might have, we hope that scores can be changed to be consistent with the text of the reviews.

## Key Reviewer Concerns

*Reviewer ykvm*:
* Reviewer ykvm states that our empirical results seem "not promising" but does not offer further explanation or detail. We point out that our empirical results show us **matching or outperforming a comprehensive set of baselines**, including a pair of state-of-the-art non-myopic acquisition functions (MSEI, BMSEI), on problems of large-enough dimension. Given the importance of Bayesian optimization and the maturity of its literature, we believe introducing a new algorithm with such results is a significant contribution. Also, Reviewers iB7T and q648 listed our algorithm's performance as a strength.
*Reviewer iB7T*:
* Reviewer iB7T notes aspects of our presentation that could be made clearer, particularly on whether the cost function is known or unknown. We appreciate the feedback, and will clarify this in revision.
* Reviewer iB7T also asks technical questions about our algorithm's behavior, involving posterior dynamics and consistency properties, especially in the unknown-cost setting. **Our primary focus is the known-cost setting, and the fact that the Pandora's Box Gittins index performs well here is our main empirical finding.** Still, we found that thinking about the aforementioned unknown-cost setting and consistency properties significantly helped us better understand our algorithm's behavior, potential limitations, and avenues for improvement. We have therefore included an **extended discussion on these points in an OpenReview comment** - which we will post once review comments are enabled - and view theoretical analysis thereof as a **promising direction for follow-up work prompted by our results and Reviewer iB7T's comment.**

*Reviewer 7Tfe*:
* Reviewer 7Tfe is worried that our results might be influenced by situations where the prior's length scale does not match well with the objective. This concern stems primarily from an unfortunate typo on our part: *in the Bayesian regret experiments, the objective function is sampled from the prior with the same length scale*, ensuring a perfect match between length scales in that setting. In contrast, our *synthetic benchmark and empirical experiments address the setting of learned length scales*. Taken together, our results demonstrate **good performance with both perfectly-matched and learned length scales**.
* In addition, Reviewer 7Tfe raises a number of interesting technical questions one could study in follow-up, such as calibrating our algorithms' parameters and accounting for noisy observations.
Since we deliberately avoid using carefully-tuned hyperparameters for our method, **our results show performance improvement even if these parameters are not perfectly calibrated**. Handling noisy observations properly is an interesting question, but involves handling non-trivial technical details on the side of Gittins index theory: we discuss this in our response. For noisy observations, therefore, we see the **opportunity to study interesting but non-obvious follow-up questions as a strength rather than a weakness**, since the creation of well-motivated technical questions is itself a significant source of research impact.

*Reviewer q648*:
* Reviewer q648, who rated our work the highest, asks about normalization and says this is the major reason preventing an even higher score. In short: we **already normalize the way Reviewer q648 wants us to** in all settings where this makes technical sense - namely, all except Bayesian regret. However, our submission **forgot to precisely document this in the appendix**, which we regret and have now fixed.
* Two figures requested by Reviewer q648 are available as a PDF attached to this post.

## Additional contribution: Theorem 2

Our submission did not claim Proposition 2 - now called Theorem 2 - which relates the expected budget-constrained and cost-per-sample problems, as a contribution. We discovered Theorem 2 independently, then during our literature review discovered the work of Aminian et al. (2024), which appeared before we were able to submit our work. This work contains a related result that we thought, at the time of submission, might imply our Theorem 2. Since submission, we have determined that our paper's Theorem 2 is not implied by Theorem 1 of Aminian et al. (2024) due to differences in assumptions. We therefore now believe that **Theorem 2 with non-discrete-support reward distributions (for instance, Gaussian rewards) is an original contribution of our work**.
Our updated manuscript now has a complete proof, which we will reproduce here if requested. We are also happy to comment on detailed technical differences with Aminian et al. (2024) on request. Pdf: /pdf/0f73a008e21136e253d4ffbfda92c154e48f3b1a.pdf
NeurIPS_2024_submissions_huggingface
2024
Accelerating Relative Entropy Coding with Space Partitioning
Accept (poster)
Summary: The authors provide a formalization of how to introduce search heuristics for channel simulation (sometimes called "relative entropy coding" in the ML literature). The encoder and decoder agree a priori on a binning scheme that divides the support of the prior/public distribution, which is used to control the search during encoding. Disclaimer: while I'm familiar with the basic literature (e.g. REC, PFRL, ORC), I haven't kept up with the literature since 2022. I'm more than willing to change my scores if the authors point out I missed something or didn't fully understand the contributions. Strengths: - The paper is nicely written. This is notably a complicated topic to understand and I feel the authors made quite a didactic effort. - The excess code-length is bounded for the distributions found in practice for NTC. Weaknesses: - Line 21: I feel this statement is misleading. In the first scenario (lines 14 to 20), the actual value taken on by the random variable is what is being transmitted. Meanwhile, in channel simulation, all we care about is that the received quantity be distributed according to $Q_{Z | X}$. These are fundamentally different problems. The comparison would make some sense if, in the first scenario, the decoder were necessarily stochastic, as decoding can be seen as sampling from some posterior distribution over the data conditioned on the latent. Technical Quality: 4 Clarity: 4 Questions for Authors: - Line 66: Why is it important for the state $S$ to be random? I don't see this being used anywhere. - Line 213: The non-exactness here is with respect to $Q$. REC-like algorithms are usually formulated to be used in the latent space, i.e., the encoder wants to send a sample from $Q_{Z | X}$, but what we care about from a practical standpoint is to transmit the data $X$ according to some rate-distortion trade-off. Shouldn't this discussion be formulated around the data space (maybe this is too hard of a problem?)?
- Regarding the binning prior $\pi$. The target distribution is a function of $X$, i.e., $Q_{Z | X}$. Isn't it hard to pick a good $\pi$ given this distribution is constantly changing? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: - Line 267: The results are computationally intensive, and thus impractical, even for MNIST. Requiring the value of mutual information to be known by encoder/decoder seems a bit much (the authors mention this in the limitations section). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and insightful review. We are delighted that the reviewer recognizes the difficulty of this task and appreciates the contribution of our work. Below, we respond to the reviewer's concerns and questions. Should the reviewer find our answers satisfactory, we kindly invite them to consider raising their score. **Weakness: Line 21 in the Introduction is misleading, quantization and REC are fundamentally different** We thank the reviewer for raising this concern. Upon revisiting the paragraphs the reviewer mentions, we agree that our argument can easily appear as an apples-to-oranges comparison. Hence, to clarify: the point we wanted to make here is that quantization makes end-to-end learning difficult due to its non-differentiability. Hence, we advocate for REC, which swaps out quantization for some continuous perturbation as a means of limiting the latent representation’s information content, as the latter integrates well with end-to-end training thanks to the reparameterization trick. We will clarify the introduction in our camera-ready version based on this discussion. However, we’d also like to note that the difference between using 1) quantization + entropy coding or 2) relative entropy coding for lossy compression is not necessarily as big as the reviewer is claiming. The analysis of quantization often relies on treating the quantization error as uniform noise. This approximation has been remarkably successful in the study of quantization and in practice, as it is the foundation of Balle et al.’s neural compression methods as well. Furthermore, Agustsson and Theis (2020) show that quantization-based schemes can be turned into relative entropy coding-based schemes (with uniform target distributions) with a simple modification. **Regarding Questions:** 1) > Why is it important for the state $S$ to be random? Thanks for raising this point. 
By random state $S$, we mean the common randomness between the communicating parties, a standard requirement of all channel simulation schemes. We will clarify our terminology in the camera-ready version of the paper. 2) > Shouldn't this discussion be formulated around the data space? This is a great question! While we do not discuss this in our paper, this concern is precisely why we use total variation to measure the approximation error, as bounding the error in latent space gives us an immediate guarantee in data space. More formally, denote the decoder by $f$, the true target distribution by $Q$ and the approximate target by $\tilde{Q}$. Furthermore, assume $D_{TV}[Q \Vert \tilde{Q}] \leq \epsilon$. By the data processing inequality, we have $D_{TV}[f_{*}Q \Vert f_{*}\tilde{Q}] \leq \epsilon$, where $f_{*}Q$ denotes the pushforward measure. For more discussion on this topic, please see Section III of Flamich and Wells (2024); we will update the camera-ready version to include this discussion. 3) > Isn't it hard to pick a good $\pi$ given this distribution is constantly changing? No, thankfully it is not difficult! - first, the partition is fixed across all data points, and hence this operation is amortized and not expensive; - second, for ORC (Sec 4.2), to sample from eq (15), we only need to first sample from $Q$ and find the bin in which the sample lies. Therefore, we do not need extra operations for pre-processing; for PFR (Sec 4.1), we need to calculate $\pi$ in eq (12) per dimension for each data point before sampling, but this is still much cheaper than sampling; - additionally, we highlight that the receiver does not need to be aware of $\pi$ to reconstruct the sample. Therefore, while the $\pi$ we use differs for different data points, there is no extra transmission cost. **Regarding Limitations:** > The results are computationally intensive, and thus impractical, even for MNIST.
We do not necessarily agree that the VAE results are computationally intensive and impractical. We use $2^{16}$ samples to encode a block that would require around $2^{50}$ samples with previous methods. With our method, we have already largely reduced the computational cost and made REC more practical, though we definitely believe that this can be further improved. > Requiring the value of mutual information to be known by encoder/decoder seems a bit much. We agree that this is not a totally general assumption. However, it is not too much for a practical neural compression application. In most neural compression applications, including data compression and (potentially) federated learning, we have access to training data on which we can easily estimate the dimensionwise mutual information. Also, compared with the assumptions of previous faster methods (either assuming 1D, or uniform/Gaussian with a known scale per dimension), our assumption is already a lot more general. **References** Agustsson, E., & Theis, L. (2020). Universally quantized neural compression. Advances in Neural Information Processing Systems, 33, 12367-12376. Flamich, G., & Wells, L. (2024). Some Notes on the Sample Complexity of Approximate Channel Simulation. arXiv preprint arXiv:2405.04363. --- Rebuttal Comment 1.1: Comment: I'm satisfied with the responses in general and have updated my score accordingly. Thank you for the detailed response. > However, we’d also like to note that the difference between using [...] Agreed. To be clear, I in no way was trying to make a technical distinction at the practical level, but just thought it might be easier for a reader outside the field to digest. > We do not necessarily agree that the VAE results are computationally intensive and impractical. [...] This is a fair point: within the REC family this paper significantly improves upon the number of candidates needed, which relates to computational complexity.
I guess my comment was geared towards REC methods in general, but I believe that isn't a fair way to review a paper. > We agree that this is not a totally general assumption. [...] I guess this makes sense. It would be nice if this discussion were in the paper, even if in the appendix (but referenced in the main text), as it seems a bit arbitrary at first glance. Maybe an example where this is already assumed to be true, even if implicitly, by calculating, for example, cross-correlations in the Gaussian setting. --- Rebuttal 2: Title: Thank you for your review! Please consider our response Comment: We thank the reviewer once again for their effort in the reviewing process. As there are only a few working days left in the discussion period, we would like to ask if our response has satisfied the reviewer’s concerns. If so, we kindly invite the reviewer to consider raising their score. If any concerns remain, we are happy to discuss them further here.
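The rebuttal's data-processing-inequality argument (a total variation bound in latent space carries over to data space through the decoder) can be checked numerically on a toy example. This is a minimal sketch, not code from the paper: the distributions `Q`, `Q_tilde` and the decoder `f` are made up, and we use finitely supported distributions represented as dicts.

```python
def tv(p, q):
    """Total variation distance between two finitely supported distributions."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def pushforward(p, f):
    """Distribution of f(Z) when Z ~ p, i.e. the pushforward measure f_* p."""
    out = {}
    for z, mass in p.items():
        out[f(z)] = out.get(f(z), 0.0) + mass
    return out

# Toy latent-space target Q and its REC approximation Q_tilde.
Q       = {0: 0.50, 1: 0.30, 2: 0.20}
Q_tilde = {0: 0.45, 1: 0.35, 2: 0.20}

f = lambda z: z // 2  # a toy deterministic "decoder" mapping latents to data

eps = tv(Q, Q_tilde)                                       # latent-space error
eps_data = tv(pushforward(Q, f), pushforward(Q_tilde, f))  # data-space error
assert eps_data <= eps + 1e-12  # TV can only shrink under the decoder
```

Here the latent-space error is 0.05, while the decoder happens to collapse the two approximated symbols into one bin, so the data-space error is exactly 0, consistent with $D_{TV}[f_{*}Q \Vert f_{*}\tilde{Q}] \leq \epsilon$.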
Summary: **Global disclaimer:** I am very unfamiliar with the topic of the paper. I did my best to try and read the literature and understand as much as I could, but my input may be very limited. **Summary:** The paper focuses on relative entropy coding (REC) algorithms and proposes to circumvent a major pitfall, which is the prohibitive encoding time. Indeed, the commonly used PFR algorithm requires sampling points until a good enough alignment with the target distribution is found, and efficient algorithms only exist in dimension 1. The authors propose to refine the algorithm by partitioning the search space into bins that better drive the sampling from the coding distribution to the target distribution. The authors then theoretically prove that their approach is well-founded (Theorem 3.1), and derive precise codelength bounds for the space partition algorithm. They also discuss a generic partitioning strategy. Finally, a benchmark highlights the benefits of the space partition approach for compression against the standard PFR algorithm on toy examples and real datasets. Strengths: The paper is very didactic, well written and quite nice to read. The proposed approach is well structured and detailed (Section 3.1). The theory exposed in the paper is sound and the proofs seem correct to me. The benchmark is well made and honest. Finally, the appendix is very helpful; I particularly appreciated Appendix B, which sheds some light on the partition strategy. Weaknesses: The idea is ultimately quite elementary: dividing the space once and for all into bins. The proofs are "elementary" (still technical) refinements of the classical PFR theory (the starting point is often [Li and El Gamal, 2018] and [Flamich, 2024]). Technical Quality: 3 Clarity: 4 Questions for Authors: - How do the bounds from Theorem 3.1 compare with the bound from [Li and El Gamal, 2018, Theorem 1] in the same setting, i.e. when $J=1$? More generally, could we recover a bound such as Proposition 3.2 but with a dependence on $J$?
- The number of intervals for each axis is justified in Appendix B, but could the authors give more details about how these intervals are constructed? - The proposed approach to the construction of the bins depends on the chosen representation. Would it be possible to automatically find a transformation prior to the binning that would provide better results while keeping the same strategy? - Is there a way to update $\pi$ along the algorithm, using e.g. the Gumbel trick and gradient-based methods? - It would be interesting to add an ablation study on the number of intervals per axis. - Table 1 shows some improvements in the number of steps, but it would be interesting to have the same thing in time. Could you elaborate on Figure 3? I do not understand what is meant by runtime here. Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The approach is not fully generic and, despite very good success on some examples, it could fail on others, as clearly explained in Appendix B. Typos: punctuation is missing in the maths of the appendix. **Proposed score:** My rating is based on the fact that the paper is sound and the maths are correct, but the scope of the idea is quite simple. I do believe that this work is well presented, interesting and worth publication. On the other hand, I am not familiar with this field. As such, I propose a score of 5/10 (a weak accept) but I am very open to discussion with the authors and the other reviewers. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their dedicated time and constructive review, which we feel greatly enhances the quality of our paper. While the reviewer may not be familiar with this field, we believe they understand most of our manuscript and method well. We are delighted that the reviewer found our paper well-written and recognized its soundness. Below, we respond to the reviewer's concerns, and we are happy to discuss any further questions the reviewer might have. Should the reviewer find our answers satisfactory, we kindly invite them to consider raising their score. **Weakness: the idea and proofs are elementary**: While we agree that the main idea behind our method is simple, we respectfully disagree that this is a weakness, on three grounds. (a) Ideas that are simple and work well in practice have the largest potential to be impactful in the field, as they are easy to understand and replicate. This is the case for our method: our ideas improved performance by many orders of magnitude in terms of the largest KL values that future REC algorithms could consider. (b) The theory justifying this idea required new insight, mainly around deriving the overhead term $\epsilon$ that previous works did not deal with or consider. (c) The general insight that we can use the dimensionwise mutual information to devise schemes that improve compression speed is a significant contribution to the field, and we believe that any future scheme that does not use this additional information will be limited in either speed (as all previous schemes are) or codelength. **Regarding Questions:** 1) > How does Theorem 3.1 compare with the bound from [Li and El Gamal, 2018, Theorem 1] when $J=1$? When $J=1$, our algorithm is equivalent to PFR, and therefore we end up with the same bound as [Li and El Gamal, 2018, Theorem 1]. This can be seen from eq (38) on page 17: when $J=1$, eq (38) becomes $E\log_2[ \frac{q(z)}{p(z)}+1]$.
We can then follow the proof of [Li and El Gamal, 2018, Appendix p13]. > Could we recover a bound with a dependence on $J$? Please note that $\epsilon$ in Theorem 3.1 already expresses a fine-grained dependence on $J$. Furthermore, Proposition 3.2 also depends on $J$ in the sense that it is derived from the extreme case $J = 2^{KL}$ and holds for all $J \leq 2^{KL}$. It is an interesting question whether there is a bound that interpolates between $J=1$ and $J = 2^{KL}$. In our paper, we did not explore this because, for our method, we always want to set $J$ as large as possible to maximize the gain in speed. 2) > Could the author give more details about how these intervals are constructed? As we discussed in our manuscript from line 241 to line 249, we partition the d-th axis into approximately $2^{n_d}$ intervals, where $n_d = D_{KL}[Q||P] I_d / \sum_{d'} I_{d'}$. We provide the implementation details of constructing these bins from line 621 to line 630. In fact, we empirically found that the method is NOT very sensitive to how we construct the intervals, as long as we avoid partitioning along uninformative dimensions. Please see the rebuttal PDF (Fig 3) for further empirical verification. 3) > Would it be possible to automatically find a transformation prior to the binning that would provide better results while keeping the same strategy? It is not possible to design such a transformation. Note that, to avoid sending the transformation (which can be expensive), it should be shared across all data points $X$. Therefore, finding a transformation of the prior is equivalent to asking whether we can find a better prior that provides better results on average. 4) > Is there a way to update $\pi$ along the algorithm? Since our algorithm will work for any $\pi$, it is possible to update it. However, there is no point in doing so, as the optimal form of $\pi$ is already given in eq (12) and eq (15), which we can use at negligible computational cost.
5) > It would be interesting to add an ablation study on the number of intervals per axis Thank you for this suggestion. We added ablation studies on toy examples and INR experiments. Please find the results in the PDF we uploaded with the global rebuttal. We will add these plots to our camera-ready version. We now explain these studies; please also see the global rebuttal for a more detailed description: - We first provide a qualitative visualization in Figure 1 in our rebuttal comparing 3 different binning strategies: only partitioning the collapsed dimension; randomly assigning the number of intervals per axis; and assigning intervals per axis according to MI (our suggested strategy). We also include standard ORC for reference. As we can see, our suggested strategy works best, whereas partitioning only the collapsed dimension yields almost the same results as standard ORC. This verifies our discussion of how the partitioning strategies influence the runtime in Appendix B.2 of our manuscript. - We then provide quantitative ablations on toy examples (Fig 2) and INR experiments (Fig 3). We can see that our suggested partition strategy (partitioning according to MI) is always better than randomly assigning intervals per axis. Also, we found that if we first remove uninformative dimensions (mutual information $\approx$ 0) and then randomly assign intervals to the other dimensions, the performance is only slightly worse than our proposed partition strategy. This indicates that our algorithm is not very sensitive to how we construct the intervals, as long as we avoid partitioning along uninformative dimensions. 6) > it would be interesting to have Tab 1 in time The sample size is linearly proportional to the runtime, so we only provide this in Fig 3 and Tab 1. The sample size is a better metric, as it is implementation-agnostic, unlike wallclock time. > I do not understand what is meant by runtime in Fig 3 This figure shows 'how many samples in ORC we need for a certain bias'.
The y-axis shows the sample size to achieve a certain bias (measured by MMD). --- Rebuttal 2: Title: Thank you for your review! Please consider our response Comment: We thank the reviewer once again for their effort in reviewing our paper and for their constructive review. As there are only a few working days left in the discussion period, we would like to ask if our response has satisfied the reviewer’s concerns. If so, we kindly invite the reviewer to consider raising their score. If any concerns remain, we are happy to discuss them further here. --- Rebuttal Comment 2.1: Title: Response to authors Comment: I read the response, as well as the other reviews, and took time to understand the author's responses to all raised points. Overall I am pleased with the author's answers, some points I missed during the review have been clarified. **Weakness: the idea and proofs are elementary** I fully agree with the author's response, thank you for the clarifications. I would encourage the author to make point (c) maybe a little bit clearer in the introduction of the paper. I believe the two added figures in Appendix B2 fill an important gap in the original manuscript and shed some light on the proposed approach. I think I understood the paper, the ideas, and the approach the author are proposing and I think this work is strong and worth acceptance. As such I raise my grade to 8.
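The MI-proportional allocation discussed in the rebuttal above, $n_d = D_{KL}[Q||P] I_d / \sum_{d'} I_{d'}$ with roughly $2^{n_d}$ intervals on the d-th axis, can be sketched in a few lines. This is only an illustrative sketch: the MI estimates `mi` and the KL budget `total_kl` below are made-up toy numbers, not values from the paper.

```python
import math

def intervals_per_axis(mi, total_kl):
    """Allocate the total KL budget (in bits) across axes proportionally to
    the dimensionwise mutual information I_d, then turn each bit budget n_d
    into roughly 2^{n_d} intervals. Axes with (near-)zero MI get a single
    interval, i.e. they are left unpartitioned."""
    total_mi = sum(mi)
    bits = [total_kl * i / total_mi for i in mi]   # n_d = KL * I_d / sum_d' I_d'
    return [max(1, round(2 ** n)) for n in bits]

# Toy example: 4 latent dimensions, one of them uninformative (MI ~ 0).
mi = [2.0, 1.0, 1.0, 0.0]   # dimensionwise MI estimates from a training set
total_kl = 8.0              # D_KL[Q || P] in bits, assumed known
J = intervals_per_axis(mi, total_kl)
print(J)  # → [16, 4, 4, 1]
assert math.prod(J) <= 2 ** total_kl  # total bin count stays within 2^KL
```

Note how the uninformative axis (MI = 0) receives a single interval, matching the rebuttal's observation that the scheme is insensitive to the exact split as long as uninformative dimensions are not partitioned.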
Summary: The paper proposes a space partitioning technique to speed up relative entropy coding. In this setting, the sender first transforms X into a representation Z that they encode. This particular step can be done by the Poisson functional representation (PFR). Unfortunately, PFR's random runtime can be a significant drawback. To circumvent this issue, the authors propose a space partitioning scheme to reduce runtime in practical scenarios. This can be seen (sic) as a search heuristic added to a standard REC algorithm. The authors then proceed to give a theoretical analysis of the method and support their improvement with numerical results. Strengths: The paper addresses an important practical problem with many important applications. The algorithm and theoretical results are stated clearly. The context and related work are clear to the reviewer (who is not an expert in the field). The numerics clearly show the advantage/improvements yielded by the technique. Weaknesses: Although the proposed technique is "simple and efficient", the paper lacks a discussion of whether the binning technique impacts the final result (besides codelength). Technical Quality: 3 Clarity: 3 Questions for Authors: - It is unclear to the reviewer how the grid should be set. - It is unclear to the reviewer how the choice of prior for the binning distribution influences the algorithm. - The weakness section raises a question that I would like to see addressed: how does the binning technique impact the final result (besides codelength)? I agree that the uniform distribution seems like a natural choice, but it would be good to discuss the choice of prior here. I will happily increase my grade if those questions are properly addressed. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors recognize some clear limitations of their approach in the conclusion section. In particular, the approach proposed by the authors is too limited and does not extend to all possible practical cases.
The reviewer agrees with those limitations, although the reviewer still believes they should not be an obstacle to publication, as this work is a "first step" in this direction. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are thankful for the reviewer's time and detailed review, and are delighted that the reviewer found our paper clear and appreciated our technical and theoretical contributions. Below, we respond to the reviewer's questions. We are happy to discuss any further concerns the reviewer might have. Should we have addressed the reviewer's concerns and questions, we kindly invite them to raise their score. 1. > it is unclear to the reviewer how the grid should be set: We discuss how we set the bins in our manuscript from line 241 to line 249; we will signpost this discussion better in the camera-ready version. In short, in most neural compression applications, we can compute the mutual information per dimension $I_d$ from the training set. Specifically, for each training datum $X$, we have access to its posterior $Q_{Z|X}$ during training, and we can calculate the KL divergence between $Q_{Z|X}$ and the prior $P_Z$ along each dimension. The dimensionwise MI $I_d$ is estimated by averaging the dimensionwise KL across all training data. Knowing the mutual information per dimension $I_d$, we partition the d-th axis into approximately $2^{n_d}$ intervals, where $n_d = D_{KL}[Q||P]\cdot I_d / \sum_{d'} I_{d'}$. We discuss the motivation for this choice in Appendix B.2. We also provide the implementation details from line 621 to line 630 on page 24. 2. > how the binning technique impacts the final result (besides codelength): In Appendix B.2, we discuss how different binning strategies influence the results. For a further illustration of the effect of the chosen grid on the efficiency of the algorithm, please see Figure 1 in the attached rebuttal PDF. Essentially, if we only partition the space by dividing the axes where the prior $P$ and the target $Q$ have the same marginal, we will not reduce the runtime.
However, luckily, as we discussed above, in most neural compression applications we can compute the mutual information per dimension $I_d$ on the training set. If $I_d > 0$, it tells us that, on average, the prior $P$ and the target $Q$ will not have the same marginal along the d-th axis. Therefore, we propose to partition the d-th axis into approximately $2^{n_d}$ intervals, where $n_d = D_{KL}[Q||P]\cdot I_d / \sum_{d'} I_{d'}$. To help the reviewer further understand this argument, we use a toy example for visualization in our attached rebuttal PDF. In Figure 1 of our rebuttal PDF, we create a toy 5D Gaussian target, and we set dimension 1 to have 0 mutual information (we call it a collapsed/uninformative dimension). We compare 3 different binning strategies: only partitioning the collapsed dimension; randomly assigning the number of intervals per dimension; and partitioning the d-th axis into approximately $2^{n_d}$ intervals, where $n_d = D_{KL}[Q||P] I_d / \sum_{d'} I_{d'}$ (our proposed strategy). We also include standard ORC for reference. We run the 4 settings with the same number of samples (20) and repeat each setting 5000 times. The error can be seen from the histogram of the 5000 encoded results. We can see that partitioning according to mutual information yields the best results, whereas partitioning only the collapsed dimension yields almost the same results as standard ORC. 3. > how the choice of prior for the binning distribution influences the algorithm: We assume the reviewer is asking how the results are influenced when each bin has a different probability under the prior. We answer this question below, and we are happy to address any further questions if we have misunderstood the reviewer's question. Setting all bins to have the same probability mass under the prior is the basic requirement for our algorithm to adhere to the codelength outlined in Theorem 3.1 and Proposition 3.2. We note this on lines 104 and 414.
This can be understood intuitively: under an ideal Bayesian treatment, the average $Q_{Z|X}$ across all data $X$ should equal the prior $P_Z$. Also, we note that the partitions are shared across all data $X$ in the dataset. Therefore, ideally, we expect the average probability (taken over all randomness and all $X$) that the encoded sample falls into any bin to be the same. We can also see the reason from our proof of Theorem 3.1. Specifically, we replace all $P(B_j)$ in Eq. (37) by $1/J$ in Eq. (38), which simplifies the expression and is essential for the subsequent derivation. **In summary:** our algorithm and theory require all bins to have the same probability under the prior. Theorem 3.1 and Proposition 3.2 will hold for all binning strategies that satisfy this requirement. However, there are many (in fact, infinitely many) binning strategies that satisfy this requirement. Different choices of binning strategies can have different runtimes or different biases given the same runtime budget. To this end, we propose to partition the d-th axis into approximately $2^{n_d}$ intervals, where $n_d = D_{KL}[Q||P] I_d / \sum_{d'} I_{d'}$. We have discussed the motivation and implementation details in our manuscript and provided an additional visualization to aid understanding in our rebuttal PDF (Fig. 1) in the global response. --- Rebuttal 2: Title: Thank you for your review! Please consider our response Comment: We thank the reviewer once again for the effort put into reviewing our paper. As there are only a few working days left in the discussion period, we would like to ask if our response has satisfied the reviewer's concerns. If so, we kindly invite the reviewer to consider raising their score. If any concerns remain, we are happy to discuss them further here. --- Rebuttal Comment 2.1: Title: Thank you for your response. Comment: I'm satisfied with the responses provided by the authors. 
I'm happy about the clarification and apologize for missing any details. I updated my score to reflect the discussion.
null
null
Rebuttal 1: Rebuttal: We extend our gratitude to all the reviewers for their detailed and comprehensive reviews and for their time spent reviewing our manuscript. We are delighted that the reviewers found our paper easy to follow and recognized our method's technical and theoretical contributions. We addressed their concerns in our responses. We also attach a PDF showing additional illustrative and ablation experiments for our method. Namely:

- **In Figure 1, we visualize the approximation error of using ORC with different partitioning strategies on a toy problem**. We create a 5D Gaussian target, and we set dimension 1 to have 0 mutual information (we call it a collapsed/uninformative dimension). We compare three partition strategies: only partitioning the collapsed dimension; randomly assigning intervals to dimensions; and assigning intervals according to MI (our proposed strategy; specifically, we partition the d-th axis into approximately $2^{n_d}$ intervals, where $n_d = D_{KL}[Q||P]\cdot I_d / \sum_{d'} I_{d'}$). We also include standard ORC for reference. We run the four settings with the same number of samples (20) and repeat each setting 5000 times. We show the histogram of the 5000 encoded results. *As we can see, assigning intervals according to MI works best, whereas partitioning only the collapsed dimension yields almost the same results as standard ORC. This verifies our discussion in Appendix B.2 of our manuscript on how the partitioning strategy influences the runtime and how we should choose the intervals in practice.*
- **As suggested by Reviewer we4Y, we provide ablation studies showing how the partitioning strategy influences performance**. We run the ablation on the Gaussian toy example (the experiment described in line 260 in our manuscript) and the CIFAR-10 compression experiments (the experiment described in line 286 in our manuscript). The results are shown in Figures 2 and 3 in our rebuttal PDF. 
*As we can see, our proposed partition strategy (partitioning each dimension according to mutual information) is always better than randomly assigning intervals per dimension*. For CIFAR-10, we also compare the results of first removing uninformative dimensions (mutual information $\approx$ 0) and then randomly assigning intervals to the other dimensions. We find this is only slightly worse than our proposed partition strategy. *This not only further verifies our discussion in Appendix B.2, but also indicates that our algorithm is not very sensitive to how we construct the intervals, as long as we avoid partitioning along uninformative dimensions.* Pdf: /pdf/617af0bd640563545096667f298d27b8c4b699a7.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Diffusion Priors for Variational Likelihood Estimation and Image Denoising
Accept (spotlight)
Summary: This paper proposes a diffusion-based image denoising method. The method leverages the MAP framework with a proposed adaptive likelihood estimation method and a pre-trained diffusion prior. Experiments on four real-world datasets validate the advantages of the proposed method. Strengths: 1. The paper is well-written and easy to understand. 2. The integration of a reverse diffusion process with adaptive likelihood estimation is novel. 3. The denoising performance of the proposed method outperforms that of other single-image denoising methods. The experimental results are substantial. Weaknesses: 1. The title of the paper does not accurately reflect its contributions. The use of diffusion priors to address inverse problems is widely studied, as the authors have noted, and this paper builds upon the framework presented in [1]. The significant contribution of this paper appears to be the proposed alternative updating scheme for estimating the likelihood term. However, the title is too generic and does not capture this contribution. 2. The denoising computational cost is high, and the running time for the reverse diffusion process is lengthy. 3. In Algorithm 1: Difusion -> Diffusion. Technical Quality: 3 Clarity: 3 Questions for Authors: The proposed algorithm involves numerous hyperparameters. How are these selected, and is the algorithm robust to variations in hyperparameter settings? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have acknowledged the limitations of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of our work. Q1: The title of the paper does not accurately reflect its contributions. The use of diffusion priors to address inverse problems is widely studied, as the author has noted, and this paper builds upon the framework presented in [1]. The significant contribution of this paper appears to be the proposed alternative updating scheme for estimating the likelihood term. However, the title is too generic and does not capture this contribution. Reply: Thanks for your suggestion, and we would like to modify our title to the following one in the revised version if allowed: "Diffusion priors for variational likelihood estimation and image denoising". Q2: The denoising computational cost is high, and the running time for the reverse diffusion process is lengthy. Reply: As pointed out in the limitations section of the main paper, our method relies on DDPM and large sampling steps for good denoising performance. We tried several other sampling steps and report the performance in the following table:

| Steps | $T=1000$ | $T=500$ | $T=250$ |
| ----------- | ----------- | ----------- | ----------- |
| SIDD Val | 34.76/0.887 | 33.54/0.838 | 23.89/0.825 |
| CC | 38.01/0.959 | 37.18/0.947 | 22.72/0.716 |

It can be observed that $T=500$ degrades the performance, though the result remains acceptable. In the case of $T=250$, the denoising completely fails. As a result, speeding up the inference is a key direction for future work. Q3: In Algorithm 1: Difusion -> Diffusion Reply: Thanks for your careful review, and we will correct it in the revised version. Q4: The proposed algorithm involves numerous hyperparameters. How are these selected, and is the algorithm robust to variations in hyperparameter settings? Reply: Our method involves three hyperparameters: $\beta$ in the noise prior precision, the temperature $\gamma$, and the kernel scale $s$. 
Regarding the $\beta$ value, it roughly represents the noise level of the noisy image $y$ (given $\alpha=1$), and we can choose $\beta$ according to the empirical variance of a textureless area of $y$ for a given test set. The kernel scale $s$ is set by considering the spatial correlation of noise present in real-world images. The temperature $\gamma$ is an empirical parameter that scales the likelihood function in Bayes' theorem and is generally set to values smaller than 1. We ablate these three parameters on the CC dataset in the following tables, which indicate that they are relatively robust to moderate changes.

| $\beta$ ($\gamma=5, s=1$) | 5e-3 | 1e-2 | 1.5e-2 |
| :--- | :---: | :---: | :---: |
| PSNR/SSIM | 38.03/0.957 | 38.01/0.959 | 37.47/0.953 |

| $\gamma$ ($\beta=0.01, s=1$) | 1/4 | 1/5 | 1/10 |
| :--- | :---: | :---: | :---: |
| PSNR/SSIM | 37.90/0.957 | 38.01/0.959 | 37.80/0.953 |

| $s$ ($\gamma=5, \beta=0.01$) | 0.8 | 1.0 | 1.2 |
| :--- | :---: | :---: | :---: |
| PSNR/SSIM | 37.876/0.957 | 38.01/0.959 | 38.10/0.959 |

--- Rebuttal Comment 1.1: Title: Thanks for the response. Comment: Thank you for your response. Regarding using fewer sampling steps, are you referring to the DDIM sampler? Additionally, utilizing advanced diffusion samplers, such as DPM-Solver, may help accelerate the sampling speed. I have no further concerns and will maintain my score. --- Reply to Comment 1.1.1: Comment: Thanks for your reply and suggestion. Regarding using fewer sampling steps in the response to Q2, we used the DDPM sampler with fewer steps (e.g., $T=500$ means taking one sample per two steps compared with the original $T=1000$). As you suggested, we will try to incorporate DDIM or DPM-Solver into our method to improve the sampling efficiency while maintaining the denoising performance.
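The step subsampling described in the reply above ($T=500$ meaning one sample per two steps of the original $T=1000$ DDPM schedule) can be sketched in a couple of lines. This is our own minimal illustration, not the authors' implementation; the function name is hypothetical:

```python
def strided_timesteps(total_steps=1000, num_kept=500):
    """Subsample a DDPM schedule by keeping every (total_steps // num_kept)-th
    timestep, walking from t = total_steps - 1 down toward 0. For example,
    num_kept=500 keeps one step out of every two of a 1000-step schedule."""
    stride = total_steps // num_kept
    return list(range(total_steps - 1, -1, -stride))

full = strided_timesteps(1000, 1000)  # original schedule: 999, 998, ..., 0
half = strided_timesteps(1000, 500)   # one sample per two steps: 999, 997, ..., 1
assert len(full) == 1000 and len(half) == 500
```

The same sketch with `num_kept=250` would reproduce the $T=250$ setting reported in the table in the Q2 reply.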
Summary: The authors propose a way to use diffusion priors for real-world image denoising where the noise statistics are complex and signal-dependent. They use variational inference to estimate the joint posterior of the noise precision and image throughout diffusion time. The result is a MAP estimate of the denoised image that takes into account signal dependence and spatial correlation of the observed noise. The authors also propose to use a low-resolution diffusion model to denoise high-resolution images. In experiments, the authors compare to single image-based denoising methods, a self-supervised denoising method, and diffusion-based methods. They show improved quantitative and qualitative performance against the baselines. Strengths: * The proposed method is sound and addresses a difficult problem. * Plenty of baselines are provided, and experiments show convincing quantitative and qualitative performance. Plus a good number of ablation studies is provided. * The proposed method works for other non-Gaussian synthetic noise as well as real-world camera noise. Weaknesses: * The proposed method only provides MAP estimates (not posterior samples). * Nit: Eq. 6 should be explained more clearly in the text. For example, A should be explicitly defined. And why is the same $\epsilon$ used twice? Shouldn’t it be $y_{t-1}=\sqrt{\bar{\alpha}_{t-1}}(x_0+A\epsilon_1)+\sqrt{1-\bar{\alpha}_{t-1}}\epsilon_2$ for $\epsilon_1,\epsilon_2$ independently drawn (since the measured noise is different from the synthetic noise added in the diffusion process)? Technical Quality: 3 Clarity: 3 Questions for Authors: * How difficult would it be to adapt this method to provide posterior samples? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I appreciate the authors providing a limitations section. There are probably more limitations of the method that could be discussed, such as not addressing posterior sampling. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of our work. As some questions overlap with the weaknesses, we will integrate and answer them together. Q1: The proposed method only provides MAP estimates (not posterior samples). How difficult would it be to adapt this method to provide posterior samples? Reply: Employing MAP inference is more straightforward for our method and also reduces the sampling randomness. In order to obtain posterior samples, we have to sample $x_{t-1}$ from $p(x_{t-1}|y_{t-1},x_t)$ rather than derive its MAP solution as done in Eq. (15). For our method, it is not very difficult to achieve that. First, we can approximate $p(y_{t-1}|x_{t-1})$ by Monte-Carlo integration, resulting in $$p(y_{t-1}|x_{t-1})= E_{g(\phi_{t-1})}p(y_{t-1}|x_{t-1}, \phi_{t-1}) \approx \frac{1}{M}\sum_{s=1}^Mp(y_{t-1}|x_{t-1}, \phi^s_{t-1}) \approx \mathcal{N}(y_{t-1};x_{t-1}, \frac{1}{M}\sum_{s=1}^M (\phi^s_{t-1})^{-1})$$ where we approximate the Gaussian mixture in the last term with a single Gaussian; $\phi^s_{t-1} \sim g(\phi_{t-1})$; and $M$ is the number of Monte-Carlo samples and should be as large as possible. Then, as both $p(y_{t-1}|x_{t-1})$ and $p(x_{t-1}|x_t)$ follow Gaussian distributions, $p(x_{t-1}|y_{t-1},x_t)$ is also Gaussian, from which we can sample $x_{t-1}$ and the final $x_0$. We provide some posterior samples of the denoising result in Figure 3 of the 6958_rebuttal.pdf for your reference. We would like to discuss this posterior sampling variant in the revised paper. Q2: Nit: Eq. 6 should be explained more clearly in the text. For example, A should be explicitly defined. And why is the same ϵ used twice? Shouldn’t it be $y_ {t-1} =\sqrt{\bar{\alpha}_ {t-1}}(x_0+A\epsilon_1)+\sqrt{1-\bar{\alpha}_ {t-1}}\epsilon_2$ for $\epsilon_1$,$\epsilon_2$ independently drawn (since the measured noise is different from the synthetic noise added in the diffusion process)? 
Reply: First, we are grateful for pointing out the mistake in Eq. (6), and it should be $$y_ {t-1}=\sqrt{\bar{\alpha}_ {t-1}}(x_0+A\epsilon_1)+\sqrt{1-\bar{\alpha}_ {t-1}}\epsilon_2=x_ {t-1}+\sqrt{\bar{\alpha}_ {t-1}}A\epsilon_1, \epsilon_1 \sim \mathcal{N}(0, I), \epsilon_2 \sim \mathcal{N}(0, I)$$ In the following, we will give a detailed derivation and explanation of this equation. As analyzed in Section 3.2, in principle we can model the likelihood function of real-world noisy images as $p(y_0|x_0)=\mathcal{N}(x_0, \Sigma(x_0))$, where $\Sigma$ is a non-diagonal covariance matrix and its variance is related to its mean $x_0$ (or signal). In order to incorporate $y_0$ into the inverse diffusion process and shorten its gap to $x_{t-1}$, we can construct $y_{t-1}$ based Eq. (2) to obtain $y_ {t-1} = \sqrt{\bar{\alpha}_ {t-1}}y_0 + \sqrt{1-\bar{\alpha}_ {t-1}}\epsilon_2$. For $y_0$, it can be sampled from the multi-variate Gaussian $p(y_0|x_0)=\mathcal{N}(x_0, \Sigma(x_0))$, i.e., $y_0=x_0+A\epsilon_1$, where $AA^T=\Sigma(x_0)$, and $A$ is obtained by Cholesky decomposition. Finally, we obtain $$y_ {t-1} = \sqrt{\bar{\alpha}_ {t-1}}y_0 + \sqrt{1-\bar{\alpha}_ {t-1}}\epsilon_2 = \sqrt{\bar{\alpha}_ {t-1}} (x_0+A\epsilon_1) + \sqrt{1-\bar{\alpha}_ {t-1}}\epsilon_2 = \sqrt{\bar{\alpha}_ {t-1}} x_0 + \sqrt{1-\bar{\alpha}_ {t-1}}\epsilon_2 + \sqrt{\bar{\alpha}_ {t-1}} A\epsilon_1 = x_ {t-1} + \sqrt{\bar{\alpha}_ {t-1}} A\epsilon_1 $$ We will add the above explanation of Eq. (6) in the revised paper. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for their response. It would be good to add the explanation of Eq. 6 to the revised paper, but I would prefer not to add the posterior sampling variant. I would want to see further explanation/experiments to verify the proposed approximation of the posterior. I appreciate the authors' efforts at extending their method to posterior sampling nevertheless. I have no further questions/concerns and maintain my rating.
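The corrected identity from the reply above can be checked numerically. The sketch below is our own illustration (toy covariance and variable names are ours); it only verifies the algebra $y_{t-1}=\sqrt{\bar{\alpha}_{t-1}}(x_0+A\epsilon_1)+\sqrt{1-\bar{\alpha}_{t-1}}\epsilon_2=x_{t-1}+\sqrt{\bar{\alpha}_{t-1}}A\epsilon_1$ with $AA^T=\Sigma(x_0)$:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4
x0 = rng.normal(size=d)

# Toy stand-in for the signal-dependent covariance Sigma(x0): any SPD matrix.
B = rng.normal(size=(d, d))
Sigma = B @ B.T + np.eye(d)
A = np.linalg.cholesky(Sigma)  # Cholesky factor, so A @ A.T == Sigma

abar = 0.7                     # plays the role of \bar{alpha}_{t-1}
eps1 = rng.normal(size=d)      # measurement-noise draw (epsilon_1)
eps2 = rng.normal(size=d)      # diffusion-noise draw (epsilon_2), independent

# x_{t-1} from the forward process, and y_{t-1} built from y_0 = x0 + A eps1.
x_tm1 = np.sqrt(abar) * x0 + np.sqrt(1 - abar) * eps2
y_tm1 = np.sqrt(abar) * (x0 + A @ eps1) + np.sqrt(1 - abar) * eps2

# The two expressions for y_{t-1} coincide term by term.
assert np.allclose(y_tm1, x_tm1 + np.sqrt(abar) * (A @ eps1))
```

Note that $\epsilon_1$ and $\epsilon_2$ are drawn independently, exactly as the reviewer's comment requires.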
Summary: This work considers the problem of adapting diffusion models to solve the real-world image denoising problem, that is, where the noise is not assumed to be i.i.d. Gaussian. The authors statistically model the real-world noise as independent, non-identically distributed noise, and then incorporate the adaptive MAP inference into the reverse process of the diffusion model. Strengths: N/A Weaknesses: 1. The paper does not well motivate the choice of the i.ni.d. noise model. It is not clear why such an i.ni.d. noise model can properly characterize the statistics of the real-world noise considered in the paper. 2. The paper is not well-written, lacking proper motivation and explanation of the design of the proposed method. 3. The approximation adopted in Eq. 15 may lead to an incorrect $x^\ast$. For example, when the function is non-convex, maximizing its Jensen lower bound produces an estimate that is far away from $x^\ast$. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Why is an i.ni.d. noise model used to characterize the real-world noise? 2. Why do we want to adaptively update the parameters in reverse diffusion? Please provide some high-level explanation. 3. Please justify the accuracy of the approximation used in Eq. 15. 4. In Fig. 3, Self2Self shows the best numerical performance. Please discuss the result. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations. --------------------- After Rebuttal -------------------- The responses have addressed my questions. I have raised my score to 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your review of our paper. As there are some overlaps between Questions and Weaknesses, we will consolidate and answer them together. Q1: The paper does not well motivate the choice of the i.ni.d. noise model. It is not clear why such an i.ni.d. noise model can properly characterize the statistics of the real-world noise considered in the paper. Why is an i.ni.d. noise model used to characterize the real-world noise? Reply: As analyzed in Section 3.2 of the main paper, real-world image noise is spatially correlated and signal-dependent. A structured and heteroscedastic Gaussian likelihood function can theoretically model real-world noise, but it is computationally expensive due to the large covariance matrix and also impractical due to the unknown noise variance. The reason we chose the *i.ni.d.* Gaussian likelihood function $p(y_0|x_0, \phi_0)=\mathcal{N}(y_0;x_0, \phi_0^{-1})$ to model real-world noise is that it allows us to model the spatially variant character of real-world noise (the *ni* in *i.ni.d.* means "not identical") while avoiding explicit modeling of the covariance (the *i* means "independent"), as already explained in lines 147-149 of Section 3.3. Such a choice trades off modeling precision for practical feasibility. Based on this *i.ni.d.* Gaussian model, we can dynamically estimate the noise precision posterior during the inverse diffusion process and then obtain an optimal $x^{*}_{t-1}$ that balances the prior and the measurement. As shown in Figure 4 of the main paper, the estimated variance of the *i.ni.d.* noise model matches the noise level of noisy images well and is *signal-dependent*, implying that the estimated *i.ni.d.* model is suitable for real-world noise. Another feasible choice is the *i.i.d.* Gaussian likelihood combined with our adaptive likelihood estimation (ALE) for the precision **scalar** (not the **vector** of the *i.ni.d.* model). 
This model assumes the same noise variance across the spatial locations of real-world images. We compare this variant against our method and show the results in the following table:

| Datasets | CC | FMDD |
| - | - | - |
| *i.ni.d.* + ALE (ours) | **38.01/0.959** | **33.14/0.860** |
| *i.i.d.* + ALE | 33.66/0.856 | 27.29/0.549 |

It is observed that adopting the *i.ni.d.* noise model is necessary for real-world denoising and significantly outperforms the *i.i.d.* noise model. Q2: The paper is not well-written, lacking proper motivation and explanation of the design of the proposed method. Reply: As explained in our reply to Q1, the motivation for introducing the *i.ni.d.* Gaussian likelihood for real-world noise has been explained in Section 3.2 and Section 3.3 of the main paper; please see lines 137-144 and lines 146-149. This *i.ni.d.* noise model has also been verified in Figure 4(a) of Section 4.3, where the non-identical variances successfully capture the spatially varying noise level in noisy images. Nevertheless, we noticed that this motivation was not mentioned in the Introduction section, which may confuse readers and impede readability. We will modify our introduction in the revised version to include this motivation. Q3: Why do we want to adaptively update the parameters in reverse diffusion? Please provide some high-level explanation. Reply: The reason we estimate the precision $\phi_{t-1}$ during the inverse diffusion process is that specifying the accurate noise precision for each spatial location of $y_{t-1}, t=[0,\cdots T]$ is difficult and impractical, as indicated by lines 94-96 and 150-152 of the main paper. Therefore, we assign $\phi_{t-1}$ a prior and adaptively infer its posterior using variational Bayes. The updated precision successfully captures the spatially varying noise of noisy images; see Figure 4(a) in Section 4.3. 
Without the prior modeling and adaptive likelihood estimation, the denoising performance degrades significantly, as indicated in Table 3 of the main paper. Q4: The approximation adopted in Eq. 15 may lead to an incorrect $x^\ast$. For example, when the function is non-convex, maximizing its Jensen lower bound produces an estimate that is far away from $x^\ast$. Please justify the accuracy of the approximation used in Eq. 15. Reply: We argue that as $\log(\cdot)$ is a concave function and $\log p(y_{t-1}|x_{t-1})$ in Eq. (15) equals $\log E_{g(\phi_{t-1})}p(y_{t-1}|x_{t-1}, \phi_{t-1})$, applying Jensen's inequality is **guaranteed** to yield its lower bound $E_{g(\phi_{t-1})}\log p(y_{t-1}|x_{t-1}, \phi_{t-1})$. Optimizing the lower bound generally produces satisfactory solutions, as in variational inference, VAEs, and diffusion models. We can empirically check the approximation accuracy in Eq. (15) by approximating $p(y_{t-1}|x_{t-1})$ via Monte-Carlo integration $\frac{1}{M}\sum_{s=1}^Mp(y_{t-1}|x_{t-1}, \phi^s_{t-1}), \phi^s_{t-1} \sim g(\phi_{t-1})$. Then the solution $\bar{\pi}_ {t-1}y_ {t-1} + (1-\bar{\pi}_ {t-1})\mu_\theta(x_t, t)$ with $\bar{\pi}_ {t-1} = \frac{\sigma_{t}^2}{\sigma_{t}^2 + \frac{1}{M}\sum_{s=1}^M (\phi^s_ {t-1})^{-1}}$ approaches the optimal $x^\dagger_{t-1}$, which can be considered the ground-truth MAP estimate. We compare the denoising performance of using $x^\dagger_{t-1}$ against using $x^*_{t-1}$ in Eq. (15) in the following table, which shows that the two methods perform similarly and hence indicates that the approximation error is small.

| Method | Using $x^\dagger_{t-1}$ (GT) | Using $x^*_{t-1}$ (in Eq. (15)) |
| - | - | - |
| SIDD Val | 34.78/0.888 | 34.76/0.887 |
| CC | 38.01/0.958 | 38.01/0.959 |

Q5: In Fig. 3, Self2Self shows the best numerical performance. Please discuss the result. Reply: We note that Self2Self only performs well on real-world images with relatively light noise, e.g., the PolyU and CC datasets, as shown in Table 3. 
Self2Self happens to achieve the highest PSNR for that particular PolyU image in Figure 3. However, its numerical performance on the whole PolyU dataset underperforms ours. Moreover, Self2Self fails to handle heavy noise, as shown in Table 3 and Figure 5 of the paper. --- Rebuttal Comment 1.1: Title: Thanks for the response. Score raised. Comment: I thank the authors for the thorough response to my comments. I found them very clear and helpful for me to better understand the motivation of the noise model and derivation of the proposed method. After a careful re-evaluation, I have raised my score from 3 to 5. --- Reply to Comment 1.1.1: Comment: Thank you for taking the time to read the rebuttal. We are glad that our response resolved your questions, and thank you for increasing your score.
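The per-pixel MAP combination discussed in the Q4 reply above, $x^*_{t-1}=\bar{\pi}_{t-1}y_{t-1}+(1-\bar{\pi}_{t-1})\mu_\theta(x_t,t)$ with $\bar{\pi}_{t-1}=\sigma_t^2/(\sigma_t^2+1/\mathrm{E}[\phi_{t-1}])$, can be sketched in NumPy. This is a hedged sketch, not the authors' code; the function and argument names are our own, and we use the posterior mean of the precision as the plug-in estimate:

```python
import numpy as np

def map_update(y_tm1, mu_theta, sigma_t2, phi_mean):
    """Per-pixel MAP blend of the noisy measurement y_{t-1} and the
    diffusion-prior mean mu_theta(x_t, t), using
    pi = sigma_t^2 / (sigma_t^2 + 1 / E[phi_{t-1}]).
    phi_mean is the posterior mean of the per-pixel noise precision
    (e.g. alpha_hat / beta_hat for a Gamma posterior)."""
    pi = sigma_t2 / (sigma_t2 + 1.0 / phi_mean)
    return pi * y_tm1 + (1.0 - pi) * mu_theta

# Where the estimated precision is high (light noise), the update trusts the
# measurement; where it is low (heavy noise), it falls back on the prior mean.
y = np.array([1.0, 1.0])
mu = np.array([0.0, 0.0])
x = map_update(y, mu, sigma_t2=1.0, phi_mean=np.array([1e6, 1e-6]))
```

Because the precision enters per pixel, the blend is spatially adaptive, which is what allows the non-identical variances to capture spatially varying noise.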
Summary: Overall, this paper presents a novel approach to real-world image denoising: a sophisticated method that combines adaptive likelihood estimation, MAP inference, and variational Bayes within the diffusion model framework. Strengths: 1. Utilizing variational Bayes to dynamically infer the precision posterior is innovative and provides a more accurate modeling of real-world noise. 2. By exploring local priors in low-resolution diffusion models, the method can directly handle high-resolution noisy images without extensive patch-based or resize-based operations. 3. The proposed method's effectiveness is demonstrated through extensive experiments on diverse real-world datasets, showcasing superior performance compared to other unsupervised denoising methods. Weaknesses: 1. The method relies on hyperparameters (e.g., prior precision, temperature parameter, kernel scale, etc.), which might require careful tuning for optimal performance on different datasets. The authors criticize previous methods for being dependent on hyperparameters but do not overcome this limitation in this paper. 2. The contribution of the local Gaussian convolution might be overclaimed. As shown in Table 3, the performance improvements brought by the rectification are pretty limited and look like experimental variations. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. What is the NFE for DDRM used in this paper? What is the inference time for DDRM and your method respectively? 2. In Algorithm 1, the threshold is set to 1e-6. Is performance sensitive to different values? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Authors discussed the limitation in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of our work. Q1: The method relies on hyperparameters (e.g., prior precision, temperature parameter, kernel scale etc.), which might require careful tuning for optimal performance for different datasets. Authors criticize previous methods for being dependent on hyperparameters but do not overcome such limitation in this paper. Reply: Our method does rely on some hyperparameters, but they are relatively robust across different choices. We ablate different values of $\beta$, the temperature $\gamma$, and the kernel scale $s$ on the CC dataset and report the corresponding performance in the following tables:

| $\beta$ ($\gamma=5, s=1$) | 5e-3 | 1e-2 | 1.5e-2 |
| :--- | :---: | :---: | :---: |
| PSNR/SSIM | 38.03/0.957 | 38.01/0.959 | 37.47/0.953 |

| $\gamma$ ($\beta=0.01, s=1$) | 1/4 | 1/5 | 1/10 |
| :--- | :---: | :---: | :---: |
| PSNR/SSIM | 37.90/0.957 | 38.01/0.959 | 37.80/0.953 |

| $s$ ($\gamma=5, \beta=0.01$) | 0.8 | 1.0 | 1.2 |
| :--- | :---: | :---: | :---: |
| PSNR/SSIM | 37.876/0.957 | 38.01/0.959 | 38.10/0.959 |

It can be observed that the denoising performance resulting from different hyperparameters does not change significantly, indicating that they are quite insensitive to moderate changes. In addition, regarding the compared methods like GDP and DR2, we carefully tuned their hyperparameters, but they still perform poorly in real-world denoising. In the revised version, we will highlight their inability to handle real-world noise rather than their reliance on hyperparameters. Q2: The contribution of local Gaussian convolution might be overclaimed. As shown in Table 3, the performance improvements brought by rectification are pretty limited, which look like experimental variations. Reply: We agree that the local Gaussian convolution introduced in Eq. (17) is a small technical improvement rather than one of the main contributions of our paper. Therefore, we did not include it among our main contributions; see lines 59-63 of the main paper. 
But we believe the local Gaussian convolution is indeed helpful, as it consistently enhances the denoising performance on all test datasets, as shown in Table 3, despite the minor improvements on the FMDD and PolyU datasets. Q3: What is the NFE for DDRM used in this paper? What is the inference time for DDRM and your method respectively? Reply: We used DDIM sampling and 20 NFEs for DDRM, following its default setting. The inference time of DDRM for images with resolution $256\times 256$ is about 4s, while our method takes about 230s. As discussed in the limitations section of the main paper, our method relies on DDPM and large sampling steps for good performance. Speeding up the inference is our next research direction. Q4: In Algorithm 1, the threshold is set to 1e-6. Is performance sensitive to different values? Reply: The denoising performance is insensitive to different thresholds, as the alternate optimization in lines 5-8 of Algorithm 1 is guaranteed to converge, as indicated by [1]. In the following table, we ablate different thresholds on the CC dataset, and the results indicate that the threshold is robust.

| Threshold | 1e-5 | 1e-6 | 1e-7 | 1e-8 |
| ----------- | ----------- | ----------- | ----------- | ----------- |
| PSNR/SSIM | 38.00/0.958 | 38.01/0.959 | 38.01/0.959 | 38.01/0.959 |

[1] Bishop, C. M., and Nasrabadi, N. M. Pattern Recognition and Machine Learning. New York: Springer, 2006. --- Rebuttal Comment 1.1: Comment: Thank you so much for the response. I have no further questions and will maintain my initial rating.
Rebuttal 1: Rebuttal: We thank all reviewers for their review of our paper. The response to each reviewer has been posted separately in the following. The 6958_rebuttal.pdf contains figures related to responses to Reviewers UCEq and sawX. Pdf: /pdf/ce75c835ed7817d00674791785eaf5562f3c70d2.pdf
NeurIPS_2024_submissions_huggingface
2,024
Summary: The paper proposes a method to tackle real-world noise using adaptive likelihood estimation. For this, the authors develop a technique to dynamically infer the precision posterior using variational Bayes. The authors perform a comprehensive evaluation on real-world denoising datasets to show the effectiveness of the method. Strengths: 1. The authors propose a method that can adaptively estimate the posterior distribution and make efficient use of off-the-shelf methods for real-world denoising. 2. The authors find that local priors from diffusion models pre-trained with LR images are more effective in real-world denoising tasks. 3. The paper is well written and easy to follow. Derivations seem to be accurate. Weaknesses: 1. The authors have utilized a gamma-based hyperprior without justifying the design choice or comparing it with other possible prior distributions. Further experimental analysis is required to show the need for a gamma prior. 2. The method seems to be applicable only to real-world denoising, limiting its practical utility. 3. Analysis with key diffusion-based restoration methods like [1,2] is missing. [1] Diffusion Posterior Sampling for General Noisy Inverse Problems [2] Manifold Preserving Guided Diffusion 4. [2] achieves good results with just 50 steps of sampling. Could the authors provide an analysis of the variation of performance with different numbers of sampling steps? Technical Quality: 2 Clarity: 4 Questions for Authors: 1. Can the authors provide an intuition for utilizing a gamma hyperprior over a Gaussian prior as in [1,2]? [1] Diffusion Posterior Sampling for General Noisy Inverse Problems [2] Manifold Preserving Guided Diffusion 2. Is the method generalizable to linear inverse problems other than denoising? [1,2] and DDRM are all generic methods providing a general formulation for linear inverse problems. 3. 
Could the authors provide an analysis of the variation of performance with different number of sampling steps. Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 2 Limitations: Yes, The authors have presented the limitations in a separate section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your recognition of our work. As there is some overlap between the Questions and Weaknesses, we consolidate and answer them together. Q1: The authors have utilized a gamma-based hyperprior without justifying the design choice or comparing it with other possible prior distributions. To demonstrate the need for a gamma prior, further experimental analysis is required. Reply: In our paper, we utilize the *i.n.i.d.* Gaussian distribution to model real-world noise, and we dynamically estimate the noise precision $\phi_{t-1}$ at step $t$ by introducing a precision prior and conducting posterior inference. We chose the gamma prior for $\phi_{t-1}$ because the gamma distribution is the conjugate prior for the Gaussian likelihood with unknown precision, so the corresponding precision posterior also follows a gamma distribution. This property is commonly used in variational Bayes and allows us to derive a closed-form expression for the precision posterior; see Eqs. (13, 14). As Table 3 in the main paper indicates, without the prior modeling and variational Bayes, the denoising performance degrades significantly, underscoring the importance of the gamma prior. Other informative priors are also feasible in principle, but the posterior may have no closed form, making the problem intractable. We also experimented with a non-informative prior, the Jeffreys prior. In particular, the Jeffreys prior for the precision of a Gaussian also yields a closed-form posterior: if we choose $p(\phi_{t-1})\propto \frac{1}{\phi_{t-1}}$, then the precision posterior $g(\phi_{t-1})$ is a gamma distribution with shape $\hat{\alpha} _ {t-1}=\frac{1}{2 \gamma}$ and rate $\hat{\beta} _ {t-1}=\frac{(y_ {t-1} -\hat{\mu} _ {t-1} )^2 + \hat{\sigma}^2 _ {t-1}}{2 \gamma}$. Hence, we do not need to specify any extra parameters for $p(\phi_ {t-1})$ (i.e., $\alpha$ and $\beta$; refer to Eq. (14)). 
However, we found that such a prior results in all-zero images and is therefore not useful, indicating the importance of utilizing informative priors, e.g., the gamma. Q2: The method seems to be applicable only to real-world denoising, limiting its practical utility. Is the method generalizable to linear inverse problems other than denoising? [1,2] and DDRM are all generic methods providing a general formulation for linear inverse problems. Reply: The primary objective of this paper is to introduce diffusion priors for real-world image denoising, which is significant in photography, microscopy, and CT imaging. Beyond real-world denoising, Section 4.4 of the main paper also verifies the effectiveness of our method on synthetic noise removal, including Poisson and Bernoulli noise. Moreover, our method is readily applicable to image restoration with pixel-wise degradation, including image demosaicing and inpainting. To adapt our method to these tasks, we define the forward process $y_0 = M \odot x_0$, where $M$ is the degradation operator and $\odot$ denotes element-wise multiplication. For image inpainting and demosaicing, $M$ is a binary mask whose 0 values indicate missing pixels of $y_0$. We can incorporate $M$ into $p(y_{t-1}|x_{t-1},\phi_{t-1})$ in Eq. (15), which results in $\hat{\pi}_ {t-1 }=\frac{M\sigma^2_t}{M\sigma^2_t+1/\text{E}(\phi_ {t-1} )}$. We provide visual results of inpainting and demosaicing in Figure 1 of 6958_rebuttal.pdf. In the following table, we also compare our method against DDRM on image demosaicing, and our method shows better results. |Demosaicing (CFA: RGGB)|Set14|CBSD68| |-|-|-| |DDRM|24.68/0.714|24.52/0.705| |Ours|**26.02/0.756**|**25.43/0.732**| When the degradation $M$ is not pixel-wise, adapting our method requires more effort. Q3: Analysis against key diffusion-based restoration methods like [1,2] is missing. 
Reply: DPS [1] approximates the likelihood gradient $\nabla_ {x_t} \log p_t(y_0|x_t)$ at step $t$ by $\nabla_ {x_t} \log p(y_0|\hat{x}_ 0)$, which acts as a hard data-consistency term relating $y_0$ to $x_t$ in the generation process. If the forward model $p(y_0|x_0)$ is given, $x_t$ can be guided by $-s\nabla_{x_t}\log p(y_0|\hat{x}_0)$, with $s$ as the guidance level. MPGD [2] further proposed to first guide $\hat{x}_0$ based on $p(y_0|x_0)$ and then incorporate the modified $\hat{x}_0$ into the DDIM sampling. Although these methods perform well on general noisy inverse problems, they still rely on accurate noise models and are only verified on synthetic noise removal. The following experiment shows that DPS underperforms our method on real-world noise removal. |Datasets|SIDD Val|CC|PolyU| |-|-|-|-| |DPS|34.46/0.881|34.48/0.904|36.26/0.940| |Ours|**34.76/0.887**|**38.01/0.959** |**38.71/0.970**| Q4: Can the authors provide an intuition for utilizing a gamma hyperprior over a Gaussian prior as in [1,2]? Reply: The intuitive interpretation of our method is hijacking the unconditional $x_{t-1}$ at each step and forcing it to approach $y_{t-1}$. The modified $x^*_{t-1}$ is a convex combination of the mean of $x_{t-1}$ and $y_{t-1}$, with the coefficient $\hat{\pi}_ {t-1}$ determined by the relative magnitude of $\sigma^2_{t}$ and the inverse of the gamma posterior $g(\phi_{t-1})$. We provide a visual illustration of our method in Figure 2 of 6958_rebuttal.pdf. Q5: [2] achieves good results with just 50 sampling steps. Could the authors provide an analysis of how performance varies with the number of sampling steps? Reply: We ablate different numbers of sampling steps below: | Steps| $T=1000$|$T=500$|$T=250$| | -| - | - | - | | SIDD Val| **34.76/0.887**|33.54/0.838| 23.89/0.825| | CC| **38.01/0.959**|37.18/0.947|22.72/0.716| We observe that $T=500$ degrades performance, though the drop is acceptable. At $T=250$, denoising fails. 
As pointed out in the Limitations section, our method relies on DDPM and requires a large number of sampling steps for good performance. Speeding up the inference is a key direction for our future work. --- Rebuttal Comment 1.1: Comment: Dear authors, I thank you for the rebuttal. After going through the rebuttal, I have decided to maintain my rating. The key factors impacting this score are the large number of diffusion steps needed for good performance and the limited applicability to just denoising problems. --- Reply to Comment 1.1.1: Comment: Thanks so much for your reply. We acknowledge that the proposed method requires a large number of sampling steps, and improving the sampling efficiency will be our next step. Regarding the application to other image restoration tasks, we showed that it is straightforward to apply our method to pixel-wise degradation tasks, e.g., image inpainting and demosaicing, as replied to Q2. In the future, we would like to adapt our method to other, non-pixel-wise degradations, e.g., image deblurring and super-resolution.
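To make the pixel-wise adaptation in the reply to Q2 concrete, the masked convex-combination update with coefficient $\hat{\pi}_ {t-1}$ can be sketched numerically. This is an illustrative numpy sketch with hypothetical variable names, not the authors' implementation; the blend direction (larger $\hat{\pi}$ pulls the sample toward the observation) follows the intuition stated in the reply to Q4.

```python
import numpy as np

def masked_blend_step(mu, y, sigma2_t, e_phi, mask):
    """Blend the unconditional mean `mu` with the observation `y`, per pixel.
    `sigma2_t` is the diffusion variance at step t, `e_phi` stands in for
    E(phi_{t-1}) from the gamma posterior, and `mask` is 1 where y is
    observed and 0 where the pixel is missing (inpainting/demosaicing)."""
    pi_hat = mask * sigma2_t / (mask * sigma2_t + 1.0 / e_phi)
    # Where mask == 0, pi_hat == 0 and the pixel is generated unconditionally;
    # where the observation dominates, pi_hat -> 1 and the pixel tracks y.
    return (1.0 - pi_hat) * mu + pi_hat * y

# Missing pixels (mask == 0) keep the generated mean; observed pixels move toward y.
mu = np.zeros(4)
y = np.ones(4)
mask = np.array([1.0, 1.0, 0.0, 0.0])
out = masked_blend_step(mu, y, sigma2_t=1.0, e_phi=1.0, mask=mask)
```

With `sigma2_t` equal to `1/e_phi`, observed pixels land halfway between `mu` and `y` (pi_hat = 0.5), while masked pixels stay at `mu`, matching the inpainting behavior described in the reply.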
A Theoretical Understanding of Self-Correction through In-context Alignment
Accept (poster)
Summary: This paper investigates how large language models (LLMs) can improve their performance through self-correction without external feedback. The authors provide a theoretical framework for understanding self-correction as an in-context alignment process. They demonstrate that LLMs can refine their responses based on self-generated feedback using Transformer modules. They conduct experiments on synthetic datasets, showing that self-correction can significantly enhance model performance and mitigate issues like social bias and jailbreak attacks. Strengths: 1. The paper offers a robust theoretical framework for understanding self-correction in LLMs and explores the application of self-correction to real-world issues such as social bias and jailbreak attacks, highlighting its practical significance. 2. The detailed exploration of various Transformer components (e.g., softmax attention, multi-head attention) and their roles in self-correction provides valuable insights for model design. 3. The paper is well written and the results are clearly demonstrated. Weaknesses: This paper only focuses on two scenarios: social bias and jailbreak attacks. It does not show the effectiveness of the method on other types of tasks, such as reasoning. Technical Quality: 3 Clarity: 4 Questions for Authors: What potential strategies can be employed to improve the robustness of self-correction when the self-generated feedback is noisy or inaccurate? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, they discussed the limitations in the checklist Limitations section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback and for acknowledging our theoretical insights! We address your remaining concerns below. --- **Q1.** This paper only focuses on two scenarios: social bias and jailbreak attacks. It does not show the effectiveness of the method on other types of tasks, such as reasoning. **A1.** We note that, as mentioned in the paper, there are many ongoing studies and debates on whether self-correction works for reasoning, e.g., [1,2]. Although the ability of large language models (LLMs) to self-correct in reasoning tasks remains controversial, some existing studies on real-world reasoning datasets have shown that self-correction can actually improve response accuracy, which aligns well with our theory. For instance, - Zhang et al. [3] examined the impact of different critics on correction, finding that stronger critics can lead to higher correction gains. This finding aligns well with our theory and is consistent with our conclusions drawn from the noisy-critic experiments on both the synthetic dataset and the BBQ dataset. - In addition, Lin et al. [4] explored the linear relationship between generation, critic, and correction. Their conclusion (1), that critique-focused training markedly enhances performance, also aligns well with our theory. - For a synthetic reasoning dataset, the (very recent) ICML'24 tutorial shows that training with mistakes+corrections in a controlled setting does boost model accuracy (78% $\rightarrow$ 94%) [5]. We will include these useful reference results in the revision for a comprehensive discussion. As a theoretical study, we do not intend to develop a superior approach among these variants, but to validate our theoretical insights on real-world tasks. 
As explained in Sec 5 (Line 307), our theory suggests that self-correction indeed has the potential to improve the alignment of LLMs, "especially when the critics are relatively accurate", which motivates us to study its gains in the social bias and jailbreak scenarios, where models can provide accurate critics. Since reasoning tasks require more accurate critics and better refinement strategies, they demand more careful practical designs, which is beyond the scope of this work. Nevertheless, we believe that our understanding of self-correction as in-context alignment can provide principled insights for future designs. **Ref:** [1] Huang, Jie, et al. "Large language models cannot self-correct reasoning yet." ICLR 2024. [2] Valmeekam, Karthik, Matthew Marquez, and Subbarao Kambhampati. "Can large language models really improve by self-critiquing their own plans?" arXiv preprint arXiv:2310.08118 (2023). [3] Zhang et al. "Small Language Models Need Strong Verifiers to Self-Correct Reasoning." arXiv preprint arXiv:2404.17140 (2024). [4] Lin, Guo, et al. "CRITICBENCH: Benchmarking LLMs for Critique-Correct Reasoning." arXiv preprint arXiv:2402.14809 (2024). [5] Allen-Zhu et al. [Physics of Language Models - Part 2.2: How to Learn From Mistakes](https://physics.allen-zhu.com/part-2-grade-school-math/part-2-2). ICML 2024 tutorial, July 2024 (no arXiv yet). --- **Q2.** What potential strategies can be employed to improve the robustness of self-correction when the self-generated feedback is noisy or inaccurate? **A2**. That's a great question! To answer it, we conducted a series of real-world controlled experiments on LLMs, summarized in the **General Response** at the top. Based on these results, we find the answer is twofold. 1. 
**LLMs themselves are robust to critic noise.** We find that a very noisy critic (e.g., only 25% accuracy) is already enough to attain gains over the baseline, showing that LLMs are robust to noise and can benefit from noisy critics. | Critic Accuracy | Prediction Accuracy after correction | | --- | --- | | w/o Critic | 20.16% (baseline) | | 0% | 18.97% (-1.19%) | | 25% | 24.70% (+4.54%) | | 50% | 29.00% (+8.82%) | | 75% | 33.22% (+13.05%) | | 100% | **37.26% (+17.09%)** | 2. **CoT prompting can improve critic quality.** In the meantime, we also observe that more accurate critics lead to larger gains. So, secondly, one way to obtain a more accurate critic is through chain-of-thought (CoT) prompting, i.e., instructing the model to think step by step before giving the final critic. As shown in the comparison below (quoted from the General Response), CoT can improve critic accuracy from 20.52% to 36.99%, consequently improving the final prediction from 23.01% to 33.54%. | Critic Type | no self-correction (baseline) | natural language critic | explicit critic (w/o CoT) | explicit critic (w/ CoT) | | --- | --- | --- | --- | --- | | Critic acc | / | / | 20.52% | **36.99%** | | final prediction acc | 20.16% | 28.64% | 23.01% | **33.54%** | In summary, self-correction still works well under noisy critics, and when critics are very noisy, we can use CoT to improve the critic quality. --- Thanks for your insightful comments. We hope our new real-world verification on LLMs addresses your concerns. Please let us know if there is more to clarify. --- Rebuttal Comment 1.1: Title: Thanks for your detailed reply Comment: This is really helpful, great work!
Summary: This paper investigates the ability of large language models (LLMs) to improve their responses through self-correction from a theoretical perspective. Specifically, the authors prove the self-correction mechanism through an in-context alignment formulation and analyze why self-correction naturally improves LLM performance in transformer-based models. Additionally, they conduct experiments on synthetic datasets to validate their findings and also verify them in two applications. This paper provides a strong foundation for understanding the self-correction mechanism. It also offers theoretical insights for explaining the prompting of LLMs through agent workflows. Strengths: 1. The author presents a theoretical proof of the self-correction mechanism for transformer-based LLMs from the perspective of in-context alignment. Given that self-correction is a common practice when prompting LLMs, and there is considerable research on designing related algorithms and prompts, this theoretical derivation enhances the transparency of LLM inference. Furthermore, understanding the self-correction mechanism is crucial for designing new mechanisms. 2. From the perspective of the proof, the author's simplification and formulation of the self-correction mechanism as a self-alignment mechanism is interesting. By introducing an in-context learning task with triplet examples {x, y_i, r_i}, the author presents feedback in the form of a reward. The derivation process is both rigorous and clear. 3. Through theoretical derivation, the author presents several effective and meaningful insights (Lines 216-238). These insights, being derived from theory, are significant for designing structures and algorithms related to self-correction. 4. The author utilized synthetic data to construct experiments related to the theoretical derivations. Through these experiments, they validated the inferences made in the theoretical proofs (Lines 216-238). This further strengthens the solidity of the work. 
Weaknesses: 1. In the proof, the author simplifies the concept by converting criticism into a reward, represented as a real number. Could you discuss the discrepancy between this simplification and the actual scenario of Natural Language Criticism? Additionally, could you please discuss whether this simplification significantly affects the theoretical proof when switching to Natural Language Criticism. 2. Strictly speaking, this cannot be considered a weakness. Although Section 5 provides improvements from self-correction in two application scenarios, I think these are unnecessary, given the numerous relevant empirical studies already available. The performance improvement due to self-correction has been supported by numerous experiments and has become a consensus in the field. If additional experiments, using either synthetic data or real self-correction trajectory data, could explore different types of critics (i.e., rewards under the formulation of in-context alignment) and their impact on the self-correction mechanism (such as the number of self-correction rounds and the heterogeneity of critics), it would deepen our understanding and exploration of the self-correction mechanism. Technical Quality: 4 Clarity: 4 Questions for Authors: Please refer to the questions in the "Weaknesses" section. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer zSQV for appreciating the novelty of our theory and the solidness of our experiments! Below, we address your remaining concerns about the verification experiments. --- **Q1.** In the proof, the author simplifies the concept by converting criticism into a reward, represented as a real number. Could you discuss the discrepancy between this simplification and the actual scenario of **Natural Language Criticism**? Additionally, could you please discuss whether this simplification significantly affects the theoretical proof when switching to Natural Language Criticism. **A1**. Indeed, with LLMs, one can instruct the models in different ways to attain different forms of self-critics, which influence the self-correction performance as well. **Experimental Comparison.** Following your suggestions, we compare two aspects of different types of self-critic on LLMs: - **Natural language vs. explicit labels**: for natural language, we ask a plain critic question like "Is there bias in the response?", to which the model replies in free-form natural language; for explicit labels, we instead instruct the model to answer a binary question, e.g., "Is this response biased or unbiased? (a) biased (b) unbiased." From the table below, we can see that the explicit label alone performs worse than natural language (23.01% vs. 28.64%), which might be because natural language provides fine-grained reward signals for alignment. - **Direct answer vs. chain-of-thought (CoT):** one way to remedy the limitation of direct responses is CoT. Here, we use zero-shot CoT to instruct the model to think step by step before giving the explicit labels. The table below shows that this does enhance the critic accuracy ($20.52\% \rightarrow 36.99\%$) and improves the correction accuracy as well ($23.01\% \rightarrow 33.54\%$). 
This shows that CoT is also a powerful technique to enhance self-correction by improving critic quality, which also aligns with our theory. *Table B. Prediction and critic accuracy vs. different types of critics on BBQ.* | Critic Type | no self-correction (baseline) | natural language critic | explicit critic (w/o CoT) | explicit critic (w/ CoT) | | --- | --- | --- | --- | --- | | Critic acc | / | / | 20.52% | 36.99% | | final prediction acc | 20.16% | 28.64% | 23.01% | 33.54% | **Our theory is compatible with different reward formats.** In our alignment loss, **the essential role of the rewards is to provide a ranking $\tau$ of the responses $y_i$,** while the specific reward values $r_i$'s do not matter. Therefore, our theory is general and applicable to rewards of different formats, as long as they can provide such a ranking. For LLMs trained on natural language, a natural-language critic also provides a signal that LLMs can understand and use to rank the responses. Compared to explicit labels, natural language and CoT prompting with more detailed analysis can provide a fine-grained critic (akin to a multi-dimensional reward vector) that gives more accurate preferences. --- **Q2.** Strictly speaking, this cannot be considered a weakness. … If additional experiments, **using either synthetic data or real self-correction trajectory data, could explore different types of critics (i.e., rewards under the formulation of in-context alignment)** and **their impact on the self-correction mechanism (such as the number of self-correction rounds and heterogeneity of critics)**, it would deepen our understanding and exploration of the self-correction mechanism. **A2**. Thank you for your insightful thoughts! 
We fully concur with this point, and following your advice, we further validated our theoretical insights extensively on real-world LLMs, covering the influence of **critic accuracy, critic types, and self-correction rounds** on self-correction performance, which further validates our theory (summarized in the **General Response** at the top). Here we additionally quote the results on self-correction rounds below. According to our theory, more self-correction rounds amount to more in-context alignment examples, which helps the in-context alignment process. We validate this further on real-world LLMs by applying Checking-as-Context for multiple rounds. As shown in the table below, with the ground-truth critic, more rounds lead to increasing accuracy, which aligns well with our theory. The performance peaks at the 3rd round, potentially because of the model's limited capability to handle long contexts. With the self-critic, we instead find that a 1-round critic gives the best performance; multi-round checking still outperforms the baseline but does not bring further benefits. This is akin to the error accumulation that commonly occurs in pseudo-labeling methods, where the model deteriorates under iterative self-labeling. Our analysis builds a theoretical connection between intrinsic self-correction and self-labeling alignment, which provides a principled explanation for this phenomenon. | Critic Source | 0 round (baseline) | 1 round | 2 rounds | 3 rounds | 4 rounds | | --- | --- | --- | --- | --- | --- | | Correction acc with ground-truth critic | 20.16% | 35.50% | 35.20% | 46.31% | 38.54% | | Correction acc with self-critic | 20.16% | 35.67% | 33.33% | 32.78% | 32.33% | --- Thank you again for your insightful comments, which significantly strengthen our work! We are happy to address any further concerns in the discussion stage. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' point-by-point rebuttal and supplementary experiments, which addressed my questions. 
I particularly liked the experimental design of "Natural language vs. explicit labels" and the examination of "the influence of critic accuracy, critic types, and self-correction rounds on self-correction performance." These experiments undoubtedly increase the technical depth of the paper. After carefully rechecking the paper and reading the reviews and rebuttals from other reviewers, I believe this paper is of high quality. I will maintain my positive opinion. Thank you! --- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for the prompt response and for carefully reviewing the other reviews as well. We are glad that you find our rebuttal satisfactory and the new experiments "undoubtedly increase the technical depth of the paper." We will be sure to incorporate these results in our revision.
Summary: # Summary This paper provides a theoretical analysis of self-correction from in-context learning, demonstrating that LLMs can refine their responses by using accurate self-examinations as feedback. # Contributions 1. Theoretical Framework: The paper develops a theoretical framework that explains how self-correction capabilities arise in LLMs, extending beyond simplified linear transformer models to realistic transformers. 2. Applications: The authors demonstrate the real-world relevance of their findings, including defending against jailbreaks and mitigating social biases. Strengths: 1. Theoretical Insights: The paper offers a new theoretical perspective on self-correction in LLMs. 2. Validation: Validation on synthetic datasets and the BBQ benchmark supports the theoretical claims. 3. Applications: Practical applications, such as improved AI safety and bias mitigation, show the real-world impact of this paper. Weaknesses: 1. Validation Scope: The validation is primarily on synthetic datasets. 2. Dependence on Critic Quality: The paper acknowledges that the effectiveness of self-correction heavily relies on the quality of the critics, which may not always be reliable or available. It is also costly in practice. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Could you please explain why the primary validation is on synthetic datasets? (The BBQ benchmark is in the appendix.) 2. If possible, could you add more experiments on realistic benchmarks? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: No potential negative societal impact. For improvement, please check the Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer jEFz for appreciating our theoretical insights, empirical verification, and real-world applications. We address your concerns below. --- **Q1.** Validation Scope: The validation is primarily on synthetic datasets. **A1**. For completeness, following your advice, we further validated our theoretical insights extensively on real-world LLMs, covering the influence of the initial response, critic quality, and model size on self-correction, which further validates our theory. The results are summarized in the **General Response** at the top. Please take a look. Thanks! --- **Q2.** Dependence on Critic Quality: The paper acknowledges that the effectiveness of self-correction heavily relies on the quality of the critics, which may not always be reliable or available. It is also costly in practice. **A2**. That's a great question! According to our real-world verification experiments, the answer is twofold. **LLMs can work with noisy critics.** First, as we show in the real-world verification experiment (quoted from the General Response), a very noisy critic (e.g., only 25% accuracy) is already enough to attain gains over the baseline, showing that LLMs are robust to noise and can benefit from noisy critics. | Critic Accuracy | Prediction Accuracy after correction | | --- | --- | | w/o Critic | 20.16% (baseline) | | 0% | 18.97% (-1.19%) | | 25% | 24.70% (+4.54%) | | 50% | 29.00% (+8.82%) | | 75% | 33.22% (+13.05%) | | 100% | 37.26% (+17.09%) | **CoT prompting for better critics.** In the meantime, we also observe that more accurate critics lead to larger gains. So, secondly, one way to obtain a more accurate critic is through chain-of-thought (CoT) prompting, i.e., instructing the model to think step by step before giving the final critic. As shown in the comparison below (quoted from the General Response), CoT can improve critic accuracy from 20.52% to 36.99%, consequently improving the final prediction from 23.01% to 33.54%. 
| Critic Type | no self-correction (baseline) | natural language critic | explicit critic (w/o CoT) | explicit critic (w/ CoT) | | --- | --- | --- | --- | --- | | Critic acc | / | / | 20.52% | 36.99% | | final prediction acc | 20.16% | 28.64% | 23.01% | 33.54% | To summarize, self-correction still works well under noisy critics, and when critics are very noisy, we can use CoT to improve their quality. --- Thanks for your insightful questions. We hope our new real-world verification on LLMs addresses your concerns. Please let us know if there is more to clarify. --- Rebuttal Comment 1.1: Title: Nice rebuttal! Comment: Your rebuttal convinced me a lot, thanks so much. I see you evaluated your methods on two 7B models. I am wondering about their effectiveness on smaller models. Could you also test your methods on these two models and **include the results in your appendix**: Phi-3: https://arxiv.org/abs/2404.14219 MiniCPM: https://arxiv.org/abs/2404.06395 Many edge users really care about these SLMs. If the results are promising, I will increase my score. Also, sorry for the late reply. Let me know what I can do. --- Reply to Comment 1.1.1: Title: Thanks! Comment: Thank you for appreciating our response. We are very glad to hear that you find it convincing. Indeed, for real-world LLMs, model size is an important factor that reflects model capabilities. Here, following your suggestions, we further evaluate Phi-3 and MiniCPM-2B on BBQ. We also include Qwen-1.5 1.8B and Qwen-1.5 7B to study the influence of model size. We follow the same evaluation protocol for all settings for a fair comparison. **Results.** As shown in the table below, we find that very small models, like MiniCPM-2B and Qwen1.5-1.8B, can hardly benefit from self-correction, although they have good self-critic accuracy (enough to gain from self-correction, as we show in Experiment 1 in the **General Response**). 
Instead, when the model becomes larger, e.g., Phi-3-mini (3.8B) and Qwen1.5-7B, it shows great improvements via self-correction. **Analysis.** Our theoretical analysis indicates that a full Transformer with sufficient capability and depth is required to perform in-context alignment (Sec 3.2), and our synthetic experiment confirms this by ablating each Transformer component (Sec 4). These new real-world experiments further validate our theoretical insight that self-correction requires the model to be expressive enough. Meanwhile, since Phi-3-mini (3.8B) and Qwen1.5-7B are not very large models (they can be run on a single GPU), we believe that self-correction can still find many successful applications. As the small-model regime is also an active research area, we believe that in the future, with better training or model designs, smaller models could potentially benefit from self-correction as well. | Model | prediction acc (initial) | self-critic acc | prediction acc (with correction) | | --- | --- | --- | --- | | MiniCPM-2B-sft-bf16-llama-format | 36.27% | 62.82% | 25.75% | | Phi-3-mini-4k-instruct (3.8B) | 85.34% | 48.87% | 92.56% (+7.24%) | | Qwen1.5-1.8B | 1.18% | 99.41% | 0.00% | | Qwen1.5-7B | 66.51% | 97.37% | 73.32% (+6.81%) | Hope you find this new experiment satisfactory! Thank you for the insightful comment, which makes our work more complete. We will add these new results in the revision. Please let us know if there is more to clarify.
Summary: This paper analyzes self-correction theoretically from the in-context learning perspective. It extends the theoretical analysis from previously over-simplified transformers to more realistic settings: softmax attention, multi-head attention, etc. It also provides experiments on how self-correction can serve practical applications such as defending against jailbreaks. Strengths: 1. Sheds light on the theoretical explanation of intrinsic self-correction. The authors build a connection between self-correction and alignment models. They provide a theoretical analysis which proves that the Bradley-Terry model and the Plackett-Luce model can be used to perform in-context gradient descent. 2. Both synthetic and real-world experiments are provided in this paper. For example, in the jailbreak attack test, intrinsic self-correction shows excellent defending performance. Weaknesses: In the real-world experiments, the authors provide several techniques: Multi-round Checking, Diverse Checking, and Self-instruct. It is not mentioned how these different self-correction settings fit into the proposed theoretical analysis. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weakness section above. If the connection to the real-world experimental settings can be better explained, this theoretical framework would be more complete. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer bH8U for appreciating the novelty of our theory and our empirical verification. We address your concerns below. --- **Q1.** In real-world experiment, the author provides several techniques: Multi-round Checking, Diverse Checking, Self-instruct. It's not mentioned that how these different self-correction settings can be fit into the proposed theoretical analysis. **A1**. Thank you for your careful reading of our supplementary materials! Indeed, these methods are motivated by our theoretical analysis, and we briefly touched upon these connections in Appendix B.1. We appreciate the opportunity to elaborate further: - **I. Multi-round Checking**. Our theory establishes a connection between self-correction and in-context alignment with multiple query-answer-critic contextual examples:$(x,y_1,r_1,x,y_2,r_2,\dots,x,y_N,r_N)$. According to our theory, one-round checking provides only one in-context training example $(x,y_i,r_i)$, and thus it’s natural to extend this to multi-round checking that provides more training examples $(x,y_i,r_i)_{i=1}^{N-1}$ for the in-context alignment task. Our synthetic experiments also confirm it by showing that an increased number of in-context samples results in smaller errors. - **II. Diverse Checking**. Although our theory primarily focuses on a single query $x$, we can easily extend it to the context with multiple queries, i.e., $(x_1,y_1,r_1,x_2,y_2,r_2,\dots,x_N,y_N,r_N)$ — see a theoretical discussion in Appendix F.1. Compared to single-query alignment, multi-query one ensures that the optimized policy is generalizable across different queries. This theoretical insight led us to develop Diverse Checking which adopts different queries at different rounds. - **III. Self-instruct**. 
There are two variants to instruct the model to refine its prediction $y_i \to y_{i+1}$: 1) asking the same query $x$ again (illustrated in Figure 2), or 2) directly instructing the model to refine the response (i.e., Self-instruct). In practice, we found that both methods achieve self-correction, while Self-instruct is more robust against jailbreaks, especially when the query may contain harmful instructions. Therefore, these three variants are deeply rooted in our theoretical analysis. We will elaborate on these connections better in the revision. --- We hope this explanation addresses your concerns. Additionally, we added extensive verification of our theory on real-world LLMs, which further confirmed our theoretical insights (see **General Response** at the top). Please let us know if any further clarifications are needed! --- Rebuttal Comment 1.1: Title: Thank you for your reply Comment: I think the authors have successfully addressed my question in the weakness part. In general, this is a good theoretical paper combined with convincing real-world experiments. I will hold my positive rating here.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their careful reading and for giving positive feedback on our manuscript regarding the novelty and significance of our analysis. We have addressed the remaining concerns carefully in each response. Notably, besides synthetic tasks, **we further verify our theoretical insights extensively on real-world LLMs, which significantly strengthens our work.** We provide an overview below and will include more complete results in the revision. See also the **Rebuttal PDF** for a better visualization of the main results. **Setup.** As in Sec 5.1, we evaluate on the real-world BBQ datasets with Vicuna-7B unless specified otherwise. Following our theory, we instruct the model to self-correct in a response-critic-refinement process. For the critic step, we either instruct the model to review its own response (intrinsic self-correction) or provide the model with the ground-truth critic (external self-correction). Then, we instruct the model to refine the answer based on the critic. ### **Experiment I. Influence of critic quality on self-correction** Akin to the study in Figure 1(b), we start by evaluating the effect of critic quality, where we control the critic to have p% accuracy by randomly flipping the ground-truth labels. As shown in the table below, **the self-correction accuracy increases linearly as critic accuracy grows, showing a strong relationship between critic quality and correction effect, consistent with our synthetic experiments (Fig 1b).** This further confirms our theory by showing that self-correction is a noisy in-context alignment process and that a less noisy critic leads to better self-correction. Notably, although fully incorrect (0%) critics are harmful, we find that **very noisy critics with only 25% accuracy can still show benefits over the no-correction baseline**, which explains why self-correction with noisy LLMs' own critics can also be useful. *Table A.
Prediction Accuracy vs. critic accuracy on BBQ.* | Critic Accuracy | Prediction Accuracy after correction | | --- | --- | | w/o Critic | 20.16% (baseline) | | 0% | 18.97% (-1.19%) | | 25% | 24.70% (+4.54%) | | 50% | 29.00% (+8.82%) | | 75% | 33.22% (+13.05%) | | 100% | **37.26% (+17.09%)** | ### **Experiment II. Influence of the type of self-critic** With LLMs, one can instruct the models in different ways to attain different forms of self-critics, which also have an influence on the self-correction performance. Here, we compare two different types of self-critic: - **Natural language vs. explicit labels**: for natural language, we ask a plain critic question like “Is there bias in the response?”, where the model replies in free-form natural language; for explicit labels, we instruct the model to answer a binary question, e.g., “Is this response biased or unbiased? (a) biased (b) unbiased.” From the table below, we can see that the explicit label alone performs worse than natural language (23.01% vs 28.64%), which might be because natural language provides fine-grained reward signals for alignment. - **Direct answer vs. chain-of-thought (CoT):** one way to remedy the limitation of direct answers is CoT. Here, we use zero-shot CoT to instruct the model to think step by step before giving the explicit labels. The table below shows that this does enhance the critic accuracy ($20.52\% \rightarrow 36.99\%$) and improves the final prediction accuracy as well ($23.01\% \rightarrow 33.54\%$). This shows that CoT is also a powerful technique to enhance self-correction by improving the critic quality, which again aligns with our theory. *Table B.
Prediction and Critic Accuracy vs. different types of critics on BBQ.* | Critic Type | no self-correction (baseline) | natural language critic | explicit critic (w/o CoT) | explicit critic (w/ CoT) | | --- | --- | --- | --- | --- | | Critic acc | / | / | 20.52% | **36.99%** | | final prediction acc | 20.16% | 28.64% | 23.01% | **33.54%** | ### **Experiment III. Influence of self-correction rounds** According to our theory, more self-correction rounds amount to more in-context alignment examples, which helps the in-context alignment process. We validate this further on real-world LLMs by applying Checking-as-Context for multiple rounds. As shown in the table below, with the ground-truth critic, more rounds lead to increasing accuracy, which aligns well with our theory. The performance peaks at the 3rd round, potentially because of the model’s limited capability in handling long contexts. With the self-critic, we instead find that a 1-round critic gives the best performance; multi-round checking still outperforms the baseline but does not bring further benefits. This is akin to the accumulative errors that widely occur in pseudo-labeling methods, where the model deteriorates under iterative self-labeling. Since our analysis builds a theoretical connection between intrinsic self-correction and self-labeling alignment, it provides a principled explanation for this phenomenon. | Critic Source | 0 round (baseline) | 1 round | 2 rounds | 3 rounds | 4 rounds | | --- | --- | --- | --- | --- | --- | | Correction acc with ground truth critic | 20.16% | 35.50% | 35.20% | **46.31%** | 38.54% | | Correction acc with self critic | 20.16% | **35.67%** | 33.33% | 32.78% | 32.33% | In summary, the three experiments above provide strong evidence that our theoretical understanding through the in-context alignment perspective offers valuable insights into the self-correction dynamics and aligns well with the actual behaviors of LLM self-correction.
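A toy simulation of the setup in Experiment I above (controlling critic accuracy by randomly flipping the ground-truth verdict) illustrates why prediction accuracy grows roughly linearly with critic accuracy. Everything here is hypothetical: a 3-choice task, a fixed initial accuracy, and a refinement rule that simply re-answers among the remaining options:

```python
import random

def self_correct_round(truth, guess, critic_acc, options, rng):
    """One response-critic-refinement step with a critic of given accuracy."""
    verdict = (guess == truth)
    if rng.random() > critic_acc:          # noisy critic: flip the true verdict
        verdict = not verdict
    if verdict:                            # critic says "correct": keep the answer
        return guess
    # critic says "incorrect": re-answer uniformly among the other options
    return rng.choice([o for o in options if o != guess])

def accuracy_after_correction(critic_acc, n=50_000, initial_acc=0.2, seed=0):
    rng = random.Random(seed)
    options = ["a", "b", "c"]
    correct = 0
    for _ in range(n):
        truth = rng.choice(options)
        # initial model answer: correct with probability `initial_acc`
        if rng.random() < initial_acc:
            guess = truth
        else:
            guess = rng.choice([o for o in options if o != truth])
        if self_correct_round(truth, guess, critic_acc, options, rng) == truth:
            correct += 1
    return correct / n

for p in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(p, accuracy_after_correction(p))
```

In this toy model the post-correction accuracy works out to be exactly linear in the critic accuracy, mirroring the monotone trend in Table A; the real-LLM numbers of course depend on far more than this simplified refinement rule.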
Pdf: /pdf/61e7b3f2109a048fff92a9c47b09aa9c01fd021d.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: This paper provides a theoretical framework for understanding self-correction in large language models (LLMs), framing it as a form of in-context alignment. The authors prove that a standard multi-layer transformer can optimize common ranking-based alignment objectives using self-correction samples, generating responses with higher rewards. Their analysis reveals the importance of key transformer design elements for self-correction, including softmax attention for ranking, multi-head attention for token discrimination, feed-forward networks for token transformation, and multiple stacked layers for handling multiple examples. These theoretical results are validated through extensive experiments on synthetic datasets. Inspired by these theoretical insights, the authors demonstrate real-world applications of self-correction, including alleviating social bias in LLM outputs and defending against jailbreak attacks. Their experiments show that self-correction can significantly improve performance on these tasks, with a strong correlation between self-checking accuracy and final performance. This work provides the first theoretical foundation for understanding self-correction in LLMs, offering valuable insights into how this capability emerges and how it relates to model architecture, with implications for improving LLM safety and reliability. Strengths: The idea of in-context alignment for theoretical analysis is interesting. The proof by construction shows that transformers can implement the gradient descent optimization of a Bradley-terry model, and more specifically, how to use a two-head softmax attention layer to implement the algorithm. Driven by the insights from the theory, the authors also conduct experiments on synthetic and jailbreak tasks, showing the self-critic can bring good improvements. Weaknesses: My main concern is about modeling the self-critic process as a ranking problem. 
The authors' modeling ranks the existing hypotheses generated in the context, as shown in equation (7), since the proof by construction uses the information from y1 and the other ys. However, self-correction is about generating a better hypothesis one at a time. The already-generated ones should not be in the list of hypotheses to rank. If the authors can modify the theory to mainly consider the possible hypotheses in the next round of generation, it would make more sense to me. Furthermore, a more important question to answer is why LLMs have such a capability to regenerate better answers, including providing a good critique and being able to use this critique to further improve the results. However, the theory in this paper focuses on the possibility of using transformer layers to implement the algorithm, but it does not necessarily mean transformers will implement such an algorithm, leaving a gap here. Finally, in line 43, the authors mention that they are the "first theoretical analysis showing that LLM can improve alignment in context". It sounds a bit like an overclaim to me. The theory is mainly about transformers, and there is still a huge gap between transformers and LLMs. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can you provide theory about the next-round possible candidates? 2. Why, in your opinion, do LLMs have the ability of self-critique? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: see weakness. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank Reviewer pXYm for appreciating the novelty of our idea. We further address your main concerns on how models generate new responses below. --- **Q1.** My main concern is about modeling the self-critic process as a ranking problem. Can you provide theory about the next-round possible candidates? **A1**. We understand your concern that the theory should be able to generate new responses instead of ranking the existing ones. We note that **our theory does consider generating new responses of higher reward for the test point** $x_N$, although the objective may initially appear to be a ranking problem. The key point is that we adopt a **DPO-like reward function** in our alignment loss (Eq 5), whose reward $r(x,y)=\|WX-Y\|^2$ is directly calculated from the model policy $f(X)=WX$. Consequently, when optimizing the alignment loss in-context, **the policy (the prediction** $f(X)=WX$) **is concurrently refined to generate better responses**. Specifically, to make the computation compatible with the given training examples, we 1) **use a "dummy" response** $y_N=W_0 x_N$ (i.e., the initial guess of LLMs with weights $W_0$ (Section 2.2.3)) as **an initialization** for the test output, and 2) **initialize its "dummy" reward $r_N$** to have the lowest reward among the input examples. Under this configuration, we can prove that after an iteration, the test output becomes $y'_N=W'x_N$, where $W'=W+\Delta W$ corresponds to the updated weights. Therefore, **the model can generate a new response based on the updated weights instead of copying from the context**. Since the weights are updated to optimize the alignment objective, the new prediction can be better than the (potentially noisy) ones in the training samples. Please let us know if there is anything else you would like us to elaborate on. We will certainly expand on this part in the revision.
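A minimal numerical sketch of the "dummy response" mechanism described above may help. It is illustrative only: the linear policy, reward-weighted least-squares loss, and plain gradient descent are stand-ins for the transformer construction in the paper, and all names and constants are assumptions of the sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_ctx = 4, 20
W_star = rng.normal(size=(d, d))             # ideal policy (unknown to the model)

# Context triples (x_i, y_i, r_i): noisier responses get lower reward,
# mimicking a (here perfectly informed) critic.
X = rng.normal(size=(n_ctx, d))
noise = rng.normal(size=(n_ctx, d)) * rng.uniform(0.05, 0.5, size=(n_ctx, 1))
Y = X @ W_star.T + noise
r = -np.sum(noise ** 2, axis=1)              # reward = -(squared response error)
w = np.exp(r - r.max()); w /= w.sum()        # higher-reward examples weigh more

x_test = rng.normal(size=d)
W = np.zeros((d, d))                         # W_0, the initial policy
y_dummy = W @ x_test                         # "dummy" response y_N = W_0 x_N

# In-context "alignment": gradient descent on the reward-weighted loss
# L(W) = sum_i w_i ||W x_i - y_i||^2, producing updated weights W' = W + dW.
for _ in range(800):
    grad = 2.0 * (W @ X.T - Y.T) * w @ X     # sum_i 2 w_i (W x_i - y_i) x_i^T
    W -= 0.05 * grad

y_new = W @ x_test                           # y'_N = W' x_N: a NEW response,
                                             # not copied from the context
print(np.linalg.norm(y_dummy - W_star @ x_test),
      np.linalg.norm(y_new - W_star @ x_test))
```

The refined response comes from the updated weights rather than from any context example, which is the point of the rebuttal: optimizing the alignment loss in-context concurrently improves the policy's prediction on the test query.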
--- **Q2-1.** Furthermore, a more important question to answer is why LLMs have such a capability to regenerate better answers, including providing a good critique and being able to use this critique to further improve the results. **A2-1**. This is an excellent question! A prevalent understanding of how ICL arises is that **LLMs are pretrained on a highly diverse set of contextual data, enabling them to handle a variety of in-context tasks**. Likewise, we believe that **LLMs are exposed to numerous self-correction-like data** during training, which typically include an initial statement, a critique of this statement (e.g., "this article is not good enough"), and often a refined version (e.g., a revised article), which are quite common in natural language. **This exposure to self-correction data enables the model to learn to 1) accurately critique its previous outputs and 2) generate improved versions based on these critiques**. The (very recent) ICML'24 tutorial confirmed this by showing that training with mistakes+corrections in a controlled setting does boost model accuracy [3]. Nevertheless, this data-based understanding does not rigorously address 1) the mechanisms of self-correction — we formulate it as an in-context alignment (ICA) process — or 2) "how transformers perform ICA" — we show that they can optimize an alignment loss in-context and underpin the roles of each module. In this way, we can rigorously show that transformers can perform self-correction in an in-context alignment way. Thank you for raising this question, and we will elaborate on this part in the revision as well. [1] Chen, Ziru, et al. "When is tree search useful for LLM planning? It depends on the discriminator." arXiv preprint arXiv:2402.10890 (2024). [2] Chen, Xinyun, et al. "Teaching Large Language Models to Self-Debug." arXiv preprint arXiv:2304.05128 (2023). [3] Allen-Zhu et al. [Physics of Language Models - Part 2.2: How to Learn From Mistakes](https://physics.allen-zhu.com/part-2-grade-school-math/part-2-2).
ICML 2024 tutorial, July 2024 (no arXiv version yet). --- **Q2-2.** However, the theory in this paper focuses on the possibility of using a transformer layer to implement the algorithm, but it does not necessarily mean transformers will implement such an algorithm, leading to a gap here. **A2-2**. Indeed, the main theorem provides a construction proof that an $(N-1)$-block transformer can implement the algorithm. Unlike studies on linear attention, our proof requires a deep nonlinear transformer, whose convergence analysis has not been established in the literature yet. Due to this obstacle, we adopt a construction proof to show that transformers can perform ICA. Further, we confirm our theoretical insights from the construction through a controlled synthetic experiment, showing that **a trained transformer, though it may not implement the same weights, *does behave quite similarly to the gradient descent algorithm we analyzed* — in particular,** it exhibits the necessity of each transformer module we outlined (see Fig 1). This indicates that our analysis provides valuable insights into the behavior of trained transformers, forming a basis for further convergence studies. We will elaborate on this in the limitations part and outline potential paths toward further convergence studies. --- **Q3.** Finally, in line 43, the authors mentioned that you are the "first theoretical analysis showing that LLM can improve alignment in context". It sounds a bit like an overclaim to me. Your theory is mainly about transformers, and it still has a huge gap to LLMs. **A3**. Since transformers are the *de facto* architecture of LLMs, our analysis of transformers does apply to LLMs. By the word "can", we mean that the model has the capability to perform in-context alignment, as we rigorously proved in Theorem 3.2. Meanwhile, we agree that "transformer" is a more accurate choice here, and we will replace it in the revision following your advice. --- Thank you again for the insightful comments.
If you find it satisfactory, we respectfully hope that you can re-evaluate our work. We are happy to address your further concerns. --- Rebuttal Comment 1.1: Title: Reviewer responses Comment: Thanks for your detailed response. My concerns are addressed to some extent, and I will raise my score.
Improved Guarantees for Fully Dynamic $k$-Center Clustering with Outliers in General Metric Spaces
Accept (poster)
Summary: This paper proposes a simple but effective method that solves the "fully dynamic k-center problem with outliers" in general metric spaces. The fully dynamic setting requires the algorithm to adjust its output efficiently when deletion or insertion operations occur. This paper uses a "ball cover" strategy to obtain a $(4+\epsilon)$-approximate solution of the k-center problem with outliers. The radii of the balls are guessed close enough to the optimal k-center radius after at most $O(\log \Delta)$ iterations. When a deletion or insertion occurs, the algorithm re-clusters (using the offline algorithm) if necessary. However, the paper proves that the re-clustering operation does not happen frequently. As a result, even though the time complexity of re-clustering may be as high as $O(n\epsilon^{-1}k^3\log k \log \Delta)$ in the worst case, the amortized time can be reduced to $O(\epsilon^{-2}k^6\log k \log \Delta)$ for handling the insertion or deletion of a single point. Strengths: 1. The approximation ratio of the proposed algorithm is $4+\epsilon$, far better than that of previous work like [5]. 2. The algorithm of this paper does not need extra assumptions on the metric space, *e.g.*, the low doubling dimension assumption in [2], which gives the algorithm broad applicability. 3. The presentation is nice. Weaknesses: 1. Time complexity. The amortized update time $\epsilon^{-2}k^6\log k \log \Delta$ is worse than that of the method in [5] (if $k$ is large), whose time complexity is $O(\frac{k^2\log \Delta}{\epsilon^2\tau}\log^2\frac{1}{\delta})$. 2. High update time complexity in the worst case. The proposed algorithm still needs $O(n\epsilon^{-1}k^3\log k \log \Delta)$ time to handle an insertion or deletion of a single point in the worst case. 3. No experimental results. Although this work is mostly theoretical, that does not mean experiments are unnecessary (especially for NeurIPS, which is not a purely theoretical venue).
The previous work [5] provides experimental results, so I strongly encourage the authors to compare the performance, in both running time and cost, to [5]. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. About the time complexity of the binary search for $r_{OPT}$: you have mentioned that the set of all possible $r$ is $R = \{(1+\epsilon)^i:d_{min}\le (1+\epsilon)^i\le d_{max}, i\in N\}$. So $|R| = \log_{1+\epsilon}\Delta = \frac{\log \Delta}{\log (1+\epsilon)}$. So if you binary search in the set $R$, it seems that you only need $O(\log |R|) = O(\log \log \Delta)$ steps. Can your time complexity then be improved to $O(\epsilon^{-2}k^6\log k \log \log \Delta)$? 2. Suppose I want to delete some point which is also a center of the solution. I think this center point should also be deleted since it has been removed from the metric space. But it seems that Procedure 2 in the paper does not consider this situation. How does your algorithm handle the deletion of center points? 3. What is the size of your memory usage? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please see weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
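The "ball cover" strategy summarized in this review can be sketched in a simplified form. This is not the paper's algorithm: the paper picks each center from a small random sample and proves a $(4+\epsilon)$ guarantee, while this sketch scans all remaining points, so the function names and radius constants are purely illustrative:

```python
import math
import random

def dist(p, q):
    return math.dist(p, q)

def ball_cover(points, k, r):
    """Simplified sketch of the successive-covering strategy: repeatedly pick
    the point whose radius-r ball covers the most uncovered points, then
    remove everything within 4r of it. (The paper instead picks the center
    from a random sample for efficiency; constants here are illustrative.)"""
    uncovered = list(points)
    centers = []
    for _ in range(k):
        if not uncovered:
            break
        c = max(uncovered, key=lambda p: sum(dist(p, q) <= r for q in uncovered))
        centers.append(c)
        uncovered = [q for q in uncovered if dist(c, q) > 4 * r]
    return centers, uncovered   # `uncovered` should shrink to ~the z outliers

# Toy instance: 3 tight, well-separated clusters plus 2 far-away outliers.
rng = random.Random(0)
cluster_centers = [(0, 0), (100, 0), (0, 100)]
pts = [(cx + rng.uniform(-1, 1), cy + rng.uniform(-1, 1))
       for cx, cy in cluster_centers for _ in range(30)]
outliers = [(500, 500), (-500, 500)]
centers, leftover = ball_cover(pts + outliers, k=3, r=3.0)
print(len(centers), len(leftover))   # 3 centers; only the 2 outliers remain
```

On this well-separated toy input the greedy choice always lands inside a cluster (coverage 30 vs. 1 for an outlier), so after k rounds only the outliers are left uncovered, matching the bi-criteria flavor of the guarantee.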
Rebuttal 1: Rebuttal: **Comment:** Time complexity. The amortized update time $\epsilon^{-2}k^6\log k\log \Delta$ is worse than that of the method in [5] (if $k$ is large), whose time complexity is $O(\frac{k^2\log\Delta}{\epsilon^2\tau}\log^2\frac{1}{\delta})$. **Comment:** High update time complexity in the worst case. The proposed algorithm still needs $O(n\epsilon^{-1}k^3\log k\log\Delta)$ time to handle an insertion or deletion of a single point in the worst case. **Response:** Besides the improvement in approximation factor with respect to the algorithm in [5], another interesting aspect of our method is that it is **adversarially robust**, meaning it can handle an adversary who observes the computed solution after each update and adjusts the subsequent updates accordingly. In contrast, the dynamic $14$-approximation algorithm developed by Chan et al. is **not** adversarially robust. In particular, if their solution is revealed at any time, the adversary can impose an expensive update that requires time in $\Omega(n)$. Regarding the question about handling an insertion or deletion of a single point in the worst case with $O(n \epsilon^{-1} k^3 \log k \log \Delta)$ time, we believe this can be managed with some adjustments to our algorithm. Specifically, we propose running two parallel instances of our algorithm. In the first instance, we compute and output the solution, while the second instance divides the large operations of $O(n \epsilon^{-1} k^3 \log k \log \Delta)$ time into smaller chunks of size $\text{poly}(\epsilon^{-1}, k, \log k, \log \Delta)$ and processes each chunk sequentially after an update. When the solution maintained by the first run becomes invalid—potentially after $\Omega(n)$ updates—we switch to the second run, obtain the solution from there, and continue the process. It is important to note that when switching between runs, we obtain a new solution and maintain that updated solution. **Comment:** No experimental results. [...]
I strongly encourage the authors to compare the performance, in both running time and cost, to [5]. **Response:** We agree with the reviewer that providing empirical results or practical evaluations would be valuable and interesting work. Our main focus was to improve the $14$-approximation guarantee and determine how much we could refine the approximation factor, leading to the development of the first adversarially robust dynamic $4$-approximation algorithm for the metric $k$-center problem with outliers. We consider conducting experiments and benchmarking our dynamic algorithm against the algorithm by Chan et al. for both real and synthetic scenarios as future work. **Question:** About the time complexity of the binary search for $r_{OPT}$: you have mentioned that the set of all possible $r$ is $R=\{(1+\epsilon)^i\colon d_{\min} \leq (1+\epsilon)^i \leq d_{\max}, i\in N\}$. So $|R| = \log_{1+\epsilon}\Delta = \frac{\log\Delta}{\log(1+\epsilon)}$. So if you binary search the set $R$, it seems that you only need $O(\log|R|) = O(\log\log\Delta)$ steps. Can your time complexity then be improved to $O(\epsilon^{-2}k^6\log k\log\log\Delta)$? **Response:** For each guess $r \in R$, we run an instance of the algorithm for that specific $r$. After each insertion or deletion, we update all these instances. As a result, the update time is affected by a factor of $|R| = \log_{1+\epsilon} \Delta = O(\epsilon^{-1}\log{\Delta})$. We apologize for any confusion caused by the title of that paragraph and will revise it accordingly. **Question:** Suppose I want to delete some point which is also a center of the solution. I think this center point should also be deleted since it has been removed from the metric space. But it seems that Procedure 2 in the paper does not consider this situation. How does your algorithm handle the deletion of center points?
**Response:** As mentioned in the paper, we studied the non-discrete version of the problem, where centers can be any point in the metric space. Therefore, a center can be a deleted point. We would like to thank you for bringing to our attention the interesting question about the discrete version of the problem we study in the paper. Indeed, we have thought about this question during the short rebuttal period and have provided a proof that explains how our dynamic data structure can also provide a $6$-approximation solution for the discrete metric $k$-center problem with outliers, where centers must be selected from the input set of points. You can find the proof of this claim in the response to reviewer CFX4. We also attached a figure in the global rebuttal for a visual representation. Note that any feasible solution for the discrete version is also feasible for the non-discrete version. Therefore, the optimal radius for the discrete version is at least as large as that for the non-discrete version. This implies that, for some applications, the non-discrete version can offer more precise clustering. We believe both the discrete and non-discrete versions are important, each being better suited for different applications. **Question:** What is the size of your memory usage? **Response:** The space complexity of our algorithm is $O(\epsilon^{-1}\log(\Delta)n)$.
This is because we maintain $\epsilon^{-1} \log(\Delta)$ instances corresponding to different radius guesses, each containing up to $n$ points. The space required to temporarily store the sample sets used in the $\texttt{OfflineCluster}$ algorithm is $O(n)$.
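To make the $|R|$ factor discussed in this rebuttal concrete: the guess set contains one power of $(1+\epsilon)$ per scale between $d_{\min}$ and $d_{\max}$, and one algorithm instance is maintained per guess, which is why $|R| = O(\epsilon^{-1}\log\Delta)$ multiplies the update time rather than $\log|R|$. The helper below is hypothetical and purely for illustration:

```python
import math

def radius_guesses(d_min, d_max, eps):
    """All guesses (1+eps)^i in [d_min, d_max]. The dynamic algorithm keeps
    one instance per guess and updates all of them on every insertion or
    deletion, so |R| (not log |R|) multiplies the update time."""
    i = math.ceil(math.log(d_min, 1 + eps))
    R = []
    while (1 + eps) ** i <= d_max:
        R.append((1 + eps) ** i)
        i += 1
    return R

# |R| = log_{1+eps}(Delta) with Delta = d_max/d_min, i.e. O(eps^-1 log Delta):
R = radius_guesses(1.0, 1024.0, 0.5)
print(len(R))   # one guess per power of 1.5 between 1 and 1024
```

A binary search over $R$ would indeed take only $O(\log\log\Delta)$ probes, but since all instances must stay up to date under every point update, the full $|R|$ shows up in the amortized bound.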
Summary: The paper studies the k-center clustering problem with outliers in the fully dynamic setting. Specifically, given a metric space (M,d), in the (k,z)-clustering problem, the goal is to find at most k balls minimizing the maximum ball radius while excluding up to z points from the clustering. In the fully dynamic setting, the points are inserted or deleted from the underlying metric space, and the goal is to maintain the clustering faster than recomputing from scratch after each insertion or deletion. The main contribution of this paper is a fully-dynamic, provable approximation algorithm for the (k,z) clustering problem that achieves (4+\eps)-approximation ratio, and can support point updates in O(\eps^{-2} k^6 \log k \log \Delta), with the caveat that the algorithm covers all but at most (1+\eps)z points (also known as a bi-criteria guarantee in the approximation algorithms literature). The algorithm is randomized, and works against an oblivious, online adversary. It also improves upon the approximation ratio of (14+\eps) achieved in the recent work Chan, Lattanzi, Sozio, and Wang [5] for the same problem, at the cost of increasing the running time by a factor of k^4. From a technical point of view, the paper first introduces a static (4+\epsilon) approximation algorithm for the problem. The algorithm is itself based on the idea of successive random sampling, which proceeds as follows: * guess an optimal radius r (this is standard in (clustering) algorithms, and can be achieved by binary searching at the cost of paying (1+\eps) in the approx. 
factor, and a factor that is logarithmic in the aspect ratio of the underlying metric space in the runtime); * pick uniformly at random a subset of points S of the current point set, and pick a center c from S that covers a good fraction of the current points; * grow a ball of radius 4r around the center c; * remove the points belonging to the ball from the current point set, and continue with the next iteration (level); * the number of levels is bounded by roughly k; * the centers computed in each level are output as the solution to the (k,z)-clustering problem. The bulk of the effort is then to carefully set up a data structure that maintains these levels (and the associated information) under point insertions and deletions. There are two main invariants: (i) the level invariant and (ii) the dense invariant (which roughly controls the number of points that remain unclustered in each level). The moral message of the paper is that when the invariants are broken at some level i, you basically re-cluster everything from that level up. Strengths: * The k-center clustering is a fundamental problem in the clustering/unsupervised learning literature. The outlier version of the problem makes perfect sense, and it's of equal importance. The dynamic setting is very natural and has received increasing attention in the clustering literature in the last 5 years or so, with many papers trying to nail down the dynamic complexity of basic clustering objectives such as k-center, k-median and k-means. This work can be thought of as a continuation of this line of work. * The algorithm is simple and elegant (it should also be very implementable). Appropriate effort is put into explaining the high-level ideas. The paper reads well. Weaknesses: * Personally, I wouldn't agree that this is an improvement over the work of Chan et al. [5]. Indeed, it does achieve a better approximation ratio, but at the cost of increasing the runtime.
The paper indeed provides a new, interesting trade-off, but I believe the abstract should discuss this more carefully. * Regarding techniques, while one can argue that the idea of successive sampling is now folklore, one influential work that employs a very similar algorithm is due to Mettu-Plaxton, "Optimal Time Bounds for Approximate Clustering". While the original algorithm is for k-median and k-means, there are follow-up works that employ the same algorithm for k-center (albeit with worse approximation guarantees than 2). I would encourage the authors to discuss differences/similarities between Mettu-Plaxton and their work. In my opinion, this doesn't diminish the contribution of the paper at hand. Technical Quality: 4 Clarity: 3 Questions for Authors: Minor comments: Lines 17-29: you should back up with citations the applications of clustering across many subfields of computer science and beyond. Line 36: 'various complications' sounds a bit odd (maybe 'challenges' would be a better fit here). Line 196: n_i, \alpha never defined before in the text. Line 219: should your j start from 1 and not from i? -- similar comment for Definition 3.2. Major comment/question: Line 146 -- can you please elaborate on whether the non-discrete version of the (k,z) clustering you study here is less challenging? I imagine this helps you a lot with deletions; even if you delete a point in P, you can still let the underlying point in M serve as a cluster center, which doesn't invoke a re-build. Does the work of Chan et al. [5] have the same restriction? Confidence: 5 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment:** Personally, I wouldn't agree that this is an improvement over the work of Chan et al. [5]. Indeed, it does achieve a better approximation ratio, but at the cost of increasing the runtime. The paper indeed provides a new, interesting trade-off, but I believe the abstract should discuss this more carefully. **Response:** Thank you for the comment. We will discuss this trade-off in more detail in the full version. Besides the improvement in the approximation factor, another interesting aspect of our method is that it is **adversarially robust**, meaning it can handle an adversary who observes the computed solution after each update and adjusts the subsequent updates accordingly. In contrast, the dynamic $14$-approximation algorithm developed by Chan et al. is **not** adversarially robust. In particular, if their solution is revealed at any time, the adversary can impose an expensive update that requires time in $\Omega(n)$. **Comment:** Regarding techniques, [...] I would encourage the authors to discuss differences/similarities between Mettu-Plaxton and their work. In my opinion, this doesn't diminish the contribution of the paper at hand. **Response:** Thank you for highlighting the work of Mettu and Plaxton on 'Optimal Time Bounds for Approximate Clustering' and the subsequent research. We will be sure to discuss the differences and similarities between the work of Mettu and Plaxton and ours. **Question:** Lines 17-29, you should back up with citations the applications of clustering across many subfields of computer science and beyond Line 36, 'various complications' sounds a bit odd (maybe challenges would be a better fit here) Line 196, $n_i, \alpha$ never defined before in text Line 219, should your j start from 1 and not from i? -- similar comment for Definition 3.2 **Response:** Thank you for the valuable suggestions.
We will make sure to add more references and the mentioned definitions at the correct positions in the full version. The subset $P_i$ of points not covered in level $i$ is disjoint from the clusters $C_1,\ldots,C_{i-1}$. Hence, a clustering of $P_i$ does not need to contain these clusters. Similarly, a deletion of a point at level $i$ does not lead to a violation of the invariants in levels $1,\ldots,i-1$, but can potentially violate the invariants at higher levels. Therefore, we only need to recluster the levels from $i$ upwards. We will make sure to write this part more clearly in the full version. **Question:** Major comment/question: Line 146 -- can you [...] Does the work of Chan et al. [5] have the same restriction? **Response:** We would like to thank you for bringing to our attention the interesting question about the discrete version of the problem we study in the paper. Indeed, we have thought about this question during the short rebuttal period and determined that our dynamic data structure can also provide a $6$-approximation solution for the discrete metric $k$-center problem with outliers, where centers must be selected from the input set of points. Observe that any feasible solution for the discrete version is also valid for the non-discrete version. Therefore, the optimal radius in the discrete version is at least as large as that in the non-discrete version. This suggests that, in certain applications, the non-discrete version may provide more precise clustering. We believe that both the discrete and non-discrete versions are valuable, each being particularly suited to different applications. Chan et al. [5] consider the discrete version; however, they need to reconstruct the leveling after the deletion of a center, making their algorithm not adversarially robust. Here, we prove that our data structure can also offer a $6$-approximation solution for the discrete version. We also attached a figure in the global rebuttal for a visual representation.
**Claim**: Our data structure can report a $6$-approximation solution for the discrete version of the $k$-center problem with $z$ outliers without increasing the time or space complexity. **Proof:** Let $r$ and $i$ be fixed, and $c_i$ be the center of cluster $C_i=B(c_i,4r)$ (Recall that $B(x,\rho)$ is the ball of radius $\rho$ centered at $x$). To provide a solution for the discrete version, we do the following. As long as $c_i$ is not deleted, we report $c_i$ as the $i$-th center. After the deletion of the point $c_i$, we consider two cases: $\min(z+1,\frac{n_i-z}{k-i+1}-\frac{\epsilon z}{\alpha k})\leq 0$ or $\min(z+1,\frac{n_i-z}{k-i+1}-\frac{\epsilon z}{\alpha k})>0$. For the first case we have $n_i<(1+\epsilon)z$ as $i\geq 1$ and $\alpha>1$. Hence, we can report all the $n_i$ points as outliers and stop. The dense invariant states that $|B(c_i,2r)\cap P_i|\geq\min(z+1,\frac{n_i-z}{k-i+1}-\frac{\epsilon z}{\alpha k})$. Therefore, $|B(c_i,2r)\cap P_i|>0$ holds in the second case and there exists a point $\hat{p}\in B(c_i,2r)$. Then we report the point $\hat{p}\in B(c_i,2r)$ after the deletion of $c_i$. We next prove the $6$-approximation guarantee. To do this, we show that $C_i\subseteq B(\hat{p},6r)$. This implies that any feasible solution in our data structure with radius $4r$ for the non-discrete version can be used to report a feasible solution for the discrete version. Also, see the attached figure in the global rebuttal for a visual representation. Let $q\in C_i$. We show that $q\in B(\hat{p},6r)$. Since $q\in C_i=B(c_i,4r)$, we have $d(q,c_i)\leq 4r$, where $d$ is the distance function. Moreover, $d(c_i,\hat{p}) \leq 2r$ since $\hat{p}\in B(c_i,2r)$. Then by the triangle inequality we have $d(q, \hat{p}) \leq d(q,c_i)+d(c_i,\hat{p})\leq 4r+2r=6r.$ The complexity of space and time remains to be discussed. To report a solution for the discrete version, it is enough to keep $B(c_i,2r)\cap P_i$.
Note that our data structure already stores $C_i\cap P_i$ , and since $B(c_i,2r)\cap P_i \subseteq C_i\cap P_i$ , the space complexity and time complexity remain the same.
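The triangle-inequality argument in the rebuttal above can be checked numerically. The sketch below uses arbitrary points in the plane purely for illustration: it samples a replacement center $\hat{p}$ from $B(c_i, 2r)$ and confirms that every point of $C_i = B(c_i, 4r)$ lies within $6r$ of $\hat{p}$.

```python
import math
import random

random.seed(1)
r = 1.0
c = (0.0, 0.0)                       # the deleted center c_i

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def rand_in_ball(center, radius):
    """Rejection-sample a uniform point in the disk of the given radius."""
    while True:
        dx = random.uniform(-radius, radius)
        dy = random.uniform(-radius, radius)
        if math.hypot(dx, dy) <= radius:
            return (center[0] + dx, center[1] + dy)

p_hat = rand_in_ball(c, 2 * r)                           # replacement from B(c_i, 2r)
cluster = [rand_in_ball(c, 4 * r) for _ in range(1000)]  # points of C_i = B(c_i, 4r)

# d(q, p_hat) <= d(q, c_i) + d(c_i, p_hat) <= 4r + 2r = 6r
assert max(dist(q, p_hat) for q in cluster) <= 6 * r
```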
Summary: This paper studies the fully-dynamic $k$-center with outliers problem in the metric space. In this setting, operations (including insertion and deletion) appear over time. The performance evaluation of an algorithm is based on its cost approximation and the (amortized) update time. However, previous research has shown that when exactly excluding $z$ outliers is required, any $O(1)$-approximation algorithm incurs an $\Omega(z)$ update time. It is therefore reasonable to allow for a trade-off by permitting $(1+\varepsilon)z$ outliers in order to achieve efficient update times. The proposed algorithm can be summarized as follows: It partitions the dataset into at most $k+1$ levels, where each level $i$ represents a cluster $C_i$ with center $c_i$. The algorithm dynamically maintains a data structure that adapts to the operations performed in real-time. At all times, the data structure should maintain the following two invariants: - level invariant: Each level $i$ has a cluster $C_i$ such that $P_{i+1} = P_i \backslash C_i$. - dense invariant: For each center $c_i$ in $F_r$, $B(c_i, 2r)$ covers sufficient points in $P_i$. The authors prove that it is a $(4+\varepsilon)$-approximation algorithm with $O(\varepsilon^{-2}k^6\log k\log \Delta)$ expected amortized update time. Strengths: - The paper is very well written and easy to follow. - The algorithm provides a $(4+\varepsilon)$-approximation algorithm for the fully-dynamic $k$-center with $z$ outliers problem, which is an improvement over previous work. - The paper includes a thorough theoretical analysis, proving the approximation guarantee and update time of the algorithm. Weaknesses: - the update time complexity $O(\varepsilon^{-2}k^6\log k\log \Delta)$ is not competitive, particularly for large $k$. - The paper does not provide empirical results or practical evaluations of the algorithm, which could demonstrate its performance in real-world scenarios.
Technical Quality: 3 Clarity: 4 Questions for Authors: - Can this method be slightly modified to support changes of $k$ and $z$? Alternatively, are there any negative results on handling dynamic $k$ and $z$? - Does the analysis of the algorithm include a specific space complexity? Additionally, I am curious about the hardness of fully-dynamic clustering with outliers. Can you provide lower bounds on the space and the update time in the context of the fully-dynamic setting? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: theoretical paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
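The two invariants from the summary above can be written down as an executable sanity check. The dense-invariant threshold below follows the formula quoted in the authors' rebuttal on the discrete version; representing a level as a `(P_i, c_i, C_i)` triple is a hypothetical convenience, not the paper's data structure.

```python
def check_invariants(levels, k, z, r, eps, alpha, dist):
    """Check a candidate leveling. `levels` is a list of (P_i, c_i, C_i)
    triples, one per level; raises AssertionError on a violated invariant."""
    for i, (P, c, C) in enumerate(levels, start=1):
        n_i = len(P)
        # level invariant: P_{i+1} = P_i \ C_i
        if i < len(levels):
            assert levels[i][0] == [p for p in P if p not in C]
        # dense invariant: |B(c_i, 2r) ∩ P_i| >= min(z+1, (n_i-z)/(k-i+1) - eps*z/(alpha*k))
        threshold = min(z + 1, (n_i - z) / (k - i + 1) - eps * z / (alpha * k))
        assert sum(dist(p, c) <= 2 * r for p in P) >= threshold
    return True

# toy 1-D leveling with k=2 clusters, z=1 outlier, and radius guess r=0.5
P1 = [0.0, 0.1, 0.2, 10.0, 10.1, 100.0]
lvls = [(P1, 0.0, [0.0, 0.1, 0.2]),
        ([10.0, 10.1, 100.0], 10.0, [10.0, 10.1])]
assert check_invariants(lvls, k=2, z=1, r=0.5, eps=0.5, alpha=2.0,
                        dist=lambda a, b: abs(a - b))
```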
Rebuttal 1: Rebuttal: **Comment:** the update time complexity $O(\varepsilon^{-2}k^6\log{k}\log{\Delta})$ is not competitive, particularly for large $k$. **Response:** Our goal was to develop a dynamic algorithm with a low approximation guarantee and a simple, elegant data structure. We did not focus on optimizing the exponent of $k$ in our running time. An interesting aspect of our method is that it is **adversarially robust**, meaning it can handle an adversary who observes the computed solution after each update and adjusts the subsequent updates accordingly. In contrast, the dynamic $14$-approximation algorithm developed by Chan et al. is not adversarially robust. If their solution is revealed at any time, the adversary can impose an expensive update that requires time in $\Omega(n)$. **Comment:** The paper does not provide empirical results or practical evaluations of the algorithm, which could demonstrate its performance in real-world scenarios. **Response:** We agree with the reviewer that providing empirical results or practical evaluations would be valuable and interesting work. Our main focus was to improve the $14$-approximation guarantee and determine how much we could refine the approximation factor, leading to the development of the first adversarially robust dynamic $4$-approximation algorithm for the metric $k$-center problem with outliers. We consider conducting experiments and benchmarking our dynamic algorithm against the algorithm by Chan et al. for both real and synthetic scenarios as future work. **Question:** Can this method be slightly modified to support changes of $k$ and $z$? Alternatively, are there any negative results on handling dynamic $k$ and $z$? **Response:** Our data structure is designed with up to $k$ levels, based on known values of $k$ and $z$. We are unsure how to modify our data structure to handle changes in $k$ and $z$ during the execution of the algorithm.
Investigating the development of such a dynamic algorithm would indeed be an interesting challenge. **Question:** Does the analysis of the algorithm include a specific space complexity? Additionally, I am curious about the hardness of fully-dynamic clustering with outliers. Can you provide lower bounds on the space and the update time in the context of the fully-dynamic setting? **Response:** The space complexity of our algorithm is $O(\epsilon^{-1}\log(\Delta)n)$. This is because we maintain $\epsilon^{-1}\log(\Delta)$ instances corresponding to different radius guesses, each containing up to $n$ points. For the sample sets needed in $\texttt{OfflineCluster}$, we also need $O(n)$ space because they form a subset of the dataset. Establishing a space lower bound for fully dynamic algorithms for $k$-center with outliers would be an intriguing question. --- Rebuttal Comment 1.1: Comment: Thanks for the clarification. I will keep my original rating.
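The $\epsilon^{-1}\log(\Delta)$ factor in the space bound above comes from keeping one instance per radius guess on a geometric grid over the aspect ratio $\Delta$. A minimal sketch follows; the function name and grid endpoints are assumptions for illustration.

```python
import math

def radius_guesses(d_min, d_max, eps):
    """Geometric grid of radius guesses r = d_min * (1+eps)^j spanning
    [d_min, d_max]; one clustering instance is maintained per guess."""
    guesses, r = [], d_min
    while r < d_max * (1 + eps):
        guesses.append(r)
        r *= 1 + eps
    return guesses

g = radius_guesses(1.0, 1e6, eps=0.5)
# |guesses| = O(eps^{-1} log Delta), since log(1+eps) >= eps/2 for eps <= 1
assert len(g) <= 2 * math.log(1e6) / 0.5 + 2
```

Any target radius in the range is over-approximated by some guess within a $(1+\epsilon)$ factor, which is where the $\epsilon$ loss in the approximation guarantee enters.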
Summary: The paper gives a new algorithm for the dynamic version of k-center clustering with outliers. The algorithm works in the fully dynamic model with both point insertions and deletions allowed. The points can belong to an arbitrary metric space, compared to some previous algorithms addressing low-dimensional metric spaces. Strengths: This is an important problem and the paper significantly improves the approximation factor. Weaknesses: The algorithm is randomized and works only against oblivious adversaries. This is a shared characteristic with the previous paper on this topic. The algorithm achieves a better approximation factor than the previous algorithms, but it comes at the cost of significantly higher dependency in the update time on the number of clusters, k. In particular, the factor of $k^6$ is probably prohibitive in practice. Overall, even though this is a solid contribution to the study of the problem, this is not a breakthrough. I also wish the authors did a better job highlighting the technical ideas that lead to the improvement over the previous algorithm. Technical Quality: 4 Clarity: 3 Questions for Authors: Is there a good reason why the algorithm is randomized? What are the obstacles to obtaining deterministic algorithms in this model? The factor of $k^6$ in your upper bound is large. Is there a reason why in practice the algorithm would be significantly less costly? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 2 Limitations: No specific societal limitations to address. This is a theoretical work concerning a traditional computational problem. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **Comment:** The algorithm is randomized and works only against oblivious adversaries. This is a shared characteristic with the previous paper on this topic. **Question:** Is there a good reason why the algorithm is randomized? What are the obstacles to obtaining deterministic algorithms in this model? **Response:** We address these two points together in our response. Interestingly, we have discovered that our dynamic algorithm not only can handle oblivious adversaries but is even adversarially robust. A dynamic algorithm is called adversarially robust if its performance guarantees (including the approximation factor and update time) are preserved even when the sequence of updates (i.e., insertions and deletions) is adaptively chosen by an adversary who observes the outputs of our algorithm throughout the sequence and adjusts them accordingly. An adversarially robust dynamic algorithm is more powerful than one designed to handle only an oblivious adversary, as an oblivious adversary cannot access the algorithm's outputs after each update. In particular, we only use random bits to sample from large clusters at each level and maintain a solution. (This is the only component of our algorithm that uses randomness.) The adversary, however, can observe the maintained solution after each update and adaptively modify future updates (insertions and deletions) accordingly. In contrast, the dynamic $14$-approximation algorithm developed by Chan et al. is designed only for oblivious adversaries and is therefore not adversarially robust. In particular, if their solution were revealed at any time, the adversary could impose costly update time $\Omega(n)$. **Question:** The factor of $k^6$ in your upper bound is large. Is there a reason why in practice the algorithm would be significantly less costly? **Response:** Thank you for raising this interesting question. 
The worst case occurs when the gap between the size of the ball chosen in the offline algorithm and the threshold is negligible, but we believe this is not always the case in practice. To increase this gap, we propose selecting the ball of maximum size in Line 8 of the OfflineCluster procedure. This adjustment does not change the theoretical bounds but can make the algorithm faster in practice. --- Rebuttal Comment 1.1: Comment: Thank you for your response! > we have discovered that our dynamic algorithm not only can handle oblivious adversaries but is even adversarially robust This is very interesting and if true, it definitely makes your algorithm more interesting. I'm very curious about this but I'm not sure I'll have time to reread your paper to see if I have any doubts about this. Whatever happens to your paper, it would of course be great to explicitly address this in any future version of the paper.
Rebuttal 1: Rebuttal: Thank you to the reviewers for their insightful comments and valuable feedback. We also appreciate the time they dedicated to reviewing our work. Here, we also provide a file with a figure illustrating our responses to reviewers CFX4 and fWP4. Pdf: /pdf/9d3bb97f3532ac596d7e1e9d903c446952bf20f5.pdf
NeurIPS_2024_submissions_huggingface
2024
Leveraging Separated World Model for Exploration in Visually Distracted Environments
Accept (poster)
Summary: To address the challenge of visual distractions in unsupervised reinforcement learning, this paper proposes a bi-level optimization framework called SeeX, which utilizes a separate world model to mitigate the disturbance caused by visual distractions. The authors evaluate the proposed method in multiple tasks. Strengths: - While unsupervised RL relies on intrinsic reward to promote exploration, the existence of visual distractions might contaminate the evaluation of intrinsic reward. SeeX proposes to use a separate world model to mitigate such disturbance. Weaknesses: - The separation model assumes decoupled endogenous and extraneous transition, which could be a strong assumption. For example, in robot manipulation, obstacles might be irrelevant to reward signals but still affect the movement of the robot. - The separation model uses a mask model to combine endogenous and extraneous reconstruction. However, this might not work well when the distractions and embodiments are inseparable, e.g. Color Change or Camera Jittering [1]. - The proposed EX-BMDP seems to be similar to Task-Informed MDP in [2], while the proposed intrinsic reward is similar to the latent disagreement in [3]. I have some reservations about the novelty of combining these two methods as they are generally orthogonal. [1] Denoised MDPs: Learning World Models Better Than the World Itself [2] Learning Task Informed Abstractions [3] Planning to Explore via Self-Supervised World Models Typos: - Line 51: we use the task-irrelevant (-> task-relevant) information to predict the reward and the action. Technical Quality: 2 Clarity: 1 Questions for Authors: - Line 193: "It is also an approximation to the world model TV error in Problem 3.3". Can the author provide more explanation on why Eq (11) is consistent with Eq (7)? - The authors wish to have a separate world model. However, how do you ensure that $s^-$ does not contain information about the reward? Since you have $\hat o_t^{exo}$?
- In the experiment, what RL algorithm are the baselines implemented with? - In Eq (6), the REGRET is bounded by the estimation error of $\hat{\mathcal{T}}^+$ and $\hat{\mathcal{T}}^-$. However, if $s^-$ is irrelevant to reward signals and endogenous state $s^+$, and $\pi$ only depends on $s^+$, why would the estimation error of $\hat{\mathcal{T}}^-$ affect the REGRET? Confidence: 4 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and feedback. The term "task-irrelevant" on L51 should be "task-relevant". Here are our responses to your questions: - **Strong assumption** - Firstly, many Visual RL works use separation assumptions, such as Denoised MDP[3] and TIA[2], which have natural assumptions and practical significance. SeeX is inspired by this line of research. We believe no single assumption is universally applicable (e.g., CNNs for images and RNNs for text). So whether an assumption is "strong" is subjective and context-dependent. - Secondly, we focus on URL problems with task-irrelevant and action-independent noise and we model URL as minmax problems through theoretical analysis. A **practical scenario** fitting our assumption is a book-finding robot in a library. The robot must differentiate between task-relevant items (e.g., books) and task-irrelevant items (e.g., posters or people) to find the specific book. Thus, our work is grounded in both theory and practical application. - Finally, the robot manipulation you mentioned is discussed in L331-334. - **Inseparable noises** - We emphasized in the paper (caption of Figure 1 and the conclusion) that we only focus on handling moving distractors. Moving distractors and inseparable noises represent **distinct research directions**. According to the No Free Lunch theorem, no single method can be effective for all tasks. - **Discussion on whether EX-BMDP and latent disagreement are orthogonal** - **Separated world model** - Firstly, we **adopted**(L78-79) rather than **proposed** EX-BMDP[1]. The similarity you see between EX-BMDP and TIA is due to their shared use of the separation concept, **but separation is not a specific method but rather a general idea**[1,2,3,4]. - Additionally, EX-BMDP differs significantly from TiMDP[2]. TiMDP applies action to the exogenous branch, while EX-BMDP does not. Did the reviewer notice this? 
The agent can only influence the endogenous branch under our assumption, which is why EX-BMDP is suitable. - **Intrinsic reward** - As we all know, during the pretraining phase of unsupervised RL, task-specific rewards are typically avoided in favor of intrinsic rewards, which come in various forms [5,6,7,8]. **Any intrinsic reward can support SeeX's exploration**, and we chose the classic disagreement [5]. - Our method formalizes URL as a minmax problem, where the outer layer minimizes REGRET's upper bound using a separation assumption (Eq 2), leading us to adopt EX-BMDP. The inner layer maximizes state uncertainty, so we approximate Eq 7 with Eq 11, making disagreement a reasonable choice. The reviewer considers the methods to be orthogonal, but we believe that the underlying ideas abstracted (e.g., separation and intrinsic reward) can inspire cross-field applications. Our step-by-step design and component choices are based on necessity, and our results validate the framework's effectiveness. - **L193, the relationship between Eq 7 and Eq 11** - In Eq 7, the goal of $\pi_{expl}$ is to maximize the total variation (TV) distance between the learned model and the true latent dynamics in the task-relevant part. Practically, we estimate $D_{TV}$ using the disagreement among an ensemble of neural networks, inspired by [9]. - **How to ensure $s^-$ does not contain information about the reward?** - Firstly, our Exo-Rec design follows [2,3], which states that task-relevant information is a small part of the observation. The Exo-Rec term aims to encode as much information from $o$ into $s^-$ (L164-166), not to remove reward-related information from $s^-$ as you mentioned. - Secondly, we assume that only the task-relevant part of $o$ is affected by the action. Based on EX-BMDP, we design a separated world model where the action limits the encoding of task-relevant information into $s^+$. - In summary, **we believe there is a misunderstanding of our method and the Exo-Rec design**.
In visual settings, reward-relevant and reward-irrelevant information are inherently mixed and cannot be perfectly separated. Are you asking us to ensure a 100% separation between the two? This is not feasible. Our results (Figure 3, Table 1) show that our method attempts to separate $s^-$ and $s^+$ as much as possible, benefiting the task of handling moving distractors. - **The implementation of baseline methods** - We have mentioned in the paper (L47-49, L73-77, L201) that our method builds on URLB [10] and URLB-pixel [11], using the same implementation for the baseline. The specific implementation of the baseline can be found in the URLB-pixel GitHub repository. - **The REGRET bound in Eq (6)** - According to your statement, if we extract **precise** task-irrelevant information ($\hat{T}^-$) from the observation, it wouldn't help in reducing REGRET? This seems unreasonable because reward-relevant and reward-irrelevant information are complementary and together constitute the observation. Knowing one allows us to infer the other. - Although $s^-$ is unrelated to REGRET, we use losses of $s^-$ and $s^+$ to bound REGRET (Eq 6). If you believe that $s^-$ is insignificant, you might try deriving a tighter REGRET bound based solely on $s^+$. We hope this clarifies our work. Please feel free to ask any further questions.
[1] Provably Filtering Exogenous Distractors using Multistep Inverse Dynamics [2] Learning Task Informed Abstractions [3] Denoised MDPs: Learning World Models Better Than the World Itself [4] SeMAIL: Eliminating Distractors in Visual Imitation via Separated Models [5] Planning to Explore via Self-Supervised World Models [6] Curiosity-driven exploration by self-supervised prediction [7] Exploration by random network distillation [8] Curiosity-driven exploration via latent bayesian surprise [9] Reward-free curricula for training robust world models [10] URLB: Unsupervised Reinforcement Learning Benchmark [11] Mastering the unsupervised reinforcement learning benchmark from pixels --- Rebuttal 2: Comment: Dear reviewer NjAE, Since the discussion phase deadline is approaching, we would like to send a friendly reminder. We greatly appreciate your time and dedication to providing us with your valuable feedback. We hope we have addressed the concerns, but if there is anything else that needs clarification or further discussion, please do not hesitate to let us know. Thanks, Authors --- Rebuttal Comment 2.1: Comment: Thank the authors for the explanation, I have read the rebuttal and decided to keep my score. --- Rebuttal 3: Comment: Thank you for reviewing our rebuttal. We are glad to hear that you do not have any new questions. If you have any further concerns or need clarification, please feel free to reach out at any time.
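The ensemble-disagreement intrinsic reward the rebuttal above adopts from [5] can be sketched as the variance across $K$ predictive heads on the endogenous latent $s^+$. Everything below (names, shapes, the toy linear heads) is illustrative, not the authors' implementation.

```python
def disagreement_reward(heads, s_plus, action):
    """Intrinsic reward as the mean per-dimension variance of K one-step
    predictions of the endogenous latent s^+. Each head is a callable
    (s_plus, action) -> list[float]; high variance = high model uncertainty."""
    preds = [h(s_plus, action) for h in heads]      # K predicted next latents
    K, dim = len(preds), len(preds[0])
    total = 0.0
    for d in range(dim):
        mean = sum(p[d] for p in preds) / K
        total += sum((p[d] - mean) ** 2 for p in preds) / K
    return total / dim                              # scalar uncertainty signal

# toy check: identical heads agree (zero reward); diverse heads disagree
s = [1.0, -2.0, 0.5]
same = [lambda s_, a_: [2.0 * x for x in s_]] * 5
diverse = [(lambda s_, a_, w=w: [w * x for x in s_]) for w in (0.5, 1.0, 1.5, 2.0, 2.5)]
assert disagreement_reward(same, s, None) == 0.0
assert disagreement_reward(diverse, s, None) > 0.0
```

During pretraining the exploration policy would maximize this signal in place of a task reward, driving the agent toward states where the world model's heads disagree.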
Summary: This paper studies the problem of intrinsic-driving exploration in visually distracted environments, in the context of unsupervised reinforcement learning (URL). To address the issue that intrinsic rewards might be biased by distractors, the authors propose a method (called SeeX) that separates exogenous and endogenous information by training separate world models. In this way, the pre-trained world will be able to focus mainly on the task-relevant part of the observation, and the intrinsic reward based on dynamic uncertainty will be more accurate. The proposed method is evaluated on 3 image-based continuous control domains with video distractors. The results show that SeeX outperforms other URL methods on some environments. Strengths: * Separating exogenous and endogenous information to produce a better intrinsic reward is an interesting idea. * The writing is mostly clear. The diagrams look nice. Weaknesses: (Not all points are weaknesses. Comments/questions/weaknesses are all gathered in this section for convenience.) * Line 157: what is BA? * Line 164: should the second $=$ be $+$? * Figure 4b (walker stand): this figure is a bit misleading. It seems to suggest that SeeX is much better than other methods for pretrain budgets between 100k ~ 500k. However, this is not necessarily true, as there are no data points at 200k/300k/400k for baselines. The claim that other methods require **at least** 500k frames to achieve the same level of performance is ungrounded. * Apart from Figure 9, could the authors provide more visualizations for the decoded observations? For example, $\hat{o}_t^-$, $m_t$, and trajectories with different embodiments. * Have the authors conducted ablation experiments (K/Se/Exo-Rec) on other environments than Quadruped? Considering the quadruped results are quite noisy (Figure 3), it would be better to also include ablation results on walker envs. * Ablation: how does SeeX perform when $\alpha=0$? 
Other issues: * The legend of Figure 5 is a bit confusing. It took me some time to understand that the bottom legend is for the bottom-right figure. Please consider rearranging its position. Technical Quality: 2 Clarity: 2 Questions for Authors: Please see weaknesses above. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitations have been briefly discussed in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We have noted the typographical and formatting issues mentioned by the reviewer. BA on Line 157 refers to the Barber-Agakov bound[1], named using the authors' initials, which is a commonly used bound for mutual information. The second "=" on Line 164 should indeed be a "+". We believe these adjustments do not affect the main content and conclusions of the paper. Below are our responses to the issues you raised. - **Regarding conclusions of Figure 4** - Our original intention was to show SeeX's performance stability between 100k and 500k steps; we sampled and tested the performance of other methods between 100k~500k, but none exceeded 90% expert performance, so we did not report them in the paper. We show the performance of all steps for the other methods in **Pdf-Table-B** and our refined conclusion is that SeeX achieves 90% expert performance with fewer pre-training steps than other methods, demonstrating the efficiency of our bi-level framework and the separated world model during exploration. - **Other trajectory plots except for Figure 9** - In Figure 9, we have shown the reconstruction trajectories of SeeX after fine-tuning in the Walker environment, but there was a lack of trajectory visualization of other environment during the exploration phase. Therefore, **Pdf-Figure-C** displays the reconstruction trajectories of SeeX during exploration in the Quadruped and Jaco environments. The trajectories are organized into five columns: from top to bottom, they represent the original observation $o$; the reconstruction $\hat{o}$ obtained from the joint use of $s^+$ and $s^-$ with the help of the mask $m$; the endogenous reconstruction $\hat{o}^+$ obtained using only $s^+$, representing the task-relevant part; the exogenous reconstruction $\hat{o}^-$ obtained using only $s^-$, representing the task-irrelevant moving distractors; and the mask $m$ used to reconstruct the entire image. 
By analyzing the reconstruction trajectories, we can draw the following conclusions: - Although moving distractors were not completely removed, the reconstruction results show that $s^+$ contains most of the task-relevant information (including the agent's torso), significantly reducing the impact of background noise. - During the exploration phase, we use intrinsic rewards to encourage the agent to explore a more diverse range of states, aiming to build as comprehensive a world model as possible. This foundation supports imagination during policy training in the fine-tuning phase. The reconstruction results indicate that the agent does not stick to fixed actions but attempts to explore diverse states, demonstrating the effectiveness of the intrinsic rewards used. - The Jaco agent is a multi-joint robot with high flexibility, making it more complex than both the Walker and Quadruped. The reconstruction quality for Jaco is noticeably worse than for Quadruped, and the removal of moving distractors is less effective, which aligns with the experimental results in our paper. However, some interesting observations can be made: (1) The red ball, while not part of the agent's torso, is related to the reward function. It appears in $\hat{o}^+$ but not in $\hat{o}^-$, demonstrating SeeX's ability to automatically identify task-relevant information. (2) The base of Jaco, though fixed and not controlled by actions, is part of the agent's torso but has little relation to the reward. Therefore, SeeX classifies it as task-irrelevant, corresponding to the reconstruction image $\hat{o}^-$. - **Regarding the ablation studies in Figure 5** - When formatting Figure 5, we only considered aesthetic factors and overlooked the potential for misleading the reviewers. 
Therefore, we have re-drawn the conclusions of the ablation study and improved the readability of the charts, and added the ablation experiments on the walker (**Pdf-Table-C**) as well as the performance with $\alpha=0$ (**Pdf-Figure-B**). By analyzing the charts, we can draw the following conclusions: - The choice of weight for the Exo-Rec term significantly impacts SeeX's performance. For Walker, selecting $\alpha=3$ is appropriate, while for Quadruped and Jaco, $\alpha=1$ is more suitable. Removing the Exo-Rec term (equivalent to setting $\alpha=0$) leads to varying degrees of performance degradation across different environments, indicating that the Exo-Rec design contributes positively to performance. - Observing **Pdf-Table-C**, SeeX's sensitivity to different hyperparameters shows similar conclusions for Walker and Quadruped: (1) Removing the separated world model significantly impacts performance; (2) Removing the Exo-Rec term aligns with the conclusions from **Pdf-Figure-B**; (3) SeeX's performance is generally positively correlated with the number of predictive heads $K$, but considering the computational cost associated with increasing $K$, $K=5$ is a suitable choice. Thank you again for your support and constructive feedback on our work. We believe these improvements will further enhance the quality and impact of the paper. [1] The IM algorithm: a variational approach to information maximization --- Rebuttal 2: Comment: Dear reviewer HYxU, Since the discussion phase deadline is approaching, we would like to send a friendly reminder. We greatly appreciate your time and dedication to providing us with your valuable feedback. We hope we have addressed the concerns, but if there is anything else that needs clarification or further discussion, please do not hesitate to let us know. Thanks, Authors --- Rebuttal Comment 2.1: Comment: I apologize for my late reply. Thank you to the authors for addressing my questions and providing additional results. 
I have updated my score to 4, but I still have a few questions: * Regarding the additional ablation results on Walker, why was a table used instead of a line plot, as in the paper? Additionally, for the ablations related to $\alpha$, why not use a line plot similar to Figure 5 (top-left)? I also did not understand why different plots were used for the ablation results on $\alpha$ in Jaco and Walker. * I share Reviewer NjAE's reservations about the novelty. To me, the main takeaway from this paper is that using only task-related information to measure disagreement is a better approach. Describing this as a completely new framework is unconvincing. Furthermore, making statements like "Any intrinsic reward can support SeeX's exploration" without experimental results to support them can be misleading and overly generalizing. --- Rebuttal 3: Comment: Thank you for your prompt and thoughtful response to our rebuttal. We appreciate the time and effort you have taken to review our rebuttal and provide additional feedback. Below are our responses to the new questions and concerns you raised: - **Additional ablation results on Walker** - As you mentioned in your review, "there are no data points at 200k/300k/400k for baselines," and we understand your concern. To address this, we have included the results in a table, providing more specific quantitative values to better illustrate the performance of different methods at various time steps. Given your preference for the qualitative line charts used previously, we will add the new data points to the line charts in the camera-ready version. - **Ablation of $\alpha$** - Firstly, thank you for pointing out in your review that Figure 5 was somewhat confusing.
In Figure 5, the top-left, top-right, and bottom-left plots show ablations with different values of $\alpha$ in various environments; since their content is consistent, we present them as bar charts in the PDF for a clearer comparison of their normalized returns. The bottom-right plot depicts ablation experiments on different modules, and we provide a more detailed analysis of these experiments in the table, which shows the specific numerical improvements. - Secondly, to avoid confusion for you and future readers, in the camera-ready version we will replace the previously inconsistent Figure 5 with a unified and clearer set of figures. Additionally, we will include the original line plots in the appendix for reference, so readers can still gain a qualitative understanding of the performance of the ablation experiments. - **Reservations about the novelty** - First, thank you for acknowledging that our approach of using only task-related information to measure disagreement is an improvement. Below, we provide a more detailed explanation of our novelty and contributions. - In real-world scenarios, there are many sources of action-independent and task-irrelevant noise. For example, when a drone is patrolling, surrounding buildings or distant vehicles do not directly affect the drone's flight path or the completion of its patrol task. Many current works overlook this issue, and we are **the first** to address such noise in the URL setting. Additionally, we propose a bi-level optimization framework centered around task-related information and design our methods based on this theoretical foundation. Finally, our experimental results demonstrate the effectiveness of our approach, and ablation studies confirm the utility of the different components. - **Statements without experimental results** - Given the URL problem setting, our model-based framework requires an intrinsic reward to assist in early exploration.
The intrinsic reward is modular and can be implemented in various ways. For example, both the disagreement measure $Var(\hat{h}^+_{t+1,i})$ used in our work and the variance of the log of predicted probabilities $Var(\log \hat{T}_i(z_t))$ used in [1] can serve as the intrinsic reward within our framework. - Our main contribution is a theoretically grounded framework for solving the URL problem with visual disturbances. In principle, the intrinsic reward can be implemented based on the ideas from the different methods in URLB [2], as well as the two specific approaches mentioned earlier, meaning our framework is **intrinsic-reward-agnostic**. As you mentioned, the experimental performance of these intrinsic rewards has indeed not been tested. We will consider conducting more extensive and comprehensive comparisons in future work. Thank you for your continued effort in providing multiple rounds of feedback to improve our work. Your questions have been very valuable. Please feel free to reach out with any further questions or concerns you may have in the future. [1] Offline Reinforcement Learning from Images with Latent Space Models [2] URLB: Unsupervised Reinforcement Learning Benchmark --- Rebuttal Comment 3.1: Comment: We hope our responses address your concerns. If you have any further questions or need clarification, please do not hesitate to reach out to us.
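As a minimal sketch of the ensemble-disagreement intrinsic reward discussed in the thread above (the variance $Var(\hat{h}^+_{t+1,i})$ across $K$ predictive heads), the following illustrative stdlib-only version may help; the function name and the per-dimension averaging are our assumptions, not the authors' implementation:

```python
from statistics import pvariance

def disagreement_reward(head_predictions):
    """head_predictions: list of K predicted next-state vectors, one per head.

    Returns the per-dimension population variance across heads, averaged
    over dimensions -- a scalar exploration bonus that is high where the
    ensemble disagrees (epistemic uncertainty) and zero where it agrees.
    """
    dims = len(head_predictions[0])
    per_dim_var = [
        pvariance([head[d] for head in head_predictions]) for d in range(dims)
    ]
    return sum(per_dim_var) / dims

# Heads that agree yield zero reward; disagreement yields a positive reward.
agree = disagreement_reward([[1.0, 2.0], [1.0, 2.0], [1.0, 2.0]])   # 0.0
spread = disagreement_reward([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])  # positive
```

In practice the heads would be learned forward-dynamics predictors over the endogenous latent state; here they are stand-in vectors.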
Summary: This paper considers the problem of unsupervised reinforcement learning (task-agnostic pretraining) from image observations in environments with visual distractors. The key technical contribution of this paper is a practical algorithm, SeeX, based on the world model framework common in MBRL literature. SeeX explicitly separates endogenous (agent has control over) and exogenous (agent has no control over) forward dynamics in the learned world model, such that downstream policy optimization can occur only with the endogenous branch of the world model. The intuition behind the approach is that, assuming that the agent has no control over visual distractors (i.e., actions do not affect the distractors), we can expect such information to be encoded only in the exogenous branch of the model, thus not affecting policy optimization. The proposed method is validated on URLB, a common benchmark for unsupervised RL from images, and compared against prior work in this area. **Post-rebuttal comment:** I believe that the authors have addressed my concerns and I am revising my score from 5 -> 6 and recommend acceptance. Strengths: - The problem is interesting and relevant to the NeurIPS community, as well as RL researchers and practitioners more broadly. I believe that the approach is fairly straightforward yet original. - Paper is generally well written and easy to follow. There is sufficient background for an unfamiliar reader to appreciate the technical contributions. - Theoretical analysis helps build an intuition for the proposed method. Experiments are well thought out and the proposed method is compared to a variety of strong baselines from the URLB. Results seem to indicate that the proposed method indeed benefits from its separate branches. Weaknesses: I do not have any major concerns with the paper in its current form. 
However, there are a couple of things that I had expected to see in the paper: - A very common and effective strategy for visual generalization in RL has been the use of data augmentation. There are numerous papers on the topic, many of which leverage simple and highly general augmentations such as random shift/crop, color jitter, and mixup/overlay. Based on the strong similarity in problem setting, I would expect data augmentation to work very well on the considered benchmark. Intriguingly, data augmentation seems to have the same limitation as the proposed method, in the sense that it cannot capture distractors that the agent can interact with (i.e., has some control over). I would like the authors to comment on the similarity between the two research directions, and potentially include some discussion on this in the paper. I would strongly suggest including 1-2 data augmentation baselines for the camera-ready version of the paper as well; I fully acknowledge that this may not be a reasonable ask for the rebuttal itself. - I believe that there is another limitation of the proposed method which I didn't see any mention of in the paper. As far as I understand, moving objects that are task-relevant but external to the agent may not be captured by the endogenous branch. For example, a multi-agent setting in which another agent may move independently, or perhaps an environment in which a task-relevant object is moving due to e.g. physics without the agent actively interacting with the object. Is my understanding correct? This seems somewhat distinct from the example given in L333. - I understand why the authors would choose to only optimize the policy using the endogenous branch. However, I believe it would be quite valuable to include an ablation in which the policy is optimized with access to both branches; it is not obvious to me whether this would have any notable impact on downstream task performance.
Technical Quality: 3 Clarity: 3 Questions for Authors: I would like the authors to address my comments in the "weaknesses" section above. While I do have concerns regarding baselines, I believe that strong verbal arguments and/or changes to the writing are sufficient to address my concerns during the rebuttal. However, I will expect the authors to follow through with any promises of additional experimental results or changes to the paper for a future camera-ready version. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discuss limitations in L331-334. However, I would like the authors to elaborate on the limitations of their approach. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review and valuable suggestions regarding our paper. We are pleased to hear that you find our work to have clear technical contributions and to perform well in experiments. In response to the questions and suggestions you have raised, we provide the following answers: - **Comparison between our research direction and data augmentation (DA)** - Our designed bi-level separation framework extracts task-relevant information, while the DA method extracts task-relevant representations through data augmentation. As you mentioned, there are indeed similarities between the two approaches. However, since there are few methods that directly use DA for exploration, we incorporated the DA method into both Plan2Explore and SeeX to investigate whether it enhances performance. This is to verify whether the DA method provides benefits in the moving distractor setting. - We used the classic data augmentation technique of random shift (pad by 4 pixels) proposed in DrQ[1], applying it 4 times to the same image. To thoroughly evaluate the effectiveness of DA, we tested its impact during both the pretraining (p-DA) and finetuning (f-DA) stages on the walker task. The performance results are shown in **Pdf-Table-A**. - By analyzing the data in the table, we can draw the following conclusions: - Applying data augmentation (f-DA) on specific tasks is effective, as it can mitigate the impact of distractors and improve performance, which supports the reviewer's insight. - However, for tasks with moving distractors, the separation design in SeeX performs better than data augmentation. - Using data augmentation during the pretraining stage leads to a performance drop, possibly because it interferes with the learning of the world model. The specific reasons will be explored in future work. 
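The DrQ-style random shift used in the augmentation experiments above (zero-pad by 4 pixels, then crop back at a random offset) can be sketched minimally; this is an illustrative stdlib-only version for a single-channel 2D image, not the authors' code:

```python
import random

def random_shift(img, pad=4, rng=random):
    """img: 2D list (H x W). Zero-pad by `pad` pixels on every side, then
    crop an H x W window at a random offset -- the DrQ random-shift
    augmentation. `rng` is anything exposing randrange()."""
    h, w = len(img), len(img[0])
    padded = [[0] * (w + 2 * pad) for _ in range(h + 2 * pad)]
    for i in range(h):
        for j in range(w):
            padded[i + pad][j + pad] = img[i][j]
    top = rng.randrange(2 * pad + 1)
    left = rng.randrange(2 * pad + 1)
    return [row[left:left + w] for row in padded[top:top + h]]
```

Applying this several times to the same frame (as done in the rebuttal's p-DA/f-DA setup with 4 repeats) yields slightly translated copies, which regularizes the value function without changing the image content.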
- **Further clarification on the content from L331-334** - Based on whether they affect the agent's reward function and action, distractors can be categorized into four types: task-irrelevant + action-independent (the setting addressed in our work); task-relevant + action-independent (which I understand to be the multi-agent setting you mentioned); task-irrelevant + action-dependent; and task-relevant + action-dependent. The latter two are the settings I referred to on L331-334, where distractors can interact with agents and even influence their reward function. - We believe the reviewer's understanding is correct and we appreciate you pointing this out. We will include the settings you mentioned in the camera-ready version. - **Ablation in which the policy is optimized with access to both branches** - Firstly, by formalizing the URL problem as a minmax problem, we further implement it as a separated world model consisting of two branches: endogenous and exogenous. The exogenous branch extracts task-irrelevant information, which is unrelated to the task. Therefore, it is reasonable to use only the endogenous branch to optimize the policy. - Secondly, I understand that you are referring to the ablation experiments for using both branches as inputs to the policy. To address your concerns comprehensively, we consider the following two scenarios: (1) using both exogenous and endogenous states simultaneously for policy optimization, and (2) merging the endogenous and exogenous branches into one, using a single encoder to extract the state $s$ for policy optimization. - (1) We added the following comparative experiments: both $s^+$ and $s^-$ are simultaneously input to both the actor $\pi(a|s^+,s^-)$ and the critic $v(s^+,s^-)$, with the action still applied to the endogenous branch. We compared the performance of this setup (SeeX-both-branch) with SeeX in the walker environment, and the specific performance curves are shown in **Pdf-Figure-A**. 
In all tasks, SeeX outperformed SeeX-both-branch, especially in the last three challenging tasks. This indicates that using only endogenous information is advantageous for tasks with moving distractors. - (2) Using a single encoder to encode observations for training the policy $\pi(a|z)$ is the approach of Plan2Explore [2]. The superiority of SeeX over Plan2Explore is demonstrated in Figures 3 and 4 of our paper, showcasing the effectiveness of the separated world model. - In summary, we believe that using only the endogenous branch for policy optimization is reasonable, and our experiments and the content in the paper have demonstrated this. Although using information from both branches offers no advantage under our assumption, as the reviewer noted, there are multi-agent settings where noise that is action-independent can still affect the agents' reward functions. In such cases, incorporating part of $s^-$ into policy optimization might be effective. This is an interesting research direction that we would be glad to explore in future work. Thank you again for your support and constructive feedback on our work. We believe these improvements will further enhance the quality and impact of the paper. We will ensure that the additional experimental results and any necessary changes are included in the camera-ready version of the paper. [1] Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels [2] Planning to Explore via Self-Supervised World Models --- Rebuttal Comment 1.1: Comment: Thank you for the detailed response to my questions. I believe that the new experimental results and clarification address my main concerns; the ablation using both branches is especially helpful in motivating the approach. I have increased my score from 5 -> 6 with the expectation that the authors incorporate all reviewer feedback into a camera-ready version.
--- Reply to Comment 1.1.1: Comment: Thank you very much for your thoughtful feedback and for taking the time to review our paper. We sincerely appreciate your constructive comments and are pleased to hear that the new experimental results and clarifications address your main concerns. We are especially grateful for your recognition of the ablation study involving both branches, and we will certainly incorporate all the feedback provided into the camera-ready version of the paper. Your input has been invaluable in helping us refine our work and ensure its quality. Thank you once again for your support and for increasing the score. We are committed to making the necessary revisions to meet the expectations and enhance the clarity and impact of our paper.
Summary: The authors propose a method for separating endogenous and exogenous latent states for unsupervised exploration under visual distractors. Motivated by a theoretical bound minimizing the regret under a latent world model, the algorithm learns both an endogenous and an exogenous world model, as well as an exploratory policy trained to maximize an intrinsic reward. They evaluate on continuous control tasks with various distractors, showing their method outperforms other baselines. They perform additional experiments on how pretraining affects downstream finetuning RL performance and how their choice of intrinsic reward is a good approximation of the total variation found in the theoretical bound. They also provide analysis of parameter choices for their algorithm. Strengths: The algorithm is extremely well-motivated from theory, and the paper does a good job arguing that the assumptions made in theory match reality. The paper is clear and rich with useful insights, and the experimental results are strong. Weaknesses: Since the method performed so well in these settings, it would have been nice to see how it performed on a real-world task under realistic endogenous/exogenous conditions. Technical Quality: 4 Clarity: 4 Questions for Authors: Can you see this method helping in something like a self-driving or navigation task in the real world? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes, they discuss the important limitation that the endogenous and exogenous states are always fully separate and don't interact with one another. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable feedback on our work. We are very pleased that you found our algorithm theoretically motivated and experimentally successful. In response to your questions and suggestions, we offer the following points: - **Exploration of Real-World Applications** - Our work focuses on handling moving distractors, where we need to separate task-relevant states from task-irrelevant states. Many common downstream tasks fit this setup. For example, consider a book-finding robot in a library that needs to scan the shelves to find a specific book. The robot will encounter task-relevant objects (such as books) and task-irrelevant objects (such as posters on the wall or passing people), so it needs to distinguish which information is beneficial for the task. - We agree with your view that real-world applications are crucial for validating the practicality of the algorithm. However, the self-driving and navigation challenges you mentioned are difficult to analyze and computationally expensive in practice. We would be willing to address them in future work, which will involve designing experiments closer to real-world conditions to verify the algorithm's effectiveness in handling complex real-world scenarios. - **Discussion on Method Limitations** - The assumption of completely separating endogenous and exogenous states that you mentioned is indeed a limitation of our method. We have discussed this in the paper, and our work has preliminarily validated the effectiveness of the bi-level optimization framework and the separated world model under the setting where endogenous and exogenous information is fully separable. In future work, we will explore how to introduce interactions between the states to better understand and improve the applicability of the algorithm. Once again, thank you for your support and constructive feedback on our work.
We believe these improvements will further enhance the quality and impact of the paper. --- Rebuttal 2: Comment: Dear reviewer SZqk, Since the discussion phase deadline is approaching, we would like to send a friendly reminder. We greatly appreciate your time and dedication to providing us with your valuable feedback. We hope we have addressed the concerns, but if there is anything else that needs clarification or further discussion, please do not hesitate to let us know. Thanks, Authors --- Rebuttal Comment 2.1: Title: Rebuttal Response Comment: I thank the authors for their response and discussion! I have no further questions at this time --- Reply to Comment 2.1.1: Comment: Thank you for your kind words and for taking the time to review our responses and discussion. I appreciate your positive feedback and will carefully consider any potential improvements to further enhance the quality of our paper. If you have any additional comments or suggestions in the future, please do not hesitate to reach out.
Rebuttal 1: Rebuttal: We thank the reviewers for taking the time to give useful comments on our paper. We are glad that the reviewers appreciate the sound motivation (Reviewer SZqk), original approach (Reviewer wcfo), and strong experimental results (Reviewers SZqk, wcfo). The reviewers pointed out concerns and points for improvement, and we have responded to each of them. We presented more experimental results and detailed responses to questions about our assumptions and approach. We kindly request the reviewers to inform us if there is anything else we can clarify. Thank you for taking the time to review our work. Pdf: /pdf/efaebe6c3f1876a075785a2ba3fbef083ca0e1ed.pdf
NeurIPS_2024_submissions_huggingface
2024
The GAN is dead; long live the GAN! A Modern GAN Baseline
Accept (poster)
Summary: A lot of GAN papers have shrunk in quality since diffusion arose as a strong method. The authors go against the grain by focusing on modernizing the GAN baseline in a principled manner, and in doing so they obtain stable, convergent training with diverse, high-quality images comparable to diffusion models. The authors highlight the fact that a lot of the GAN literature is rife with empirical tricks with no theory to back them. They explain that StyleGAN without tricks is very close to the original DCGAN of 2015. Meanwhile, much more progress has been made in diffusion models, leveraging many architectural improvements. They instead start with a relativistic pairing GAN (RpGAN) baseline which, unlike regular GANs, has been shown to have no local-minima basins where the loss must worsen before reaching the true global minimum. However, they show that, just like regular GANs, RpGANs are not guaranteed to converge to the global minimum. They show that, just like for regular GANs, a gradient penalty on real data (R1) or fake data (R2) guarantees convergence, and they suggest using both together. They then show empirically that on StackMNIST, GANs with R1 alone fail, but combining R1, R2, and RpGAN leads to perfect mode coverage and strong stability. As further justification, they also leverage the fact that, from a constructive approach, deriving a maximum-margin GAN yields a gradient penalty with both R1 and R2. Then they use a very careful ablation approach, starting from StyleGAN2, removing all the complicated empirical tricks but keeping the R1 penalty (which reduces performance a bit), then moving to RpGAN, adding the R2 penalty, then modernizing the architecture, reaching slightly better performance at a similar number of parameters, but without tricks. They describe extensively and carefully the effect of different architecture choices on performance.
Experiments on CIFAR-10, ImageNet-32, and FFHQ-256 are shown where the proposed method achieves extremely low FID, better than diffusion models, and does so while keeping the parameter count low. Results are very good, but work is still needed to scale these methods to text-to-image settings. However, the current results are very promising, and the method has been made very clean and carefully designed (similar to ConvNeXt). Strengths: Experiments on CIFAR-10, ImageNet-32, and FFHQ-256 are shown where the proposed method achieves extremely low FID, better than diffusion models, while keeping the parameter count low. The derivation of their architecture is done carefully in a thorough manner. Weaknesses: Great results on image datasets, but the method still needs to be scaled to the large-scale text-to-image setting. Many GPUs are used, even for CIFAR-10; this is also in line with diffusion models, but it seems like a lot: are all these GPUs needed for CIFAR-10? Although cleaner, the new baseline still contains a lot of complicated hyperparameter tuning, which changes from one dataset to another (in Table 7). It would be great to clean up this set of hyperparameters; I suggest some potential ideas in the "questions". Technical Quality: 4 Clarity: 4 Questions for Authors: - Any intuition as to why performance is worse with GELU and Swish activations, since these are very successful with transformers? Is this true for both G and D? - Have you considered a transformer architecture as an alternative to the ConvNeXt-style architecture? - You mention that normalization can harm GANs, but GroupNorm is used in diffusion models and LayerNorm is used in language models; could this be useful for your approach? Has it been tried before? I saw the references you mentioned that say normalization is bad for GANs, but I haven't looked at them individually. - You mention that PresGAN and DDGAN obtain good performance; do you think that some of their ideas could be useful for improving the GAN baseline?
- What is the EMA half-life? Is the EMA 0.999? - Why do you schedule Adam $\beta_2$? This seems very atypical. - Assume $\alpha$ is the learning rate. Rather than scheduling the R1/R2 $\gamma$ manually, why not make it $\alpha\gamma$, so that decreasing the lr $\alpha$ over time also reduces $\gamma$ automatically (thus reducing the hyperparameter tuning)? The same could be done with $\beta_2$, by using $1-\big((\alpha/\alpha_{\text{init}})\,(1-\beta_2)\big)$, so moving $1-0.9=0.1$ to $0.01$ ($1-0.99$) when decreasing the lr to $1/10$ of its value over training. I just feel that choices like this would greatly reduce the arbitrary hyperparameter tuning used here, which is quite complex in Table 7. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 9 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. > “... still need to be scaled to large-scale text-to-image setting.” For scaling, we are currently running ImageNet-64 experiments to include in our paper; please see our discussion in response to Reviewer 2Hzz. In general, we hope our work is a meaningful first step and that future works can better assess its scaling. > “Many GPUs are used, even for CIFAR-10; this is also in line with diffusion models, but it seems a lot, are all these GPUs needed for CIFAR-10?” Using 8 GPUs to train CIFAR-10 is not required; our models can be trained with fewer GPUs at the cost of slower training speed. But it is becoming common practice, e.g., for diffusion models; Karras et al. used 8 GPUs to train their CIFAR-10 models in EDM. > “Any intuition to why worse performance with GELU and swish activations since these are very successful with transformers? Is this true for both G and D?” In experiments, we tried to apply GELU/Swish/SMU to both G and D and found that doing so deteriorates FID considerably. We did not try applying them to only one of G or D. We posit two independent factors: - ConvNeXt in general does not benefit much from GELU (and possibly similar activations). See Table 10 and Table 11 in https://arxiv.org/abs/2201.03545: replacing ReLU with GELU gives little performance gain to ConvNeXt-T and virtually no performance gain to ConvNeXt-B. - In the context of GANs, GELU and Swish have the same problem as ReLU: they have little gradient in the negative interval. Since G is updated from the gradient of D, having these activation functions in D could sparsify the gradient of D, and as a result G will not receive as much useful information from D compared to using leaky ReLU. This does not explain the strange case of SMU (https://arxiv.org/abs/2111.04682): SMU is a smooth approximation of leaky ReLU and does not have the sparse-gradient problem. It is unclear why it also underperformed; we leave this to future work.
We are happy to add this discussion to the supplemental. > “Have you considered a transformer architecture?” We have not conducted any experiments, but are also interested to see how they perform. In particular, whether adding attention blocks to a ConvNet (similar to BigGAN and the diffusion UNet) or using a pure transformer architecture (similar to DiT) will result in stronger performance. Given the impressive results of EDM2 (which uses a UNet), it seems the argument has not yet settled for generative modeling. > “You mention that normalization can harm GANs, but groupnorm is used in diffusion models and layernorm is used in language models, could this be useful for your approach? Has it been tried before? I saw the references you mentioned to say that normalization is bad for GANs.” We tried adding GroupNorm to G and D and it did not improve FID or training stability. We would like to clarify that we do not intend to claim that all forms of normalization are harmful. Our claim only extends to normalization layers that explicitly standardize the mean and stddev of the activation maps. This has been verified by prior studies: - Instance norm harms GANs: https://arxiv.org/abs/1912.04958 - Group norm harms diffusion models: https://arxiv.org/abs/2312.02696 - Group norm harms VAEs: https://arxiv.org/abs/2405.14477, pg. 5 The harm of normalization layers extends to the adjacent field of image restoration (https://arxiv.org/abs/1707.02921, sec 3.1; https://arxiv.org/abs/1809.00219, supplemental sec 1). To the best of our knowledge, EDM2 is currently the strongest diffusion UNet and it does not use normalization layers. However, it does apply normalization to the trainable weights, and this improves performance considerably. We expect that applying the normalization techniques in EDM2 would improve our model's performance. > “You mention that PresGAN and DDGANs obtain good performances, do you think that some of their ideas could be useful?” Yes, it is possible.
We excluded these ideas as they are not an indispensable part of a minimal GAN framework. DDGAN in particular combines GANs and diffusion models - a promising research direction. Prior work has explored this in various flavors: DDGAN formulates GAN as a fast diffusion solver; Diffusion GAN (https://arxiv.org/abs/2206.02262) applies diffusion to GANs as a non-leaky augmentation; Diffusion2GAN (https://arxiv.org/abs/2405.05967) distills a diffusion model into a GAN; PaGoDA (https://arxiv.org/abs/2405.14822) inverts a pretrained diffusion model to obtain the posterior p(z | x) and facilitates the combination of GANs and MLE. > “What is EMA half-life? Is the EMA .999?” We follow the notation of Karras et al. in StyleGAN and EDM. EMA beta can be obtained using the formula ema_beta = 0.5 ** (batch_size / ema_half_life). > “Why do you schedule Adam B2? This seems very atypical.” It is atypical. The gradient magnitude of G and D varies drastically early in training. This effect is further amplified when we use a large initial learning rate, leading to large loss oscillations early on. Using a constant Adam beta2 such as 0.99 produces a loss that always stabilizes in a few hundred steps, and the initial oscillations do not seem to have any long-term negative impact. However, by annealing Adam beta2, we can reduce the initial loss oscillations: now, we no longer need to wait a few hundred steps for the loss to settle. In particular, a lower initial Adam beta2 allows the optimizer to adapt to large gradient magnitude changes much quicker and so stabilize training earlier. Similar reasoning for tweaking Adam beta2 is given in EDM2 (https://arxiv.org/abs/2312.02696, pg. 18). The benefit of scheduling Adam beta2 is marginal, but since we already schedule multiple hyperparameters, we might as well schedule Adam beta2 too. > Hyperparameter scheduling (question truncated for space). 
We agree that your suggestion will ease hyperparameter tuning, and we plan to introduce something similar for the automatic configuration of our model. --- Rebuttal Comment 1.1: Title: response Comment: Thank you for addressing my questions and concerns. Having some of these short discussions in the appendix would indeed be useful to the reader. The extra ImageNet-64 experiment will also help. I will increase my score by 1. This method is particularly good and helps bring GANs extremely close to diffusion performance, which is quite important for the field of GANs to regain usefulness. The design choices are quite interesting. This reminds me a bit of the very useful experiments and detailed discussions in the BigGAN paper, which really helped pave the way to better GANs.
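The EMA half-life convention discussed in the rebuttal above can be written as a tiny helper. This is a hedged sketch: the names are ours, and the unit convention (batch size and half-life measured in the same unit, e.g. images, following our reading of Karras et al.) is an assumption, not the authors' code.

```python
def ema_beta(batch_size, ema_half_life):
    """Per-step EMA decay rate such that a snapshot's contribution
    halves after ema_half_life / batch_size optimizer steps."""
    return 0.5 ** (batch_size / ema_half_life)

# e.g. batch size of 64 images and a half-life of 500k images
beta = ema_beta(64, 500_000)
# After 500_000 / 64 steps, beta ** n_steps decays to exactly 0.5.
```

Note that a fixed beta like 0.999 corresponds to a half-life that scales with the batch size, which is why the half-life parameterization is more portable across training configurations.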
Summary: This paper introduces R3GAN as a GAN baseline that simplifies and modernizes the architecture by replacing ad-hoc tricks with modern designs. R3GAN uses a regularized relativistic GAN loss coupled with zero-centered gradient penalties on both real and generated data to address mode dropping and non-convergence, providing local convergence guarantees. The proposed loss allows for the elimination of unnecessary tricks and the adoption of advanced architectures. R3GAN demonstrates improved stability and competitive performance against state-of-the-art models on multiple datasets such as StackedMNIST, CIFAR, ImageNet-32, and FFHQ-256. Strengths: + The writing of the paper is clear, making it easy to understand and easy to follow. + One contribution of this paper is the simplification and optimization of the StyleGAN network architecture, which, as demonstrated by experiments, achieves considerable performance without the need for many elaborate tricks. + The paper provides theoretical analysis to prove the convergence of the loss used. Weaknesses: - The novelty of the method is somewhat limited, as both relativistic pairing GAN (RpGAN) and zero-centered gradient penalties (0-GPs) are previously proposed and validated approaches in the field of GANs. - Small-scale experiments to validate the significant improvements in stability and diversity provided by the proposed loss are insufficient; more complex datasets should be used to verify its effectiveness. As shown in Table 2, the improvement from config C compared to config B is not significant. Generally, stability can allow the model to converge better, thereby enhancing the final results, and diversity can be measured by metrics such as recall or coverage. Technical Quality: 2 Clarity: 2 Questions for Authors: How scalable is the proposed model? It is suggested to validate the effectiveness of the method on higher-resolution datasets and more complex tasks. 
This would align with the motivation to serve as a modern baseline, especially in the current context where both model parameter size and training data scale are increasing. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have discussed the limitations and potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. > “The novelty of the method is somewhat limited, as both relativistic pairing GAN (RpGAN) and zero-centered gradient penalties (0-GPs) are previously proposed and validated approaches in the field of GANs.” While these components have been proposed separately, none matches our contributions: - **New theoretical and empirical contribution**: 0-GPs have been proposed in the context of classic GANs, but nobody has studied how they interact with the training dynamics of RpGANs. We provide both theoretical and empirical evidence on how they address the non-convergence of RpGANs, which ultimately allows us to derive a well-behaved GAN loss that is resilient to GAN optimization problems. - **New practical contribution**: We establish a clean and cohesive GAN framework with just the must-have components. With the new loss, we show that a simple non-StyleGAN based architecture can beat StyleGAN2 by a large margin. > “How scalable is the proposed model? It is suggested to validate the effectiveness of the method on higher-resolution datasets and more complex tasks. This would align with the motivation to serve as a modern baseline, especially in the current context where both model parameter size and training data scale are increasing.” We present early evidence: To complement our ImageNet-32 result, we are currently running ImageNet-64 experiments by stacking another stage onto our ImageNet-32 architecture. Note that, to evaluate EDM, Karras et al. (2022) used ImageNet-64 as their biggest dataset. Without hyperparameter tuning, we achieved FID 2.56, which is in between ADM (FID: 2.91, 250 DDIM steps) and EDM (FID: 2.23, 79 steps with EDM sampler). We achieve this without techniques like adaGN and attention in ADM/EDM. Our result uses ~90M parameters while EDM and ADM use ~300M. This is evidence that our model is more efficient at its current scale and will hopefully scale well. 
For more complex tasks, we are unsure what the reviewer would like to see - please suggest something. To the best of our knowledge, ImageNet is one of the largest and best-studied closed-world datasets. If the reviewer meant extending the setting to open-world text-to-image generation, then this requires significantly more compute (and monetary cost!). It may also require additional techniques like text encoders, self/cross-attention, and conditional modulation. More generally, addressing scalability is secondary to our top priority of addressing the GAN “myths” that led to the shrinking (as YF9j puts it) of GAN research: GANs are difficult to train, GANs do not work without many tricks, GANs are fragile and do not work with powerful modern vision backbones. Scalability, while also highly important to GAN applications, is dependent upon proving the feasibility of a clean GAN framework, via a minimal model that beats prior work. Of course, we agree with the reviewer on the practical importance of scalability. We are not OpenAI or Meta with thousands of GPUs, but our added experiment will provide evidence for the scalability of our model given our compute constraints. Finally, we will open source our code and pretrained models and we welcome the community to test the scalability of our model. > "Small-scale experiments to validate the significant improvements in stability and diversity provided by the used loss are insufficient; more complex datasets should be used to verify its effectiveness." We acknowledge that stability and diversity are only directly validated on StackedMNIST (Figure 1 / Table 1). We have added diversity via the recall metric for all other experiments below. Concerning stability, we propose to add convergence plots for all datasets and for loss-ablated models, similar to Figure 1, in supplemental material, with discussion in the main paper. 
That said, StackedMNIST itself should not be dismissed so easily: it is a challenging case for stability because of the limited data and its all-black background, which makes it extremely easy for D to reject fake samples early on, causing instability. FFHQ is generally more stable and the original StyleGAN loss is likely to work in terms of stability, but we show that our approach gives better results in terms of FID. > [Following on; concerning the impact of stability on performance] “As shown in Table 2, the improvement from config C [well-behaved loss] compared to config B [no well-behaved loss] is not significant.” Improving performance may need both a well-behaved loss _and_ a more powerful backbone, as is the case here. Both Config-B and C have a weak DCGAN-style backbone. Assuming convergence is stable, G and D must still be sufficiently powerful for a given dataset, otherwise performance will saturate regardless of the loss used. With a weak backbone, it is not surprising that replacing the loss does not result in a drastic improvement. With a stronger backbone in D and E, performance increases. Adding convergence plots for this more complex FFHQ data will contextualize this result with respect to stability. > “... and diversity can be measured by metrics such as recall …” Thanks - our oversight - we report recall numbers below. On ImageNet-32, we obtain a recall of 0.63, comparable to ADM (recall 0.63). On FFHQ-256, after gamma hyperparameter tuning, we achieved a lower FID of 2.77 and a recall of 0.50, outperforming StyleGAN2 (FID: 3.78, recall: 0.43). On CIFAR-10, we achieved a slightly better FID of 1.97 and recall 0.57 after gamma hyperparameter tuning. By comparison, StyleGAN-XL has a slightly lower FID of 1.85 but a much worse recall of 0.47. StyleGAN2-Ada has a considerably worse FID of 2.42 but a slightly higher recall of 0.59. We attribute the recall gap to hyperparameter tuning, as we tune hyperparameters for best FID performance. 
For ablations, Config-C, using our new loss, achieved recall 0.24, and Config-B with the original StyleGAN loss achieved a worse recall of 0.22. --- Rebuttal Comment 1.1: Comment: I appreciate the author’s response. Considering that the authors reply to my questions and addressed some of my concerns, I decided to increase my score to Borderline Accept. However, I still believe that conducting experiments on higher resolution datasets (such as ImageNet-128, ImageNet-256, ImageNet-512) would significantly enhance the contribution of this paper to the GAN research field.
Summary: The authors posit that the main reason GAN research has been slow in recent years is that the foundational StyleGAN2 has not undergone major architectural changes, essentially due to the lack of a convergence guarantee in GAN objectives and their proneness to mode collapse. This has limited the scope of architectural changes and further GAN research. Two identifiable goals of the paper are to mathematically derive a well-behaved regularized loss which guarantees GAN convergence and diversity, providing empirical evidence for the same, and, now being free to make unconstrained design choices, to develop a GAN baseline which inherits features from modern vision architectures. The authors' methodology is inspired by two key works: (i) RpGAN [20], which addressed mode dropping in GANs with a better loss landscape, theoretically confirmed by Sun et al. [65] to contain only global minima; and (ii) Mescheder et al. [43], which provided evidence for local convergence with zero-centered gradient penalties in GAN objectives. By extending zero-centered gradient penalties to the RpGAN formulation, the authors address both mode dropping and unstable training. They offer theoretical proof for local convergence and present experimental evidence supporting global convergence, though no theoretical proof for the latter is provided. Design changes in Config D are motivated by ResNet [14] and ConvNeXt [41], selectively applying aspects consistent with the authors' observations. Following findings in [31, 29], they eliminate normalization in StyleGAN in B and introduce fix-up initialization [77] in D to prevent variance explosion. Changes in E are inspired by ResNeXt [74], incorporating group convolutions to increase bottleneck width, followed by bottleneck inversion for efficient use. This simple baseline is empirically found to outperform previous GAN methods and some recent diffusion-based models on several unconditional and conditional generation tasks. 
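The objective summarized above — relativistic pairing with zero-centered gradient penalties on both real and fake data — can be sketched minimally in NumPy. This is a hedged illustration, not the authors' implementation: it uses one common convention f(t) = softplus(t) for the relativistic discriminator loss, and for simplicity the gradients of D with respect to its inputs are passed in precomputed rather than obtained via autograd.

```python
import numpy as np

def softplus(t):
    # Numerically stable log(1 + exp(t)).
    return np.log1p(np.exp(-np.abs(t))) + np.maximum(t, 0.0)

def rpgan_d_loss(d_real, d_fake, grad_real, grad_fake, gamma=1.0):
    """Relativistic pairing discriminator loss with zero-centered
    gradient penalties: R1 on real data and R2 on fake data."""
    adv = softplus(d_fake - d_real).mean()    # D wants d_real >> d_fake pairwise
    r1 = (grad_real ** 2).sum(axis=1).mean()  # ||grad_x D(x_real)||^2
    r2 = (grad_fake ** 2).sum(axis=1).mean()  # ||grad_x D(x_fake)||^2
    return adv + gamma / 2.0 * (r1 + r2)

# A confident D with flat gradients at the data incurs near-zero loss ...
loss = rpgan_d_loss(np.array([10.0]), np.array([-10.0]),
                    np.zeros((1, 2)), np.zeros((1, 2)))
# ... while nonzero input gradients are penalized toward zero.
loss_pen = rpgan_d_loss(np.array([10.0]), np.array([-10.0]),
                        np.ones((1, 2)), np.zeros((1, 2)))
```

The pairing of each real sample with a fake sample (rather than judging each in isolation) is what gives RpGAN its better-behaved loss landscape; the zero-centered penalties supply the local convergence guarantee.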
Strengths: 1. The paper presents a combination of well-known techniques by extending the RpGAN formulation with zero-centered gradient penalties. 2. While the individual components like RpGAN, zero-centered gradient penalties, and modern vision architectures (ResNet, ConvNeXt) are not new, their integration into a cohesive GAN framework is a notable contribution. 3. The submission is technically sound with well-supported claims through theoretical analysis and experimental results. 4. The paper is clearly written and well-organized. 5. The results are significant, with improved performance over existing GANs and some diffusion models on various datasets. Weaknesses: 1. While experimental results are well presented, there seems to be a lack of information about the training setup, hyperparameters used, number of inference steps, etc., pertaining to the diffusion-based models in Tables 4, 5 and 6. It would be useful to include this information in the supplementary material or the tables themselves for a transparent comparison. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. To clarify, on line 132 on page 4, “857” corresponds to StyleGAN backbone trained on WGAN-GP loss and “881” corresponds to the mini-batch standard deviation trick added to the same? It would be better to clarify the exact configuration and include the two cases in table 1 and figure 1. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. > “there seems to be a lack of information about the training setup, hyperparameters used, number of inference steps, etc pertaining to the diffusion based models in Tables 4, 5 and 6.” The diffusion model numbers in these tables are directly taken from reports in existing papers. Below, we have added the commonly-reported number of function evaluations (NFEs) for each. For the camera ready, we will add more detailed training setup descriptions and a hyperparameter table. Table 4: - LDM: from https://arxiv.org/abs/2112.10752, Table 18, NFE: 200 - ADM (DDIM & DPM-Solver) & Diffusion Autoencoder: from https://arxiv.org/abs/2312.06285, Table 1, NFE: 500 Table 5: - DDPM: from https://arxiv.org/abs/2006.11239, Table 1, NFE: 1000 - DDIM: from https://arxiv.org/abs/2010.02502, Table 1, NFE: 50 - VE & VP: from https://arxiv.org/abs/2206.00364, Table 2, NFE: 35 Table 6: - ADM: from https://arxiv.org/pdf/2301.11706, Table 3, NFE: 1000 - DDPM-IP: from https://arxiv.org/pdf/2301.11706, Table 3, NFE: 1000 - VDM: from https://proceedings.neurips.cc/paper_files/paper/2021/file/b578f2a52a0229873fefc2a4b0, NFE: 1000 - DDPM++: from https://openreview.net/forum?id=PxTIG12RRHS, NFE: 1000 > “To clarify, on line 132 on page 4, “857” corresponds to StyleGAN backbone trained on WGAN-GP loss and “881” corresponds to the mini-batch standard deviation trick added to the same? Would be better to clarify the exact configuration…” Not quite. Although the mini-batch standard deviation trick was popularized by StyleGAN, it was introduced in Progressive GAN (https://arxiv.org/abs/1710.10196). Thus, we took these numbers from Table 4 of the Progressive GAN paper. “857” corresponds to a low-capacity version of the progressive GAN trained with WGAN-GP loss and “881” adds the minibatch stddev trick. We will clarify the reference in the camera-ready paper. > “... 
and include the two cases in table 1 and figure 1.” While in principle this is a good idea to contextualize the gain, in practice it is a little tricky: ProGAN/StyleGAN are quite different models, and Table/Figure 1 aim to show without complication the effect of different losses on our small simple model. We cannot ‘strip’ ProGAN/StyleGAN meaningfully, and so their performance may be better than what our simple model achieves. We do not want to confuse the reader by adding new lines to the plots that are incomparable, obfuscating the point we are trying to make about stability. If the reviewer has other ideas for how to make this comparison, we welcome them. --- Rebuttal Comment 1.1: Comment: Thanks for your response. After going through the response and other response to other reviewers, I have decided to keep my current ratings.
Rebuttal 1: Rebuttal: Thank you everyone for your constructive feedback. In summary, all reviewers found that the paper had strengths: - The paper is clearly written. - The claims are well supported with both theoretical and empirical evidence. - The theoretical insights allow a method that is simpler than past GAN works with fewer tricks. - The results are significant, being better than many existing GANs and some diffusion methods. However, there are some weaknesses and questions: - Novelty: Reviewer 2Hzz believes that novelty is somewhat limited as both RpGAN and zero-centered gradient penalty works both exist; reviewer otaU notes that this is true but that "their integration into a cohesive GAN framework is a notable contribution." - Technical Details: Reviewer otaU requests training details for comparison methods and a technical clarification, and reviewer YF9j poses numerous small questions to help us improve our technical explanation. We answer each of these questions. - Stability/Diversity Evaluation: Reviewer 2Hzz states that "small-scale experiments to validate the significant improvements in stability and diversity ... are insufficient; more complex datasets should be used." We report the recall metric for all datasets in this rebuttal to evaluate diversity, and propose plots to show stability on our more complex datasets. - Scalability: Reviewer 2Hzz asks "How scalable is the proposed model?" Reviewer YF9j notes this issue too - "work is still needed to scale these methods to text-to-image settings" - but is satisfied to support the work as the "current results are very promising." To provide evidence for scalability, we report preliminary results on ImageNet-64 to contrast to our ImageNet-32 results for this task. But, we defer more significant scalability evaluation to future work - we will open source our code and pretrained models to make this easier. 
We hope to answer these questions in detail in response to each individual reviewer, and we look forward to further discussion with you all.
NeurIPS_2024_submissions_huggingface
2024
MindMerger: Efficiently Boosting LLM Reasoning in non-English Languages
Accept (poster)
Summary: The paper introduces MergeMinds, a method that integrates LLMs with multilingual models to enhance reasoning capabilities across multiple languages. Through a two-step training scheme, MergeMinds effectively boosts performance in multilingual reasoning and language understanding tasks, particularly excelling in low-resource languages, as demonstrated by significant accuracy improvements on various datasets. Strengths: 1. This work proposes the MergeMinds method, which offers a novel solution by merging LLMs with external multilingual capabilities through a two-step training process, enhancing the robustness of the model. 2. The method shows substantial improvements in multilingual reasoning and language understanding tasks, particularly with notable gains in low-resource languages, demonstrating the effectiveness of the approach. Weaknesses: 1. From my perspective, the motivation of MergeMinds is extremely similar to that of LangBridge, both of which utilize a mapping layer to map a multilingual encoder to an existing MLLM. 2. In fact, the work of this paper has already been explored on multi-modal large models a long time ago, such as InstructBlip's Qformer and LLaVA's two-stage alignment training. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. It appears that the Te language is not included in the MGSM. Could you explain the reason for this omission? 2. I noticed that some work similar to xCoT was referenced but not included in the comparative table. What was the rationale behind this decision? 3. In my view, does this work adapt the paradigm from InstructBlip's Qformer and LLaVA's two-stage alignment training to a multilingual setting? 4. 
Missing Reference [1-3] [1] MAPO: Advancing Multilingual Reasoning through Multilingual-Alignment-as-Preference Optimization [2] The Power of Question Translation Training in Multilingual Reasoning: Broadened Scope and Deepened Insights [3] Multilingual large language model: A survey of resources, taxonomy and frontiers Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your encouragement of our work. We reply to your questions as follows. **W1: From my perspective, the motivation of MergeMinds is extremely similar with LangBridge, both of which utilize the mapping layer to map multilingual encoder to existing MLLM.** - In related works about MLLM, the LLM does not have visual capabilities at all. Compared with this, the most significant difference in our research is that the LLM has built-in multilingual capabilities. - Based on this difference, we need to explore whether there is a more efficient training solution than MLLM to effectively activate the multilingual capabilities built into the LLM. - LangBridge is a solution like the works in MLLM that only considers integrating an encoder to obtain capabilities that are not available in the LLM. LangBridge does not consider boosting the built-in multilingual capabilities of the LLM, which results in its performance being significantly lower than MindMerger. It lags behind MindMerger by 8.0% and 7.1% in low-resource languages and all languages in the MGSM dataset. Therefore, different from the existing work on MLLM, with MindMerger as a starting point, there are still many challenges to explore in the future to stimulate the multilingual as well as other built-in capabilities of LLMs. **W2, Q3: The work of this paper has already been explored on multi-modal large models. Does this work adapt the paradigm from InstructBlip's Qformer and LLaVA's two-stage alignment training to a multilingual setting?** - Because the focus of our work is on how to combine multilingual capabilities from the external model and the LLM itself, we face the challenge of avoiding over-reliance on the capabilities of the external multilingual model during training. Similarly, we also face the challenge of avoiding over-reliance on the multilingual capabilities built into the LLM during training. 
- Moreover, our goal is to activate the multilingual capabilities of the LLM rather than to add new capabilities it does not have, so it is possible to explore a more efficient training solution than in MLLM. It can be observed from Table 6 in the paper that MindMerger has surpassed all baselines using only 5,000 training samples per language in the second stage. - In addition, the research on integrating multilingual models into LLMs is still under-explored and differs from the existing work on MLLM. For example, what data to use for training, what structure to use for integration, etc. still need more insightful discussion.

| MGSM | Te | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Lrl | Hrl |
|------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|
| xCoT | 42.8 | 40.4 | 49.2 | 48.4 | 50.0 | 50.0 | 47.2 | 49.6 | 50.0 | 48.8 | 48.4 | 45.2 | 49.1 |
| LangBridge | 34.8 | 42.8 | 50.4 | 43.2 | 40.0 | 45.2 | 50.8 | 52.4 | 56.4 | 58.0 | 63.2 | 42.8 | 52.3 |
| MindMerger | **52.0** | **54.0** | **54.0** | **55.6** | **51.6** | **54.8** | **60.0** | **59.6** | **57.6** | **63.2** | **65.2** | **53.9** | **58.9** |

**Q1: It appears that the Te language is not included in the MGSM. Could you explain the reason for this omission?** - This is not a technical issue: since the QAlign and MathOctopus work we followed did not compare on Te, we omitted Te as well. - We added the Te experiment in the table above. MindMerger is still the best model on Te, with a 9.2% improvement over xCoT. **Q2: I noticed that some work similar to xCoT was referenced but not included in the comparative table. What was the rationale behind this decision?** - We selected the two strongest methods for each of the two categories of baselines: relearning-based and replacement-based. - xCoT is a relearning-based method, but its performance is lower than MultiReason (MultiReason-Lora) and QAlign. 
Considering the page limit of the paper, we did not include it in the comparison. - We show the performance of xCoT in the table above, and we will consider adding it to the appendix. **Q4: Missing Reference.** - Thank you for the kind reminder; we will cite them in the next version of the paper. Best wishes. --- Rebuttal 2: Comment: Thanks for your great response. I have no additional problems.
Summary: This paper proposes a framework which merges LLMs with the external language understanding capabilities from multilingual models to improve multilingual reasoning performance. Specifically, the authors introduce a two-step training scheme: they (i) train the framework to embed the multilingual model into the LLM by using translation data, and (ii) further train the framework to collaboratively utilize the built-in capabilities of the LLM and the embedded external language capabilities. Strengths: - This paper proposes a new method to boost the multilingual reasoning of LLMs, which embeds external language understanding capabilities from multilingual models into LLMs. - Extensive experimental results and analyses on several datasets demonstrate the effectiveness of the proposed method. Weaknesses: - In Section 3.2, the authors mention that they use translation data and query translation task data generated from public translation models for the two-stage training; while in lines 182-192, it shows that they also use several English task datasets, such as MetaMathQA. It is unclear how they use these data. - Although the authors mention that only a small number of parameters are trained during the two-stage training, the paper lacks a comparison with baselines in this aspect. Particularly, the proposed method requires the external multilingual model, which is expected to affect the inference efficiency. - I know asking for new results might be unrealistic at this stage, but I am curious what would happen if you only used two-stage training for the LLM without including an external multilingual model. Technical Quality: 3 Clarity: 3 Questions for Authors: See above. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
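The integration scheme summarized in this review — embedding the multilingual model into the LLM via a trained mapping — can be sketched minimally in NumPy. All dimensions, names, and the plain linear mapping here are illustrative assumptions, not the paper's exact design: the idea is that a trainable mapping layer projects the multilingual encoder's hidden states into the LLM's embedding space, and the result is prepended to the LLM's own token embeddings.

```python
import numpy as np

rng = np.random.default_rng(0)
enc_dim, llm_dim = 4, 8                             # hypothetical hidden sizes
W = rng.standard_normal((enc_dim, llm_dim)) * 0.02  # trainable mapping layer

def merge_inputs(enc_states, llm_embeds):
    """Project multilingual encoder states into the LLM embedding
    space and prepend them to the LLM's own input embeddings."""
    mapped = enc_states @ W                          # (src_len, llm_dim)
    return np.concatenate([mapped, llm_embeds], axis=0)

enc_states = rng.standard_normal((5, enc_dim))      # multilingual encoder output
llm_embeds = rng.standard_normal((7, llm_dim))      # LLM token embeddings
merged = merge_inputs(enc_states, llm_embeds)       # (12, llm_dim), fed to the LLM
```

In the two-step scheme, roughly, the first stage would train W on translation data so the mapped states align with the LLM's input space, and the second stage would train on task data so the LLM uses both input streams together.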
Rebuttal 1: Rebuttal: Thank you for the valuable reviews. We have added some experiments and will include them in the next version of the paper. **W1. In Section 3.2, the authors mention that they use translation data and query translation task data generated from public translation models for the two-stage training; while in lines 182-192, it shows that they also use several English task datasets, such as MetaMathQA. It is unclear how they use these data.** - For all models, we first use the English task datasets to fully fine-tune the LLM in order to obtain reasoning capabilities, and then use the other data to train. - Taking the mathematical reasoning task as an example, both our model and all baselines use MetaMath-Llama (denoted as MonoReason in our paper) as the base model; MetaMath-Llama is obtained by fully fine-tuning Llama on the MetaMathQA dataset. **W2. Although the author mentioned that only a small number of parameters are trained during two-stage training, the paper lacks a comparison with baselines in this aspect. Particularly, the proposed method requires the external multilingual model, which is expected to affect the inference efficiency.** - **Compared with encoding the input query in parallel, serial generation takes much longer.** - As reported in the following table, compared with the single-LLM model QAlign under batch size 1, the average inference speed of **MindMerger is even faster than QAlign**: MindMerger spends an average of 3.65s inferring a sample while QAlign uses 3.81s.

| Inference time per sample (s) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|-------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
| QAlign | 3.77 | 4.21 | 3.57 | 4.09 | 4.13 | 3.84 | 3.61 | 3.62 | 3.79 | 3.48 | 3.81 |
| MindMerger | 3.88 | 3.57 | 3.64 | 3.53 | 3.71 | 3.49 | 3.47 | 3.94 | 3.74 | 3.55 | 3.65 |

- This phenomenon arises because the text generated by QAlign is slightly longer than that of MindMerger, as reported below:

| The average length of generated text (token) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|----------------------------------------------|---------|---------|---------|---------|---------|---------|---------|---------|---------|---------|----------|
| QAlign | 126.30 | 139.94 | 118.34 | 135.86 | 135.94 | 126.52 | 120.20 | 119.76 | 128.24 | 117.60 | 126.87 |
| MindMerger | 122.32 | 121.42 | 121.94 | 121.78 | 126.74 | 117.56 | 116.30 | 133.22 | 125.84 | 120.34 | 122.75 |

- As for the cost of the external multilingual model, it is very insignificant. As reported in the table below, due to its small number of parameters, the time the external model uses to encode the query is only half that of QAlign (0.02s vs 0.04s).

| Encoding time per sample (s) | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Avg. |
|--------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
| QAlign | 0.05 | 0.04 | 0.04 | 0.03 | 0.03 | 0.03 | 0.04 | 0.03 | 0.03 | 0.03 | 0.04 |
| Multilingual Encoder + Mapping Layer | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 | 0.02 |

--- Rebuttal 2: Title: (2/2) Response to Reviewer VBLh Comment: **W3. 
I know asking for new results might be unrealistic at this stage, but I am curious what would happen if you only used two-stage training for LLM without including an external multilingual model.** - There are three types of training data in the paper: **(a)** translation data, **(b)** English task data, and **(c)** query translation task data. - We experimented with four dataset-usage settings (b, c, b->c, c->b) to fully fine-tune Llama, and compared the influence of using translation data for first-stage training (a->b, a->c, a->b->c, a->c->b). - The experimental results are shown in the table below. Regardless of how the training data is used, MindMerger's average accuracy is at least 6.2 points higher than that of the fully fine-tuned method. - Using translation data as the first-stage training data helps improve performance in 3 out of 4 fully fine-tuning settings (a->c, a->b, and a->c->b). - The gain from translation data for MindMerger is significantly greater than that for fully fine-tuning Llama. With the help of translation data, MindMerger improved the average accuracy by 5.5%, while the improvements for fully fine-tuning Llama were 0.8%, 3.8% and 1.0% respectively.

| | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Lrl. | Hrl. | Avg. |
|--------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------|
| c | 33.2 | 36.8 | 34.8 | 36.0 | 37.2 | 39.6 | 39.6 | 38.4 | 37.6 | 40.4 | 34.9 | 38.4 | 37.4 |
| a->c | 36.4 | 36.0 | 37.2 | 31.6 | 34.4 | 42.4 | 40.4 | 40.4 | 42.4 | 41.2 | 36.5 | 39.0 | 38.2 |
| b (MonoReason) | 6.8 | 7.2 | 6.8 | 36.4 | 38.4 | 55.2 | 54.4 | 52.0 | 57.2 | 68.8 | 6.9 | 51.8 | 38.3 |
| a->b | 21.2 | 21.2 | 24.4 | 38.8 | 34.4 | 52.8 | 56.8 | 52.4 | 57.6 | 61.6 | 22.3 | 50.6 | 42.1 |
| b->c (MultiReason) | 33.2 | 40.0 | 42.0 | 42.0 | 42.0 | 45.2 | 44.8 | 45.2 | 48.0 | 52.0 | 38.4 | 45.6 | 43.4 |
| a->b->c | 33.6 | 40.4 | 40.4 | 41.2 | 37.6 | 38.4 | 41.2 | 44.4 | 41.6 | 47.6 | 38.1 | 41.7 | 40.6 |
| c->b | 40.8 | 46.4 | 45.2 | 44.8 | 50.4 | 52.8 | 52.8 | 52.0 | 56.4 | 59.6 | 44.1 | 52.7 | 50.1 |
| a->c->b | 40.8 | 50.0 | 50.4 | 47.6 | 52.8 | 54.8 | 54.0 | 56.0 | 50.0 | 54.8 | 47.1 | 52.9 | 51.1 |

Thanks again to the reviewer for the experimental suggestions. We believe that these additional experiments will help further enhance the impact of our work. Best wishes. --- Rebuttal 3: Title: Reviewer Response to Authors' Rebuttal Comment: Dear authors, Thanks for your response and update with the results! I decided to raise the soundness score.
Summary: The paper proposes a way to improve multilingual reasoning in LLMs in their own native languages without relying on pivoting methods such as translating to English. The method rests on the hypothesis that LLMs have built-in knowledge and reasoning abilities in lower-resource languages, not just in common languages like English. The method plugs external multilingual models into the LLM as additional embedding layers and introduces a training scheme to build that new knowledge in. Experiments on MGSM show better accuracy across languages. Strengths: * The method is fairly novel in how it incorporates multilingual encoders as additional embedding layers to augment the existing embedding layers of the LLM. * The experiments cover many languages comprehensively, including many low-resource languages, and include ablation studies, which is good. Weaknesses: * The method is incomplete and ambiguous on the generation side. It makes sense that injecting multilingual embeddings at the bottom for the input may help the LLM understand the context better, but that does not necessarily make the LLM **generate** better in those native languages, especially if the LLM is weak at producing content in native languages. * It is no surprise that the tasks presented in the paper are all "understanding" tasks where the generation workload is light and trivial. Other "generation" tasks, such as summarization, writing, and translation, may not fare well with this method. I would highlight it if the authors could show good results for the method on these tasks. * Missing baseline comparisons: https://arxiv.org/abs/2306.11372 , https://aclanthology.org/2023.findings-emnlp.826.pdf * Confusing word choices, such as "mapping stage", "merging", and "augmentation stage". These words often lead to a different understanding than what is described in the paper. * "Query translation task" and "query translation data" and many parts of the methodology are not explained clearly. 
It would be better if there are explicit visual examples or diagrams. Technical Quality: 3 Clarity: 2 Questions for Authors: NA Confidence: 5 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper discussed limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
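To make the architecture the review summarizes concrete, here is a rough numpy sketch of the core idea: an external multilingual encoder's output is mapped into the LLM's embedding space and concatenated with the LLM's own token embeddings before entering the transformer layers. The dimensions, stand-in functions, and single-matrix mapping layer are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
D_ENC, D_LLM = 768, 4096  # hypothetical hidden sizes

# Frozen stand-ins: the multilingual encoder and the LLM's own
# embedding layer each turn a token sequence into a vector sequence.
def encode_multilingual(tokens):
    return rng.standard_normal((len(tokens), D_ENC))

def llm_embed(tokens):
    return rng.standard_normal((len(tokens), D_LLM))

# Trainable mapping layer: projects encoder space into the LLM's space.
W_map = rng.standard_normal((D_ENC, D_LLM)) * 0.01

def build_llm_input(query_tokens):
    """Concatenate the mapped multilingual representation with the LLM's
    native embeddings along the sequence axis; the result replaces the
    plain token embeddings as input to the (frozen) LLM."""
    mapped = encode_multilingual(query_tokens) @ W_map   # (m, D_LLM)
    native = llm_embed(query_tokens)                     # (n, D_LLM)
    return np.concatenate([mapped, native], axis=0)      # (m + n, D_LLM)

print(build_llm_input(["What", "is", "2", "+", "2", "?"]).shape)  # (12, 4096)
```

In this reading, the LLM sees the same query twice, in two representation spaces, which is exactly the "duplicated information" point the next reviewer raises.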
Rebuttal 1: Rebuttal: Thank you for your reviews. The main concern is the performance of our model on generation tasks. We believe that adding experiments on generation tasks will help further expand the scope of our work. We are grateful for the feedback on writing improvements, and we will revise accordingly in the next version. The specific responses are as follows: **W1. The method is incomplete and ambiguous on the generation side. It makes sense that injecting multilingual embeddings at the bottom for the input may help the LLM understand the context better, but that does not necessarily make the LLM generate better in those native languages, especially if the LLM is weak at producing content in native languages.** - Our model can be a key step towards improving the LLM's multilingual generation capabilities, since **language understanding is the basis of language generation**. - To address this concern, we added an experimental comparison on the translation task based on the Flores-101 dataset. We implemented four settings: X-En, En-X, X-Zh, and Zh-X. In each setting, we compare fully fine-tuned Llama2 (Llama-SFT) with MindMerger using 100 K and 200 K training samples per language, respectively. - The experimental results are shown in the following four tables. **MindMerger consistently outperforms Llama-SFT in translation quality** under all comparison settings. - Considering that the top of MindMerger is frozen and not trained, the improvement in generation quality comes from the enhancement of query understanding, which may be a valuable insight for future research on multilingual generation tasks. | X-En | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | Lrl. | Hrl. | Avg. 
| |----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | Llama-SFT (100 K) | 14.8 | 9.6 | 22.7 | 16.4 | 14.6 | 28.7 | 26.3 | 25.0 | 20.7 | 15.7 | 22.0 | 19.9 | | MindMerger (100 K) | 18.6 | 16.7 | 32.9 | 23.3 | 22.2 | 37.4 | 32.6 | 31.7 | 25.0 | 22.7 | 28.7 | 26.7 | | Llama-SFT (200 K) | 16.0 | 18.8 | 27.9 | 20.2 | 23.6 | 38.1 | 39.3 | 30.4 | 28.7 | 20.9 | 30.1 | 27.0 | | MindMerger (200 K) | 27.1 | 24.5 | 40.2 | 26.0 | 28.9 | 42.5 | 42.4 | 34.1 | 30.1 | 30.6 | 34.0 | 32.9 | | En-X | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | Lrl. | Hrl. | Avg. | |----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | Llama-SFT | 17.1 | 8.2 | 20.8 | 13.1 | 6.2 | 18.9 | 19.7 | 12.6 | 16.7 | 15.4 | 14.5 | 14.8 | | MindMerger | 20.2 | 11.9 | 30.1 | 19.2 | 12.4 | 27.6 | 32.2 | 23.2 | 23.7 | 20.7 | 23.1 | 22.3 | | Llama-SFT (200 K) | 23.9 | 15.6 | 26.6 | 18.9 | 16.7 | 25.1 | 35.6 | 21.2 | 24.3 | 22.0 | 23.6 | 23.1 | | MindMerger (200 K) | 24.4 | 17.4 | 32.6 | 23.0 | 20.7 | 31.6 | 40.8 | 26.6 | 26.1 | 24.8 | 28.1 | 27.0 | | X-Zh | Bn | Th | Sw | Ja | De | Fr | Ru | Es | En | Lrl. | Hrl. | Avg. | |----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | Llama-SFT (100 K) | 7.0 | 5.7 | 6.9 | 12.9 | 15.7 | 13.5 | 14.3 | 11.6 | 16.3 | 6.5 | 14.1 | 11.5 | | MindMerger (100 K) | 10.1 | 8.9 | 10.3 | 16.0 | 18.7 | 18.0 | 17.4 | 15.8 | 21.2 | 9.8 | 17.9 | 15.2 | | Llama-SFT (200 K) | 8.5 | 7.5 | 8.2 | 13.2 | 14.7 | 14.4 | 13.6 | 11.1 | 13.1 | 8.1 | 13.4 | 11.6 | | MindMerger (200 K) | 11.8 | 11.7 | 12.0 | 17.3 | 20.5 | 20.0 | 19.0 | 17.0 | 23.9 | 11.8 | 19.6 | 17.0 | | Zh-X | Bn | Th | Sw | Ja | De | Fr | Ru | Es | En | Lrl. | Hrl. | Avg. 
| |----------------------|------|------|------|------|------|-------|-------|-------|-------|-------|-------|--------| | Llama-SFT (100 K) | 9.9 | 3.6 | 7.3 | 12.9 | 11.5 | 14.7 | 9.6 | 11.0 | 12.7 | 6.9 | 12.1 | 10.4 | | MindMerger (100 K) | 14.1 | 7.8 | 13.9 | 19.4 | 18.8 | 24.8 | 17.5 | 17.7 | 25.9 | 11.9 | 20.7 | 17.8 | | Llama-SFT (200 K) | 14.2 | 7.8 | 11.3 | 16.5 | 17.7 | 20.5 | 14.1 | 15.7 | 18.6 | 11.1 | 17.2 | 15.2 | | MindMerger (200 K) | 16.2 | 11.4 | 16.2 | 20.7 | 20.2 | 26.6 | 18.7 | 18.3 | 27.2 | 14.6 | 22.0 | 19.5 | --- Rebuttal 2: Title: (2/3) Response to Reviewer q66P Comment: **W2. Missing baseline comparison XLT.** - **We have discussed XLT in related work, lines 80-81**; it is a prompt-based approach that brings limited improvements over open-source models such as Llama through carefully crafted prompts. - Our model and the baselines we compared are all supervised fine-tuning methods. Although the work on XLT is inspiring, prompt-based methods are not in the same category of works as our model. - We added a comparison between XLT and our model. The experimental results are shown in the following two tables. Under the same base model, MindMerger has a huge lead over XLT. - Due to page limitations, we will consider adding the experimental results of XLT to the appendix. | | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Lrl | Hrl | Avg. | |---------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | XLT (Llama2-chat) | 3.2 | 5.6 | 4.0 | 9.2 | 12.8 | 16.0 | 17.6 | 14.0 | 19.6 | 19.6 | 4.3 | 15.5 | 12.2 | | MindMerger (Llama2) | 50.4 | 52.8 | 57.2 | 54.4 | 53.6 | 61.2 | 57.6 | 60.8 | 58.4 | 66.8 | 53.5 | 59.0 | 57.3 | | | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Lrl | Hrl | Avg. 
| |-----------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | XLT (Llama3-Instruct) | 27.6 | 53.6 | 32.4 | 52.0 | 58.0 | 53.6 | 39.6 | 50.8 | 53.2 | 52.8 | 37.9 | 51.4 | 47.4 | | MindMerger (Llama3) | 64.4 | 65.6 | 66.4 | 62.0 | 68.0 | 71.6 | 72.4 | 73.2 | 72.8 | 75.2 | 65.5 | 70.7 | 69.2 | --- Rebuttal 3: Title: (3/3) Response to Reviewer q66P Comment: **W3. Confusing word choices, such as "mapping stage", "merging" and "augmentation stage". These words often lead to different understanding rather than what are described in the paper.** - Thank you for your feedback, we will fine-tune the wording and description in the next version based on your comments. **W4. "Query translation task" and "query translation data" and many parts of the methodology are not explained clearly. It would be better if there are explicit visual examples or diagrams.** - Thank you for your feedback again. We followed the settings of QAlign to translate the query from English to non-English automatically as training data. Below are three examples of query translation task data: | | Augmentation stage (Query translation task data) | |-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Math | **Input (Zh):** 伯特每天都填报纸上的每日填字游戏。他每两周就用完一支铅笔。平均而言,他用完一支铅笔需要1050个字。每个填字游戏平均有多少个字? <br> **Input (En):** Bert fills out the daily crossword puzzle in the newspaper every day. 
He uses a pencil to fill out the puzzles every two weeks. On average, it takes him 1050 words to use up a pencil. How many words are in each crossword puzzle on average? <br> **Training Target:** If Bert uses up a pencil to fill out the puzzles every two weeks and it takes him 1050 words to use up a pencil ... 1050/14 = 75 words in each daily crossword puzzle on average. #### The answer is: 75 | | Commonsense | **Input (Zh):** 酒后驾车的人可能会被指控什么? (A)惩罚 (B)逮捕 (C)车祸 (D)胡言乱语 (E)酒后驾驶 <br> **Input (En):** What is someone operating a vehicle likely to be accused of after becoming inebriated? (A) punish (B) arrest (C) automobile accidents (D) talking nonsense (E) drunk driving <br> **Training Target:** E | | NLI | **Input (Zh):** Premise: 她不太明白。Hypothesis: 事实上,她没有理解 <br> Input (En): Premise: She doesn’t really understand. Hypothesis: Actually, she doesn’t get it. <br> **Output:** Entailment | If you have any questions about our work, we look forward to discussing with you and, if possible, hope that you can improve our score appropriately. Best wishes. --- Rebuttal 4: Comment: Dear reviewer -- could you see whether the author response addressed your concern, especially about your comment on how their method can improve *generation* of LLMs? they provided new experiments, as well as a comparison with baseline you proposed. If you can modify the scores based on the response, that'd be very helpful! --- Rebuttal Comment 4.1: Title: Thanks for the rebuttal Comment: Thanks for the author response, I pump up the scores for the effort!
Summary: The paper suggests incorporating an embedding block using external multilingual models to improve the models' understanding. Additionally, comprehensive experiments are conducted to demonstrate its efficacy. Strengths: 1. The paper proposes a straightforward and easily implementable method to enhance models' multilingual capability. 2. Experimental results validate its effectiveness. Weaknesses: The paper lacks a clear understanding of why the method works. 1. To my understanding, the Embedding Layer of the model merely converts the query from text space to the token/embedding space. This transformation may not be interpreted as "understanding the input" since the information hasn't been operated on and combined by the attention layers. However, $X$, after passing through the multilingual model, has already been "understood" by that model. Therefore, it does not make much sense to concatenate these two embeddings. 2. As $\tilde{X}$ contains the information of query q, $T$ includes duplicated information. This may be why the method can surpass baselines such as QAlign and Translate-En. It would be more convincing if baselines also input duplicated information. 3. This is further suggested by Figure 4(b), which illustrates that the mapping layer tends to obtain a unified representation for different languages, and this is the same as QAlign. Therefore, it is confusing why this method can outperform QAlign. 4. The paper uses Llama2 as the base model. However, its multilingual performance is far from sota. By the time of submission, I believe Mistral, Gemma, and Llama3 had been released, whose multilingual performance is much better. Technical Quality: 2 Clarity: 3 Questions for Authors: The details of the two training stages lack clarity, specifically regarding the utilization of translation and parallel data, as well as the precise training tasks involved. 
Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors suggest a limitation on the integration of external multilingual models with the model's inherent understanding capabilities, a concern partly explored through the ablation study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for taking the time to provide reviews. However, we would like to clarify some misunderstandings as follows: **W1. To my understanding, the Embedding Layer of the model merely converts the query from text space to the token/embedding space. This transformation may not be interpreted as "understanding the input" since the information hasn't been operated on and combined by the attention layers. However, $X$, after passing through the multilingual model, has already been "understood" by that model. Therefore, it does not make much sense to concatenate these two embeddings.** - Our model is based on the finding that providing the LLM with inputs from two representation spaces can improve the LLM's multilingual reasoning capability, where one representation comes from the native space of the LLM's embedding layer, and the other is the mapping of multilingual queries to a unified space provided by the multilingual model. - The two different types of representations are the key to enabling better understanding of the query in subsequent layers of the LLM. **W2. As $\tilde{X}$ contains the information of query q, $T$ includes duplicated information. This may be why the method can surpass baselines such as QAlign and Translate-En. It would be more convincing if baselines also input duplicated information.** - We duplicated the input for all baselines during training and inference. As shown in the table below, **the performance of 4 out of 6 baselines decreased with duplicated input**. - Note that our model outperforms all baselines by at least 6.7% and 8.0% across all languages and low-resource languages, respectively, which far exceeds the improvement any baseline gains from duplicated input. Therefore, duplicated input is not the mechanism behind our model's improved performance. | MetaMath-Llama-7B, MGSM | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Lrl | Hrl | Avg. 
| |-------------------------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | MonoReason | 6.8 | 7.2 | 6.8 | 36.4 | 38.4 | 55.2 | 54.4 | 52.0 | 57.2 | 68.8 | 6.9 | 51.8 | 38.3 | | MonoReason (duplicated input) | 8.4 | 7.2 | 4.8 | 33.2 | 42.8 | 53.6 | 53.6 | 53.6 | 52.0 | 63.2 | 6.8 | 50.3 | 37.2 | | MultiReason-Lora | 29.6 | 35.2 | 28.0 | 52.0 | 54.8 | 59.6 | 58.4 | 62.4 | 59.6 | 64.8 | 30.9 | 58.8 | 50.4 | | MultiReason-Lora (duplicated input) | 28.8 | 43.2 | 33.6 | 50.4 | 54.8 | 57.6 | 57.6 | 61.2 | 62.0 | 62.8 | 35.2 | 58.1 | 51.2 | | MultiReason-SFT | 33.2 | 40.0 | 42.0 | 42.0 | 42.0 | 45.2 | 44.8 | 45.2 | 48.0 | 52.0 | 38.4 | 45.6 | 43.4 | | MultiReason-SFT (duplicated input) | 31.2 | 41.2 | 39.2 | 42.0 | 47.6 | 49.6 | 52.0 | 49.6 | 48.8 | 58.8 | 46.0 | 49.8 | 46.0 | | QAlign | 32.4 | 39.6 | 40.4 | 44.0 | 48.4 | 54.8 | 56.8 | 52.4 | 59.6 | **68.0** | 37.5 | 54.9 | 49.6 | | QAlign (duplicated input) | 30.8 | 38.0 | 40.4 | 44.4 | 46.4 | 55.6 | 55.2 | 56.8 | 57.6 | 65.2 | 49.0 | 54.5 | 49.0 | | LangBridge | 42.8 | 50.4 | 43.2 | 40.0 | 45.2 | 56.4 | 50.8 | 52.4 | 58.0 | 63.2 | 45.5 | 52.3 | 50.2 | | LangBridge (duplicated input) | 34.0 | 42.8 | 42.0 | 33.2 | 35.6 | 53.6 | 52.0 | 51.2 | 57.2 | 61.2 | 39.6 | 49.1 | 46.3 | | Translate-En | 48.4 | 37.6 | 37.6 | 49.2 | 46.8 | 60.4 | 56.4 | 47.6 | 59.6 | 65.5 | 41.2 | 55.1 | 50.9 | | Translate-En (duplicated input) | 47.6 | 33.6 | 38.4 | 48.4 | 48.8 | 56.0 | 49.2 | 44.8 | 54.8 | 63.2 | 39.9 | 52.2 | 48.5 | | **MindMerger** | **52.0** | **53.4** | **54.0** | **59.0** | **61.7** | **64.1** | **64.0** | **63.3** | **65.0** | 67.7 | **53.1** | **63.5** | **60.4** | --- Rebuttal 2: Title: (2/3) Response to Reviewer iF4J Comment: **W3. This is further proved by Figure (4b), which illustrates that the mapping layer tends to obtain a unified representation for different languages and this is the same as QAlign. 
Therefore, it is confusing why this method can outperform QAlign.** - This reflects that **our model has better capability to obtain unified representation than QAlign**. - In Table 9 of the paper, we calculated the cosine similarity and top-1 retrieval recall between representations of the same query expressed in different languages to quantify the model’s capability to obtain a unified representation. The results show that our model significantly improves LLM’s capability to obtain a unified representation. - We further compare the last layer pooling representation of QAlign and our method in the following table. It can be observed that although QAlign slightly outperforms MonoReason, the score of cosine similarity and recall@1 are still far behind our model, which proves that our model is better able to obtain unified representations. | Cosine Similarity | Bn->En | Th->En | Sw->En | Ja->En | Zh->En | De->En | Fr->En | Ru->En | Es->En | Lrl | Hrl | Avg. | |-------------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|-------|-------|--------| | MonoReason | 0.08 | 0.07 | -0.08 | 0.11 | 0.13 | 0.04 | 0.12 | 0.05 | 0.09 | 0.02 | 0.09 | 0.07 | | QAlign | 0.07 | 0.04 | -0.06 | 0.11 | 0.14 | 0.10 | 0.18 | 0.06 | 0.15 | 0.02 | 0.12 | 0.09 | | MindMerger | **0.32** | **0.32** | **0.46** | **0.46** | **0.46** | **0.55** | **0.62** | **0.52** | **0.56** | **0.37** | **0.53** | **0.48** | | Recall@1 (%) | Bn->En | Th->En | Sw->En | Ja->En | Zh->En | De->En | Fr->En | Ru->En | Es->En | Lrl | Hrl | Avg. 
| |--------------|--------|--------|--------|--------|--------|--------|--------|--------|--------|-------|-------|--------| | MonoReason | 0.4 | 1.8 | 0.2 | 5.8 | 12.6 | 11.5 | 47.3 | 44.9 | 27.0 | 0.8 | 24.9 | 16.8 | | QAlign | 2.8 | 4.9 | 0.5 | 12.3 | 26.1 | 21.2 | 61.7 | 37.3 | 41.0 | 2.7 | 33.3 | 23.1 | | MindMerger | **52.8** | **76.0** | **64.3** | **91.6** | **96.4** | **98.0** | **99.2** | **98.1** | **88.9** | **64.4** | **95.4** | **85.0** | **W4. The paper uses Llama2 as the base model. However, its multilingual performance is far from sota. Until submission, I believe Mistral, Gemma, and Llama3 have been released, whose multilingual performance is much better.** - In fact, **we have compared Mistral as well as the larger MetaMath-Llama-13B in Table 5**. The experimental results show that our method is also consistently effective on LLM with stronger multilingual capabilities. - We further compare our method with baselines based on Llama3 in the following table. Our method still outperforms all baselines. | Llama3, MGSM | Bn | Th | Sw | Ja | Zh | De | Fr | Ru | Es | En | Lrl | Hrl | Avg. 
| |------------------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|-------|--------| | MonoReason | 40.4 | 53.2 | 32.0 | 53.2 | 58.0 | 64.4 | 67.2 | 67.6 | 69.2 | 76.4 | 41.9 | 65.1 | 58.2 | | MonoReason | 41.2 | 55.6 | 29.2 | 50.0 | 56.8 | 62.4 | 64.4 | 66.4 | 66.0 | 74.8 | 42.0 | 63.0 | 56.7 | | MultiReason-Lora | 52.0 | 62.0 | 52.4 | 58.8 | 64.8 | 66.0 | 68.8 | 72.4 | **74.0** | 75.2 | 55.5 | 68.6 | 64.6 | | MultiReason-SFT | 39.2 | 47.6 | 48.0 | 48.0 | 46.4 | 52.8 | 48.0 | 47.2 | 52.8 | 57.2 | 44.9 | 50.3 | 48.7 | | QAlign | 50.0 | 59.6 | 56.0 | 54.0 | 58.4 | 62.4 | 63.6 | 69.6 | 70.8 | 73.2 | 55.2 | 64.6 | 61.8 | | LangBridge | 49.2 | 51.6 | 56.4 | 48.0 | 54.8 | 69.6 | 68.4 | 67.6 | 69.6 | **78.0** | 52.4 | 65.1 | 61.3 | | Translate-En | 52.0 | 41.6 | 58.0 | 54.8 | 53.2 | 63.6 | 60.4 | 55.6 | 67.6 | 76.4 | 50.5 | 61.7 | 58.3 | | **MindMerger** | **64.4** | **65.6** | **66.4** | **62.0** | **68.0** | **71.6** | **72.4** | **73.2** | 72.8 | 75.2 | **65.5** | **70.7** | **69.2** | --- Rebuttal 3: Title: (3/3) Response to Reviewer iF4J Comment: **Q1. The details of the two training stages lack clarity, specifically regarding the utilization of translation and parallel data, as well as the precise training tasks involved.** - Thank you for your feedback; we will improve the description of the training data in the next version of our paper. - In the first stage (mapping stage), all tasks, including Math, Commonsense, and NLI, are trained using non-English to English translation data. The following is an example: | Mapping stage (Translation data) | |--------------------------------------------------------------------------------------------------------------------------------------------| | **Input (Zh):** 文档中没有元素属于该组时,该名称会被作为根结点显示在结构树上。 | | **Training Target:** It will be shown in the structure tree as a top node when there are no elements belonging to this group in the document. 
| - In the second stage (augmentation stage), the input query of each task is translated into non-English to construct multilingual training samples, both English and non-English samples are used to train our model. Below are examples of training samples for three tasks: | | Augmentation stage (Query translation task data) | |-------------|------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------| | Math | **Input (Zh):** 伯特每天都填报纸上的每日填字游戏。他每两周就用完一支铅笔。平均而言,他用完一支铅笔需要1050个字。每个填字游戏平均有多少个字? <br> **Input (En):** Bert fills out the daily crossword puzzle in the newspaper every day. He uses a pencil to fill out the puzzles every two weeks. On average, it takes him 1050 words to use up a pencil. How many words are in each crossword puzzle on average? <br> **Training Target:** If Bert uses up a pencil to fill out the puzzles every two weeks and it takes him 1050 words to use up a pencil ... 1050/14 = 75 words in each daily crossword puzzle on average. #### The answer is: 75 | | Commonsense | **Input (Zh):** 酒后驾车的人可能会被指控什么? (A)惩罚 (B)逮捕 (C)车祸 (D)胡言乱语 (E)酒后驾驶 <br> **Input (En):** What is someone operating a vehicle likely to be accused of after becoming inebriated? (A) punish (B) arrest (C) automobile accidents (D) talking nonsense (E) drunk driving <br> **Training Target:** E | | NLI | **Input (Zh):** Premise: 她不太明白。Hypothesis: 事实上,她没有理解 <br> Input (En): Premise: She doesn’t really understand. Hypothesis: Actually, she doesn’t get it. 
<br> **Output:** Entailment | Thank you again for your comments. We hope our responses resolve your concerns. We hope you can improve our score appropriately, and we are willing to have more discussions with you. Best wishes. --- Rebuttal 4: Comment: Dear reviewer, could you check the authors' new experimental results and see whether they address your concerns, and update the score accordingly if they do? --- Rebuttal 5: Comment: Thanks so much for such a detailed reply addressing my concerns in the rebuttal. They addressed my concerns, and I have raised my score accordingly.
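The unified-representation measurement used in the W3 response above (mean cosine similarity between matched non-English/English query representations, plus top-1 retrieval recall) amounts to a few lines of numpy. The function below is an illustrative reconstruction, with `X` and `E` standing in for hypothetical pooled last-layer representations.

```python
import numpy as np

def unified_representation_metrics(X, E):
    """X, E: (n, d) pooled representations of the same n queries in a
    non-English language and in English.  Returns the mean cosine
    similarity of matched pairs and the top-1 retrieval recall (the
    fraction of rows X_i whose nearest neighbor among E is E_i)."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    En = E / np.linalg.norm(E, axis=1, keepdims=True)
    sims = Xn @ En.T                                       # (n, n) cosines
    mean_cos = float(np.diag(sims).mean())
    recall_at_1 = float((sims.argmax(axis=1) == np.arange(len(X))).mean())
    return mean_cos, recall_at_1
```

A perfectly language-agnostic encoder would score a cosine similarity of 1.0 and a recall@1 of 100%, the regime the MindMerger rows approach for high-resource languages in the tables above.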
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Coevolving with the Other You: Fine-Tuning LLM with Sequential Cooperative Multi-Agent Reinforcement Learning
Accept (poster)
Summary: The paper introduces CORY, a novel reinforcement learning (RL) technique for fine-tuning language models that casts the trained language model into a multi-agent framework by duplicating it at initialization and then using the two copies to improve one another. Essentially, a "pioneer" gives the first guess, which is used as a reference by an "observer" that gives the final answer; both agents are then rewarded for both of their outputs in a cooperative fashion and trained independently with an underlying RL algorithm, here PPO. The authors claim that CORY improves on single-agent PPO in terms of downstream performance, training stability, and robustness to distribution collapse. Strengths: The paper is easy to follow and includes enough details about the experiments. Casting the problem into a multi-agent framework to change the optimization dynamics is original and will likely influence other work to build on it. The observations made by the authors about the multi-agent framework changing the training dynamics to let the observer more easily optimize for the reward while maintaining a low KL are interesting. The method can potentially be applied on top of any RL algorithm, either with independent learners as shown in the paper or by introducing centralized training, critic network sharing, etc. Weaknesses: Overall, I find the contribution of the paper to be original and likely to have impact; however, I also find a number of major issues in the work that make it not ready for publication at this venue. Unclear contextualization and missing baselines: - One main contribution of the paper is that it finds a policy that has a better Pareto frontier while still using the sum of task reward and KL divergence, but the KL divergence is an artificial reward specific to current RL fine-tuning methods in language modeling. 
If the paper's method is to be highlighted from a language modeling perspective, then I would expect it to compare to methods like [1], which also claim to obtain a better Pareto frontier. More than that, I would expect the method to show its benefits in downstream tasks, for example with AlpacaFarm [2], as although a better frontier could be seen as a proxy for robustness to model collapse, it isn't clear whether it translates to better language modeling capabilities. Otherwise, if the method is to be highlighted for general multi-objective RL fine-tuning tasks, then I would expect a comparison to multi-objective RL methods. - (minor) The method can also be seen as a self-improvement method, so the paper would benefit from discussing and contrasting methods from the same family. An insufficient number of benchmarks: - The authors claim to evaluate their method "across diverse reward function landscapes, encompassing both subjective and objective categories", but only one dataset per category is tested. Judging by the scope of the claims the authors make and that this is the only form of evidence provided, I would expect at least 3 or 5 datasets per category. Hyperparameter choice: - The authors use a single set of hyperparameters, with no apparent justification. This is not enough to make claims about one method being better than another. - (minor) PPO with 1 epoch and 1 batch per epoch (as in the hyperparameter table) becomes just a policy gradient, so the authors seem to be effectively using a vanilla actor-critic with a GAE value estimator and (usually useless) value clipping. Presentation: - (minor) The wording of the claims made in the abstract and introduction ("policy optimality", "resilience to distribution collapse", and "robustness during training") is not reused in the experiments section, which makes it hard to connect evidence to claims. - (minor) The $\geq$ symbol in equation 6 between vector-valued objectives is undefined. 
One would expect a definition of Pareto dominance. [1] Noukhovitch, Michael, et al. "Language model alignment with elastic reset." Advances in Neural Information Processing Systems 36 (2024). [2] Dubois, Yann, et al. "AlpacaFarm: A simulation framework for methods that learn from human feedback." Advances in Neural Information Processing Systems 36 (2024). Technical Quality: 2 Clarity: 3 Questions for Authors: How did the authors select the hyperparameters for their methods? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Although in the appendix, the authors adequately state the limitations of their work and its broader impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
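For readers unfamiliar with the setup, the pioneer/observer interaction described in the summary above can be sketched in a few lines. `generate`, `task_reward`, and `ppo_update` are hypothetical stand-ins rather than the paper's actual components, and the real method additionally folds the per-agent KL penalty into the underlying PPO update.

```python
def cory_step(pioneer, observer, query, task_reward, ppo_update):
    """One cooperative rollout/update step, as sketched in the summary."""
    # 1. The pioneer answers the query alone.
    y1 = pioneer.generate(query)
    # 2. The observer sees the query plus the pioneer's answer as a
    #    reference, then produces the final answer.
    y2 = observer.generate(query + "\nReference: " + y1)
    # 3. Cooperative reward: each agent is credited with the combined
    #    reward of both outputs, then trained independently.
    r = task_reward(query, y1) + task_reward(query, y2)
    ppo_update(pioneer, query, y1, r)
    ppo_update(observer, query, y2, r)
    return y2
```

The shared reward is what makes the game cooperative: neither copy can improve its return without the pair's joint output improving.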
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and time spent! We're glad you found our idea original and likely to influence other work to build on it, our paper easy to follow, and the observations from the multi-objective RL perspective interesting. We would like to address your concerns below. ## **R4-1** Missing Baselines We appreciate your concern regarding the need for more convincing baselines. In response, we have added an additional baseline, Elastic Reset (ER). The original ER paper also tested on the IMDB dataset. We adopted the experimental setup from the paper with parameters set to decay=0.995, reset_freq=17, and eta=0.001, successfully replicating their results (ER-17). Moreover, we conducted comparisons with two other reset frequencies. The specific experimental results are as follows: | | Task-reward ↑ | KL divergence ↓ | |-----|---------------|-----------------| | PPO | 2.17 | 44.33 | | ER-30 | 2.65 | 32.15 | | ER-17 | 2.63 | 21.73 | | ER-5 | 0.53 | 0.32 | | CORY | **2.67** | **15.18** | Furthermore, due to the reset mechanism, the training curve of Elastic Reset (ER) exhibits a peak-like shape, unlike the stable behavior observed in our case. ## **R4-2** Insufficient Benchmarks Thank you for the reminder. We will amend the statement "across diverse reward function landscapes, encompassing both subjective and objective categories" in the revised manuscript. Regarding the benchmarks, we believe they are currently persuasive enough. Reviewer Syvc mentioned that "the experiments are adequate and effectively support the proposed method," and Reviewer F4Ro also noted that we have utilized modern and relevant benchmarks and models. However, we understand your concerns regarding the benchmarks. Therefore, we have included an additional, widely used dataset, Anthropic-HH. On this dataset, CORY shows a superior ability to balance task reward and KL divergence compared to all the baselines (PPO, Elastic Reset, REINFORCE). 
We hope this addresses your concerns. Training curves are in Figure 4 in the additional PDF. ## **R4-3** Self-improvement Method In the domain of LLM fine-tuning, there exist some self-improvement methods, such as SPIN [1] and Iterative DPO [2]. However, these approaches are offline algorithms trained in a supervised manner, where self-improvement stems from making more comprehensive use of the dataset. In contrast, RL, as an online algorithm, is optimized on data generated by the model itself. In the MARL field, the success of AlphaGo and OpenAI Five has validated the effectiveness of self-improvement in multi-agent systems, where diverse data are generated through competition/cooperation among multiple agents. The problem we are concerned with is how to make the RL fine-tuning of LLMs benefit from this self-improvement in a multi-agent system [3, 4]. Therefore, CORY introduces a completely different self-improvement mechanism into the fine-tuning of LLMs. [1] Chen, Zixiang, et al. "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models." Forty-first International Conference on Machine Learning. [2] Yuan, Weizhe, et al. "Self-Rewarding Language Models." Forty-first International Conference on Machine Learning. [3] A social network for AI. Nat Mach Intell 5, 1175 (2023). https://doi.org/10.1038/s42256-023-00769-4 [4] Duéñez-Guzmán, Edgar A., et al. "A social path to human-like artificial intelligence." Nature Machine Intelligence 5.11 (2023): 1181-1188. ## **R4-4** Hyperparameter Choice The parameter settings for fine-tuning GPT2 followed the default configuration in TRL for the IMDB dataset, while the fine-tuning of Llama2 primarily adhered to the guidelines established by StackLlama. In pursuit of a fair comparison, all parameters were carefully chosen to balance the stability and performance of PPO. 
A grid search was conducted over learning rates and eta coefficients within the ranges lr ∈ {1e-6, 1e-5, 1e-4} and eta ∈ {1e-3, 1e-2, 1e-1, 0.2, 0.3}, respectively, to select the parameters that yielded the most stable training for PPO. Given CORY's robustness to parameter variations (as evidenced by the results of adjusting learning rates and KL coefficients presented in Appendix E), the parameters for PPO, with the exception of the learning rate, were directly applied to CORY. On the GSM8K dataset, we adjusted the learning rate for CORY. Important parameters include lr and kl_coef, while parameters such as mini_batch and ppo_epoch are of relatively lesser importance.

## **R4-5** PPO Hyperparameter Concern

Yes, it is acknowledged that a strictly on-policy PPO is equivalent to A2C. However, the hyperparameter 'Epoch' in our hyperparameter table does not refer to PPO epochs. In our experiments, ppo_epochs was actually set to 4, which aligns with the standard configuration of PPO. We will update our parameter table to prevent similar misunderstandings.

## **R4-6** Feedback on the Presentation

Thanks for your suggestion! Regarding your first point, our paper already contains clear experiments related to "policy optimality" (Sections 4.1 & 4.2), "resilience to distribution collapse" (Sections 4.1 & 4.2), and "robustness during training" (Appendix E). We will explicitly add these descriptions to the experimental section of our paper. As for your second point, we will include a definition of the ">=" symbol in the revised manuscript.

---

Rebuttal Comment 1.1: Comment: I thank the authors for the clarifications. Most of my concerns have been addressed. I am increasing my score to 5 as I still believe the submission would benefit from better contextualization (should it be compared to multi-objective methods?) and more baselines (I appreciate the authors comparing to ER).

---

Reply to Comment 1.1.1: Comment: Thank you for your timely response!
We're glad to have addressed most of your concerns. We'd like to provide further clarification on multi-objective RL and baselines.

### 1. **Clarification on multi-objective reinforcement learning (MORL)**

From the perspective of RL, the RL fine-tuning of LLMs indeed constitutes a multi-objective (specifically, dual-objective) RL problem. Algorithms specifically designed for MORL can be categorized into two main types: multi-policy methods and single-policy methods [1]. Multi-policy methods train multiple policies, each corresponding to a single combination of objectives [2]. This approach equates to altering the weights of KL and task reward in Section 3.2 (constructing multiple combinations of objectives), training multiple policies through conventional RL methods, and selecting the more effective policies. However, as demonstrated in Fig. 2, the performance of all such policies (their Pareto frontier) is inferior to a policy trained by CORY under any weight. On the other hand, single-policy methods address multi-objective problems through reward scalarization [2]. In our paper, baselines such as PPO consider the weighted loss of both the task reward and KL during training, even dynamically adjusting the weight [3], which essentially represents a naive form of reward scalarization. To further alleviate your concerns, we conducted experiments on the IMDB dataset using GGF-PPO [4], which is one of the SOTA algorithms in the MORL domain. As demonstrated in the attached Table 1, GGF-PPO shows only a marginal performance improvement over PPO (a 3.2% increase in reward and a 13.3% reduction in KL). Therefore, the effectiveness of single-policy methods is also limited. Compared to CORY, they fundamentally cannot leverage the interaction and emergent intelligence of multiple LLMs to unleash the potential of multi-LLM fine-tuning. Overall, we agree that a discussion of MORL is necessary.
However, we have already implicitly compared both multi-policy and single-policy methods in our manuscript. Thank you; we will incorporate this discussion into the appendix to explicitly discuss MORL algorithms. The attached Table 1 is shown as follows:

| | Task-reward ↑ | KL divergence ↓ |
|-----|---------------|-----------------|
| PPO | 2.17 | 44.33 |
| **GGF-PPO** | 2.24 | 38.45 |
| ER-30 | 2.65 | 32.15 |
| ER-17 | 2.63 | 21.73 |
| ER-5 | 0.53 | 0.32 |
| CORY | **2.67** | **15.18** |

### 2. **Clarification on baselines**

More baselines are always better. However, we believe these two strong baselines, PPO and ER, should be sufficient to evaluate our method. PPO remains a strong baseline for RL fine-tuning of LLMs. Although some work has explored SAC [5], the experimental results show limited improvement compared to PPO. In terms of preventing distribution collapse during RL fine-tuning, there is work such as NLPO [6], but its effectiveness is not as strong as ER. Thank you once again for your time. Should you have any further questions, please let us know.

[1] Hayes C F, et al. "A practical guide to multi-objective reinforcement learning and planning." AAMAS (2022)
[2] Hwang M, et al. "Promptable behaviors: Personalizing multi-objective rewards from human preferences." CVPR (2024)
[3] TRL: https://github.com/huggingface/trl
[4] Siddique U, et al. "Learning fair policies in multi-objective (deep) reinforcement learning with average and discounted rewards." ICML (2020)
[5] Wen, Muning, et al. "Entropy-Regularized Token-Level Policy Optimization for Large Language Models." CoRR (2024).
[6] Ramamurthy, Rajkumar, et al. "Is Reinforcement Learning (Not) for Natural Language Processing: Benchmarks, Baselines, and Building Blocks for Natural Language Policy Optimization." ICLR (2023).
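To make the reward-scalarization point above concrete, here is a minimal sketch (our own illustration, not code from the paper or from TRL) of how the per-token KL penalty is typically folded into the task reward in single-policy RLHF-style fine-tuning; the function name and the default `eta` value are assumptions for illustration only:

```python
def scalarized_reward(task_reward, logprob_policy, logprob_ref, eta=0.05):
    """Naive single-policy scalarization of the dual objective:
    maximize task reward while staying close to the reference
    (pre-trained) policy. The KL term uses the usual per-token
    sample estimate log pi(a|s) - log pi_ref(a|s)."""
    kl_estimate = logprob_policy - logprob_ref
    return task_reward - eta * kl_estimate
```

A larger `eta` trades task reward for proximity to the reference policy, which is exactly the weight sweep that traces out the Pareto frontiers compared in the discussion above.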
Summary: This paper presents a Reinforcement Learning (RL) methodology to fine-tune LLMs based on Multi-Agent RL agents. One agent acts as the observer, and the other acts as the pioneer. They share knowledge through two interactions: transfer learning and role-switching. They named this methodology CORY (Coevolving with the Other You). They tested their framework with off-the-shelf LLMs and proved that learning benefits from using CORY. They compared against proximal policy optimization (PPO) in the multi-agent setup. Ablation studies showed the effect of model size, knowledge transfer, and role exchange. They provided theory related to RL in the context of this text-generation task and qualitative studies.

Strengths: The paper is easy to follow for a person knowledgeable in RL who understands LLMs at a basic level. Their methodology provides a viable method to improve LLM fine-tuning using RL. They used modern and relevant benchmarks not only on the side of RL (PPO) but also on the side of models.

Weaknesses: The Limitations section would've been a good addition to the main corpus of the paper as it addresses a big possible concern I imagined with your methodology: you require twice the amount of resources to train both agents. The average reader would enjoy this change, as some don't jump in appendices. Also, I think your claim about Moore's Law might be arguable, as current requirements for computing power are growing beyond expectation with the surge in LLMs.

Technical Quality: 3
Clarity: 4

Questions for Authors: I think I lack intuition about how the reward signals work after the N steps. Also, I am unclear about the influence of the KL divergence and how far it is relevant not to diverge from the reference policy. Could you please give me your take on that?

Confidence: 3
Soundness: 3
Presentation: 4
Contribution: 3

Limitations: The authors added a Limitations section in the appendix.
Flag For Ethics Review: ['Ethics review needed: Discrimination, bias, and fairness', 'Ethics review needed: Deception and harassment'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive remark and insightful feedback! We’re glad you found our method viable for improving LLM RL fine-tuning, our paper easy to follow, and our benchmarks modern and relevant. Below, we provide individual responses addressing your comments.

## **R3-1** Limitation Section

Thank you very much for your suggestion. We will treat the discussion regarding Moore's Law with care and will directly incorporate the discussion on computational power consumption into Section 6 of the revised manuscript.

## **R3-2** Reward Signal in RL Fine-Tuning

From a reinforcement learning perspective, the N steps to generate a sentence (assuming padding is included) constitute an episode. For PPO, at the end of the N steps, gradient ascent is used to optimize the token-level policy in the direction of maximizing the return for those N steps.

## **R3-3** KL in RL Fine-Tuning

This is a profound question. KL divergence measures the distance between the current LLM’s token-level policy and the pre-trained token-level policy. The pre-trained token-level policy is quite delicate. Deviating too far from it can destroy the language modeling ability of the pre-trained model, that is, cause distribution collapse. For instance, if the probability of the eos_token in the token-level policy is too low compared to the pre-trained one, it could prevent sentences from ending, or result in the repetition of the same word until the maximum output length is reached. Distribution collapse can largely be detected by KL divergence, and experimental results also reveal a strong correlation between them. To avoid distribution collapse, current RL fine-tuning often needs to be early-stopped before reaching a catastrophic level of KL divergence. This places RL fine-tuning in a rather awkward position. Overall, the benefits of maintaining a low KL can be summarized as follows: 1.
Stable training without collapse: This means that RL can explore more data, increasing the chances of finding better policies.
2. Efficient improvement: This means achieving as much performance improvement as possible with as few changes as possible to the reference policy.

---

Rebuttal 2: Title: Please Review Rebuttal Comment: We kindly ask the reviewer to read and respond to our rebuttal. During the rebuttal phase, we conducted new experiments that we believe address all the concerns raised regarding the paper and may merit an increase in the score. The experiments can be summarized as follows:
1. New Baselines: Introduced two new baselines: REINFORCE and a strong baseline, Elastic Reset [1] (see attached PDF Figure 3).
2. New Benchmark: All baselines have been compared on the Anthropic HH benchmark (see attached PDF Figure 4).
If there are any outstanding issues, we would appreciate the opportunity to respond before the discussion period concludes. Thank you.

[1] Noukhovitch, Michael, et al. "Language model alignment with elastic reset." Advances in Neural Information Processing Systems 36 (2024).

---

Rebuttal Comment 2.1: Comment: First, I apologize for the late response to your rebuttal; I read it some days ago and was trying to form a better idea through the other reviews and how you addressed them. I want to thank the authors for addressing the questions and concerns in my review. I better understood how the RL part works in this NLP application, and I find it interesting how it can keep contributing to the fine-tuning tasks that are much required for LLMs. I appreciate your efforts in creating new benchmarks and baselines. I will keep my score for the following reasons: my field of expertise is RL, and I don't feel confident giving you a higher score without a solid background in NLP.
Also, my review didn't consider that you should add new baselines; for instance, after using PPO, I didn't consider it relevant to include REINFORCE, which is expected to underperform against PPO. I wish you all the best.
Summary: This paper presents CORY, a novel approach for fine-tuning large language models (LLMs) using a sequential cooperative multi-agent reinforcement learning framework. Traditional methods, primarily based on PPO and its variants, often show suboptimal performance and risk distribution collapse. CORY addresses these issues by duplicating the LLM into two agents: a pioneer and an observer. These agents generate responses based on queries and each other’s outputs, exchanging roles periodically to foster cooperation. Experiments with GPT-2 and Llama-2 on the IMDB Review and GSM8K datasets demonstrate that CORY outperforms PPO in terms of policy optimality, resistance to distribution collapse, and training robustness. This highlights its potential as a superior methodology for refining LLMs in real-world applications.

Strengths:
1. The introduction of multi-agent reinforcement learning into LLM fine-tuning is both novel and compelling, with a well-justified motivation behind the designed role exchange.
2. The presentation is well-organized and clear. The comprehensive explanations and illustrative figures in section 3.2 enhance the understanding of the proposed method and its effectiveness.
3. The empirical experiments are adequate and effectively support the proposed method and its underlying rationale.

Weaknesses:
1. The paper lacks a theoretical analysis of the introduced framework.
2. The baseline comparison is primarily with single-agent PPO. However, there are many more advanced algorithms for LLM fine-tuning. It would be beneficial if the authors conducted additional experiments using other baseline fine-tuning algorithms.

Technical Quality: 3
Clarity: 3

Questions for Authors: 1. Can the authors provide a more detailed discussion on the relationship between emergent communication in multi-agent reinforcement learning (MARL) and the method in this paper?
Since the authors mention that this work is inspired by the concept that "Languages develop through agent interactions and are shaped by societal and cultural influences," it would be beneficial to explore this connection further.
2. The paper suggests employing a collective task reward to facilitate cooperation, as outlined in Eq. 4. What would happen if the task reward is not the sum of individual task rewards? Can the authors provide empirical results or theoretical analysis to show the influence of the reward design?
3. Currently, the framework includes only two agents: the pioneer and the observer. Is it possible to introduce additional agents, and would doing so enhance performance? I hope the authors can discuss this possibility further.

Confidence: 3
Soundness: 3
Presentation: 3
Contribution: 3

Limitations: Limitations have been discussed in the paper.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive remark and insightful feedback! We’re glad you found our idea both novel and compelling, our presentation well-organized and clear, and our experiments adequate and effective in supporting the proposed method. Below, we provide individual responses addressing your comments.

## **R2-1** Theoretical Analysis

Our paper has already provided detailed mathematical modeling for LLM RL fine-tuning (Section 2), token-level PPO (Appendix B), and CORY (Appendix C), along with an analysis from the perspective of multi-objective RL (Section 3.2). However, we are still willing to offer a theoretical analysis of why CORY works from the perspective of game theory. As shown in Figure 5 in the supplementary PDF, the training phase of CORY can be modeled as an extensive-form two-player game. This is a game of perfect but incomplete information, meaning that the observer is aware of all historical information, but the two agents do not know any specific utility functions. The relationship between the utility functions of the two LLM agents determines the type of game. In CORY, the utility function format $U=R_1+R_2$ corresponds to a fully cooperative game, while the format $U=R_1-R_2$ corresponds to a zero-sum fully competitive game. It is easy to prove that the policy combination corresponding to $U_{\text{obs}} \equiv U_{\text{pio}} \equiv 0$ is a Nash equilibrium of the zero-sum game, where $R_1 \equiv R_2$, meaning that the rewards of both agents are the same under any query. They tend to find policies that are similar in performance but not optimal during training, which is also verified by subsequent experiments (in **R2-4**). Conversely, the design of CORY cleverly utilizes the characteristics of the global optimal solution. Assuming that for any task query, there exists an answer that maximizes the task reward, it can be proven that 1. the game has Pareto optima (constructive proof), and 2.
Under the Pareto optima, both agents receive the same and maximum rewards. However, there are two extremes among all the Pareto solutions. In one, both the observer and the pioneer independently generate the optimal response (which is the true global optimum); in the other, the observer completely relies on the pioneer, merely repeating its optimal response. Thanks to the design of role exchange and collective reward, the policies of the two agents will avoid evolving towards the second extreme and instead evolve towards the first. Thanks, we will add this section to the appendix.

## **R2-2** Other RL Fine-tuning Baselines

Thank you for your suggestion. We have added two baselines to our analysis: REINFORCE and Elastic Reset. Here are the results on IMDB.

| | Task-reward ↑ | KL divergence ↓ |
|-----|---------------|-----------------|
| PPO | 2.17 | 44.33 |
| REINFORCE | 0.747 | 56.87 |
| ER-30 | 2.65 | 32.15 |
| ER-17 | 2.63 | 21.73 |
| ER-5 | 0.53 | 0.32 |
| CORY | **2.67** | **15.18** |

The results indicate that CORY has the ability to maintain a stable balance between the two targets. The detailed training curves are shown in Figure 3 in the supplementary PDF.

## **R2-3** Emergent Communication

Prior work has discovered that defining communication protocols as vocabularies enables agents to spontaneously emerge their own languages during cooperation. Furthermore, incorporating human labels into the learning process of the communication policy can facilitate the emergence (0 to 1) of communication policies that utilize natural language [1, 2]. CORY is equivalent to a two-player cooperative game with unidirectional communication grounded in natural language. The communication policies are pre-trained LLMs. The LLM/communication policy has already undergone one emergence during pre-training. By casting the RL fine-tuning of the LLM into a multi-agent context, we anticipate a secondary emergence (1 to 100) of the LLM.
[1] Lazaridou, Angeliki, Alexander Peysakhovich, and Marco Baroni. "Multi-Agent Cooperation and the Emergence of (Natural) Language." International Conference on Learning Representations. 2017.
[2] Graesser, Laura, Kyunghyun Cho, and Douwe Kiela. "Emergent linguistic phenomena in multi-agent communication games." 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019. Association for Computational Linguistics, 2019.

## **R2-4** Total Reward Design

This is indeed an interesting question. We have noticed that Reviewer 7TR2 also raised a similar issue. Due to the space constraints of the rebuttal, please refer to **R1-1** for details on the relevant experimental setup. The results indicate that competitive scenarios significantly underperform cooperative ones. Furthermore, under complete competition (zero-sum game settings), the agents converged to a state of low KL divergence but also low performance (similar but suboptimal), which aligns with our previous game-theoretic analysis.

## **R2-5** More LLM Agents

Yes! Introducing more agents could lead to performance improvements, which represents an exciting direction for future work. We look forward to the participation of more researchers in this area. We have already developed preliminary concepts, which fundamentally concern the design of communication topologies. There are several simple communication topologies that could be explored, including linear (chain-like), mesh (fully connected), and randomly connected. Recent research [3] has uncovered scaling laws of agent numbers in the collaboration among multiple Large Language Models (LLMs). It also discussed several other insightful communication topologies for LLMs (e.g., tree-like, radial).

[3] Qian, Chen, et al. "Scaling Large-Language-Model-based Multi-Agent Collaboration." arXiv preprint arXiv:2406.07155 (2024).
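The cooperation/competition spectrum discussed in **R2-4** (with the experimental setup in **R1-1**) reduces to a λ-weighted collective reward. A minimal sketch under our own naming (the function and its defaults are illustrative, not the authors' implementation):

```python
def collective_reward(r_self, r_other, lam=1.0):
    """Generalized collective reward R_self + lam * R_other.
    lam = 1 recovers the fully cooperative setting (Eq. 4 style),
    lam = -1 yields a zero-sum, fully competitive setting, and
    intermediate values interpolate between the two regimes."""
    return r_self + lam * r_other
```

Sweeping `lam` between -1 and 1 is how varying degrees of competition and cooperation can be represented while keeping the rest of the training loop unchanged.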
--- Rebuttal Comment 1.1: Comment: I've read the author's response and I appreciate the additional discussion and clarifications. I would like to keep my score.
Summary: In this paper, the authors study a multi-agent organization for LLM learning. Specifically, they have two LLMs, with the second one responding to the same query given the query itself and the response generated by the other LLM. The authors show this method can achieve a better tradeoff between the task reward and the KL penalty.

Strengths: The writing is clear, and the question and proposed method are interesting.

Weaknesses: Since this is an empirical paper, the expectation of the experiments, from their design to their implementation and results, would be higher. In the questions below, the reviewer mentions several points that might improve the experiments. If the authors address the concerns about the experiments, the reviewer will consider raising their score.

Technical Quality: 2
Clarity: 3

Questions for Authors:
1. As far as the reviewer is concerned, this paper is missing several important baselines to make their experiments more convincing. (1.1) The dual-LLM setup in the LLM debating literature. This line of research is highly related to the paper. There, two LLMs respond in turn to the previous generation of the other party, as well as to the original query. As the authors are explicitly considering a cooperative setting, the important question then is: which one is more effective, cooperation or competition, in terms of multi-agent LLM learning. (1.2) Mixture of experts. It appears a bit unfair to compare with a single-LLM PPO baseline--after all, the authors are using two LLMs. The question is whether this pioneer-observer structure is more effective than a simpler mechanism where opinions from multiple LLMs are aggregated together, as in recent research on MoE.
2. Fig. 2 needs to be further explained. The major drawback is the lack of a legend regarding the eta values. The reviewer is asking which two points (one on the CORY frontier and another on the PPO frontier) correspond to which value of eta.
A minor question: For the sub-optimal frontiers of CORY and PPO, it is unclear whether what is shown is the training KL and reward or those at test time.

Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2

Limitations: The reviewer lists some questions that make them curious, expecting some clarifications. It seems that some analysis or experiments could improve the understanding of the proposed method. (1) In the current setup, the pioneer and the observer possess the same architecture, i.e., they are the same LLM. Does this homogeneity matter? If the two LLMs are different, can the proposed method still work?

Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review and valuable feedback! We’re glad you found our proposed method interesting and the writing clear. We would like to address your concerns below.

## **R1-1** Dual-LLM Setup and Multi-LLM Learning

There has been some work studying the collaboration of multiple LLMs (including dual-LLM setups) to accomplish a shared task [1], and we will add this to our references. To our knowledge, CORY is the first work employing a multi-agent framework for the RL fine-tuning of LLMs. As highlighted by the reviewer, the dynamics of competition and cooperation among agents are crucial in multi-agent learning frameworks. To investigate these dynamics, we modified the original collective reward of CORY from $R_{self} + R_{other}$ to an $R_{self} + \lambda \cdot R_{other}$ format. By adjusting the value and sign of $\lambda$, we could represent varying degrees of competition and cooperation. Additionally, to mitigate the impact of reward magnitude on training, we normalized the reward values. As demonstrated in Figure 1 in the attached PDF, across both the GSM8K and IMDB datasets, the task rewards in competitive settings were significantly lower than those in cooperative settings. Beyond experimental analyses, we believe that training multiple LLMs in a cooperative setting can be modeled as a MARL problem with a shared team reward. Research in the MARL domain on credit assignment (e.g., VDN [2], QMIX [3]) could further enhance the training of multiple LLMs. This represents a promising direction for future development!

[1] Wu, Qingyun, et al. "AutoGen: Enabling Next-Gen LLM Applications via Multi-Agent Conversation." ICLR 2024 Workshop on Large Language Model (LLM) Agents.
[2] Sunehag, P., et al. "Value-Decomposition Networks For Cooperative Multi-Agent Learning Based On Team Reward." Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems. 2018: 2085-2087.
[3] Rashid, T., et al.
"QMIX: Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning." International Conference on Machine Learning. PMLR, 2018: 4295-4304.

## **R1-2** MoE Baseline

We agree that it is crucial to fairly compare the total model size as well as the number of models. Regarding the model size, our paper has already conducted ablation studies, comparing GPT2-l (770M) + CORY with GPT2-xl (1.5B) + PPO in Section 4.3, and LLaMA2 7b + CORY with LLaMA2 13b + PPO in Appendix E. The experiments demonstrate that the advantages of CORY do not come from an increase in training parameters. As for expanding the number of LLMs involved in training, we designed a simple baseline inspired by the MoE approach: aggregating the outputs of two models and selecting the output with the higher estimated value based on the Q-values. The experimental results on the IMDB dataset are as follows:

| | Task-reward ↑ | KL divergence ↓ |
|-------------|---------------|-----------------|
| PPO | 2.17 | 44.33 |
| Aggregate | 2.36 | 47.75 |
| **CORY** | **2.67** | **15.18** |

As you mentioned, we found that using MoE to aggregate responses from multiple LLMs can effectively enhance task rewards, resulting in a 9% improvement compared to PPO. However, comparing with CORY's results, we discovered that CORY achieves a 23% improvement. This indicates that CORY provides a greater enhancement in performance because it significantly boosts the capability of individual LLM agents through collaborative training. These findings indicate that the key to the effectiveness of the pioneer-observer structure lies in facilitating interactions between models.

## **R1-3** Fig. 2 Explanation

We sincerely appreciate your valuable suggestion and apologize for any confusion caused. Regarding Fig. 2(c), the values of eta from left to right are indeed 1e-5, 1e-4, 1e-3, and 1e-2. We will ensure to annotate this clearly in the revised figure for better understanding.
As for the sub-optimal frontiers, these actually represent the testing KL and reward, which we will explicitly clarify in the revised manuscript to avoid any ambiguity.

## **R1-4** Possible Heterogeneous Multi-LLM Training

This is indeed a fascinating question. Although the paper emphasizes that CORY is a plug-and-play alternative for RL fine-tuning (requiring only a duplicate of the model to be fine-tuned, without the need for extra auxiliary models during training), the applicability of CORY's core ideas to heterogeneous multi-LLM fine-tuning is a topic worthy of exploration. To this end, we conducted experiments on the GSM8K dataset involving heterogeneous LLM training with LLaMA3-8B, LLaMA2-7B, and GPT-2. As shown in Figure 2 in the attached PDF, in the (LLaMA2-LLaMA3) combination, LLaMA2's training curve exhibits stability similar to that of the original CORY, with its training KL values (consistently less than 1) significantly lower than those of single-PPO (which peak at 10). This suggests that the "knowledge transfer" mechanism can effectively alleviate the optimization pressure on LLaMA2 at the KL end. However, in the (LLaMA2-GPT2) combination, due to the insufficient capabilities of the GPT-2 model, which produces many erroneous or empty responses, it fails to alleviate any training pressure on LLaMA2, resulting in a KL curve for LLaMA2 that closely mirrors the trend of single-PPO. The weaker model hinders the training of the stronger model; a similar outcome can also be observed in the (LLaMA2-LLaMA3) combination. It is evident that in a heterogeneous setting, CORY's underlying mechanisms still function, but the selection of heterogeneous LLMs and their interactions pose significant challenges, making this an issue worth further investigation.

---

Rebuttal 2: Title: Please Review Rebuttal Comment: We kindly ask the reviewer to read and respond to our rebuttal.
During the rebuttal phase, we conducted new experiments that we believe address all the concerns raised regarding the paper and may merit an increase in the score. The experiments can be summarized as follows:
1. Dual-LLM Setup: Evaluated both cooperative and competitive settings (see attached PDF Figure 1).
2. Heterogeneous Setup: Implemented two different LLMs in CORY to assess the effectiveness of the proposed mechanism (see attached PDF Figure 2).
3. New Baselines: Introduced two new baselines: REINFORCE and a strong baseline, Elastic Reset [1] (see attached PDF Figure 3).
4. New Benchmark: All baselines have been compared on the Anthropic HH benchmark (see attached PDF Figure 4).
If there are any outstanding issues, we would appreciate the opportunity to respond before the discussion period concludes. Thank you.

[1] Noukhovitch, Michael, et al. "Language model alignment with elastic reset." Advances in Neural Information Processing Systems 36 (2024).

---

Rebuttal Comment 2.1: Title: Thanks for your response Comment: The authors' response has addressed my major concern. I have increased my score accordingly.
Rebuttal 1: Rebuttal: We would like to express our gratitude to all the reviewers for their contributions and insightful comments. We appreciate that all the reviewers find our writing and presentation clear and well-organized. We are encouraged by the reviewers' appreciation that our idea is novel and interesting (Syvc, 7TR2), that our method is viable and may influence other work to build on it (F4Ro, chZh), that the analysis from the multi-objective RL perspective is comprehensive and interesting (Syvc, chZh), that the experiments using modern and relevant benchmarks are adequate (Syvc, F4Ro), and that we include enough details about the experiments (chZh). To help address the concerns of the reviewers and facilitate further discussion, we are attaching a PDF with five figures that we reference in each of our reviewer-specific rebuttals. The figures are:

**Figure 1**: Training curves of cooperative and competitive settings between two LLMs on the IMDB and GSM8K datasets. This is mostly in response to Reviewer 7TR2 and Reviewer Syvc.
**Figure 2**: Training curves of heterogeneous LLM settings under the CORY framework on the GSM8K dataset. This is mostly in response to Reviewer 7TR2.
**Figure 3**: Comparisons to REINFORCE and Elastic Reset on the IMDB and GSM8K datasets. This is mostly in response to Reviewer Syvc and Reviewer chZh.
**Figure 4**: Training curves on Anthropic-HH. This is mostly in response to Reviewer chZh.
**Figure 5**: Game-theoretic modeling of CORY. This is mostly in response to Reviewer Syvc.

Pdf: /pdf/818b5e8cfdce78456d0cc6fe710d73cf7c881a7f.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Questioning the Survey Responses of Large Language Models
Accept (oral)
Summary: The paper focuses on evaluating 42 different language models on the American Community Survey, highlighting how model responses are governed by ordering and label biases, and how any apparent demographic correlation with specific subgroups is actually due to those subgroups' aggregated statistics being closest to uniform.

Strengths: The study in the paper is well-designed and discussed, considering a very large variety of LLMs and, in particular, both base and instruct-tuned models. The presented results are very relevant for any study that plans to examine LLMs for underlying characteristics and, most importantly, for researchers who plan to use LLMs as a proxy to study human sub-groups. The strong A-bias shown in the paper is a very important take-away.

Weaknesses: There are two main weaknesses that I have encountered while reading the paper, namely:
1) It is not clear what the authors would expect the models to do based on their training data. It is true that we don't know exactly what these models have been trained on, but if we consider, for instance, Common Crawl as the main corpus, would we expect that models trained on it would somehow produce answers correlating with certain sub-groups? I think a larger discussion in this paper on what we should reasonably expect models' answers to be is needed.
2) Given the emphasis on sub-groups, I would have expected the authors to explore impersonation of LLMs as a way of seeing whether that would steer responses to surveys in the direction of those sub-groups' typical responses. This would clarify whether LLMs have the ability to produce answers relevant to specific sub-groups, if instructed to do so.

Technical Quality: 3
Clarity: 3

Questions for Authors: Do humans as well have A-bias?
Have you considered exploring whether RLHF would introduce a slight steer for the model towards certain sub-groups (reflected in the way people give feedback) that was not present in the base model? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: I think the authors have addressed the main limitation of this work (which would be the focus on US surveys) by examining other surveys. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. **What would you expect the models’ answers to be?** It is unclear what to expect models’ responses to be. Prior work has hypothesised that models may trend towards certain demographics; for example, younger demographics, which tend to be more present on the internet. Another candidate could be models trending towards the responses of particularly populous U.S. states, simply because they may produce larger volumes of data. However, we observe that base models' ACS responses are *qualitatively* different from those of human populations. Because of these qualitative differences, we argue that the quantitative analysis of prior work may be misleading (Section 5). This is not to say that base models do not have biases, or do not represent certain populations better than others. Rather, our findings signal the need to move beyond multiple-choice prompting towards a more holistic evaluation of LLMs (e.g., open ended survey responses) to elicit more faithful representations of the population a language model might represent. **Impersonation** We focus on evaluating whether the survey responses of LLMs are representative of certain U.S. demographic subgroups. In this setting, it is standard to prompt the model without any added context. Assessing whether models have the ability to produce answers relevant to specific sub-groups if instructed to do so is beyond the scope of our work. **Do humans have A bias?** There is evidence for ordering bias in humans in the context of opinion surveys, and a tendency not to pick extreme values. However, in the context of the ACS demographic survey, it is well-understood that ordering effects play a very minor role in the distribution of responses collected by the U.S. census. The recent work of Tjuatja et al., 2023 finds that the response biases of language models (e.g., A-bias) are generally not human-like. Tjuatja, Lindia, et al. "Do LLMs exhibit human-like response biases? 
A case study in survey design." arXiv preprint arXiv:2311.04076 (2023). **Does RLHF introduce steering?** We evaluate models that have undergone RLHF, particularly the Llama 2 Instruct models, text-davinci-003, GPT-4, and GPT-4 Turbo. However, these models have undergone both standard supervised fine-tuning (i.e., instruction tuning) as well as RLHF. Overall, we observe that the responses of fine-tuned models vary more across questions (e.g., are not as balanced as those of base models). We, however, find no evidence that the responses of RLHF models better represent those of human populations. This is not to say that RLHF introduces no steer, but rather that the multiple-choice survey methodology that has recently gained traction in the community may not be appropriate to study this phenomenon. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed answers, all clear! I'm happy to increase my score to 7 --- Reply to Comment 1.1.1: Comment: Thank you for your response. We are very pleased to have addressed your concerns.
Summary: This paper prompts LLMs with 25 multiple choice questions (on basic demographic information, education attainment, healthcare coverage, disability status, family status, veteran status, employment status, and income) from the 2019 ACS. The authors use eight kinds of prompts which vary in additional context, instructions, and asking in the second person. However, each time, the next-token probabilities are used to determine the immediate reply by the LLM to the multiple choice question. To evaluate the responses and the differences between LLM and human generated responses, the authors compute the normalized entropy and use KL divergence. They find that "smaller" LLMs are vulnerable to ordering and labeling biases, and that after correcting for these through randomized answer ordering, the LLMs trend towards uniform distributions in their responses (~high entropy). Instruction-tuning seems to increase the variance in the entropy measure for LLMs, but nonetheless the entropy remains higher overall compared to the human generated responses. The authors state that the main takeaway from their paper questions the popular methodology of eliciting survey responses from LLMs via multiple choice questions. They challenge prior work and give the explanation that models consistently appear to better represent subgroups whose aggregate statistics are closest to uniform. Strengths: - The paper has a clear goal to challenge previous work on survey-derived alignment measures by offering the explanation that models consistently appear to better represent subgroups whose aggregate statistics are closest to uniform. - The paper shows that LLMs should not be used out-of-the-box to replace human responses in census data. - The paper is generally well-written and clear, which makes it easy to follow. 
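The normalized-entropy and KL-divergence comparison described in this summary can be sketched in a few lines; this is an illustrative reconstruction with made-up distributions, not the paper's actual evaluation code:

```python
import numpy as np

def normalized_entropy(p):
    """Shannon entropy of a response distribution divided by log(K),
    so a uniform distribution over K choices scores exactly 1.0."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    nz = p[p > 0]
    return float(-(nz * np.log(nz)).sum() / np.log(len(p)))

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between a model's response distribution p and a
    reference (e.g., census) distribution q, with light smoothing."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float((p * np.log(p / q)).sum())

# A near-uniform "model" distribution vs. a skewed "human" reference.
model = [0.26, 0.24, 0.25, 0.25]
human = [0.70, 0.15, 0.10, 0.05]
print(normalized_entropy(model))   # near 1.0: high entropy
print(normalized_entropy(human))   # well below 1.0
print(kl_divergence(model, human)) # positive: distributions differ
```

Under these metrics, model responses trending towards uniform show up directly as normalized entropy near 1, regardless of how heterogeneous the human reference distribution is.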
Weaknesses: While I find the paper enjoyable to read and it challenges important earlier findings on LLMs, I am not convinced that the current experiment fully supports the claims: - The 2019 ACS uses stratified sampling. Therefore, I believe that the variance is already being increased to obtain a representative sample. Since the additional context given to the LLM is very limited + the draws are independent, it seems like a very hard task to generate a matching distribution by the LLM. This is less important when assessing, for example, the political view of an LLM. - First-token probabilities may be a biased measure to obtain the replies (see Wang et al., 2024). - In addition to first-token probabilities, more advanced prompting techniques, such as Chain-of-Thought (Wei et al., 2022a), could improve the coherence and dependencies in the responses, especially for the sequential generation. - I believe that the sensitivity to ordering and labeling biases is known (Wei et al. (2022b) and Wei et al. (2023)). Overall, the authors show that independent draws from an LLM with limited context generate a uniform distribution, which questions earlier findings made using such a methodology. While I agree with the authors on that statement, I believe that their experiment adds little value to further support their claim on "better represent subgroups whose aggregate statistics are closest to uniform." I believe additional, more fine-grained analysis is required for this. References ------------- Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., ... & Zhou, D. (2022a). Chain-of-thought prompting elicits reasoning in large language models. Advances in neural information processing systems, 35, 24824-24837. Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., ... & Fedus, W. (2022b). Emergent abilities of large language models. arXiv preprint arXiv:2206.07682. Wei, J., Wei, J., Tay, Y., Tran, D., Webson, A., Lu, Y., ... & Ma, T. (2023). 
Larger language models do in-context learning differently. arXiv preprint arXiv:2303.03846. Wang, X., Ma, B., Hu, C., Weber-Genzel, L., Röttger, P., Kreuter, F., ... & Plank, B. (2024). "My Answer is C": First-Token Probabilities Do Not Match Text Answers in Instruction-Tuned Language Models. arXiv preprint arXiv:2402.14499. Technical Quality: 2 Clarity: 2 Questions for Authors: Minor comments: - There is a discrepancy between the 42 models mentioned in the abstract vs 39 in the main text. - Typo line 561: rompt instead of prompt (in title) Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper does not explicitly state the limitations of the experiments but carefully reassesses its findings in the conclusion. The checklist points to Section 2 where, for example, the authors point to the prompt ablations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the feedback. We hope to address your concerns and clarify some misunderstandings below. > their experiment adds little value to further support their claim on "better represent subgroups whose aggregate statistics are closest to uniform." We believe this to be a misunderstanding. We do not claim that models better represent subgroups whose aggregate statistics are closest to uniform. Let us clarify. Our experiments show that, using the de-facto standard multiple-choice methodology, models strongly trend towards uniformly random responses (Figures 4 and 5). This results in a very strong correlation between subgroup entropy and alignment (Figure 6). Such a correlation consistently holds across surveys, subgroups, and models. Therefore, models **appear to** better represent subgroups whose aggregate statistics are closest to uniform. Our experiments explain the findings of earlier work (i.e., Santurkar et al. 2023, see the discussion in Section 5, “Beyond the ACS”), and why these findings may be misleading. Models appear to better represent younger demographics not because of the pre-training data, but because younger demographics happen to have more uniform responses for the ACS. We are not claiming that language models actually better represent certain populations intrinsically. We are arguing that the de-facto standard methodology to survey language models has strong limitations, and it can potentially lead to misleading insights. Our claims are not about whether language models better represent certain populations, but rather about the limitations of the dominant survey methodology itself. >The 2019 ACS uses stratified sampling. Therefore, I believe that the variance is already being increased to obtain a representative sample. Since the additional context given to the LLM is very limited + the draws are independent, it seems like a very hard task to generate a matching distribution by the LLM. 
If we understand correctly, your point is that the aggregate responses of the U.S. census (the reference population used in our work) appear to be more entropic than they actually are, due to the U.S. census not representing households uniformly at random. Our claim is that, when using the dominant multiple-choice methodology to survey models, model responses are *qualitatively* different from those of the U.S. census. Model responses strongly trend towards uniformly random, irrespective of the survey question being asked. The responses of the U.S. census do not – they are heterogeneous. If stratified sampling were to have a small effect on the U.S. census responses, it is still the case that model responses (e.g., the blue dots in Figure 4a) look nothing like those of the U.S. population (green dots). Please note that there are no “draws” – for each survey question, we extract the models’ survey response analytically by extracting its next-token probabilities over each of the answer choices (e.g., “A”, “B”, …). This is the standard methodology introduced by Santurkar et al. for surveying language models using multiple-choice questionnaires. Our contribution is to shed light on the properties of these output distributions. > First-token probabilities may be a biased measure to obtain the replies [...] more advanced prompting techniques, such as Chain-of-Thought (Wei et al., 2022a), could improve the coherence and dependencies in the responses, especially for the sequential generation. We agree with your points. They support the overall conclusion of our work that the current multiple-choice methodology used to survey language models has strong limitations, and we should move towards a more holistic evaluation of LLMs (e.g., open ended survey responses rather than multiple choice) in order to elicit more faithful representations of the population a language model might represent. > I believe that the sensitivity to ordering and labeling biases is known (Wei et al. 
(2022b) and Wei et al. (2023)). Ordering bias has been observed in various works. We cite Robinson and Wingate (2023a). We are happy to include more references in the final version. Our work is different from prior work in studying the effects of ordering biases for models’ survey responses. We show that models’ survey responses can substantially change after adjusting for their ordering biases, leading to fundamentally different insights regarding the populations that models best represent. We hope the additional explanations helped address your concern. We are happy to answer further questions that you may have. --- Rebuttal 2: Comment: Thank you for the clarifications. Re-reading your paper from the perspective that your main goal is to question the validity of prior work on survey-derived alignment measures, which (unconditionally) sample survey responses from LLMs, made me realize that my initial rating and assessment of the paper could have been much higher. I have adjusted my rating accordingly. My initial assumption, related to the stratified sampling comment, was that surveys through LLMs were conducted conditionally on demographics. For example, the paper released today by Ashokkumar et al. (2024) requires LLMs to respond to survey questions conditional on random demographic characteristics. This is a more valid approach to conducting surveys through LLMs, and I agree with your paper that there are better approaches than unconditional sampling (or evaluating token probabilities). Similar to the remark of Reviewer gFgE, the conditioning on demographics gave me a conflict with the US Census data set, as many questions related to the demographic information would then be embedded in the prompt. However, as you point out, you also run experiments on survey opinions. It might be worthwhile to point out that conditioning LLMs on demographics via prompts may be part of the solution. 
However, you implicitly already do this with the sequential prompting strategy, keeping previous demographic information in context. Maybe this points toward conditioning not being a solution? References ------------- Ashokkumar et al. (2024). Predicting Results of Social Science Experiments Using Large Language Models. --- Rebuttal Comment 2.1: Comment: Thank you for your response. We are very pleased to have addressed your concerns. Regarding sequential prompting, we condition on models' previous outputs rather than on existing demographics of U.S. census individuals, which again does not result in meaningful aggregate response statistics. However, we think that conditioning on *existing* demographics (e.g., U.S. census demographics) may be one way to obtain more reliable survey responses from language models. However, studying the effectiveness of such an approach is beyond the scope of this work.
Summary: This paper critically examines possible pitfalls of using the responses of LLMs to survey queries to study model alignment. They found substantial biases, e.g., with respect to the ordering of response options. Strengths: The paper examines a very important methodological topic that has gained significant attention also beyond computer science: measuring the values and opinions in which LLMs are rooted. As such, the topic is very relevant, interesting, timely, and certainly fitting for the conference. The paper is well written and well motivated. The provided materials (i.e., code and documentation) are exemplary. Results are presented in a clear and concise way. Weaknesses: The paper is motivated by surveys on the "demographics, political opinions, or values best represented by current language model". For this, the paper mostly relies on the ACS dataset. However, this questionnaire mostly covers demographic information, for which the LLM naturally cannot have a "correct" answer. It would be critical to see for which types of questions the high-entropy responses actually hold. From my own experience, I would not expect at all that a similar uniform distribution would also occur for (political) opinion or value-based questions. Minor note: I would recommend adding a forward mention in Section 2 that other datasets will also be covered in Section 5. Although a lot of LLMs have been included in the study, only the large-scale models of OpenAI have been studied. To see if the observations also hold for other, similarly large models, including those from other commercial providers (e.g., Anthropic or Google), would be nice. This is not required for the key results of the paper, however. Technical Quality: 4 Clarity: 3 Questions for Authors: - Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The limitations are mostly described well in the paper. For an exception, see weaknesses. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive assessment and the feedback. Please note that we discuss in Appendix E how our findings for the ACS transfer to opinion surveys. We agree that our observations regarding models’ survey responses may be partially attributable to survey questions not having a “correct” answer. This is in stark contrast with the multiple-choice questions that are typically used to evaluate LLMs (e.g., MMLU), and reveals interesting new insights for model evaluation. --- Rebuttal Comment 1.1: Comment: Thank you for your comment and the pointer to the appendix!
Summary: This paper conducts experiments to verify the alignment between human and LLM responses to the ACS survey. In particular, the paper questions existing literature suggesting that LLMs can be used as proxies for measuring responses to survey questions, suggesting instead that LLM choices are biased by the ordering of the questions, and that when choice order is randomized, models tend to present uniformly random survey responses, thereby closely modeling the behavioural characteristics of sub-groups whose aggregate statistics are close to a Gaussian. The paper suggests that using LLMs as human proxies for multiple choice surveys is a questionable strategy. Strengths: 1. The authors test 42 different LLMs in their experiment, testing base, instruction-tuned, and RLHF-tuned models. This is comprehensive and substantive, and a lot of work. The results they find agree across model size and type, barring one outlier in instruction-tuned models over one survey. 2. The authors test the use of randomized choice order and the original choice order, finding that randomizing the response order results in a uniform distribution of responses. 3. The authors investigate the effect of using instruction-tuning to train models. 4. The authors also test surveys besides the ACS, and find that the results persist. 5. The authors interpret findings in earlier papers and provide explanations for why the LLM responses more closely resemble responses from certain demographic sub-groups, i.e., that these distributions are Gaussian. 6. The paper is well-written and presents a simple and elegant experiment, and clear and consistent takeaways. Weaknesses: 1. Order choice bias is a well-known phenomenon in LLMs (Lu, Yao, et al. "Fantastically ordered prompts and where to find them: Overcoming few-shot prompt order sensitivity." arXiv preprint arXiv:2104.08786 (2021)). 
Therefore, the big finding in this paper is that LLM responses model a Gaussian when order choices are randomized using a Gaussian distribution. 2. Please use a different color for the sub-group dots in Fig. 5 -- it is confusing because the same color is used for survey responses in earlier figures. 3. Takeaways are harder to gauge from Fig. 3. Please use simpler aggregate statistics like means, confidence intervals, and variance, and save this figure for the appendix. 4. Please plot the log linear scale as a dashed line in Fig. 2 for easy comparison. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How was the subset of 25 questions selected from the ACS? 2. Do the authors have any comments on the training data and its influence on survey responses, beyond the frequency of appearance of certain letters in English as noted in C? 3. Does skewed randomization (as opposed to simple randomization) of the responses present a similar skew in the model response distributions? 4. Instruction-tuned models show higher variance in entropies. What could be causing this? 5. What form of instruction tuning was tried? 6. Were the effects of question-ordering investigated? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. We will implement the suggested changes to improve the figures. Let us address your questions in the following. **Selection of questions.** We chose 25 representative questions to achieve diversity over topics (e.g., educational attainment, healthcare status, employment status, etc.), while keeping figures readable. In the rebuttal pdf we include additional results for all ACS survey questions for comparison. We provide the A-bias and response entropy of models with publicly available weights. They show identical trends. **The role of training data.** Regarding the pre-training data, we find that all base models exhibit similar response distributions, despite being trained on substantially different pre-training data. While we find little difference across base models, beyond the ordering effects identified in Appendix C for smaller language models, we do observe substantial differences between the response distributions of different fine-tuned models. This suggests that fine-tuning data can have a larger effect on models’ distributions. This is a positive result for future work seeking to fine-tune models to alter their survey responses (e.g., emulate those of certain populations). **Why do instruction-tuned models have higher variance in entropy?** We generally find that, compared to base models, instruction-tuned models tend to have higher confidence in their responses for at least some of the survey questions. This causes instruction-tuned models to have higher variance in their response entropy compared to base models, as any deviation from balanced responses is more amplified. Note that we used publicly available instruction-tuned models; we did not perform instruction tuning ourselves. For some models (e.g., the Llama models), these instruction-tuning datasets are not publicly available. **Effects of question-ordering** We follow the predominant methodology of asking questions independently of each other. 
Therefore, there are no question-ordering effects. If questions were to be asked in sequence, putting the answer to previous questions in context, then we would expect to observe substantial question-ordering effects. But this was not the focus of our work. **Skewed randomization** Yes, for models that exhibit choice ordering biases, skewed randomization would change the response distribution. This is because we would no longer uniformly average across each of the possible choice orderings, but perform some weighted average. However, uniform is the only principled approach here to adjust for it.
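The ordering-bias adjustment described in this rebuttal (uniformly averaging a model's response distribution across every permutation of the answer choices) can be sketched as follows; `get_probs` is a hypothetical stand-in for querying a model's next-token probabilities over the answer labels, not an interface from the paper:

```python
from itertools import permutations

def debias_orderings(get_probs, choices):
    """Adjust for choice-ordering bias: present every permutation of the
    answer choices and uniformly average the probability each underlying
    choice receives across orderings (weighted averages would instead
    bake the skew of the randomization into the result)."""
    totals = {c: 0.0 for c in choices}
    perms = list(permutations(choices))
    for order in perms:
        probs = get_probs(order)  # one probability per displayed position
        for choice, p in zip(order, probs):
            totals[choice] += p
    return {c: t / len(perms) for c, t in totals.items()}

# Toy model with pure A-bias: it always puts most mass on the first
# label, regardless of which choice happens to sit there.
def toy_model(order):
    return [0.7] + [0.3 / (len(order) - 1)] * (len(order) - 1)

print(debias_orderings(toy_model, ["yes", "no", "maybe"]))
# every choice ends up with the same averaged probability (uniform)
```

With this toy model, the position-only preference cancels out exactly, so the debiased distribution is uniform, illustrating why adjusting for ordering bias can make a model's survey responses trend towards uniformly random.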
Rebuttal 1: Rebuttal: We thank all reviewers for their feedback. We hope to have addressed your concerns, and we are happy to answer any further questions you may have. Thank you, Authors Pdf: /pdf/a0e4f4f00e81019ac67fcc15b866625951900655.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Mean-Field Langevin Dynamics for Signed Measures via a Bilevel Approach
Accept (spotlight)
Summary: The Mean-Field Langevin Dynamics (MFLD) framework is used to solve optimization problems over a manifold. The main contribution appears to be reducing a general optimization problem over signed measures to one over probability measures using lifting or bilevel approaches. Convergence rates of MFLD, when applied to both approaches, are investigated. Strengths: It sounds interesting to use lifting or bilevel ideas to reduce general optimization problems over signed measures to problems that MFLD can solve. Weaknesses: The scope of the lifting and bilevel ideas should be made clear. For instance, the lifting idea seems to be applicable when the signed measure can be represented as a projection of probability measures; does the projection representation always exist? Also, the reduction to an optimization problem over probability measures may yield a more difficult optimization problem, e.g., the objective functions (3.1) and (3.3) might be more difficult to deal with. Some discussion of how to handle these functions empirically would be helpful. Technical Quality: 4 Clarity: 3 Questions for Authors: See *weakness* Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their positive evaluation, and address their questions and comments below. - Thank you for pointing out the potential confusion about the scope of these ideas. In the final version, we will further clarify that the lifting and bilevel ideas are both always applicable for optimization over signed measures. In particular, any signed measure $\nu \in \mathcal{M}(\mathcal{W})$ can be represented as a projection of a probability measure $\mu(\mathrm{d} r, \mathrm{d} w) = \delta_{\|\nu\|_{TV} \frac{\mathrm{d}\nu}{\mathrm{d}|\nu|}(w)}(\mathrm{d} r) \frac{|\nu|(\mathrm{d} w)}{\|\nu\|_{TV}}$. - The objective functions $F_{\lambda,b}$ (3.1) and $J_\lambda$ (3.3) are amenable to optimization using Wasserstein gradient flow (WGF) or MFLD, because they are defined over sets of probability measures, whereas the original optimization problem (1.1) is not. To solve the optimization problems (3.1) or (3.3), we rely on the WGF and MFLD literature, which has studied the problem of optimizing convex functionals over probability measures in detail. We will further emphasize this fact in the final version.
Summary: The paper studies the extension of mean-field Langevin dynamics (MFLD) to perform convex optimization over the space of signed measures (instead of probability measures). The paper considers the lifting and bilevel approaches and shows that the latter guarantees better convergence properties over a wider range of hyperparameters. MFLD-bilevel is shown to be amenable to a previously studied improved temperature annealing schedule over the standard $1/\log t$ rate. Moreover, when learning a single index model, MFLD-bilevel is shown to achieve a local convergence rate which depends polynomially on the dimension and inverse temperature via an improved LSI constant. Strengths: * The paper rigorously compares the lifting and bilevel approaches, and the result that the latter leads to stronger guarantees (at least in the low temperature regime) is quite surprising to me, as many influential works on shallow NNs have built on the lifted formulation. * The paper is overall well written and detailed, and the analysis utilizes novel techniques (namely local LSI bounds) to go beyond the standard results in MFLD. * While the paper has some similarities with [Takakura, Suzuki, 2024], the setting is more general with a focus on more abstract optimization, as explained in Appendix A. Weaknesses: * The parameter space $\mathcal{W}$ is assumed to be a compact manifold. Does this mean compact without boundary, which precludes subspaces of Euclidean space? Can this assumption be removed by e.g. adding a confinement term and considering relative entropy regularization with a log-concave distribution? While the non-necessity of these elements is presented as an advantage in Section 1.1, studying $\mathcal{W} =\mathbb{R}^d$ is simply necessary for certain optimization problems and it would be very nice to fill out the details. 
Technical Quality: 4 Clarity: 3 Questions for Authors: * In Theorem 4.2, can the time complexity to achieve $(1+\Delta)$-accuracy be rewritten in terms of $\Delta$ and compared with the usual logarithmic annealing? The current rate expression in terms of $\delta$ makes this unclear. For example, it seems $\beta_t\approx O(1/t)$. * Besides the established positive results, are there any theoretical/empirical results showing the difference in optimization dynamics between using squared and un-squared TV regularizer? * What is the main issue when trying to extend the results of Section 5 to multi-index models or general target functions? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Discussed in corresponding sections. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thoughtful assessment, and address their questions and comments below. - **On compactness**: $\mathcal{W}$ is assumed to be without boundaries. This assumption is missing on line 20. Thank you for pointing this out. The techniques presented in our work can be used without significant modifications for dynamics over $\mathbb{R}^d$ using additional confinement. We chose to focus on compact Riemannian manifold without boundaries because it is the natural setting for 2-layer neural networks with homogeneous activations (on the sphere), or for sparse deconvolution (on the torus). The absence of additional confinement is not an advantage in general; the phrasing in Section 1.1 can indeed be misleading and will be corrected. - **On the statement of Theorem 4.2**: To state the time complexity of this theorem in terms of $\Delta$, we notice that $\delta = \Delta / \log(C/\Delta)$ for some constant $C$ that is polynomial in problem parameters (including $\lambda^{-1}$ and ${J^*_\lambda}^{-1}$). The time complexity would roughly read $T = \exp\big(O\big(\frac{\log(C/\Delta)}{\lambda\Delta}\big)\big)$. This can be compared with the classical logarithmic annealing procedure, which results in $T = \exp\big(O\big(\frac{1}{\lambda\Delta J^*_\lambda}\big)\big)$. Further, following the discussion on line 271, we can take $J^*_\lambda = O(\lambda)$. Thus, the annealing schedule of Theorem 4.2 leads to an improvement of order $O(\log(C/\Delta)/\lambda)$ in the exponent. Originally, we stated the time complexity of Theorem 4.2 using $\delta$ rather than $\Delta$ to simplify the presentation while having in mind that $\delta$ and $\Delta$ are equivalent up to logarithmic factors. We agree with the reviewer that the time complexity should be stated explicitly in terms of $\Delta$ for easier comparison with the baseline annealing procedure. 
- **On the difference between using squared and un-squared TV regularizer**: Concerning the lifting formulation, our negative results can be generalized to the case of the un-squared TV regularizer, and we will add a remark on this in the final version. For the bilevel formulation, using a similar trick as for the squared TV norm, the unsquared TV norm can be expressed as $\Vert \mu\Vert_{TV} = \inf_{\eta \in \mathcal{M}_+(X)} \frac12 \int \frac{\vert \mu\vert^2}{\eta} + \frac12 \eta(X)$, but here $\eta$ is an un-normalized measure, so the resulting bilevel problem is not amenable to MFLD. - **On the generalization of Section 5 to multi-index models**: For Theorem 5.2, we rely on the rotational symmetry (orthogonal to the target direction) induced by the single-index model to simplify the Gibbs potential. It is an interesting open question to find examples of multi-index models with similar symmetries that lead to a polynomial LSI constant. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. I will maintain my high rating of the work.
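For reference, the variational identity for the unsquared TV norm quoted in this rebuttal follows from a standard Cauchy–Schwarz plus AM–GM argument (a sketch under the assumption $|\mu| \ll \eta$; otherwise the right-hand side is infinite), for any $\eta \in \mathcal{M}_+(X)$:

```latex
\|\mu\|_{TV}
  = \int \frac{\mathrm{d}|\mu|}{\mathrm{d}\eta}\,\mathrm{d}\eta
  \le \left( \int \Big( \frac{\mathrm{d}|\mu|}{\mathrm{d}\eta} \Big)^{2} \mathrm{d}\eta \right)^{1/2} \eta(X)^{1/2}
  \le \frac12 \int \frac{\vert\mu\vert^{2}}{\eta} + \frac12\,\eta(X)
```

with equality throughout at $\eta = |\mu|$, so taking the infimum over un-normalized $\eta \in \mathcal{M}_+(X)$ recovers $\|\mu\|_{TV}$ exactly, and the minimizer's total mass $\eta(X) = \|\mu\|_{TV}$ is unconstrained, which is why this formulation does not directly fit MFLD over probability measures.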
Summary: This paper extends the well-known and recently extensively studied mean-field Langevin dynamics (MFLD) to optimization problems over signed measures (instead of probability measures). This has applications and relevance to the training of NNs and other problems in data science, such as sparse deconvolution, which are intrinsically of such a form. As an example, the learning of one neuron is presented at the end of the paper. In order to fit the setting of signed measures into the classical probabilistic framework of the MFLD, the Authors leverage and explore two classical strategies, namely the lifting and bilevel reductions. They observe that the bilevel reduction behaves more favorably than the lifting reduction, allowing stronger and faster convergence to be obtained. Strengths: - Extending the MFLD via a bilevel approach to optimization problems over signed measures seems to be an interesting and relevant direction, given that this captures several problems of interest in machine learning and data science. - After discussing two potential directions to approach the problem (lifting and bilevel), the paper adapts and employs the convergence analysis of [Chi22b] in the new setting of the bilevel MFLD, which turned out to be the way to go because its assumptions hold in more realistic scenarios. - Despite the paper being of a purely theoretical nature and in parts quite technical, it is overall well-written and easy to follow. The organization and structure of a general introduction, revisiting the classical MFLD as well as the two reduction strategies in Sections 2 and 3, respectively, helps in this regard. - Overall, the technical contributions seem to be rigorous and convincing. Weaknesses: The experimental exploration of the proposed approach is a bit limited. The experiment in Figure 1 seems very academic. 
I was wondering if the Authors could comment on whether there are experiments already conducted in the literature that could demonstrate the practicality of the approach. A remark on that instead of the experiments in Figure 1, which could be presented in the Appendix, would be appreciated. Yet, as the paper's contribution is predominantly of theoretical nature, I _do not_ see the limited experimental investigation as a reason for a lower score. Technical Quality: 3 Clarity: 3 Questions for Authors: - The description of the lifting and bilevel reduction approaches in lines 50--55 feels a bit loose and vague, and not particularly insightful. I would recommend that the Authors either skip this part or elaborate by giving more insight into how these approaches allow one to pass from a signed measure to a probability measure. At the same place, some citations of the papers proposing these ideas would be welcome (in particular for the lifting). - What is $J$ in line 44? - uniform log Sobolev inequality: At first sight, it seems unclear whether this assumption is reasonable or quite strong, as it has to hold for $\mu_t$ for all times $t$. Could the Authors comment on that? Are there sufficient assumptions on $\mu_0$ and $G$ that ensure such an assumption at all times? Moreover, does it hold in interesting settings such as the training of one-hidden layer NNs? As far as I know, the cited reference [Chi22b] addresses parts of this. It would be nice to include a comment. - Is there any intuitive insight that could explain why the bilevel approach works much better than the lifting strategy? - And I have a clarifying question concerning the bilevel reduction: The central observation allowing this reduction is the fact that the TV-norm can be expressed as in line 197? In addition to the change of infima in line 198, which allows one to recast the optimization problem over the signed measures into an optimization over probability measures?
(If so, maybe it would be nice to put this as an equation (in addition to the Proposition) to make it better visible as the central "trick".) Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I do not see any unaddressed limitation. Moreover, I like how the lifting approach is discussed, despite turning out not to be the way to go. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed evaluation, and address their questions and comments below. - Thank you for the feedback on this part of the introduction, it will be adapted. - "$J$" on line 44 should be "$F$". - Thank you for the feedback on this part of Section 2, we will add further discussion of the reasonableness of the uniform log-Sobolev inequality (LSI) assumption to the final version. The cited reference [Chi22b] indeed contains sufficient conditions on $F$ ensuring uniform LSI, which include the training of two-layer neural networks under some technical assumptions such as a bounded activation function. Part of our motivation for the present work was to remove this boundedness assumption. - On why "bilevel" is better than "lifting" intuitively: For optimization over probability measures $\mathcal{P}_2(\mathcal{W})$, MFLD differs from Wasserstein gradient flow (WGF) by adding noise, which encourages exploration of $\mathcal{W}$. For optimization over signed measures $\mathcal{M}(\mathcal{W})$, which can be formulated as optimization over $\mathcal{P}_2(\mathbb{R} \times \mathcal{W})$ by lifting, there is no need to explore in the direction of the "weight" variables in $\mathbb{R}$, as the landscape is already convex in these directions. On the contrary, adding noise to both the weight ($\mathbb{R}$) and position variables ($\mathcal{W}$) turns out to be detrimental to convergence, by our negative result on "lifting" in Section 3.1. This suggests following an intermediary dynamics: adding noise to the WGF on the lifting objective, but only on the position variables. The bilevel approach is a two-timescale limit of this intermediary dynamics (see Eq. (3.4)), which is amenable to analysis as it is itself an instance of MFLD. - Indeed, lines 197 and 198 are the central trick leading to the bilevel formulation. We will further emphasize the role of this trick in the final version.
--- Rebuttal Comment 1.1: Comment: Thanks for your reply and for addressing my questions, in particular also for providing some intuition about the two approaches, which does sound very reasonable. After reading also the other rebuttals and the Authors' replies, I remain very positive about the paper and maintain my positive score.
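As a reader's aid for the exchange above, the reformulation that the rebuttal calls the central trick can be verified pointwise via AM-GM. This is our reconstruction based only on the formula quoted in the rebuttals, not a verbatim statement from the paper:

```latex
% Reconstruction (assumed, not quoted from the paper): the TV norm as an
% infimum over nonnegative measures. For \eta with |\mu| \ll \eta, AM-GM gives
% pointwise
%   \tfrac12 \frac{|\mu|^2}{\eta} + \tfrac12 \eta \ge |\mu|,
% with equality iff \eta = |\mu|. Integrating over X:
\[
  \Vert \mu \Vert_{TV}
  = \inf_{\eta \in \mathcal{M}_+(X)}
    \left( \frac{1}{2} \int_X \frac{\vert\mu\vert^2}{\eta}
         + \frac{1}{2}\,\eta(X) \right),
\]
% attained at \eta = |\mu|, where each term equals \tfrac12 \Vert\mu\Vert_{TV}.
% Exchanging the infima over \mu and \eta then recasts the problem over signed
% measures \mu as a bilevel problem in the nonnegative measure \eta.
```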
Summary: Mean-field Langevin dynamics (MFLD) has been developed for optimizing convex functionals over the space of probability measures. This work extends MFLD to convex problems defined over the space of signed measures. The authors consider two approaches: lifting and bilevel approaches, and prove the superiority of the bilevel approach. The lifting approach cannot satisfy the two required conditions for MFLD convergence under weak regularization, whereas the bilevel approach satisfies both conditions simultaneously. Additionally, an improved convergence rate is given for the enhanced annealing schedule of the temperature. Strengths: - This work successfully extends MFLD to convex optimization problems over the space of signed measures. Although recent work [TS24] also considered a similar approach, their method is a special case of the MFLD-bilevel proposed in the paper. - This work proves an improved convergence rate for MFLD-bilevel with annealing temperatures. Although the rate still exponentially depends on the regularization strength, it may be difficult to avoid such dependence. Weaknesses: - Since this work considers (compact) Riemannian manifolds as the particle space, time discretization is essentially challenging. I’m curious about how to discretize the dynamics in time and guarantee the convergence rate. - I guess the lifting approach satisfies both (P1) and (P2) once we limit the range of $r$ to be bounded, like $r∈[−R,R]$. If this is true, it is worth considering whether the bilevel approach is truly superior to the lifting approach. - As the authors commented in the paper, [TS24] studies a quite similar method. I would like to see any additional technical challenges or difficulties compared to [TS24] if possible. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Proposition 5.1, the iteration complexity for entering the sublevel where MFLD converges faster is not specified. 
Is it possible to explicitly describe the time required to enter this level? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Limitations are addressed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their detailed and encouraging comments. We address each point in "Weaknesses" and "Questions" separately. - **On time-discretization over Riemannian manifolds**: So far, the theory for time-discretization of MFLD is established when $\Omega = \mathbb{R}^d$ [SWN23], or when the objective functional $F+\beta^{-1} H$ is a relative entropy and $\Omega$ is a Hessian manifold [GV22] or a product of spheres [LE23]. In all cases, a crucial ingredient is the assumption that $F$ is smooth in the sense of (P1), hence our focus on this property. It would indeed be an interesting direction for future research to see how the analysis techniques of the aforementioned works could be combined, to show convergence guarantees for time-discretizations of MFLD on manifolds. - **On constraining $r$ to $[-R, R]$ in the lifting approach**: Indeed, artificially limiting the range of $r$ would ensure (P1) and (P2). This is an interesting suggestion to make the lifting approach work. We did not find it natural to investigate this direction mainly for the following reasons: - Practically, this introduces a hyperparameter $R$, the tuning of which might be quite tricky: taking $R$ too small will exclude the true solution ($\mathrm{argmin}~ G_\lambda$) from ever being reached, and taking $R$ too large will significantly affect the smoothness and LSI constants. - Theoretically, the manifold $[-R, R] \times \mathcal{W}$ possesses boundaries, which adds significant technical difficulties to the analysis (in particular for time discretization). We note that a related modification of the lifting approach is discussed in our answer to Reviewer akPi (4th bullet point). - **On the technical difficulties compared with [TS24]**: The dynamics they analyzed is (a variant of) MFLD-Bilevel applied to an objective $G$ of a specific form, as explained in Appendix A. 
The additional difficulties for us were $a)$ to establish (P1) and (P2) for the bilevel objective $J\_\lambda$ with no structural assumptions on $G$; $b)$ to analyze the convergence of MFLD-Bilevel beyond the crude upper bound provided by Theorem 2.1, corresponding to [TS24, Theorem 3.7]. We also note that the angle of that paper is quite different as it considers neither the lifting approach nor signed measures. (We developed our results independently and concurrently with [TS24], which was announced on arXiv in March 2024.) - **On the time $t_0$ required until Proposition 5.1 applies**: $t\_0$ can indeed be explicitly described by inspecting the proof. Inspection reveals that the localness requirement is $J\_{\lambda,\beta}(\eta_{t_0}) - \inf J\_{\lambda,\beta} \leq \delta$ for $\delta = \left( 2L_0 G(0) (1+\frac{L_0}{\lambda}) \frac{L_0}{\lambda} \beta \sqrt{\frac{2\beta}{\alpha^*}} \right)^{-2} \left( \log(1-\varepsilon/\alpha_\beta^*) \right)^2$, and that it is guaranteed for $t_0 = \frac{\beta}{2 \alpha\_\tau} \exp(\frac{L\_0}{\lambda} G(0) \beta) \log\frac{J\_{\lambda,\beta}(\eta_0) - \inf J\_{\lambda,\beta}}{\delta}$. [GV22] Khashayar Gatmiry and Santosh S Vempala. “Convergence of the Riemannian Langevin algorithm” (2022). [LE23] Mufan Li and Murat A Erdogdu. “Riemannian Langevin algorithm for solving semidefinite programs” (2023). [SWN23] Taiji Suzuki, Denny Wu, and Atsushi Nitanda. “Mean-field Langevin dynamics: Time-space discretization, stochastic gradient, and variance reduction” (2023). [TS24] Shokichi Takakura and Taiji Suzuki. “Mean-field Analysis on Two-layer Neural Networks from a Kernel Perspective” (2024). --- Rebuttal Comment 1.1: Comment: Thank you for the reply. The authors have addressed my concerns well. I will increase the score. I would encourage the authors to add discussion on the lifting case with the constraint $r \in [-R, R]$.
NeurIPS_2024_submissions_huggingface
2024
Observational Scaling Laws and the Predictability of Language Model Performance
Accept (spotlight)
Summary: This paper proposes observational scaling laws to align scaling laws of computing from different model families, which are trained on various recipes, by projecting model benchmark performance to surrogate compute. This enables applying scaling law analysis without actually training models. Using the observational scaling laws, the authors predict emergent abilities, including agent performance of models trained with much more compute. Strengths: 1. This paper presents an elegant approach to resolve the misalignment of scaling laws due to different training recipes (e.g., pretraining data). 2. This paper also shows the surrogate compute drawn from the proposed observational scaling laws can predict emergent abilities with a sigmoid function. Weaknesses: 1. The predictability of emergent abilities seems to be overclaimed. The provided results in Fig. 4, 6 and 7 support that the sigmoid function can well fit the relationship between downstream performance and (surrogate) compute. However, the feasibility of making reliable predictions on emergent ability is doubtful. The fitting data in Fig. 4 include points that have shown non-trivial performance and some even reached the inflection point. And extrapolation results in Fig. 5 only scale 4x. 2. Projecting the model benchmark performance to low-dimensional surrogates of compute seems to imply the assumption that different model families mainly differ in the speed to obtain abilities and have little difference in the trade-off of different capabilities. Let's say A models are trained with 1% code + 99% text, while B are trained with 99% code + 1% text, do observational scaling laws still hold for these two model families? Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Eqn. 3 and 4 indicate that model performance scales with log compute in sigmoids. But Eqn. 5 shows a linear projection between model performance and surrogates of log compute. This misalignment seems weird. 2.
What are the implications of observational scaling laws on evaluating pretraining data quality? Do different families of models show very different compute efficiency according to your results in aligning them? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: The applicability of observational scaling laws seems to depend on training data distribution. The influence has not been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. We would like to address your remaining questions and concerns with the following responses. ### Capability predictability and cutoff selection > *“The predictability of emergent abilities seems to be overclaimed… However, the feasibility of making reliable predictions on emergent ability is doubtful. The fitting data in Fig. 4 include points that have shown non-trivial performance and some even reached the inflection point”* We would like to clarify that for the emergent capability setups, some data points above the random-guess region are needed, since it is not possible to fit a reliable scaling law that extrapolates to the effective scaling region using only random-guess data points. Our cutoff point was carefully selected to unify all the setups as much as possible to avoid the concerns of "cutoff overfitting" to each specific setup, so there may be more non-trivial data points in some setups than others. We will adjust the claims and explain this point more explicitly in the paper. In addition, we have also included extra results of pushing back the cutoff point for each specific emergent capability task, included in Figure 1(a) in the supplementary PDF. We presented the results on three representative tasks due to space constraints and tested on the newly released **Llama-3.1 405B (FP8)** to assess the generalization to a larger scale. We observe that with a more aggressive cutoff setup, observational scaling laws can still provide reasonable predictions, even to a larger scale (Llama-3.1 405B) that was not available for testing before. We will include these additional results in our future version of the paper. > *”And extrapolation results in Fig. 5 only scale 4x.”* We would like to clarify that the X-axis is on the log-e instead of log-2 scale, so the extrapolation is about 7.5x.
We commit to testing our current scaling predictions on more capable models once they are released, similar to what we have done in the emergent capability setups by adding the newly released Llama-3.1 405B (Figure 1(a) in the supplementary PDF), which will enable us to further check how much our predictions can scale. ### Applicability to different model families & data distribution > *”Projecting the model benchmark performance to low-dimensional surrogates of compute seems to imply the assumption that different model families mainly differ in the speed to obtain abilities and have little difference in the trade-off of different capabilities. Let's say A models are trained with 1% code + 99% text, while B are trained with 99% code + 1% text, do observational scaling laws still hold for these two model families?”* The training data that different models are trained on will mainly determine their speed in converting training compute to different capabilities (as shown in Figure 3), but not the observational scaling results. In our experiments, we have included models trained on mostly text (e.g., Llama), mostly code (StarCoder), multilingual data (BLOOM), and synthetic data (e.g., Phi), and observed consistent scaling predictions for these models. > *”What are the implications of observational scaling laws on evaluating pretraining data quality? Do different families of models show very different compute efficiency according to your results in aligning them?”* As discussed above, the training data distributions will determine the efficiency in converting the compute to different capabilities. For example, the models trained on pure code data may demonstrate better compute scaling for the PC component corresponding to the programming capabilities than models trained mostly on natural language.
In our results, different model families indeed demonstrate different compute efficiencies in converting compute to different PC components, as shown by the varying slopes of different scaling curves in Figure 3. > *”The applicability of observational scaling laws seems to depend on training data distribution. The influence has not been discussed.”* As discussed above, we would like to clarify that the scaling predictions from observational scaling laws could accommodate models trained on different data distributions, instead of “depending on” them, and our results have already accommodated many models trained on very different data distributions. The data distributions determine the compute efficiency in converting FLOPs to capabilities, not the mapping from principal capabilities to more complex downstream performance (observational scaling). ### Formulation misalignment > *”Eqn. 3 and 4 indicate that model performance scales with log compute in sigmoids. But Eqn. 5 shows a linear projection between model performance and surrogates of log compute. This misalignment seems weird.”* We could unify the two formulations, e.g., by adding a sigmoid relation in Eqn. 5, and it will probably also work reasonably well if we properly deal with the logit transformations during the floor (~0) and ceiling (~1) region. We choose the current formulation because it simply works well in practice and does not involve dealing with the potentially ill-behaved numerical issues of the logit transformation. --- Rebuttal Comment 1.1: Comment: Thank you so much for the detailed reply. It is impressive to see the scaling trend holds for LLaMa 3.1 405B, although I think the difference in training data has not been sufficiently resolved. The sigmoid curves of different model families do not overlap. Thank you so much for your inspiring work. Since the original rating is high, I will maintain my score.
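To make the pipeline discussed in this thread concrete, here is a minimal, self-contained sketch of the two-step observational-scaling fit: PCA over a model-by-benchmark score matrix, then a sigmoid of a linear function of the leading component. All data and numbers here are synthetic and purely illustrative; this is not the authors' code or their exact formulation.

```python
import numpy as np
from sklearn.decomposition import PCA
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# Synthetic "model zoo": each model has a latent log-compute; each benchmark
# score is a noisy sigmoid of it, with a benchmark-specific slope.
n_models, n_benchmarks = 40, 8
log_c = rng.uniform(20, 27, n_models)                 # hypothetical log-FLOPs
slopes = rng.uniform(0.5, 1.5, n_benchmarks)
scores = 1 / (1 + np.exp(-np.outer(log_c - 23.5, slopes)))
scores = scores + rng.normal(0, 0.02, scores.shape)

# Step 1: extract a low-dimensional capability measure via PCA.
pc1 = PCA(n_components=1).fit_transform(scores)[:, 0]

# Step 2: fit a sigmoid of a linear function of the capability measure to a
# downstream metric (here simply the mean benchmark score, for illustration).
def sigmoid(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

target = scores.mean(axis=1)
(a, b), _ = curve_fit(sigmoid, pc1, target, p0=[1.0, 0.0], maxfev=10000)
r2 = 1 - np.var(target - sigmoid(pc1, a, b)) / np.var(target)
print("in-sample R^2 of the sigmoid fit:", round(r2, 3))
```

In the paper's actual setup the capability components come from real benchmark leaderboards and the downstream target is a held-out task; the sketch only shows the shape of the computation.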
Summary: They propose using PCA decomposition of the performance of a range of models across a number of benchmarks to form an observational scaling law, which effectively predicts downstream performance across several different model families, including predicting post-training interventions like chain-of-thought. They find that 3 principal components explain much of the variance on the LM benchmarks that they test. They can then use these principal components to 1) derive a Llama-2 effective compute quantity to use as a universal x-axis that works across all model families; and 2) for a new task and model family fit an observational scaling law based on the relevant principal components. They demonstrate that they are able to accurately extrapolate the performance of more capable models using weaker models on a number of standard benchmarks, including emergent tasks from Big-Bench. They also show that they can predict GPT-4's agent capabilities, and the effect of techniques like chain of thought and self-consistency using their method. Finally, they demonstrate an approach for selecting a minimal sufficient set of models for deriving the observational scaling law. Strengths: * Developing a better understanding of the scaling of downstream capabilities in language models is an important problem and their paper represents very promising progress towards this goal. * Their experiments are pretty thorough, encompassing a range of models, tasks, and inference-time techniques. Weaknesses: * They show that their method can extrapolate predictions of stronger model capabilities using weaker models; however, we don't get to see how far in advance they are able to make predictions. It would be nice to see an ablation where the cutoff point used for fitting is pulled back so as to understand when extrapolation begins to fail.
Moreover, they claim that they can predict emergent capabilities; however, their cutoff point is conveniently just a little bit after performance begins to improve beyond chance, enabling their extrapolations to work. In the case where the cutoff point is pulled back further, are such emergent predictions still possible? I feel that demonstrating this would be important for being able to make the claim that prediction of emergent tasks is possible. * They don't give any kind of uncertainty measure or confidence interval for their predictions. Technical Quality: 3 Clarity: 3 Questions for Authors: For a given task, are all models evaluated with the same prompts? It says you got some evals from Open LLM Leaderboard and others from EvalPlus and yet others from OpenLMLeaderboard. Are we sure all of these sources used the same evaluation procedure? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I think they mostly did a good job of acknowledging limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. We appreciate that you acknowledged that our work represents “very promising progress” towards an important problem with thorough experiments. We would like to address your remaining concerns with the following responses. ### Ablation study on the cutoff point > *”It would be nice to see an ablation where the cutoff point used for fitting is pulled back so as to understand when extrapolation begins to fail. Moreover, they claim that they can predict emergent capabilities; however, their cutoff point is conveniently just a little bit after performance begins to improve beyond chance, enabling their extrapolations to work. In the case where the cutoff point is pulled back further, are such emergent predictions still possible?”* We would like to clarify that our cutoff point was carefully selected to unify all the setups as much as possible to avoid the concerns of "cutoff overfitting" to each specific setup. We have also made the unified cutoff point as hard as we could by sweeping over the cutoffs (see Figure E.9), where we found some signals from data points above the random-guess regions are needed for reliable scaling predictions, especially for emergent capability setups – it is not possible to fit a reliable scaling law that extrapolates to the effective scaling region using only random-guess data points. We will explain this more explicitly in the main text. For the ablation study on the cutoff point, we have already included a quantitative analysis in Figure E.9 where we measure the test error with varying cutoffs to control the training/test size. In addition, we have also included extra results of pushing back the cutoff point for each specific emergent capability task, included in Figure 1(a) in the supplementary PDF.
We presented the results on three representative tasks due to space constraints and tested on the newly released **Llama-3.1 405B (FP8)** to assess the generalization to a larger scale. We observe that with a more aggressive cutoff setup, observational scaling laws can still provide reasonable predictions, even to a larger scale (Llama-3.1 405B) that was not available for testing before. We will include these additional results in our future version of the paper. ### Confidence intervals > *”They don't give any kind of uncertainty measure or confidence interval for their predictions.”* We have calculated the 95% confidence intervals for predictions from the non-linear regression models at each data point using parametric bootstrap. The results are included in Figure 4 in the supplementary PDF. We find that the observed data points mostly fall within these confidence intervals, though the intervals may be wide when there are very few effective data points above the random-guess region (e.g., in Persian QA). ### Unification of setups > *”For a given task, are all models evaluated with the same prompts? It says you got some evals from Open LLM Leaderboard and others from EvalPlus and yet others from OpenLMLeaderboard. Are we sure all of these sources used the same evaluation procedure?”* Yes, we have carefully checked the evaluation setups of these results to follow the same evaluation protocols (including prompts, few-shot examples, etc). For example, in the [documentation of Open LLM Leaderboard](https://huggingface.co/docs/leaderboards/open_llm_leaderboard/about#reproducibility), they have detailed how to conduct the evaluation following the same protocol. For the models we have evaluated by ourselves, we have also carefully followed the same procedure. --- Rebuttal Comment 1.1: Comment: Thank you for your response.
I appreciate the work to include comparisons of the cutoff point and also confidence intervals (although I don't seem to see this supplementary pdf on openreview that you are referring to). I'm willing to raise my score 1 point to a 6. --- Reply to Comment 1.1.1: Comment: Thank you very much for your reply. The supplementary pdf is included in the [global response](https://openreview.net/forum?id=On5WIN7xyD&noteId=trfqKNaK2D), and the download link can be found at the bottom of that page. We'd be happy to address any questions after you've reviewed the additional results and appreciate your willingness to reevaluate based on this information.
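As an illustration of the parametric bootstrap mentioned in the confidence-interval exchange above, here is a generic sketch of pointwise 95% intervals for a sigmoid fit. The synthetic data, Gaussian residual model, and 200 replicates are our assumptions for illustration, not the paper's settings.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(1)

def sigmoid(x, a, b):
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

# Hypothetical surrogate-compute axis and observed downstream accuracies.
x = np.linspace(-2, 2, 25)
y = sigmoid(x, 2.0, 0.3) + rng.normal(0, 0.03, x.size)

theta, _ = curve_fit(sigmoid, x, y, p0=[1.0, 0.0], maxfev=10000)
sigma = np.std(y - sigmoid(x, *theta))

# Parametric bootstrap: resample y from the fitted model plus noise, refit,
# and collect predictions on a grid to form pointwise 95% intervals.
grid = np.linspace(-2, 3, 50)          # extends past the data (extrapolation)
boot = []
for _ in range(200):
    y_star = sigmoid(x, *theta) + rng.normal(0, sigma, x.size)
    t_star, _ = curve_fit(sigmoid, x, y_star, p0=theta, maxfev=10000)
    boot.append(sigmoid(grid, *t_star))
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("interval width at the far extrapolation point:", round(hi[-1] - lo[-1], 3))
```

The interval typically widens in the extrapolation region, which matches the rebuttal's observation that intervals are wide when few effective data points are available.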
Summary: This paper demonstrates the correlation, known as scaling laws, between training FLOPs and large language models’ (LLMs) downstream task abilities. The authors decompose performance metrics to fit this “Observational Scaling Law” and confirm its validity across emergent capabilities, agentic capabilities, and post-training interventions. Strengths: 1. Sufficient validation, including 21 model families and general, reasoning and agent-related datasets (Open LLM Leaderboard). 2. Clear formalization. Observational Scaling Law in Eq.3, Eq.7. Weaknesses: I did not find any specific weaknesses in this paper. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Why use PCA to decompose into the main components? The paper may need to explain why using PCA works. Do these main components have any corresponding physical significance? 2. I believe this paper aims to predict the performance of future large models without the need for training, but it only considers FLOPs. In reality, data quality and model size should also be considered. Is there any analysis of these two factors? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. We appreciate that you acknowledged our paper provides sufficient validation and clear formalization. We would like to address your remaining questions and concerns in the following responses. ### Choice of using PCA > *”Why use PCA to decompose into the main components? The paper may need to explain why using PCA works. Do these main components have any corresponding physical significance?”* Our motivating hypothesis is the existence of a low-rank space of capability measures that captures the bulk of model benchmark performance, and PCA is the most standard method for low-rank decomposition. The PC components extract the most significant, orthogonal directions in the capability space that capture the different capability dimensions of LMs (as illustrated in Figure 2b), making it a desirable choice for our purpose. We have also conducted an additional analysis with nonnegative matrix factorization (included in Figure 2 in the supplementary PDF) and a detailed discussion on the tradeoff in [our response to Reviewer V1YP](https://openreview.net/forum?id=On5WIN7xyD&noteId=qBHgXOUGb4). ### Additional factors > *”I believe this paper aims to predict the performance of future large models without the need for training, but it only considers FLOPs. In reality, data quality and model size should also be considered. Is there any analysis of these two factors?”* We would like to clarify that our predictions are based on the low-dimensional capability measures extracted from the benchmarks – which project models from different training recipes (e.g., trained on different data) to a unified space. This enables us to utilize many public models without needing to deal with their training heterogeneity – which is one of the biggest advantages of our observational scaling approaches.
In our results, our scaling law predictions have already accommodated models trained on different data (e.g., specifically trained on multilingual data like BLOOM, or on code like StarCoder) or with different sizes (e.g., Llama-2 7B, 13B, 70B). The model-specific factors like the data quality are incorporated into the “compute/data efficiency” by which each model family converts FLOPs to capabilities, as seen by how different model families behave differently in their scaling properties on the capability dimensions (Figure 3 in the paper). We welcome any suggestions for additional critical analyses that you believe would further validate our findings, and we would be glad to conduct them.
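To illustrate the PCA-versus-NMF tradeoff discussed in this thread, here is a generic comparison on a synthetic nonnegative score matrix. This is not the paper's supplementary analysis; all data and settings below are made up for illustration.

```python
import numpy as np
from sklearn.decomposition import PCA, NMF

rng = np.random.default_rng(2)

# Hypothetical model-by-benchmark accuracy matrix with entries in [0, 1],
# driven by one latent capability plus small noise.
latent = rng.uniform(0, 1, (30, 1))
loadings = rng.uniform(0.5, 1.0, (1, 6))
X = np.clip(latent @ loadings + rng.normal(0, 0.02, (30, 6)), 0, 1)

# PCA components may have mixed signs; NMF factors are constrained to be
# nonnegative, which can make "capability" axes easier to read off,
# at some cost in fit flexibility.
pca = PCA(n_components=1).fit(X)
nmf = NMF(n_components=1, init="nndsvda", max_iter=2000).fit(X)

print("PCA explained variance ratio:", round(pca.explained_variance_ratio_[0], 3))
print("NMF components nonnegative:", bool((nmf.components_ >= 0).all()))
```

On rank-one data like this, both methods recover essentially the same axis; the choice matters more when several capability directions mix.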
Summary: The paper proposes a generalized class of scaling laws that encompass multiple model families of different sizes. These resulting scaling laws are capable of predicting “emergent” behaviors, complex agentic performance, and inference techniques in an extrapolative manner, as seen with GPT-4. The observed laws yield less error compared to standard FLOP-based laws. Strengths: 1. The paper is well-written, with extensive experiments covering major benchmarks. 2. A unified scaling law for various families and benchmarks is important to the whole community. 3. The framework is general and demonstrates strong extrapolation performance. Weaknesses: See questions. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. How does the method apply to Llama 3, where the amount of data is significantly larger than the standard Chinchilla optimal? Can we still achieve accurate predictions for Llama 3 models? 2. For complex tasks, increasing amounts of compute are dedicated to inference time, especially for agentic benchmarks. Can the scaling law capture the scaling trends for inference time compute? 3. Are there any insights on how to choose the PC-dimension to balance preventing overfitting with achieving better estimation? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed. The authors claim they do not foresee any direct societal impact from their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. We appreciate that you acknowledged that our work is “important to the whole community” with extensive experiments and strong extrapolation performance. We would like to address your remaining questions and concerns in the following response. ### Applicability to Llama-3 > “*How does the method apply to Llama 3, where the amount of data is significantly larger than the standard Chinchilla optimal? Can we still achieve accurate predictions for Llama 3 models?*” We would like to clarify that our main results already included models that are significantly overtrained such as Llama-3 and Qwen, as shown in Figures 4 & 6. We have also further tested the newly released Llama-3.1 405B (FP8) with results included in Figure 1 in the supplementary PDF. We observe fairly accurate predictions on these models from observational scaling laws. ### Generalization to inference-time scaling > *”For complex tasks, increasing amounts of compute are dedicated to inference time, especially for agentic benchmarks. Can the scaling law capture the scaling trends for inference time compute?”* That is a good question! Our results on the agentic setups (with > 100 output tokens per step) and on the analysis of CoT (as another type of spending inference for capability) indicate the possibility of predicting model performance on inference-compute-intensive setups. Furthermore, we believe our observational scaling laws can be generalized to account for the inference time compute, e.g., by combining the capabilities measures of pretrained models and inference-time quantities (e.g., inference compute) to study the scaling properties of specific inference-time techniques with respect to certain model capabilities. This is an interesting direction, and we leave it for future work. 
### PC selection > *”Are there any insights on how to choose the PC-dimension to balance preventing overfitting with achieving better estimation?”* We would like to note that we have conducted an extensive ablation study on the selection of the number of PCs in Appendix C.3. In our results, using three PCs generally leads to the best extrapolation performance across different setups. Furthermore, for a more fine-grained PC selection, we may utilize the total explained variances of the included PCs as a practical unsupervised selection criterion to balance representativeness and noise inclusion. We have also done some preliminary experiments, where we found selecting the smallest number of PCs (for preventing overfitting) with a total variance above 97% (for keeping representativeness) could select the best number in most of our setups.
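The unsupervised selection criterion described above (pick the smallest number of PCs whose total explained variance exceeds 97%) can be sketched in a few lines of numpy. This is a generic illustration, not the authors' code; centering each benchmark column and the exact threshold handling are assumptions:

```python
import numpy as np

def select_num_pcs(scores, var_threshold=0.97):
    """Smallest number of principal components whose cumulative explained
    variance exceeds var_threshold. `scores` is a (models x benchmarks)
    matrix; centering each benchmark column is an assumed preprocessing
    step, and 0.97 mirrors the threshold mentioned in the rebuttal."""
    X = scores - scores.mean(axis=0)
    s = np.linalg.svd(X, compute_uv=False)   # singular values
    explained = s**2 / np.sum(s**2)          # variance ratio per PC
    cumulative = np.cumsum(explained)
    # first index whose cumulative ratio clears the threshold
    return int(np.searchsorted(cumulative, var_threshold) + 1)

# a rank-1 score matrix needs exactly one PC
rank1 = np.outer(np.linspace(0.1, 0.9, 6), np.ones(5))
assert select_num_pcs(rank1) == 1
```

In practice one would sweep `var_threshold` against held-out extrapolation error, as the ablation in Appendix C.3 does with the raw number of PCs.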
Rebuttal 1: Rebuttal: We thank all reviewers for their helpful feedback and suggestions. We are glad that the reviewers found our work offers a valuable contribution [V1YP] and very promising progress [oXqB] toward an important problem [oXqB, JzvB] with a comprehensive analysis [V1YP], extensive experiments [JzvB, M7jM], and interesting insights [V1YP]. We have provided extensive empirical results (included in the supplementary PDF) and responses to address reviewers’ remaining questions. Specifically, we have attempted to address the following major ones: - [Reviewer oXqB, DfgG, V1YPm] Pushing back the cutoff points on emergent capability [oXqB, DfgG] and agentic tasks [V1YPm, DfgG]: In individual responses, we have clarified the reason for the use of our current cutoff selection. Furthermore, we have also done additional analysis on pushing back the cutoff selection on each setup for robustness check and tested the newly released **Llama-3.1-405B** for scalability check, included in Figure 1 in the PDF. - [Reviewer V1YP, M7jM] The use of PCA for capability decomposition and its interpretability: We have analyzed an additional low-rank decomposition method (non-negative matrix factorization) in Figure 2 in the PDF and discussed its tradeoff against PCA in individual responses. - [Reviewer V1YPm] Scaling extrapolation within a single model family: We have analyzed the scaling prediction from FLOPs within a single family and included results in Figure 3 in the PDF. - [Reviewer oXqB] Confidence intervals: We have calculated the 95% confidence intervals for predictions, included in Figure 4 in the PDF. Please see detailed responses to specific questions below. Pdf: /pdf/34858831608b386caa359266d6dfa1e85537777f.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper proposes observational scaling laws, a method to use benchmark scores of LLMs to infer how their performance would change if the amount of training compute was scaled, without actually having to train additional models. The authors apply PCA to a model-task benchmark matrix in order to obtain latent capabilities, and find that only a small set of such capabilities can explain almost all of the benchmark variation. They use the latent capabilities to infer how efficient different model families are at converting compute into these capabilities, and to infer how models will perform on a new benchmark, based on their capabilities. The authors then examine how these observational scaling laws can be used to predict model performance on emergent, agentic and prompt-induced abilities and find that the predicted scaling performance closely matches the actual performance of held-out models. Strengths: 1. The proposed observational scaling laws are a valuable contribution towards anticipating LLM abilities as they scale: - They are helpful in extrapolating various LLM abilities (emergent, agentic, prompting-based, and possibly other abilities), which can help in clarifying some of the scaling debates around these abilities, e.g. such as for emergent abilities, and which gives practitioners a tool for anticipating the performance gains of different methods, such as of prompting strategies. - They do so purely based on observational benchmark performance, i.e. they alleviate the need for costly (interventional) training of additional models at different scales. - They seem to obtain reliable estimates by doing a joint inference over different model families and benchmarks. 2. 
The paper conducts a comprehensive analysis spanning benchmarks for 77 models from 21 families which demonstrates that the proposed observational scaling laws, using latent capability estimates, can predict the performance of more capable models from less capable ones, with low error and better than naive methods that directly estimate performance from compute or model size. 3. The paper offers some interesting insights: - The authors show that emergent LLM abilities may be explainable by a lack of "sampling resolution" in terms of number and performance of models, corroborating recent results in that area. - The latent capability model allows for inferring how much the performance on different benchmarks contributes to abilities (e.g. agentic and prompting-induced ones), showing that for instance performance on coding benchmarks strongly contributes to agentic and chain-of-thought abilities. 4. Overall, the paper is well written. Weaknesses: 1. The conceptual model proposed in the paper is that the (principal component) PC-dimensions (latent capabilities) obtained based on model-benchmark performances capture basic, supposedly interpretable skills (e.g. general, reasoning, and programming capabilities) that increase the more training compute is applied to the models. However, as shown in Figure E.2 in the appendix, for some models for PC-2 and PC-3 there is a negative or flat relationship between training compute and model skill, as measured by the corresponding PC. Additionally, the performance on some of the benchmarks correlates negatively with some of the PCs, e.g. -0.62 for HellaSwag and PC-2 (which supposedly captures reasoning abilities). Given these discrepancies, I am not sure that the mental model of interpretable skills is really applicable for the PCs (even though they seem to be useful for the scaling predictions). 2. 
In Section 4.2 and Figure 5 the authors show that observational scaling laws can be used to extrapolate the performance of LLMs in agent settings. However, the test sets used here are quite small (2 for AgentBench, 1 for AgentBoard), which makes these conclusions somewhat unreliable. Perhaps a different training-test split (e.g. 80/20 or 70/30) could be used (instead of 90/10) in order to obtain a larger test set. Would the results still hold? 3. There are some minor clarity issues 1. Section 3.1 defines an error metric ($E_m$) which is somewhat unclear, i.e. - Is $E_m$ capturing errors or performance (~= 1 - errors)? Perplexity is mentioned as an example, but shouldn't perplexity decrease the more compute is applied to the model (Eq. 1)? - Is it always normalized? Line 111 mentions that, but I am not sure about the initial definition. 2. I found it a bit difficult to understand Figures 5c and 6c from the captions. It would be useful to mention in the caption that they are based on $\beta^T \gamma$. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Would there be benefits to using more fine-grained benchmarks to estimate the skill vector $S_m$, for example a detailed breakdown over different BigBench tasks? 2. Would it be possible to use the observational scaling laws to estimate the effect of finetuning? I.e. suppose we knew for a few models how finetuning them on a particular dataset changed their performance on some target measure. Could we apply the observational scaling laws to infer the performance of finetuning more capable models? 3. How many models per family would be needed in order for direct extrapolation from compute/model size for that family to produce similarly accurate results as the observational scaling laws here? 4. 
The conceptual model of latent capabilities that determine benchmark performance is strongly reminiscent of [item response theory](https://en.m.wikipedia.org/wiki/Item_response_theory) (IRT) from psychometrics, which underpins exams such as GRE. In IRT the underlying skills also determine a sigmoid-shaped probability curve, according to which a student will be able to answer a question with a particular difficulty correctly. It would be quite interesting to draw a connection between IRT and the latent capability model used by the observational scaling laws. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper does not explicitly discuss limitations. I believe that the discussion throughout the paper is sufficient, though there may still be some potential limitations that the authors do not touch upon, for instance: 1. How well do the scaling laws hold up if benchmark data has leaked into training? 2. All models within the same family are assumed to share the same compute-efficiency. I think that this is a valid assumption, because models in the same family are by and large trained with the same architecture and dataset, but it's conceivable that there may be more heterogeneous families. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable feedback and suggestions. We appreciate that you acknowledged our paper offers a “valuable contribution” with a comprehensive analysis and interesting insights. We would like to address your remaining questions and concerns in the following response. ### Interpretability of the capability dimensions > *”However, as shown in Figure E.2 in the appendix, for some models for PC-2 and PC-3 there is a negative or flat relationship between training compute and model skill, as measured by the corresponding PC. Additionally, the performance on some of the benchmarks correlates negatively with some of the PCs, e.g. -0.62 for HellaSwag and PC-2 (which supposedly captures reasoning abilities). Given these discrepancies, I am not sure that the mental model of interpretable skills is really applicable for the PCs (even though they they seem to be useful for the scaling predictions).”* This is a really good point! First, we would like to note that the negative scaling of PC-2 & PC-3 for some model families is due to their negative coefficients on certain benchmarks for decorrelation with PC-1. We agree that this may undermine the interpretability of these PCs to some extent. Second, we would like to note that interpretability is *not a must* for the conceptual foundation of observational scaling laws, especially for making the scaling predictions that we mostly care about (as the reviewer has mentioned). What we need is the existence of low-rank capability measures that robustly correlate with compute measures and downstream capabilities and enable the scaling predictions from small to large scales, which have been validated in the paper. Finally, if interpretability is really needed in the extracted capability measures, we can apply [non-negative matrix factorization](https://en.wikipedia.org/wiki/Non-negative_matrix_factorization) which enforces the non-negativity of coefficients. 
We have conducted the analysis and included the results in Figure 2 in the supplementary PDF: - In Figure 2(a), we visualize the component-wise benchmark coefficients. We observe that the coefficients are enforced to be non-negative and provide an interpretable decomposition. For example, we may view Component-1 as reasoning (that emphasizes GSM8K, HumanEval, and MMLU) and Component 4 as language understanding capabilities (that emphasizes (X)Winogrande, HellaSwag), respectively. - In Figure 2(b), we visualize the scaling of NMF components with respect to log FLOPs for each family, using Component 1 as an example. We observe a smooth, positive scaling that generally holds across model families, which also holds for other components (omitted due to space constraints). While NMF offers enhanced interpretability and positive scaling properties compared to PCA, it also has notable limitations. Firstly, unlike PCA, NMF does not enforce orthogonality among its extracted components, as evident in the observed correlation between Components 3 and 4. Consequently, the coefficients assigned to each model across dimensions may not serve as independent measures of specific capabilities. Secondly, the ordering of NMF components lacks uniqueness and intrinsic physical meaning. This contrasts with PCA components, which are systematically ordered by their explained variances. The PCA approach provides an 'importance' measure for each dimension and allows for controlled trade-offs between representativeness and noise inclusion by adjusting the number of PCs used in the analysis. We will adjust the claims and include the discussion in our future version of the paper. ### Number of test points in agentic setups > *”However, the test sets used here are quite small (2 for AgentBench, 1 for AgentBoard), which makes these conclusions somewhat unreliable. Perhaps a different training-test split (e.g. 80/20 or 70/30) could be used (instead of 90/10) in order to obtain a larger test set. 
Would the results still hold?”* We would like to clarify that the small number of test points is due to the small number of models evaluated by AgentBench and AgentBoard, and that a sufficient number of data points needs to be used in the training set for fitting reliable scaling laws. We look forward to testing our current scaling predictions on more capable models once they are released, similar to what we have done in other setups (Figure 1(a) in the supplementary PDF). We will also adjust the claims about these results in the paper to acknowledge the limited number of test points. We have also tested the 80/20 train/test split on AgentBench where there are more available data points. The results are included in Figure 1(b) in the supplementary PDF. We observe that although the extrapolation tends to underestimate the performance to some extent, it still aligns with the overall observed trend. ### Scaling extrapolation within a single model family > *“How many models per family would be needed in order for direct extrapolation from compute/model size for that family to produce similarly accurate results as the observational scaling laws here?”* This is an interesting question! It is hard to do a comprehensive apples-to-apples comparison since most accessible model families contain only a few data points (e.g., Llama-2) or lack public compute measures (e.g., GPT). We did preliminary experiments with the Qwen1.5 family (with 7 models) and OPT (with 8 models), and tested them on tasks where the scaling predictions are non-trivial. The results are included in Figure 3 in the supplementary PDF. We find that typically at least 5 models are required for accurate extrapolation but the performance is highly dependent on the specific setup. For example, using five Qwen1.5 models achieves decent extrapolation on word unscramble but poor extrapolation on Persian QA. 
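The non-negative decomposition discussed in the interpretability exchange above can be illustrated with a minimal multiplicative-update NMF in numpy. This is a generic Lee-Seung sketch, not the code behind Figure 2 of the rebuttal PDF:

```python
import numpy as np

def nmf(X, k, iters=500, seed=0):
    """Plain multiplicative-update NMF: X ≈ W @ H with W, H >= 0, so
    every benchmark coefficient stays non-negative. A generic sketch,
    not the authors' fitting procedure."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    W = rng.random((n, k)) + 0.1   # positive init avoids dead entries
    H = rng.random((k, m)) + 0.1
    eps = 1e-9                     # guards against division by zero
    for _ in range(iters):
        H *= (W.T @ X) / (W.T @ W @ H + eps)
        W *= (X @ H.T) / (W @ H @ H.T + eps)
    return W, H

# toy model-by-benchmark score matrix (non-negative accuracies)
rng = np.random.default_rng(1)
X = rng.random((8, 5))
W, H = nmf(X, k=2)
assert (W >= 0).all() and (H >= 0).all()
```

Unlike PCA, the resulting components are neither orthogonal nor uniquely ordered, which matches the trade-off discussed in the rebuttal.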
--- Rebuttal 2: Title: Additional Response Comment: ### Clarity > *”Is $E_m$ capturing errors or performance (~= 1 - errors)? Perplexity is mentioned as an example, but shouldn't perplexity decrease the more compute is applied to the model (Eq. 1)?”* In our formulation, $E_m$ refers to the model error (~= 1 - accuracy). For the case of perplexity, it does decrease with more compute, which corresponds to the case where $\beta_f < 0$ (we did not constrain $\beta_f$ to be nonnegative). Due to the potential confusion it may cause, we will adjust the formulation with a negative sign included for $\beta_f$. > *”Is it always normalized? Line 111 mentions that, but I am not sure about the initial definition.”* It does not have to be normalized in the general case (e.g., perplexity). For our specific case in Eq 2 & 6 with the use of a sigmoid nonlinearity for downstream predictions, normalization is required. > *”I found it a bit difficult to understand Figures 5c and 6c from the captions. It would be useful to mention in the caption that they are based on $\beta^{T}\gamma$”* Thanks for your suggestions. We will update the captions with the suggested clarifications in our future version of the paper. ### Other questions & suggestions > *”Would there be benefits to using more fine-grained benchmarks to estimate the skill vector $S_m$, for example a detailed breakdown over different BigBench tasks?”* Fine-grained benchmarks could help in cases where the available benchmarks may not be able to form a capability space that sufficiently captures the dynamics of downstream capabilities. For example, if the downstream task is related to "a scientific research agent", then the more fine-grained performance on the "Professional" split of MMLU could potentially offer additional predictive gains. In our case, the included benchmarks seem to well capture the downstream tasks that we have tested. 
> *”Would it be possible to use the observational scaling laws to estimate the effect of finetuning?... Could we apply the observational scaling laws to infer the performance of finetuning more capable models?”* This is a good idea! We believe it is possible to establish scaling laws for fine-tuning, by combining the capabilities measures of pretrained models and finetuning-related quantities (e.g., fine-tuning data size) to study the scaling properties of specific finetuning techniques with respect to certain model capabilities. This is an interesting direction, and we leave it for future work. > *”The conceptual model of latent capabilities that determine benchmark performance is strongly reminiscent of item response theory (IRT) from psychometrics,”* Thanks for the suggestions! We will include a discussion of the connection to IRT in our future version of the paper. ### Limitation discussion > *”I believe that the discussion throughout the paper is sufficient, though there may still be some potential limitations that the authors do not touch upon, for instance, …”* Thanks for the suggested limitation discussion! We acknowledge that they are indeed part of the limitations of our work and will more extensively discuss them in a separate section in the future version of the paper. --- Rebuttal 3: Comment: I thank the authors for their thorough response to my questions! The additional results on NMF capabilities are interesting and this method might be helpful in settings where interpretability of the capability dimensions is needed. However, I think that it is easy to read too much meaning into the capabilities, and it is important to make sure that any interpretations of the capabilities are consistent with their actual task-relationship. The authors and I seem to be in agreement that ultimately the most important function of the capabilities is to provide accurate estimates of benchmark performances, which they seem to be achieving. 
It is great to see the additional results with the 80/20 split on AgentBench, as well as the addition of the Llama-3.1-405B model. I also think the comparison with non-observational scaling laws on the Qwen and OPT models shows that the observational approach seems to be able to more accurately forecast the performance of scaled models, while requiring fewer models per family. I maintain my positive score and support the acceptance of the paper.
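The clarified error formulation from the exchange above (downstream error $E_m \approx 1 -$ accuracy, passed through a sigmoid of a linear function of the capability measure, with an unconstrained slope sign) can be illustrated with a toy snippet; the coefficient values are invented for illustration, not fitted:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predicted_error(capability, beta, gamma):
    """Downstream error E_m modeled as a sigmoid of a linear function of
    the capability measure. beta is unconstrained in sign: beta < 0 gives
    an error that shrinks as capability grows. The values below are
    illustrative, not fitted."""
    return sigmoid(beta * capability + gamma)

caps = np.linspace(0.0, 10.0, 6)              # toy capability scores
err = predicted_error(caps, beta=-0.8, gamma=3.0)
assert np.all(np.diff(err) < 0)               # error falls with capability
assert np.all((err > 0.0) & (err < 1.0))      # normalized to (0, 1)
```

The sigmoid is what forces the normalization discussed above: its output always lies in (0, 1), which a raw metric like perplexity would not satisfy.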
RLE: A Unified Perspective of Data Augmentation for Cross-Spectral Re-Identification
Accept (poster)
Summary: This paper provides a unified perspective to consider the data augmentation strategies in cross-spectral re-identification. Based on the Lambertian model, the authors find that the cross-spectral discrepancy is induced by multiple local linear transformations. Furthermore, the authors propose a random linear enhancement (RLE) to imitate such a transformation moderately and radically. Visualization and experimental results on related datasets show the proposed RLE's effectiveness in cross-spectral circumstances. Strengths: 1. The motivation of this paper seems to be valid and intriguing, and the proposed unified perspective makes sense in this topic. 2. The paper is easy to follow and well-written. 3. The proposed method shows superior performance on both RegDB and SYSU-MM01 datasets. Weaknesses: 1. As a data augmentation strategy, it will be better if the authors can provide several visualization examples. 2. In Table 4, I wonder about the detailed augmentation strategy contained in the mentioned 'ours'. DEEN already contains grayscale transformation and random erasing. So, how do you insert the RLE into such a structure? Technical Quality: 3 Clarity: 3 Questions for Authors: Please check the weakness part for more details. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have discussed the limitations of the proposed method in Section 5. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive and positive feedback which inspired us a lot. Below, we respond to your key concerns point by point. >**Q1: As a data augmentation strategy, it will be better if the author can provide several visualization examples.** **R1:** Thanks for your suggestion. We have provided some visualization results in Figure 1 of the attached PDF version rebuttal following your suggestion and added this part in the final-version paper. Therefore, please kindly refer to the attached PDF. Through the visualization results, we can clearly observe the differences between MRLE and RRLE. Specifically, MRLE aims to provide diverse linear augmentation results that strictly adhere to the initial reflection correlation prior, while RRLE aims to provide diverse linear augmentation results under limited risk-taking. >**Q2: In Table 4, I wonder about the detailed augmentation strategy that contains in the mentioned 'ours'. In DEEN, it already contains grayscale transformation and random erasing. So, how do you insert the RLE in such a structure?** **R2:** We apologize for the confusion. Since DEEN [1] already contains random grayscale and random erasing for data augmentation, we removed the random grayscale and added the RLE, just as we did for the baseline method in Table 1. Following your suggestion, we have strengthened this part to make it clearer. [1] Diverse embedding expansion network and low-light cross-modality benchmark for visible-infrared person re-identification. CVPR 2023. **Finally**, we hope our response has adequately addressed your concerns. If you have any further suggestions, please feel free to discuss them with us. We will provide the corresponding results as soon as possible. --- Rebuttal Comment 1.1: Comment: Thanks for your response. My concerns are addressed well, and thus I raise my score. I also looked through other reviewers' comments, and I have a different opinion from SzQ4. 
Though data enhancement has been explored in the ReID field for many years, there is still a lot of work in this area, especially for VI-ReID. Which enhancement or augmentation is more effective is also an important yet rarely explored topic. This paper not only introduces a new data augmentation method in cross-spectral ReID, but also provides a theoretical analysis of the different enhancement methods in this field. I think this type of work is worth encouraging.
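To make the channel-mixing idea from the rebuttal above concrete (MRLE as a random convex combination of the R, G, and B channels whose weights sum to 1, generalizing random grayscale), here is a rough numpy sketch. The Dirichlet weight sampling and broadcasting the mixed map back to three channels are assumptions, not the paper's exact MRLE implementation:

```python
import numpy as np

def mrle(image, rng=None):
    """Sketch of moderate random linear enhancement: replace all three
    channels with one random convex combination of R, G, B (weights are
    non-negative and sum to 1), generalizing random grayscale. Dirichlet
    sampling is an assumed weight distribution, not the paper's choice."""
    if rng is None:
        rng = np.random.default_rng()
    lam = rng.dirichlet(np.ones(3))  # lambda_r + lambda_g + lambda_b = 1
    mixed = np.tensordot(image.astype(np.float32), lam, axes=([2], [0]))
    return np.repeat(mixed[..., None], 3, axis=2)

img = np.random.default_rng(0).integers(0, 256, size=(4, 4, 3))
aug = mrle(img, np.random.default_rng(1))
assert aug.shape == (4, 4, 3)
```

Because the weights form a convex combination, every output pixel stays within the range of the input channels, which matches the "moderate" character of this branch.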
Summary: This paper explores data augmentation strategies for cross-spectral re-identification. The authors find that non-linear modal differences arise mainly from different linear transformations occurring on various material surfaces; all data enhancement strategies for cross-spectral re-identification aim to simulate such transformations. Based on these observations, they introduce a Random Linear Enhancement (RLE) augmentation method and further extend the boundaries of the moderate and radical transformations by Moderate Random Linear Enhancement (MRLE) and Radical Random Linear Enhancement (RRLE). Strengths: In this paper, by introducing the Lambertian model, the analysis finds that non-linear modal differences arise mainly from different linear transformations occurring on various material surfaces, which is an intuitive conclusion. The RLE method proposed by the authors, together with the existing random erasing, achieved good experimental results. Weaknesses: Innovation is limited, and although the authors' introduction of the physical model is striking, the conclusions obtained cannot actually guide the authors well in designing their methodology. The experimental results in this paper do not reach SOTA, and the article lacks a comparison with the SOTA methods [1] [2]. The ablation experiments in the article are not comprehensive, lacking experiments on $t_{min}$ values smaller than 0.1 as well as experiments on single terms in mixed transformations. The paper lacks an analysis of the performance improvement due to the joint use of random erasing. Technical Quality: 2 Clarity: 2 Questions for Authors: See the weakness. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: There is no analysis of limitation in the paper. 
As mentioned in weakness, the correlation between the observations based on the physical model in this paper and the subsequent methods in the article is not strong, and the results of the article did not reach a SOTA level, and the generalisability of the data augmentation methods proposed in the paper deserves further validation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback which helps us a lot. Below, we respond to your key concerns point by point. >**Q1: Innovation is limited, and although the authors' introduction of the physical model is striking, the conclusions obtained can not actually guide the authors well in designing their methodology.** **R1:** Thank you for your comments and your high recognition of the physical model we introduced. The core contribution of our work is that it provides a new perspective of data augmentation for cross-spectral re-identification based on the reflection model. Compared to empirical data augmentation methods that pursue visual similarity, **_this is the first work that has modeled the cross-spectral transformation to guide the design of data augmentation._** It helps us to understand where the problem lies in this task and guides us to find what is needed for this task. In fact, the proposed RLE was totally designed under the guidance of the reflection prior, which has strong directive significance. As we mentioned in Section 3, the cross-spectral modality discrepancy mainly comes from the diverse local linear transformation on different material surfaces. Therefore, to overcome the modality discrepancy caused by the cross-spectral transformation, RLE aims to mimic such diverse linear transformations, thus encouraging the network to be robust to these transformations. Specifically, MRLE aims to provide diverse linear augmentation results that strictly adhere to the initial reflection correlation prior, while RRLE aims to provide diverse linear augmentation results under limited risk-taking. The results in Table 1 and Table 4 of this paper show that the proposed RLE brings significant performance improvements, proving that the theoretical guidance of this paper withstands scrutiny. 
This has also been highly recognized by Reviewer muwA, who noted, "The paper is technically sound, with an elegant motivation and formulation of the proposed method." >**Q2: The experimental results in this paper do not reach SOTA, and the article lacks a comparison with the SOTA method [1] [2].** **R2:** Thank you for your comments. However, it is regrettable that you did not provide the references for the methods you mentioned. In fact, as a data augmentation strategy, our proposed RLE is a plug-and-play component that can be applied to any state-of-the-art method to achieve improvements. In Table 2 and Line 280\~285, we demonstrate that RLE can boost a vanilla baseline to an impressive performance. Meanwhile, in Line 294\~304 and Table 4, we embedded RLE into the latest open-sourced method DEEN [1] and achieved significant performance enhancements, reaching new SOTA results. So, if you are interested and notice any new SOTA that we have missed, please provide a reference for the related works (_preferably with open-source code due to time constraints_). We are willing to provide the performance by combining the SOTAs you mentioned with RLE. [1] Zhang, Yukang, and Hanzi Wang. "Diverse embedding expansion network and low-light cross-modality benchmark for visible-infrared person re-identification." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. >**Q3: The ablation experiments in the article are not comprehensive, lacking experiments on $t_{min}$ values smaller than 0.1 as well as experiments on single terms in mixed transformations.** **R3:** Following your suggestion, we have added the experiments under more $t_{min}$ settings in Table 3(c). Meanwhile, we also added experiments on each single term in the ablation. Herein, we show the results under the SYSU-MM01 all-search mode below (more detailed experimental results are provided in Table 3 and Table 4 of the PDF version rebuttal). 
$t_{min}$:

| $t_{min}$ | R-1 | mAP | mINP |
|:------|:-------:|------:|-----:|
| 0.3 | 72.9 | 70.5 | 58.2 |
| 0.2 | 73.8 | 71.3 | 59.0 |
| **0.1** | **74.2** | **71.8** | **60.4** |
| 0.01 | 73.8 | 71.7 | 59.8 |
| 0.001 | 73.6 | 71.3 | 59.5 |

**Single terms**:

| Method | R-1 | mAP | mINP |
|:------|:-------:|------:|-----:|
| R only | 65.1 | 62.8 | 49.7 |
| G only | 65.2 | 63.0 | 50.1 |
| B only | 63.4 | 61.5 | 48.4 |
| **MRLE** | **70.2** | **67.0** | **53.5** |

>**Q4: The paper lacks an analysis of the performance improvement due to the joint use of random erasing.** **R4:** Thank you for your comment. As explained in Lines 271-274, we have clarified why RLE can be used in conjunction with random erasing. However, we are willing to strengthen this part based on your suggestion. Specifically, although 'RE' can be considered a special case of RRLE with a linear factor of 0, RRLE encourages images to undergo more transformations while preventing the loss of information. Therefore, RRLE and RE take different perspectives and can be effectively used together. >**Q5: There is no analysis of limitation in the paper.** **R5:** Thanks for your suggestion. In fact, we have discussed the limitations of RLE in Sec. 6 'Limitations and Broader Impact', but we are willing to strengthen the related discussion following your suggestion. In general, the core limitation of RLE is that it is founded on a Lambertian model under cross-spectral conditions, which may limit its adaptability to other scenarios. Under extremely bad weather such as heavy rain, fog, or limited illumination, it may show limited improvement since the Lambertian model may not hold. Under these conditions, it may be necessary to combine RLE with advanced image deraining/dehazing or illumination enhancement strategies. However, these limitations are not the main challenge we discussed in this paper but may push future works to explore those complex circumstances. 
**Finally,** we hope our response has adequately addressed your concerns. If you have any further suggestions, please feel free to discuss them with us. We will provide the corresponding results as soon as possible. --- Rebuttal Comment 1.1: Comment: Thank you to the authors for responding to my concerns, and sorry for not stating the specific SOTA methods in my comments; I acknowledge that asking for such comparisons without references was not appropriate. The authors' replies addressed most of my concerns, so I will raise my score to Borderline Accept.
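Following the R4 explanation above (random erasing as the special case of RRLE with a linear factor of 0), a hedged numpy sketch of repeated local linear scaling might look like this; the region sampling, number of transformations, and factor range are all assumptions rather than the paper's implementation:

```python
import numpy as np

def rrle(image, n_transforms=3, rng=None):
    """Sketch of radical random linear enhancement: scale a few random
    local regions by random linear factors; a factor of 0 reduces to
    random erasing, matching the special-case remark above. Region
    sampling and the factor range [0, 1] are assumptions."""
    if rng is None:
        rng = np.random.default_rng()
    out = image.astype(np.float32).copy()
    h, w = out.shape[:2]
    for _ in range(n_transforms):
        y0, x0 = rng.integers(0, h), rng.integers(0, w)
        y1, x1 = rng.integers(y0 + 1, h + 1), rng.integers(x0 + 1, w + 1)
        factor = rng.uniform(0.0, 1.0)  # 0 would erase the region
        out[y0:y1, x0:x1] *= factor
    return out

img = np.full((8, 8, 3), 100.0, dtype=np.float32)
aug = rrle(img, rng=np.random.default_rng(0))
assert aug.shape == img.shape
```

Unlike erasing a region outright, a nonzero factor darkens it while preserving relative structure, which is one way to read "more transformations while preventing the loss of information".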
Summary: This paper presents a unified perspective that reconsiders data augmentation strategies in cross-spectral re-identification. The authors identify that the main source of cross-spectral modality discrepancies stems from various local linear transformations due to material diversity. To address this, the authors propose a novel Random Linear Enhancement (RLE) strategy, effectively leveraging this unified perspective. Experimental results demonstrate that the proposed method significantly improves visible-infrared re-identification performance. Strengths: 1. The novel approach of considering cross-spectral data augmentation from the perspective of the reflection model is well-motivated and innovative. 2. The paper is technically sound, with an elegant motivation and formulation of the proposed method. 3. Extensive testing on various benchmark datasets, including SYSU-MM01 and RegDB, showcases the method’s effectiveness and robustness compared to state-of-the-art methods. Weaknesses: (1) Some details are simplified in the paper. For instance, visualization results of the proposed RLE are not shown, and the baseline structure could be included in the appendix. (2) In MRLE, the sum of lambda_r, lambda_g, and lambda_b is constrained to 1, which may seem unnecessary from the basic formulation perspective. Alternative values like [0.2, 0.2, 0.5] might also satisfy the proposed perspective. (3) This paper could do a better job of citing, comparing with, and building connections to more recently published literature in the ReID community, such as “Robust Object Re-identification with Coupled Noisy Labels”. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses :) Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors point out the potential limitations of the proposed method. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your positive and constructive feedback, which inspired us a lot. Below, we respond to your key concerns point by point.

>**Q1: Some details are simplified in the paper. For instance, visualization results of the proposed RLE are not shown, and the baseline structure could be included in the appendix.**

**R1:** Thanks for your suggestions. We have added some visualization results of RLE in the paper. Meanwhile, we also provide several visualization results in Figure 1 of the pdf-version rebuttal. Please kindly refer to it. Through the visualization results, we can clearly observe the differences between MRLE and RRLE. Specifically, MRLE aims to provide diverse linear augmentation results that strictly adhere to the initial reflection correlation prior, while RRLE aims to provide diverse linear augmentation results under limited risk-taking.

>**Q2: In MRLE, the sum of $\lambda_r$, $\lambda_g$, and $\lambda_b$ is constrained to 1, which may seem unnecessary from the basic formulation perspective. Alternative values like [0.2, 0.2, 0.5] might also satisfy the proposed perspective.**

**R2:** As shown in Fig. 3, the linear transformation brings limited modality discrepancy. So, the transformation under [0.2, 0.2, 0.5] can also be considered as a linear transformation after the transformation under [0.22, 0.22, 0.56], which will bring limited difference. Therefore, we may only need to consider the mix percentage of different channels. We also provide the related experimental results on the SYSU all-search mode below to show the influence of the sum constraint (more detailed experimental results are provided in Table 2 of the PDF-version rebuttal).

| Sum Constraint | R-1 | mAP | mINP |
|:------:|:-------:|:------:|:-----:|
|  | 66.3 | 63.8 | 50.3 |
| ✓ | **70.2** | **67.0** | **63.5** |

Since removing the sum constraint may lead to some bad cases (e.g.
all parameters are very small), the performance is worse than the strategy with the sum constraint. >**Q3: This paper could do a better work by citing, comparing, and building some connections with more recently-published literatures in the ReID community such as “Robust Object Re-identification with Coupled Noisy Labels”.** **R3:** Thanks for your suggestion. We have included several recent related works in Sec. 2 including this paper to make our work more comprehensive. If you have any other recommended works that are related to this paper, you can provide them in the discussion. We are willing to add them in. **Finally,** we hope our response has adequately addressed your concerns. If you have any further suggestions, please feel free to discuss them with us. We will provide the corresponding results as soon as possible. --- Rebuttal Comment 1.1: Comment: Thank you for your response. My concerns have been addressed. I believe this work could offer valuable insights to the community, particularly in highlighting the potential of focusing on the physical characteristics of the device as a promising research direction. Therefore, I would like to raise my score.
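The channel-mixing constraint discussed in R2 above is easy to demonstrate in a short sketch. This is not the paper's exact Eq. (6): sampling three Beta draws and renormalizing them to sum to 1 is an illustrative assumption, and `alpha` is a hypothetical concentration parameter.

```python
import numpy as np

def mrle(img, alpha=1.0, rng=None):
    """Illustrative MRLE-style augmentation: mix the RGB channels with random
    weights (lambda_r, lambda_g, lambda_b) constrained to sum to 1, giving a
    random linear 'grayscale-like' transformation. The classic torchvision
    grayscale corresponds to the fixed weights [0.299, 0.587, 0.114]."""
    rng = np.random.default_rng() if rng is None else rng
    lam = rng.beta(alpha, alpha, size=3)
    lam = lam / lam.sum()          # enforce the sum-to-1 constraint
    mixed = img @ lam              # (H, W): weighted channel combination
    return np.repeat(mixed[..., None], 3, axis=2)
```

Because the weights sum to 1, a constant image is left unchanged, which matches the intuition in R2 that only the mix percentages of the channels matter.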
Summary: This paper presents a novel approach to addressing the challenges in cross-spectral Re-ID, particularly the modality discrepancy between visible (VIS) and near-infrared (NIR) images. The authors propose a unified perspective based on the Lambertian reflection model to understand and categorize data augmentation strategies. This model helps in analyzing how local linear transformations can bridge the gap between different spectral images. Strengths: **Originality:** The paper introduces a novel perspective on cross-spectral Re-ID by leveraging the Lambertian reflection model. This approach provides a unique and insightful framework for understanding the modality discrepancy between visible (VIS) and near-infrared (NIR) images. **Clarity:** The authors provide a thorough background on cross-spectral Re-ID and the Lambertian reflection model, ensuring that readers can grasp the context and significance of their contributions. **Significance:** By offering a unified perspective on data augmentation and introducing the RLE strategy, the authors address a critical challenge in surveillance and other applications involving spectral images. Weaknesses: (1) The paper lacks a detailed analysis of the sensitivity of the RLE strategy to its parameters involved in RLE, MRLE, and RRLE. Understanding how different parameter settings impact performance is crucial for practical deployment. (2) While the paper demonstrates performance improvements, there is limited discussion of the real-world applicability and potential deployment challenges of RLE strategies. (3) The paper focuses on the proposed RLE strategy but does not provide a detailed comparative analysis with other advanced data augmentation techniques beyond traditional methods. (4) The reliance on the Lambertian reflection model may limit the generalizability of the proposed methods to scenarios where this model does not hold. 
Technical Quality: 2 Clarity: 3 Questions for Authors: (1) Why is [$\lambda_r$, $\lambda_g$, $\lambda_b$] set to [0.299, 0.587, 0.114]? The authors should perform as many ablation experiments as possible. (2) How sensitive is the performance of the RLE strategy to different parameter settings? Have the authors explored the impact of varying these parameters systematically? (3) How does the RLE strategy compare with other advanced data augmentation techniques in the context of cross-spectral re-identification? (4) How applicable is the RLE strategy in scenarios where the Lambertian reflection model does not hold? Have the authors tested the method in such conditions? (5) Can the authors provide a more detailed explanation of how the RLE strategy works? (6) Figure 2 is very similar to PCB; can the authors do a comparative analysis accordingly? Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The length of the related work section is obviously insufficient. Meanwhile, data enhancement has been explored in the Re-ID field for many years, so the novelty is not very impressive. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your constructive feedback. Below, we respond to your key concerns point by point.

>**Q1: Why is [$\lambda_r$, $\lambda_g$, $\lambda_b$] set to [0.299, 0.587, 0.114]?**

**R1:** In fact, you may have misunderstood MRLE. As mentioned in Lines 180-183, the values [0.299, 0.587, 0.114] are not adopted for MRLE but come from the 'transforms.RandomGrayscale' function in the torchvision library. This is a classic grayscale method that previous works widely used. However, such a transformation yields limited augmentation diversity. Therefore, as described in Lines 184-194 and Eq. (6), we proposed MRLE, where $\lambda_r$, $\lambda_g$, and $\lambda_b$ are randomly sampled from a Beta distribution in every transformation. It provides more diverse augmentation during training, enhancing the model's capability. The related experiments in Tbl. 1 demonstrate the effectiveness of the proposed MRLE. We hope this response clarifies our approach and the rationale behind it.

>**Q2: How sensitive is the performance of the RLE strategy to different parameter settings? Have the authors explored the impact of varying these parameters systematically?**

**R2:** Yes, we have explored different parameters and shown the results in Table 3 of the paper. The results indicate that RLE is sensitive to parameter selection. For instance, $\beta_m$ can lead to a 2.4% increase in R-1. It is important to note that $\beta_m$ and $\beta_r$ control the shape of the sampling function, while $t_{min}$ controls the stopping time. We selected the most suitable parameters based on ablation experiments.

>**Q3: How does the RLE strategy compare with other advanced data augmentation techniques?**

**R3:** Following your suggestion, we have included a comparison of our proposed method with the recently released CAJ [1] on the SYSU-MM01 dataset. The results, shown below, prove the effectiveness of our approach.
We will add these experiments to the paper to make our conclusions more robust (more detailed experimental results are provided in Table 1 of the PDF rebuttal).

| Method | R-1 | mAP | mINP |
|:------|:-------:|:------:|:-----:|
| CAJ | 73.5 | 69.4 | 55.4 |
| **Ours** | **75.4** | **72.4** | **60.9** |

>**Q4: How applicable is the RLE in scenarios where the reflection model does not hold?**

**R4:** Thank you for the interesting question. Generally, the Lambertian model is suitable for most real-world applications and major datasets, which makes RLE work well on both datasets. Still, the issue you raise is a good one: the Lambertian model may not work well when facing strong specular reflection or complex weather environments. Under these conditions, combining RLE with advanced image deraining/dehazing or light enhancement methods may be necessary. Your suggestion indeed points us toward future research directions, and we will include this discussion in the limitation section. Thank you again for your constructive feedback.

>**Q5: Can the authors provide a more detailed explanation of how the RLE works?**

**R5:** Thank you for your question. As mentioned in Sec. 3, the cross-spectral modality discrepancy mainly comes from the local linear transformation on different surfaces. Therefore, to overcome the modality discrepancy caused by this phenomenon, RLE aims to mimic such a linear transformation to provide a more diverse training set, thus encouraging the network to be robust to such a transformation. Meanwhile, the detailed procedure of RRLE is also provided in Appendix A.1.

>**Q6: Fig. 3 is very similar to PCB, can the authors do a comparative analysis accordingly?**

**R6:** The similarity you noted might be due to both PCB and Fig. 3 employing a horizontal segmentation strategy. However, Fig. 3 is not related to PCB. Fig.
3 presents a validation visualization experiment aimed at showing that the modality gap results from diverse local linear transformations on the image. As mentioned in Lines 160~165, we uniformly segmented 100 randomly selected images into six parts from top to bottom and multiplied each part by a linear factor. These images were then fed into ResNet-50 to observe the differences in feature space before and after the transformation. This transformation is directly applied to the image. On the other hand, PCB is a model structure that segments the features extracted from ResNet to obtain fine-grained details. >**Q7: The length of the related work is obviously insufficient.** **R7:** Thank you for pointing out this issue. Following your suggestion, we have included additional classic and recent related works in Sec. 2 to make it more comprehensive. If you have any recommended works, please provide them in the discussion. We are willing to include them. >**Q8: The data enhancement has been explored for many years, but the novelty is not very impressive.** **R8:** Our core contribution lies in providing a unified perspective to explain the discrepancy caused by the cross-spectral transformation, rather than purely focusing on data augmentation. **_This is the first work in this area to provide new insight that will guide us to understand which kind of data augmentation is needed and how to design strategies to overcome the cross-spectral modality discrepancy_**. It even explains why complex GAN-based models [2] may not be as efficient as basic data augmentations in this context [1]. We believe RLE is just one attempt at using this perspective to design a data augmentation strategy. It may inspire more image generation or data augmentation strategies and even network designs. [1] Channel augmentation for visible-infrared re-identification. TPAMI 2023. [2] RGB-infrared cross-modality person re-identification via joint pixel and feature alignment. ICCV 2019. 
**Finally**, we hope our response has adequately addressed your concerns. If you have any further suggestions, please feel free to discuss them with us. We will provide the corresponding results as soon as possible. --- Rebuttal 2: Title: Reply to Authors Comment: Thank you for your detailed responses to the raised concerns. I have reviewed your answers and other reviewers' comments, and appreciate the effort you have put into addressing each point. Below are my thoughts on your responses: Firstly, according to the authors' statement, the core of the method in this paper is the addition of a Beta distribution on a classic grayscale method. Meanwhile, the paper lacks sufficient theoretical demonstration, proof of derivation, and adequate experimental evidence. Therefore, I think the novelty of this paper is limited and not up to the level of NeurIPS, because NeurIPS as a top conference requires high theoretical innovation and sufficient experimental validation. I agree with Reviewer DZyN's statement "The Importance of Data Enhancement Research for VI-ReID", and as stated above, whilst the paper's methodology has significant outcomes, the depth of the research is not deep enough and the evidence for it is not sufficient. Secondly, I have looked carefully at the authors' and other reviewers' comments and I still have the following questions: (1) $\lambda_r$, $\lambda_g$, and $\lambda_b$ are randomly sampled from a Beta distribution in every transformation, so the results are random in nature; did the authors conduct multiple experiments and then average them? (2) This paper is built around data-augmented randomness; could the authors explain specifically what the benefits of this randomness actually are? (3) I see that Table 3 explores the sensitivities of $\beta_m$, $\beta_r$, and $t_{min}$, and I'm curious about the sensitivities of other hyper-parameters such as $\alpha$, $r_{min}$, $r_{max}$, $s_{min}$, $s_{max}$, etc.
(4) Although the proposed method is about data enhancement, the authors' comparison and analysis of the experimental results should highlight the advantages of data enhancement. There are many methods related to data enhancement, and it is obviously not enough to compare only the CAJ method; why don't the authors compare other classical and commonly used data enhancement methods? (5) This paper focuses on the Lambertian model, but this model is unable to deal with the effects of highlights and ambient light due to specular reflections on surfaces. For this problem, I think the Phong model is more appropriate, so I don't think this work has been studied enough. (6) The distinction between PCB and your Fig. 3 is now clear. It is important to highlight this difference in the paper to avoid any confusion for readers. A brief comparison or note in the figure legend might be beneficial. And, expanding the related work section is a positive step. Please ensure that the additional references are thoroughly integrated and that the discussion reflects the broader context of the field. Therefore, I think this paper leaves a lot to be desired, and I'll reduce my score to Reject. --- Rebuttal 3: Comment: **Thanks for your response, but it is too late. There are only 20 minutes left for us to give a short response.** (1) For the randomness, there is no doubt that diverse augmentation is better than a stable one. This has been widely verified by a large number of data augmentation strategies such as Mixup, Random Erasing, and so on. (2) The other hyper-parameters you mentioned follow the basic setting of Random Erasing. (3) It would be better if you could name the specific augmentation strategy; CAJ is the most recently published data augmentation work in this area. --- Rebuttal Comment 3.1: Title: Reply to Authors Comment: After reading the responses, the authors don't fully address all of my questions; I will keep my score.
Rebuttal 1: Rebuttal: Dear Reviewers, We would like to thank all reviewers for providing constructive feedback that helped us improve the paper. We are encouraged that reviewers think our paper: * "provides a unique and insightful framework for understanding the modality discrepancy between visible (VIS) and near-infrared (NIR) images" (Reviewer SzQ4). * "an elegant motivation and formulation of the proposed method" (Reviewer muwA). * "motivation of this paper seems to be valid and intriguing, and the proposed unified perspective makes sense in this topic" (Reviewer DZyN). We have been working diligently on improving the paper on several fronts these days to address your critique. Specific feedback for every reviewer is provided below your review. **Herein, we have provided an additional PDF-version rebuttal including the visualization results and detailed experimental results you mentioned in your reviews.** Best regards, The Authors Pdf: /pdf/c7f63f64741cfab6fa319ad74dcfd812da705f22.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Clustering in Causal Attention Masking
Accept (poster)
Summary: This paper studies the representations or tokens generated by a sequence of causal attention layers. To this end, and following the example of prior works, the authors model such a sequence as a discretization of a system of ODEs. Each token in the input sequence is modeled as a particle and the evolution of each token with depth is modeled as an interacting particle system. Layer normalization is used to ensure each token lies on the unit sphere. A number of theoretical results are derived and conjectures made in this setting (referred to as CSA dynamics). - Lemma 1 roughly characterizes the rate at which tokens approach the leading eigenspace of the value matrix. - Theorem 4.1 characterizes a simple setting in which tokens collapse to a single point asymptotically. - A number of conjectures are made based on the dynamics as well as experiments describing situations in which tokens or particles spread out or collapse onto a few discrete points. - Metastable clusters (which persist for a significant time but disappear eventually) are also studied in a simplified 2-dimensional setting and an interesting connection to the Rényi parking problem is made. Perhaps the key takeaway is that each token (or particle) is driven by an internal force as well as an external force, which is either attractive or repulsive according to the sign of the largest eigenvalue of the value matrix. This in turn controls the diversity of token representations asymptotically. Strengths: - Originality: not aware of other similar analyses for causal attention; the modified dynamics due to the causal masking also make the analysis challenging. - Quality and clarity: paper is well written, motivated and clear. - Significance: there are some interesting takeaways, e.g., the role of the largest eigenvalue in driving particles to be diverse versus collapsing onto a few points as well as the implications for dimensionality reduction.
Weaknesses: - Many results are asymptotic in nature, which could limit their relevance for explaining phenomena observed in practice. - The study of meta-clustering is restricted to the 2D setting. - One might argue that there are a number of other aspects which hinder drawing practical takeaways, for instance weights are tied across different layers, no MLP layers between attention layers, etc. - I don't see anywhere a discussion of the differences in the token dynamics + asymptotics of causal attention versus standard self-attention, which would seem a natural and useful thing to include. Technical Quality: 3 Clarity: 3 Questions for Authors: The main thing I think it would be nice to see more discussion of is how the restriction to causal attention impacts the dynamics: can you highlight any important differences between the dynamics of the tokens of causal versus standard self-attention? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We are grateful to the reviewers for their positive feedback. We appreciate that they acknowledged the novelty and significance of our setting and are glad that they enjoyed our exposition: **Quality and clarity: the paper is well-written, motivated, and clear**. **Many results are asymptotic in nature, which could limit their relevance for explaining phenomena observed in practice.** This is true for final configurations and most of the results, but the meta-clustering section is devoted to the study of a phenomenon that is significant for practice. As mentioned in Section 6, it lacks a complete non-asymptotic proof, but it is a step forward toward better understanding what is happening at intermediate times that are practically relevant (see Fig. 2). **The study of meta-clustering is restricted to the 2D setting.** Although the restriction to the 2D setting is made, this is an important foundational step for two reasons. First, the interaction force is the same in any dimension, and so we believe that it is possible to lift the $d = 2$ assumption in the future. Secondly, because of the dimension reduction that the matrix $V$ introduces, the lower-dimensional case $d = 2$ is actually relevant for the general case too. **The main thing I think it would be nice to see more discussion of is how the restriction to causal attention impacts the dynamics: can you highlight any important differences between the dynamics of the tokens of causal versus standard self-attention?** Thank you for this question. We are going to include an overview of key differences and compare our results with what was already known for the non-causal case. To summarize the key differences: in Geshkovski et al. (2023), the authors analyze non-causal attention by utilizing its gradient descent structure when $QK^T = V$.
The part of their work that is connected to ours is the clustering part, where they prove asymptotic convergence to a single cluster under additional dimension/temperature restrictions. These restrictions were significantly lifted in [Criscitiello et al. (2024)](https://arxiv.org/abs/2405.18273) to $d \geq 3$ and $\beta \leq 1$, but they are still concerned with non-causal dynamics (and utilize the gradient descent structure). In our work, we study causal attention in the same framework, but there is no gradient descent structure, which leads to different analysis techniques and different restrictions. In particular, that is why we are able to show asymptotic convergence to a single cluster for arbitrary matrices $Q$, $K$, and $V = I_d$, and conjecture final configurations for different choices of $V$ (conceptually, those conjectures follow from our proof of Theorem 4.1, but there are significant technicalities that are yet to be resolved). From our understanding, the final configurations in the causal and non-causal cases are similar. The key difference is in the more practically relevant meta-clustering phenomenon. To the best of our knowledge, there is no successful theoretical approach to meta-clustering; in [Geshkovski et al.], it is observed for the original non-causal dynamics and it is conjectured that the number of meta-clusters should be of order $\sqrt{\beta}$ in the 2-dimensional case. In our work, we introduced a unique perspective on the appearance of meta-clusters in causal attention through Rényi parking, which predicts the conjectured $\sqrt{\beta}$ clusters, and showed relevant clustering results (mainly Theorem 5.1). Here, the difference is that in the causal case we are able to predict where meta-clustering is going to occur, while for the standard non-causal dynamics it seems to be more complicated and is yet to be done. --- Rebuttal Comment 1.1: Comment: Thanks for your response, I'll keep my score.
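The causal dynamics discussed in this thread can be simulated directly. The sketch below is a minimal Euler discretization of the interacting-particle form used in this line of work (tokens on the unit sphere, softmax attention restricted to prefixes, tangent-space projection); taking $Q = K = I$ and exposing $V$ as a parameter are simplifying assumptions, and the step size and temperature are illustrative choices.

```python
import numpy as np

def causal_attention_step(X, V=None, beta=9.0, dt=0.05):
    """One Euler step of causal attention particle dynamics on the sphere:
    token x_i moves toward the softmax-weighted average of V x_j over j <= i
    (causal mask), projected onto the tangent space at x_i, then renormalized.
    A positive leading eigenvalue of V makes the interaction attractive
    (clustering); a negative one makes it repulsive (tokens spread out)."""
    n, _ = X.shape
    logits = beta * (X @ X.T)
    logits = np.where(np.tril(np.ones((n, n), dtype=bool)), logits, -np.inf)
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    drive = w @ X if V is None else (w @ X) @ V.T
    drive -= np.sum(drive * X, axis=1, keepdims=True) * X  # tangent projection
    X = X + dt * drive
    return X / np.linalg.norm(X, axis=1, keepdims=True)

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, size=32)
X = np.stack([np.cos(theta), np.sin(theta)], axis=1)  # d = 2: the circle
for _ in range(2000):
    X = causal_attention_step(X)                      # V = I: attractive case
```

One distinctive feature of the causal case is visible immediately: the first token attends only to itself, so its drive vanishes after the tangent projection and it never moves. At intermediate times with large $\beta$, the tokens sit in a few clusters, consistent with the meta-clustering picture described in the rebuttal.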
Summary: This paper strengthens the theoretical results from prior work by analyzing the causally masked attention used in autoregressive generation (AIGC). The authors prove asymptotic convergence to a single cluster for arbitrary key-query matrices and an identity value matrix under causal self-attention. This significantly extends the results of previous studies. Strengths: This paper provides novel insight into understanding causal attention mechanisms, proposing new mathematical models and proof techniques that extend existing knowledge. By linking the study to the Rényi parking problem, the authors provide a unique perspective on clustering phenomena in self-attention mechanisms. Weaknesses: This paper is really hard to follow; Geshkovski et al. (2023c) is referenced numerous times throughout the article, even in the abstract. The authors should clearly summarize the previous work, and then emphasize their own contributions building on that foundation. While the paper extends the understanding of causal attention mechanisms, more empirical evidence is needed to validate the results across a wider range of scenarios and applications. Technical Quality: 2 Clarity: 2 Questions for Authors: This work largely builds upon the framework of Geshkovski et al. (2023c). Could the authors give a more concise statement to emphasize the differences and contributions in this paper? Could the authors provide more practical examples or evidence to prove the superiority of this theory? Confidence: 2 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The paper discusses its limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback. We are glad that they noted our perspective on the clustering phenomena in transformers **By linking the study to the Rényi parking problem, the authors provide a unique perspective on clustering phenomena in self-attention mechanisms** and the theoretical novelty of the paper **This paper provides novel insights in understanding causal attention mechanisms, proposing new mathematical models and proof techniques that extend existing knowledge.** **This paper is really hard to follow, Geshkovski et al. (2023) is referenced numerous times throughout the article, even including in the abstract. The authors should clearly summarize the previous work, and then emphasize their own contributions building on that foundation.** This is a great point that we are going to address by providing a supplementary paragraph on how the dynamical system is obtained from the causal transformer architecture for a self-contained introduction to the topic. **This work largely builds upon the framework of Geshkovski et al. (2023). Could the authors give a more concise statement to emphasize the differences and contributions in this paper?** To summarize the key differences: in Geshkovski et al. (2023), the authors analyze non-causal attention by utilizing its gradient descent structure when $QK^T = V$. The part of their work that is connected to ours is the clustering part, where they prove asymptotic convergence to a single cluster under additional dimension/temperature restrictions. These restrictions were significantly lifted in [Criscitiello et al. (2024)](https://arxiv.org/abs/2405.18273) to $d \geq 3$ and $\beta \leq 1$, but they are still concerned with non-causal dynamics (and utilize gradient descent structure). In our work, we study causal attention in the same framework, but there is no gradient descent structure, which leads to different analysis techniques. 
In particular, that is why we are able to show asymptotic convergence to a single cluster for arbitrary matrices $Q$, $K$, and $V = I_d$, and conjecture final configurations for different choices of $V$ (conceptually, those conjectures follow from our proof of Theorem 4.1, but there are significant technicalities that are yet to be resolved). Moreover, to the best of our knowledge, there is no theoretical approach to meta-clustering; in [Geshkovski et al.], it is only conjectured that the number of meta-clusters should be of order $\sqrt{\beta}$ in the 2-dimensional case. In our work, we introduced a unique perspective on the appearance of meta-clusters in causal attention through Rényi parking, which predicts the conjectured $\sqrt{\beta}$ clusters, and showed relevant clustering results (mainly Theorem 5.1). We are going to add a similar comparison in the paper. **While the paper extends the understanding of causal attention mechanisms, more empirical evidence is needed to validate the results across a wider range of scenarios and applications.** Extensive experiments with real-life models are very interesting, and our theory suggests several insights to test. However, as the paper is already heavily theoretical, we believe that any proper experiments are out of the scope of this work and a prospect for future research.
Summary: This work extends the work by Geshkovski et al. 23c, which analyzes the mean-field gradient flow of Transformer models and shows the emergence of clusters with full self-attention, to models with causal self-attention. The Transformer with causal self-attention is modeled as an interacting-particle system on the sphere, as tokens are normalized by Layer Normalization. The authors conjecture that the largest eigenvalue of the value matrix alone governs the final states of the token particles. Finally, the problem is connected with the Rényi parking process to show that the particles of a causal self-attention Transformer reach metastable clusters under certain conditions. Strengths: - The authors extend the analysis of full self-attention Transformers as interacting-particle systems to causal self-attention Transformers. As the current success of Transformers is mainly attributed to the autoregressive ones, the analysis of such models is relevant to the community. - They connect the problem with the Rényi parking process and then show that, when the weight matrices are identity in 2D, the particles reach metastable clusters (Theorem 5.1). - I appreciate that they discuss limitations in a single dedicated section. - They presented many figures from numerical experiments, which helped me understand the work. Weaknesses: - Although the results are interesting, the most exciting parts are conjectures, and the proven results are under strict conditions (e.g., Theorem 4.1 and Theorem 5.1). Technical Quality: 3 Clarity: 3 Questions for Authors: - What are the practical implications of the results? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations of the work (Section 6). # Suggestions - The references are not well maintained. Geshkovski et al. 2023a and Geshkovski et al. 2023b are identical, and the paper is accepted at NeurIPS 2023. - $\textsf{dist}$ in L 109 seems to be defined in L 228.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
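The causal dynamics summarized in this review can be illustrated with a short simulation (a toy sketch under the simplest setting discussed in the thread, $Q = K = V = I_2$ on the circle; the function name and the Euler discretization are mine, not the paper's):

```python
import numpy as np

def causal_attention_flow(x0, beta=4.0, dt=0.05, steps=2000):
    """Euler-integrate the causal attention particle dynamics on the circle:
    token i is pulled toward the softmax-weighted mean of tokens j <= i,
    keeping only the component tangent to the sphere, then retracting."""
    x = x0.copy()
    n = len(x)
    for _ in range(steps):
        new = x.copy()
        for i in range(n):
            logits = beta * (x[:i + 1] @ x[i])   # causal mask: only j <= i
            w = np.exp(logits - logits.max())
            w /= w.sum()
            m = w @ x[:i + 1]                    # weighted mean of predecessors
            v = m - (x[i] @ m) * x[i]            # tangential projection P_{x_i}(m)
            new[i] = x[i] + dt * v
            new[i] /= np.linalg.norm(new[i])     # retract back onto the circle
        x = new
    return x

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, size=8)
x0 = np.stack([np.cos(theta), np.sin(theta)], axis=1)
xT = causal_attention_flow(x0)
# the first token attends only to itself, so it is an exact fixed point;
# later tokens drift toward their predecessors
print(np.linalg.norm(xT - xT[0], axis=1))
```

Raising `beta` in this sketch slows the merging of transient groups, which is the metastability the paper connects to Rényi parking.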
Rebuttal 1: Rebuttal: We thank the reviewer for their positive feedback. We are encouraged that they appreciate the relevance of our work's subject: **The authors extend the analysis of full self-attention Transformers as interacting-particle systems to causal self-attention Transformers. As the current success of Transformers is mainly attributed to the autoregressive ones, the analysis of such models is relevant to the community.** Let us address the weakness **Although the results are interesting, the most exciting parts are conjectures, and the proven results are under strict conditions (e.g., Theorem 4.1 and Theorem 5.1)** by discussing the conditions. Theorem 4.1 is limited to the case $V=I_d$. Firstly, it is already a generalization of the results by [Geshkovski et al. (2023)](https://arxiv.org/abs/2312.10794) and [Criscitiello et al. (2024)](https://arxiv.org/abs/2405.18273) for non-causal models. This is because they utilize a gradient descent approach to convergence, which is limited to the case $QK^T = V$. In this regard, our result shows that the matrices $Q$ and $K$ do not affect final convergence. Secondly, the provided proof is more general and could be followed to argue that the conjectured results for $V$ are true. The problem of the appearance of unstable critical curves that arises there is mostly technical, not conceptual, but it requires further thorough research. Theorem 5.1 is limited to $V = I_d$ for a reason. There is numerical evidence, and an argument in Section 3 (mentioned on line 218), suggesting that tokens quickly project onto the subspace spanned by a few top eigenvectors. This is why, from that point of view, $V = I_d$ in low dimension is a foundational case and an important part of any system. The biggest obstacle in meta-clustering is its non-asymptotic nature, which makes it hard to study.
That is why we assume the simplest $Q=K=V=I_2$ setting for two reasons: this assumption allows us to focus on the hardest part (meta-clustering itself), and it is still an important step for general dynamics because of the heuristic dimension reduction that is yet to be proven. **What are the practical implications of the results?** Relevant practical implications are yet to be tested, but there are several promising avenues. Clustering has been observed numerically, but it is poorly understood what mechanisms are underlying its occurrence in transformers. There is an empirical study that suggests that clustering might correspond to transformers learning specific tasks/concepts and assigning them positions in the representation space. Then, being able to control the appearance of these meta-clusters through parameters and predict their appearance might help construct better models for specific tasks.
Summary: This paper presents a theoretical framework where causal attention masking can be recast into an interacting particle system. The authors start by introducing the dynamics of the first token and extend it to $n$ tokens. They then discuss the token configurations as $t \rightarrow \infty$ (i.e., infinite number of attention layers) when $V = I_d$ and make conjectures for more general cases where $V \neq I_d$. Lastly, the authors discuss the discovery of meta-clustering in Geshkovski et al. and adapt causal attention to this framework using the definition of Rényi centers. Strengths: • Exploring the theoretical aspects behind the full attention and causal attention mechanisms is a very important topic in our understanding of how Transformers and modern LLM/LMMs work. • The authors give a comprehensive overview of the background and make a smooth transition to the token dynamics in causal attention. • The authors clearly explain the dynamics and the final states of the tokens with visualizations. • Despite the paper’s extensive theoretical and mathematical details, its main narrative is clear and easy to understand. Weaknesses: • Although the authors argue about the complexity of the problem, $V = I_d$ might be too limited for real-world use cases. • The probability measure $\mu_0$ is first mentioned in Conjecture 1, but it is not clearly defined until Theorem 5.1. Same for the geodesic distance $dist$, which is first introduced in Lemma 1 but is not defined until Section 5.1. • Small typo: Line 140, To get a better grasp of the effects of “how” the external force works, … Line 186, in the full-attention dynamics, … Technical Quality: 3 Clarity: 4 Questions for Authors: It is well-known that transformer-based language models produce more stochastic outputs at higher temperatures $\beta$. What additional insights does meta-clustering provide beyond this established understanding? Does clustering of the tokens indicate similar outputs? 
Confidence: 1 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have provided a relatively comprehensive discussion of limitations in Section 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the positive feedback. We are glad they think that **exploring the theoretical aspects behind the full attention and causal attention mechanisms is a very important topic in our understanding of how Transformers and modern LLMs/LMMs work** and that they enjoyed our presentation. To address the biggest mentioned weakness, **although the authors argue about the complexity of the problem, it might be too limited for real-world use cases**, we note that even though $V = I_d$ looks like a significant limitation on paper, it is not irrelevant in practice. This is largely because of the dimensionality reduction effect, which is justified by numerical experiments and can be seen as an effect of the matrix $V$, as discussed in Section 3 and mentioned in line 219. We expect that for the general case of $V$, all particles quickly converge to the subspace spanned by a few top eigenvectors, where the matrix action is close to the identity (see Fig. 1b for an example). Thus, understanding the behaviour of the system for $V = I_d$ (and even in low-dimensional cases as Theorem 5.1 does) is important for understanding the general picture as well. We will add earlier definitions of $\mu_0$ and $\textrm{dist}$ and correct the mentioned typos. Thank you for pointing them out. **It is well-known that transformer-based language models produce more stochastic outputs at higher temperatures. What additional insights does meta-clustering provide beyond this established understanding? Does clustering of the tokens indicate similar outputs?** In addition to the mentioned knowledge, our observation might lead to several additional insights. For example, the observed connection to the Rényi parking problem allows us to connect the number of meta-clusters with temperature and effective dimension.
Moreover, there is an empirical study that suggests that clustering corresponds to transformers learning specific tasks/concepts and assigning them positions in the representation space. Then, being able to control the appearance of these meta-clusters through parameters and predict their appearance might help construct better models for specific tasks. To the best of our understanding, in practical transformers tokens exhibit clustering, but do not converge completely to a single point (as one might expect from Theorem 4.1). This is an effect of different weights/MLP layers, but it can also be seen in our Theorem 5.1 -- with some tokens fixed, convergence happens not to the centers themselves, but somewhere close to them. Then, the outputs are not going to be completely the same, but arguably correspond to the same topic/task depending on the field of use. How clustering corresponds to the produced outputs is a really interesting question that requires further research. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thank you for your comprehensive responses to my comments. I will keep my score for now.
NeurIPS_2024_submissions_huggingface
2024
Permutree Process
Reject
Summary: This article adapts the recently developed combinatorial concept of the “Permutree” into a machine learning context, making links with existing methods in Bayesian nonparametrics, and providing a pathway for how to make the abstract mathematical concept relevant to data-driven approaches and inference. The theory is explained, and an example application in phylogenetic analysis is performed. Strengths: This is an ambitious paper! Making links with recent combinatorial research and machine learning is a good thing to do! It has a large vision of building one grand unifying theory of discrete Bayesian nonparametrics, which (it is claimed) can be done with the framework presented. The review of discrete BNP at the start of the article is thorough. The potentially difficult subject matter of the abstract mathematical objects is explained fairly clearly through the use of figures and well-chosen notation. The theoretical aspects are explained thoroughly, and the application pursued emerges naturally from the framework developed earlier in the article: the coalescent analysis and the Mondrian process are good results to have. The scientific writing is of a fairly high standard, with only a couple of surprising vocabulary choices. Weaknesses: The subsequent developments of the theory and applications don’t quite live up to the grand vision set out earlier in the article. The authors struggle to represent the most widely-used discrete BNP object of the Dirichlet Process in this supposedly all-encompassing framework: perhaps this is doable in the future and represents work yet to be done, but the initial claims about the generality of permutree processes made in the article are not fully followed through on. The article also (necessarily) spends a lot of time introducing the theoretical framework, quite heavily at the expense of presenting the data applications properly later in the manuscript.
Squeezing all of the experimental results into “Demonstration” in half a page is really too brief to be very convincing, although there is a lot of interesting material in Appendix C that would ideally be in the main text. Many of the potentially thorny issues concerning inference and computation are therefore overlooked. Some of the figures could be better designed: I found the visual interpretation of permutrees key to developing some understanding of them, so making Figure 1 bigger and more prominent would be a help (I think Figure 1 is more crucial than Figures 2/3/4 in this respect). The representation of the data in greenscale Figures 8 and 14 is very confusing: I don’t think I really learned anything from that representation. Some of the language choices are a bit strange: line 69: “we dare to pay particular and explicit attention here”, line 869: “Roughly speaking, it is not possible in principle to naively implement a model with infinite parameters on current computers”, Line 250: “as an overall trend…” The analysis of the experimental results is not rigorous enough. You have real values and uncertainties for the perplexity. Do some tests or similar to establish more clearly the differences in performance rather than painting broad brushstrokes. Line 80: some of the symbols used have already established meanings in a machine learning context, i.e. \otimes meaning Kronecker product. Maybe make clear that this is a new notation that overrides any previous perceptions. Technical Quality: 2 Clarity: 3 Questions for Authors: Line 219: “Data path modelling” the introduction of the two-table Chinese restaurant process is instructive, but might reveal the limitations of the model framework: are the only methods that can be captured by a permutree some variation on iteratively applying CRP-style binary splits and merges? Do you really think that this is a large enough “basic vocabulary” of operations to cover the entire world of discrete BNP methods out there?
Is that a provable statement? Line 336: epsilon = 0.01: where does that come from? Appendix C.3: How would you describe performing the inference for this model? Did it need much fine tuning? Did it need lots of mixing time? Did it need a lot of computational resources? What about the comparison methods? Is this the “fancy but expensive” model? No shame if so, but it’s good to know this. Line 887: “ordered Chinese Restaurant Process” - what are the material, practical implications of adding the ordering of tables on top of the standard Chinese restaurant process? Can this model still answer the same type of statistical problems that a standard CRP does? Do the categories/clusters observed in the data need to have some sort of ordering for this to be valid? Can the permutree represent a totally standard CRP without this ordering? If the permutree cannot represent the most widely used discrete BNP model then the claims towards the start of the paper about this being the grand unified framework need toning down. Line 912: “Chinese Restaurant Street” - I worry that between the Indian buffets and the Chinese Restaurant Franchises then the underlying metaphor is being overextended in this community. Why exactly have you proposed this model formulation? Do you consider it to be a useful new potential BNP model that has been underexplored so far? Is this the closest thing you can derive within this framework to a standard Chinese restaurant process? This is unclear to me. Random non-crucial question from an interested non-combinatorialist: Can you define permutrees in a >2d space? This will surely change the topologically allowable relationships between nodes. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The practical models that are (presently) successfully captured in this framework are not the most widely used BNP models out there.
The whole permutree framework seems best suited to the coalescent-type models pursued, and the other BNP models that have been successfully described, such as Mondrian processes, are interesting but not in widespread use. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful and constructive comments. (To be more responsive to your important feedback, we would like to make additional remarks in 'Official Comment'.) --- **[Q1] Generality of our framework.** > “Data path modeling” (...) but might reveal the limitations of the model framework (...) This is an incredibly important issue that the BNP community has been facing for many years and is exactly what we wanted to discuss in this paper. Before discussing it in detail, we would like to start with a brief answer. Our argument is twofold: - *New insight*: Thanks to permutrees, we have a unified modeling guideline for combinatorial stochastic processes, independent of the combinatorial structure of the target (permutation, tree, partition, binary sequence). - *Remaining challenge*: On the other hand, the projectivity and exchangeability (i.e., infinite exchangeability) required by most BNP models must be achieved by us, the model designers, by cleverly using other mechanisms, such as the stick-breaking metaphor or the Chinese restaurant metaphor. **Background: difficulties in constructing new stochastic processes -** In the Bayesian nonparametric community, the successful construction and design of essentially new combinatorial stochastic processes is often a brilliant and very important achievement, and has had a significant impact on subsequent research. However, creating essentially new combinatorial stochastic processes is a very difficult task. In fact, following achievements such as Dirichlet processes in classical statistics, the machine learning community has produced several essentially new stochastic processes, such as Indian buffet processes and Mondrian processes, which may have been largely due to a spark of genius by individual authors.
More specifically, these successes have been made possible by a flash of genius in solving two challenges simultaneously: - (1) What generative algorithm can be used to represent a particular combinatorial object? (For example, in the Mondrian process, an algorithm that sequentially adds cuts to the block.) - (2) What probabilities can be assigned to that generative algorithm to make the model projective and exchangeable? (For example, in the Chinese restaurant process (CRP), the probability of allocating a customer to a table is proportional to the number of customers already sitting at it.) **Benefits provided by permutrees -** The benefit of the permutree can be seen as giving a unified outlook to the former of these two challenges. More specifically, permutrees establish that various combinatorial structures (permutations, trees, partitions, binary sequences) can be represented as 'decorated permutations'. Thus, when modeling these combinatorial objects, we only need to introduce a generative algorithm with two functions: a mechanism that represents permutations and a mechanism that equips each element with a decoration. **Remaining issues -** We will be able to use 'decorated permutations' as the basis for generative algorithms thanks to permutrees, but on the other hand, what probabilities can be assigned to them to have infinite exchangeability needs to be prepared separately again by the model designer. The permutree does not provide any new clues in this respect. As Section 4 shows, the strategy of integrating the 'decorated permutation that is the permutree' with the stick-breaking and Chinese restaurant metaphors, mechanisms that automatically bring infinite exchangeability, is a promising one. Indeed, we believe that this strategy has some generality. By specializing the decorations of the permutree, the strategy recovers standard CRPs, uniform permutations, Mondrian processes, etc.
On the other hand, this strategy alone does not cover, for example, the Indian buffet process. In this respect, it remains an open question as to what kind of mechanism can be used to provide infinite exchangeability. In light of the above, your concern about our being overly assertive is valid: we would like to make a clear separation between the ambitions we are aiming for and what we are achieving at the moment by using the additional page given in the revised manuscript, if the paper is accepted. --- **[Q2] Relationship to conventional CRP and ordered CRP.** > “ordered CRP” (...) Can the permutree represent a totally standard CRP without this ordering? (...) The permutree process can be seen as a unified and general model that includes the CRP and the ordered CRP (oCRP) as special cases. More specifically, we can reduce the permutree process (a marked point process) to the CRP and the oCRP by restricting it so that only certain marks can appear in the permutree process. The table below illustrates the fact that, thanks to permutrees (= decorated permutations), these models can be represented in a unified way. | Model | Target combinatorial object | Exchangeability | Decorations | | ---- | ---- | ---- | ---- | | CRP | Partition | OK. | Restriction to (x) only | | Ordered CRP | Binary tree | OK. | Restriction to (v) only | | **Permutree process** | Permutree | OK. | **No restriction** | Additionally, the permutree we use also provides another interesting insight into the extension from CRP to oCRP, the traditional developments. Conventionally, oCRP has shown that by introducing a new table order = permutation for CRP, the representable objects can be extended from partitions to binary trees. This is very natural when one recalls that permutrees are exactly *decorated permutations*. In the context of this line of extensions, our paper can be positioned within the following three stages of development. - For standard CRP, it can represent partitions.
- For oCRP, the further introduction of **order** allows binary trees to be represented. - For permutree process, the introduction of **order** and **decoration** allows various combinatorial objects to be represented in a unified way. --- Rebuttal 2: Title: Supplement to author response. Comment: --- **[Q3] Motivation for Chinese restaurant street.** > “Chinese Restaurant Street” (...) Why exactly have you proposed this model formulation? The motivation for Chinese restaurant street is to derive an alternative representation for stochastic processes with an inherently infinite dimensional parameter space that would work with only a finite number of parameters for a finite number of observations. This can be summarized as follows in contrast to the marked stick-breaking described in Section 4 of the main text. | Model | Target combinatorial object | Parameter dimension | | ---- | ---- | ---- | | Marked stick-breaking process | Permutree | Infinite (even for finite observation) | | Chinese restaurant street | Permutree-'like' | Finite (for finite observation) | This relationship is a frequently discussed contrast in Bayesian nonparametric methods, where two model representation methods have often been explored, for example in infinite mixture models for partitions and infinite factor models for binary sequences, as follows. | Model | Target combinatorial object | Parameter dimension | | ---- | ---- | ---- | | Stick-breaking process | Partition | Infinite | | Chinese restaurant process | Partition | Finite | | Beta-Bernoulli process | Binary sequence | Infinite | | Indian buffet process | Binary sequence | Finite | The main advantage of the Chinese restaurant street, the standard Chinese restaurant processes and the Indian buffet processes is that they do not require finite approximations in inference. 
In contrast, the marked stick-breaking process, the standard stick-breaking processes and the beta-Bernoulli process have infinite parameters with probability $1$, regardless of the size of the input observation data, so their parameter inference generally requires finite approximations as described in Section 4, and intricate mechanisms such as slice sampling. Finally, we would like to note that the current Chinese restaurant street is not a perfect model, as discussed in the appendix. The sample that this stochastic process could generate certainly has a permutree-like structure, but it does not fulfill the exact definition. More specifically, of the two requirements mentioned in Section 2 - (C1) that each interior point has one or two parents and (C2) that each interior point can be horizontally aligned - the latter is not straightforwardly satisfied. However, we believe that this modeling strategy is worth discussing in the appendix to this paper, as it follows a natural extension of CRP -> oCRP -> Chinese restaurant street. --- **[Q4] Practical computational cost.** Roughly speaking, our model is only a minor modification (adding a new 'decoration mechanism') of the standard stick-breaking process for the Dirichlet process infinite mixture model, so the increase in computational complexity is not a cause for concern. We add a diagram of ‘MCMC iterations $\times$ perplexity’ and ‘wall-clock time $\times$ perplexity’. As there was not enough space in the one-page PDF of the global response, we would like to update it directly in the revised version. --- **[Q5] Permutrees on high-dimensional space.** > Can you define permutrees in a >2d space? This will surely change the topologically allowable relationships between nodes. This is a very interesting topic. We don't have a direct answer to your question, but we may be able to offer some relevant and interesting insights.
We are interested in what the continuous analogue of the discrete permutree structure is. In the machine learning community, the hyperbolic space is often mentioned as a continuous analogue to the binary tree (e.g., [Nickel&Kiela, NeurIPS2017]), which is a special case of the permutree: | Representation | Target combinatorial object | Space | | ---- | ---- | ---- | | Discrete | Binary tree | Ultrametric space | | Continuous | Continuous analogue of binary tree (continuous hierarchy) | Hyperbolic space | | Discrete | Permutree | Ultrametric / Euclidean | | Continuous | Continuous analogue of permutree | (Future work) | We expect that research could be developed in this direction in the near future. Very exciting question, thank you very much. - [Nickel&Kiela, NeurIPS2017] Poincaré embeddings for learning hierarchical representations, NeurIPS2017. --- Finally, thanks again for your valuable comments. --- Rebuttal Comment 2.1: Comment: This was illuminating, thank you. I think that clarification concerning how different existing BNP models fit within the permutree framework is important to really make the contribution of this article clear. --- Reply to Comment 2.1.1: Comment: We would like to thank you again for your kind comments and constructive feedback. As you mention, it is very important to clarify the existing BNP models within our framework. Your enlightening points have made this point much clearer. We really appreciate that we have been able to discuss this point in this valuable forum. With kindest regards, 
 The Authors
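As a concrete companion to the [Q2] table above, here is a minimal sampler for the standard CRP (an illustrative sketch only; the ordered CRP would additionally place each new table at a position in a left-to-right order, and the permutree process further decorates tables with marks):

```python
import numpy as np

def crp(n, alpha, rng):
    """Standard Chinese restaurant process: customer t joins an existing
    table with probability proportional to its occupancy, or opens a new
    table with probability proportional to the concentration alpha."""
    tables = []        # occupancy counts, one entry per open table
    assignments = []   # table index chosen by each customer
    for _ in range(n):
        probs = np.array(tables + [alpha], dtype=float)
        probs /= probs.sum()
        k = rng.choice(len(probs), p=probs)
        if k == len(tables):
            tables.append(1)        # open a new table
        else:
            tables[k] += 1
        assignments.append(int(k))
    return assignments, tables

rng = np.random.default_rng(0)
assignments, tables = crp(200, alpha=2.0, rng=rng)
print(len(tables))   # the number of tables grows like alpha * log(n)
```

Note that the parameter state (`tables`) stays finite for finite data, which is exactly the contrast with stick-breaking representations drawn in the rebuttal.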
Summary: After giving an introduction to permutrees, a stochastic process on them is constructed by sampling the nodes according to an intensity function on the 2d unit interval and uniformly assigning the marks. It is shown how to add the edges to meet the requirements for the object being a labeled permutree. Paths from terminal nodes to other terminal nodes in the permutree can be used to represent sequential data. Finally, the model is used for an inference task involving DNA sequences. Strengths: * Sections 1-4 are very concise but nonetheless clear and easy to follow. This paper does a great job explaining the complex concepts of permutrees and permutree processes. * Figures 1-3 are very helpful * the concept generalizes popular processes used in ML, e.g., the Mondrian process Weaknesses: * As a reader unfamiliar with phylogenetic analysis, I did not immediately understand what the task in this setting is (and I am still unsure if I fully got it): Do you have a set of DNA sequences where some of the letters are masked, and you want to predict the masked letters? * Besides not really understanding what the goal of this task is, I think Section 5 does not provide enough explanation of how this goal then is achieved, i.e., the length of the paper/ the level of detail in Section 5 is a problem. It's ok to defer details to the appendix as long as one can still follow the main section without them, but I struggle with that. Example given: I find it crucial to know the likelihood function when it comes to a Bayesian inference task, which is not mentioned in the main section. Technical Quality: 4 Clarity: 3 Questions for Authors: * What is the task in the phylogenetic analysis application? * What would be other tasks where inference on permutrees is relevant in machine learning? 
Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: yes, there is a dedicated paragraph on limitations and I think it captures the limitations of the suggested concept appropriately. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
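The node-sampling step this summary describes can be sketched as follows (a hypothetical toy version with a homogeneous intensity and four uniform decorations; the edge-insertion step that turns the sample into a valid labeled permutree is omitted, and all names are illustrative):

```python
import numpy as np

def sample_decorated_points(rate, rng):
    """Sample candidate interior nodes: a Poisson number of points uniform
    on the unit square, each given one of four decorations uniformly at
    random. Sorting by the horizontal coordinate exposes the underlying
    permutation (permutrees are 'decorated permutations')."""
    n = rng.poisson(rate)
    xy = rng.uniform(0.0, 1.0, size=(n, 2))
    marks = rng.choice(["none", "up", "down", "both"], size=n)
    order = np.argsort(xy[:, 0])          # x-order induces the permutation
    return xy[order], marks[order]

rng = np.random.default_rng(0)
pts, marks = sample_decorated_points(10.0, rng)
print(len(pts), list(marks[:4]))
```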
Rebuttal 1: Rebuttal: We are grateful for your important comments and suggestions. --- **[Q1] Task and goal of phylogenetic analysis.** > What is the task in the phylogenetic analysis application? Do you have a set of DNA sequences where some of the letters are masked, and you want to predict the masked letters? Your understanding is correct: The evaluation is based on the predictive performance on missing data. On the other hand, we take seriously the point that it took you some effort to understand it. We would therefore add the following explanation to further improve the clarity of our paper: Phylogenetic tree analysis can be seen as having the following two-stage goals: 1. *Informatics perspective*: to create better phylogenetic tree models and better inference algorithms. 2. *Scientific perspective*: to make scientific discoveries using better phylogenetic tree analysis methods. It is no exaggeration to say that the original ultimate goal of phylogenetic tree analysis is to discover the latter scientific findings. For example, in cases such as the recent pandemic (e.g., SARS-CoV-2), phylogenetic tree analysis may reveal the pathways and causes of its spread. Therefore, beyond increasing the expressive power of models and improving the predictive performance of inference algorithms (or the value of the objective function), it should also be emphasised that these are important applications that may lead to the discovery of scientific knowledge. --- **[Q2] Other applications of permutree process.** > 
What would be other tasks where inference on permutrees is relevant in machine learning? We are delighted to be able to answer this important question. From an ambitious perspective, thanks to its generality, the permutree process is expected to provide new insights into machine learning applications where combinatorial models such as partitions, permutations, binary trees and binary sequences have traditionally been utilized. However, as stated in line 356 in the discussion of Section 6, we also recognize that this is not straightforward, since the permutree is only a 'prior model' and careful design of the 'likelihood model' is required for success in real applications. Here we would like to list some potential possibilities and some budding attempts. - (1) **Low-dimensional embedding** (like PCA, t-SNE, UMAP, and GPLVM). An example of the highest generality is the potential application to low-dimensional embedding methods as an extension of the Gaussian process latent variable model. In general, specifying the latent variable structure of the data requires prior knowledge on the part of the model designer (e.g., a mixture model for partitions, or a tree structure model for differentiation and branching hierarchy). The permutree process prior model may play a role in data-driven inference of such latent variable structures. More specifically, the method assumes the permutree process as a prior model for the data as latent variables and a Gaussian process as a likelihood model from latent variables into observation data. - (2) **Density estimation** (like Dirichlet process mixture, Dirichlet diffusion tree, and Polya tree). Another highly generalized task is the application to density estimation tasks. Indeed, various Bayesian non-parametric models have traditionally been applied to this task, including Dirichlet process mixture models, Dirichlet diffusion trees and Polya trees.
With a similar motivation to that in the above low-dimensional embedding task, we can model the density function by assuming permutree processes in its latent variable structure. - (3) **Dictionary learning** (like beta-Bernoulli process and Indian buffet process). One of the more concrete applications is dictionary learning. We consider, for example, the task of learning a dictionary of images. Here, an image can be regarded as a collection of vectors by converting each small patch (e.g. 8x8 pixels) into a 64-dimensional vector. The task of learning a dictionary from this data by means of factor analysis is one of the standard tasks in machine learning. In the Bayesian nonparametric field, infinite factor models based on the Indian buffet process or the beta-Bernoulli process are often used. Our focus here is to assume a permutree structure within this collection of factors. Indeed, it has been reported in the past that the introduction of cluster or hierarchical structures within factors, e.g. by exploiting spatial correlations within an image, can be effective. We may therefore introduce a mechanism for data-driven inference of such dependencies between factors by means of the permutree process prior model. This is illustrated in the one-page PDF in the global response. --- **[Q3] Advice on improving clarity.** > Besides not really understanding what the goal of this task is, I think Section 5 does not provide enough explanation of how this goal then is achieved, i.e., the length of the paper/ the level of detail in Section 5 is a problem. It's ok to defer details to the appendix as long as one can still follow the main section without them, but I struggle with that. We will be able to meet your request by specifying more details of the application cases in this paper, thanks to the additional content page given in the camera-ready version, if this paper is accepted.
We are grateful for the constructive advice on improving the clarity of our paper. --- Rebuttal Comment 1.1: Comment: Thank you for the clarifications. I was a bit unsure about the relevance for the machine learning setting, but the answer to Q2 makes me more confident in that regard. I raise my score from 6 to 7. --- Reply to Comment 1.1.1: Comment: We would like to thank you again for your valuable time and thoughtful comments. We are particularly pleased to have had this valuable opportunity to discuss the potential machine learning applications with you. With kindest regards,
 The Authors
Summary: The authors introduce a prior for Bayesian nonparametrics called the permutree. They apply it to model complex phylogenetic data with both coalescence and recombination, a setting that previous processes such as the Kingman coalescent could not model; the model seems to perform state-of-the-art phylogenetic inference. In principle the process could also be used to model other combinatorial objects such as permutations or trees; however, this is not demonstrated. Strengths: Modeling recombination is challenging and this model proposes a method to do so. Weaknesses: The writing is very challenging. There are multiple points where the writing is strange, for example the use of "dare" in "For technical reasons (discussed immediately below), we dare to pay particular and explicit attention here to the set V of the “interior vertices” (i.e., vertices of degree at least 2) other than the terminal nodes." The exposition is also very verbose, and the propositions are challenging to understand without reading the proofs. Figures 3(c) and 1(b)-(d) are never mentioned in the text; the latter is quite confusing since it seems to suggest that the permutree can model many combinatorial objects. There seems to be a disconnect between the description of the methods in Sections 3 and 4 and the experiment in Section 5. See questions. The ultimate goal of phylogenetics is to infer ancestry, which is not identical to maximizing likelihood. To validate a bona fide phylogenetic inference method that handles recombination, one should show that inferred recombination events are realistic. Technical Quality: 3 Clarity: 1 Questions for Authors: In Section 3 you state that you generate $l$ from a Poisson process, then in Section 4 you use a stick-breaking procedure. Which do you use in practice? In the genetics case, $l_k$ is stated to be drawn from a point process in Eqn. 7. If this is the case, is the data still modeled as part of an infinite exchangeable process?
How could I query the posterior predictive, for example? In the genetics case, could you explain the role of the two-table Chinese restaurant process? It seems that sequences are clustered by node and that the path does not affect their likelihood. What is the form of the likelihood for recombination? Confidence: 3 Soundness: 3 Presentation: 1 Contribution: 3 Limitations: Discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you so much for your important comments. (We would like to be fully responsive to your important comments, so let us make additional remarks in the 'Official Comment' section.) --- **[Q0] More concise guidance.** > The writing is very challenging. Thank you for your helpful advice. By refining each lead sentence to more succinctly present the subject of its paragraph, we will make the revised version more readable for a diverse readership. --- **[Q1] Ultimate goal of phylogenetic tree analysis.** > The ultimate goal of phylogenetics is to infer ancestry which is not identical to maximizing likelihood. We agree with this opinion. Thank you for your very important remarks. Building on this advice, we would like to clarify the objectives of this paper from an applied perspective by adding the following explanations. More specifically, **we would like to clarify the original purpose and goal of phylogenetic tree analysis in two steps, and which part of it this paper focuses on.** In general, phylogenetic tree analysis can be seen as having the following two-stage goals: 1. *Informatics perspective*: to create better phylogenetic tree models and better inference algorithms. 2. *Scientific perspective*: to make scientific discoveries using better phylogenetic tree analysis methods. As you point out, the ultimate goal of phylogenetic tree analysis is the latter: scientific discovery. For example, in cases such as the recent pandemic (e.g., SARS-CoV-2), phylogenetic tree analysis may reveal the pathways and causes of its spread. From this perspective, increasing the expressive power of the model or improving the predictive performance of the inference algorithm or the value of the objective function are only milestones towards achieving the ultimate goal. Both of these aspects are significantly important in the development of science.
At this time, we expect that many NeurIPS readers in the machine learning community are interested in the former perspective. Therefore, this paper presents an evaluation of each model/algorithm using objective predictive performance on missing data. At the same time, however, it is important to state explicitly that this is aimed at the ultimate goal beyond it: the discovery of scientific knowledge. We are deeply grateful for the very enlightening advice. --- **[Q2] Form of the permutree process in practice.** > In section 3 you state that you generate $l$ from a Poisson process then in section 4 you use a stick-breaking procedure. Which do you use in practice? Thank you for your important question on improving the clarity. **In practice we use the marked stick-breaking process (Section 4 of the main text), which corresponds to a special case** of the general concept of a permutree process, i.e., the marked point process (Section 3 of the main text). In response to this point, we would like to improve the presentation of our paper. In the submitted version, we attempted to present modeling and inference methods in a more general setting, so that the reader can easily adjust the intensity function of the marked point process to the individual task. On the other hand, this may weaken the appeal of the marked stick-breaking process, which is the special case we most wish to recommend. We therefore intend to address the following in our revised paper. - *Clarity of motivation*: In Section 4, we add an explanation of the qualitative reasons for recommending the marked stick-breaking process in practice. More specifically, we clarify that the stick-breaking process is a prior model that induces sparser analysis results and is expected to suppress the generation of useless nodes to some extent.
(Needless to say, whether this qualitative intuition is acceptable is a topic that has been deeply studied theoretically and is very difficult to handle, so we explain it carefully to the reader.) - *Clarification of modification procedure*: We specify which parts of the model/inference need minor modifications when adopting the marked stick-breaking process instead of the general marked point process formulation. - *Supplementary material*: We add the following items to the experimental results as supporting evidence that marked stick-breaking processes give better empirical results than marked point processes with uniform intensity, as shown in the '*one-page PDF in global responses*'. --- **[Q3] Infinite exchangeability (projectivity and exchangeability) of the general permutree process described in Section 3.** > if this is the case, is the data still modeled as part of an infinite exchangeable process? How could I query the posterior predictive for example? **Yes, most of the properties of the marked stick-breaking process (described in Section 4) are inherited by the marked point process described in Section 3.** Therefore, the inference of predictive and posterior distributions remains similarly tractable. More specifically, this can be summarized as follows. - The marked point processes, i.e., permutree processes, are projective. This follows immediately from the projective nature of Poisson processes. - If we build the model of the datapath described in Section 4 with a general marked point process instead of a marked stick-breaking process, it is still exchangeable. Therefore, since it is both exchangeable and projective, the model of a datapath is infinitely exchangeable.
One point where the general marked point process differs from the special marked stick-breaking process is that, when its intensity function is finite, the number of interior points of the random permutree under the prior distribution is finite with probability $1$ (unbounded but almost surely finite, rather than infinite). --- Rebuttal 2: Title: Supplement to author response. Comment: --- **[Q4] Role of the two-table Chinese restaurant process.** > In the genetics case, could you explain the role of the two table Chinese restaurant process? It seems that sequences are clustered by node and that the path does not affect their likelihood. Thank you for your important question. We would like to address your concern as follows. The two-table Chinese restaurant process (2tCRP) serves to bring together DNA sequences that follow similar lineages (evolution over time) within a collection of observed DNA sequences. The tables of our 2tCRP can be regarded as checkpoints for events (such as coalescence and recombination) in the time direction (vertical in the diagram) of gene evolution. Through its table assignments, the 2tCRP groups together DNA sequences that can be regarded as having behaved identically at those checkpoints. Our 2tCRP allows a collection of observed DNA sequences to be stored as data paths on a permutree, which defines a prior distribution. The likelihood of these data paths is then evaluated using evolutionary models such as the Jukes-Cantor (JC) model and the generalized time-reversible (GTR) model. This modeling strategy can also be seen as an extension of the Dirichlet diffusion tree [Neal2003], which represents binary trees, to a new model representing permutrees. Dirichlet diffusion trees also represent data diffusions in a way similar to the table assignment probabilities we use when an event occurs.
We would like to add an explanation of the motivation for the 2tCRP in terms of this extension of the Dirichlet diffusion tree. - [Neal2003] Radford M. Neal. Density modeling and clustering using Dirichlet diffusion trees. Bayesian Statistics, 7:619–629, 2003. --- **[Q5] Likelihood for recombination.** > What is the form of the likelihood for recombination? Recombination simply selects a split position uniformly at random. That is, the likelihood model of recombination is the concatenation of two DNA subsequences at a mutation-free, non-informative, uniformly chosen split position. More specifically, as a generative probabilistic model, given two sequences (e.g., ACGTTC and GCGTCA), recombination can be viewed as a stochastic operation in which - a split position is generated uniformly at random, dividing the six-character sequence ( `******` ) into a left part ( `****` ) and a right part ( `**` ); - the first sequence keeps only the left side ( `ACGT**` ) and the second keeps only the right side ( `****CA` ) of this split position, and the two parts are concatenated and recombined into a single sequence ( ACGTCA ). It should also be emphasized that the likelihood model apart from this uniform random split is represented separately by the evolutionary models. Recombination is assumed to be instantaneous and not accompanied by mutation; the likelihood model for genetic mutation is represented separately by the JC and GTR models. --- Finally, thanks again for your valuable comments. --- Rebuttal Comment 2.1: Title: response Comment: Thank you for clarifying. Due to the challenging presentation and the challenges in validating the genetics modeling case, I keep my score the same. --- Reply to Comment 2.1.1: Comment: We thank you again for your time and thoughtful comments. We are very appreciative of your comments, which have helped us increase the clarity of our revised version even further. We remain available in case any additional query arises. With kindest regards,
 The Authors
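As a supplementary illustration of the uniform-split recombination described in [Q5] above, here is a minimal sketch (our own toy code, not the paper's implementation; the function name `recombine` is ours):

```python
import random

def recombine(left_parent, right_parent, split=None):
    """Concatenate the left part of one sequence with the right part of another.

    If no split position is given, one is drawn uniformly at random over the
    interior positions, matching the non-informative uniform split in [Q5].
    """
    assert len(left_parent) == len(right_parent)
    if split is None:
        split = random.randint(1, len(left_parent) - 1)
    return left_parent[:split] + right_parent[split:]

# The example from the rebuttal: ACGTTC x GCGTCA, split after position 4.
print(recombine("ACGTTC", "GCGTCA", split=4))  # -> ACGTCA
```

Note that this covers only the split itself; as stated above, mutation is handled separately by the JC/GTR evolutionary models.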
Summary: The authors describe the concept of permutrees, how to sample permutrees in a stochastic process, and how to model data with permutrees. They apply it to tracking DNA changes. Strengths: Interesting new model that unifies permutations, trees, partitions, and binary sequences Strong mathematical foundation Practical applications well-written Weaknesses: the figures are small and hard to read when printed in gray-scale Technical Quality: 3 Clarity: 3 Questions for Authors: I do not know what a stick breaking process or Chinese restaurant process is? >Fig.1 (l) It is not obvious how the binary number is generated. Could one say if the line from one node to the next goes up it yields 1 and if it goes down it yields 0? >p7 253 numnber number ? >p7 261 "tiny real value" how tiny is tiny ? >p16 669 uniformly random random random variables Uj is that more random than just a random variable? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your helpful comments and recommendations. --- **[Q1] Improvements to color schemes and sizes in figures.** > the figures are small and hard to read when printed in gray-scale Thank you for your important advice. We will improve the color scheme used in our diagrams so that the colors are identifiable even when displayed in grayscale. Also, if this paper is accepted, we would like to use the extra page given to the camera-ready version to display the important figures at a larger size. --- **[Q2] Supplementary information on the basic tools of Bayesian nonparametric methods for a diverse readership.** > I do not know what a stick breaking process or Chinese restaurant process is? Thank you for raising this important point. We want this paper to be read by a diverse audience, not just the BNP community, so we would like to add some background on the basic tools in the appendix. Both the stick-breaking process and the Chinese restaurant process have been used in the BNP community for more than 20 years as the most fundamental tools for representing random *partitions*. The properties of these two models can be summarized as follows:

| Model | Target combinatorial object | Exchangeability | Projectivity | Parameter dimension |
| ---- | ---- | ---- | ---- | ---- |
| Stick-breaking process | Partition | OK | OK | Infinite |
| Chinese restaurant process | Partition | OK | OK | Finite (for finite observations) |

Projectivity and exchangeability are generally the two conditions required of BNP models. Roughly speaking, they can be seen as conditions guaranteeing that the BNP model can be defined as a stochastic model on an infinite-dimensional parameter space. Plainly, *exchangeability* refers to the requirement that the labels of the target objects being represented (e.g., the indices of the data) do not affect the probability under the model.
*Projectivity* refers to the self-similarity of the model with respect to the size of the target objects. **Reason for our use of these tools.** In this paper, we aim to exploit the fact that the stick-breaking process and the Chinese restaurant process provide projectivity and exchangeability for random *partitions*, and extend this to a more general case, random *permutrees*. Therefore, we make frequent use of these tools in the paper.

| Model | Target combinatorial object | Exchangeability | Projectivity | Parameter dimension |
| ---- | ---- | ---- | ---- | ---- |
| Marked stick-breaking process | **Permutree** (which contains the partition as a special case) | OK | OK | Infinite |

Thank you for your important remarks. Thanks to your advice we can improve our exposition for a diverse readership. --- **[Q3] Binary sequence as a special case of permutree.** > For Fig. 1(l), It is not obvious how the binary number is generated. Could one say if the line from one node to the next goes up it yields 1 and if it goes down it yields 0? Your understanding is perfectly correct. Just to be sure, we reiterate Figure 1(l) of the main body of the paper here. We read the vertical indices of the permutree in Figure 1(l) (shown in blue) from left to right, yielding '462153'. We obtain a binary sequence by comparing the size relationship between adjacent left-right pairs of this sequence from the beginning and outputting '0 if the left side is larger and 1 if the right side is larger'. More specifically, - 4<6: => Output: 1 - 6>2: => Output: 0 - 2>1: => Output: 0 - 1<5: => Output: 1 - 5>3: => Output: 0 As a result, Fig. 1(l) yields the binary sequence "10010". --- **[Q4] The meaning of the 'tiny real value' $\epsilon$.** > "tiny real value" how tiny is tiny? In practice, it makes sense to have $\epsilon<1/N$ for the size $N$ of the observation data. More precisely, however, the propositional assertion itself holds for any $0<\epsilon<1$.
We will make this point clear in the revised version.
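The adjacent-pair comparison rule confirmed in [Q3] above can be sketched as follows (a toy check of the '462153' example, our own code, not from the paper):

```python
def indices_to_binary(indices):
    """Compare adjacent left-right pairs of the vertical indices:
    output '1' if the right value is larger, '0' if the left value is
    larger (the rule from [Q3] above)."""
    return "".join("1" if left < right else "0"
                   for left, right in zip(indices, indices[1:]))

# The vertical indices of the permutree in Fig. 1(l), read left to right.
print(indices_to_binary([4, 6, 2, 1, 5, 3]))  # -> 10010
```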
Rebuttal 1: Rebuttal: --- **Thanks to all involved. -** We are very grateful to all the Reviewers who spent their valuable time reading our paper and giving us constructive and favorable comments and suggestions. We are also deeply grateful to the Area Chairs (ACs) and Program Chairs (PCs) who, through their professional management, have given us this valuable opportunity. --- **Supplementary one-page PDF. -** We refer to more intuitive diagrams for supplementary purposes when responding to some of the questions of each reviewer. These are referred to in our responses as the '*one-page PDF in global responses*' and we hope you will make use of them. --- **We are happy to address additional concerns as well. -** We will be pleased to address and resolve any new points of concern raised during the discussion phase (until 13 August). We gratefully appreciate in advance the time that reviewers, ACs and PCs will continue to devote to our paper. Pdf: /pdf/688b09440f170c6df5c06c70b500f75adfcc87d3.pdf
NeurIPS_2024_submissions_huggingface
2024
Zero-Shot Transfer of Neural ODEs
Accept (poster)
Summary: The work explores the use of neural ODEs as basis functions for function encoders. This requires dealing with the additional integration step, with the weighted combination of the obtained ODEs representing the behaviour of the function to approximate via its integral. With an inner product and its tractable Monte Carlo estimation scheme, the authors derive an algorithm to train basis functions which span the space of dynamical systems, where each basis function is a neural ODE. The FE + ODE approach is put to the test against ODE and FE schemes for fitting a space of Van der Pol systems, long-term prediction for RL, as well as quadrotor MPC control, with improvements in zero-shot performance against the baselines used. Strengths: - The authors deal with a very important problem in machine learning and all the more in robotics: that of zero-shot generalisation to changes in dynamics/environment (here modelled via hidden parameters) - Interesting extension of the FE paradigm with a clear derivation of the required additions to adapt the basis function algorithm to neural ODEs - Good readability and presentation - Relatively significant combination of neural ODEs and Function Encoders with interesting performance improvements Weaknesses: - Some clumsiness in the math notation: in equation (3), $N$ is used in the denominator when it should be $m$ instead. - The assumption of orthogonality for the validity of the coefficients is vital. However, neither the authors of [13] nor this paper share any analysis of the validity of this assumption. It would be interesting to see how well it holds in practice (computing inner products of the obtained basis functions/ODEs should be easy) and how many iterations are required to reach orthogonality in the examples provided.
- The introductory paragraph of section 3, where the link between a new unknown function $f$, its observed trajectory $\mathcal{D}$ and the ODE basis functions is made, could benefit from more motivation and explanation. It is not immediately clear why reasoning at the derivative level while dealing with integral trajectory data is more advantageous. Perhaps a figure would help, as the manuscript is text heavy. - Explanations of the MuJoCo results are not very clear to me; they seem more like observations than interpretations. Why is the oracle unstable (does it require more data and training given the conditioning on hidden params)? What explains the differences between the two experiments: FE + RES does well on half cheetah (better than FE + NODE) and terribly on ant? Where does the lack of inductive bias intervene in this case? Technical Quality: 3 Clarity: 3 Questions for Authors: - The term zero-shot does indeed refer to the ability of networks to perform in novel circumstances without retraining, and does apply here. However, it can be slightly misleading, as adaptability here requires analysis of new data from the new setting. Can the authors elucidate the scales involved in the tradeoff? In other words, when does it become interesting to train 100 base models (on the 100 datasets) to gain a deployment advantage that still requires new data to function, versus fine-tuning one model or retraining? - Do the authors have any insights on changes in levels of performance depending on whether the new ODE is in the convex hull of the available basis or outside it? (For example in the Van der Pol case, if the basis datasets contain trajectories for values of $\mu$ between 0.1 and 3, can the system perform for $\mu = 5$?) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: - A major numerical challenge when working with continuous-time neural networks is that of integration, with a plethora of solvers and schemes available.
The authors touch upon this topic in their limitations section: the impact of the integration horizon selection on behaviour predictability, as well as the compute overhead involved (also in appendix C). It would have been interesting to give readers a better sense of the tradeoffs, with numerical comparisons that go beyond verbal description. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The assumption of orthogonality for the validity of the coefficients is vital. However, both the authors in [13] and in this paper do not share any analysis on the validity of this assumption. It would be interesting to see how well this holds in practice (computing inner product of obtained basis functions/ODEs should be easy) and how many iterations are required to reach orthogonality in the examples provided.** Please see the response to all reviewers, section 3. Also, section A.5 in [13] may be of interest. **The introductory paragraph of section 3 where the link between a new unknown function $f$, its observed trajectory $D$, and ODE basis functions is made, could benefit from more motivation and explanation. It is not immediately clear why reasoning at the derivative level while dealing with integral trajectory data is more advantageous. Perhaps a figure would help as the manuscript is text heavy.** To clarify: in continuous-time dynamical systems, the dynamics is inherently a differential equation that is being integrated through time. However, system observations typically occur at discrete time intervals and do not include derivative measurements. Hence, neural ODEs are advantageous because they allow one to learn a model of the true system derivatives while using integral trajectory data. **Explanations of the MuJoCo results are not very clear to me, they seem more like observations than interpretations. Why is the oracle unstable (does it require more data and training given the conditioning on hidden params)? What explains the differences between both experiments: FE + RES does well on half cheetah (better than FE + NODE) and terrible on ant? Where does the lack of inductive bias intervene in this case?** The apples-to-apples comparison is between FE + Res. and FE + NODE + Res. These two approaches differ only in basis function architecture, and FE + NODE + Res. consistently outperforms.
The other relevant comparison is FE + NODE + Res. and the Oracle. These two approaches both benefit from the inductive bias of neural ODEs, and both have information on the hidden parameters. However, the format of the information is different. The Oracle gets this information explicitly as an additional input, whereas FE + NODE + Res. gets this information implicitly via the coefficients of the basis functions. We expect that the Oracle performs poorly because it must generalize to an entire space of dynamics with one NODE, which is a very complex function to learn. We believe that Ant is much more difficult and varied than Cheetah, and the relative ease of Cheetah is likely why both Res. methods perform so well on it. We have improved the discussion in section 4.2 to include these details. **The term zero-shot does indeed refer to the ability of networks to perform in novel circumstances without retraining, and does apply here. However, it can be slightly misleading as adaptability here requires analysis of new data from the new setting. Can the authors elucidate the scales involved in the tradeoff? In other words, when does it become interesting to train 100 base models (on the 100 datasets) to gain a deployment advantage that still requires new data to function, versus fine-tuning one model or retraining?** There are two things to note. Rather than training a function for every dataset, we train $k$ basis functions to span all datasets. So, if we have 100 datasets but only want to train 10 basis functions, this is possible without issue. Secondly, our approach does require a small amount of online data. This data is necessary for any method, because some information is needed to identify the current dynamics. For example, finetuning a neural ODE on online data is another approach, but it also requires online data.
Indeed, finetuning a model likely requires even more data than our approach, and incurs a significant computational overhead that takes far longer. In contrast, we can compute the coefficients of the basis functions in milliseconds, as the inner product calculation is effectively a sample mean, and then instantly adapt our predictions based on the data. **Do the authors have any insights on changes in levels of performance whether the new ODE is in the convex hull of the available basis or outside it? (for example in the Van der Pol example, if the basis datasets contain trajectories for values of $\mu$ between 0.1 and 3, can the system perform for $\mu=5$?)** Examining OOD generalization is an interesting future direction. Please see the response to all reviewers, section 2, for the experiment you described. **A major numerical challenge working with continuous time neural networks is that of integration, with a plethora of solvers and schemes available. The authors touch upon this topic in their limitations sections and the impact of the integration horizon selection on behaviour predictability as well as the compute overhead involved (also in appendix C). It would have been interesting to give readers a better sense about the tradeoffs with numerical comparisons that go beyond verbal description.** We leverage RK4, and there are indeed more accurate integrators available. However, there is inherently a tradeoff with respect to both training time and execution time. Integrators such as adaptive step-size solvers can potentially make 20 or more calls to the neural ODE during a forward pass, while RK4 makes only 4. The increased number of neural ODE forward passes greatly increases memory usage and compute time. We experimented with better integrators, but ultimately found this tradeoff to be unfavorable. We find RK4 to be the right balance of speed and accuracy for this work.
We have added an additional section of the appendix discussing the choice of integrator. --- Rebuttal Comment 1.1: Comment: I thank the authors for the elements presented in the rebuttal and general response. They clear up some points of confusion and improve in my opinion the understanding of the work, although they do not shed light on new strengths of the approach that might have gone unnoticed in my initial review. Thus, the current score still represents my appreciation of the work.
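As a side note on the integrator discussion in this thread: a classical RK4 step makes exactly four calls to the dynamics function per step, versus the 20 or more an adaptive solver may make. A minimal sketch (generic textbook RK4 on a toy scalar ODE, not the authors' code):

```python
def rk4_step(f, t, x, h):
    """One classical Runge-Kutta step for dx/dt = f(t, x): four calls to f."""
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h / 2 * k1)
    k3 = f(t + h / 2, x + h / 2 * k2)
    k4 = f(t + h, x + h * k3)
    return x + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Sanity check on dx/dt = x with x(0) = 1: x(1) should be close to e.
x, h = 1.0, 0.01
for i in range(100):
    x = rk4_step(lambda t, y: y, i * h, x, h)
print(x)  # close to 2.71828...
```

In an FE + NODE setting, f would be a neural network, so the per-step cost is dominated by these four forward passes.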
Summary: The paper presents a novel framework for the zero-shot transfer of neural ODEs by leveraging function encoders to represent a space of dynamical systems. It demonstrates the method's effectiveness in adapting to unseen environments without retraining, using MuJoCo and quadrotor experiments. Strengths: The paper demonstrates interesting ideas by introducing a novel framework for zero-shot transfer of neural ODEs using function encoders. This approach enables adaptation to unseen scenarios without retraining, using neural ODEs and function encoders. This research has the potential to enhance the adaptability and safety of autonomous systems, bridging the gap between training and testing data. Weaknesses: The paper presents a promising framework for zero-shot transfer of neural ODEs, but there are several areas for improvement. Firstly, the reliance on a large and diverse dataset for training is a significant limitation. The approach requires extensive data that spans the entire function space of possible dynamics, which may not always be feasible. This dependency on comprehensive datasets should be addressed by exploring data-efficient learning methods or leveraging transfer learning techniques. Secondly, the computational overhead involved in training multiple neural ODEs is substantial. This might hinder the scalability and real-time applicability of the proposed method. The authors could investigate more efficient training algorithms or consider approximations that reduce computational costs without compromising accuracy. Thirdly, the paper does not explicitly enforce the orthogonality of the basis functions, relying instead on implicit regularization through the loss function. This might lead to a suboptimal representation of the function space, affecting the model's performance. Moreover, the experiments lack diversity in evaluating the model's robustness to completely unseen environments or significantly different conditions.
Including more varied test scenarios or stress tests could provide a deeper understanding of the model's adaptability and limitations. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. The computational overhead for training multiple neural ODEs might be significant. How does this affect the scalability and real-time applicability of your method? 2. The paper does not enforce orthogonality explicitly for the basis functions, which might lead to suboptimal representation. Have you evaluated the levels of orthogonality (and their relationship with performance) in your method? 3. The experiments lack diversity in evaluating the model's robustness to completely unseen environments or significantly different conditions. How confident are you in your model’s adaptability in such scenarios? 4. The evaluation metrics used in the MuJoCo experiments are primarily focused on prediction accuracy. Have you considered additional metrics that might better capture your model's practical performance? For example, task-specific performance such as scores or cumulative rewards. 5. It seems to lack a sufficient variety of baselines for comparison. Have you considered additional zero-shot / few-shot or multitask / meta-learning studies? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Refer to Weaknesses and Questions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **The paper presents a promising framework for zero-shot transfer of neural ODEs, but there are several areas for improvement. Firstly, the reliance on a large and diverse dataset for training is a significant limitation. The approach requires extensive data that spans the entire function space of possible dynamics, which may not always be feasible. This dependency on comprehensive datasets should be addressed by exploring data-efficient learning methods or leveraging transfer learning techniques.** Any approach that aims to model a large set of dynamics from online data inherently must learn from a large dataset. This is because the space of functions is infinite dimensional, and this large, diverse dataset is effectively teaching the model which subspace is most important. Therefore, the ability to infer behavior from data necessitates the presence of a large training dataset. Our approach learns basis functions to span the functions present in the training set. However, it’s important to note that any function in the span of the basis can be perfectly represented, and thus our approach is an efficient transfer learning technique for this domain. However, examining strategies for further improving data efficiency is an interesting future direction. For example, it may be possible to augment the training dataset with additional common functions, such as linear transformations between inputs and outputs or trigonometric functions, as this would increase the diversity of the training dataset without requiring more data. **Secondly, the computational overhead involved in training multiple neural ODEs is substantial. This might hinder the scalability and real-time applicability of the proposed method. The authors could investigate more efficient training algorithms or consider approximations that reduce computational costs without compromising accuracy.** Overhead is indeed an important consideration, and we have two strategies to address this. 
During the offline training phase, the key consideration is memory usage. This is because neural networks converge faster with larger batch sizes, and so leveraging the largest batch size possible is key. To address this, we run the basis functions sequentially during training. Since only one basis function is being called at a time (both during the forward pass and back-propagation), the memory overhead is the same as a single neural ODE. This does increase the training time by a factor of $k$. However, as neural ODEs alone cannot model multiple dynamics functions, we view this cost as acceptable for the benefits our approach provides. During the online phase, memory usage is inherently reduced as all memory usage results from online data only. So, memory is not a concern. Instead, the compute speed is the most important factor for real-time control. To address this, we run the basis functions in parallel. As there is no dependence between them, each basis function can run at the same time. Thus, by running the basis functions in parallel, there is no additional compute time overhead. Therefore, this method is equivalent to a vanilla neural ODE with respect to real-time compute speeds. We have improved the discussion in the appendix, section C, to include this additional information. **Thirdly, the paper does not explicitly enforce the orthogonality of basis functions, relying instead on implicit regularization through the loss function. This might lead to suboptimal representation of the function space, affecting the model's performance.** Please see the Response to all Reviewers, section 3. **Moreover, the experiments lack diversity in evaluating the model's robustness to completely unseen environments or significantly different conditions. Including more varied test scenarios or stress tests could provide a deeper understanding of the model's adaptability and limitations.** Verifying OOD generalization capabilities is an interesting future direction. 
Please see the response to all reviewers, section 2. **The experiments lack diversity in evaluating the model's robustness to completely unseen environments or significantly different conditions. How confident are you in your model’s adaptability in such scenarios?** Please see the response to all reviewers, section 2. It’s worth noting that the test environments are sampled from the same distribution of environments, but are unseen by the model during training. **The evaluation metrics used in the MuJoCo experiments are primarily focused on prediction accuracy. Have you considered additional metrics that might better capture your model's practical performance? For example, task-specific performance such as scores or cumulative rewards.** The MuJoCo experiments are designed specifically to highlight prediction accuracy. We agree that other metrics might be useful as well, and so we leverage a task-specific efficiency metric in the quadrotor experiments. For more information see section 4.3, especially the last paragraph. **It seems to lack a sufficient variety of baselines for comparison. Have you considered additional zero-shot / few-shot or multitask / meta-learning studies?** We are unaware of any other zero-shot techniques for neural ODEs. It is possible to use a transformer for the same type of data, however transformers suffer notable drawbacks. First, their memory usage is quadratic, and so they are unable to make use of large, online datasets. Second, their forward pass time is quite slow, and so they are not suitable for real time model-predictive control as many forward passes must be made every timestep, e.g. 100s of forward passes in 30 milliseconds. Meta-learning techniques are similar in that they adapt models given instance-specific data. However, these techniques perform additional finetuning or retraining based on this data, which makes them ill-suited for real-time control. 
--- Rebuttal 2: Comment: The authors' rebuttal and general response have improved my understanding and addressed many of my concerns. As a result, I have revised my score to 5.
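The memory/compute trade-off discussed in the rebuttal above (sequential basis evaluation offline, parallel evaluation online) can be made concrete with a minimal NumPy sketch. The basis functions below are random stand-ins of our own, not the paper's learned neural ODE bases; the point is only that the two evaluation orders produce identical predictions, so the choice affects only peak memory and wall-clock time.

```python
import numpy as np

def make_toy_basis(k, dim, seed=0):
    """k hypothetical basis 'dynamics functions' x -> g_i(x) (random stand-ins)."""
    rng = np.random.default_rng(seed)
    mats = rng.standard_normal((k, dim, dim))
    return [lambda x, A=A: np.tanh(A @ x) for A in mats]

def predict_parallel(basis, coeffs, x):
    """Evaluate all k basis functions at once and combine: fast, memory-heavy."""
    stacked = np.stack([g(x) for g in basis])  # shape (k, dim)
    return coeffs @ stacked

def predict_sequential(basis, coeffs, x):
    """Evaluate one basis function at a time, accumulating the weighted sum:
    k forward passes, but peak memory of a single model."""
    out = np.zeros_like(x, dtype=float)
    for c, g in zip(basis and coeffs, basis) if False else zip(coeffs, basis):
        out += c * g(x)
    return out
```

Either order yields the same dx/dt estimate, which is why the rebuttal can run bases sequentially during training (to fit larger batches in memory) and in parallel at execution time (for real-time speed) without changing the model.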
Summary: This paper proposes a method to learn the dynamics of autonomous systems in a few-shot manner. The core assumption is that the dynamics function dx/dt=f(x) of a new system can be modeled by a linear combination of basis dynamics functions. The method involves two stages. In the offline stage, the method learns a set of basis functions with neural ODEs. In the online stage, the method uses incoming dynamics data to fit the linear weights, which, when combined with the learned basis, can be used to model the new system. The authors evaluated the effectiveness of the method on the Van der Pol oscillator system (with varying parameters) and MuJoCo Ant. Strengths: 1. The writing is very clear. Readers with a reasonable background in math and ODEs should understand this paper reasonably well. 2. The core of the method is intuitive, and the method is sound. Basis functions with linear weighting for quick adaptation are well grounded in many fields. 3. I appreciate that the authors explain the limitations and scope of the project, and there is no overclaiming. Weaknesses: 1. My main criticism of the paper is the technical contribution. The problems this system can solve seem to be constrained to systems with limited variation in parameters, where the offline dataset & online system share a high level of similarity. While the authors explained how they apply NODE very clearly, I think using NODE is not a fundamental contribution because you can also have other sequence models trained to predict residuals. The idea of basis functions has also been explored in much prior literature. 2. The title is misleading - you are doing quick online adaptation (few-shot) instead of a zero-shot setting. A few trajectories of online data are required to adapt to the new system. 3. There are a lot of baselines the authors should look into, since few-shot learning / quick adaptation is a long-standing topic of research. For example, other basis function methods and meta-learning methods. 
I understand that the authors picked a setting where time is continuous, but the MuJoCo Ant environment is also traditionally studied as discrete, so meta-learning + some sequence prediction model should be applicable here. If you can provide a convincing argument about why those prior works aren't applicable, that is also fine. 4. As I suggested in the questions section, there are many perspectives the authors could dive deeper into to improve the paper. Minor: 1. In line 114, the reasoning doesn't provide grounding for the orthogonality question - the training objective makes them span the space assuming diverse enough dynamical systems - but this is unrelated to orthogonality! Are you saying that non-orthogonal is okay as long as they span the space? 2. It would be good to give more intuitive visualizations of estimation error in MuJoCo Ant, e.g., render the predicted trajectories as video. 3. An ablation on the number of basis functions used would help understanding. Technical Quality: 3 Clarity: 4 Questions for Authors: 1. In line 165, the authors said "Given data collected online from a single trajectory". I am curious to see ablations of how the method improves as the amount of online data grows. 2. In figure 3, error goes up as the number of look-ahead steps increases. While this is partially expected due to compounding error, how much of it should be attributed to the limited expressiveness of the linear basis? E.g., I can do an experiment where I train a NODE with a very large amount of data, not just 200 example points, and roll it out. You will witness how the MSE loss increases and gain insight into an upper bound for prediction accuracy that is not due to limited data / a limited number of basis functions. Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Please see my points in the weaknesses section Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
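The two-stage scheme described in this review's summary (offline basis learning, online fitting of linear weights) can be sketched concretely for the online phase: given (state, derivative) pairs from a short trajectory, the weights over a fixed basis are recovered by least squares with zero gradient steps. The random-feature "basis" below is a hypothetical stand-in of our own, not the paper's trained neural ODE bases.

```python
import numpy as np

rng = np.random.default_rng(0)
k, dim, n = 4, 2, 200  # number of basis functions, state dimension, online samples

# Hypothetical fixed "basis dynamics functions"; returns a (k, dim) stack g(x).
A = rng.standard_normal((k, dim, dim))
def basis_eval(x):
    return np.tanh(A @ x)

# A new system whose dynamics happen to lie in the span of the basis.
b_true = rng.standard_normal(k)
def f_new(x):
    return b_true @ basis_eval(x)

# Online phase: observe states and derivatives, then solve for the weights.
xs = rng.standard_normal((n, dim))
Phi = np.stack([basis_eval(x).T for x in xs]).reshape(n * dim, k)  # design matrix
ys = np.stack([f_new(x) for x in xs]).reshape(n * dim)             # observed dx/dt
b_hat, *_ = np.linalg.lstsq(Phi, ys, rcond=None)
```

Because `f_new` lies in the span, `b_hat` matches `b_true` up to numerical precision; a function outside the span is instead approximated by its projection onto the learned subspace, which is the source of the OOD error discussed in the rebuttals.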
Rebuttal 1: Rebuttal: **My main criticism of the paper is the technical contribution. The problems this system can solve seem to be constrained to systems limited variation in parameter, where the offline dataset & online system share a high level of similarity. While the authors explained how they apply NODE very clearly, I think using NODE is not a fundamental contribution because you can also have other sequence models trained to predict residue. The idea of basis function has also been explored by many prior literatures.** * We use hidden environmental parameters as an example problem setting because it is frequent in robotics. However, we never make use of the hidden parameters directly. As mentioned in Section 3, this algorithm can be applied to a space of dynamical systems that arises for *any* reason. * Generalization to OOD online systems is indeed an interesting direction. Please see section 2 of the response to all reviewers for a discussion. * We are interested in model-based control. Sequence prediction models, e.g. transformers, are often too computationally complex for model predictive control. SOTA control algorithms such as iCEM MPC need hundreds of forward passes per control-action. Transformers are well known to have quadratic cost with respect to input space, and their forward pass is much slower than neural ODEs, and so they are not suitable for this setting. Meta-learning algorithms can be applied under the same data assumptions, but often finetune or retrain the learned model based on new data. This training period is not amenable to real-time control. Lastly, there is indeed a plethora of analytical basis function methods. However, analytical basis functions often scale extremely poorly to high-dimensional function spaces. Other approaches like SINDy have similar scaling issues, but also require a potentially lengthy sparse coefficient identification procedure which can be too time-consuming for real time control. 
In contrast, our method enjoys the same mathematical interpretation as these approaches, while scaling to high-dimensional problems due to the use of a relatively small number of neural network basis functions. We have extended the related works to include information on why common sequence models and meta-learning are not applicable to this setting. **The title is misleading - you are doing quick online adaptation (few shot) instead of a zero-shot setting.** The term “zero-shot” means different things to different communities. For example, the LLM community will often say “few-shot prompting” to mean operating a model with a few examples as input, and no finetuning. However, for ODEs, dynamics modeling, and reinforcement learning, “Zero-shot” is the ability to leverage unseen data at execution time to improve model performance without additional training, where the zero comes from zero gradient steps. See “Does Zero-Shot Reinforcement Learning Exist?” (Ahmed Touati, Jérémy Rapin, Yann Ollivier, 2022), “Unsupervised Zero-Shot Reinforcement Learning via Functional Reward Encodings“ (Kevin Frans, Seohong Park, Pieter Abbeel, Sergey Levine, 2024), “Zero-Shot Reinforcement Learning via Function Encoders” (Tyler Ingebrand, Amy Zhang, Ufuk Topcu, 2024), etc. **A few trajectories of online data is required to adapt to the new system.** For the Mujoco examples, we only leverage 200 data points. For a system running at 30 Hz, this is equivalent to only ~7 seconds of data rather than a full trajectory. This data can then be used to identify the system dynamics immediately at runtime. We have improved the experiment description in section 4.2 to clarify this. **In line 114, the reasoning doesn't provide grounding to the orthogonality question, ..., Are you saying that non orthogonal is okay as soon as they span the space?** Thank you for your comment, we have revised the wording of line 114 for clarity. 
Please see the response to all reviewers, section 3 for more information on orthonormality. **It would be good to give more intuitive visualizations of estimation error in mujoco Ant. e.g. render the predicted trajectories as video.** Good idea, we will add screenshots to the appendix. **Ablation about the number basis functions used would be helpful to understanding.** We agree that the effect of the number of basis functions is an interesting ablation. Please see the response to all reviewers, section 1a. **In line 165, the authors said "Given data collected online from a single trajectory". I am curious to see ablations how the method improves as the amount of online data grow.** We agree that examining the prediction accuracy as the amount of data increases is an interesting experiment. Please see the response to all reviewers, section 1b. **In figure 3, error goes up as the number of look-ahead steps increases. While this is partially expected due to compounding error, how much of it shall be attributed to limited expressiveness of linear basis. e.g. I can do an experiment where I train a NODE with many many data, not just 200 example points, and roll it out. You will witness how MSE loss increases and gain insights about a upper bound for prediction accuracy that's not due to limited data / limited number of basis.** Please see the response to all reviewers, section 1, which may provide some more intuition on the effect of the example data and number of basis functions on the performance. While using more example data slightly improves performance, a key feature of our approach is that it can operate in low data settings. Additionally, the method is insensitive to the number of basis functions, which implies that the basis functions are more than sufficient to express the dynamics. Lastly, “NODE” and “Oracle” are also neural ODE models, and so they suffer from the same compounding error problems as our approach. 
Thus, those approaches provide a reference for how much of the error is due to the inherent difficulty of dynamics prediction. --- Rebuttal Comment 1.1: Comment: I acknowledge that I've read the rebuttal and general response. Thank you for the ablations; they are helpful to understanding. Here are some additional questions: 1. If the quality of prediction isn't sensitive to the number of basis functions & data after they are increased to a certain degree, does that mean the method won't be able to scale up further to bring the prediction error to 0, even for a deterministic system? I understand that every predictive model has errors; I am simply looking for some analysis here - what's the source of the remaining error? 2. In my review, I mentioned that > The problems this system can solve seem to be constrained to systems with limited variation in parameters. While the authors did an ablation on OOD parameters, it's still limited to the family of OOD that's reflected via parameters. I am wondering whether the system can generalize to OOD dynamics that completely change their form (e.g. walking to jumping), not just parameters --- Reply to Comment 1.1.1: Comment: **If the quality of prediction isn't sensitive to the number of basis functions & data after they are increased to a certain degree, does that mean the method won't be able to scale up further to bring the prediction error to 0, even for a deterministic system? I understand that every predictive model has errors; I am simply looking for some analysis here - what's the source of the remaining error?** This is a nuanced and complicated question. The biggest thing that likely leads to error is the fact that the system (MuJoCo) is not strictly continuous due to contact forces. Therefore, any method that uses a neural network will always incur some error in this environment. Another factor is that the space of functions is infinite-dimensional. 
Therefore, for any finite number of basis functions, there is inherently going to be error due to the unrepresented dimensions. We do expect to see diminishing returns as the number of basis functions increase, as some of these dimensions are less important for predicting the dynamics than others, and this aligns with the empirical results. We also leverage RK4 as the integrator for the neural ODE. RK4 is a fourth-order method, and higher-order terms are truncated. This choice of integrator therefore imposes error, which increases for larger time horizons. More accurate integrators are available, but come at the cost of compute time. Lastly, numerical precision may play a small role here due to implementation details. Any numerical precision errors will compound as more operations occur. These errors can affect both training, where they effectively appear as noise added to the gradients, and during execution. **In my review, I mentioned that “The problems this system can solve seem to be constrained to system limited variation in parameter”. While the authors did an ablation on OOD parameters, it's still limited to the family of OOD that's reflected via parameters. I am wondering whether the system can generalize to OOD dynamics that completely changes its form (e.g. walking to jumping), not just parameters** No, our approach is not limited to variation in parameters. As stated in the paper in section 3, we are able to handle variations due to changes in the underlying physics model. The only theoretical requirement is that the functions exist in the same space. We believe the MuJoCo examples demonstrate this as our model can generalize beyond simple parameter varying systems, where the robot’s physical shape is changing. However, your question highlights two nuanced questions. The first is a shift in the distribution of states seen between a robot that is walking and jumping. A jumping robot is likely to experience different states than when it is walking. 
If these states have never been trained on, then any learned model cannot hope to accurately model these dynamics (without leveraging prior knowledge). In other words, a purely learned model which leverages no prior information needs its training set to cover the state space. The second question is if the learned dynamics can generalize across behaviors. Our model has actions as an input, and therefore it can accurately predict a given transition even if the underlying policy was not used to collect training data. Therefore, generalization across behaviors is possible.
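The RK4 truncation-error point raised in this exchange refers to the classical fourth-order Runge-Kutta integrator commonly used to roll out neural ODEs. A minimal sketch (the linear test dynamics below are our own illustration, not the paper's system):

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step for dx/dt = f(x)."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Integrate dx/dt = -x from x(0) = 1 to t = 1; the exact solution is exp(-1).
x = np.array([1.0])
dt, steps = 0.01, 100
for _ in range(steps):
    x = rk4_step(lambda s: -s, x, dt)
```

RK4 matches the Taylor expansion of the flow up to fourth order, so the local error per step is O(dt^5) (global error O(dt^4)); the truncated higher-order terms are the integrator-induced error mentioned above, and it compounds over longer prediction horizons.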
Summary: The paper aims to address the challenge of zero-shot transfer and adaptation. The authors propose tackling this challenge by learning a dynamics space spanned by neural ODE basis functions, which can then be used for rapid identification and adaptation to dynamics at inference time without additional training. The paper demonstrates the efficacy of the proposed approach, using both a simpler oscillator system and scaled up simulated robotics environments. Strengths: - While this is not my area of expertise, to the best of my knowledge, the method proposed in the paper and the corresponding experiments are novel. - The paper is well written and clear, and fast adaptation is a crucial problem to the field, especially in the field of robotics and control. - The experiments seem well designed and thorough, showing the contribution of each of the method components when ablated, and demonstrating the efficacy of the approach for control via MPC. Weaknesses: - To place the results in context with prior work, it would be additionally helpful for the robotics experiments to show comparisons to other methods that enable adaptation (e.g. training a model free method with domain randomized parameters). - It’s not clear how well the method will scale to more complex dynamics, or when trying to generalize beyond the learned basis functions (e.g., if they are not expressive enough or span the relevant spaces for a dynamical system, or if insufficient data is used to learn the dynamics space). Technical Quality: 3 Clarity: 3 Questions for Authors: See suggestions in the Weaknesses section above. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors address method limitations adequately in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. See below for responses to your suggestions. **To place the results in context with prior work, it would be additionally helpful for the robotics experiments to show comparisons to other methods that enable adaptation (e.g. training a model free method with domain randomized parameters).** Domain randomization allows RL algorithms to find policies which are robust to randomized parameters, however the policy does not adapt to those parameters. Our approach is inherently solving a different problem, in that our controller can adapt to the specific dynamics it is experiencing. This is done by using a small, online dataset to identify the coefficients of the current dynamics function. Then, the basis functions (in combination with the coefficients) are used to model the current system dynamics. A controller using this system model therefore adapts to the current dynamics in an online fashion. **It’s not clear how well the method will scale to more complex dynamics, or when trying to generalize beyond the learned basis functions (e.g., if they are not expressive enough or span the relevant spaces for a dynamical system, or if insufficient data is used to learn the dynamics space).** The ablation in the response to all reviewers, section 1a, shows that the basis functions are sufficiently expressive to span the space of dynamics present in the system. Furthermore, as $L_2$ is a Hilbert space, any function in $L_2$ can be perfectly described as a linear combination of basis functions. In other words, basis functions are sufficiently expressive for the vast majority of problem settings. Generalization beyond the function space in the dataset is a fascinating question. Please see the response to all reviewers, section 2 for more information.
Rebuttal 1: Rebuttal: # Response to all Reviewers: We thank the reviewers for their comments and keen insights. We have made the following major changes to the paper in response to their feedback. ## **1. Hyper-Parameters**: We have added an ablation on both the number of basis functions and the number of example data points. ### **1a. Number of Basis Functions**: We ablate different numbers of basis functions on predicting dynamics in the Half-Cheetah environment. We find that the algorithm is generally insensitive to the number of basis functions, and performance eventually decays as the number of basis functions approaches 0. See the attached PDF for the corresponding figure. The results indicate that 100 basis functions are sufficiently expressive for the dynamics of the ablated environment. ### **1b. Number of Example Data Points**: We ablate the sensitivity to the number of data points on predicting dynamics in the Half-Cheetah environment. We find that increasing data only slightly improves performance and that there are diminishing returns. See the attached PDF for the corresponding figure. Note that if the function is within the span of the basis, then its prediction accuracy only depends on the error between the true basis function coefficients and the Monte Carlo estimate of the coefficients. As the estimate is computed via a sample mean, the error in this prediction decreases to 0 as more data is collected due to the law of large numbers. ## **2. Generalization**: For dynamics functions outside of the training set, it can be shown that any function within the span of the basis can be perfectly computed. Functions that lie outside the span of the basis may still have low error if they are sufficiently close to the learned subspace. To illustrate this point, we train the Van der Pol example on $\mu \in [0.1, 3.0]$, and evaluate it on $\mu=4.0$. We observe that the model is still able to reasonably approximate this out-of-distribution $\mu$. 
See the figure in the attached PDF. Exploring the exact OOD generalization capabilities of this algorithm, along with possible error bounds, is an active direction of future work, but is outside the scope of this paper. ## **3. Orthonormality**: This discussion relates to the function encoder algorithm of the paper “Zero-Shot Reinforcement Learning via Function Encoders”. Nonetheless, we present the following discussion for clarity as it is relevant to this work. Consider a set of basis functions $g_1, …, g_k$. Suppose that $g_1, …, g_k$ is not orthonormal. Consider a function $f$, and suppose $f$ happens to be in the span of $g_1, …, g_k$. Then $f$ can be expressed as $f=b^\top g$, where $b$ is a set of coefficients and $g$ is the concatenation of $g_1, …, g_k$. Consider the coefficients calculated via the inner product, $$c = \begin{bmatrix} \langle f, g_1 \rangle \\ \vdots \\ \langle f, g_k \rangle \end{bmatrix} = \begin{bmatrix} \langle b^\top g, g_1 \rangle \\ \vdots \\ \langle b^\top g, g_k \rangle \end{bmatrix} = \begin{bmatrix} \langle g_1, g_1 \rangle & \cdots & \langle g_1, g_k \rangle \\ \vdots & \ddots & \vdots \\ \langle g_k, g_1 \rangle & \cdots & \langle g_k, g_k \rangle \end{bmatrix} b,$$ where the matrix on the right is the Gram matrix of the basis. Consider the loss function $l = \vert f - \hat{f} \vert^2 = \vert f - c^\top g \vert^2$. If $c=b$, then the loss will be 0. Observe that $c=b$ if and only if the Gram matrix is identity, and the Gram matrix is identity only for an orthonormal basis. In other words, the minimizer of the loss function is an orthonormal basis. Thus, in order for gradient descent to decrease loss, the basis functions converge towards orthonormality. This intuition is empirically validated in “Zero-Shot Reinforcement Learning Via Function Encoders”, Appendix section A.5. Please see that paper for more information. As a final note, as mentioned in section 2.2, line 106, the coefficients can be computed via least squares after training. 
Least squares does not require an orthonormal basis as it uses the Gram matrix to account for the inner products between basis functions. In summary, it has been shown in prior works that the basis functions converge towards orthonormality. As stated in section 2.2, we may also use least squares at execution time, which sidesteps the issue. We have added this discussion to the appendix. Pdf: /pdf/d325f44258715830ff6ae4b7b36cb0c2b4fa12d2.pdf
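The Gram-matrix argument in section 3 above can be checked numerically with discretized functions (sample vectors, with the inner product approximated by a sample mean). This toy sketch is our own illustration, not the paper's code: with a deliberately non-orthonormal basis, raw inner-product coefficients return $Gb$ rather than $b$, while least squares inverts the Gram matrix and recovers $b$.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 3, 10000  # basis size, number of discretization points

def inner(u, v):
    """Monte Carlo inner product of two discretized functions."""
    return float(np.mean(u * v))

# Mix independent random functions to get a deliberately non-orthonormal basis.
mix = np.array([[1.0, 0.0, 0.0], [0.8, 1.0, 0.0], [0.3, -0.5, 1.0]])
g = mix @ rng.standard_normal((k, n))

b = np.array([1.0, -2.0, 0.5])
f = b @ g  # f lies exactly in the span of the basis

c_inner = np.array([inner(f, gi) for gi in g])              # equals G b, not b
gram = np.array([[inner(gi, gj) for gj in g] for gi in g])  # Gram matrix G
c_ls = np.linalg.solve(gram, c_inner)                       # least-squares fit
```

Here `c_ls` recovers `b` for any spanning basis, while `c_inner` only matches `b` when the Gram matrix is (close to) the identity, i.e. when the basis is orthonormal — which is why the rebuttal notes that computing coefficients via least squares sidesteps the orthonormality issue.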
NeurIPS_2024_submissions_huggingface
2024
Resource-Aware Federated Self-Supervised Learning with Global Class Representations
Accept (poster)
Summary: This paper introduces a novel approach for enhancing global representation models in resource-adaptive federated self-supervised learning through a multi-teacher knowledge distillation framework, named FedMKD. The proposed method addresses the challenges posed by heterogeneous architectures and extreme class skew, demonstrating significant improvements in representation abilities across diverse clients. The authors provide detailed experimental results to support their claims. Strengths: 1. The paper tackles a critical issue in federated self-supervised learning, offering a unique and effective solution. 2. FedMKD effectively leverages multi-teacher knowledge distillation to integrate knowledge from heterogeneous clients, even under extreme class skew. 3. The adaptive knowledge integration mechanism enhances the representation abilities of heterogeneous models. 4. The experimental section is thorough, demonstrating the efficacy of the proposed method on multiple datasets. 5. The combination of self-supervised loss and distillation loss, along with the global knowledge anchored alignment module, significantly improves local and global representation abilities. Weaknesses: The Related Work section could be expanded to include a more thorough discussion of existing approaches in both transfer learning and attention mechanisms. The paper may have a limited audience due to its specialized focus on image classification tasks. It could benefit from providing more context for readers unfamiliar with the specific datasets and techniques discussed. The contributions of the work could be more explicitly stated in the Introduction to enhance clarity for readers. Technical Quality: 3 Clarity: 3 Questions for Authors: It would be beneficial for the authors to include a complexity analysis of the FedMKD algorithm to provide insights into its computational efficiency. If a complexity analysis is not feasible, the authors should provide a rationale for its omission. 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations of the FedMKD framework are well-addressed in the paper. These constraints appear to be inherent to the design choices made in the method and are necessary trade-offs for its functionality. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer hPCk for the time and valuable feedback! We will do our best to address the comments one by one. **Response to Weakness:** Thank you very much for the insightful comments. - First, we surveyed some **federated transfer learning (FTL)** methods, and will add the following discussion of them. "Federated transfer learning aims to share the knowledge and insights derived from training machine learning models across various entities, all while keeping the raw data decentralized. It has been used widely in various domains, such as cross-domain recommendation [1,2], medical image classification [3,4] and financial services [5,6]." About **attention mechanisms**, we will add the following discussion. "The attention mechanism is a component used in neural networks that dynamically focuses on the most relevant parts of the input data, enhancing the model's performance. By assigning varying levels of importance to different pieces of information, it allows the network to prioritize and efficiently process significant elements, and it has proven to be highly successful in various tasks, such as image classification [7,8], object detection [9,10] and image generation [11,12]." Both of these discussions will be added to the Related Work in the final version. - We will also add a more detailed description for readers who are unfamiliar with the specific datasets and techniques, as follows. "We use CIFAR-10 and CIFAR-100 to evaluate the performance. The CIFAR-10 dataset consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. The CIFAR-100 dataset is just like CIFAR-10, except it has 100 classes containing 600 images each. There are 500 training images and 100 testing images per class." - Finally, we present our contributions in **Lines 65-76** of the Introduction. 
We emphasize that in resource-adaptive Fed-SSL, we are the first to delve into global class representation learning by addressing the deviated representation abilities and inconsistent representations caused by heterogeneous architectures and class skew. We designed **FedMKD** to tackle these challenges, and the experimental results demonstrate the superiority of our proposed **FedMKD**. **Response to Question:** Thank you very much for the insightful comments. We would like to analyze the complexity of the client and the server separately. First, on the client side, each client chooses an appropriate model according to its resources, so the computational complexity differs among clients. Then, on the server side, the training process has two phases. The complexity of knowledge distillation is $O(T' \times |D_{pub}| \times (N+1))$, where $T'$ is the number of knowledge-distillation epochs, $|D_{pub}|$ is the size of the public dataset, and $N$ is the number of client local models. The complexity of the global knowledge-anchored alignment is $O(|D_{pub}| \times N)$. In addition, considering that each deep network includes multiple operations and computations, we commonly use the amount of computation (FLOPs) to analyze the time complexity and the amount of memory access (Bytes) to analyze the space complexity. The results of each process are shown as follows:

| **Process** | **Memory** | **FLOPs** |
|:------------:|:-------:|:-------:|
| Local-VGG9 | 37.857M | 121.761G |
| Local-ResNet18 | 49.294M | 440.813G |
| Distillation | 343.931M | 1.701T |
| Alignment | 60.129M | 295.309G |

If there are any further confusions/questions, we are happy to clarify and try to address them. Thank you again; your recognition means a lot to our work. --- [1] Ammad-Ud-Din, Muhammad, et al. "Federated collaborative filtering for privacy-preserving personalized recommendation system." arXiv preprint arXiv:1901.09888 (2019). [2] Minto, Lorenzo, et al. "Stronger privacy for federated collaborative filtering with implicit feedback." 
Proceedings of the 15th ACM Conference on Recommender Systems. 2021. [3] Gong, Xuan, et al. "Federated learning with privacy-preserving ensemble attention distillation." IEEE transactions on medical imaging 42.7 (2022): 2057-2067. [4] Sui, Dianbo, et al. "Feded: Federated learning via ensemble distillation for medical relation extraction." Proceedings of the 2020 conference on empirical methods in natural language processing (EMNLP). 2020. [5] Shaheen, Momina, Muhammad Shoaib Farooq, and Tariq Umer. "Reduction in data imbalance for client-side training in federated learning for the prediction of stock market prices." Journal of Sensor and Actuator Networks 13.1 (2023): 1. [6] Pourroostaei Ardakani, Saeid, et al. "A federated learning-enabled predictive analysis to forecast stock market trends." Journal of Ambient Intelligence and Humanized Computing 14.4 (2023): 4529-4535. [7] Hu, Jie, Li Shen, and Gang Sun. "Squeeze-and-excitation networks." Proceedings of the IEEE conference on computer vision and pattern recognition. 2018. [8] Woo, Sanghyun, et al. "Cbam: Convolutional block attention module." Proceedings of the European conference on computer vision (ECCV). 2018. [9] Dai, Jifeng, et al. "Deformable convolutional networks." Proceedings of the IEEE international conference on computer vision. 2017. [10] Carion, Nicolas, et al. "End-to-end object detection with transformers." European conference on computer vision. Cham: Springer International Publishing, 2020. [11] Gregor, Karol, et al. "Draw: A recurrent neural network for image generation." International conference on machine learning. PMLR, 2015. [12] Zhang, Han, et al. "Self-attention generative adversarial networks." International conference on machine learning. PMLR, 2019. --- Rebuttal Comment 1.1: Comment: After reviewing the responses and other reviewers' comments, I appreciate the detailed clarifications and new experiments provided. 
The explanations effectively addressed my concerns, leading me to update my rating to weak accept. --- Reply to Comment 1.1.1: Comment: Thank you very much for taking the time to read and respond to our rebuttal. We sincerely appreciate your recognition of our work.
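The server-side complexity formulas stated in the rebuttal above can be turned into a small back-of-envelope helper (illustrative only; `per_sample_flops` is a user-supplied assumption, e.g. read off the FLOPs table):

```python
def server_cost_estimate(t_distill, d_pub, n_clients, per_sample_flops):
    """Rough forward-pass cost of the two server phases described above:
    distillation is O(T' * |D_pub| * (N+1)) model evaluations (N teacher
    models plus the global student), and the global knowledge-anchored
    alignment is O(|D_pub| * N). Returns (distillation_cost, alignment_cost)."""
    distill = t_distill * d_pub * (n_clients + 1) * per_sample_flops
    align = d_pub * n_clients * per_sample_flops
    return distill, align
```

For example, 2 distillation epochs over a 10-sample public set with 3 clients and unit per-sample cost gives 80 distillation evaluations and 30 alignment evaluations.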
Summary: The authors propose a multi-teacher knowledge transfer framework, FedMKD, based on two challenges in resource-adaptive federated self-supervised learning: deviated representation abilities and inconsistent representations. This framework uses an adaptive knowledge integration mechanism and a weighted combination of different loss functions to ensure that global representations with class-wide knowledge can be learned from heterogeneous clients even in the presence of extreme class skew. Extensive experiments on two datasets demonstrate the effectiveness of FedMKD, which outperforms the state-of-the-art baseline by an average of 4.78% under linear evaluation. Strengths: 1. The scenarios presented and the challenges solved in this work are of practical importance, and the proposed multi-teacher knowledge distillation based federated self-supervised learning framework solves these challenges effectively. 2. By innovatively combining self-supervised loss and distillation loss, FedMKD can encode skewed classes into a unified space, which is not covered by related work. 3. The paper is easy to follow, including the main paper and appendices. The introduction provides a clear motivation and a brief overview of the proposed framework, and the other sections provide detailed descriptions of the method. 4. The experiments are well-organized and convincing. Weaknesses: 1. The mechanism design seems confusing. In FedMKD, multi-teacher knowledge distillation lets the global model learn from local models, while global knowledge-anchored alignment lets local models learn from the global model; knowledge transfer happens alternately. 2. FedMKD aims to learn global class representations via a global model, but an additional global-anchored mechanism improves local performance. What is the difference between personalized federated learning and the proposed FedMKD? 3. 
It is suggested that the authors add instructions about the whole model-training algorithm in Appendix A, which can help readers understand the whole process. 4. The presentation needs to be improved, especially for some equations. The self-supervised loss in Eq. (10) was not discussed earlier; it appears to equal the loss in Eq. (3), but with different notation. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As shown in Weakness 1, it seems that multi-teacher knowledge distillation lets the global model learn from local models, while global knowledge-anchored alignment lets local models learn from the global model; do the two mechanisms result in duplicated knowledge transfer? 2. In the problem statement, the main objective is the global loss. Since recent studies on cross-device scenarios also consider personalized FL (PFL), how about personalization? Can your method be extended to PFL? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: The authors pointed out the limitations of the paper: the theoretical proof of FedMKD is not rigorous enough, and the characteristic of multi-teacher distillation of the global model has not been sufficiently theoretically justified. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer sGaU for the time and valuable feedback! We will try our best to address the comments one by one. **Response to Weakness1 & Question1:** Thank you very much for your recognition. As you mentioned, we have two mechanisms to transfer knowledge, but they are not duplicated. First, multi-teacher knowledge distillation allows the global model to encode all classes from clients in a unified space. The global knowledge-anchored alignment module is applied to eliminate inconsistency in the representation spaces of clients, which also benefits the global model training. These two mechanisms are critical for learning global class representations and are indispensable. **Response to Weakness2 & Question2:** Thank you for your insightful comments. The additional global-anchored mechanism aims to ensure that the local representation spaces are closer to the global one, so that the subsequent knowledge distillation process can gain more from clients. This module further eliminates the effect of inconsistent representation spaces and improves local performance. The experimental results on the improvement in clients are shown in Section 5.4, which verify the effectiveness of the module. Compared to personalized FL, the optimization objective is not identical. We aim to train a global model that can encode all classes in a unified space, and the improvement of clients is an intermediate step. PFL, in contrast, aims to learn better local models without a global model, which is quite different from our goal. **Response to Weakness3:** As you suggested, we will add instructions about the whole model-training algorithm in Appendix A in our final version. **Response to Weakness4:** We are sorry for our unclear discussion. Eq. (10) shows the weighted combined loss for the global model; it includes two parts, the self-supervised loss and the knowledge distillation loss. 
In this way, the global model can encode all classes from clients in a unified space and improve its representation capability. The self-supervised loss is identical to the local training loss, i.e., Eq. (3). If there are any further confusions/questions, we are happy to clarify and try to address them. Thank you again; your recognition means a lot to our work. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I appreciate the thoughtful consideration given to my comments. The explanations and revisions provided are clear and effectively address my concerns, and I strongly recommend the acceptance of this paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer sGaU Comment: We appreciate you taking the time to read and respond to our rebuttal. Thank you very much again for recognizing our work.
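The global knowledge-anchored alignment discussed in this thread — pulling a client's representation space toward the global one on shared samples — can be sketched with a simple cosine objective (an illustrative stand-in, not the paper's exact alignment loss):

```python
import numpy as np

def anchored_alignment_loss(local_reps, global_reps):
    """Illustrative alignment objective: negative mean cosine similarity
    between a client's representations of public samples and the global
    (anchor) representations of the same samples. It is minimized (at -1)
    when the two representation spaces coincide up to per-sample scale."""
    l = local_reps / np.linalg.norm(local_reps, axis=1, keepdims=True)
    g = global_reps / np.linalg.norm(global_reps, axis=1, keepdims=True)
    return -np.mean(np.sum(l * g, axis=1))
```

Minimizing this over the client encoder's parameters would nudge the local space toward the global anchor, which is the intuition the rebuttal describes.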
Summary: This paper proposes a multi-teacher knowledge distillation framework named FedMKD for resource-adaptive federated self-supervised learning (Fed-SSL). The method aims to address the challenges of global representation learning in Fed-SSL caused by heterogeneous architectures and imbalanced class distributions. Experimental results demonstrate that FedMKD is significantly effective in addressing heterogeneity and class distribution imbalance issues in federated self-supervised learning. Strengths: Innovative Framework Design: This paper proposes a new multi-teacher knowledge distillation framework, FedMKD, which combines self-supervised learning and knowledge distillation to effectively address the challenges of heterogeneity and data imbalance in federated learning. Adaptive Knowledge Integration Mechanism: Through an adaptive weighting strategy, it integrates high-quality representations from heterogeneous models, significantly enhancing the representation capabilities of the global model. Global Knowledge Anchored Alignment: The design of a global knowledge anchored alignment module ensures that local models' representation spaces are closer to the global representation space, thus improving the performance of local models. Weaknesses: 1. This paper combines federated self-supervised learning and resource-aware federated learning. However, in the introduction and related work, the authors lack a discussion of works on resource-aware federated learning with heterogeneous clients. 2. The authors assume a global dataset is built to perform knowledge distillation and anchored alignment. However, random sampling makes the global dataset and client datasets have similar data distributions, which violates the original intention of data privacy in FL. Please provide some explanation for this. 3. 
CIFAR-10 and CIFAR-100 are both image datasets with relatively small sizes and low image resolution, which cannot fully reflect the complexity of real-world applications. It is recommended to add experiments on large-scale image datasets such as Tiny-ImageNet and ImageNet-100. 4. The framework of dual encoders increases the training load in clients. It is recommended to evaluate the computation time and resources required and provide analysis for local model training. 5. To evaluate the robustness and effectiveness of this framework, it is suggested to explore experiment settings with more clients, such as [1]. [1] ScaleFL: Resource-Adaptive Federated Learning with Heterogeneous Clients Technical Quality: 3 Clarity: 3 Questions for Authors: See weakness. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The characteristic of multi-teacher distillation of the global model has not been sufficiently theoretically justified and the authors only combined it with the self-training loss as one whole loss to analyze. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer 24fB for the time and valuable feedback! We will try our best to address the comments one by one. **Response to Weakness 1:** Thank you very much for the insightful comments. In the related works section (Lines 94-99), we have surveyed several existing studies that address heterogeneous client models in federated self-supervised learning. We analyzed the advantages and disadvantages of **Hetero-SSFL** and **FedFoA** in detail. Additionally, we select **Hetero-SSFL** as one of the baselines in the experiment to compare and evaluate the performance of our proposed method. As you mentioned, we will further add descriptions of **Hetero-SSFL** and **FedFoA** to the introduction in the final version. **Response to Weakness 2:** Thanks for your sincere comments. This construction of the public dataset is used only for experimental simulation and does not pose a serious risk of data leakage; in our FedMKD, no raw data needs to be transferred. Here, we just want to address one practical question: if the distribution of the public dataset differs from the data distributions of the clients, can the proposed FedMKD still work? In practice, we can use an appropriate public dataset to distill knowledge according to the specific task instead of this random-sampling method. To verify this, we conducted another experiment using **CINIC** as the public dataset on the server. **CINIC** is an extension of CIFAR-10, augmented with downsampled ImageNet images, and it shares the same size and classes as CIFAR-10. We still use **CIFAR-10** as the training dataset for the clients. The public dataset is 'Partial' and the data distribution in the clients is 'Class'. The other settings are identical to the draft. 
The linear probing results are shown as follows:

| | **Public dataset** | **Acc** |
|:--:|:--:|:---:|
| FedMKD | CINIC | 59.98% |
| FedMKD | CIFAR-10 | 66.39% |

These results show that even when the public dataset (CINIC) is not drawn from the clients' data distribution, our method still works effectively. **Response to Weakness 3:** Thank you very much for the insightful comments. We use **ImageNet-100** and **ImageNet-1k** to evaluate the effectiveness of our proposed method. The results are presented in **Table 11 in the Global Response PDF**. We can conclude that our proposed FedMKD outperforms all baselines. **Response to Weakness 4:** Thank you very much for the insightful comments. We evaluate another single-encoder framework, **SimCLR**, on the clients and assess the computation time and resources required for local model training. Although this method is more computationally efficient, its performance is far inferior to ours. The results are shown as follows:

| | **Local method** | **Acc** | **VGG Memory** | **ResNet18 Memory** |
|:--:|:--:|:---:|:---:|:---:|
| FedMKD | SimCLR (single encoder) | 48.32% | 21.066M | 21.668M |
| FedMKD | BYOL (dual encoder) | 66.39% | 37.857M | 49.294M |

**Response to Weakness 5:** Thank you very much for the insightful comments. We have explored the scalability of our proposed algorithm FedMKD in Appendix D.3. In these experiments, we test scenarios with 10 and 30 clients, where 40% of the clients use the VGG model and 60% use ResNet18. We repartition the data for each client under Dir($\beta=0.5$) and set the public dataset distribution as IID. To compare scalability performance, we conduct the same experiments on the second-best baseline, Hetero-SSFL. 
The linear probing results are shown as follows:

| **Method** | **5-Client** | **10-Client** | **30-Client** |
|:------------:|:-------:|:-------:|:-------:|
| Hetero-SSFL | 59.13% | 54.51% | 49.22% |
| Ours | 67.79% | 57.39% | 53.85% |

The results show that as the number of clients increases, our proposed method still outperforms **Hetero-SSFL**, demonstrating the scalability of our **FedMKD**. If there are any further confusions/questions, we are happy to clarify and try to address them. Thank you again; your recognition means a lot to our work.
Summary: This paper studies the problem of federated self-supervised learning. A multi-teacher knowledge distillation framework is proposed to address two challenges: deviated representation abilities and inconsistent representation spaces. Specifically, the adaptive knowledge integration mechanism is designed to learn better representations from all heterogeneous models, and a global knowledge-anchored alignment module is used to make the local representation spaces close to the global space. Experiments on two datasets demonstrate the effectiveness of the proposed method. Strengths: * This paper is well-written and easy to follow. * The analysis of the challenges, i.e., deviated representation abilities and inconsistent representation spaces, is clear. * Introducing an attention module to integrate the knowledge is technically sound. Weaknesses: * Knowledge distillation is widely used in federated learning and federated self-supervised learning. The contribution of the proposed method (i.e., a multi-teacher knowledge distillation framework) is incremental. * This paper uses small models such as ResNet18 and VGG9. * Only CIFAR datasets are used to evaluate the proposed method. Experiments on ImageNet-1k are important to evaluate the effectiveness. Technical Quality: 3 Clarity: 3 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors adequately addressed the limitations in the appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer de59 for the time and valuable feedback! We will try our best to address the comments one by one. **Response to Weakness 1:** Thank you very much for the insightful comments. As you said, knowledge distillation (KD) has been widely used in Fed-SSL, such as in FedX and other works. However, our contrastive-learning-based multi-teacher KD introduces novel approaches to address the challenges of **deviated representation abilities** and **inconsistent representation spaces**: - First, to mitigate the impact of heterogeneous models with **deviated representation abilities**, we introduce an **adaptive knowledge integration** module to learn high-quality representations from them. - Second, to encode all classes from clients in a unified space, the global model is updated using a **weighted combination of self-supervised loss and distillation loss**. - Finally, the global knowledge-anchored alignment module is applied on the server to eliminate the **inconsistency in representation spaces** and reduce the burden on the clients. In our experiments, FedMD and FedDF, which are based on traditional KD methods, were adopted as comparative baselines. As shown in Table 2 (Page 7) and Table 3 (Page 8), our proposed FedMKD achieves the best performance. **Response to Weakness 2:** Thank you very much for your valuable comment. Adopting larger models can indeed demonstrate the scalability of our proposed FedMKD framework. As suggested, we set one client's local model to **ResNet50** and re-conduct the experiment under the setting of the 'Partial' public dataset and 'Class' distribution among clients on **CIFAR-10**. All other settings remained identical to those in the original paper. 
The linear probing results are shown below:

| **Method** | **1 ResNet50** | **Original** |
|:---:|:---:|:-:|
| FedMD | 48.37% | 47.16% |
| FedDF | 54.46% | 52.59% |
| MOON-KL | 48.48% | 46.41% |
| MOON | 56.33% | 54.31% |
| FedET | 58.96% | 57.75% |
| Hetero-SSFL | 64.74% | 63.20% |
| FedMKD | 67.75% | 66.39% |

The results indicate that the introduction of a larger model can improve overall performance, and our proposed FedMKD still outperforms all baselines. **Response to Weakness 3:** Thank you very much for the insightful comments. We use **ImageNet-100** and **ImageNet-1k** to evaluate the effectiveness of our proposed method. The results are presented in **Table 11 in the Global Response PDF**. We can conclude that our proposed FedMKD outperforms all baselines. If there are any further confusions/questions, we are happy to clarify and try to address them. Thank you again; your recognition means a lot to our work.
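The adaptive knowledge integration idea mentioned in the response to Weakness 1 — weighting representations from multiple heterogeneous teacher models before distillation — might be sketched as a generic attention-style weighting (illustrative only; the paper's actual attention module may differ):

```python
import numpy as np

def integrate_teacher_reps(teacher_reps, query):
    """Illustrative adaptive integration: softmax-weight each teacher's
    representation by its scaled similarity to a query vector, then
    aggregate into a single anchor representation."""
    teacher_reps = np.asarray(teacher_reps)             # (N, d) teacher outputs
    scores = teacher_reps @ query / np.sqrt(len(query)) # (N,) similarity scores
    scores -= scores.max()                              # numerical stability
    w = np.exp(scores)
    w /= w.sum()                                        # (N,) attention weights
    return w @ teacher_reps                             # (d,) aggregated anchor
```

The aggregated anchor would then serve as the positive target in the contrastive distillation step the rebuttals describe.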
Rebuttal 1: Rebuttal: We thank all the reviewers for their valuable comments and feedback, which are greatly helpful in improving the quality of this paper. We try our best to address the concerns, including conducting preliminary experiments as time allows. As **reviewers QsUu, de59, and 24fB** were concerned, we conducted **new experiments** on **large-scale image datasets, ImageNet-100 and ImageNet-1k**. We utilized 70 RTX 3090 GPUs for more than 100 hours to conduct these experiments. Due to the time limitation, here we only present preliminary results; more results will be presented in the final version. The setting is as follows: the public dataset is 'Partial' and the data distribution in the clients is 'Class'. The other settings are identical to the draft. The linear probing results are shown in **Table 11** in the PDF. **Figure 8** in the PDF illustrates the multi-teacher knowledge distillation method in response to **reviewer QsUu**'s Weakness 2 and Weakness 3. **Table 12** in the PDF is an updated version of the original Table 1, addressing the "Presentation and writing" feedback from **reviewer QsUu**. Thank you again for the detailed, valuable comments that help improve the quality of our work. Pdf: /pdf/2d432c1042c2bf105189d18a2e13b98cbbaacba3.pdf
NeurIPS_2024_submissions_huggingface
2024
Summary: - The paper proposes a framework called FedMKD, which is a multi-teacher knowledge distillation framework for federated learning. - To allow different clients to have different resources, a resource-adaptive approach is designed. The approach handles class skew and different architectures. - The knowledge distillation framework allows for learning representations from the different client representations. An SSL loss is additionally used on the common representation space. - Different from prior works, a global representation model is learnt despite differences in clients. - The approach is shown to outperform prior works on the CIFAR-10 and 100 datasets. Strengths: - The paper tackles an important problem in federated learning: heterogeneity in clients, both from the perspective of resources and of data. - A simple and effective solution is proposed to learn a common representation space using techniques from self-supervised learning and knowledge distillation. - The approach is shown to outperform prior work on the CIFAR-10 and 100 benchmarks. - The paper is easy to follow. Ablations and experiments seem thorough (see weaknesses). - While I have not tried it, the authors have provided an implementation of their approach. Weaknesses: Weaknesses/Concerns & Questions: - How does the approach compare to traditional FL settings when aggregation like L107 is done (assuming all client architectures are the same)? - Did the authors experiment with applying the knowledge distillation to individual projected features instead of the aggregate? - L167: Are the negatives also taken after the aggregation step, or from individual projection layers? - Additional larger-scale datasets will be helpful to understand whether the approach will scale. Presentation and writing: - Table 1 needs more details to be helpful. A more detailed caption would help. 
More subjective columns like "inconsistent representation space", "deviated representation ability" are not apt for a table like this. Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to the weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The paper has adequately discussed limitations and potential negative impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer QsUu for the time and valuable feedback! We will try our best to address the comments one by one. **Response to Weakness 1:** Thank you very much for the insightful comments. Our proposed FedMKD is designed for resource-aware settings, allowing each client to choose an appropriate model to train, even if the model architectures differ among clients. In this context, traditional aggregation methods are ineffective. If all clients were to use the same architecture, the model with the fewest parameters would have to be chosen. In the newly designed experiments, both the clients' models and the global model are set to VGG9. The linear probing results compared to FedAVG are as follows:

| **Methods** | **Acc** |
|:--:|:--:|
| FedAVG | 48.32% |
| **FedMKD** | 66.39% |

In our FedMKD approach, each client can adaptively choose the appropriate local model to fully utilize its computational resources. For example, a client with ample computational resources can select a larger model like ResNet18, while another client with limited resources can opt for a smaller model like VGG9. This flexibility enables FedMKD to outperform FedAVG on the VGG9 model. **Response to Weakness 2 & 3:** First, we apologize for any confusion caused by our previous presentation. To clarify, in Line 167, the negative samples are derived from the individual projection layers of the global model. In knowledge distillation (KD), we have an aggregated anchor representation $\bar{r}_i$, a positive representation $r_{s,i}$ from the global model, and multiple negative representations $r_{s,j}$. The purpose of knowledge distillation is to push the anchor representation $\bar{r}_i$ and the positive representation $r_{s,i}$ closer together in the feature space while pulling the anchor away from the negative representations. Thus, the global model can encode all classes from clients in a unified space. 
Formally, we define this KD loss function, which is presented in the original paper on Line 168: $$L_{\mathrm{distill}} = -\log\frac{\exp(\mathrm{sim}(r_{s,i}, \bar{r}_i)/\tau)}{\exp(\mathrm{sim}(r_{s,i}, \bar{r}_i)/\tau) + \sum_{j\neq i}\exp(\mathrm{sim}(r_{s,i}, r_{s,j})/\tau)}.$$ We also illustrate this process in the **Global Response PDF (see Figure 8)**. **Response to Weakness 4:** Thank you very much for the insightful comments. We use **ImageNet-100** and **ImageNet-1k** to evaluate the effectiveness of our proposed method. The results, presented in **Table 11 of the Global Response PDF**, demonstrate that our proposed FedMKD outperforms all baseline methods. **About Presentation and writing:** Thank you very much for the insightful comments. We will revise the caption of Table 1 to "Comparison of federated self-supervised learning methods. √ indicates that the proposed method focuses on this challenge and × indicates that it does not." Additionally, the columns labeled "inconsistent representation space" and "deviated representation ability" will be replaced with the objective description "data heterogeneity." The modified table is presented as **Table 12 in the Global Response PDF**. If there are any further confusions/questions, we are happy to clarify and try to address them. Thank you again; your recognition means a lot to our work. --- Rebuttal Comment 1.1: Title: Thanks for the responses Comment: I have read the responses and other reviews. Thanks for the detailed clarifications and new experiments. I have updated my rating for the paper. --- Reply to Comment 1.1.1: Title: Response to Reviewer QsUu Comment: We appreciate you taking the time to read and respond to our rebuttal. Thank you very much again for recognizing our work.
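A minimal sketch of this contrastive distillation loss, following the formula on Line 168 (variable names and the batch handling are ours, not the paper's):

```python
import numpy as np

def cosine_sim(a, b):
    """sim(a, b): cosine similarity between two representation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def distill_loss(r_s, r_bar_i, i, tau=0.5):
    """Contrastive distillation loss for anchor index i: pull the global
    model's representation r_s[i] toward the aggregated teacher anchor
    r_bar_i (positive pair), push it away from the global model's
    representations of the other samples r_s[j], j != i (negatives)."""
    pos = np.exp(cosine_sim(r_s[i], r_bar_i) / tau)
    neg = sum(np.exp(cosine_sim(r_s[i], r_s[j]) / tau)
              for j in range(len(r_s)) if j != i)
    return -np.log(pos / (pos + neg))
```

The loss is always positive (the softmax ratio is below 1 whenever negatives exist) and shrinks as the anchor-positive similarity dominates the negatives.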
Decomposable Transformer Point Processes
Accept (poster)
Summary: The work designs a novel transformer-based approach for modelling time series (e.g. predicting the next event). The main novelty is the decomposition of the log-likelihood into conditional probability mass and density functions. The former, implemented with a transformer, models the distribution over the event types; the latter models the event occurrence with a log-normal mixture. The experimental results are compelling: the approach achieves convincing performance on next-event prediction (in terms of log-likelihood) and long-horizon prediction. Strengths: * The writing and presentation are clear and precise. The technical introduction and the contribution have solid foundations. * The experiments are convincing. I appreciate reporting the variance for transparency. * I also appreciate the details provided here and there without distracting from the main story (e.g. the hyperparameter values in the supp. material). Weaknesses: I do not see any major problems, but would encourage minor revisions towards reaching a broader audience. Specifically: * The thinning algorithm plays an important role in motivating the approach and interpreting the results. The readers would appreciate a high-level recap of the algorithm, instead of having to look it up in the reference. * The justification for the decomposition (ll. 90-93) comes across as a bit weak. Perhaps it could be improved by providing an analytical argument for why it should offer the same benefits as the intensity function, and why depending on the thinning algorithm at inference time is a problem. Technical Quality: 4 Clarity: 4 Questions for Authors: * How does the size of the mixture model (M) affect the prediction, and what was the methodology for choosing the optimal M? * The scale in Fig. 1 is wildly different. How does one explain the differences, and what does it say about the data quality and/or task complexity? Minor remarks: l. 80 The integral’s upper bound coincides with the integrand variable. l. 
150-155: Perhaps one could provide a comparison with previous work: what does this simplification of the problem translate to in practice (how much faster is it to train/optimize)? Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The limitations are discussed sufficiently in Sec. 6. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
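The likelihood decomposition the review summarizes can be sketched as follows; the two callables stand in for the paper's components (a log-normal-mixture CPDF for inter-event times and a Transformer CPMF for event types), and the interface is our own illustrative assumption:

```python
import numpy as np

def decomposed_loglik(times, types, time_logpdf, type_logpmf):
    """Sequence log-likelihood under the decomposition
    log p(t_i, k_i | H_i) = log f(t_i | H_i) + log p(k_i | t_i, H_i),
    where H_i is the history of events strictly before event i.
    time_logpdf(t, hist) and type_logpmf(k, t, hist) are placeholders
    for the model's conditional density and mass functions."""
    ll = 0.0
    for i, (t, k) in enumerate(zip(times, types)):
        hist = list(zip(times[:i], types[:i]))
        ll += time_logpdf(t, hist) + type_logpmf(k, t, hist)
    return ll
```

Because each summand is an ordinary log-density plus a log-probability, the likelihood can be evaluated and optimized without a thinning algorithm at inference time, which is the benefit the review highlights.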
Rebuttal 1: Rebuttal: Thank you for your feedback. We respond to your concerns below: - Even though we do not present the full thinning algorithm in the main text due to space restrictions, we present the exact algorithm in the appendix on page 17. As we explain in lines 76-83, the expressions of the two log-likelihoods in Eq. (1) and (2) are equivalent. In fact, there is a closed-form formula that relates the intensity function with the CPMF/CPDF; a derivation can be found in Section 2.4 of JG Rasmussen, 2018, "Lecture notes: Temporal point processes and the conditional intensity function". - We have included a table (attached pdf) with an ablation study on the influence of the number of mixture components M. The optimal M is chosen based on the log-likelihood of the held-out dev set, as we explain in the Appendix (Section A.2). - One could potentially interpret the large (or low) log-likelihood values as a proxy for the ability of the models to capture the complex dynamics of the event sequences. Larger values may indicate that the model provides a good approximation of the latent generative mechanism of the process; however, this might not always be true, since it depends heavily on the quality of the training/test data and how representative the sample at our disposal is. - Thank you for spotting the typo in line 80. We will correct this in the revised version of the paper. --- Rebuttal Comment 1.1: Title: Discussion Comment: I thank the authors for their response. What is the value of $M$ in the main experiments (e.g. in Fig. 1)? Since $M=2$ seems optimal in the provided ablation study, what is the value of the log-likelihood for $M=1$ on the same datasets? --- Reply to Comment 1.1.1: Comment: We have used $M=2$ across all datasets in Fig. 1. For $M=1$, the model's flexibility is quite reduced and thus there is a performance drop. 
The corresponding results are $$ \begin{array}{lc} \textbf{Datasets} & \textbf{M=1} \\ \text{Amazon} & -2.342 \\ \text{Taxi} & 0.391 \\ \text{Taobao} & 1.020 \\ \text{SO-V1} & -2.19 \\ \end{array} $$
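To make concrete why the log-normal mixture sidesteps the thinning algorithm, here is a minimal standard-library Python sketch (the helper name and the example mixture parameters are illustrative, not the fitted model): sampling an inter-event time is a single direct draw from the chosen component, with no accept/reject loop.

```python
import random

def sample_inter_event_time(weights, mus, sigmas, rng):
    """Draw one inter-event time from a mixture of log-normals.

    Illustrative helper: component k is picked with probability
    weights[k], then tau ~ LogNormal(mus[k], sigmas[k]).  This is a
    single direct draw -- no thinning/rejection loop is required.
    """
    k = rng.choices(range(len(weights)), weights=weights)[0]
    return rng.lognormvariate(mus[k], sigmas[k])

rng = random.Random(0)
# M = 2 components, as used across all datasets in Fig. 1
taus = [sample_inter_event_time([0.7, 0.3], [0.0, 1.5], [0.5, 0.3], rng)
        for _ in range(1000)]
```

By contrast, thinning needs an upper bound on the intensity and repeated accept/reject proposals per event, which is where the computational overhead comes from.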
Summary: This paper presents a novel framework for modeling marked temporal point processes (MTPPs) using Transformer-based architectures. The authors address the limitations of traditional methods that rely on computationally intensive thinning algorithms by proposing a decomposable approach that partly uses a Transformer. This approach models the conditional intensity function (CIF) using a Transformer architecture while separating the modeling of inter-event times and event types into two distinct components: the conditional probability density function (CPDF) for inter-event times and the conditional probability mass function (CPMF) for event types. Strengths: - Novel framework that uses a Transformer architecture for the first time to tackle this problem - Decomposing the likelihood into conditional probability density function (CPDF) and conditional probability mass function (CPMF) components - The proposed DTPP model is technically solid, with a well-structured approach to decomposing the MTPP likelihood and modeling the components using Transformers. The mathematical formulations are clearly presented. - The authors provide detailed implementation details and make their code available. This ensures that the results can be reproduced and verified by other researchers, which is important for the credibility and quality of the work. - The demonstrated improvements in predictive accuracy and computational efficiency over state-of-the-art methods highlight the significance of the proposed framework. The speedup achieved in long-horizon prediction tasks is particularly noteworthy. Weaknesses: - The primary contribution of the paper is the application of Transformer architecture to the problem of modeling marked temporal point processes. While this is a useful application, it does not introduce significant theoretical advancements or novel methodologies beyond leveraging existing models in a new context. 
To enhance the novelty, the authors could consider integrating more innovative elements or demonstrating new theoretical insights specifically tailored for MTPPs. - The paper could benefit from more detailed ablation studies to isolate the contributions of different components of the proposed framework. For example, assessing the impact of various hyperparameters, the influence of the Transformer architecture's depth and width, or the role of specific design choices in the decomposable framework would provide deeper insights into the model's functioning and robustness. - While the paper includes several figures and tables, additional visualizations could enhance clarity. For example, visualizing the learned intensity functions, the attention mechanisms within the Transformer, or case studies showing specific sequences predicted by the model could provide more tangible insights into the model's behavior. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Have you considered including more visualizations of the learned intensity functions, attention mechanisms, or specific case studies showing predicted sequences? Adding more visual representations of your model’s predictions and learned parameters would improve clarity and provide tangible insights into the model’s behavior. 2. Discussion about potential theoretical extensions that could lead to modifications of the Transformer architecture specifically suited for MTPPs. Probably some interesting inductive biases could be added here. 3. Still, I do not see much scientific novelty, because essentially the Transformer was simply applied to a problem where it had not been applied before. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Authors already discussed the limitations in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback and your suggestions. We respond to your concerns below: 1. Notice that we do not learn any intensity functions since our framework is based on the decomposition in Eq. (2). Given the black-box nature of the transformer-based architecture, we refrained from including visualizations of the learned representations. This is a general problem for black-box models such as transformers or LSTM/RNN, even though there has been some progress in the last few years [*], albeit in the context of computer vision tasks. To the best of our knowledge, previous works on MTPP using the transformer architecture do not provide such visualizations for this reason. Only [44] (Zuo, 2020) provides a visualization of attention patterns of different attention heads in different layers; however, we believe the results are more confusing than clarifying, since they have been arbitrarily chosen from one of the datasets without any insight regarding the configuration of the transformer architecture. 2. This is an interesting point and something that we aim to investigate in the future regarding the modification of the architecture. Nevertheless, the currently used architecture is already tailored for MTPP since it shares components with [44] and [41]. We are happy to include this as a future direction in the paper. Thank you. 3. We respectfully disagree with your assessment of lack of novelty. We indeed utilize ideas from previous works to build our final novel model, but we fail to see why this constitutes an absence of novelty. For instance, [30] (Panos, 2023) used the same decomposition which is already known from [6] (Cox, 1975) and the functional form in [27] (Narayanan, 2023) to model the mark distribution. [35] (Shchur, 2019) combined a mixture of log-normals with LSTM; all well-known ideas at that time. 
The continuous-time Transformer architecture for modeling point processes was first adopted by [44] (Zuo, 2020) and [43] (Zhang, 2020) independently, while [41] (Yang, 2022) later used the same transformer architecture with a modified way to model the intensity function. We also feel that the reviewer has overlooked our experimental evaluation, which provides strong evidence of the state-of-the-art performance of our model over well-established baselines. Both the RMSE and ERROR metrics highlight the efficiency of combining a simple mixture of log-normals with a transformer architecture. We believe this is an important contribution and something that was not known to the community until now. The other contribution is the ability of our model to significantly outperform (in a fraction of the time) the state-of-the-art HYPRO baseline on the long-horizon prediction task. This result showcases the importance of using a simple yet robust model for the inter-event times such as the mixture of log-normals. We are also the first to investigate the limitations of thinning-based methods for the long-horizon prediction task. This result becomes more important if you consider that our method was never developed to deal with the more challenging long-horizon prediction task. We believe these results by themselves are novel enough and would be interesting for the community. [*] Chefer, et al. "Transformer Interpretability Beyond Attention Visualization", CVPR, 2021 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed rebuttal. Your clarifications are appreciated, particularly concerning the challenges with visualizations in black-box models and your justification of the novelty of your work. While I understand your points and acknowledge the technical solidity and performance improvements demonstrated, I maintain that the primary novelty lies in applying an existing architecture to a new problem domain rather than introducing fundamentally new theoretical insights. 
I will keep my rating of 5 (Borderline accept), recognizing the technical soundness and potential practical impact of your contributions, while also noting the limited theoretical innovation.
Summary: The paper introduces a Decomposable Transformer Point Process (DTPP), a novel framework for modeling marked point processes. It maintains the advantages of attention-based architectures while avoiding the computational intensity of the thinning algorithm. The model uses a mixture of log-normals for inter-event times and a Transformer architecture for the conditional probability mass function of event marks, achieving state-of-the-art performance in next-event prediction tasks and outperforming thinning-based methods in long-horizon prediction. Strengths: 1. **Innovative Approach**: The paper proposes a new way to model marked point processes by decomposing the problem into manageable sub-problems, which is a creative advancement in the field. 2. **Empirical Performance**: The DTPP model demonstrates improved performance over existing methods, particularly in next-event prediction and long-horizon forecasting, which is a significant contribution. Weaknesses: 1. **Unclear motivation.** I am not an expert in this field, so I do not have a deep understanding of the field of neural point processes. I was pretty confused when I tried to understand the necessity of decomposing the log-likelihood of a marked point process. 2. **The writing is a bit difficult to read.** The writing lacks clarity for readers outside the field. The overall quality needs improvement, as certain sections are challenging to comprehend due to complex sentence structures and unclear presentation of ideas. Specific examples include lines 60-63 and 128-136, where the convoluted language may hinder understanding for those unfamiliar with the subject matter. Technical Quality: 3 Clarity: 2 Questions for Authors: See above. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your feedback. We respond to your concerns below: 1. Using the decomposition in Eq. (2) is equivalent to using the standard log-likelihood based on $\lambda_k^*(t)$ in Eq. (1). For more details, see JG Rasmussen, 2018, "Lecture notes: Temporal point processes and the conditional intensity function". We chose the decomposition in (2) because it allows us to freely define different models for the times and marks. Therefore, we can have models with nice properties (e.g., a mixture of log-normals) without depending on the computationally demanding thinning algorithm for generating samples. We discuss this in lines 89-94. 2. We are unsure what is confusing in these lines. In lines 60-63, we just showcase the contributions of this paper. More details regarding the long-horizon prediction task can be found in Section 5.2. In lines 128-136, we briefly introduce the properties of the mixture of log-normal distributions and how this model can be used for modeling the distribution of inter-event times. The notation is quite standard, but we encourage the reviewer to point out specifically where the confusion comes from; we are happy to modify the text of the paper to increase readability.
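To spell out the equivalence referred to in point 1 (following Rasmussen, 2018, Section 2.4): writing $f^*(t)$ and $F^*(t)$ for the conditional density and CDF of the next event time, and $p^*(k \mid t)$ for the conditional mark distribution, the standard relations are

$$ \lambda^*(t) = \frac{f^*(t)}{1 - F^*(t)}, \qquad f^*(t) = \lambda^*(t)\exp\left(-\int_{t_i}^{t} \lambda^*(s)\, ds\right), \qquad \lambda_k^*(t) = \lambda^*(t)\, p^*(k \mid t), $$

where $t_i$ is the last observed event time. Substituting these relations into the intensity-based log-likelihood recovers the decomposed CPDF/CPMF form, which is why the two objectives in Eq. (1) and (2) coincide.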
Rebuttal 1: Rebuttal: We provide an ablation study on the influence of the number of mixture components M. Pdf: /pdf/bd6d281468db416422bc2b4b9858cd08847aba76.pdf
NeurIPS_2024_submissions_huggingface
2024
ST$_k$: A Scalable Module for Solving Top-k Problems
Accept (poster)
Summary: This paper addresses the Top-K problem by introducing a new loss function. Building on the Average Top-K Loss, the authors incorporate a smoothed ReLU function to create the $ST_k$ loss, which is fully differentiable. Through experiments on various datasets, they demonstrate the effectiveness of their approach. Strengths: 1. The paper is clearly written and easy to follow. 2. The authors conduct extensive experiments to evaluate the proposed $ST_k$ loss, using both synthetic and real-world datasets. Weaknesses: 1. The core weakness lies in the lack of significant innovation. The main technical contribution is the replacement of the ReLU-like operation in an existing optimization method for the Top-K problem with a smoothed ReLU function, controlled by a hyperparameter for smoothness. Similar concepts already exist in various ReLU function variants, as seen in related works such as [1,2]. 2. The only large-scale experiment is a long-tailed classification task involving parameter-efficient fine-tuning of a CLIP model pretrained on ImageNet-21K. Given that this model already has substantial pre-trained knowledge, the results are less convincing. Evaluating the method by training a model from scratch on these tasks would better demonstrate its effectiveness in practical scenarios. 3. The paper does not include an ablation study on the hyperparameter $\delta$ to study its impact on the performance. [1]. Biswas, Koushik, et al. "SMU: smooth activation function for deep networks using smoothing maximum technique." arXiv preprint arXiv:2111.04682 (2021). [2]. Hendrycks, Dan, and Kevin Gimpel. "Gaussian error linear units (gelus)." arXiv preprint arXiv:1606.08415 (2016). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In lines 20-21, the authors mention that "the cost of the ranking process can no longer be ignored." I am curious about how significant this ranking cost is in the context of the full forward pass of a deep learning model. 
Given that larger models require more computational resources for their intermediate layers, this ranking cost might be negligible in most cases. Additionally, there is no detailed comparison of time costs between the Average Top-K Loss $AT_k$ and the proposed $ST_k$ loss, apart from on the toy synthetic dataset. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and feedback. Below is a detailed point-by-point response addressing your main concerns and questions. **For your concerns:** > lack of significant innovation We hold a different view on this. * The sorting optimization algorithm proposed by [1] has been around for two decades and has not been widely applied, largely due to the instability caused by the non-differentiable nature of the optimization function. * This paper introduces a new SReLU that possesses point-wise convergence to the original ReLU, a feature not present in other shape-similar ReLU variants. Our experiments demonstrate that SReLU surpasses other ReLU variants at least in solving the Top-K problem. * We have adapted the improved version of [1] for training deep learning models, and our experiments have shown that the trainable parameter $\lambda$ and model parameters from [1] do not require a two-step optimization using BCD but can be integrated into the computation graph and optimized jointly with SGD. * Utilizing ST$_k$, we refresh the state-of-the-art (SOTA) records for two long-tailed classification leaderboards without additional computational resources. Moreover, ST$_k$ is complementary to almost any existing long-tailed learning method. This is likely to attract widespread attention from the relevant communities. [1] Ogryczak, Wlodzimierz, and Arie Tamir. "Minimizing the sum of the k largest functions in linear time." Information Processing Letters 85.3 (2003): 117-122. > Evaluating the method by training a model from scratch In this work, we trained models from scratch on two machine translation datasets, and the experimental results show that ST$_k$ improves model performance. We will consider including results from training ImageNet-LT from scratch in our final version. > An ablation study on hyperparameter $\delta$. 
* The smoothing coefficient $\delta$ = 0.01 is a grid-search-determined value for all datasets in Section 5.1 (see page 7, line 186) and was adopted in all the other experiments. * We believe that the ablation study is necessary, and here we provide an ablation study on the CIFAR-100-LT dataset. All other settings remain the same as those in the paper, with only $\delta$ being changed. | $\delta$ | 0.1 | 0.05 | 0.01 | 0.005 | |--------------|-------|--------|-------|--------| | Accuracy | 88.8 | 89.4 | 89.8 | 89.8 | * We will include extensive ablation studies in our final version; thanks again for the insightful suggestion. **For your question:** > Is the ranking cost negligible? We experiment with the time cost of a few common sorting algorithms, AT$_k$ and ST$_k$, for calculating the ranking average. The specific scheme of the experiment is to find the Top-k (k=5) sum from 10,000 normally distributed samples. For AT$_k$ and ST$_k$, we iterate until the error is less than $10^{-2}$, and for each algorithm, we conduct 50 experiments and record the average time taken in the third column of the Table. | Algorithm | Complexity | Average Time(s) | |--------------|--------------------|-----------------| | BubbleSort | $\mathcal{O}(n^2)$ | 20.42196$\pm$3.7015 | | HeapSort | $\mathcal{O}(n\log(n))$ | 0.1243$\pm$0.0446 | | AT$_k$ | $\mathcal{O}(n+k)$ | 0.2167$\pm$0.1528 | | ST$_k$(Ours)| $\mathcal{O}(n+k)$ | **0.0127**$\pm$0.0020 | In these experiments, ST$_k$ is the fastest algorithm for computing the ranking average value; on the other hand, AT$_k$ lacks robustness in training due to the gradient discontinuity. In addition, we conduct a time-cost experiment by training a Large Language Model. On a single H800, we tune a Llama-8B using the LoRA method. After iterating 1000 steps, the average time cost is 1.9824s per step, while performing quick sort on the individual losses costs 0.0317s per step (# of tokens per batch = 2048). 
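As a side note on the $\mathcal{O}(n+k)$ formulation of [1] underlying both AT$_k$ and ST$_k$: the top-k sum can be recovered from $\lambda^* = \ell_{[k]}$ (the $k$-th largest loss) without fully sorting the batch. A minimal standard-library Python sketch (function names are illustrative):

```python
import heapq
import random

def topk_sum_sort(losses, k):
    # Reference: exact top-k sum via a partial sort.
    return sum(heapq.nlargest(k, losses))

def topk_sum_lambda(losses, k):
    # Ogryczak & Tamir formulation: with lambda* equal to the k-th
    # largest loss, k*lambda + sum(max(l - lambda, 0)) recovers the
    # same top-k sum without sorting the whole batch.
    lam = heapq.nlargest(k, losses)[-1]
    return k * lam + sum(max(l - lam, 0.0) for l in losses)

rng = random.Random(0)
losses = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
assert abs(topk_sum_sort(losses, 5) - topk_sum_lambda(losses, 5)) < 1e-9
```

ST$_k$ then replaces the non-smooth $[\cdot]_+$ with SReLU, so that $\lambda$ can be optimized by SGD alongside the model parameters instead of being computed by selection.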
--- Rebuttal 2: Comment: Thanks again for reviewing our work. As the author-reviewer discussion phase wraps up, could you let us know if our responses have addressed your concerns? If so, would you reconsider your rating? If you still have concerns, please share them so we can address them. --- Rebuttal 3: Title: Response Comment: Thanks for the authors' response. Some of my concerns have been addressed in the rebuttal. However, after reviewing comments from other reviewers, such as reviewer uCvC, I also share curiosity about the theoretical support behind the proposed method, which has not been directly addressed in the rebuttal. The SReLU approach, in particular, seems to be an empirical result without clear theoretical guidance. --- Rebuttal 4: Comment: Thanks for raising this issue, and we are glad to address this for all reviewers. **1.** The motivation here is that the discontinuity has affected the stability of the parameter $\lambda$ optimization process. Therefore, we need to find a smooth objective function to approximate: $\min_\lambda \frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n [\ell_i - \lambda]_+$ **2.** Other ReLU variants cannot sufficiently approximate ReLU as they are only similar in shape. Therefore, we designed SReLU and proved that the approximation error can be bounded by $\delta / 2$ (see page 12, A.1). 
$ \left[ \frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n [\ell_i - \lambda]_+ \right] $ $- \left[ \frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n \frac{1}{2}\left[(\ell_i-\lambda) + \delta \left( \sqrt{\frac{(\ell_i-\lambda)^2}{\delta^2}+1}-1 \right)\right] \right]$ $= \sum_{i=1}^n \frac{1}{2n} \left[ \sqrt{(\ell_i-\lambda)^2} - \sqrt{(\ell_i-\lambda)^2 + \delta^2} + \delta \right]$ $= \sum_{i=1}^n \frac{1}{2n} \left[ \delta - \frac{\delta^2}{\sqrt{(\ell_i-\lambda)^2} + \sqrt{(\ell_i-\lambda)^2 + \delta^2}} \right]$ $= \sum_{i=1}^n \frac{1}{2n} \left[ \delta \left( 1 - \frac{1}{\sqrt{\frac{(\ell_i-\lambda)^2}{\delta^2}} + \sqrt{\frac{(\ell_i-\lambda)^2}{\delta^2} + 1}} \right) \right] < \frac{\delta}{2}$ **3.** The smoothed objective function is jointly convex w.r.t. $(w_{model}, \lambda)$, making ST$_k$ a special case of the non-linear multiple-choice knapsack problem, with at most $q=2$ roots, which can be found in constant time. Thus, the problem can be solved in $\mathcal{O}(n \ln q)=\mathcal{O}(n)$ time since $q$ is fixed [1] (Section 4, "non-linear case"). [1] Zemel E. An O (n) algorithm for the linear multiple choice knapsack problem and related problems[J]. Information processing letters, 1984, 18(3): 123-128. [2] N. Megiddo, Linear programming in linear time when the dimension is fixed, J. ACM 31 (1984) 114–127. [3] Ogryczak, Wlodzimierz, and Arie Tamir. "Minimizing the sum of the k largest functions in linear time." Information Processing Letters 85.3 (2003): 117-122. **4.** Experiments have shown that, empirically, we don't even need to spend additional time optimizing $\lambda$ separately. Instead, by optimizing it synchronously with the model parameters $w_{model}$, we can still achieve performance improvements. Hope these explanations address your concerns. If you have any confusion, please feel free to reach out to us anytime. --- Rebuttal Comment 4.1: Comment: Thank you for addressing my concerns about the theoretical support. 
I would like to increase my final rating. --- Reply to Comment 4.1.1: Comment: We appreciate your recommendation. Best wishes.
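As a numeric sanity check of the $\delta/2$ bound derived in the rebuttal above, here is a short standard-library Python sketch (function names are illustrative): the gap $\mathrm{ReLU}(x) - \mathrm{SReLU}(x)$ stays in $[0, \delta/2)$ for every $x$.

```python
import math

def srelu(x, delta=0.01):
    # Smoothed ReLU from the rebuttal:
    # 0.5 * (x + delta * (sqrt(x^2 / delta^2 + 1) - 1))
    return 0.5 * (x + delta * (math.sqrt((x / delta) ** 2 + 1.0) - 1.0))

def relu(x):
    return max(x, 0.0)

delta = 0.01
xs = [i / 100.0 for i in range(-500, 501)]        # grid over [-5, 5]
gaps = [relu(x) - srelu(x, delta) for x in xs]
# The pointwise approximation error is non-negative and below delta/2.
assert all(0.0 <= g < delta / 2 for g in gaps)
```

The gap is exactly 0 at $x = 0$ and approaches (but never reaches) $\delta/2$ as $|x|$ grows, matching the strict inequality in the derivation.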
Summary: This paper proposes a differentiable module for solving the top-k problem. Specifically, the paper proposes to approximate the hinge function with a new differentiable function. Experiments on binary classification, long-tailed classification, and regression tasks with the $ST_k$ loss show some improvements over baselines such as the Average loss and the $AT_k$ loss. Strengths: (1) The paper is written clearly and easy to follow. (2) The proposed new differentiable function in Eq.(2) can approximate the hinge function. (3) The proposed method is flexible and is expected to be used in many scenarios. Weaknesses: (1) For the implementation of $AT_k$, is it implemented by a sort algorithm or just Eq. (5)? From my point of view, the performance with the $ST_k$ loss should be bounded by the performance with the $AT_k$ loss. However, Tables 1 and 4 show that $ST_k$ outperforms $AT_k$. Why is the $ST_k$ loss better than the $AT_k$ loss? (2) The proposed $ST_k$ loss is flexible and can be directly deployed on the traditional training paradigm. It would be better to validate the effectiveness of $ST_k$ on large-scale data, like ImageNet and iNaturalist. (3) The authors claim that the average loss easily overfits on imbalanced data. With the $ST_k$ loss and CLIP models, it achieves the state-of-the-art on ImageNet-LT and Places-LT. Is the $ST_k$ loss complementary to other state-of-the-art long-tailed learning methods, like GPaCo[2] and BCL[1]? [1] Balanced Contrastive Learning for Long-Tailed Visual Recognition. CVPR 2022. [2] Generalized Parametric Contrastive Learning. TPAMI 2023. (4) The models trained with the $ST_k$ loss could be optimized alternately or simultaneously with SGD for the model parameters and $\lambda$. An ablation study is encouraged to be included. Technical Quality: 2 Clarity: 3 Questions for Authors: (1) Explanation of why $ST_k$ outperforms $AT_k$. (2) The experiments on large-scale data, like ImageNet. 
(3) Is the $ST_k$ loss complementary to current state-of-the-art long-tailed learning methods. (4) Ablation for the optimization manner of model parameters and $\lambda$. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and feedback. Below is a detailed point-by-point response addressing your main concerns and questions. > Explanation of why ST$_k$ outperforms AT$_k$. * ST$_k$ and AT$_k$ are not equivalent. Reference [1] proves the equivalence between AT$_k$ and MAT$_k$, as stated in Eq. (5). * The parameter $k$ is a hyper-parameter; usually we set $k$ to 0.9 $\times$ batch size, which is a practical choice. However, it is impossible to determine the optimal $k$ for the model in advance. * During the optimization process, once $k$ is set, AT$_k$ strictly (and harshly) filters out all samples ranked beyond the ($k$+1)-th position. In contrast, ST$_k$, which uses SGD for optimization, does not always have $\lambda$ equal to the k-th largest individual loss. Therefore, in the optimization process, $\lambda$ also serves as a parameter aimed at minimizing the overall loss. [1] Ogryczak, Wlodzimierz, and Arie Tamir. "Minimizing the sum of the k largest functions in linear time." Information Processing Letters 85.3 (2003): 117-122. > Lack of experiments on large-scale data, like ImageNet. ImageNet is a highly balanced dataset, with nearly the same number of images in each of its 1,000 classes. It is not a long-tailed classification dataset. We conduct experiments on its long-tailed version, ImageNet-LT. In addition, we conduct experiments on two machine translation datasets. Due to the disparity in word frequencies, machine translation datasets are naturally long-tailed. > Is the ST$_k$ loss complementary to current state-of-the-art long-tailed learning methods. ST$_k$ represents an improvement in the aggregation method of individual losses, making it complementary to almost all existing long-tailed learning methods. PaCo/GPaCo/BCL, after obtaining a pre-trained model through contrastive learning, can utilize ST$_k$ to aggregate cross-entropy individual losses during subsequent transfer learning. 
> Ablation for the optimization manner of model parameters and $\lambda$ In this work, we use the Adam optimizer. We will consider including SGD or BCD (Block Coordinate Descent) as ablation studies. --- Rebuttal Comment 1.1: Title: Thanks for the responses from the authors Comment: Thanks for the responses from the authors. I still have the following confusion on the paper. Q1. As the authors claim that $ST_k$ can dynamically adjust the k for loss calculation, there should be some empirical or theoretical analysis to confirm the hypothesis. Q2. The proposed loss function is general and should be applicable to balanced data, like ImageNet. If not, the authors should provide more analysis on why the loss is specific to imbalanced data. How does the proposed method alleviate the overfitting issue of imbalanced data? Q3. The authors claim that the $ST_k$ complements other long-tailed algorithms. However, there is no empirical analysis to support it. Q4. Why is the $ST_k$ specific to the Adam optimizer? If the concerns can't be addressed, I prefer to keep the initial rating. --- Rebuttal 2: Comment: Thanks again for reviewing our work. As the author-reviewer discussion phase wraps up, could you let us know if our responses have addressed your concerns? If so, would you reconsider your rating? If you still have concerns, please share them so we can address them. --- Rebuttal 3: Comment: Thanks for listing all your concerns! Some of them can be explained theoretically, while others can be clarified through additional experiments. Here is the point-by-point response. > Q1. As the authors claim that ST$_k$ can dynamically adjust the k for loss calculation, there should be some empirical or theoretical analysis to confirm the hypothesis. * The optimal $\lambda^* = \ell_{[k]}$ not only serves as a filter, excluding individual losses smaller than $\ell_{[k]}$ (see $\min_\lambda k\lambda + \sum_{i=1}^n [\ell_i - \lambda]_+$). 
The $\lambda$ itself also serves as a trainable parameter aiming to minimize the loss. * By smoothing the $[\cdot]_+$ function, the optimization process becomes stable and efficient (see the standard deviation in Tables 2, 3 and 5, and the time cost in Table 1). > Q2. The proposed loss function is general and should be applicable to balanced data, like ImageNet. If not, the authors should provide more analysis on why the loss is specific to imbalanced data. How does the proposed method alleviate the overfitting issue of imbalanced data? * In this paper, we use the ST$_k$ module to smooth the Average Top-k Loss (AT$_k$ Loss) to demonstrate that ST$_k$ can be applied in deep learning scenarios. We did not conduct experiments on balanced datasets because AT$_k$ Loss was designed for imbalanced datasets [1], as its creators proved both empirically and theoretically. * The application of ST$_k$ does not require additional computational time/resources while consistently bringing performance improvements. [1] Lyu S, Fan Y, Ying Y, et al. Average top-k aggregate loss for supervised learning[J]. IEEE transactions on pattern analysis and machine intelligence, 2020, 44(1): 76-86. > Q3. The authors claim that the ST$_k$ complements other long-tailed algorithms. However, there is no empirical analysis to support it. This is a very valuable suggestion, glad that you asked! The fact is that we did conduct experiments on "how ST$_k$ complements other long-tailed algorithms." However, due to the limited space, we did not include them in the present version. Here we provide a preliminary experiment, and we will consider providing a detailed ablation study in our final version. The backbone here is a Vision Transformer (ViT) pre-trained by MAE / CLIP [2][3]. "CS" here represents cost-sensitive learning, while PEL was provided by [4], which is the SOTA method on two long-tailed classification leaderboards. 
If the model was pre-trained by MAE, we add a linear classifier after the backbone; otherwise we follow the default PEFT in [4]. The batch size was set to 2048, training 30,000 steps. The results are as follows: | Pretrained By | CS | PEL | ST$_k$ | ImageNet-LT | CIFAR-100-LT | |-------|-------|-------|-----------|-------------------|---------------------| | MAE | – | – | – | 65.701 | 78.572 | | MAE | ✓ | – | – | 69.884 | 82.135 | | MAE | ✓ | – | ✓ | 70.140 | 83.048 | | CLIP | – | ✓ | – | 78.296 | 89.103 | | CLIP | – | ✓ | ✓ | 79.148 | 89.833 | [2] He K, Chen X, Xie S, et al. Masked autoencoders are scalable vision learners[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2022: 16000-16009. [3] Radford A, Kim J W, Hallacy C, et al. Learning transferable visual models from natural language supervision[C]//International conference on machine learning. PMLR, 2021: 8748-8763. [4] Shi J X, Wei T, Zhou Z, et al. Parameter-efficient long-tailed recognition[J]. > Q4. Why is the ST$_k$ specific to the Adam optimizer? * We claimed that the proposed loss can be optimized by the most common optimizers like SGD / Adam only to stress the fact that no additional time is needed to update $\lambda$ and the model parameters $w_{model}$ separately in two steps using BCD (Block Coordinate Descent). * Empirically, ST$_k$ is not specific to Adam; it also works with SGD, AdaGrad, and BCD. The Supplementary Material we uploaded includes the source code for the experiments. If necessary, we would consider providing the results of different optimizers in our final version. Hope these explanations address your concerns. If you have any confusion, please feel free to reach out to us anytime. --- Rebuttal 4: Comment: We carefully reconsidered your questions. We would like to further clarify our intentions. **For your questions:** > Q2. The proposed loss function is general and should be applicable to balanced data, like ImageNet. 
If not, the authors should provide more analysis on why the loss is specific to imbalanced data. How does the proposed method alleviate the overfitting issue of imbalanced data? * The curves in Figure 3 clearly reflect that as the degree of imbalance increases, the Average Loss continuously misguides the model to shift the decision boundary toward the minority class. * In contrast, the AT$_k$ Loss prevents the model from shifting by ignoring those individual losses with smaller values. * However, the discontinuity affected the stability of the optimization process of $\lambda$. Therefore, we need to find an objective function * with a continuous gradient, * that at the same time sufficiently approximates the Average Top-k Loss $\frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n [\ell_i - \lambda]_+$. * ST$_k$ achieves exactly this. Theoretically, the approximation error can be bounded by $\delta/2, \forall \lambda, w_{model}$. This is **uniform convergence**, which is known to be a very strong condition. (Because the bound holds for each fixed $\delta \in \mathbb{R}$, we called it point-wise convergence in the paper, which may have caused some misunderstanding.) * With the help of ST$_k$, models show consistent performance improvements on both synthetic and real-world datasets.
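To make the aggregation concrete, here is a minimal standard-library Python sketch (function names and the synthetic losses are illustrative): at $\lambda^* = \ell_{[k]}$, the ST$_k$ objective matches the exact AT$_k$ loss up to the $\delta/2$ smoothing error.

```python
import math
import random

def stk_loss(losses, lam, k, delta=0.01):
    # ST_k aggregation: k*lam/n + (1/n) * sum_i SReLU(l_i - lam),
    # where lam would be a trainable scalar optimized jointly with the
    # model parameters; here we just evaluate it at a fixed lam.
    n = len(losses)
    srelu = lambda x: 0.5 * (x + delta * (math.sqrt((x / delta) ** 2 + 1.0) - 1.0))
    return k * lam / n + sum(srelu(l - lam) for l in losses) / n

def atk_loss(losses, k):
    # Exact Average Top-k loss for comparison.
    return sum(sorted(losses, reverse=True)[:k]) / len(losses)

rng = random.Random(1)
losses = [abs(rng.gauss(1.0, 0.5)) for _ in range(256)]
k = int(0.9 * len(losses))                      # k = 0.9 * batch size
lam_star = sorted(losses, reverse=True)[k - 1]  # k-th largest loss
gap = atk_loss(losses, k) - stk_loss(losses, lam_star, k)
assert 0.0 <= gap < 0.01 / 2                    # within the delta/2 bound
```

In actual training, $\lambda$ need not sit exactly at $\ell_{[k]}$; it is updated by the optimizer together with $w_{model}$, which is the flexibility the rebuttal argues for.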
Summary: The authors proposed a differentiable layer to approximate the top-k loss in deep learning. The proposed layer is motivated from Eq. (1), replacing the ReLU with a smoothed ReLU (SReLU in Eq. (2)). The authors showed that the proposed layer is point-wise convergent to the top-k loss. Numerical experiments validate the superiority of the proposed method.

Strengths:
* The paper is overall well written and easy to follow. Experiments on real-world datasets are provided, in addition to simulated results.
* A fast, accurate approximation of the top-k loss is important in deep ranking problems.
* Experiments show that the proposed method is overall better than baseline methods.

Weaknesses:
* The biggest concern is that most of the real-world experiments are done on small-scale datasets. It would be more convincing if the authors could demonstrate their method on large-scale ranking problems, especially large deep learning models.
* The proposed method lacks strong theoretical guarantees. Point-wise convergence is somehow weak. As is easy to expect, the convergence rate and error bound of the proposed method should depend on the data distribution, which is not discussed in the paper. It would be nice if the authors could show under what conditions the proposed method will converge faster than $AT_k$ and/or with smaller approximation error.
* It is important to check how well the proposed method approximates the top-k loss in real-world problems. However, most real-world experiments report model accuracy, which is an indirect metric of top-k loss approximation. Please consider measuring the top-k loss approximation error directly.

Technical Quality: 2 Clarity: 2

Questions for Authors:
* In real-world problems, is it possible to design experiments to measure the top-k loss approximation error directly?
* What is the theoretical advantage of $ST_k$ vs.
$AT_k$? Confidence: 1 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your feedback.

**For your concerns:**

> Limited dataset scale.

We hold a different view on this.
* We conduct experiments on datasets such as ImageNet-LT and Places-LT, which, to the best of our knowledge, are the largest imbalanced visual classification datasets available.
* Additionally, we conduct experiments on two machine translation tasks using the Transformer-Base model, which has 0.11B trainable parameters, a scale approaching that of Large Language Models (LLMs).
* In recent years, as model sizes have become increasingly larger, some works that contain large-scale experiments have become hard to follow for the community. We advocate for conclusions that can be validated at appropriate sizes, without the need for validation at the level of LLMs.

> Point-wise convergence is theoretically weak; expect more convincing experimental evidence.

* In Table 1, ST$_k$ not only guides the model to achieve higher accuracy but also converges faster compared to AT$_k$.
* We experimented with the time cost of a few common sorting algorithms, AT$_k$, and ST$_k$ for computing the ranking average. The specific scheme of the experiment is to find the top-k (k=5) sum from 10,000 normally distributed samples. For AT$_k$ and ST$_k$, we iterate until the error is less than $10^{-2}$; for each algorithm, we conduct 50 runs and record the average time taken in the third column of the table below.

| Algorithm | Complexity | Average Time (s) |
|--------------|--------------------|-----------------|
| BubbleSort | $\mathcal{O}(n^2)$ | 20.42196$\pm$3.7015 |
| HeapSort | $\mathcal{O}(n\log(n))$ | 0.1243$\pm$0.0446 |
| AT$_k$ | $\mathcal{O}(n+k)$ | 0.2167$\pm$0.1528 |
| ST$_k$ (Ours) | $\mathcal{O}(n+k)$ | **0.0127**$\pm$0.0020 |

In these experiments, ST$_k$ is the fastest algorithm for computing the ranking average value; on the other hand, AT$_k$ lacks robustness in training due to its gradient discontinuity.
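The experimental scheme above (top-k sum, k=5, over 10,000 normally distributed samples) can be reproduced in outline; here `heapq.nlargest` stands in for the heap-based baseline (the library choice is ours, for illustration only).

```python
import heapq
import numpy as np

rng = np.random.default_rng(0)
samples = rng.normal(size=10_000)  # 10,000 normally distributed samples
k = 5

# Exact top-k sum via a full sort, O(n log n).
topk_sorted = float(np.sort(samples)[-k:].sum())

# Same quantity via a size-bounded heap, O(n log k).
topk_heap = float(sum(heapq.nlargest(k, samples)))

assert abs(topk_sorted - topk_heap) < 1e-9
```

ST$_k$ instead estimates this quantity by gradient steps on the smoothed objective, which is what the rebuttal's timing table compares against.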
> Consider measuring the top-k loss approximation error directly.

Unlike synthetic datasets, real-world problems do not have an explicit theoretical decision boundary. Therefore, we cannot provide a measure like the ParaF1 score in Table 1, which assesses the overlap between the theoretical decision boundary and its estimation. We will consider including Precision, Recall, or F1-score in the final version, but these metrics would only serve as a supplement to Accuracy and cannot directly show the approximation performance.

**For your Questions:**

> In real-world problems, is it possible to design experiments to measure the top-k loss approximation error directly?

Here comes a more fundamental question: in real-world classification problems, like ImageNet classification, does an optimal model exist that minimizes the loss? Even if such an optimal model exists, it is likely that we cannot compute it with our finite resources, which is precisely why algorithms like Gradient Descent are so valuable. They allow us to approximate the optimal solution efficiently without the need to exhaustively search all possibilities.

> What is the theoretical advantage of ST$_k$ vs. AT$_k$?

ST$_k$ converges faster theoretically.

---

Rebuttal 2:

Comment: Thanks for your insightful suggestions. Hope our explanations address your concerns. Based on the feedback, we are glad to inform you that we have included 2 additional experiments.

**1.** We present a preliminary ablation over the values of the smoothing coefficient $\delta$ on CIFAR-100-LT. All other settings remain the same as those in the paper, with only $\delta$ being changed.

| $\delta$ | 0.1 | 0.05 | 0.01 | 0.005 |
|--------------|-------|--------|-------|--------|
| Accuracy | 88.8 | 89.4 | 89.8 | 89.8 |

**2.** We conduct experiments on "how ST$_k$ complements other long-tailed algorithms." The backbone here is a Vision Transformer (ViT) pre-trained by MAE / CLIP [1][2].
"CS" here represents cost-sensitive learning, while PEL was proposed by [3], which is the SOTA method on two long-tailed classification leaderboards. If the model was pre-trained by MAE, we add a Linear Classifier after the backbone; otherwise we follow the default PEFT in [3]. The batch size was set to 2048, training for 30,000 steps. The results are as follows:

| Pretrained By | CS | PEL | ST$_k$ | ImageNet-LT | CIFAR-100-LT |
|-------|-------|-------|-----------|-------------------|---------------------|
| MAE | – | – | – | 65.701 | 78.572 |
| MAE | ✓ | – | – | 69.884 | 82.135 |
| MAE | ✓ | – | ✓ | 70.140 | 83.048 |
| CLIP | – | ✓ | – | 78.296 | 89.103 |
| CLIP | – | ✓ | ✓ | 79.148 | 89.833 |

**[1]** He K, Chen X, Xie S, et al. Masked autoencoders are scalable vision learners[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 16000-16009. **[2]** Radford A, Kim J W, Hallacy C, et al. Learning transferable visual models from natural language supervision[C]//International Conference on Machine Learning. PMLR, 2021: 8748-8763. **[3]** Shi J X, Wei T, Zhou Z, et al. Parameter-efficient long-tailed recognition[J].

**3.** The explanations we provided to Reviewers `SwHE` and `46Ey` (we appreciate the concerns they raised) may appropriately answer your question **"What is the theoretical advantage of ST$_k$ vs. AT$_k$?"**

* The motivation here is that the discontinuity has affected the stability of the optimization process of the parameter $\lambda$. Therefore, we need to find a smooth objective function to approximate: $\min_\lambda \frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n [\ell_i - \lambda]_+$
* Other ReLU variants cannot sufficiently approximate ReLU, as they are only similar in shape. Therefore, we designed SReLU and proved that the approximation error can be bounded by $\delta / 2$ (see page 12, A.1).
$$
\begin{aligned}
&\left[ \frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n [\ell_i - \lambda]_+ \right] - \left[ \frac{k \lambda}{n} + \frac{1}{n}\sum_{i=1}^n \frac{1}{2}\left((\ell_i-\lambda) + \delta \left( \sqrt{\frac{(\ell_i-\lambda)^2}{\delta^2}+1}-1 \right)\right) \right] \\
&= \sum_{i=1}^n \frac{1}{2n} \left[ \sqrt{(\ell_i-\lambda)^2} - \sqrt{(\ell_i-\lambda)^2 + \delta^2} + \delta \right] \\
&= \sum_{i=1}^n \frac{1}{2n} \left[ \delta - \frac{\delta^2}{\sqrt{(\ell_i-\lambda)^2} + \sqrt{(\ell_i-\lambda)^2 + \delta^2}} \right] \\
&= \sum_{i=1}^n \frac{\delta}{2n} \left( 1 - \frac{1}{\sqrt{\frac{(\ell_i-\lambda)^2}{\delta^2}} + \sqrt{\frac{(\ell_i-\lambda)^2}{\delta^2} + 1}} \right) < \frac{\delta}{2}
\end{aligned}
$$

* The smoothed objective function is jointly convex w.r.t. $(w_{model}, \lambda)$, making ST$_k$ a special case of the non-linear multiple choice knapsack problems, with at most $q=2$ roots, which can be found in constant time. Thus, the problem can be solved in $\mathcal{O}(n \ln q)=\mathcal{O}(n)$ time, as $q$ is fixed [4] (Section 4, "non-linear case").
* Experiments have shown that, empirically, we do not even need to spend additional time optimizing $\lambda$ separately. Instead, by optimizing it synchronously with the model parameters $w_{model}$, we can still achieve performance improvements.

**[4]** Zemel E. An O(n) algorithm for the linear multiple choice knapsack problem and related problems[J]. Information Processing Letters, 1984, 18(3): 123-128. **[5]** N. Megiddo. Linear programming in linear time when the dimension is fixed. J. ACM 31 (1984): 114-127. **[6]** Ogryczak, Wlodzimierz, and Arie Tamir. "Minimizing the sum of the k largest functions in linear time." Information Processing Letters 85.3 (2003): 117-122.

* By smoothing the $[\cdot]_+$ function, the optimization process becomes stable and efficient (see the standard deviations in Tables 2, 3 and 5, and the time cost in Table 1).

---

Rebuttal 3:

Comment: Thanks again for reviewing our work.
After carefully considering all your suggestions, we realized there is one more thing we need to clarify. We hold a different view on your concern that **"point-wise convergence is somehow weak."** The approximation error between the smoothed loss function and the original loss function can be uniformly bounded by $\delta/2$ for all $(\lambda, w_{model})$ (see page 12, A.1), which means this is **uniform convergence**, a very strong condition. We sincerely hope you may reconsider whether our response fully addresses your concern. Best wishes, Authors.
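The $\delta/2$ bound discussed in this thread is easy to verify numerically. Assuming SReLU takes the form implied by the derivation, $\mathrm{SReLU}(x) = \tfrac{1}{2}\big(x + \sqrt{x^2+\delta^2} - \delta\big)$, the gap to ReLU is non-negative and stays strictly below $\delta/2$ for every input (a sketch we constructed, not the authors' code).

```python
import numpy as np

delta = 0.01

def srelu(x, d=delta):
    # Smoothed ReLU implied by the derivation: 0.5 * (x + sqrt(x^2 + d^2) - d)
    return 0.5 * (x + np.sqrt(x * x + d * d) - d)

x = np.linspace(-100.0, 100.0, 400_001)
gap = np.maximum(x, 0.0) - srelu(x)  # ReLU(x) - SReLU(x)

assert np.all(gap >= -1e-12)    # SReLU lower-bounds ReLU
assert np.all(gap < delta / 2)  # uniform delta/2 bound, approached as |x| grows
```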
Summary: The authors introduce a novel differentiable module (ST$_k$) for efficiently solving top-k problems. Their method relies on optimizing a differentiable form of an equivalent optimization problem, by proposing an approximation of ReLU that is differentiable everywhere. This equivalent optimization problem contains a single parameter $\lambda$, which is supposed to match the k-th largest element in the optimal case and is, in practice, optimized over the dataset. The authors experiment on synthetic data, showing that their loss objective results in a model that more closely matches the true decision boundary in this setting. Next, the authors experiment with real-world applications on binary classification and long-tailed classification datasets, showing consistent improvement over other optimization losses.

Strengths:
- The authors propose an efficient differentiable solution to the top-k problem that integrates very easily into existing model architectures and has only a single optimizable parameter.
- The authors show consistent improvements both in synthetic and real-world tasks compared to SOTA methods.
- The paper is well written, and the authors provide ample motivation for their design choices.

Weaknesses:
- Although the improvements shown by the authors are consistent, they are also marginal, improving only slightly on, for example, CIFAR-100-LT and ImageNet-LT classification compared to average aggregation.
- As the authors mention, they only apply their module to a single top-k loss, AT$_k$, making it unclear how ST$_k$ could be applied in other scenarios and how well it generalizes to different settings.

Technical Quality: 3 Clarity: 3

Questions for Authors:
- How do you tune the smoothing coefficient in practice? Is this a hyperparameter that needs to be swept over? That would somewhat deteriorate the efficiency of your module.
I would like to see an ablation over the values of $\delta$; I think it is important to see what impact, if any, this choice has on model performance.
- Did you experience any training instabilities with the two-step optimization scheme?

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3

Limitations: The authors mention as a limitation the fact that they only applied ST$_k$ to the average top-k loss. I think it would be good if they more explicitly mentioned future research directions that they are interested in / that would be of value (there is space in the manuscript).

Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your positive feedback is very encouraging.

**For your concerns:**

> Although the improvements shown by the authors are consistent, they are also marginal - only improving slightly in for example CIFAR-100-LT classification and ImageNet-LT classification compared to average aggregation.

We acknowledge that the performance improvement brought by the smoothed Average Top-k Loss is not large.
* In the field of Computer Vision, classification accuracy is an extensively explored direction. In fact, any improvement is a further advancement beyond an already high baseline.
* By incorporating a single trainable parameter into the computation graph, ST$_k$ achieves performance enhancements without requiring additional computational resources or time, thereby making the proposed module energy-efficient.
* Additionally, the adjustment to the aggregation of the individual losses is compatible with almost all existing methods for handling imbalanced data, which is likely to attract widespread attention from the community focused on improving classification accuracy.

> As authors mention, they only apply their module to a single Top-K Loss; ATK, making it unclear how ST_k could be applied in other scenarios and how well it generalizes to different settings.

ST$_k$ can be applied end-to-end to models in any scenario that involves a top-k problem (see page 3, lines 87-89).

**For your questions:**

> How do you tune the smoothing coefficient $\delta$ in practice? Is this a hyperparameter that needs to be sweeped over? That would somewhat deteriorate the efficiency of your module. I would like to see an ablation over the values of delta, i think it is important to see what/if any impact this choice has on model performance.

* The smoothing coefficient $\delta = 0.01$ is a value determined by grid search for all datasets in Section 5.1 (see page 7, line 186) and was adopted in all the other experiments.
* We believe that the ablation study is necessary and will include it in our final version. Thanks for the insightful suggestion.

> Did you experience any training instabilities with the two-step optimization scheme?

* As mentioned above (see page 1, line 25), in the experiments conducted in this paper, ST$_k$ does not require a two-step optimization using block coordinate descent; it can synchronously update $\lambda$ and the model parameters using SGD/Adam.
* Due to the elimination of points where gradients vanish, the training process is very stable, as evidenced by the standard deviations listed in Tables 2, 3, and 5.

---

Rebuttal Comment 1.1:

Comment: I thank the authors for their response. My concerns have been answered; I increase my recommendation.

---

Rebuttal 2:

Comment: We really appreciate your recommendation! Here we present a preliminary ablation over the values of the smoothing coefficient $\delta$ on CIFAR-100-LT. All other settings remain the same as those in the paper, with only $\delta$ being changed.

| $\delta$ | 0.1 | 0.05 | 0.01 | 0.005 |
|--------------|-------|--------|-------|--------|
| Accuracy | 88.8 | 89.4 | 89.8 | 89.8 |

Integral experiments will be included in our final version. Thanks again for your suggestions.
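As a small illustration of the synchronous update discussed in this thread (our own numpy sketch, not the authors' code): running plain gradient descent on $\lambda$ for the smoothed objective $\frac{k\lambda}{n} + \frac{1}{n}\sum_i \mathrm{SReLU}(\ell_i - \lambda)$, with the individual losses $\ell_i$ held fixed, drives $\lambda$ into the neighbourhood of the $k$-th largest loss, as the theory predicts.

```python
import numpy as np

rng = np.random.default_rng(0)
losses = rng.exponential(1.0, 100)  # fixed individual losses l_i
n, k, delta = losses.size, 5, 0.01

def srelu_grad(x):
    # derivative of SReLU(x) = 0.5 * (x + sqrt(x^2 + delta^2) - delta)
    return 0.5 * (1.0 + x / np.sqrt(x * x + delta * delta))

lam = losses.mean()
for _ in range(20_000):
    # d/d(lambda) of  k*lam/n + (1/n) * sum_i SReLU(l_i - lam)
    grad = k / n - srelu_grad(losses - lam).mean()
    lam -= 0.2 * grad

desc = np.sort(losses)[::-1]
assert abs(grad) < 1e-2         # a near-stationary point was reached
assert desc[6] < lam < desc[3]  # lambda sits around the k-th largest loss
```

In a real training loop the same gradient step on $\lambda$ would simply run alongside the updates of the model parameters, which is the "synchronous" scheme the rebuttal describes.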
Rebuttal 1: Rebuttal: We appreciate all your comments. Here is a summary of the strengths and a general response to the concerns received:

**Strengths:** Firstly, we appreciate that all reviewers found our paper clearly written and easy to follow.
- Reviewer 8VJU noted that our framework is efficient and integrates very easily into existing model architectures, and found our motivation ample.
- Reviewer uCvC found our method to be fast and overall better than baseline methods.
- Reviewer 46Ey thought our method was flexible and expected it to be used in many scenarios.
- Reviewer SwHE observed that we conducted extensive experiments, using both synthetic and real-world datasets.

**Concerns:**

> The limited scale of the dataset:
* We conduct experiments on datasets such as ImageNet-LT and Places-LT, which, to our knowledge, are among the largest imbalanced visual classification datasets available.
* Additionally, we have conducted experiments on two machine translation tasks using the Transformer-Base model, which has 0.11B trainable parameters.
* In recent years, as model sizes have become increasingly larger, some works that contain large-scale experiments have become hard to follow for the community. We advocate for conclusions that can be validated at appropriate sizes, without the need for validation at the level of LLMs.

> To add more ablation studies and time-cost experiments:
We added 2 additional experiments during the rebuttal and will consider including more experiments in our final version.
NeurIPS_2024_submissions_huggingface
2024
Realizable $H$-Consistent and Bayes-Consistent Loss Functions for Learning to Defer
Accept (poster)
Summary: This work studies learning to defer, focusing on the single-stage and single-expert setting. It introduces a family of surrogate losses based on comp-sum losses [Mao et al., 2023b] and establishes their realizable H-consistency (under mild conditions). In addition, when the base loss is the logistic loss $\Psi_{log}$ or the generalized cross entropy loss $\Psi_{gce}$, H-consistency bounds are proven (under mild conditions) when the cost function of deferring is classification error. When the base loss is the mean absolute error loss $\Psi_{mae}$, H-consistency bounds are proven (under mild conditions) when the cost function of deferring is general. Note that H-consistency implies Bayes consistency. The results also close an open question raised by Mozannar et al., 2023. The relationship between realizable H-consistency and H-consistency bounds is then analyzed for learning to defer. Finally, the proposed surrogates are evaluated empirically and compared with existing baselines.

Strengths:
**Originality**
- A new family of surrogate losses based on comp-sum losses (focusing on $\Psi_{log}$, $\Psi_{gce}$, and $\Psi_{mae}$) [Mao et al., 2023b] for learning to defer (the single-stage and single-expert setting).
- Conditions for realizable H-consistency are identified (Theorem 4.1). In particular, $\Psi_{log}$, $\Psi_{gce}$, and $\Psi_{mae}$ all satisfy the conditions. This result is more general than Mozannar et al. [2023].
- When the cost function of deferring is classification error, H-consistency bounds are proven for base losses $\Psi_{log}$ and $\Psi_{gce}$ (Theorem 4.2).
- When the cost function of deferring is general, an H-consistency bound is proven for base loss $\Psi_{mae}$ (Theorem 4.3).

The results also close an open question raised by Mozannar et al., 2023 (Section 4.4). The relationship between realizable H-consistency and H-consistency bounds is then analyzed for learning to defer (Section 5). Related work is adequately cited and compared.
**Quality** The submission is technically sound. Claims are well supported by proofs. **Clarity** The submission is generally clearly written and well organized. **Significance** The work closes an open question raised in previous work. Its work to connect both realizable H-consistency and H-consistency bounds might be useful in other learning settings. Weaknesses: **Originality** - The realizable H-consistency result (Theorem 4.1) only applies to a subset of comp-sum losses. - The H-consistency results (Theorems 4.2 and 4.3) require case-by-case analysis. Is it possible to prove such results for all comp-sum losses? **Quality** - Some experiments in realizable settings can help confirm the realizable H-consistency result. **Clarity** - Section 5 is a bit unclear. Lines 300-302: Do you mean that in the realizable setting, all surrogate minimizability gaps vanish? Or is it only for comp-sum losses? **Significance** - The applicability of this work might be limited to learning to defer. Technical Quality: 3 Clarity: 3 Questions for Authors: Besides my concerns above, here are some other questions: 1. Can constrained losses [1] be used as the base loss? 2. Lines 273-274: Can negative results be proven formally? [1] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines: Theory and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 99(465):67–81, 2004. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Weaknesses:** **1. The realizable H-consistency result (Theorem 4.1) only applies to a subset of comp-sum losses.** **Response:** We would like to clarify that the assumption regarding the function $\Phi$ in Theorem 4.1 is mild. The condition that $\Psi$ is non-increasing, with $\Psi(\frac{2}{3}) > 0$ and $\lim_{t \to 1} \Psi(t) = 0$, is satisfied by all common comp-sum losses in practice. **2. The H-consistency results (Theorems 4.2 and 4.3) require case-by-case analysis. Is it possible to prove such results for all comp-sum losses?** **Response:** That's a great question. We expect that our proof ideas can potentially be extended to other comp-sum losses. First, we demonstrate that for any hypothesis $h$ and input $x$, if $y_{\max}$, the label with the highest conditional probability, is not the predicted label $h_{\max}$, the conditional error of $h$ is lower bounded by a modified hypothesis $\overline{h}$ (obtained by swapping the scores of $y_{\max}$ and $h_{\max}$). Then, we show that for hypotheses where $y_{\max} = h_{\max}$, we lower bound their conditional regret in terms of the conditional regret of the deferral loss using a new hypothesis $h_{\mu}$. However, the proof of establishing lower bounds in each case depends on the forms of the comp-sum losses and indeed requires a case-by-case analysis. Presenting a unified analysis and extending these results to all comp-sum losses is left as future work. Nevertheless, our results have included the most widely used comp-sum losses in practice, such as logistic loss, generalized cross-entropy loss, and mean absolute error loss. **3. Some experiments in realizable settings can help confirm the realizable H-consistency result.** **Response:** Thank you for the feedback. 
We have included the additional experiment in the realizable case. The additional experimental result (Figure 1 in the global response) shows that our surrogate loss $\mathsf L_{\mathrm{RL2D}}$ with $q = 0.7$ and $q = 1$ are realizable $H$-consistent, while $\mathsf L_{\mathrm{CE}}$, $\mathsf L_{\mathrm{OVA}}$ and $\mathsf L_{\mathrm{general}}$ are not. This validates our theory. **4. Section 5 is a bit unclear. Lines 300-302: Do you mean that in the realizable setting, all surrogate minimizability gaps vanish? Or is it only for comp-sum losses?** **Response:** In standard classification, minimizability gaps vanish under the realizable assumption for common multi-class surrogate losses such as max losses [Crammer and Singer, 2001], sum losses [Weston and Watkins, 1999], and comp-sum losses under mild assumptions. We will further clarify this in the final version. [1] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of machine learning research, 2(Dec):265–292, 2001. [2] J. Weston and C. Watkins. Support vector machines for multi-class pattern recognition. European Symposium on Artificial Neural Networks, 4(6), 1999. **5. The applicability of this work might be limited to learning to defer.** **Response:** While the proof techniques and methods may not extend directly to other settings beyond L2D, we believe our work connecting realizable $H$-consistency, $H$-consistency bounds, and Bayes-consistency could provide valuable insights into these consistency notions in other learning settings. In particular, these connections, along with our approach of constructively replacing indicator functions with smooth loss functions in the novel derivation of new surrogate losses from first principles, could help design loss functions that benefit from strong consistency guarantees in various scenarios. **Questions:** **1. 
Can constrained losses [1] be used as the base loss?** **Response:** That's an excellent question. We expect that the constrained losses can also be used to replace indicator functions in the deferral loss, based on a similar approach. Briefly, we expect that the first indicator function can be replaced by the standard constrained loss and the second indicator function can be replaced by a modified constrained loss. Establishing realizable $H$-consistency and $H$-consistency bounds for this new family of surrogate losses can be a very interesting future direction. We will elaborate on this insightful point brought up by the reviewer in the final version. **2. Lines 273-274: Can negative results be proven formally?** **Response:** We will seek to derive such counterexamples in the final version. This will involve carefully designing the conditional distribution, the expert, and the cost function to violate the more stringent condition for $H$-consistency in the additional cases due to the general cost function. --- Rebuttal Comment 1.1: Title: Increase my rating to 7 Comment: Thank you for the detailed responses to my questions and concerns, as well as those in other reviews. I am satisfied with the reply and have increased my rating to 7. --- Reply to Comment 1.1.1: Comment: We are glad to have addressed the reviewer's questions and sincerely appreciate their updated rating, insightful feedback, and recognition of our work. Please let us know if there is any other question.
Summary: This paper considers the problem of learning to defer (L2D), where a classifier is allowed to defer a decision to an expert (possibly expensive to query) and trained to accurately predict while minimising the expert cost. A major contribution of this paper is establishing consistency guarantees for surrogate losses. Specifically, the authors derive a loss that encompasses existing surrogate losses. The authors provide sufficient conditions for the proposed surrogate loss to have realisable H-consistency and for H-consistency bounds. From the derived H-consistency bounds, the authors show that certain choices of the expert cost lead to a surrogate loss that is both Bayes-consistent and realisable H-consistent.

Strengths: The paper is very well written and does not have major issues in clarity. This paper seems to provide a general framework for analysing surrogate losses for L2D, and the established results appear to be novel. I am not actively working on L2D and my evaluation might not be accurate.

Weaknesses: Due to my lack of experience in the field, I find it challenging to adequately adjudicate the significance of the work. Realisable H-consistency effectively assumes that there is a perfect classifier that does not need an expert. This proposition seems to be strong and diminishes the point of having an expert.

Technical Quality: 3 Clarity: 3

Questions for Authors: 1. Why is H-consistency important in L2D?

Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
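For concreteness, the single-expert deferral (target) loss discussed in this thread, with deferral cost $c(x, y) = 1_{g(x) \neq y}$, can be sketched as follows (a minimal illustration we constructed; the `DEFER` sentinel is our notation, not the paper's).

```python
# Deferral (target) loss for single-expert L2D with cost c(x, y) = 1[g(x) != y]:
# the system pays the classifier's 0-1 error when it predicts itself,
# and the expert's 0-1 error when it defers.
DEFER = -1  # sentinel marking the deferral option (our notation)

def deferral_loss(model_pred: int, expert_pred: int, y: int) -> int:
    if model_pred == DEFER:
        return int(expert_pred != y)  # cost of deferring: expert's mistake
    return int(model_pred != y)       # standard 0-1 classification loss

# Realizable case: on every instance, either the model predicts correctly,
# or it defers and the expert is correct -> zero deferral loss overall.
assert deferral_loss(model_pred=3, expert_pred=0, y=3) == 0      # model correct
assert deferral_loss(model_pred=DEFER, expert_pred=7, y=7) == 0  # defer, expert correct
assert deferral_loss(model_pred=DEFER, expert_pred=1, y=7) == 1  # defer, expert wrong
```

Since this loss is discontinuous and intractable to optimize directly over rich hypothesis sets, the paper studies smooth surrogates for it with consistency guarantees.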
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Weaknesses: Due to my lack of experience in the field, I find it challenging to adequately adjudicate the significance of the work. Realizable H consistency effectively assumes that there is a perfect classifier that does not need an expert. This preposition seems to be strong and diminishes the point of having an expert.** **Response:** The notion of realizable $H$-consistency was first studied by Long and Servedio (2013) and Zhang and Agarwal (2020) in the standard classification setting. There, it means that under the assumption there is a classifier that achieves zero multi-class zero-one loss, minimizing the surrogate loss also leads to a zero-error solution. This notion was extended to the Learning to Defer (L2D) setting by Mozannar et al. (2023). Here, it implies that under the assumption that there is a predictor (a standard classifier augmented with a deferral option) and an expert who achieve zero deferral loss, minimizing the surrogate loss also results in a zero-error solution. Note that in the L2D setting, the assumption does not imply "there is a perfect classifier that does not need an expert." Rather, it means that for every instance $x$, either deferral does not occur and the predictor achieves the zero multi-class zero-one loss, or deferral occurs but the expert achieves zero cost $c(x, y) = 0$. When the cost is defined as the expert’s classification error: $c(x, y) = 1_{\mathsf{g}(x) \neq y}$, as in previous work by Mozannar and Sontag (2020), Verma and Nalisnick (2022), and Mozannar et al. (2023), it implies that the expert achieves a zero multi-class zero-one loss solution on the instance $x$ where deferral occurs. This does not diminish the point of having an expert. 
Instead, it effectively assumes that there is a good balance between a standard classifier, a deferral option, and an expert such that for every instance $x$, the deferral loss is zero. **Questions: Why is H consistency important in L2D?** **Response:** Directly optimizing the deferral loss function, which is the target loss in L2D, is generally computationally intractable for complex hypothesis sets $H$. Therefore, a common approach is to optimize a surrogate loss that facilitates the optimization of the deferral loss function. But, what guarantees can we rely on when minimizing a surrogate loss of the deferral loss? This is a fundamental question with significant implications for L2D. A desirable guarantee often referred to in this context is Bayes-consistency. It means that optimizing the surrogate loss over the family of all measurable functions leads to the minimization of the deferral loss over the same family. However, while Bayes-consistency is valuable, it is not sufficiently informative, as it is established for the family of all measurable functions and disregards the crucial role played by restricted hypothesis sets in learning, such as a family of linear models or neural networks. As pointed out by Long and Servedio [2013], in some cases, minimizing Bayes-consistent losses can result in constant expected error, while minimizing inconsistent losses can yield an expected loss approaching zero. To address this limitation, the authors introduced the concept of realizable $H$-consistency, further explored by Zhang and Agarwal [2020] and more recently, by Mozannar et al. (2023) in L2D. Nonetheless, these guarantees are only asymptotic and do not provide guarantees for approximate minimizers. Recent research by Awasthi et al. [2022b,a] has instead introduced and analyzed $H$-consistency bounds, further explored by Mao et al. [2023a, 2024a] in L2D. 
These bounds are more informative than Bayes-consistency since they are hypothesis set-specific and non-asymptotic. Crucially, they provide upper bounds on the estimation error of the target loss, for example, the deferral loss, that hold for any predictor $h$ within a hypothesis set $H$. These bounds relate this estimation error to the surrogate loss estimation error. Realizable $H$-consistency and $H$-consistency bounds were first proposed and extensively studied in standard classification. $H$-consistency bounds imply Bayes-consistency and realizable $H$-consistency in the standard classification setting and represent a state-of-the-art consistency guarantee since they are both non-asymptotic and hypothesis set-dependent. As with Bayes-consistency, which has been studied in a variety of scenarios, it is natural to study the stronger $H$-consistency in scenarios including L2D. Although both realizable $H$-consistency and $H$-consistency bounds have been explored in the L2D setting, no existing surrogate loss achieves realizable $H$-consistency and $H$-consistency bounds (including Bayes-consistency) simultaneously. Furthermore, the connection between these consistency notions is not as clear as in standard classification. Our work fills this gap by presenting a comprehensive study of these consistency notions and surrogate loss functions for L2D. We introduce a new family of surrogate losses and establish their realizable $H$-consistency and $H$-consistency bounds under different cases. Our results also resolve an open question raised in previous work [Mozannar et al., 2023] by proving the realizable $H$-consistency and Bayes-consistency of a specific surrogate loss. We further investigate the relationship between $H$-consistency bounds and realizable $H$-consistency in L2D, highlighting key differences from standard classification. 
Furthermore, these $H$-consistency guarantees provide significant *algorithmic benefits* when minimizing our new surrogate loss functions, as illustrated by our experiments. We refer the reviewer to our response to Question 1 of Reviewer Nrap for more details on the superiority of our surrogate losses compared to previous work. --- Rebuttal 2: Comment: Thank you for your clarification. I assumed $c(x, y)$ included the cost of deferral (as in $1_{g(x)\neq y} + \beta$ with $\beta > 0$) not just the loss incurred by a mistake by the expert, which caused my confusion. I agree that "deferral occurs but the expert achieves zero cost" holds for a loss like $1_{g(x) \neq y}$ --- Rebuttal Comment 2.1: Comment: We are glad that the concerns have been addressed and appreciate the reviewer's support for our work. Please let us know if there is any other question.
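For concreteness, the deferral loss discussed in this exchange can be written out explicitly. The sketch below uses the standard predictor-rejector formulation of Mozannar and Sontag (2020); the paper's own Eq. (2) is an equivalent reformulation that we do not reproduce here, and the notation $h$, $r$, $\mathsf{g}$, $\beta$ follows the thread:

```latex
% Deferral loss in the predictor-rejector form (r(x) = 1 means defer):
\[
  \ell_{\mathrm{def}}(h, r, x, y)
    = \mathbb{1}_{h(x) \neq y}\,\mathbb{1}_{r(x) = 0}
    + c(x, y)\,\mathbb{1}_{r(x) = 1},
  \qquad
  c(x, y) = \mathbb{1}_{\mathsf{g}(x) \neq y} + \beta .
\]
% Realizable zero-deferral-loss condition: there exist (h, r) such that,
% for every x, either r(x) = 0 and h(x) = y, or r(x) = 1 and c(x, y) = 0.
% With beta > 0 the deferred branch can never attain zero cost, which is
% exactly the point clarified in Rebuttal 2 above.
```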
Summary: The authors provide a framework of surrogate loss functions for learning to defer under the multi-class classification problem. By examining the deferral loss function and choosing different surrogates for the indicator functions, the authors provide a novel class of surrogate loss functions for learning to defer and prove the realizable $\mathcal{H}$-consistency and $\mathcal{H}$-consistency bounds for some cases. The authors also verify their proposed loss with numerical results. Strengths: 1. The analysis is novel and provides new insights for the learning to defer literature; 2. The paper addresses the problem of achieving $\mathcal{H}$-consistency and realizable $\mathcal{H}$-consistency simultaneously. Weaknesses: 1. (Major) The proposed loss is not practical in many critical settings when the cost to consult an expert is high; both terms of the derived loss function contain the exact value of $c(x, y)$, while knowing $c(x, y)$ itself requires consulting the expert if $c(x, y)$ is related to the expert's response. Thus, training a model using the proposed surrogate loss requires querying not only the true label but also the expert's response for every single sample, which is somewhat against the motivation of the learning to defer problem. See the derivation of equation (2). (Addressed, improve my rating to 6) 2. (Minor) Some of the results are restricted to the case $c(x,y) = 1_{g(x) \neq y}$, which is limited compared to the vast problem settings of learning to defer; 3. (Minor) The presentation is a little confusing: there are three consistencies (Bayes consistency, $\mathcal{H}$-consistency, and realizable $\mathcal{H}$-consistency), whose relationships should be formally summarized into some propositions; the existing surrogate loss functions' consistencies should also be summarized into one table with an indicator "yes", "no", or "not proved" for each consistency. 4. 
(Moderate) The numerical experiments don't specify the expert algorithm; the experiments are also very restricted to the case of $c(x,y) = 1_{g(x) \neq y}$. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The major concern of mine is that the training under the proposed loss function requires the full responses of the expert to the entire training dataset, which is undesirable for the L2D problem. Previous surrogate loss functions (listed in Section 3.3) only require $c(x, y)$ if the model chooses to defer. The learning to defer problem is proposed to ease the pain of consulting experts, whereas I think the proposed loss function works against that principle. Is it possible to mitigate the training cost of the model? 2. All the remaining questions are addressed in the Weakness part and do not matter unless the first question is addressed. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We have carefully addressed all the questions raised. Please find our responses below. **Weakness 1: The proposed loss is not practical ... the derivation of equation (2).** **Question 1. The major concern ... mitigate the training cost of the model?** **Response:** Let us begin by addressing a misconception: all previous surrogate loss functions, including those in Section 3.3, require knowing the costs $c(x, y)$ for all training points. This is essential to select an effective predictor that minimizes the need to consult experts during inference. The same requirement applies also to the original deferral loss itself (whose direct minimization is intractable). Furthermore, Eq. (2) is just an equivalent form of the original deferral loss, which helps derive new surrogate loss functions from first principles. The goal of the L2D framework is to suitably decide whether to predict a label or defer to an expert at *inference time*. During inference, it is only necessary to compute the output of the selected expert. Learning a good deferral solution requires training a predictor with an augmented label corresponding to the deferral options using the training data. Thus, the full responses of the experts are required during training to train the predictor effectively. Our work is fully consistent with the L2D framework and does not violate its principle. Now, it's certainly possible to explore alternative solutions to mitigate training costs: - Using Precomputed Costs: Instead of relying on instance-specific costs, one could use a general precomputed cost, such as the average error of an expert. This simplifies training but might lead to a less optimal deferral solution compared to using instance-dependent costs. - Learning to Predict Costs: Another approach is to train a model $f$ for each expert to predict the costs $c(x, y)$. 
While pre-training is also needed here, it might be possible to use a smaller training set. This approach introduces trade-offs worth investigating, depending on the desired level of cost estimation accuracy. - Partial Cost Computation: A third option involves using only a fraction of the costs during training and leveraging techniques like importance weighting to estimate the costs for instances where they weren't computed. **Weakness 2. (Minor) Some of the results are restricted to the case $c(x, y) = 1_{g(x) \neq y}$ ... settings of learning to defer.** **Response:** The case $c(x, y) = 1_{\mathsf{g}(x) \neq y}$ has been adopted in previous fundamental work on L2D [Mozannar and Sontag, 2020; Verma and Nalisnick, 2022], including the recent study on realizable $H$-consistency by Mozannar et al. (2023). The surrogate losses $\mathsf{L}_ {\mathrm{CE}}$, $\mathsf{L}_ {\mathrm{OVA}}$, and $\mathsf{L}_ {\mathrm{RS}}$ in Section 3.3 are all proposed in this context. Our work introduces a new family of surrogate losses parameterized by a function $\Phi$ that can achieve Bayes-consistency, realizable $H$-consistency, and $H$-consistency bounds simultaneously, both in the case of $c(x, y) = 1_{\mathsf{g}(x) \neq y}$ and for general cost functions with appropriate choices of $\Phi$. Our work also highlights that consistency guarantees can differ between the standard case of $c(x, y) = 1_{\mathsf{g}(x) \neq y}$ and cases involving general cost functions for surrogate loss functions in L2D, a distinction not pointed out in previous studies. Additionally, we resolve an open question raised in previous work [Mozannar et al., 2023] by proving the realizable $H$-consistency and Bayes-consistency of a specific surrogate loss in the case $c(x, y) = 1_{\mathsf{g}(x) \neq y}$. **Weakness 3. (Minor) The presentation is a little confusing ... for each consistency.** **Response:** Thank you for the suggestions. 
We will clarify the relationships between the three consistency notions in the propositions and include tables summarizing the properties of existing surrogate losses by using an additional page in the final version. Below is a table of the consistency properties of existing surrogate losses and ours in the case of $c(x, y) = 1_{\mathsf{g}(x) \neq y}$. | Surrogate losses| Realizable $H$-consistency| Bayes-consistency| $H$-consistency bounds| |-|-|-|-| | $\mathsf{L_{\mathrm{CE}}}$|no|yes|yes| | $\mathsf{L_{\mathrm{OvA}}}$|no|yes|yes| | $\mathsf{L_{\mathrm{general}}}$|no|yes|yes| | $\mathsf{L_{\mathrm{RS}}}$ ($\mathsf{L}_{\mathrm{RL2D}}$ with $\Psi(t) = -\log(t)$)|yes|yes (proved by us)|yes (proved by us)| | $\mathsf{L}_{\mathrm{RL2D}}$ with $\Psi(t) = \frac{1}{q} \left(1 - t^q\right), q \in (0, 1)$|yes|yes|yes| | $\mathsf{L}_{\mathrm{RL2D}}$ with $\Psi(t) = 1 - t$|yes|yes|yes| **Weakness 4. (Moderate) The numerical experiments don't specify the expert ... of $c(x, y) = 1_{g(x) \neq y}$.** **Response:** We follow the setting of Mozannar et al. [2023] and adopt the same expert algorithm as in [Mozannar et al., 2023, Table 1, Human column]. We will clarify this in the final version. As mentioned before, most surrogate losses proposed in previous work, except $\mathsf{L}_ {\mathrm{general}}$, are in the case of $c(x, y) = 1_{\mathsf{g}(x) \neq y}$. This naturally leads to a comparison of these surrogate losses in the context of $c(x, y) = 1_{\mathsf{g}(x) \neq y}$. We have included an additional experiment involving general cost functions. In the non-realizable case with general cost functions, the additional experimental result (Figure 2 in the global response) shows that our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ performs comparably to the surrogate loss $\mathsf{L}_ {\mathrm{general}}$, as both are supported by $H$-consistency bounds and Bayes-consistency with general cost functions. 
Our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ outperforms $\mathsf{L}_ {\mathrm{RS}}$ because the latter does not benefit from Bayes-consistency with general cost functions. --- Rebuttal Comment 1.1: Comment: Thank the authors for their detailed reply. My major concern has been addressed, and I will improve my rating correspondingly. --- Reply to Comment 1.1.1: Comment: We are pleased to have addressed the reviewer's concerns and are grateful for their insightful comments and constructive suggestions. Please let us know if there is any other question.
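The inference-time behavior discussed in this thread (a predictor augmented with a deferral option, with the expert queried only on deferred points) can be sketched in a few lines. The function name and score layout below are our illustrative assumptions, not the authors' code:

```python
import numpy as np

def defer_predict(scores, expert_fn, x):
    """Inference rule for a predictor with an augmented deferral option.

    `scores` has K + 1 entries: scores[:K] are label scores, scores[K] is
    the score of the deferral option. The expert is consulted only when
    the deferral option wins, as emphasized in the rebuttal above.
    """
    K = len(scores) - 1
    if np.argmax(scores) == K:           # deferral option has the top score
        return expert_fn(x), True        # consult the expert only here
    return int(np.argmax(scores[:K])), False

# Toy usage: three label scores plus a deferral score; a stand-in expert.
expert = lambda x: 2
label, deferred = defer_predict(np.array([0.1, 0.3, 0.2, 0.9]), expert, x=None)
# Here the deferral score (0.9) wins, so the expert's prediction is used.
```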
Summary: This paper considers the setting of learning to defer: a machine learning system can choose to either classify an instance or defer the decision to an expert, which incurs a variable cost. The objective is to minimize the deferral loss of the system. To solve this problem, prior work has proposed surrogate losses with certain theoretical guarantees with respect to the original deferral loss. This work proposes the first surrogate loss, RL2D, that is realizable H-consistent, satisfies an H-consistency bound, and thus is Bayes-consistent. This resolves an open problem posed in prior work. Empirically, the authors showcase that the proposed surrogate exceeds or matches prior surrogates on three different datasets. Strengths: Originality: The surrogate loss proposed in the work is novel, as is the proof technique for its theoretical properties. The derivation of the surrogate loss is different from prior work; however, it is not clear how the technique can be generalized to other settings. Quality: I have verified a good portion of the theoretical derivations and they seem sound. The experimental setup follows a similar protocol to prior work and is sound. The paper is very strong theoretically. Clarity: very well written, clearly stating contributions of prior work and setting the stage for readers unfamiliar with the setting and the literature. Derivation and theoretical properties are very well stated and easy to follow along. Significance: The paper settles an open problem from prior work at AISTATS and in turn, I believe, concludes (barring any breakthroughs) a line of work on deriving surrogate losses for learning to defer. I think this is important because now the community can focus on other settings and other considerations beyond theoretical consistency properties. 
However, I don't think the paper has a lot to offer in terms of techniques/methods for the community beyond the learning to defer problem, as the derivation relies on some algebra of the deferral loss. Therefore, I don't expect this to be widely read by the community, but it will instead be read in great detail by the community working on learning to defer and related problems. Weaknesses: There are no weaknesses with regard to the theory in this paper beyond the generalizability of the approach taken to related problem settings. However, the experimental setting is quite limited in terms of showcasing the behavior empirically of the newly proposed method. For the use of the surrogate in practice, it is not clear in which scenarios (if any) is the surrogate superior to prior work. Moreover, it is not clear how do the theoretical properties help in practice. Technical Quality: 4 Clarity: 4 Questions for Authors: - In which settings is the new proposed surrogate better than prior work? - How do the theoretical properties manifest in practice? Follow-up after author response: - the authors have done a good job answering the concerns; I believe this paper merits acceptance, but I will maintain my original score as I am unsure about the level of impact of the paper. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 2 Limitations: Yes they have. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of our work. We will take your suggestions into account when preparing the final version. Below please find responses to specific questions. **Weaknesses:** **1. There are no weaknesses with regard to the theory in this paper beyond the generalizability of the approach taken to related problem settings.** **Response:** Thank you for appreciating our theoretical contributions. While the proof techniques and methods may not extend directly to other settings beyond L2D, we believe our work connecting realizable $H$-consistency, $H$-consistency bounds, and Bayes-consistency could provide valuable insights into these consistency notions in other learning settings. In particular, these connections, along with our approach of constructively replacing indicator functions with smooth loss functions in the novel derivation of new surrogate losses from first principles, could help design loss functions that benefit from strong consistency guarantees in various scenarios. **2. However, the experimental setting is quite limited in terms of showcasing the behavior empirically of the newly proposed method.** **Response:** Thank you for the feedback. We have included two additional experiments in the global response: the realizable case and the non-realizable case with general cost functions. In the realizable case, the additional experimental result (Figure 1) shows that our surrogate losses $\mathsf L_{\mathrm{RL2D}}$ with $q = 0.7$ and $q = 1$ are realizable $H$-consistent, while $\mathsf L_{\mathrm{CE}}$, $\mathsf L_{\mathrm{OVA}}$ and $\mathsf L_{\mathrm{general}}$ are not. This validates our theory. 
In the non-realizable case with general cost functions, the additional experimental result (Figure 2) shows that our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ performs comparably to the surrogate loss $\mathsf{L}_ {\mathrm{general}}$, as both are supported by $H$-consistency bounds and Bayes-consistency with general cost functions. Our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ outperforms $\mathsf{L}_ {\mathrm{RS}}$ because the latter does not benefit from Bayes-consistency with general cost functions. **Questions:** **1. In which settings is the new proposed surrogate better than prior work?** **Weakness: For the use of the surrogate in practice, it is not clear in which scenarios (if any) is the surrogate superior to prior work.** **Response:** Our surrogate losses $\mathsf{L}_ {\mathrm{RL2D}}$ satisfying Theorem 4.1 perform better in realizable scenarios than the surrogate losses $\mathsf{L}_ {\mathrm{CE}}$, $\mathsf{L}_ {\mathrm{OVA}}$, and $\mathsf{L}_ {\mathrm{general}}$ from prior work, as ours are realizable $H$-consistent while theirs are not. This is illustrated by our experiment in the realizable case (Figure 1 in the global response). Our surrogate losses $\mathsf{L}_ {\mathrm{RL2D}}$ satisfying Theorem 4.2 and Corollary 4.4 are comparable to the surrogate losses in prior work in non-realizable scenarios when the cost is the expert's classification error, as all of them are Bayes-consistent and supported by $H$-consistency bounds. This is demonstrated by our experiment in the non-realizable case with the cost function being the expert's classification error (Table 2 in the submission). Our surrogate losses $\mathsf{L}_ {\mathrm{RL2D}}$ satisfying Theorem 4.3 and Corollary 4.5 are superior to the surrogate loss $\mathsf{L}_ {\mathrm{RS}}$ in non-realizable scenarios with general cost functions, as ours are supported by $H$-consistency bounds and Bayes-consistency while theirs are not. 
This is evidenced by our experiment in the non-realizable case with general cost functions (Figure 2 in the global response). **2. How do the theoretical properties manifest in practice?** **Weakness: Moreover, it is not clear how do the theoretical properties help in practice.** **Response:** The additional experimental result (Figure 1 in the global response) in the realizable scenario demonstrates that our realizable $H$-consistent surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 0.7$ and $q = 1$ outperforms $\mathsf{L}_ {\mathrm{CE}}$, $\mathsf{L}_ {\mathrm{OVA}}$, and $\mathsf{L}_ {\mathrm{general}}$, which are not realizable $H$-consistent. Realizable $H$-consistency is beneficial in practice since realizable scenarios are common, particularly in the current use of neural networks in applications. Furthermore, the simultaneous $H$-consistency properties of our surrogate losses support their use in non-realizable scenarios, as they share the same Bayes-consistency properties with existing surrogate losses and are expected to perform comparably. The additional experimental result (Figure 2 in the global response) in the non-realizable scenario with general cost functions shows that our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ performs comparably to the surrogate loss $\mathsf{L}_ {\mathrm{general}}$, as both are supported by H-consistency bounds and Bayes-consistency with general cost functions. Our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ outperforms $\mathsf{L}_ {\mathrm{RS}}$ because the latter does not benefit from Bayes-consistency with general cost functions. These experimental results align with and further validate our theory.
Rebuttal 1: Rebuttal: Dear reviewers, We would like to express our appreciation for all your constructive suggestions and insightful comments. We have attached a PDF that includes additional experimental results for both the realizable case and the non-realizable case with general cost functions. Figure 1 shows system accuracy versus training samples on a realizable mixture of Gaussian distributions in [Mozannar et al., 2023]. Our surrogate losses $\mathsf L_{\mathrm{RL2D}}$ with $q = 0.7$ and $q = 1$ are realizable $H$-consistent, while $\mathsf L_{\mathrm{CE}}$, $\mathsf L_{\mathrm{OVA}}$ and $\mathsf L_{\mathrm{general}}$ are not. This validates our theory. Figure 2 shows system accuracy versus coverage on the HateSpeech dataset by varying $\beta$ in the general cost functions $c(x, y) = 1_{\mathsf g(x) \neq y} + \beta$. As $\beta$ increases, deferral algorithms yield solutions with higher coverage and decreased system accuracy. This is because $\beta$ controls the trade-off between the expert's inference cost and accuracy. $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ performs comparably to the surrogate loss $\mathsf{L}_ {\mathrm{general}}$, as both are supported by $H$-consistency bounds and Bayes-consistency with general cost functions. Our surrogate loss $\mathsf{L}_ {\mathrm{RL2D}}$ with $q = 1$ outperforms $\mathsf{L}_ {\mathrm{RS}}$ because the latter does not benefit from Bayes-consistency with general cost functions. Pdf: /pdf/d0d025bcd4a531681cff91043fc14d030f706048.pdf
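The two axes of Figure 2, coverage and system accuracy, are straightforward to compute from deferral decisions. The following is a minimal sketch under our own naming conventions, not the authors' evaluation code:

```python
import numpy as np

def system_metrics(model_preds, expert_preds, defer_mask, labels):
    """Coverage and system accuracy for a learning-to-defer system.

    coverage        = fraction of points the model handles itself
    system accuracy = model accuracy on non-deferred points combined
                      with expert accuracy on deferred points
    """
    model_preds = np.asarray(model_preds)
    expert_preds = np.asarray(expert_preds)
    defer_mask = np.asarray(defer_mask, dtype=bool)
    labels = np.asarray(labels)

    final = np.where(defer_mask, expert_preds, model_preds)
    coverage = 1.0 - defer_mask.mean()
    system_acc = (final == labels).mean()
    return coverage, system_acc

# Toy example: four points, deferring only on the last one.
cov, acc = system_metrics([0, 1, 1, 0], [0, 1, 0, 1], [0, 0, 0, 1], [0, 1, 0, 1])
```

Increasing $\beta$ in the cost $c(x, y) = 1_{\mathsf g(x) \neq y} + \beta$ penalizes deferral more, which pushes trained systems toward higher coverage, matching the trend described for Figure 2.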
NeurIPS_2024_submissions_huggingface
2024
Mitigating Spurious Correlations via Disagreement Probability
Accept (poster)
Summary: To address the issue of spurious correlations when bias labels are unavailable, the work proposes a new method to mitigate the spurious correlations by minimizing the maximum average loss over bias-aligned and bias-conflicting groups. Additionally, they introduce the disagreement probability/sampling probability, which weights samples in the training objective, to achieve group-robust learning. Strengths: 1. The organization of this paper is good, and it is written in a clear and understandable manner. 2. This work is comprehensive as it covers both theoretical and experimental analyses. 3. The experiments are extensive, considering a rich variety of datasets and baselines. Weaknesses: 1. The method proposed in this paper, DPR, is not novel. Although the authors introduce several concepts such as sampling probability and group indicator, the essence of DPR can be seen as first using ERM with GCE to predict pseudo spurious attribute labels, and then applying the Group DRO method. Similar approaches (pseudo group labels from ERM + invariant learning) can be found in several methods like JTT [1] and CnC [2]. The essence of the author's emphasis on disagreement probability seems to be the ERM model's predicted probability that a sample belongs to the minority group (i.e., the bias-conflicting group). Similarly, the reweighting of samples using probabilities predicted by ERM can be seen in methods like EIIL [3] (the main difference is that EIIL focuses more on the regularization term). 2. The use of data augmentation helps alleviate spurious correlations, while other baseline methods such as JTT do not utilize data augmentation, which may lead to unfair comparisons in experiments. 3. The experimental section lacks unity, as evidenced by inconsistent metrics and experimental datasets. It is recommended to focus on worst-group results (e.g., on C-MNIST and MB-MNIST) to illustrate the effectiveness of DPR in mitigating spurious correlations. 
Additionally, I noticed that the average accuracy of CivilComments-WILDS decreases significantly compared to other baselines, which contradicts the theoretical claim in the article of simultaneously reducing the loss for bias-aligned and bias-conflicting groups. Can the authors provide insights into this? 4. I appreciate the authors' demonstration of section Identifying group via disagreement probability, which shows the effectiveness of biased models in predicting bias-conflicting groups. This is an important evidence for measuring the effectiveness of DPR. However, showcasing it only on C-MNIST is not sufficient. Can you provide experimental results on other datasets, especially real-world datasets? If authors could provide reasonable replies, I am willing to further increase the score. [1] Liu, Evan Z., et al. "Just train twice: Improving group robustness without training group information." International Conference on Machine Learning. PMLR, 2021. [2] Zhang, Michael, et al. "Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations." arXiv preprint arXiv:2203.01517 (2022). [3] Creager, Elliot, Jörn-Henrik Jacobsen, and Richard Zemel. "Environment inference for invariant learning." International Conference on Machine Learning. PMLR, 2021. Technical Quality: 2 Clarity: 3 Questions for Authors: see my weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: see my weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
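For context on the reviewer's characterization "ERM with GCE": the generalized cross-entropy loss of Zhang and Sabuncu (2018) is $L_q(p, y) = (1 - p_y^q)/q$, and it is commonly used to train the *biased* model in debiasing pipelines because it down-weights hard (often bias-conflicting) samples relative to standard cross-entropy. A small illustrative sketch with our own naming, not the paper's implementation:

```python
import numpy as np

def gce_loss(probs, y, q=0.7):
    """Generalized cross-entropy, L_q(p, y) = (1 - p_y^q) / q.

    `probs` is an (n, K) array of predicted class probabilities and `y`
    the integer labels. Confident (easy) samples incur low loss, so a
    model trained with GCE latches onto easy, often spurious, cues.
    """
    p_y = np.asarray(probs)[np.arange(len(y)), y]
    return (1.0 - p_y ** q) / q

# As q -> 1 the loss approaches 1 - p_y (MAE-like); as q -> 0 it
# recovers the standard cross-entropy -log(p_y).
losses = gce_loss([[0.9, 0.1], [0.2, 0.8]], [0, 1], q=1.0)
```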
Rebuttal 1: Rebuttal: We are very grateful for your constructive comments. We have provided answers to each comment. Please let us know if you need any clarification or have additional questions. > **Q1**: Originality of DPR. **A1**: The following differences exist between JTT, CNC, and our proposed DPR. JTT identifies misclassified samples and uses heuristic reweighting to upweight them, which is not guaranteed to encourage consistent model performance between bias-aligned and bias-conflicting groups, thus having little impact on mitigating the effect of spurious correlations. CNC is a debiasing method using contrastive learning, which aims to close the gap between worst-group loss and average loss. In contrast, our proposed method DPR aims to mitigate the impact of spurious correlations by reducing the performance gap between bias-aligned and bias-conflicting groups. There is a difference in group definition between EIIL and DPR. Unlike EIIL, which partitions the dataset by maximizing the IRM objective, DPR distinguishes between bias-aligned and bias-conflicting groups based on the presence of spurious correlations, inspired by previous debiasing works [3, 5, 6]. Additionally, there is a difference in implementing a practical algorithm to train a debiased model without bias labels. EIIL optimizes the invariant learning objective, while DPR uses a weighted loss function directly derived from the proposed training objective with an easily achievable assumption to reduce performance differences between groups and mitigate spurious correlations. The table below shows the performance of EIIL and DPR, with experiments following the CNC [1] setup, reporting average accuracies and standard deviations over three trials. 
| | CelebA Average | CelebA Worst | CivilComments-WILDS Average | CivilComments-WILDS Worst | |:----------:|:--------------:|:------------:|:---------------------------:|:-------------------------:| | EIIL | 85.7 (0.1) | 81.7 (0.8) | 90.5 (0.2) | 67.0 (2.4) | | DPR (ours) | 90.7 (0.6) | 88.9 (0.6) | 82.9 (0.7) | 70.9 (1.7) | As shown, DPR outperforms EIIL across various data modalities. > **Q2**: Data augmentation setting on other baselines. **A2**: The experimental setup we used for our experiments is the same as the experimental settings of CNC [1] for CelebA, JTT [2] for CivilComments-WILDS, and PGD [3] for the remaining datasets, which include data augmentation. Therefore, the same data augmentation technique was applied to all other baselines, including JTT, as well as our proposed algorithm. > **Q3**: lack of unity in metrics and contradictory results on CivilComments-WILDS. **A3-1**: Lack of unity in metrics We appreciate your recommendation. As you suggested, we could focus on evaluating the worst group to show how effective DPR is in mitigating spurious correlations. We believe we can demonstrate this using existing experimental and dataset settings that focus on the worst group. To demonstrate the effectiveness of DPR, we selected six datasets (two synthetic and four real datasets) widely used in existing debiasing papers. The metrics used in our experiments are those commonly employed in previous debiasing studies [1-4], chosen according to each dataset's specific setting. For datasets not focused on the worst group, the bias-conflicting ratio in the training dataset is relatively low (such as 0.5%, 5%, or 10%), while the bias-conflicting ratio in the test set used for performance reporting is relatively high (such as 50%, 90%, or 100%). Therefore, we believe this provides a sufficient metric to evaluate how effectively a debiasing method mitigates spurious correlations. **A3-2**: Contradictory results on CivilComments-WILDS. 
The practical algorithm DPR, derived from the training objective with an assumption, initializes the debiased model with a biased model and then trains it by oversampling bias-conflicting samples to obtain the debiased model. If, during the training process of oversampling bias-conflicting samples, the model forgets previously learned information about bias-aligned samples, this could result in an accuracy drop and consequently lower average accuracy. Unlike CelebA, where the average accuracy of DPR is higher than other baselines except ERM, a relatively lower average accuracy is reported in CivilComments-WILDS. This could be due to certain characteristics of the BERT model and language data. Analyzing and addressing these causes will be part of future work. > **Q4**: Group identification experimental results on real-world datasets. **A4**: We conducted an experiment on group identification via disagreement probability for the real-world dataset BFFHQ. The figures showing the experimental results have been uploaded as a PDF file in the global response. Please refer to it. As shown in the figures, for BFFHQ, bias-aligned samples generally exhibit smaller disagreement probabilities compared to bias-conflicting samples, effectively distinguishing between bias-aligned and bias-conflicting samples. Additionally, the relatively large disagreement probability of bias-conflicting samples allows DPR to effectively identify and upsample these bias-conflicting samples. 
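Our reading of the sampling scheme described in A4 can be sketched as follows: the disagreement probability of a sample is taken to be $1 - p_b(y \mid x)$ under the biased model, so bias-aligned samples (well fit by the biased model) receive small values and bias-conflicting samples large ones, and normalizing these scores into a sampling distribution upsamples the bias-conflicting group. Function names and the normalization are our assumptions, not the paper's code:

```python
import numpy as np

def disagreement_sampling_probs(biased_probs, labels, eps=1e-8):
    """Turn a biased model's disagreement probabilities into sampling weights.

    `biased_probs` is an (n, K) array of the biased model's predicted
    class probabilities; `labels` are the integer targets. The small
    `eps` keeps every sample reachable.
    """
    p_y = np.asarray(biased_probs)[np.arange(len(labels)), labels]
    disagreement = 1.0 - p_y                  # prob. biased model is wrong
    return (disagreement + eps) / (disagreement + eps).sum()

# Toy example: two confident (bias-aligned) samples and one hard
# (bias-conflicting) sample; the third receives most of the sampling mass.
w = disagreement_sampling_probs([[0.95, 0.05], [0.9, 0.1], [0.6, 0.4]], [0, 0, 1])
```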
[1] Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations, ICML 2022 [2] Just train twice: Improving group robustness without training group information, ICML 2021 [3] Mitigating dataset bias by using per-sample gradient, ICLR 2023 [4] Learning debiased representation via disentangled feature augmentation, NeurIPS 2021 [5] Learning from failure: De-biasing classifier from biased classifier, NeurIPS 2020 [6] Learning debiased classifier with biased committee, NeurIPS 2022 --- Rebuttal 2: Comment: Dear Author, Thank you for your response and the additional experimental results. Regarding Q1, although EIIL and DPR use different terminologies, in the presence of spurious features, EIIL encourages the model to violate the principles of invariant learning by maximizing the EI objective in the first stage, thereby learning the spurious attribute labels of the samples (i.e., biased model). The model probabilities (in a binary classification scenario) are then used as sample weights for invariant learning in the next stage (i.e., debiased model). Despite differences in algorithm details, EIIL is close to DPR. For Q2, could you please confirm where JTT and CnC indicate that they use data augmentation techniques? For instance, in the case of JTT, the original paper explicitly states in Section A Training Details that the experimental settings for Waterbirds and CelebA are "no data augmentation" (please correct me if I am wrong). Best, Reviewer GDUK --- Rebuttal Comment 2.1: Comment: We appreciate your comments and provide answers to each question. For Q1, the differences between EIIL and the proposed DPR are as follows: First, in terms of the algorithm, there are distinct differences between EIIL and DPR. The most significant difference is that in DPR, the weighted loss function used to train the debiased model is mathematically derived directly from the training objective under an easily achievable assumption. 
We also utilized the characteristics of the biased model (i.e., relatively low loss for the bias-aligned group) to satisfy this assumption, ensuring that the loss function used to train the debiased model aligns with the training objective. This process of deriving the loss function, and the resulting loss function itself, are novel and have not been proposed in previous debiasing papers, including EIIL. Therefore, this newly derived loss function leads to a new algorithm, DPR, which is distinct from EIIL. Although it may use similar components (i.e., a biased model and reweighting), DPR combines these components in a new way, since the combination follows directly from the newly derived loss function. Secondly, the group definitions used by EIIL's objective function and our proposed objective function are different. Upon examining EIIL's objective function and algorithm, EIIL appears to place no restrictions on the number of groups (i.e., environments), and this is not explicitly specified. In contrast, we explicitly defined two groups - bias-aligned and bias-conflicting - and based our objective function on these two groups. For example, regardless of whether a dataset consists of 6 classes or 10 classes, or irrespective of the dataset's composition and characteristics, we propose dividing it into two groups in every situation and building our objective function upon these two groups. We would like to point out that this differs from EIIL's approach, as EIIL did not start by dividing the data into two groups to set up its objective function, but rather proposed an objective function and algorithm for an arbitrary number of groups. The differences mentioned above have indeed led to performance disparities. As you can see in the comparison experiment table above for CelebA and CivilComments-WILDS, there is a significant performance gap, with our DPR outperforming EIIL. We believe this highlights the differences from EIIL and demonstrates the superiority of our method.
For Q2, we followed their experimental settings, including data augmentation policies (e.g., whether to use it, types, etc.). Therefore, since JTT and CNC did not use data augmentation for CivilComments-WILDS and CelebA, we also refrained from using it.
Summary: The authors mainly target fairness without accessing bias labels. They suggest a new learning objective that minimizes the loss of the bias group showing the highest ERM and demonstrate that minimizing this objective decreases the upper bound of the expected average loss. To utilize this loss when the bias labels are not accessible, they derive the oversampling method from the new learning objective. In experiments, they show that their method outperforms baselines on various datasets. Strengths: * They suggest a new learning objective and provide theoretical support for the proposed objective. * The authors derive the sampling weights from the perspective of the new learning objective. * They demonstrate that their method outperforms baselines on various benchmark datasets. Weaknesses: * Since the sampling weights are derived from the proposed learning objective, it's important to demonstrate the effectiveness of the proposed learning objective itself. Could you confirm if it is superior to GroupDRO in scenarios where bias labels are provided on CelebA? * The effectiveness of the proposed method compared to baselines seems weak. Except for the synthetic datasets and BFFHQ, their performance gain is marginal. * There is no clear explanation of how to choose hyperparameters or select the model from a specific epoch during training. In the absence of an unbiased validation set, such as BFFHQ, how were these selections made? * In situations where the ratio of bias-conflicting samples is low, using oversampling methods might make training unstable. Could you show how the test accuracy of bias-conflicting samples changes every 10 epochs during the whole training process on BFFHQ (160 epochs as mentioned in the Appendix)? Technical Quality: 3 Clarity: 3 Questions for Authors: * According to Section 4.3, the authors introduce additional augmentation for the diversity of minor class samples. 
Did the other algorithms compared in the experiment also use these augmentation techniques? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors properly address the limitations in Section 7. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive comments. We have provided answers to each comment. Please let us know if you need any clarification or have additional questions. > **Q1**: The effectiveness of the proposed learning objective itself. **A1**: As you mentioned, to demonstrate the effectiveness of the proposed learning objective, we conducted comparative experiments with GroupDRO in a scenario where bias labels are provided for CelebA. We followed the setting of CNC [1] and reported the average accuracies (%) and standard deviations over three trials. The results are as follows. | | CelebA Average | CelebA Worst | |:--------:|:--------------:|:------------:| | GroupDRO | 93.9 (0.1) | 88.9 (1.3) | | Ours | 92.3 (0.8) | 90.9 (0.6) | As shown in the table above, when bias labels are provided, the proposed learning objective (Ours) demonstrates excellent performance on CelebA. This clearly demonstrates the effectiveness of the objective. > **Q2**: The effectiveness of the proposed method compared to baselines. **A2**: The proposed DPR consistently outperforms or matches other baselines across 11 metrics on all six benchmarks. Considering that evaluations were conducted on different settings with various bias-conflicting ratios, types of bias, and data modalities, DPR's consistently highest performance across all benchmarks clearly demonstrates that it is more effective than any of the other baselines. > **Q3**: Hyperparameter and model selection criteria. **A3**: We followed the experimental settings of CNC [1] for CelebA, JTT [2] for CivilComments-WILDS, and PGD [3] for the remaining datasets, which include their hyperparameters. In other words, all hyperparameters are the same as those of the aforementioned works [1, 2, 3], except for the temperature hyperparameter newly introduced by our proposed method, DPR. 
Following existing debiasing papers [1, 2, 4, 5, 6], we selected the temperature and performed early stopping based on the best worst-group validation accuracy for CelebA and CivilComments, and the best validation accuracy for the remaining datasets. > **Q4**: The test accuracy on BFFHQ bias-conflicting samples every 10 epochs. **A4**: As you requested, we conducted an experiment to examine the changes in the test accuracy on bias-conflicting samples of BFFHQ. We reported the average accuracies (%) and standard deviations over three trials. The results are as follows. | Epoch | 1 | 10 | 20 | 30 | 40 | 50 | 60 | 70 | 80 | 90 | 100 | 110 | 120 | 130 | 140 | 150 | 160 | |:--------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:|:------------:| | BFFHQ Conflict | 76.67 (0.31) | 72.40 (3.80) | 74.87 (2.53) | 76.40 (0.92) | 76.60 (4.01) | 75.67 (3.07) | 75.00 (0.80) | 75.87 (1.68) | 75.60 (1.51) | 75.87 (1.36) | 76.07 (1.42) | 75.80 (1.51) | 75.87 (1.42) | 75.87 (1.86) | 75.80 (1.78) | 76.07 (1.62) | 76.07 (1.50) | As can be seen in the table above, although the test accuracy shows somewhat unstable patterns during the training process, it achieves a test accuracy close to 76% for many epochs. This figure consistently outperforms other baselines. > **Q5**: Whether the same data augmentation was applied to other baselines. **A5**: We conducted our experiments using the same experimental setup as Ahn et al. [3], which includes the data augmentation explained in Section 4.3 of the paper. In other words, the same data augmentation technique was applied not only to our proposed DPR but also to all other baselines. 
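The worst-group validation accuracy used for model selection in A3 above can be computed with a small helper. This is our sketch, assuming group labels are available on the validation set:

```python
import torch

def worst_group_accuracy(preds, targets, groups):
    """Minimum per-group accuracy; the usual model-selection criterion
    when robustness to the worst group is the goal."""
    accs = []
    for g in torch.unique(groups):
        mask = groups == g
        accs.append((preds[mask] == targets[mask]).float().mean().item())
    return min(accs)
```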
[1] Correct-n-contrast: A contrastive approach for improving robustness to spurious correlations, ICML 2022 [2] Just train twice: Improving group robustness without training group information, ICML 2021 [3] Mitigating dataset bias by using per-sample gradient, ICLR 2023 [4] Distributionally robust neural networks for group shifts: On the importance of regularization for worst-case generalization, ICLR 2020 [5] Learning from failure: De-biasing classifier from biased classifier, NeurIPS 2020 [6] Learning debiased classifier with biased committee, NeurIPS 2022 --- Rebuttal Comment 1.1: Comment: Thank you for providing a detailed response. Most of my concerns are resolved except for the hyperparameter selection. In A3, I hope to confirm whether the authors used the best validation accuracy on the BFFHQ, where the validation set is highly biased, for hyperparameter selection. Given that the validation set is biased, relying on its accuracy for hyperparameter tuning might not yield reliable results. --- Reply to Comment 1.1.1: Comment: Thank you for your comments, and we would like to provide a response to your question. Yes, as you mentioned, we tuned the hyperparameters and reported the performance using a highly biased validation set. The reasoning behind this approach is as follows: 1. about the use of bias-aligned validation set **First, the results of other baselines reported for BFFHQ were taken from Ahn et al. [1]. To ensure a fair comparison, we conducted our experiments on BFFHQ following the experimental settings of Ahn et al. [1]. These settings include the dataset split.** Therefore, we used the same dataset split for our experiments and reported the performance of the proposed DPR using a bias-aligned validation set (composed only of bias-aligned samples). 2. about the reliability of the performance of DPR Even though we selected hyperparameters and reported performance using a bias-aligned validation set, DPR outperforms other baselines. 
We believe that the reason for these results is that **DPR is a debiasing method derived from a newly suggested training objective that aims to improve performance for both bias-aligned and bias-conflicting groups while reducing the performance gap between them.** This explains why DPR outperforms other baselines, even when performance is reported using a bias-aligned validation set. While the performance of DPR could improve further if evaluated using an unbiased validation set or a bias-conflicting validation set with the same distribution as the test set, we used the same dataset split as Ahn et al. [1] for a fair comparison. [1] Mitigating dataset bias by using per-sample gradient, ICLR 2023
Summary: This paper addresses the critical issue of spurious correlations in machine learning models, where models often rely on easy-to-learn bias attributes rather than the intended target features, leading to poor generalization on minority groups where these spurious correlations are absent. The authors propose a novel method called Disagreement Probability based Resampling for debiasing (DPR), which aims to mitigate the effects of spurious correlations without requiring explicit bias labels during training. The key contributions of this paper are: 1. A new training objective designed to achieve consistent model performance across both bias-aligned and bias-conflicting groups, encouraging robustness against spurious correlations. 2. Development of DPR, a practical resampling method derived from the proposed objective. DPR leverages the disagreement probability between the target label and the prediction of a biased model to identify and upsample bias-conflicting samples. 3. Theoretical analysis demonstrating that DPR minimizes losses for both bias-aligned and bias-conflicting groups while reducing the disparity between their losses. 4. Extensive empirical evaluation on six benchmark datasets (including both synthetic and real-world data) showing that DPR achieves state-of-the-art performance compared to existing methods that do not use bias labels. 5. Ablation studies and analyses that provide insights into the effectiveness of various components of the proposed method, such as model initialization, generalized cross-entropy loss, and data augmentation. Strengths: ### Originality: 1. Novel objective formulation: The authors introduce a new training objective designed to achieve consistent performance across bias-aligned and bias-conflicting groups. This approach differs from previous works that typically define groups based on combinations of target labels and bias labels. 2. 
Creative use of disagreement probability: The method leverages the disagreement between target labels and biased model predictions as a proxy for identifying bias-conflicting samples. This is an innovative way to address the challenge of not having explicit bias labels. 3. Unique combination of existing ideas: DPR creatively combines ideas from biased model training, resampling techniques, and theoretical analysis to create a cohesive and effective debiasing method. ### Quality: 1. Comprehensive empirical evaluation: The authors test their method on six diverse benchmarks, including both synthetic and real-world datasets, providing a thorough assessment of DPR's performance. 2. Theoretical foundations: The paper includes a rigorous theoretical analysis that supports the empirical results, demonstrating how DPR reduces loss disparity between groups and minimizes average loss. 3. Ablation studies: The authors conduct detailed ablation studies to understand the contribution of each component of their method, showing a commitment to thorough scientific investigation. 4. Comparison with state-of-the-art: DPR is compared against multiple recent baselines, consistently showing superior or comparable performance. ### Clarity: 1. Clear problem formulation: The authors provide a clear definition of the problem and their approach to solving it. 2. Step-by-step derivation: The development of DPR from the initial objective to the final algorithm is presented in a logical, easy-to-follow manner. 3. Visual aids: The paper includes helpful figures and tables that illustrate key concepts and results. 4. Detailed experimental setup: The authors provide comprehensive information about their experimental setup, facilitating reproducibility. ### Significance: 1. 
Addressing a crucial challenge: Spurious correlations are a major issue in machine learning, and this work provides a novel approach to mitigating them without requiring bias labels, which is often a practical constraint in real-world scenarios. 2. Broad applicability: The method is shown to be effective across various types of data (image and text) and problem setups, suggesting its potential for wide adoption in different domains. 3. Theoretical and practical relevance: By providing both theoretical guarantees and strong empirical results, the paper bridges the gap between theory and practice in addressing spurious correlations. 4. Potential impact on fair ML: The proposed method could contribute to the development of fairer and more robust machine learning models, which is a critical goal in the field. Weaknesses: Nothing Technical Quality: 4 Clarity: 4 Questions for Authors: Nothing Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: Authors have discussed the limitations sufficiently. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for pointing out the strength and originality of DPR, which uses a loss function mathematically derived directly from the stated objective. We also appreciate your high regard for the various experiments, theoretical analyses, and ablation studies that support the effectiveness of DPR. We would like to express our deep gratitude once again for your positive review. --- Rebuttal 2: Title: Thanks for your response Comment: I want to state that I have also checked other reviewers' concerns and questions and I believe they are well-answered. I will keep my score. Thanks for answering questions and clarifying strengths and limitations. --- Rebuttal 3: Comment: Thank you very much for your comments and for taking the time to review our responses. Additionally, we really appreciate your review and your decision to maintain your score.
Summary: This paper proposes a re-sampling approach based on disagreement between bias predictions and target label predictions. First, a biased model is trained using generalized cross-entropy. Then, sample-wise weights are determined by calculating the probability of disagreement between the bias predictions of the biased model and the target label predictions of the target model. Strengths: 1. This paper tackles the significant and pressing issue of robust learning, which is crucial for effectively using trained models in real-world applications. 2. The proposed sampling method is both simple and straightforward, making it easy to understand and implement. Weaknesses: 1. The method extends the approach of upweighting misclassified samples in the existing Just Train Twice (JTT) by utilizing generalized cross-entropy (GCE) and the disagreement between bias predictions and target label predictions. However, this extension does not appear novel. Furthermore, do there not exist approaches that use bias predictions? If so, discussing these studies would provide context and underscore the novelty of the proposed method. 2. Color jitter is employed as a data augmentation technique that can directly solve the color bias in CMNIST. However, the ablation study does not show significant performance improvement. What is the reason for this outcome? Additionally, for a fair evaluation, the same data augmentation should be applied to the baselines. 3. CMNIST is a relatively easy dataset, and BAR is not widely used recently. Therefore, additional experiments on other widely used datasets, such as CIFAR-10C, Waterbird, or NICO, are needed. Demonstrating similar performance improvements on these datasets would better support the experimental effectiveness of the proposed method. 4. The related works section does not discuss some recent studies, such as [1, 2]. 
[1] Lee et al., "Surgical fine-tuning improves adaptation to distribution shifts.", ICLR'23 [2] Jung et al., "Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation", ICML'23 Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the Weaknesses Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Please see the Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your insightful comments. We've carefully considered them and provided responses. Please let us know if you need any clarification or have additional questions. > **Q1**: The differences between the proposed DPR and other existing approaches using bias predictions. **A1**: As outlined in Section 2, several debiasing methods utilize biased models to identify bias-conflicting samples, including JTT [1], DFA [2], PGD [3], and LC [4]. Each method employs a unique approach: 1. JTT upweights bias-conflicting samples by specifying their weights as a hyperparameter. 2. DFA generates diverse bias-conflicting samples by mixing features between samples, using these to mitigate spurious correlations. 3. PGD, a resampling-based method, determines upweighting based on the sample gradient of a biased model. 4. LC mitigates spurious correlations by correcting the sample logit. Our proposed DPR method differs from these existing approaches in several key aspects: 1. DPR is derived from a newly suggested learning objective (equation 3) that aims to improve performance for both bias-aligned and bias-conflicting groups while reducing the performance gap between them. 2. In contrast, JTT's heuristic reweighting strategy may not consistently achieve this objective, potentially limiting its effectiveness in mitigating spurious correlations. 3. PGD interprets debiasing as a min-max problem of minimizing loss for maximally difficult samples, which can be relaxed to minimizing the trace of inverse Fisher Information. 4. LC focuses on debiasing by maximizing group-balanced accuracy. These differing objectives distinguish our approach from existing methods. In section 6.3, we provide empirical evidence demonstrating that our proposed method outperforms other baselines including JTT, DFA, PGD, and LC across all six benchmarks. > **Q2**: Data augmentation setting and the impact of color jitter on mitigating color bias. 
**A2**: We would like to clarify that our experiments followed the same setup as Ahn et al. [3], including the data augmentation settings. This ensures a fair comparison, as the same data augmentation techniques were applied to all baseline models as well as our proposed method. Regarding the C-MNIST dataset results presented in Table 3 of our paper: 1. Our model achieves high performance (95.94% and 97.66%) on C-MNIST with just initialization and the use of GCE loss, even without augmentation. This is due to the effectiveness of the biased model trained with GCE loss in identifying and oversampling bias-conflicting samples within the C-MNIST dataset, thus reducing the impact of color bias. 2. As you correctly pointed out, incorporating data augmentation further addresses the color bias in C-MNIST. This additional step effectively mitigates the remaining bias, leading to significant performance improvements. > **Q3**: Additional experiments on other widely used datasets. **A3**: We conducted additional experiments on CIFAR-10C following the experimental settings of Ahn et al. [3] and reported the average accuracies (%) and standard deviations over three trials. The results are as follows. | | CIFAR-10C (0.5%) | CIFAR-10C (1%) | CIFAR-10C (5%) | |:----------:|:----------------:|:--------------:|:--------------:| | ERM | 23.06 (1.25) | 25.94 (0.54) | 39.31 (0.66) | | JTT | 25.34 (1.00) | 33.62 (1.05) | 45.13 (3.11) | | DFA | 29.96 (0.71) | 36.35 (1.69) | 51.19 (1.38) | | PGD | 30.15 (1.22) | 42.02 (0.73) | 52.43 (0.14) | | DPR (Ours) | 32.20 (0.81) | 43.77 (0.93) | 53.10 (0.62) | The ratios shown in parentheses next to the dataset names in the table represent the bias-conflicting ratio in the training set. As shown in the table above, the proposed DPR consistently outperforms other baselines on CIFAR-10C with various bias-conflicting ratios. > **Q4**: Missing recent studies such as [5, 6] in the related works section. **A4**: Thank you for letting us know. 
We will make sure to incorporate these recent studies in the revised version. [1] Just train twice: Improving group robustness without training group information, ICML 2021 [2] Learning debiased representation via disentangled feature augmentation, NeurIPS 2021 [3] Mitigating dataset bias by using per-sample gradient, ICLR 2023 [4] Avoiding spurious correlations via logit correction, ICLR 2023 [5] Surgical fine-tuning improves adaptation to distribution shifts, ICLR 2023 [6] Fighting Fire with Fire: Contrastive Debiasing without Bias-free Data via Generative Bias-transformation, ICML 2023 --- Rebuttal Comment 1.1: Comment: Thank you for your detailed responses. Many of my concerns have been clarified. However, the concern regarding color augmentation still remains. As you mentioned, applying the same color augmentation to the baselines may ensure a fair comparison within that specific setting. However, this does not necessarily equate to an accurate evaluation of each method's debiasing effects. The vast majority of debiasing studies have generally omitted color augmentation because its inclusion can obscure an accurate assessment of the methods' effectiveness. It is well-known that color bias in colored MNIST and corruption bias in CIFAR-10C can be partially mitigated through color augmentation. Therefore, to ensure the reliability of future research on spurious correlations, I believe it is important to avoid using color augmentation in experimental settings. Given these concerns, I strongly recommend including main results without color augmentation (a comparison between the proposed method and state-of-the-art methods would suffice, considering the limited time available) as well as ablation studies specifically addressing the impact of color augmentation. If this concern is addressed, I will raise my score. However, if not, I cannot overlook the contribution of color augmentation to the main results, and therefore, I cannot agree to accept this paper. 
--- Reply to Comment 1.1.1: Comment: We appreciate your comments. As you suggested, we conducted experiments on C-MNIST, CIFAR-10C, and additionally on the real-world dataset BFFHQ, after removing color jitter. We chose PGD [1] as the comparison baseline, as it was recently proposed and showed the highest performance among the baselines. We report the average accuracies (%) and standard deviations over three trials. The results are shown in the table below. | | C-MNIST (0.5%) | C-MNIST (1%) | C-MNIST (5%) | CIFAR-10C (0.5%) | CIFAR-10C (1%) | CIFAR-10C (5%) | BFFHQ (Unbiased) | BFFHQ (Conflict) | |:----------:|:--------------:|:------------:|:------------:|:----------------:|:--------------:|:--------------:|:----------------:|:----------------:| | ERM | 60.19 (0.96) | 79.01 (0.56) | 95.23 (0.07) | 22.90 (0.76) | 25.94 (0.69) | 39.14 (0.44) | 77.80 (0.61) | 56.00 (0.35) | | PGD | 96.83 (0.13) | 98.14 (0.11) | 98.40 (0.07) | 30.01 (1.25) | 41.55 (0.51) | 52.17 (0.37) | 84.17 (1.38) | 70.20 (1.91) | | DPR (Ours) | 97.45 (0.14) | 98.28 (0.13) | 98.44 (0.21) | 31.91 (0.55) | 43.31 (1.01) | 52.92 (0.20) | 87.13 (0.87) | 76.53 (2.20) | As you can see in the table above, even without color jitter, the proposed DPR outperforms ERM and PGD on all benchmarks (i.e., C-MNIST, CIFAR-10C, and BFFHQ). Additionally, we would like to mention that we followed the experimental settings of CNC and JTT respectively for CelebA and CivilComments-WILDS, which did not use any data augmentation. Therefore, color jitter was not used on them. Considering these points, we believe that DPR clearly demonstrates its effectiveness for mitigating spurious correlations. In addition, we would like to address the impact of color jitter in the experimental settings. 
As you can see from the results in the table above, the experimental results for CIFAR-10C shown in the previous answer, and Tables 1 and 2 in the paper, the presence or absence of color jitter does make a difference, but the difference is somewhat smaller than expected. We believe this may be due to the low intensity of the color jitter used for data augmentation. For datasets other than CelebA and CivilComments-WILDS, we followed the experimental settings of Ahn et al. [1], which include various data augmentations including color jitter. The color jitter used in the experiments is `torchvision.transforms.ColorJitter(hue=0.05, saturation=0.05)`, which you can find in the code submitted with the paper as supplementary material. As shown above, only hue and saturation were set, both at 0.05, and this was used for all experiments on C-MNIST, MB-MNIST, BAR, and BFFHQ. We believe that because a fairly low value of 0.05 was used for color jitter, it likely did not significantly alter the color characteristics of the data images. Furthermore, we think that even without color jitter, other data augmentations such as random rotation and random resized crop were sufficient to increase the diversity of bias-conflicting samples, thus achieving high debiasing performance. [1] Mitigating dataset bias by using per-sample gradient, ICLR 2023
Rebuttal 1: Rebuttal: We are very grateful to all the reviewers for their valuable comments. The additional figures showing the experimental results of group identification for BFFHQ are in the PDF file. Pdf: /pdf/230092823703aa0d62429c7da1bc43865c55becc.pdf
NeurIPS_2024_submissions_huggingface
2024
Geometric Trajectory Diffusion Models
Accept (poster)
Summary: A new diffusion-based generative model for modeling complex 3D geometric structures with time-evolving trajectories is proposed. By introducing the SE(3) equivariance property, temporal attention, and learnable geometric prior into the discretized diffusion model, the proposed model can achieve high performance in learning the distribution of geometric trajectories while preserving the symmetry of geometric systems. It can also be improved to attain conditional generation by using the equivariant cross-attention mechanism. Various types of experiments show high accuracy and performance of the proposed model in cases such as molecular dynamics and simulation. Strengths: 1. The introduction and methods are well-written and clear. 2. The idea and methodology of introducing and processing the time sequence information are reasonable and well-defined with good novelty. And the appendix provides sufficient proof and detailed analysis. 3. The paper explained and analyzed the case of conditional generation in detail. 4. The experimental setting and results are demonstrated in very extensive and detailed ways. 5. Clear ablation studies. Weaknesses: No major weakness. Please refer to the part of Questions for my concerns. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. As shown in the experiment part, the speed and efficiency are still the main limitations of the proposed model. Since the transition kernel and the prior are modified with extra restrictions, will it be difficult to redesign the framework (both forward and backward) into the continuous diffusion ODE/SDE form? (which might be important for further acceleration) 2. Except for the part of processing temporal information, could you compare more details about the proposed EGTN/EGCL with the E(3) Equivariant Diffusion Model in the paper [1]? 3. Why the geometric restrictions are also important in N-body simulation? What is the 'geometric structure' in these scenarios? 
It would be better if the author could provide more background knowledge about geometric trajectory and the physics simulation/molecular dynamics in the appendix. 4. Based on the visualization results in the appendix, the change in the molecular structure seems to be 'small'. Will such short trajectories be practically useful/meaningful in application? [1]: Hoogeboom, Emiel, et al. "Equivariant diffusion for molecule generation in 3d." International conference on machine learning. PMLR, 2022. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed and discussed the limitations about computation. It would be better if the paper could also mention the current gap between the proposed model and the industrial/academic demand in medical/physical fields. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review and suggestions! We provide point-to-point response below. > **[Q1] Since the transition kernel and the prior are modified with extra restrictions, will it be difficult to redesign the framework (both forward and backward) into the continuous diffusion ODE/SDE form? (which might be important for further acceleration)** Thank you for the insightful comment! Interestingly, our framework can be naturally extended into continuous-time formulation with a simple reparameterization for the learnable prior. In detail, denote the learnable prior as $\mathbf{x}\_r^{[T]}$, then with simple change-of-variables $\mathbf{x}\_\tau'^{[T]}=\mathbf{x}\_\tau^{[T]}-\mathbf{x}\_r^{[T]}$ where $\mathbf{x}\_\tau^{[T]}$ are the original latent variables, the resulting trajectory $\mathbf{x}\_\tau'^{[T]}$ effectively takes the same form as DDPM (see proof of Proposition A.2 in appendix). This finding enables us to readily transform the diffusion trajectory of $\mathbf{x}\_\tau'^{[T]}$ into an SDE/ODE (see Song et al. [1]), while we only need to add $\mathbf{x}\_r^{[T]}$ back after sampling from the transformed SDE/ODE. The convenient transformation from GeoTDM to its ODE formulation enables us to leverage fast samplers such as DDIM [2]. We provide extra experiments on Nbody dataset by performing DDIM sampling using the method described above. The results with different sampling steps are depicted below. ||ADE|FDE| |-|-|-| |DDPM-1000|0.110|0.258| |DDIM-100|0.118|0.271| |DDIM-50|0.127|0.289| Interestingly, the performance degradation is marginal even when using only 50 sampling steps with DDIM sampler. GeoTDM with 50 sampling steps still outperforms the best baseline EqMotion which has an ADE/FDE of 0.141/0.310. This underscores the potential of streamlining GeoTDM for faster inference. [1] Song et al. Score-Based Generative Modeling through Stochastic Differential Equations. In ICLR'21. [2] Song et al. 
Denoising Diffusion Implicit Models. In ICLR'21. > **[Q2] Except for the part of processing temporal information, could you compare more details about the proposed EGTN/EGCL with the E(3) Equivariant Diffusion Model in the paper [1]?** Aside from processing temporal information, our model has several advantages compared with EDM [1]: EDM is designed for molecule generation from scratch and does not directly support conditioning. GeoTDM can perform both unconditional and conditional generation with the help of the proposed equivariant cross-attention that enables conditioning on a given trajectory. GeoTDM also employs a flexible learnable prior parameterized by a lightweight neural network. The prior is carefully designed such that it subsumes existing parameterizations of the prior, including the center-of-mass (CoM) based prior used in EDM (see Theorem A.4 in Appendix A.3). We have shown in Sec. 5.3 that our prior yields superior performance over the CoM-based prior used in EDM. We have also systematically compared GeoTDM with other related works in Table 13 of Appendix C.6. [1] Hoogeboom, Emiel, et al. "Equivariant diffusion for molecule generation in 3d." International conference on machine learning. PMLR, 2022. > **[Q3.1] Why are the geometric restrictions also important in N-body simulation?** In N-body simulation, the particles are governed by physical laws such as Coulomb's law, Hooke's law, and Newton's law of gravitation. These physical laws ubiquitously abide by physical symmetry, which is mathematically captured by equivariance. That is, when the system is rotated/translated, the trajectories driven by these physical laws will rotate/translate in the same way, hence motivating the geometric restriction that GeoTDM be SE(3)-equivariant.
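This symmetry can be made concrete with a toy numerical check (our illustration, not the paper's model): a simple pairwise inverse-square interaction step is SE(3)-equivariant, so applying a rotation and translation before the update gives the same result as updating first and then transforming.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_update(x, dt=0.01):
    """One toy interaction step: particles attract each other with an
    inverse-square pairwise force (a stand-in for e.g. gravitation)."""
    diff = x[None, :, :] - x[:, None, :]                    # x_j - x_i
    dist = np.linalg.norm(diff, axis=-1) + np.eye(len(x))   # 1 on the diagonal
    force = (diff / dist[..., None] ** 3).sum(axis=1)       # self-term is zero
    return x + dt * force

# Random rigid motion g = (R, t) in SE(3).
R, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(R) < 0:
    R[:, 0] *= -1            # ensure a proper rotation (det = +1)
t = rng.normal(size=3)

x = rng.normal(size=(5, 3))  # one frame of a 5-particle system

# Equivariance: updating the transformed system equals transforming the update.
assert np.allclose(pairwise_update(x @ R.T + t), pairwise_update(x) @ R.T + t)
```

The translation cancels in the pairwise differences and the rotation commutes with the force, which is exactly the structure the SE(3)-equivariant transition kernel is designed to preserve.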
> **[Q3.2] What is the 'geometric structure' in these scenarios?** * In physical simulation, the geometric structure is instantiated as a fully-connected geometric graph, since each pair of particles exhibits some interaction (e.g., the Coulomb force in the Charged Particles dataset). * For molecular dynamics, the geometric structure is defined as the molecular graph, with nodes indicating the atoms and edges corresponding to the chemical bonds. * For pedestrian trajectories, the geometric structure is a radius graph with each pedestrian as a node and an edge connecting two pedestrians if they are within a certain distance. > **[Q3.3] It would be better if the author could provide more background knowledge about geometric trajectory and the physics simulation/molecular dynamics in the appendix.** Thank you for the advice! We will add a section in the appendix introducing background on geometric trajectories, physics simulation, and MD. > **[Q4] Based on the visualization results in the appendix, the change in the molecular structure seems to be 'small'. Will such short trajectories be practically useful/meaningful in application?** In computational chemistry, MD sampling is usually performed over metastable molecular structures instead of highly unstable states [1], which accounts for why the changes in the visualization seem to be relatively small. However, in our MD17 experiments each trajectory has 30 frames, down-sampled from the raw data by a factor of 10, corresponding to a length of 300 in the raw data, which is sufficiently long to capture important chemical priors such as the rotation of the methyl group in aspirin (see Fig. 5). Moreover, in Appendix C.2 we have provided a sampling algorithm to sample much longer trajectories using our GeoTDM trained on shorter trajectories. A demonstration of a long trajectory (120 frames) obtained from our model (trained on 30 frames) can be found in `aspirin_longtraj.gif` in the supplementary file.
It demonstrates our GeoTDM can produce long and stable MD trajectories, which highlights its practical significance. [1] Durrant et al. Molecular dynamics simulations and drug discovery. BMC biology. 2011. **We sincerely hope our response could address your concerns!** --- Rebuttal Comment 1.1: Comment: Thank you very much for your detailed response! Most of my concerns have been addressed. Also combined with other reviewers' comments and rebuttals, I decided to raise the score. --- Reply to Comment 1.1.1: Title: Thank you for the feedback! Comment: Dear Reviewer RWWk, Thank you very much for your feedback and recognition of our efforts! We greatly appreciate your valuable suggestions and will incorporate the discussions into the final version. Best, Authors
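Stepping back to Q1 in the thread above, the change-of-variables argument can be illustrated on a toy Gaussian forward marginal. The shifted form $q(\mathbf{x}_\tau|\mathbf{x}_0)=\mathcal{N}(\sqrt{\bar\alpha_\tau}\,\mathbf{x}_0+(1-\sqrt{\bar\alpha_\tau})\,\mathbf{x}_r,\,(1-\bar\alpha_\tau)\mathbf{I})$ is an assumed stand-in for a learnable-prior diffusion, not the paper's exact equations; under it, subtracting the prior location recovers the standard DDPM marginal for the shifted data.

```python
import numpy as np

rng = np.random.default_rng(0)
x0 = rng.normal(size=(4, 3))   # clean sample
xr = rng.normal(size=(4, 3))   # learned prior location (assumed, for illustration)
a_bar = 0.37                   # cumulative schedule value at some diffusion step

# Mean of the assumed shifted forward marginal q(x_tau | x_0).
mean_shifted = np.sqrt(a_bar) * x0 + (1 - np.sqrt(a_bar)) * xr

# After the change of variables x' = x - x_r, the mean equals the standard
# DDPM marginal mean applied to the shifted data x0' = x0 - xr.
mean_ddpm = np.sqrt(a_bar) * (x0 - xr)
assert np.allclose(mean_shifted - xr, mean_ddpm)   # the variance is unaffected
```

Since a constant shift leaves the variance untouched, the shifted latents follow standard DDPM dynamics, which is what licenses the SDE/ODE conversion and DDIM sampling described in the rebuttal.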
Summary: In this paper, the authors propose geometric trajectory diffusion models (GeoTDM) to model the temporal distribution of geometric trajectories while keeping the desirable physical symmetry of the trajectories. Strengths: The authors impose SE(3)-invariance constraints on the prior and transition kernel to preserve the desirable physical symmetry of the trajectories. Weaknesses: I am concerned that dynamical systems with rotation/translation invariance have already been studied in previous research (listed below, but not quoted here), contrary to the authors' claim that this is the first study on this topic. There might be some overlap here. Both algorithms use the same E(n) equivariant graph neural networks. Missing reference: Fang Wu, Stan Z. Li, DIFFMD: A Geometric Diffusion Model for Molecular Dynamics Simulations. The work in this previous paper requires that the dynamics be invariant to rotation or translation. The paper needs more experimental results to support it. Technical Quality: 3 Clarity: 3 Questions for Authors: P5, line 185: In the definition of the operator P, ⊗ is not defined. It seems to me there is no need to define the operator P if P(x) is defined. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review and suggestions! We provide a point-by-point response below. > **[W1] I am concerned that dynamical systems with rotation/translation invariance have already been studied in previous research (listed below, but not quoted here), contrary to the authors' claim that this is the first study on this topic. There might be some overlap here. Missing reference: [1]** Thank you for bringing up the related work DiffMD [1]. We will definitely cite and discuss the work [1] in the revised version. However, our work is substantially different from DiffMD in the following aspects. 1. DiffMD only models the distribution of the next frame $\mathbf{x}^{(t+1)}$ given the current frame $\mathbf{x}^{(t)}$, while our GeoTDM directly models the joint distribution of an entire trajectory $\mathbf{x}^{[T]}$ with a collection of multiple frames. Therefore, DiffMD is not explicitly a generative model on geometric trajectories, while to the best of our knowledge our GeoTDM is the first. Given this nature, DiffMD is vulnerable to error accumulation when generating trajectories with multiple frames at inference time, while our GeoTDM is able to jointly model the correlations of multiple frames within a trajectory and thus incurs much smaller error. We also empirically verify this argument in the additional experiment provided below. 2. DiffMD relies on taking a given frame as input, while GeoTDM can perform both unconditional and conditional generation. It is unclear how to generate a geometric trajectory from scratch using DiffMD, for example, in the unconditional generation settings in our paper (Tables 4 and 5). 3. DiffMD is specifically designed for modeling molecular dynamics, which heavily leverages particular geometric features such as bond angles and dihedral angles in molecules.
By contrast, our GeoTDM, without relying on domain-specific features, has been demonstrated to perform promisingly across a wide suite of benchmarks including physical simulation, molecular dynamics, and even pedestrian trajectory forecasting. We also add additional experiments to perform a thorough comparison. Since there is no public code released for DiffMD, we implement a variant of DiffMD (denoted DiffMD*) which replaces the task-specific backbone proposed in DiffMD with EGNN. We adopt the original training loss and sampling procedure proposed in DiffMD. We also set the same number of layers and hidden dimension as our GeoTDM, ensuring a fair comparison. Since DiffMD is not directly applicable to unconditional generation, we benchmark it in the conditional generation setting on the N-body and MD17 datasets. The results are presented below. ||Particle|Spring|Gravity| |-|-|-|-| |DiffMD*|0.170/0.382|0.0093/0.0252|0.298/0.724| |GeoTDM|0.110/0.258|0.0030/0.0079|0.256/0.613| ||Asp|Ben|Eth|Mal|Nap|Sal|Tol|Ura| |-|-|-|-|-|-|-|-|-| |DiffMD*|0.152/0.301|0.025/0.051|0.193/0.406|0.209/0.477|0.090/0.142|0.131/0.306|0.129/0.235|0.161/0.260| |GeoTDM|0.107/0.193|0.023/0.039|0.115/0.209|0.107/0.176|0.064/0.087|0.083/0.120|0.093/0.121|0.074/0.099| It is observed that GeoTDM, by modeling the joint distribution of all frames within the geometric trajectory, consistently yields lower error than DiffMD*, which only captures the distribution of one target frame and thus needs to perform iterative roll-out for the entire trajectory, incurring extra error accumulation. We will add these discussions to the paper. [1] Fang Wu, Stan Z. Li, DIFFMD: A Geometric Diffusion Model for Molecular Dynamics Simulations.
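The roll-out vs. joint-generation contrast can be mimicked with a deliberately simplified toy model (our illustration, not the authors' experiment): assume each model call contributes independent Gaussian error of scale sigma. An autoregressive sampler feeds its own prediction back in, so errors compound across the T frames, while a joint sampler gives each frame only its own error.

```python
import numpy as np

rng = np.random.default_rng(0)
T, trials, sigma = 20, 5000, 0.1   # toy settings, chosen for illustration

# Autoregressive rollout: the error at frame t carries over into frame t+1,
# so the final-frame error is a sum of T independent per-step errors.
final_err_ar = np.abs(rng.normal(0, sigma, size=(trials, T)).sum(axis=1)).mean()

# Joint generation: each frame carries only its own error.
final_err_joint = np.abs(rng.normal(0, sigma, size=trials)).mean()

# Rollout error grows roughly like sigma * sqrt(T), so it is markedly larger.
assert final_err_ar > 3 * final_err_joint
```

This random-walk growth of rollout error is the mechanism the rebuttal points to when attributing DiffMD*'s higher ADE/FDE to iterative roll-out.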
> **[W2] The paper needs more experimental results to support it.** In this paper, we have performed extensive experiments on tasks including physical simulation (3 datasets), molecular dynamics (8 datasets in MD17 as well as an additional OC22 dataset [1], see Appendix C.1), and pedestrian trajectory forecasting (5 datasets in ETH-UCY). For physical simulation and MD, we also benchmark on both unconditional and conditional generation scenarios. In Sec. 5.3 we conduct extensive ablation studies and some additional use cases, including leveraging GeoTDM to perform trajectory interpolation and optimization. We also propose a sampling algorithm to sample much longer trajectories using our GeoTDM trained on shorter trajectories through model composition, and verify its feasibility experimentally (see `aspirin_longtraj.gif` in the supplementary file). The above experiments not only cover a diverse range of benchmarking suites but also involve in-depth use cases of how to unleash the potential of GeoTDM for novel application scenarios and tasks. We are glad to include more if the reviewer has concrete suggestions on what extra experiments are needed. > **[Q1] P5, line 185: In the definition of the operator P, ⊗ is not defined? It seems to me no need to define operator P if P(x) is defined.** Thank you for raising this point. The symbol $\otimes$ refers to the Kronecker product. For instance, $I_D\otimes I_N=I_{DN}$, where $I_D$, $I_N$, and $I_{DN}$ are the identity matrices with shapes $D\times D$, $N\times N$, and $DN\times DN$, respectively. Here $\mathbf{P}$ is the matrix representation of the linear function $P(\cdot)$, i.e., $\mathbf{P}\mathbf{x}=P(\mathbf{x})$, where the left-hand side is a matrix multiplication. We formally define the operator $\mathbf{P}$ here in order to clearly show that it is a linear operation with rank $(TN-1)D$, which facilitates our subsequent analysis (more details in Appendix A.1).
However, we agree that it is not necessary to define $\mathbf{P}$ in the main text when $P(\cdot)$ has already been introduced, and we will move it to Appendix A.1 instead. **We sincerely hope our response could address your concerns!** --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! After reading all your rebuttals and the other reviewers' comments, I have decided to stay at the same rating. --- Rebuttal 2: Title: Thank you for the feedback Comment: Dear Reviewer CUJA, We greatly appreciate your advice and promise to incorporate the discussions in the final version. In particular, we also summarize the core distinctions between our GeoTDM and DiffMD [1] as follows: | | Distribution Modeled | Unconditional | Prior | Backbone | |-|-|-|-|-| | DiffMD [1] | Frame | $\times$ | Fixed Pointwise Prior | Specialized for molecules | | GeoTDM | Trajectory | $\checkmark$ | Learned Flexible Prior | General for $n$-D geometric data | Overall, our experiments have verified the advantages of our approach over DiffMD in terms of empirical performance. We would highly appreciate it if you could kindly consider adjusting your evaluation if we have addressed your concerns. Thank you! Best, Authors [1] Fang Wu, Stan Z. Li, DIFFMD: A Geometric Diffusion Model for Molecular Dynamics Simulations. --- Rebuttal 3: Comment: Thank you for your new update! What troubles me is that I am still not convinced by your statement that "DiffMD is not explicitly a generative model on geometric trajectories, while to the best of our knowledge our GeoTDM is the first." I believe that both methods are generative models, but the approaches differ in certain details, so I need you to clarify this in more detail. While you claimed that you are "modeling the joint distribution of all frames within the geometric trajectory", what I see is that you still look at the individual conditional probability like DiffMD; see line 193.
It seems like you take the data sampling over time into consideration; to me, this can be considered a generalization of DiffMD. For the above reason, I believe that is why you both use the Equivariant Graph Convolution Layer. --- Rebuttal Comment 3.1: Title: Follow-up on the clarifications Comment: Dear Reviewer CUJA, Thank you for the follow-up discussion; we have clarified your questions in detail. As the discussion period is coming to an end, we would greatly appreciate it if you could let us know whether the clarifications address your concern, and we are willing to offer more explanations if there are any questions. Thank you! Best, Authors --- Rebuttal 4: Title: Further Clarifications Comment: Dear Reviewer CUJA, Thank you very much for the timely feedback and the follow-up question! We are glad to further clarify your concern. > What troubles me is that I am still not convinced by your statement that "DiffMD is not explicitly a generative model on geometric trajectories, while to the best of our knowledge our GeoTDM is the first." I believe that both methods are generative models, but the approaches differ in certain details, so I need you to clarify this in more detail. Yes, we agree that both DiffMD and GeoTDM are generative models. The core difference lies in whether the method models the distribution of **geometric trajectories** $\mathbf{x}^{[T]}=[\mathbf{x}^{(0)},\mathbf{x}^{(1)},\cdots,\mathbf{x}^{(T-1)}]\in\mathbb{R}^{T\times N\times D}$, which is a **sequence/collection of frames**, or of a **single** frame $\mathbf{x}^{(t)}\in\mathbb{R}^{N\times D}$. In particular, our GeoTDM explicitly models the distribution of geometric trajectories $p(\mathbf{x}^{[T]}|\cdot)$ (see lines 116-122), while DiffMD is designed to model the distribution of the next frame $p(\mathbf{x}^{(t+1)}|\cdot)$ (see the Model Overview section on Page 3 of DiffMD [1]), where $\cdot$ denotes certain conditioning if applicable (e.g., the previous frames).
This difference leads to several distinctions in model design, inference, and empirical performance, as we discuss below. * **Model design.** In order to jointly model $p(\mathbf{x}^{[T]}|\cdot)$, the transition kernel needs to additionally handle the temporal dimension of size $T$. This motivates us to design the equivariant temporal attention layer (Eq. 5-7), which is absent in DiffMD since DiffMD does not need to explicitly model temporal correlation. Our framework also enables conditioning on a trajectory with multiple frames, while DiffMD is based on a Markovian assumption and always consumes the single previous frame to generate the next frame. * **Inference.** The benefit of directly modeling the distribution of geometric trajectories over single frames is also evident at inference time. Within one diffusion loop, our GeoTDM can generate an entire trajectory with $T$ frames, while DiffMD requires an additional outer loop that sweeps through the $T$ frames in order to achieve the same effect. * **Performance.** We also demonstrate the benefit in terms of empirical performance. Since DiffMD does not consider the correspondence between multiple frames, it is more vulnerable to error accumulation when generating a long trajectory through iterative roll-out. Our extra experiment provided in the rebuttal verifies this point, with GeoTDM outperforming DiffMD by a remarkable margin. > While you claimed that you are "modeling the joint distribution of all frames within the geometric trajectory", what I see is that you still look at the individual conditional probability like DiffMD; see line 193. It seems like you take the data sampling over time into consideration; to me, this can be considered a generalization of DiffMD. Thank you for the question. However, there might be a misunderstanding here, which we would like to respectfully clarify. Specifically, line 193 depicts the transition kernel $p\_\theta({\mathbf{x}\_{\tau-1}^{[T]}}|\mathbf{x}\_{\tau}^{[T]})$.
Here $\tau$ refers to the **diffusion step** instead of the **frame index on the geometric trajectory**. The superscript $[T]$ indicates that the latent variable here is a geometric trajectory instead of a single frame. Therefore, the individual conditional probability is enforced over the **diffusion steps**, which is due to the Markovian assumption of the diffusion process. This is a common practice for diffusion models and is shared by both GeoTDM and DiffMD, and we are not claiming any difference on this point. However, we are **not** enforcing conditional independence on the actual temporal dimension of the geometric trajectory, while DiffMD has a Markovian assumption on molecular dynamics and only models the distribution of a single frame. This is what we refer to by stating "modeling the joint distribution of all frames within the geometric trajectory". This point leads to the core difference between these two approaches, as we have discussed. > For the above reason, I believe that is why you both use the Equivariant Graph Convolution Layer. We are both using EGCL to process spatial information on the geometric structure. However, since we are modeling the whole trajectory, we need to additionally introduce the equivariant temporal attention layer to process the temporal correspondence, which is never present in DiffMD. Thank you again for the follow-up discussion! We promise to include the discussion and the distinctions between these two methods in the manuscript. Please let us know if this addresses your concern, and we are happy to further clarify if you have any questions. Best, Authors --- Rebuttal 5: Comment: Thank you for your response and clarification! I have decided to raise my rating to 6: Weak Accept. I trust you can revise your paper according to the discussion. --- Rebuttal Comment 5.1: Title: Thank you for the supportive feedback! Comment: Dear Reviewer CUJA, Thank you for the supportive feedback and the insightful comments that help us improve the manuscript!
We will include these in the final version. Best, Authors
Summary: The paper introduces geometric trajectory diffusion models for the generation of particle, pedestrian, or molecular trajectories. The architecture consists of EGNN layers within a temporal frame and temporal attention across frames computed with relative temporal encodings. The architecture is shown to possess the appropriate equivariances. Key points of novelty include a learnable prior as a function of conditioning frames in the conditional generation case. Experiments are conducted on particle, molecular, and pedestrian dynamics datasets. Strengths: * The method is a timely extension and synthesis of geometric diffusion models and trajectory diffusion models. * As demonstrated in the experiments, the method has broad applicability across many machine learning domains. * The experiments are thorough. The ablation studies validating the somewhat unconventional choices of conditional prior are appreciated. * The paper is clearly written and the exposition of the many experiments is handled very cleanly. Weaknesses: * The work does not score highly in conceptual novelty. While all architecture choices are sensible, they are relatively straightforward and do not seem surprising, insightful, or inspired. An extension or exploration of more sophisticated equivariant architectures could have strengthened the paper. * The exposition of the method drags at times, re-proving well-known statements about equivariance in neural networks and geometric diffusion models. * The experiments, while diverse and broad, also suffer from being scattered and not necessarily the most convincing individually. Little effort is spent on exploring the ways in which a trajectory diffusion model could be employed and the new problems that could be solved with the model. To me, the novel capabilities of such a method are more exciting than marginal gains across a number of established datasets, executed checkbox-style.
Technical Quality: 3 Clarity: 4 Questions for Authors: See above Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review and suggestions! We provide a point-by-point response below. > **[W1] The work does not score highly in conceptual novelty. While all architecture choices are sensible, they are relatively straightforward and do not seem surprising, insightful, or inspired. An extension or exploration of more sophisticated equivariant architectures could have strengthened the paper.** We will first elaborate on the motivation and insights of our architectural design. We also add extra experiments that explore more sophisticated equivariant networks under the framework of our GeoTDM. **Motivation and insights of the architecture.** Our EGTN stacks spatial layers and our proposed equivariant temporal attention layers in an alternating fashion. * For the spatial layer, we use the widely adopted EGNN layer, akin to existing geometric diffusion models for static structures. Notably, this module is adopted in a plug-and-play manner, which enables us to switch to other advanced backbones depending on the data and task. We also provide extra experiments switching to more advanced equivariant networks below. * To process temporal information, we propose a novel equivariant attention layer that involves several core designs, including relative temporal embedding, satisfaction of equivariance, and extension to cross-attention for conditioning. The use of attention here is inspired by the success of Transformers in processing sequence data such as text and audio, since here we are handling a sequence of frames. We have also performed thorough ablation studies in Table 6, which aim to provide deeper insight into how these design choices matter. **Exploration of more sophisticated equivariant networks.** We also provide additional experiments exploring more sophisticated equivariant networks in the framework of our GeoTDM.
We replace EGCL with an advanced backbone, Equiformer [1] (dubbed GeoTDM*), which enables utilizing higher-order equivariant tensors. We use the codebase of [1]. The results on MD17 conditional generation are exhibited below. ||Asp|Ben|Eth|Mal|Nap|Sal|Tol|Ura| |-|-|-|-|-|-|-|-|-| |GeoTDM*|0.099/0.184|0.023/0.036|0.098/0.189|0.117/0.184|0.060/0.092|0.075/0.113|0.112/0.130|0.067/0.094| |GeoTDM|0.107/0.193|0.023/0.039|0.115/0.209|0.107/0.176|0.064/0.087|0.083/0.120|0.093/0.121|0.074/0.099| We observe that the performance is generally enhanced across several molecules. However, the sampling time is dramatically increased since Equiformer is more computationally expensive than EGNN. Besides, it is a domain-specific backbone that cannot be directly applied to other tasks like pedestrian trajectory modeling, which limits the breadth of its application. [1] Liao et al. Equiformer: Equivariant Graph Attention Transformer for 3D Atomistic Graphs. In ICLR'23. > **[W2] The exposition of the method drags at times, re-proving well-known statements about equivariance in neural networks and geometric diffusion models.** The formal statements presented in this paper are fundamentally different from those in existing works and are not simply re-proved counterparts. The theoretical framework begins with the formal definition of a geometric trajectory (line 116), which is distinct from the static structures discussed in the existing literature, being an extension of them with an additional temporal dimension. We then rigorously define how the group action $g\in SE(3)$ operates on a geometric trajectory (line 129). Notably, Theorems 4.2-4.4 are introduced and proved under the definition of geometric trajectories and the group action enforced on them, which generalize the theorems in existing works and subsume them as special cases when $T=1$. These theorems are presented for the completeness and mathematical rigor of the paper.
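For concreteness, the frame-wise group action on a trajectory described above can be written out explicitly (reconstructed from this description; the paper's notation at line 129 may differ in detail):

```latex
g \cdot \mathbf{x}^{[T]}
  = \big[\, \mathbf{R}\mathbf{x}^{(0)} + \mathbf{t},\;
            \mathbf{R}\mathbf{x}^{(1)} + \mathbf{t},\;
            \ldots,\;
            \mathbf{R}\mathbf{x}^{(T-1)} + \mathbf{t} \,\big],
  \qquad g = (\mathbf{R}, \mathbf{t}) \in \mathrm{SE}(3),
```

with the same rotation $\mathbf{R}$ and translation $\mathbf{t}$ applied to every node in every frame; the theorems then concern priors and transition kernels whose densities respect this action, reducing to the static-structure case when $T=1$.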
> **[W3] The experiments, while diverse and broad, also suffer from being scattered and not necessarily the most convincing individually. Little effort is spent on exploring the ways in which a trajectory diffusion model could be employed and the new problems that could be solved with the model. To me, the novel capabilities of such a method are more exciting than marginal gains across a number of established datasets, executed checkbox-style.** Thank you for recognizing the diversity and broad coverage of the experiments and for the suggestions on exploring novel capabilities of GeoTDM. Indeed, in the paper we have already explored a few directions regarding the novel capabilities of GeoTDM: 1. We apply GeoTDM to OC22 [1], a dataset of novel large-scale catalytic MD systems. The detailed experimental setup and results are presented in Appendix C.1. The results demonstrate the potential of leveraging GeoTDM to simulate the dynamics of catalyst systems, which bears significance for designing novel catalyst systems. 2. We also demonstrate the capability of GeoTDM in performing temporal interpolation of trajectories and trajectory optimization in Sec. 5.3. GeoTDM is capable of handling these special tasks since it enjoys the benefits of being a controllable diffusion model that captures the joint distribution of the entire trajectory, while the baselines in existing works cannot. These experiments show the promise of employing GeoTDM in more applications such as designing chemical reaction paths given the initial and target system states, which is an interesting future direction. 3. We propose an approach to generate long trajectories using GeoTDM trained on shorter trajectories through model composition in Appendix C.2. The demonstration reveals the capability of our model to produce long and stable MD trajectories, which also highlights a novel application scenario of GeoTDM. [1] Tran et al. The Open Catalyst 2022 (OC22) Dataset and Challenges for Oxide Electrocatalysts.
**We sincerely hope our response could address your concerns!** --- Rebuttal Comment 1.1: Comment: I appreciate the detailed author response. However it has not moved the needle substantively on my concerns and I will keep the score. --- Reply to Comment 1.1.1: Title: Thank you for the feedback Comment: Dear Reviewer UDq7, Thank you for the timely reply. To help substantively address your concerns, we summarize and further illustrate our response below. For **W1**, we added extra experiments that explore how to combine our proposed equivariant temporal layer with more sophisticated equivariant architectures, e.g., Equiformer, which helps further boost the performance on specific tasks like MD. We will also include these discussions in the paper. For **W2**, we clarified the difference between the theorems presented in this paper and previous work. Besides, we provided the necessary derivations in Appendix A, where we interestingly found that our GeoTDM also preserves the simplified loss (Eq. 12), which is not straightforward. Notably, in Theorem A.4 we also theoretically justify that our learnable prior subsumes existing parameterizations. All of these constitute our theoretical contributions. For **W3**, we have also presented various explorations of approaching new problems with GeoTDM, including modeling large-scale catalyst dynamics (Appendix C.1), interpolating trajectories given initial and target states (Sec. 5.3), optimizing the input trajectory towards the learned distribution (Sec. 5.3), and performing generation over a longer time horizon (Appendix C.2). Please let us know if you have any particular concerns or any advice that could help further improve the paper; we are more than willing to address them. Thank you! Best, Authors
Summary: The paper proposes the first diffusion model for modeling the temporal distribution of 3D geometric trajectories, while previous works only operate on static structures. It demonstrates that equivariant temporal kernels can lead to densities with the desired symmetry and develops a novel transition kernel leveraging SE(3)-equivariant spatial convolution and temporal attention. The experiments demonstrate that it can generate realistic geometric trajectories with significantly higher quality. Strengths: 1. The geometric trajectory diffusion model is novel to me. 2. The EGTN that operates on geometric trajectories permits conditioning upon a given trajectory using equivariant cross-attention. 3. The experiments demonstrate that GeoTDM achieves SOTA performance on both unconditional and conditional trajectory generation tasks. 4. The work can be extended to applications like temporal interpolation and trajectory optimization. Weaknesses: None Technical Quality: 3 Clarity: 3 Questions for Authors: None Confidence: 1 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your constructive review and recognition of our work! Please let us know if you have any questions and we are more than happy to answer.
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Amortized Eigendecomposition for Neural Networks
Accept (poster)
Summary: In this paper, the authors proposed a novel framework to integrate eigendecomposition efficiently into neural network training, which is typically costly in the backward process. The key insight is to compute the decomposition together with training (since the decomposition also changes during training). Rather than forcing exact eigendecomposition $U\Sigma U^T=A_\theta$ every iteration, the authors treat the "eigenvectors" U as optimizable variables and leverage an eigen loss that attracts U to the true eigenvectors of $A_\theta$. The authors include a thorough analysis of different "eigen loss" inspired by the trace inequality (Eq.9, 11); they also provided an efficient implementation that re-parameterizes the orthogonal basis U into an unconstrained matrix $W$ via QR decomposition $U=\text{QR}(W)$. For convergence analysis, the authors empirically showed that their eigen loss cooperates well with different optimizers for solving eigendecomposition; furthermore, they tested their method on several machine learning tasks and observed a significant speed-up from 1.4x to 600x. Strengths: Originality: Good. The idea of applying regularizers to simulate SVD/eigendecomposition has been proposed in previous works such as https://arxiv.org/abs/2303.10512. However, the framework proposed here is novel and detailed. Quality: Good. The authors combined theoretical analysis (such as Fig.2) with empirical results (Fig.3). Clarity: Good. The paper is well-written and easy to follow. Significance: Fair. The experiments showed a substantial speedup for multiple tasks involving eigendecomposition. Weaknesses: The idea of doing eigendecomposition amortized is interesting. However, the stability and convergence rate of this method are not adequately discussed in the paper. Also, the design of eigen loss is very much "handcrafted", and I look forward to more flexible methods. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
It seems amortized eigendecomposition would converge more slowly w.r.t. iterations compared to traditional eigh/SVD (because it is a rather indirect way of optimization). I wonder how much time one can save over the whole training process? 2. Can this framework be applied to other aspects, such as improving the stability of SVD in network training? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: 1. The proposed method seems to be designed for relatively simple machine learning tasks. For large-scale deep learning tasks with complex loss landscapes, there would be many sub-optimal solutions and the eigen loss may not be helpful. 2. Limited to eigendecomposition, which is only applicable to symmetric matrices. For example, in Section 5.2 the authors have to approximate the nuclear norm $\|\theta\|_\star$ with $\sum_i \|\theta u_i\|$ since $\theta$ is not necessarily symmetric. So perhaps an "amortized SVD" would be more significant. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
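The amortized scheme summarized in this review can be sketched numerically. The following minimal NumPy example (an illustrative retraction-style update written by us, not the paper's exact autodiff implementation; all names are ours) treats the eigenbasis as a free variable $U = \text{QR}(W)$ and ascends the trace objective $\text{tr}(M U^\top A U)$, whose maximizer over orthonormal $U$, for symmetric $A$ and distinct positive weights in $M$, is the set of top-$k$ eigenvectors:

```python
import numpy as np

def amortized_topk_eig(A, k, steps=500, lr=0.1, seed=0):
    # Treat the eigenbasis as a free variable U = QR(W) and ascend the
    # trace objective tr(M U^T A U); over orthonormal U its maximizer
    # is the top-k eigenvectors of the symmetric matrix A.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    W = rng.standard_normal((n, k))
    M = np.diag(np.arange(k, 0, -1, dtype=float))  # distinct weights order the columns
    for _ in range(steps):
        U, _ = np.linalg.qr(W)             # cheap QR instead of a full eigh
        W = U + lr * 2.0 * (A @ U @ M)     # ascent-style step, then re-orthonormalize
    U, _ = np.linalg.qr(W)
    return U, np.diag(U.T @ A @ U)         # Rayleigh quotients approximate eigenvalues

rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T                                # symmetric PSD test matrix
U, vals = amortized_topk_eig(A, k=3)
true_vals = np.linalg.eigvalsh(A)[::-1][:3]
```

In a training loop this per-iteration update is what replaces the explicit `eigh` call; the estimate of $U$ is carried across iterations while $A_\theta$ slowly changes.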
Rebuttal 1: Rebuttal: We appreciate reviewer fsNU for the constructive feedback on our paper. Here are our responses. ***Q1 It seems amortized eigendecomposition would converge slower w.r.t iterations, compared to traditional eigh/svd (because it's a rather indirect way of optimization). And I wonder how much time one can save for the whole training process?*** Our amortized eigendecomposition may indeed converge more slowly in terms of iterations compared to traditional eigh/SVD when used purely as a numerical eigensolver. We acknowledge this limitation in the conclusion section of the paper. However, the primary advantage of our approach lies in replacing the computationally expensive eigh/SVD with a more efficient QR decomposition. While convergence to the same results may require more steps, the QR decomposition is significantly faster overall. In our experiments, we observed substantial speed-ups with our approach, ranging from 1.4x to 600x across various tasks. Additionally, we conducted an experiment on the latent PCA task using a backbone with 64 layers and over 1 billion parameters, and an eigendecomposition of a 4096x4096 matrix. The experimental results are shown in the additional PDF page. Our approach achieved at least a 50% reduction in time. Specifically, the average training time for such a backbone with eigh was 0.330 seconds per iteration, whereas our method reduced this to 0.155 seconds per iteration. ***Q2 Can this framework be applied to other aspects such as improving the stability of SVD in network training?*** Yes, our method can enhance the stability of SVD during network training. QR decomposition tends to be more stable than SVD, especially when the underlying matrix $A$ has full rank [1]. This stability can be achieved through effective initialization of $W$ (where $U = \text{QR}(W)$), and the rank of $W$ is preserved during optimization on the orthogonal manifold.
Furthermore, our approach can serve as a robust alternative to truncated SVD (i.e., when only the top-k singular values are needed), which is often not differentiable in frameworks like JAX and TensorFlow, or lacks numerical stability, as seen with methods like `torch.lobpcg`. Our method provides a stable and differentiable approach to matrix decompositions, making it a viable option in these scenarios. ***Q3 For large-scale deep learning tasks with complex loss landscapes, there would be many sub-optimal solutions and the eigen loss may not be helpful.*** We want to clarify that the primary contribution of this paper is not to claim that the eigen loss itself improves neural network performance. Instead, our focus is on accelerating neural networks that involve eigendecomposition (or SVD) operations. Our approach aims to make these operations faster without compromising the accuracy of the eigendecomposition results. While it is true that involving eigendecomposition can make the loss landscape more complex, there is extensive research demonstrating its effectiveness in various tasks, such as nuclear norm regularization, graph convolutional networks, and network compression. Our paper's goal is to provide a more efficient way to perform eigendecomposition within neural networks, achieving speed improvements in an amortized fashion while preserving performance. ***Q4 Limited to eigendecomposition which is only applicable to symmetric matrices.*** Although this paper focuses on eigendecomposition, our approach is NOT limited to symmetric matrices. The singular values of an asymmetric matrix $A$ can be obtained by performing eigendecomposition on the matrix $A^\top A$ (or $AA^\top$) and taking the square root of the eigenvalues, which is a well-known result. In Section C2 of the appendix, we demonstrate the extension of our approach to SVD.
However, we chose to focus on eigendecomposition in this paper because it is a more fundamental and more commonly used linear-algebra operation than SVD. We will make this point clearer in the next version of the manuscript. [1] Demmel, James W. Applied Numerical Linear Algebra. Society for Industrial and Applied Mathematics, 1997. --- Rebuttal Comment 1.1: Comment: Thanks for the comprehensive rebuttal, especially **Q3** illustrating that the eigen loss can act as a regularizer and preserve networks' performance. I am increasing my rating to 5.
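The reduction in Q4 is easy to verify numerically; a small NumPy check (our own illustration, not code from the paper):

```python
import numpy as np

# Singular values of a (possibly non-symmetric, non-square) matrix A can be
# recovered from an eigendecomposition of the symmetric matrix A^T A:
# the eigenvalues of A^T A are the squared singular values of A.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
evals, V = np.linalg.eigh(A.T @ A)               # ascending eigenvalues of A^T A
sing = np.sqrt(np.clip(evals[::-1], 0.0, None))  # clip guards tiny negative round-off
ref = np.linalg.svd(A, compute_uv=False)         # exact singular values, descending
```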
Summary: This paper proposes a method that decouples the eigendecomposition calculation from the training process; instead, it uses an eigen loss to jointly optimize the decomposition with the training loss of the neural network as a nested optimization loop. The proposed method can speed up the training of problems that incorporate eigendecomposition within their constraints or objective functions. Strengths: The overall method is easy to understand and sound. Combining the designed eigen loss with the normal training loss is an interesting and reasonable way to effectively speed up training. Weaknesses: The theoretical support of this paper is sufficient. However, in the experiment section, beyond the speed-up ratio, I think the authors should report the loss values and the corresponding task performance for a comprehensive comparison. It is important to verify whether the accelerated method can maintain performance. Technical Quality: 4 Clarity: 4 Questions for Authors: I have no questions. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors clearly state the limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer WiHy for the constructive comments on our paper. Here are our responses. ***Q1 Whether the accelerated method can maintain performance.*** In the manuscript, we try to answer this question through two experiments: - Latent-space Principal Component Analysis: As shown in Figure 5a, the convergence curves of the traditional eigendecomposition and our approach align well. The reconstruction loss curves for both the conventional eigh function and our amortized eigendecomposition strategy are nearly indistinguishable. Initially, our method registers lower eigen-loss values compared to the eigh function, but it eventually converges to equivalent values. This demonstrates the efficacy of the amortized optimization approach. - Adversarial Attacks on Graph Convolutional Networks: As shown in Figure 6, we find that incorporating eigendecomposition of the Laplacian matrix makes the graph convolutional network more robust to attacks on graph structures. Both experiments demonstrate that our acceleration method can maintain performance without any degradation. Due to the page limit of the review round, we were not able to include these results in the main text. We will make this point clearer in the main text of the paper if an additional page is allowed. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed explanation, I will increase my score to 8.
Summary: This paper proposes a method named "amortized eigendecomposition" to replace the computationally costly SVD operation in settings where an eigendecomposition is required during neural network training. The proposed method introduces a loss term ("eigen loss") and replaces the full SVD with the less computationally expensive QR decomposition at each iteration. A theoretical analysis shows that the desired eigenpairs are obtained as optima of the eigen loss. An empirical analysis on several tasks, including nuclear norm regularization, latent-space PCA, and graph adversarial learning, demonstrates that the proposed method attains significant improvements in training efficiency while producing nearly identical outcomes to conventional approaches. Strengths: Excellent organization and presentation; reads like a textbook chapter. The method is well motivated and clearly explained. The analyses are on-point and thorough. The significance and applications of the method are made clear, and the limitations are adequately discussed. The method addresses a concrete application and is of immediate benefit for a class of problems in ML. Weaknesses: A discussion of scalability and a comparison with alternative methods for fast SVD are two significant points missing in this paper. The experiments report results on small-to-moderate matrix dimensions. Technical Quality: 4 Clarity: 4 Questions for Authors: How does amortized eigendecomposition scale to larger dimensions? How does it compare to other fast-SVD methods? In particular, I have randomized SVD in mind, which is known to scale very well. Is there a trade-off, or is there a reason why randomized SVD is not applicable in this setting? Or is it just still more computationally expensive because it needs to be done in each iteration? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have addressed the limitations of their method reasonably well.
As mentioned in the sections above (weaknesses/questions), an additional discussion of scalability and comparison to other fast/approximate SVD methods would be nice to have. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank reviewer 5kBi for the acknowledgement and valuable comments on our paper. Here are our responses. ***Q1 Regarding the scalability.*** For additional experiments on large-scale setups, please refer to the general response provided to all reviewers. ***Q2 Comparison with randomized SVD.*** While randomized SVD can enhance the efficiency of SVD by randomly projecting the matrix into a smaller one and then performing SVD on the smaller projected matrix, there are several key differences between our approach and randomized SVD: - *Applicability*: Randomized SVD is more suitable for large **sparse** matrices with **low rank**, as mentioned in the documentation of `torch.svd_lowrank`. Our method, on the other hand, is a general eigendecomposition/SVD technique that makes no assumptions about the input matrix. For dense matrices, randomized SVD often suffers from lower accuracy compared to standard SVD. As shown in Figures 3 and 4, our approach achieves error rates close to those of standard SVD on dense matrices. - *Determinism*: Randomized SVD is a stochastic algorithm, meaning the results can vary due to inherent randomness. Although this variability can be mitigated by using a fixed random seed, it still introduces a non-deterministic element to the computation. In contrast, our algorithm is deterministic, providing consistent and reproducible results. - *Hyperparameters*: The accuracy of the randomized SVD approximation depends on the choice of the oversampling parameter $p$. If $p$ is too small, the approximation may be poor. Conversely, if $p$ is too large, it reduces computational efficiency. Our method does not require any hyperparameters to be specified. We acknowledge that our algorithm could be further accelerated using randomized linear algebra techniques such as randomized QR decomposition or randomized trace estimators. We plan to explore these possibilities in future work.
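For concreteness, the randomized SVD baseline being compared here can be sketched in a few lines of NumPy (our own Halko-style illustration with the oversampling parameter `p` made explicit; all names are ours):

```python
import numpy as np

def randomized_svd(A, k, p=10, seed=0):
    # Project A onto a random (k+p)-dimensional sketch of its range,
    # then run an exact SVD on the much smaller projected matrix.
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((A.shape[1], k + p))
    Q, _ = np.linalg.qr(A @ Omega)                 # orthonormal basis for the sketch
    U_small, S, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)
    return Q @ U_small[:, :k], S[:k], Vt[:k]

# On a genuinely low-rank matrix the sketch captures the range exactly ...
rng = np.random.default_rng(1)
L = rng.standard_normal((100, 5)) @ rng.standard_normal((5, 80))
_, S_approx, _ = randomized_svd(L, k=5)
S_exact = np.linalg.svd(L, compute_uv=False)[:5]
```

... whereas on dense full-rank matrices its accuracy degrades with the choice of `p`, which is the trade-off the rebuttal points out.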
Summary: The paper describes a novel method to circumvent the need for explicit eigendecomposition during neural network training. The central insight is that one can simply learn estimates of the eigenvectors of interest (parameterized by a QR decomposition) alongside the original loss function via gradient descent. To do so, the authors add an auxiliary (and differentiable) loss whose optimum is attained at the appropriate set of eigenvectors. A summary of the presented results: - The authors provide both theoretical (proofs) and empirical evidence that the proposed estimation algorithm converges to the true eigenvectors. - 3 settings where SVD or eigendecomposition are incorporated in neural network training schemes are investigated empirically; in all settings the proposed scheme yields a significant speed-up relative to explicit matrix decompositions. Strengths: - Motivation: The paper benefits from a very clear goal: to reduce the computational load of neural network training schemes that require eigendecomposition in each iteration. While this may be a somewhat niche use case, I believe such approaches are becoming increasingly prevalent. For example, besides the three situations considered in the paper, multiple recent self-supervised learning methods for learning image representations could benefit from the proposed speedups [1, 2]. - Novelty: The idea of learning estimates of eigenvectors using loss functions from existing optimization methods for eigendecomposition in neural network training is novel to the best of my knowledge. The authors discuss existing solutions in Appendix C (iterative solvers, manifold optimization, etc.), and it seems clear to me the proposed method fills a hole in the practitioner's toolbox. I think this section could potentially be promoted to the main text if space permitted.
- Clarity of presentation: The theorems and their proofs, the description of the algorithm, and the experimental design are all clearly presented and easy to understand. - Strong empirical results: In all of the considered settings the proposed amortization method provides significant speedups relative to full eigendecomposition (or SVD) during each iteration. I would be curious to know how such benefits scale as larger networks are employed (see below). Additional results in the appendix also suggest the amortization method results in similar solutions to the traditional approach. [1] Ermolov, Aleksandr, et al. "Whitening for self-supervised representation learning." International Conference on Machine Learning. PMLR, 2021. [2] Yerxa, Thomas, et al. "Learning efficient coding of natural images with maximum manifold capacity representations." Advances in Neural Information Processing Systems 36 (2023): 24103-24128. Weaknesses: - Small-scale experiments: Each of the three empirical experiments is performed with relatively small networks and datasets. While this is not necessarily a weakness in and of itself, my intuition is that the speed-up benefits will be more marginal as network size increases. This is because, for example, as the depth/width of the encoder/decoder increases in the latent-space PCA setting, the eigendecomposition becomes less of a bottleneck as more time is spent propagating activations through the network. If this is the case, I think it merits discussion in the paper. On the same note, it would be nice to have more assurances that the amortization method finds correct solutions (a la Figure 5a) on larger-scale problems. Technical Quality: 4 Clarity: 4 Questions for Authors: - I would appreciate it if the authors could comment on the scalability of the method (or more precisely, how the computational benefits scale with the architecture size). - I found myself slightly confused by Eq. 19.
The numerator of the regularizer encourages high variance along the first two PCs (and synergistically encourages $U$ to converge toward the first two PCs). Doesn't this, alongside the projection and reconstruction loss, encourage the covariance matrix to be low rank (as any variance in lower-variance eigenspaces will not be accessible to the decoder)? Does homogeneity need to be preserved simply because without trace normalization the weights of the encoder would grow large and the weights of the decoder would decay? Sorry for getting caught up in this as it really is a detail, but wouldn't this also be a case where we really only care that $U$ forms a basis for the space of the top 2 PCs (because the first linear layer of the decoder could "undo" any such rotation of the space if it were advantageous)? If this is the case, I assume it would be sufficient to set $M=I$? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank reviewer 6W8h for the constructive comments and valuable suggestions. Here are our responses. ***Q1 Regarding the scalability.*** For additional experiments on large-scale setups, please refer to the general response provided to all reviewers. We would like to emphasize that, as shown in Figure 1, eigendecomposition/SVD is significantly more expensive than matrix multiplication, potentially up to 1000 times slower. If a network increases in depth while keeping the width constant, the additional matrix multiplications do not significantly increase computational cost. However, if the width of the network increases substantially, the cost of eigendecomposition operations escalates dramatically. For example, in the experiment discussed in the general response, the training time of an MLP model with 64 layers and over 1 billion parameters is still faster than performing an eigendecomposition of a 4096x4096 matrix. Thus, the eigendecomposition typically becomes the bottleneck unless the backbone is a significantly large neural network. ***Q2 Regarding Eq. 19.*** Thank you for highlighting this confusion. Yes, the loss function in Eq. 19 encourages the covariance matrix to be low rank, with the rank close to or equal to the number of principal components (PCs). Additionally, if the trace normalization is removed, the eigen loss can cause the output of the encoder to become excessively large. If we set $\mathbf{M}=\mathbf{I}$, $\mathbf{U}$ can find the optimal subspace, but the resulting PCs are not decorrelated, meaning the covariance matrix of the PCs is not diagonal. Therefore, $\mathbf{M}$ must have distinct diagonal elements. We acknowledge that this loss function may not be the clearest way to demonstrate latent PCA. The purpose of introducing this trace-normalized eigen loss (Eq. 19) is to demonstrate that a simple PCA layer in a network can enforce sparsity in the neural network, as shown in Figure 5c.
However, we also find that this loss function may cause some confusion. An alternative is to avoid concerns about the eigenvalues by using a stop-gradient operator, as in Eq. 15, applied to the covariance matrix of the hidden output. The eigen loss would then become: $$ \operatorname{tr}\big(\mathbf{M}\,\mathbf{U}^\top \operatorname{StopGrad}\big(\operatorname{cov}(h_\theta(X))\big)\, \mathbf{U}\big) $$ In this case, the optimal $\mathbf{U}$ would be the PCA projection, and the trace loss would no longer cause the encoder/decoder parameters to become excessively large or small; however, sparsity would not be enforced directly. We will modify this loss in the manuscript. --- Rebuttal Comment 1.1: Title: Response to Rebuttal. Comment: Thank you to the authors for their clarifying comments. I keep my original score.
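The role of the distinct diagonal in $\mathbf{M}$ discussed in this exchange can be checked with a small NumPy example (our own illustration, not code from the paper): with unequal weights, the trace objective $\text{tr}(\mathbf{M}\mathbf{U}^\top \mathbf{C}\mathbf{U})$ prefers the ordered eigenvectors over any rotation of the top-$k$ subspace, while $\mathbf{M}=\mathbf{I}$ cannot tell them apart.

```python
import numpy as np

# Build data whose covariance has two dominant directions, then compare the
# trace objective tr(M U^T C U) at the true eigenvectors vs. a rotated basis.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 6)) * np.array([5.0, 4.0, 3.0, 1.0, 1.0, 1.0])
C = np.cov(X, rowvar=False)
evecs = np.linalg.eigh(C)[1][:, ::-1]          # eigenvectors, descending eigenvalue
U = evecs[:, :2]                               # top-2 principal directions
theta = 0.7                                    # arbitrary in-subspace rotation
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
M = np.diag([2.0, 1.0])                        # distinct diagonal weights
t_eig = np.trace(M @ U.T @ C @ U)              # objective at the eigenvectors
t_rot = np.trace(M @ (U @ R).T @ C @ (U @ R))  # strictly smaller at a rotated basis
s_eig = np.trace(U.T @ C @ U)                  # M = I: objective at the eigenvectors
s_rot = np.trace((U @ R).T @ C @ (U @ R))      # M = I: identical for any rotation
```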
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their in-depth comments and valuable suggestions, which have significantly improved the quality of our paper. As many of the reviewers mentioned the scalability of our approach, we would like to provide a general response on this aspect. We conducted an additional study on the scalability, with the results presented in the additional PDF page. Using the Celeb-A-HQ (256x256) dataset, we examined the scaling of the latent PCA task by varying the depth and width of the backbone autoencoder. The average execution time per iteration is reported. Notably, the largest model tested, with an autoencoder of 64 layers and a dimension of 4096, comprises over 1.0 billion parameters. From the results, we can draw two main conclusions: - **Efficiency of amortized eigen loss**: Our amortized eigen loss does not significantly increase the computational cost of the backbone but greatly reduces the training time for eigendecomposition. This is evident from the close alignment of the red (backbone) and green (backbone + our approach) lines. Compared to the traditional eigendecomposition approach (shown in the blue line), which increases dramatically as the dimension increases, our approach scales much more slowly. - **Bottleneck of the latent PCA**: The primary bottleneck in such neural network structures is the eigendecomposition, while the computation for fully-connected layers is comparatively minor, especially when the width is large (>2000). This is demonstrated by the increasing gap between the backbone (red line) and backbone + eigh (blue line) as the dimension increases. However, if we increase the depth of the backbone while keeping the hidden dimension fixed, the execution time remains relatively unchanged. This indicates that the cost of fully-connected layers is small compared to that of eigendecomposition, echoing the results shown in Figure 1. 
It is important to note that the total execution time includes both the computation of the eigen solver and the backbone autoencoders. The speed-up ratios presented in Table 1 reflect the speed-up of the eigen solvers alone, excluding the computation time of the backbone autoencoders. Pdf: /pdf/01e916777381fe61601001e6fdb661cdb596d3b0.pdf
NeurIPS_2024_submissions_huggingface
2024
Local to Global: Learning Dynamics and Effect of Initialization for Transformers
Accept (poster)
Summary: This paper focuses on training a single-layer linear-attention transformer with low-rank parameterization on first-order Markov chain data. By reparameterization, the authors reduce this problem to a 3-variable learning dynamics and comprehensively characterize the trajectory and local/global minimizers. The paper precisely characterizes the different conditions for local/global convergence, highlighting the role of initialization. Empirically, on this specific data distribution, they verify the superiority of their special initialization compared to standard initialization. Strengths: This paper is the first result highlighting the role of initialization in this setting. Theoretically, the authors simplify the model to a low-rank parameterization with a fixed subspace for each parameter and rigorously characterize the landscape and trajectory. With this simplified model, all the possible stationary points can be found, and the dynamics can be calculated given different initializations. The simplification is somewhat corroborated by the empirical evidence, and the analysis is intricate. Surprisingly, a random low-rank initialization can empirically outperform the standard initialization of a linear transformer on the Markovian task. This implies the theoretical insights from the low-rank parameterization provide some guidelines for transformer initialization. Weaknesses: 1. I am concerned with the generality of the insights on the role of initialization. I appreciate that the authors make a great effort to theoretically analyze the Markov chain with 2 states (i.e. $P\in R^{2\times 2}$) and do extensive experiments on this particular dataset with linear attention. However, the authors claim that ``we offer practical guidelines for initialization of transformer parameters and demonstrate their effectiveness." (Lines 280-281) in both the abstract and the conclusion.
Line 255 also claims "the corresponding insights are more general and apply more broadly to the general architecture." Since all the conclusions are based on this simplified setting, I don't think they can be characterized as a realistic guide for initialization. I think further experiments on Markov data with more states and on non-linear-attention transformers would benefit this paper and make the results more general. I will increase the score if the results can be verified in a more general setting. 2. As a minor point, the dynamics analysis is restricted to a low-rank manifold. Though it is partially corroborated by the experiments that GD finally converges to the low-rank solution, there is no guarantee that, without this restriction, the parameters stay on the manifold once initialized on it. Nevertheless, the analysis is already complicated and this simplification is acceptable. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Does the insight behind the low-rank initialization scheme transfer to more general data distributions or architectures? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The dynamics analysis is restricted to certain subspaces. The attention pattern is not trained (it is simplified as a trainable scaling factor $a$). The experiments cannot corroborate the general claim. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and insightful comments. We address the individual questions below: - **Generality of the insights**: For the two-state Markov chain, while our analysis capitalizes on the canonical parameters and linear attention, our guidelines and insights from this setting readily generalize to non-linear soft-max attention. For instance, Figure 6 in the paper (Figures 1, 3 and 4 in the rebuttal) demonstrates the effectiveness of our initialization scheme for the full one-layer transformer model with soft-max attention (Lines 60-64). Similarly, the low-rank structure of the parameters, as shown in Figure 5 (Figure 2 in the rebuttal), concerns the full general model. Lines 280-281 and 255 were meant to convey this message, which we are happy to rewrite to make clearer. For multi-state Markov chains, we observe a similar phenomenon where, under standard Gaussian initialization, a single-layer transformer can converge to either local or global minima depending on the Markovian switching. Specifically, in Figure 5 of the rebuttal, we consider a first-order Markov chain with state space $S = \{0, 1, 2, 3, 4\}$ where any state $s \in S$ stays in the same state with probability $1-p$ and switches uniformly at random to one of the remaining states with probability $p/4$. This generalizes the binary-state Markov chain in the paper with $p = q$. As illustrated in Figure 5 of the rebuttal, depending on the value of the switching probability $p$, the transformer parameters converge either to local minima (unigram) or to global minima (bigram). Further, Figure 6 of the rebuttal also reveals the low-rank structure of the parameters during training in the multi-state setting. These interesting empirical observations highlight an exciting research opportunity to mathematically characterize the gradient dynamics for multi-state Markov chains, similar to our two-state analysis.
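The multi-state chain described in this rebuttal is straightforward to simulate; a minimal NumPy sketch (our own code, not from the rebuttal PDF):

```python
import numpy as np

def sample_chain(p, n_states=5, length=20000, seed=0):
    # Stay in the current state with probability 1-p; otherwise switch
    # uniformly at random to one of the other n_states-1 states
    # (for n_states=5 each alternative gets mass p/4, as in the rebuttal).
    rng = np.random.default_rng(seed)
    P = np.full((n_states, n_states), p / (n_states - 1))
    np.fill_diagonal(P, 1.0 - p)
    x = np.empty(length, dtype=np.int64)
    x[0] = rng.integers(n_states)
    for t in range(1, length):
        x[t] = rng.choice(n_states, p=P[x[t - 1]])
    return x, P

x, P = sample_chain(p=0.2)
stay_freq = np.mean(x[1:] == x[:-1])   # empirical stay probability, close to 1-p
```

Setting `n_states=2` recovers the binary chain analyzed in the paper (with $p=q$).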
- **Low rank manifold**: Indeed, using canonical parameterization to characterize the gradient descent dynamics of transformers is a standard tool in the literature. For instance, a recent work [1] employs a similar approach to show that two-layer transformers learn induction heads, assuming specific structure for the attention matrices in both layers, inspired by empirical observations (Figure 2, [1]). - **Limitations**: Please note that the attention parameter $a$ is also trained alongside the embedding and weight parameters $(e,w)$, and the corresponding analysis is highlighted in Section 4.1. All the figures in the paper reflect this setting, with no parameters being frozen or omitted. ---------------- **References** [1] How Transformers Learn Causal Structure with Gradient Descent? https://arxiv.org/pdf/2402.14735 --- Rebuttal Comment 1.1: Comment: Thanks for the detailed response. After careful consideration, I will maintain my score. Below, I provide specific reasons for each of the points. **Generality of the insights**: Regarding the softmax attention for $P\in \mathbb{R}^{2\times 2}$ mentioned in the paper, I believe that the initialization scheme in linear attention prevents the softmax transformers from converging to local minima. However, the exact architecture used in the experiments should be explicitly stated in the main paper. For the multi-state Markov Chain setting, since I was requested to look only at the first page, the current content does not sufficiently demonstrate general results that would illustrate the low-rank structure of the final minimizer or the effectiveness of your low-rank initialization. For completeness, I suggest including experiments that explore whether your low-rank initialization helps transformers not converge to local minima in multi-state Markov Chain tasks. 
**Low-rank manifold**: I fully understand the difficulty of analyzing transformer dynamics, and I agree that reparameterizing the transformer to simplify the analysis is a reasonable approach. However, I must note that restricting the analysis to the ground-truth low-rank manifold with only a trainable scalar factor $a$ may be an oversimplification. Compared with related works, [1] and other linear-regression ICL papers only limit the trainable parameters by zeroing out some irrelevant parts of the matrices. They can then prove that gradient descent learns the correct ground truth, instead of restricting the features to the ground-truth direction. I believe that if the low-rank parameters in the attention layer were trained (initialized or parameterized in a low-rank manner, but not necessarily aligned with $\alpha$), it would significantly enhance the contribution of this paper. It may be too challenging to analyze with so many trainable parameters, so the authors are free to consider or disregard this suggestion. **Limitations**: I have revised my previous inaccurate comment. My primary concern is that the attention features of the transformers are not learned, with the only trainable parameter being the scalar $a$. Specifically, the attention score $\texttt{attn}_{n, i}$ is not trained. --- Rebuttal 2: Title: Attention features are also learnt. Insights for linear attention generalize to soft-max. Low-rank vectors turn out not to be relevant. Comment: We thank the reviewer for the interesting questions and apologize for not being able to accommodate the requested figures on the first page of the PDF. We address the concerns below. **Attention features are indeed learnt:** Note that with canonical parameterization, the attention score is given by $\texttt{attn}_{n,i} = a \cdot (x_n - \frac{1}{2}) (x_i - \frac{1}{2})$ (Eq. 45 in the Appendix). Since $a$ is a trainable parameter, the attention scores are also learned. **Linear vs.
soft-max attention**: We are afraid there has been a misunderstanding about our initialization and the transformer architecture. For all the experiments in the paper, we use the full general 1-layer transformer with non-linear soft-max attention, as described in lines 63-64 of the paper. For the theoretical analysis, we use the linear attention mechanism but with the rest of the architecture intact. Using these insights derived from the linear setting, we demonstrate that our initialization scheme performs significantly better than standard Gaussian initialization even on the full soft-max model. This is not due to linear vs. soft-max but rather the actual loss landscape, as illustrated in Figure 4 in the paper and theoretically characterized in Theorem 9 of Appendix G. Hence we respectfully disagree with the claim that "...our initialization scheme in linear attention prevents the softmax transformers from converging to local minima...". **We do not limit low-rank parameters:** Please note that the low-rank parameters in the attention layer are not artificially restricted to be the same as $\alpha$. The reason why they are also parameterized as $\alpha$ is that empirical evidence suggests the transformer parameters always converge to this specific structure. Furthermore, this $\alpha$ turns out to be an all-$\pm 1$ vector. Hence, even after parameterizing $\alpha$ as a trainable vector in our setting, it turns out that the final loss function $L$ (in line 203) is independent of $\alpha$ (since $\|\alpha\|$ is constant, cf. Appendix B) and only depends on the three scalar parameters $(e,w,a)$. Thus, without loss of generality, it suffices to analyze these scalars. --- Rebuttal 3: Comment: **Attention features are indeed learned:** The definition of $a$ in the paper is somewhat confusing, as two different definitions are provided. Between lines 111-112, $a=\langle v, \alpha\rangle$ is stated, while on line 200, $a=q^2d^{5/2}/4\langle v, \alpha\rangle$ is given.
I believe the definition on lines 111-112 is a typo, but it seems to suggest that only the $V$ layer is being trained. This would imply that $\texttt{attn}$ is not learned and that the trainable parameter $a$ comes from $v$. It is a minor issue, since it is equivalent to put this scalar with $V$ or with $\texttt{attn}$.

**Linear vs. softmax attention:** I am puzzled by the authors' disagreement with my claim. My statement was intended to convey that "your initialization scheme, inspired by the linear model, indeed helps generalize to softmax attention and prevents convergence to local minima." But my concern remains: does this low-rank initialization (not necessarily rank-2) generalize to multi-state Markov models? Even within the extended rebuttal PDF, I don't see this experiment included. Furthermore, according to Figure 5, it is not strictly low-rank (and definitely not rank-1); the strict low-rank property only holds for the binary-state case in the PDF. I suggest that the authors include these details in future versions, and hopefully all previous insights will generalize to the multi-state settings.

**We do not limit low-rank parameters:** I fully understand that the authors need to make these simplifications. But I respectfully disagree with the claim that "even after parameterizing $\alpha$ as a trainable vector in our setting, the loss $L$ is constant", since after training the norm of $\alpha$ will change. Also, it is not necessarily true that $x_n, W_Q, W_K$ all align with $\alpha$ after you train $\alpha$ with some random initialization. And the authors **do** limit the low-rank parameters, just not artificially. It is natural and common to simplify the network in a theoretical work, but it is indeed a limitation. I will maintain my score and cannot strongly recommend acceptance.

---

Rebuttal Comment 3.1: Comment: Dear Reviewer er5c, Thanks for your prompt response. We address the individual comments below.
**Attention features:** As you already noted, the attention scalar $a$ indeed captures the essence of both the attention mechanism and the value matrix, and hence can be thought of as a trainable parameter that takes both of them into account, as also implied by its definition in Line 200. Sorry for the confusion caused by the typo in Line 112. There we just wanted to motivate the definition of $a$ without getting into notation-heavy details. We will correct this.

**Linear vs. soft-max attention:** Sorry if we misunderstood your claim. Our earlier disagreement was with the initialization scheme and not regarding the multi-state Markov chains. As Figure 6 in the rebuttal suggests, while it is true that the matrices are not exactly rank-1, they nonetheless still converge to a relatively low rank, namely rank 4. And as also highlighted in Figure 5 of the rebuttal, the input Markovian switching $p$ has an effect on the model converging to local or global minima, albeit with a threshold different from $0.5$, which is the threshold for the binary case. This is not surprising given that this corresponds to the multi-state setting. Together, these observations suggest that while phenomena similar to those in the binary setting also occur in the multi-state setting, this warrants a more detailed study and a corresponding gradient-flow analysis. This is considerably more challenging than the binary case, but is nonetheless an important and exciting avenue of future research, currently outside the scope of our paper. We will include a discussion and corresponding results for the multi-state case in the revised version to provide more context to our work.

**Low-rank parameters:** We agree with your assessment. We initialized all the low-rank vectors to be a single $\alpha$ because that is what happens in practice at convergence (one could also experimentally check how fast this happens from the beginning).
But we agree that analyzing the full dynamics with different vectors is indeed an interesting direction of future research, though considerably more challenging, as already revealed in our simplified setting.
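The canonical attention score debated in this thread, $\texttt{attn}_{n,i} = a \cdot (x_n - \frac{1}{2})(x_i - \frac{1}{2})$, is simple enough to compute directly. A minimal sketch (the helper name is ours, not from the paper) illustrating why training the single scalar $a$ also trains the attention scores:

```python
import numpy as np

def canonical_attention_scores(x, a):
    """Linear-attention scores under the canonical parameterization:
    attn[n, i] = a * (x_n - 1/2) * (x_i - 1/2) (Eq. 45 of the paper's
    appendix, as quoted in the rebuttal). The single trainable scalar `a`
    absorbs the query/key/value parameters, so learning `a` learns the
    attention scores as well."""
    c = x - 0.5                # centre binary symbols to {-1/2, +1/2}
    return a * np.outer(c, c)  # attn[n, i] = a * c_n * c_i

# Example: a short binary sequence and a non-zero attention scalar.
x = np.array([0, 1, 1, 0], dtype=float)
scores = canonical_attention_scores(x, a=2.0)
# Equal symbols attend with weight +a/4, unequal ones with -a/4.
```

With $a = 0$ the scores vanish entirely, consistent with the later observation in this transcript that $a \approx 0$ is a viable solution in which the model relies only on the skip connection.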
Summary: The paper seeks to characterize how single-layer transformers learn (two-symbol) Markov chains. The analysis relies on a reparameterization of the transformer that assumes the weight matrices are rank 1. The primary results characterize when gradient flow on this reparameterized transformer leads to global minima in loss or to other critical points, showing how in some cases a standard Gaussian initialization will lead to a local minimum instead of the global minimum. Strengths: The paper is generally well written and readable, with well-explained theory (in the main body at least; I have not checked the appendix proofs carefully) and high-quality visualizations. I like the step-by-step presentation of the architecture. Theoretically analyzing the training dynamics and initializations of transformers is definitely significant, and currently somewhat rare to see even in toy settings such as this one. Post Rebuttal: I believe the paper is technically very solid, with my main concern still being the contribution. I did not raise my confidence only because I am less confident that the contribution quite reaches "moderate to high", but I am more confident (4/5) in the other aspects of the paper. Weaknesses: 1. The canonical parameterization is a rather strong assumption (reducing the model to just three parameters), and so it needs strong results to justify it. For me, the results in this paper are close to strong enough, but some strengthening of them would be appreciated (short of removing assumptions). It would be nice to have more evidence showing how the analysis of the canonical parameterization could extend to more general settings. Ideally, theoretical analyses of toy systems should be paired with empirical and heuristic evidence that they either a) approximately explain the dynamics of more realistic systems, or b) yield new insights that can be empirically verified on more realistic systems.
I believe that the paper is primarily trying to show the first claim (a), with the experiments showing that transformers initialized with rank-1 weights stay rank 1, but I think this can benefit from some additional evidence. For example, it would be great to show that the learning dynamics of the canonical parameterization are similar to those of a single-layer transformer (with softmax attention and no constant parameters) with a rank-one initialization. 2. Figure 6 might be trying to address the second claim (b), but I think it can be improved. Having $W_1$ and $W_2$ be constant implies that "our initialization" is more of a reparameterization than an initialization, making the comparison not feel apples-to-apples. 3. For Figures 5 and 7, while I agree that qualitatively the matrices look rank 1, some quantitative measure of rank (like the nuclear norm) would be good to graph (especially with error bars for multiple runs). 4. While it is mentioned that experiments were repeated 5 times, it might be nice to include some of these results in the paper; Figure 6 stood out as a place where it would be good and not too difficult to add error bars or additional runs. Additionally, details of how replications were done would be good to mention (such as how random seeds were chosen). The code as of now is a bit messy, and could do with some reorganizing and removing unnecessary files (like untitled.ipynb). Technical Quality: 3 Clarity: 4 Questions for Authors: 1. How is Figure 6 testing the hypothesis that "for p + q > 1, any small initialization around zero would lead to a local minima" (line 247)? I feel it is more a comparison of a small initialization to your parameterization (which is still a worthy endeavor).
I'd imagine that testing the stated hypothesis would require trying many different small initializations and showing that they lead to local minima (given the nature of the problem, I imagine it could be feasible to test initializations in a grid and create something like an empirical analogue to corroborate Figure 1d?). Also, a natural follow-up question (or null hypothesis) is: can some standard initialization that isn't small and/or isn't around zero work just as well as your initialization? 2. Also, it took me a bit to understand the definition of the attention matrix (attn) (along with $W_Q$ and $W_K$, they seem to only be mentioned in Section 4.1 in the main body, though I may have missed something). Are they abstracted away earlier in the paper for simplicity's sake, or for some other reason? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The discussion of limitations in the conclusion section (which I mention specifically because of the response given in the checklist, lines 1200-1201) is a bit too brief. It would be good to have all of the assumptions made for the main theorems in one place, especially those implicit and not stated in the theorem statements, including the rank-one and related assumptions, linear attention, and gradient flow (which is in the theorem statements, but good to mention because transformers are not normally trained with gradient flow). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and suggestions to improve the paper and the code, which we are planning to incorporate in the revised version. We refer to the global response regarding our results across repeated trials for various $(p,q)$ and the measure of low-rankness. We address the individual questions below. - **Our insights carry over to the full general model**: While our theoretical analysis uses the canonical parameterization, our insights and guidelines extend to the general single-layer transformer with soft-max attention (Lines 60-64). For example, Figure 6 demonstrates the effectiveness of our initialization scheme for the full one-layer transformer model. Similarly, the low-rank structure of the parameters, shown in Figure 5 in the paper (Figures 2 and 6 in the rebuttal PDF), pertains to this general model. - **Canonical parameters exhibit the same dynamics as the full model**: We strongly believe that the training dynamics of the canonical model closely resemble those of a full single-layer transformer with rank-1 initialization. Due to time constraints, we currently lack empirical evidence for this. However, we will share this comparison during the discussion phase. - **$W_1$ and $W_2$ are not constants**: We are afraid that there has been a misunderstanding about the initialization in Figure 6. While $W_1$ and $W_2$ are both initialized as constants, they are trained alongside all other parameters $\theta$ in the single-layer transformer with non-linear soft-max attention (Lines 60-64). Thus, Figure 6 of the paper (Figures 1, 3 and 4 in the rebuttal PDF) effectively demonstrates the success of our initialization scheme and the broad applicability of our insights. - **Q.1**: In Figure 6 in the paper, our initialization scheme is to set $e_0=1$ and $w_0 = -1$, which falls in the region $\mathcal{I}_\ast$ (see Figure 1d) and converges to the global minimum as predicted by our theory.
To test our hypothesis on a wider range of initializations, we try the following set of initializations, which concur with our theoretical predictions (Figure 3 of the rebuttal): we let $(e_0,w_0) \sim \mathcal{N}(\mu, 0.02)$ with $\mu \in \{ (-1,-1), (0, -0.5), (0,0.5), (0, 0) \}$, with $p=0.5$ and $q=0.8$. As highlighted in Figure 3 of the rebuttal, we observe that the initializations around $(-1,-1)$ lead to global minimum convergence, whereas the rest converge to local minima, as predicted by Figure 1d in the paper. - **Q.2**: Yes indeed. We wanted to highlight the main ideas behind our gradient-flow analysis using the canonical parameterization, and hence chose to omit the attention parameters in the beginning for ease of exposition and clarity.

---

Rebuttal Comment 1.1: Title: Question Comment: I was wondering (after seeing Figure 6 in the rebuttal PDF) if there's any chance your initialization scheme helps in the multi-state Markov chain setting (I know that this might be beyond the scope of this work, but I am curious)? I also wanted to ask if you plan for the discussion of limitations in the conclusion to be any different in the camera-ready version than in the current version?

---

Reply to Comment 1.1.1: Title: Limitation section and new figures Comment: Dear reviewer, - **Canonical vs. low-rank**: Thanks for your prompt response and thought-provoking questions. In our earlier response, due to time constraints, we couldn't attach the figure showcasing the similar training profiles of the canonical and low-rank models, but we now have the corresponding empirical results and the figure that illustrates this phenomenon. We are currently checking with the ACs about how best to share these results, as the official rules forbid us from sharing links. We'll keep you updated. - **Multi-state**: In this setting as well, there would exist analogous good and bad initialization regions, as highlighted for the binary case in Figs.
1 and 4 in our paper. While empirically we can verify whether our scheme still works here as is, we would have to rederive and precisely determine the corresponding basins of attraction for the local and global minima in the multi-state setting separately. The Markovian switching condition $p+q>1$ would also have to be correspondingly modified to accommodate more states. We already see a glimpse of this in Figure 5 of the rebuttal for the five-state Markov chain: $p=0.6$ still converges to the global minimum, unlike the binary-state setting where $p+q = 0.6 + 0.6 > 1$ would have led to local minimum convergence. - **Limitations and assumptions**: Indeed, in our revised version we will add two separate subsections: the first outlining all the corresponding assumptions and the setting for our results, and the second discussing the limitations of our results and potential future directions to address them. Once again, we would like to thank the reviewer for their constructive efforts to improve the paper, which we really appreciate. We are wondering if they would be willing to change their score if their concerns are addressed. We remain at your disposal for any further questions.

---

Rebuttal 2: Title: Discussion period ending soon Comment: Dear Reviewer Fte8, We sincerely appreciate the time you have taken to provide valuable feedback for our work. As we are getting closer to the end of the discussion period, could you let us know if our responses above have adequately addressed your concerns? We remain at your disposal for any further questions. If you agree that our responses to your reviews have addressed the concerns you listed, we kindly ask that you consider whether raising your score would more accurately reflect your updated evaluation of our paper. Thank you again for your time and thoughtful comments! Sincerely, The Authors

---

Rebuttal Comment 2.1: Comment: Sorry for the delay, I have updated my score.
Summary: The paper investigates how transformers learn first-order Markov chains, focusing on the role of parameter initialization and providing a comprehensive analysis of the learning dynamics. It proves that transformer parameters can converge to global or local minima depending on initialization and data properties, and supports these theoretical findings with empirical evidence. Strengths: 1. This paper studies the training dynamics of transformers in learning Markov chains and the effects of parameter initialization, a meaningful direction that has not been fully explored. 2. The paper is well-structured and well-written, offering clear explanations and visualizations that enhance comprehension. 3. The empirical studies are well-designed, and the results convincingly support the theoretical findings. Weaknesses: 1. The paper examines first-order Markov chains with a rank-1 input sequence and considers single-layer linear attention. This general setting might be overly simplified. 2. Can the initialization guidelines provided be extended to more complex transformer models, such as those with additional layers? 3. While the empirical results support the findings, it is not intuitively obvious why the optimal parameters exhibit low rank. Could you offer a more detailed explanation? 4. A minor suggestion: the formulation $\texttt{attn}_{n,i} = \ldots$ should be included in the main body of the paper rather than just in the appendix, as attention is a crucial component of the transformer architecture. Technical Quality: 4 Clarity: 4 Questions for Authors: Please see the weaknesses. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and insightful comments. We will update the revised version with information about the attention mechanism in the main text. We address the individual questions below. - **Rank-1 input sequence:** Please note that the input is a first-order Markov chain, without the assumption of it being rank-one. We use linear attention for ease of theoretical analysis, but our guidelines and insights generalize to non-linear soft-max attention. For instance, Figure 6 in the paper demonstrates the effectiveness of our initialization scheme for the full one-layer transformer model (Lines 60-64). Similarly, the low-rank structure of the parameters, as shown in Figure 5 in the paper (Figures 2 and 6 in the rebuttal PDF), applies to the full general model. - **Initialization guidelines**: As our analysis shows, understanding the theoretical foundations of transformer training and initialization, even for shallow models, is challenging. Our paper, focusing on shallow transformers, is a crucial step toward optimizing deeper models. Recent research [1] employs a similar reparameterization technique to study gradient descent dynamics in a simplified two-layer attention-only transformer, elucidating the induction head mechanism. However, unlike our work, they do not explore how initialization impacts convergence to local or global minima. Given the nascent and evolving understanding of deeper transformers and higher-order Markov chains [1,2], exploring similar initialization guidelines for large-scale models is an exciting avenue for future research. - **Low-rank parameters**: Recent evidence [3,4] suggests that SGD with weight decay inherently leads to rank minimization, resulting in low-rank parameters at convergence. Although these studies focus on feed-forward neural networks, we believe a similar phenomenon occurs in transformers as well.
**References** - [1] How Transformers Learn Causal Structure with Gradient Descent, https://arxiv.org/pdf/2402.14735 - [2] Transformers on Markov Data: Constant Depth Suffices, https://arxiv.org/abs/2407.17686v1 - [3] Characterizing the Implicit Bias of Regularized SGD in Rank Minimization, https://arxiv.org/pdf/2206.05794 - [4] Rank Diminishing in Deep Neural Networks, https://openreview.net/forum?id=tIqzLFf3kk

---

Rebuttal Comment 1.1: Title: Thank you for the rebuttal. Comment: Thank you for the reply. Some of my concerns have been addressed. Hence, I will keep my original score.
Summary: This paper investigates the learning dynamics of transformer models, specifically focusing on first-order Markov chains and single-layer transformers. The authors aim to understand how transformers learn Markov chains and the impact of initialization on their training outcomes. They provide a comprehensive theoretical analysis, demonstrating that transformer parameters trained on the next-token prediction loss can converge to either global or local minima, depending on the initialization and properties of the Markovian data. The paper also offers empirical evidence to support the theoretical findings and proposes guidelines for initializing transformer parameters effectively. Strengths: - The paper provides a detailed theoretical framework that explains the conditions under which transformer parameters converge to global or local minima, filling a gap in the understanding of transformer learning dynamics. - The theoretical findings are supported by empirical experiments, which strengthens the credibility and applicability of the results. - The authors highlight the critical role of initialization in the training process and offer practical guidelines for initializing transformer parameters, which can be valuable for practitioners looking to optimize transformer training. Weaknesses: - One important weakness of this study is its limited scope. The study focuses on single-layer transformers and first-order Markov chains, which may limit the generalizability of the findings to more complex models and data structures. - While thorough, the theoretical analysis may be difficult for practitioners without a strong mathematical background to fully grasp and apply. Perhaps the presentation could be made more intuitive so as to also reach an audience that does not have a strong mathematical background. - The empirical validation could benefit from a broader range of experiments, including different types of datasets and more complex transformer architectures.
Technical Quality: 3 Clarity: 2 Questions for Authors: The paper does not address the scalability of the proposed initialization guidelines for large-scale transformer models, which are commonly used in practice. Can the authors provide insight on this? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: As I mentioned in the weaknesses section, the most important limitation of this study is its generalizability. The study focuses on single-layer transformers and first-order Markov chains, and it is unknown whether the method generalizes to more complex models and data structures. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the constructive feedback and helpful suggestions to improve the paper. We will add a separate section with prerequisite background details in the revised version. We refer to the common response for experiments on multi-state Markov chains. We address the individual concerns below. **Scalability**: Understanding the theoretical foundations of transformer training and initialization, even for shallow models, remains a challenging task, which our work aims to address. While this paper focuses on shallow transformers, it represents a crucial step toward understanding optimization in deeper models. For instance, recent research [1] employs a similar reparameterization technique to study gradient descent dynamics in a simplified two-layer attention-only transformer, elucidating how they learn the induction head mechanism. However, unlike our work, they do not investigate how initialization impacts convergence to local or global minima. Given the nascent and evolving understanding of deeper transformers and higher-order Markov chains [1,2], exploring similar initialization guidelines for large-scale models is an exciting avenue for future research.

**References** - [1] How Transformers Learn Causal Structure with Gradient Descent, https://arxiv.org/pdf/2402.14735 - [2] Transformers on Markov Data: Constant Depth Suffices, https://arxiv.org/abs/2407.17686v1

---

Rebuttal Comment 1.1: Title: Discussion period ending soon: call for action Comment: Dear Reviewer DieV, We sincerely appreciate the time you have taken to provide valuable feedback for our work. As we are getting closer to the end of the discussion period, could you let us know if our responses above have adequately addressed your concerns? We remain at your disposal for any further questions.
Sincerely, The Authors

---

Rebuttal 2: Title: Generality of the insights Comment: Dear Reviewer DieV, Thanks for the insightful questions and for keeping the score. We are afraid that there has been a misunderstanding about the generality of the insights, which we addressed in the global response and also individually to Reviewer er5c just now. We are reposting them here for the sake of completeness. **Generality of the insights:** For all the experiments in the paper, we use the full general 1-layer transformer with non-linear soft-max attention, as described in lines 63-64 of the paper. For the theoretical analysis, we use the linear attention mechanism but with the rest of the architecture intact. Using the insights derived from the linear setting, we demonstrate that our initialization scheme performs significantly better than standard Gaussian initialization even on the full soft-max model. Since the same soft-max attention is used for both schemes empirically, the superiority of our scheme is not due to linear vs. soft-max but rather due to capitalizing on the actual loss landscape, as illustrated in Figure 4 in the paper and theoretically characterized in Theorem 9 of Appendix G. Hence we respectfully disagree with Reviewer er5c's claim that "...our initialization scheme in linear attention prevents the softmax transformers from converging to local minima...". **Attention features are indeed learnt:** Note that with the canonical parameterization, the attention score is given by $\texttt{attn}_{n,i} = a \cdot (x_n - \frac{1}{2}) (x_i - \frac{1}{2})$ (Eq. 45 in the Appendix). Since $a$ is a trainable parameter, the attention scores are also learned. **We do not limit low-rank parameters:** Please note that the low-rank parameters in the attention layer are not artificially restricted to be the same as $\alpha$. The reason why they are also parameterized as $\alpha$ is that empirical evidence suggests that the transformer parameters always converge to this specific structure.
Furthermore, this $\alpha$ turns out to be an all-$\pm 1$ vector. Hence, even after parameterizing $\alpha$ as a trainable vector in our setting, it turns out that the final loss function $L$ (in line 203) is independent of $\alpha$ (since $\|\alpha\|$ is constant, cf. Appendix B) and only depends on the three scalar parameters $(e,w,a)$. Thus, without loss of generality, it suffices to analyze these scalars.
Rebuttal 1: Rebuttal: ## **Generality of the insights: Our insights and conclusions hold for all pairs of $(p,q)$, across repeated trials, and for non-linear soft-max attention** We thank the reviewers for the constructive feedback. We address here the common concerns regarding the error bars across repeated trials, experimental results for various $p$ and $q$, a metric for the low-rankness, and linear vs. soft-max attention. ### **Error bars** In the paper, we reported results averaged across three to five trials but omitted the error bars due to an oversight. The rebuttal PDF includes all figures with error bars, confirming that our conclusions remain valid. For instance, Figures 1, 3 and 4 in the rebuttal show that our initialization scheme consistently converges to a global minimum across multiple trials, while standard Gaussian initialization gets stuck at the local minima. ### **Experimental results for various $(p,q)$** We would like to emphasize that our theoretical results hold for all values of $(p,q)$, with the regions $p+q < 1$ and $p+q > 1$ exhibiting fundamentally different behaviors, as highlighted in the respective Theorems 7 and 2, and Figure 1. For the empirical results in the paper, especially for Figure 6, we chose a single pair $(p,q) = (0.5, 0.8)$ in the region $p+q>1$ for ease of exposition only. Complementing this result, we further empirically demonstrate in Figure 4 of the rebuttal that our initialization scheme consistently outperforms the standard Gaussian one for various pairs $(p,q) \in \{ (0.6, 0.6), (0.7, 0.9) \}$, all satisfying $p+q>1$. ### **Low-rank metric** Indeed, complementing our visual illustration of the low-rank matrices in the paper, we can also mathematically quantify this property via the following metric: we compute the relative energy contained in the top singular value compared to all singular values, i.e. $\frac{\sigma_1^2}{\sum_{i} \sigma_i^2}$.
The closer this fraction is to one, the closer the corresponding matrix is to being rank-one. Tracking this metric during training for various matrices, Figure 2 in the rebuttal highlights that while training from random initialization eventually reaches rank-one parameters, initializing at rank-one always retains that structure during training. This figure corresponds to the same setting as Figure 5 in the manuscript. ### **Linear vs. soft-max attention, canonical parameters vs. full transformer** We are afraid that there has been a misunderstanding about the setting in the paper regarding linear and soft-max attention. For the two-state Markov chain, while our theoretical analysis capitalizes on the canonical parameters and linear attention, our guidelines and insights from this setting readily generalize to the full single-layer transformer with non-linear soft-max attention (Lines 60-64). For instance, Figure 6 in the paper (Figure 1 in the rebuttal) demonstrates the effectiveness of our initialization scheme for the full one-layer transformer model with soft-max attention across various values of $(p,q)$ and repeated trials. Similarly, the low-rank structure of the parameters, as shown in Figures 2 and 6 of the rebuttal (Figure 5 in the paper), pertains to the full general model. Pdf: /pdf/8fe4f08d80ae10f45f0b19d552f282019dd3f818.pdf
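The rank-one energy metric described in the global rebuttal, $\sigma_1^2 / \sum_i \sigma_i^2$, is a few lines of NumPy. A small self-contained sketch (the function name is ours, not the authors') of how one might compute and sanity-check it:

```python
import numpy as np

def rank_one_energy(W):
    """Relative energy in the top singular value, sigma_1^2 / sum_i sigma_i^2.
    The closer this is to one, the closer W is to a rank-one matrix."""
    s = np.linalg.svd(W, compute_uv=False)
    return s[0] ** 2 / np.sum(s ** 2)

rng = np.random.default_rng(0)
u, v = rng.standard_normal(8), rng.standard_normal(8)
W_low = np.outer(u, v)                             # exactly rank one
W_noisy = W_low + 0.05 * rng.standard_normal((8, 8))

print(rank_one_energy(W_low))    # 1.0 up to floating-point error
print(rank_one_energy(W_noisy))  # close to, but below, 1
```

In an actual training run one would evaluate this metric on the attention and feed-forward weight matrices at each logged checkpoint, tracing the curve described in Figure 2 of the rebuttal.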
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper theoretically studies the effect of initialization on the gradient dynamics of a single-layer transformer trained on Markov data. It considers a simplified model by: (1) using a binary input alphabet and (2) reducing the many parameters of the single-layer transformer to just two or three scalar parameters. The authors identify the critical points of this reduced set of parameters and compare them to the stable point under its gradient flow dynamics. This is used to determine whether an initialization will converge to global or local optima. This informs choices of initialization that will reliably converge to global optima. This is validated in a single-layer transformer where the reduction of weights is not performed. This experiment is also used to validate the reductions made for theoretical analysis. Strengths: This paper studies the dynamics of a simplified transformer model under next-token prediction loss. This extends previous work, which just studied the static landscape of the loss. In its study of dynamics, this paper also newly considers questions about initialization and its role in convergence toward global minima. The paper is written clearly and generally supports its claims, highlighting the questions asked and key contributions. Weaknesses: It is not clear how much results vary across repeated trials. The authors state that they repeated experiments 3 to 5 times, but the results across replications are not presented in the figures or the text. For instance, in Figure 6, did every repeated experiment follow the same blue and red curves? Similarly, while the authors show the weights from a single run on Figure 5, they do not provide a numerical assessment of the validity of their low-rank approximations. Another weakness is the simplified data model, which is just a binary alphabet. 
The authors seem to inherit this from previous analytical work, but a discussion of how this restricts results and real-world applicability is missing. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. How much do results vary across replications? The authors could easily answer this by showing different runs in the same figure or stating measures of uncertainty. If possible, the authors could also consider more values of p and q. 2. How reliable is the low-rank approximation? It would be beneficial to quantify the low-rank structure of $W_1$ and $W_V$ so that it is easy to understand how consistent this approximation is in general. 3. Do similar empirical results hold with larger alphabets? More specifically, do similar low-rank approximations still hold, and do these initializations still work better? 4. The authors remark that they observe $a \approx 0$, so the attention term is often not relevant. Is this a consequence of the simplified data model? It would be nice to comment more on this observation. 5. Figures 2 and 3 should be subfigures. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors are fair in the scope of their claims, though it would be good to emphasize the use of reduction/reparameterization earlier in the paper or abstract (this is not made clear until the end of page 3, and this reduces the theoretical scope of their claims). Alternatively, the authors could provide stronger evidence to justify the universality of these approximations (precise quantification of low-rank structure, and more replications). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the helpful feedback and insightful comments. We refer to the global response regarding our results across repeated trials for various $(p,q)$ and the measure of low-rankness. We address the individual concerns below. - **Binary and large alphabet**: This paper primarily focuses on the binary alphabet to analytically characterize the phenomena reported in [1], specific to the binary setting. However, we observe a similar low-rank structure for transformer parameters in a first-order Markov chain on a larger alphabet (see Figure 6 in the rebuttal PDF). Consistent with the binary setting, parameters initialized as low-rank remain low-rank throughout training, as shown in Figure 6. Theoretically, while our gradient-flow analysis can be generalized to the multi-alphabet setting, it is challenging to precisely characterize the phase portraits, as in Figure 2 of the paper, due to the increased number of parameters. This presents an intriguing avenue for future research. - **$ a \approx 0 $**: This results from the fact that for first-order Markov chains, the Markov kernel $ \mathbb{P}(x\_{n+1} = 1 \mid x_1^n) = x_n (1-p-q) + p $ is a simple linear function of the last symbol $x_n$. Consequently, this function can be represented by a single-layer transformer using only the skip connection in the attention layer, without relying on the attention mechanism. Therefore, $ a \approx 0 $ becomes a viable solution, which we leverage for the gradient-flow analysis without attention. Figure 7 in the rebuttal empirically corroborates this fact. As a side note, there are also non-zero attention coefficients that represent global minima, as shown in Figure 4 and discussed in Section 4.1 of the paper.
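The linearity of the kernel stated in the rebuttal can be checked directly by simulation. The following is a minimal sketch (the parameter values p = 0.2, q = 0.3 are illustrative, not from the paper) confirming that $\mathbb{P}(x_{n+1}=1 \mid x_n) = x_n(1-p-q)+p$ gives $p$ when $x_n = 0$ and $1-q$ when $x_n = 1$:

```python
import random

def simulate_chain(p, q, n, seed=0):
    """Binary first-order Markov chain with flip probabilities
    p (0 -> 1) and q (1 -> 0)."""
    rng = random.Random(seed)
    x = [0]
    for _ in range(n - 1):
        prob_one = x[-1] * (1 - p - q) + p  # the linear kernel from the rebuttal
        x.append(1 if rng.random() < prob_one else 0)
    return x

def empirical_kernel(x):
    """Empirical P(x_{n+1} = 1 | x_n = s) for s in {0, 1}."""
    counts = {0: [0, 0], 1: [0, 0]}  # s -> [visits, transitions to 1]
    for a, b in zip(x, x[1:]):
        counts[a][0] += 1
        counts[a][1] += b
    return {s: ones / total for s, (total, ones) in counts.items()}

p, q = 0.2, 0.3
chain = simulate_chain(p, q, 200_000)
k = empirical_kernel(chain)
# The linear form predicts k[0] close to p and k[1] close to 1 - q
```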
---------------------------------------------------------------- **References:** [1] Attention with Markov: A Framework for Principled Analysis of Transformers via Markov Chains: https://arxiv.org/abs/2402.04161 --- Rebuttal 2: Comment: Dear Authors, Thank you for your responses to the other reviewers and myself, and for responding to my questions. **Low-rank structure**: Thank you for simulating Markov chains with more data. Looking at Figure 6 (despite exceeding the page limit), it appears to be the case that the rank of $W_1$ and $W_V$ is |S| - 1, based on the binary and 5-state examples given (I assume Figure 6 used the same alphabet size as Figure 5, but this does not appear to have been explicitly stated). This further seems to imply that for more plausible settings, where alphabets are very large, we should initialize the weight matrices to be full rank. While I am not as concerned as other reviewers about the analytical simplification of assuming parameters to be low-rank at initialization for the sake of tractability, I find it hard to see how the implications, as presented in section 5, are very impactful. Nevertheless, I appreciate that the paper is technically strong and precise. If the assumptions and limitations are made clearer throughout the text (which the authors have promised to do), I believe this paper could be a nice, small contribution towards analyzing transformers, but I cannot presently strongly recommend acceptance. So, I maintain my score.
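The reviewer's request for a precise quantification of low-rank structure (rank |S| - 1 for $W_1$ and $W_V$) can be answered with a singular-value cutoff. A minimal sketch (matrix sizes, the tolerance, and the construction of `W` are made up for illustration):

```python
import numpy as np

def numerical_rank(W, rel_tol=1e-6):
    """Rank of W counted as singular values above rel_tol * largest."""
    s = np.linalg.svd(W, compute_uv=False)
    return int(np.sum(s > rel_tol * s[0]))

# Illustrative check on a matrix built from |S| - 1 rank-one terms
rng = np.random.default_rng(0)
S, d = 5, 32                 # alphabet size, embedding width (made up)
U = rng.normal(size=(d, S - 1))
V = rng.normal(size=(d, S - 1))
W = U @ V.T                  # rank |S| - 1 = 4 by construction
print(numerical_rank(W))     # prints 4
```

Reporting this number for the trained $W_1$ and $W_V$ across replications would make the claimed low-rank structure easy to verify.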
MonkeySee: Space-time-resolved reconstructions of natural images from macaque multi-unit activity
Accept (poster)
Summary: In this paper, the authors record multi-unit activity (MUA) from macaque ventral stream (V1, V4, and IT) and train a CNN decoder to reconstruct the visual stimuli. The authors present three decoding variants: a baseline CNN decoder that maps MUA directly to image space, and two U-Net based decoders that differ in that the first only uses spatial information (averaging over the time dimension), while the latter is time-resolved (spatiotemporal). The authors find that the spatiotemporal decoder yields the highest performance as evaluated via feature correlation, and present occlusion analyses (either masking brain regions or time windows). Strengths: The paper tackles an established field, stimulus reconstruction, using electrophysiological neural signals (MUA) as opposed to the more common fMRI techniques. This grants superior time resolution, which offers ways for novel experiments. Indeed, the authors showed interesting effects of decoding from different time windows in different brain regions. Furthermore, the paper is in general well structured and presented. Weaknesses: - The paper presents some inconsistencies between text and figures. * In line 267 it is said that the spatiotemporal model achieves the highest performance; however, Table 1 reports the End-to-End model as the top scorer. * In line 288 the authors link to Figure 9 while the correct one should be Figure 3. However, the description of Figure 3 is still inconsistent, as in line 290 the authors say that the last column reports the full-data reconstruction while Figure 3 has it in the first column. * In Figure 8 the meaning of the x-axis is unclear. Moreover, the figure caption states that "Deeply colored points indicate higher correlations [...]"; however, it is unclear how this is assessed since, for example, the V4 row has higher correlation with fc7 and fc8 than IT (which is instead marked). Similar remarks hold for Figure 9, which has an incomplete caption (a trailing "The"). - Hyperparameters for the decoder loss (sec.
3.6.2) are missing. - The authors introduce the "Learned Receptive Field" (LRF) layer, claiming it "enhances understanding of (model) structure and interpretive capacity"; however, no discussion about it is presented. The only related result is Figure 7, which is never mentioned in the main text and presents somewhat puzzling results (see the Questions section of this review). - In the occlusion analyses, the authors perform model *inferences* on truncated data; however, a model *trained* on the same occluded data might compensate for the measured decrease in performance. - The paper lacks comparisons with existing decoding techniques, albeit originally introduced for other - but structurally similar - modalities such as EEG signals (see Y. Bai et al., DreamDiffusion (2023)). Technical Quality: 2 Clarity: 3 Questions for Authors: - Figure 7 (not discussed in the main text) reports the learned 2D receptive fields. Most learned RFs seem to have a smaller-than-one-pixel size; does this imply that no information from that electrode is used at all? - Why do the authors refer to their decoding technique as "homeomorphic"? - Would a model trained on occluded data still perform as poorly as the results presented in the paper? - Could the difference in measured performance between the baseline and homeomorphic model be simply attributed to the fact that the full model has higher complexity (more parameters), as the baseline decoder only leverages *half* of the U-Net architecture? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The authors have adequately addressed the limitations of their study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
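The spatial-occlusion procedure the review questions (setting a brain region's signals to baseline before inference) can be made concrete. A minimal sketch, assuming an electrodes x time response array and a per-electrode baseline level; the array layout and region labels are illustrative, not the paper's actual data format:

```python
import numpy as np

def occlude_regions(responses, region_of, keep, baseline):
    """Copy of `responses` (electrodes x time) in which every electrode
    whose region is not in `keep` is replaced by its baseline value."""
    out = responses.copy()
    for e, region in enumerate(region_of):
        if region not in keep:
            out[e, :] = baseline[e]
    return out

# Toy layout: 6 electrodes split across V1/V4/IT, 4 time bins
rng = np.random.default_rng(1)
responses = rng.normal(size=(6, 4))
region_of = ["V1", "V1", "V4", "V4", "IT", "IT"]
baseline = responses.mean(axis=1)  # per-electrode resting level (an assumption)
occluded = occlude_regions(responses, region_of, keep={"V1"}, baseline=baseline)
```

Feeding `occluded` instead of `responses` to a decoder trained on full data is exactly the inference-only setting the reviewer contrasts with retraining on occluded data.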
Rebuttal 1: Rebuttal: Thank you for your detailed review and constructive feedback. We appreciate the opportunity to clarify and address your concerns. **Addressing Mentioned Weaknesses:** **Text and Figures:** - **Line 267 and Table 1**: We apologize for the oversight. There was a mistake in the header of Table 1. The correct table should have the "End-to-End" and "Spatiotemporal" headers switched. The rest of the text is correct in stating that the spatiotemporal model achieved the highest performance overall, consistent with Figures 8 and 9. We will fix the table header. - **Figure references**: Line 288 should refer to Figure 3. We will correct the description to accurately reflect the columns. Thank you for pointing this out. - **Figures 8 and 9**: In Figure 8, the x-axis represents relative correlation, showing the percentage of correlation when all correlations add up to 1 per brain region. This ensures a fair comparison by normalizing values per brain region. The color coding represents the highest relative correlation, not the highest absolute correlation, also depicted by larger partial bars. We will clarify the x-axis meaning and provide a detailed explanation of the color coding and correlations. The caption for Figure 9 will be adjusted as well. - **Hyperparameters for decoder loss**: The hyperparameters for the training are mentioned in section 3.6.1: “We used the ADAM optimizer with a learning rate of 0.002 and beta coefficients of 0.5 and 0.999 to enhance convergence. The loss function included a discriminator loss (α_discr) at 0.01, VGG feature loss (β_vgg) at 0.9, and L1 pixel-wise loss (β_pix) at 0.09 to balance error sensitivity.” - **Learned Receptive Field (LRF) layer**: We thank the reviewer for this point. Figure 7 demonstrates that the LRF layer adapts during end-to-end reconstruction, showing smaller receptive fields in the center and larger ones in the periphery, indicating that information is used differently based on spatial location. 
This suggests that the LRF learns a mechanism similar to the brain while reconstructing end-to-end. We will discuss these results in the main text with more emphasis. - **Occlusion analyses**: Our goal with the occlusion analyses was to highlight the spatial and temporal features carried by neurons, not necessarily to compare reconstruction quality. We agree that training a model on occluded data could yield different results and will address this in the discussion. We did train the model on occluded data (see the second figure in the rebuttal). - **Comparisons with existing decoding techniques**: We currently compare our model against two baselines: the Brain2Pix model from Le et al. (2021) and an improved version of Shen et al. (2019). These models represent the state-of-the-art in the field. We will emphasize this in the manuscript and cite the mentioned paper (Y. Bai et al., DreamDiffusion, 2023) as well as any other relevant ones in the revised manuscript. **Answer to Questions:** - **Figure 7**: Many learned receptive fields (RFs) are indeed smaller than one pixel. However, this does not mean no information from that electrode is used. If an RF estimate is less than a pixel, the model uses a single pixel for that RF, still containing necessary information. This results from the limited field of view of the model (96 by 96 pixels) being unable to accurately assign sufficient (integer number of) pixels to electrodes with relatively smaller receptive fields (e.g., those with visual angles less than one degree of arc). - **Homeomorphic decoding**: We referred to our technique as "homeomorphic" due to its topography-preserving nature, formulating the neuron-to-pixel mapping problem as an image-to-image translation problem after a (learnable) retinotopic projection. The model preserves the inherent topography of data throughout all the network layers, focusing on processing local features. 
- **Occluded data**: While training the model on occluded data might yield different results, our objective was to demonstrate feature dependencies of a model trained on full brain data, not to evaluate absolute model performance on occluded data. We also trained separate reconstruction models on V1, V4, and IT regions, in addition to the combined model, which will be included in the manuscript. - **Model complexity**: The difference in performances between the baseline and homeomorphic model can be attributed to the higher complexity of the full model. Our baseline model, based on Shen et al. (2019), uses only half of the U-NET architecture, while our enhanced generator includes a full U-NET with more parameters. While increasing the baseline's complexity might deviate from strict comparisons with existing models, it illustrates how increased model complexity affects results. We will include an additional run with a full U-NET in the revised manuscript, which showed improved performance compared to Shen et al. but still fell short of the full homeomorphic model’s performance. Thank you again for your valuable feedback. We will revise the manuscript to address these points, improve clarity, and include the new analyses in the manuscript and the rebuttal images. **References:** - Guohua Shen, Kshitij Dwivedi, Kei Majima, Tomoyasu Horikawa, and Yukiyasu Kamitani. End-to-end deep image reconstruction from human brain activity. *Frontiers in Computational Neuroscience*, 13:432276, 2019. - Le, L., et al. (2022). Brain2pix: Video frame reconstruction from brain activity. *Frontiers in Neuroscience*, 16, 940972. - Y. Bai et al., DreamDiffusion (2023). --- Rebuttal Comment 1.1: Comment: I thank the authors for providing additional material in their rebuttal in such short time and for carefully addressing my specific concerns with a detailed reply. 
I believe their work was strengthened by the new material and most importantly the new result that even a complexity-matched model "still fell short of the full homeomorphic model's performance". I will increase my review score to reflect this change. --- Rebuttal 2: Comment: Thank you very much. We greatly appreciate it.
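The loss weights quoted in the rebuttal (α_discr = 0.01, β_vgg = 0.9, β_pix = 0.09) can be illustrated with a schematic generator objective. The individual loss terms below are stand-ins for readability, not the paper's exact definitions:

```python
import numpy as np

# Weights quoted in the rebuttal (from section 3.6.1 of the paper)
ALPHA_DISCR, BETA_VGG, BETA_PIX = 0.01, 0.9, 0.09

def l1_pixel_loss(fake, real):
    return np.abs(fake - real).mean()

def vgg_feature_loss(fake_feats, real_feats):
    # Stand-in perceptual term: mean squared feature distance per layer
    return float(np.mean([np.mean((f - r) ** 2)
                          for f, r in zip(fake_feats, real_feats)]))

def generator_loss(d_fake_score, fake, real, fake_feats, real_feats):
    """Weighted sum of adversarial, perceptual, and pixel terms."""
    adv = -np.log(d_fake_score + 1e-8)  # non-saturating GAN term (stand-in)
    return (ALPHA_DISCR * adv
            + BETA_VGG * vgg_feature_loss(fake_feats, real_feats)
            + BETA_PIX * l1_pixel_loss(fake, real))

# Toy check: a perfect reconstruction that fools the discriminator has ~zero loss
real = np.ones((8, 8))
feats = [np.zeros((2, 2))]
loss = generator_loss(1.0, real.copy(), real, feats, feats)
```

With most of the weight on the VGG feature term, reconstructions are pushed toward perceptual rather than pixel-exact similarity, which matches the feature-correlation metrics used for evaluation.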
Summary: SETTING: Decoding (reconstructing) observed images via Utah-array recordings made from the visual cortex of a macaque. APPROACH: A GAN, but taking (transformed--see below) neural data as input, and with some additional losses. In particular, in addition to the standard adversarial loss, the generator/decoder is aligned with VGG-19 at various layers, and its output is penalized with a reconstruction penalty (mean absolute error). Raw neural data are not passed into the generator/decoder (except in the baseline model); instead, multi-unit activity is first mapped to a retinotopic/pixel space, either with (learned) isotropic Gaussian receptive fields, or by mapping to features of a pre-trained Inception network. RESULTS: The data and model evidently support high-quality reconstructions (Fig. 1). The authors believe the decoder with a spatiotemporal inverse retinotopic map works best. They also perform qualitative experiments to determine which cortical areas (V1, V4, IT) and time windows contribute what to image decoding. Strengths: The image reconstruction is, to my knowledge, state of the art. The authors also convincingly demonstrate the importance of the retinotopic mapping. Finally, the experimental setup (15 Utah arrays in one animal!) is heroic. Weaknesses: I put much of the details of these weaknesses into the questions (below). It's not clear what we learn from the results following Figure 1. Part (but not all--see the questions) of the problem may be due to the presentation. The model itself is also not presented clearly: The model figures (Figs. 5 and 6 in the Appendix--not referenced in the main text) do not cover the first type of inverse retinotopic map, and introduce yet more symbols not appearing in the text. The equations in the text appear incomplete (see questions). Technical Quality: 3 Clarity: 2 Questions for Authors: > The number of arrays is certainly much larger than what is used by almost any research group. 
It would be very helpful to see how results scale with number of neurons (really, electrodes). Such a figure would also perhaps help make sense of Fig. 2: how much of the degradation in image reconstruction is due to using fewer neurons, and how much due to using neurons only from this or that area? (If there isn't time to run this experiment, perhaps the authors have some other way of answering this question.) > Along these lines: The authors somehow only get ~600 neurons from these fifteen arrays, apparently because they reject neurons with "intraclass correlations" less than 0.4. What are these ICCs? (What are the classes?) Does this really improve performance? How did the authors come up with the threshold of 0.4? > Generically, I understand why one would want to map from electrode space into retinotopic/pixel space (basically so that the model can be a UNet from there on out.) But I don't understand the explanation of the "inverse retinotopic mapping" the authors give in Section 3. The authors give formulae for computing the retinal embedding E and say that the parameters of these functions are learned. But where is E subsequently used? For the "pre-trained mapping," a cost function is given, but E doesn't appear in it. Instead, it contains yet more weights, in this case mapping from a pretrained Inception network to neural responses. (In section 3.3.1, the letter W or w is used with seven different subscripts, some of them seemingly referring to the same weights and others not.) In the appendix, there is a figure (not referenced in the main text!) that seemingly corresponds to the end-to-end inverse retinotopic mapping. But E doesn't appear in this figure either. The caption has a broken reference to an equation. Can the authors clarify this part of the model?
> The authors state in the text that the spatiotemporal model achieves the highest correlations with AlexNet features, but Table 1 shows it to achieve the lowest (excluding the baseline; the end-to-end model achieves the highest correlation). Is the text or the table in error? They also claim that the spatiotemporal model qualitatively outperforms the other models (Fig. 1), but that claim is not obvious to this reader's eye. (The horse and the butterfly, e.g., look worse; many others are a push.) > Can the authors confirm that the 100 images used for testing were never used during training? > What conclusions do the authors draw from Fig. 3? In any case, the results would seem to depend strongly on the values assumed for synaptic delays: This determines what time window in IT corresponds to what time window in V1, etc. (otherwise the windowing scheme is essentially random). Can the authors justify high confidence in this synaptic-delay time? MINOR: > Section 4.1.2 examines the effect on decoding of setting certain areas of visual cortex to their baseline values. Why is this called "spatial occlusion"? Also, the column headings are described as identifying which brain region has been excluded, rather than included, but that obviously can't apply to the second column ("V1 + V4 + IT"). I think the text is wrong here. (But if not, perhaps the authors could reverse the labels, letting all column headings state which areas are *in*cluded.) > Figs. 8 and 9: I don't understand the lightness/darkness color scheme, which doesn't seem to correspond to the correlation values listed in the figure. Can the authors clarify this? > Why is the decoder called "homeomorphic"? > "...spatially downsampling data to 8 Hz..." What does this mean (and how could spatial sampling be denominated in Hertz?)? > "...x_e and y_e denote the spatial coordinates of electrode e...."
I think it would be less confusing to say the "retinotopic coordinates corresponding to electrode e," or something similar. > The reference to Fig. 9 (l. 288) should be to Fig. 3. > The supplementary figures should be referenced in the main text. (Figure 6, e.g., would be very useful to see while reading Section 3.4.) > Fig. 3: "...with the final column representing the full data reconstruction" -> "...with the *first* column....." Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: N/A. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough and thoughtful review. We appreciate your positive feedback on our contributions and the detailed suggestions for improvement. Your insights on our decoding approach are invaluable. We are glad to hear that our retinotopic mapping approach is recognized for its importance. We also appreciate your acknowledgment of our experimental setup. Below, we address your concerns and clarify the points raised. ### Model Presentation: We have fixed the figures to match the text and will clarify the presentation of our model and ensure that all relevant figures (Figs. 5 and 6) and equations are clearly referenced and explained in the main text. The inverse retinotopic mapping and its role in the model will be more clearly described. ### Answer to Questions: **Scaling Results with Number of Neurons:** We agree that analyzing how results scale with the number of neurons would provide further valuable insights. Therefore, we provide new comparative results of reconstructing the images only using data from electrodes that are located in specific visual ROIs. The results of this new experiment can be found in the Rebuttal Figures Document. Specifically, we show the reconstruction results from the regions V1, V4, and IT, separately. **Threshold Explanation:** We apologize for the incorrect use of "Intraclass Correlation Coefficient" in our manuscript. The measure we used was a reliability score based on self-correlation, which evaluates the consistency of neural responses across repetitions. The threshold of 0.4 was selected to balance retaining sufficient electrodes while removing unstable signals. Approximately 600 electrodes remained from the original 1024, partly due to this threshold and partly because one electrode was broken. **What is E:** The inverse retinotopic mapping is learned by the model in an end-to-end manner. 
The retinal embeddings (E) are not pre-calculated in this end-to-end model, whereas in the spatial and spatiotemporal model they are. We apologize for the confusion as we have used E interchangeably with RFSimages in the figure and agree that it is confusing. We will fix this and clarify it in the manuscript. **Appendix Figure:** This figure is a depiction of how the inverse receptive field layer in the end-to-end model computes the retinal embeddings (E) based on input brain responses. Apologies for the broken reference, we will fix this in the manuscript. We have removed this claim of qualitative outperformance. **Test Images:** Yes, definitely. We confirm that the 100 test images were never used during training. **Figure 3:** Figure 3 illustrates that the models are capable of detecting temporal variations in the information carried by neurons. This finding is promising and warrants further investigation in future research. By occluding the time windows that are not highlighted in Figure 3, we infer that the neurons at the highlighted time points carry the specific visual information indicated by our model. As time progresses, we observe changes in this information. We will clarify this in the text in the manuscript. **Synaptic Delay:** We base this assumption of synaptic delay on literature (Kravitz et al., 2013). **Text and Table Consistency:** We will correct the inconsistency between the text and Table 1 regarding the spatiotemporal model’s performance and ensure the qualitative claims are supported by clearer visual evidence. **Training and Testing Data:** We confirm that the 100 images used for testing were never used during training and will explicitly state this in the revised manuscript. ### Minor Issues: We have addressed and fixed the minor issues mentioned in the paper: - The term "spatial occlusion" was used to indicate the model's inference when specific locations of the input are set to baseline values (i.e., no signal or activity). 
The occlusion occurs at the regions that we are not interested in. Baseline values are used instead of zero to account for existing noise. - The column headings in Section 4.1.2 do indeed indicate which areas are included. We will add this to the figure description. - In Figures 8 and 9, the x-axis represents relative correlation, showing the percentage of correlation when all correlations add up to 1 per brain region. This ensures a fair comparison by normalizing values per brain region. The color coding represents the highest relative correlation, not the highest absolute correlation, also depicted by larger bars. We will clarify the x-axis meaning and provide a detailed explanation of the color coding and correlations. The caption for Figure 9 will be adjusted as well. We added this explanation in the main text as well. - We referred to our technique as "homeomorphic" due to its topography-preserving nature, formulating the neuron-to-pixel mapping problem as an image-to-image translation problem after a (learnable) retinotopic projection. The model preserves the inherent topography of data throughout all the network layers, focusing on processing local features. *Two shapes are said to be homeomorphic if there exists a continuous, one-to-one mapping between them that preserves the spatial relationships of points. If you can stretch or bend one shape into another without tearing or gluing parts together, the shapes are homeomorphic.* - Changed the reference from spatial downsampling in Hz to temporally downsampling data to 8 Hz. - Revised the description of electrode coordinates. - Fixed the reference to Fig. 3 and the labeling issues in Fig. 3. - Ensured all supplementary figures are properly referenced in the main text. Thank you again for your constructive feedback. - Kravitz, D. J., Saleem, K. S., Baker, C. I., Ungerleider, L. G., & Mishkin, M. (2013). The ventral visual pathway: An expanded neural framework for the processing of object quality.
*Trends in Cognitive Sciences, 17*(1), 26-49. --- Rebuttal Comment 1.1: Comment: The authors have answered some of my questions but not all. The inverse retinotopic mapping still isn't adequately explained (the authors still haven't said how, e.g., E_axy enters the objective function). It's still not clear what conclusions the reader is supposed to draw from Fig. 3. The authors reply that "we infer that the neurons at the highlighted time points carry the specific visual information indicated by our model," but what information is this? Can it be described even qualitatively? E.g., "excluding TW-3 tends to make the images..."--what? Blurrier? If this analysis were redone with a different choice for synaptic delay (say, a factor of 2 larger), would we draw different conclusions? Still, the paper's strengths remain, and I stand by my original ratings. --- Rebuttal 2: Title: We thank the reviewer for these clarification questions! Comment: **Q1**. Let's first define the following: - S := visual stimulus - S_real := ground-truth stimulus - S_fake := reconstructed stimulus - R := neural response - E := retinal embedding After (pre-/end-to-end trained) inverse retinotopic mapping projects R onto E as in the (pre-/end-to-end trained) inverse retinotopic mapping subsection, pixel-to-pixel mapping reconstructs S_fake from E as S_fake = U_Net(E). Note that the U-Net architecture is provided in the appendix. Every loss component is a function of S_real and/or S_fake, which is how, e.g., E enters the objective function. We will make sure to thoroughly review and clarify the relevant subsections in the revised manuscript. **Q2**. Figure 3 demonstrates the temporal occlusion analysis results of five example stimulus-response pairs. While we can observe some qualitative trends in these results, we cannot easily draw general/strong conclusions from the figure alone. For example, it appears that the form/shape information is present mostly in the first time window. 
Likewise, color information seems to be mostly present in the second and third time windows. Also, quantitative results of the temporal occlusion analysis are provided in Figures 9 and 10, which complement Figure 3 and can be interpreted in a similar way to the spatial occlusion analysis results. That is, in terms of the internal representations of the AlexNet layer that shows the highest correlation at a time window (instead of only ROI as in the spatial occlusion analysis). Likewise, we will make sure to thoroughly review and clarify the relevant subsections and figure captions, as well as correct the figure references in the revised manuscript.
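The relative-correlation normalization the authors describe for Figures 8 and 9 (per-region correlations rescaled to sum to 1, with the largest one deeply colored) can be sketched in a few lines; the correlation values below are made up:

```python
def relative_correlations(corr_by_layer):
    """Rescale one brain region's per-layer correlations to sum to 1."""
    total = sum(corr_by_layer.values())
    return {layer: c / total for layer, c in corr_by_layer.items()}

# Made-up correlations of one region's reconstructions with AlexNet layers
v4 = {"conv1": 0.30, "conv5": 0.25, "fc7": 0.20, "fc8": 0.25}
rel = relative_correlations(v4)
best = max(rel, key=rel.get)  # the layer that would be deeply colored
```

Under this scheme a region can be "deeply colored" for a layer even when another region has a higher absolute correlation with that layer, which resolves the apparent mismatch the reviewers pointed out.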
Summary: The authors present a CNN-based decoder that illuminates the distinct information encoded in V1, V4, and IT neuronal populations. The decoding results are remarkably good, and their accomplishment in decoding natural images from neuronal-level signals is unprecedented and the best so far. Strengths: Their novel space-time resolved decoding technique and the learned receptive field layers are new and interesting. Perhaps most impressive of all is their unprecedented data, recorded simultaneously from 15 Utah arrays from a single monkey. It can be considered a milestone in neuroscience research. Weaknesses: Analytical innovation is relatively minor. I am not sure what new insights about the visual cortex we learned. Nevertheless, proving that it can be done (with 4-7 Utah arrays per area), and doing it beautifully is still a remarkable feat that should be admired and applauded. Technical Quality: 4 Clarity: 3 Questions for Authors: No specific questions. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: No concerns. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thoughtful and positive review. We appreciate your recognition of our novel space-time resolved decoding technique and the data collected from 15 Utah arrays. Regarding the mentioned weakness, our primary aim was indeed to demonstrate the feasibility and beauty of using these advanced models for studying the visual cortex. While the new neuroscientific insights reported in this work may seem limited, we believe this research is an essential step towards deeper understanding in the future. Your acknowledgment of our effort is greatly appreciated. --- Rebuttal Comment 1.1: Title: Responses to Rebuttal Comment: Thanks for your responses. Congratulations on your accomplishment.
Summary: This paper proposes a CNN-based decoder to reconstruct naturalistic images from macaque brain signals. To this end, the paper presents the Learned Receptive Field (LRF) layer to enhance the reconstruction and understanding of the model's structure and interpretive capacity. Here, the work aims to interpret brain activity by transforming low-level to high-level brain signals. It focuses on the readout characteristics of neuronal populations in areas V1, V4, and IT. The evaluations of the model are performed using the THINGS dataset, and the results demonstrate the effectiveness of an end-to-end model and provide interpretations, such as the crucial role of V1 in color processing. Strengths: - The integration of a U-Net architecture for the pixel-to-pixel mapping of retinal embeddings to visual stimuli is intriguing. - The proposed method is end-to-end with different parameters, making it adaptable to the target problem. - Though the method of the paper is data-specific, the proposed model's results are somewhat interpretable. Weaknesses: - The proposed model is data-specific, which could potentially limit its application. - The overall loss consists of multiple criteria, each with its own weight, which might make the model harder to train. Specifically, mixing adversarial loss with pixel and feature losses can make it challenging to control the training dynamics. - The paper does not include a complete ablation analysis of the different modules. Technical Quality: 3 Clarity: 2 Questions for Authors: - Could you elaborate more on the motivation behind using Inception v1 for different cortical areas? How does this layer contribute to the generalization of the image reconstruction? - Is the model generalizable to other types of neural data from different species or different types of recording equipment? - In the equation at line 151, how does the standard deviation influence the spatial spread of the receptive field and overall reconstruction?
- In Pre-trained Inverse Retinotopic Mapping, why do we need two embeddings? Can’t the model handle it with a unified spatiotemporal module instead of having separate embeddings? - What about employing an existing method to build a baseline, such as using a recurrent CNN or Vision Transformer? For a fair comparison, the evaluation should include some of the other state-of-the-art architectures. - Is the quantitative comparison presented in Table 1 a standard manner of measuring performance? Is it used in the literature? If not, what would be other metrics, such as the R² score? - What about the ablation analysis of the loss functions? Also, could you present the dynamic criterion of each loss during the training and validation phases? - How did you set the hyperparameters of the model? Also, the full implementation details of the model are missing. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: - I think the main limitation of the paper is the limited area of applicability of the proposed framework. - The evaluation, both in terms of architecture and criterion, is missing to determine the necessity and effectiveness of all the modules of the model. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
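The reviewer's question about the standard deviation in the receptive-field equation can be illustrated with an isotropic Gaussian read-in. This sketch (grid size, normalization, and parameter names are assumptions for illustration, not the paper's implementation) shows how each electrode's response is spread onto a retinotopic grid with center (x_e, y_e) and width σ_e, and how a larger σ spreads the same response over more pixels:

```python
import numpy as np

def gaussian_rf_embedding(responses, centers, sigmas, size=96):
    """Project electrode responses onto a size x size retinotopic grid.
    Each electrode e contributes r_e * G(x; x_e, y_e, sigma_e)."""
    ys, xs = np.mgrid[0:size, 0:size]
    E = np.zeros((size, size))
    for r, (cx, cy), s in zip(responses, centers, sigmas):
        g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * s ** 2))
        E += r * g / g.sum()  # normalize so sigma changes spread, not total mass
    return E

# Toy: one electrode, two widths -- larger sigma spreads energy over more pixels
E_narrow = gaussian_rf_embedding([1.0], [(48, 48)], [1.0])
E_wide = gaussian_rf_embedding([1.0], [(48, 48)], [8.0])
```

With this normalization the total injected signal is the same for any σ, so the standard deviation controls only how locally or diffusely an electrode's activity appears in the embedding fed to the U-Net.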
Rebuttal 1: Rebuttal: Thank you for your detailed feedback. We appreciate your positive comments and are pleased to hear that you recognized our model's adaptability. Below, we address your points in further detail: **Weaknesses:** 1. **Data-specific model**: While our model was trained on the THINGS dataset, which is extensive and diverse, it has demonstrated robust performance on limited data, which is crucial given the challenges of collecting invasive monkey data. Our model is also generalizable to various types of neural data and recording equipment, similar to earlier models successfully applied to human fMRI data (Le et al., 2022). 2. **Loss functions and training dynamics**: Although our model employs multiple loss functions, their combination enhances training stability and performance. We have conducted additional ablation studies, which we will include in the revised manuscript, demonstrating the necessity of each loss component for optimal model performance. 3. **Complete ablation analysis**: We acknowledge the importance of comprehensive ablation analysis. We have now performed additional ablation studies, examining the contributions of the discriminator, L1, and VGG losses, and will include these results in the revised manuscript (also see figure 1 provided in rebuttal). **Questions:** 1. **Motivation for using Inception v1**: Inception v1 was chosen for its capability to capture features with varying receptive field sizes and complexities, aligning with the hierarchical processing in the ventral visual stream. This alignment allows effective generalization across visual stimuli and provides accurate predictions and meaningful MEIs for neurons in V1, V4, and IT. Previous studies (e.g., Yamins et al., 2014; Khaligh-Razavi & Kriegeskorte, 2014; Güçlü & van Gerven, 2015) have demonstrated the success of task-optimized DNN models in modeling visual areas. 2. 
**Model generalization**: Our model is designed to be generalizable to different types of neural data and recording equipment, as demonstrated by its successful application to human fMRI data (Le et al., 2022). 3. **Standard deviation in receptive fields**: The standard deviation models the size of the receptive field, directly influencing the accuracy of stimulus reconstructions. Precise modeling of receptive field sizes enhances the quality of the reconstructed images. 4. **Pre-trained Inverse Retinotopic Mapping**: The use of two embeddings corresponds to our model’s variants for spatial and spatiotemporal reconstructions. Each variant requires specific embeddings to optimize performance in their respective domains. 5. **Baseline comparisons**: We currently compare our model against two baselines: the Brain2Pix model (Le et al., 2022) and an improved version of Shen et al. (2019). These models represent the state-of-the-art in the field. We will emphasize this in the manuscript. 6. **Performance metrics**: The metrics presented in Table 1 are standard in the context of image reconstruction and have been used in previous studies. We will clarify this in the revised manuscript. 7. **Ablation analysis of loss functions**: We have now conducted ablation analyses on the discriminator, L1, and VGG losses. The results, showing decreased performance when these components are removed, will be included in the revised manuscript. Additionally, we trained separate models for V1, V4, and IT, in addition to the combined model, and will present these findings. 8. **Hyperparameter settings and implementation details**: Our hyperparameters were set based on existing literature and detailed in section 3.6.4. We used the ADAM optimizer with specified parameters and provided full architecture details in the supplementary material. The training process has proven robust to hyperparameter variations, as confirmed by our pilot experiments. 
We have corrected the link to the full source code in our official comment to the AC, adhering to NeurIPS guidelines. **Limitations:** 1. **Limited area of applicability**: In the revised manuscript, we will discuss potential extensions and adaptations of our framework, including applications in BCI communication and neuroprosthetics for individuals with acquired blindness. 2. **Evaluation of architecture and criterion**: We have addressed this in our responses above, providing additional evaluation and justification for our model's architecture and criteria. Thank you again for your insightful comments. We believe these revisions will significantly strengthen our manuscript and address the concerns raised. **References:** - Yamins, D. L., et al. (2014). Performance-optimized hierarchical models predict neural responses in higher visual cortex. *PNAS*, 111(23), 8619-8624. - Khaligh-Razavi, S. M., & Kriegeskorte, N. (2014). Deep supervised models may explain IT cortical representation. *PLoS Computational Biology*, 10(11), e1003915. - Güçlü, U., & van Gerven, M. A. J. (2015). Deep neural networks reveal a gradient in the complexity of neural representations across the ventral stream. *Journal of Neuroscience*, 35(27), 10005-10014. - Le, L., et al. (2022). Brain2pix: Video frame reconstruction from brain activity. *Frontiers in Neuroscience*, 16, 940972. - Shen, G., et al. (2019). End-to-end deep image reconstruction from human brain activity. *Frontiers in Computational Neuroscience*, 13, 432276.
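To make the receptive-field discussion above concrete, here is a minimal numpy sketch of a Gaussian readout of the kind described in the rebuttal (question 3). This is an illustrative reconstruction, not the authors' LRF implementation; the function and parameter names (`mu_y`, `mu_x`, `sigma`) are our assumptions.

```python
import numpy as np

def gaussian_rf_mask(h, w, mu_y, mu_x, sigma):
    """2D Gaussian readout mask over an h x w feature map, normalized to sum to 1.

    mu_y, mu_x: receptive-field center (pixel coordinates).
    sigma: standard deviation controlling the spatial spread of the readout.
    """
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    mask = np.exp(-((ys - mu_y) ** 2 + (xs - mu_x) ** 2) / (2.0 * sigma ** 2))
    return mask / mask.sum()

def rf_readout(feature_map, mu_y, mu_x, sigma):
    """Pool a (C, H, W) feature map through one neuron's Gaussian receptive field."""
    c, h, w = feature_map.shape
    mask = gaussian_rf_mask(h, w, mu_y, mu_x, sigma)
    return (feature_map * mask).sum(axis=(1, 2))  # one pooled value per channel
```

Because the mask is normalized, a larger `sigma` spreads the same total weight over a wider region, which is one way to see how the standard deviation governs the spatial extent each neuron reads out from, and hence the sharpness of what can be reconstructed.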
Rebuttal 1: Rebuttal: We have included additional results in the form of figures as requested. The ablation study on various losses is presented in the figure titled "Model Ablations," along with the corresponding model losses for each ablated model in the figure titled "Ablation Losses." Although these were run with fewer epochs than usual, we were already able to observe the effects of the ablations. Additionally, we have trained our model on different brain regions in response to reviewer gW2o's request, depicted in the figure titled "Model Trained on Brain Region of Interest." The figures and detailed results provide insight into the model's performance under these conditions. We hope these additions address your concerns and provide a clearer understanding of our model's robustness and versatility. The rest of the rebuttals are addressed individually to each reviewer. Thank you for your valuable feedback and guidance. Pdf: /pdf/3816edde31a8e78837ba11bb6eef5b94eec6a49a.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Sparse maximal update parameterization: A holistic approach to sparse training dynamics
Accept (poster)
Summary: This work (similarly to the \muP method) proposes a specific parameterization for weights, gradients and updates such that the optimal training hyperparameters (i.e. learning rate) are transferable across different sparsity levels. The approach is validated on the task of language modeling within a wide range of sparsities. Strengths: * The proposed approach is theoretically sound and intuitive. * SμPar appears to produce a stable optimal learning rate across a wide range of sparsities, whereas the values for standard parameterization and dense \muP have to be tuned according to the level of sparsity. * SμPar allows one to achieve strong results for larger models, outperforming the tuned SP and \muP configurations, given a small tuned proxy model. With a significantly smaller tuning cost, one can achieve better results. Weaknesses: Whereas all the components in SμPar seem to be important, the importance of each individual parameterization is not studied. It could be the case that using SμPar for weights while keeping the \muP parameterization for gradients and learning rate may suffice and produce the same performance. For the sake of completeness, I would suggest ablating individual terms. Prior work [1], [2] shows that proper weight initialization (scaled according to sparsity) already improves training dynamics. Technical Quality: 3 Clarity: 2 Questions for Authors: Does SμPar transfer to other domains in the same way? For example, one could try pretraining a vision transformer in a setup similar to [3]. The sparsity scaling laws paper [3] showed that given sufficiently long compute, sparse models turn out to be compute-optimal. Given that SμPar finds better hyperparameters, could one expect that higher sparsity levels may be favored compared to the recipe used in the aforementioned work? 
It seems like the value of train loss decreases with the increase of sparsity (i.e., sparse models are better given the same number of parameters), whereas in Figure 10, the performance of SμPar loss seems to be independent of sparsity. I would expect an increase in the quality gap for longer training according to sparsity scaling laws. --- [1] Evci, Utku, et al. "Gradient flow in sparse neural networks and how lottery tickets win." Proceedings of the AAAI conference on artificial intelligence. Vol. 36. No. 6. 2022. [2] Liu, Zhuang, et al. "Rethinking the value of network pruning." arXiv preprint arXiv:1810.05270 (2018). [3] Frantar, Elias, et al. "Scaling laws for sparsely-connected foundation models." arXiv preprint arXiv:2309.08520 (2023). Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: See Weaknesses and Questions. Would be interesting to see whether the introduced parametrization is optimal for training methods with dynamic sparsity (RigL [1], Top-Kast [2], AC/DC [3]). --- [1] Evci, Utku, et al. "Rigging the lottery: Making all tickets winners." International conference on machine learning. PMLR, 2020. [2] Jayakumar, Siddhant, et al. "Top-kast: Top-k always sparse training." Advances in Neural Information Processing Systems 33 (2020): 20744-20754. [3] Peste, Alexandra, et al. "Ac/dc: Alternating compressed/decompressed training of deep neural networks." Advances in neural information processing systems 34 (2021): 8557-8570. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for your thoughtful review. It is gratifying to know that you found the method theoretically sound and intuitive, and that the experimental results were a strength of the paper. ``` Whereas all the components in S$\mu$Par seem to be important, the importance of each individual parameterization is not studied. It could be the case that using S$\mu$Par for weights while keeping the $\mu$P parameterization for gradients and learning rate may suffice and produce the same performance. For the sake of completeness, I would suggest ablating individual terms. Prior work [1], [2] shows that proper weight initialization (scaled according to sparsity) already improves training dynamics. ``` This is an excellent suggestion! In Section 1 "Individual ablations of S$\mu$Par initialization and learning rate corrections" from our "Global Author Rebuttal", we perform the ablation you suggested and show that both the initialization and learning rate correction are required for S$\mu$Par to achieve transfer of optimal initialization standard deviation or optimal learning rate across sparsity levels. We will include these ablations in the camera-ready version of the paper. ``` Does S$\mu$Par transfer to other domains in the same way? For example, one could try pretraining a vision transformer in a setup similar to "Scaling laws for sparsely-connected foundation models". ``` This is a good question because much of the sparse neural network literature has focused on the vision domain. Since ViT shares most of its architecture with language transformers, we believe the extension of $\mu$P and S$\mu$Par to ViT should also follow Table 1 from our main submission, but we leave testing it as future work. However, the setup you are referencing also uses gradual magnitude pruning (GMP). 
In Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal", we show that S$\mu$Par is not correct for GMP and provide an explanation as to why (continuing our explanation from Lines 274-278). ``` "Scaling laws for sparsely-connected foundation models" showed that given sufficiently long compute, sparse models turn out to be compute-optimal. Given that S$\mu$Par finds better hyperparameters, could one expect that higher sparsity levels may be favored compared to the recipe used in the aforementioned work? ``` This is a very interesting paper with potentially high impact. On Lines 262-266, we discuss how the setup used in "Scaling laws for sparsely-connected foundation models" adjusts the learning rate in proportion to sparsity. This correction likely has a similar effect to the S$\mu$Par learning rate correction; however, without both an appropriate initialization and learning rate correction, optimal hyperparameter transfer is not guaranteed. Without HP transfer, it is likely that many of the models trained in this paper experience "de-tuned hyperparameters" as width and sparsity are varied. As we discuss in Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal", if one could extend S$\mu$Par for arbitrary sparse algorithms, then that parameterization would likely improve the sparse models in the aforementioned work, especially at high sparsity levels. We leave the development of this method as future work, as stated on Line 278. ``` It seems like the value of train loss decreases with the increase of sparsity (i.e., sparse models are better given the same number of parameters), whereas in Figure 10, the performance of S$\mu$Par loss seems to be independent of sparsity. I would expect an increase in the quality gap for longer training according to sparsity scaling laws. ``` Iso-FLOP efficiency isn't the focus of our paper. 
However, in Figure 10 we do include the iso-Parameter scaling tests, similar to the scaling strategy in "Scaling laws for sparsely-connected foundation models" as well as several other papers. Note, however, that "iso-Parameter" doesn't necessarily mean iso-FLOP, because increasing $d_\text{model}$ also increases attention dot product FLOPs, which can't be sparsified. This increase in attention FLOPs is so significant that our 87.5\% sparse model from Figure 10 has just over double the training FLOPs of the dense baseline, with virtually unchanged loss. Unfortunately, "Scaling laws for sparsely-connected foundation models" did not model the effect of attention FLOPs in their scaling law functional forms, instead assuming that sparsity and parameter count alone are sufficient. We suspect that many of the sparse models in that paper use considerably more total training FLOPs than their dense baselines, making for an unfair comparison. It is unclear whether their results hold under more robust FLOP comparisons. For this submission, though, we consider beating dense FLOP efficiency with sparse training to be future work. We will update our manuscript to reflect the subtleties of FLOP counting in sparse models. ``` Would be interesting to see if S$\mu$Par is optimal for DST (e.g., RigL, Top-KAST, AC/DC) ``` This is a good suggestion for an experiment! Unfortunately, S$\mu$Par is not optimal for DST methods. We demonstrate this and provide an explanation in Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal". We leave the solution as future work. --- Rebuttal Comment 1.1: Comment: Thanks for your response and ablations. Most of my concerns were resolved, and the additional results are quite insightful. Therefore, I decide to increase the score.
Summary: A parameterization method is proposed to ensure that sparse models with different weight sparsity share the same set of optimal hyperparameters. The sparse models with S$\mu$Par achieve lower validation loss using the same hyperparameters the dense model uses. Strengths: - The paper is well-written and easy to follow. The experimental results support the claim well. - The proposed method links the maximal update parameterization and sparse network training, which is an interesting finding. Weaknesses: - The proposed method is discussed and evaluated only with the random unstructured sparsity pattern, which is hardly used in practice to speed up DNN training and inference. - The theoretical contribution is somewhat weak. The proposed parameterization method is a simple extension of the maximal update parameterization to the sparse setting, in which I don't see significant technical improvement. - The experimental results should include an accuracy comparison between S$\mu$Par and other sparse weight initialization methods (e.g., [1]). Although the proposed method was compared with SP and $\mu$P, the superiority of S$\mu$Par is somewhat predictable since the other two are not designed for sparse weights. I am curious how much improvement S$\mu$Par can make over previous sparse weight initialization schemes. - The scaling factors for the weight update need to be derived for each optimizer, which might limit the usability of S$\mu$Par in practice. [1] Lee, Namhoon, et al. "A signal propagation perspective for pruning neural networks at initialization." arXiv preprint arXiv:1906.06307 (2019). Technical Quality: 3 Clarity: 3 Questions for Authors: Major questions - Is the proposed parameterization method generalizable to any structured sparsity patterns (e.g., channel pruning) for density-invariant hyperparameters? As mentioned in the paper, unstructured sparsity is extremely hard to speed up. 
Discussing S$\mu$Par on more structured sparsity patterns, such as channel pruning, will make the paper stronger. - Why does S$\mu$Par stabilize sparse network training even after multiple training steps? How is S$\mu$Par technically different from previous sparse weight initialization methods? Also, what is the accuracy difference between S$\mu$Par and other sparse weight initialization methods? Minor questions - Line 113 about the weight update $\mathbf{X}^{l+1} + \Delta \mathbf{X}^{l+1}$ is confusing because the weight update is denoted by $\Delta \mathbf{W}^l$ in Line 105. - Is there any connection between the proposed method and the iterative magnitude pruning (IMP) used in the Lottery Ticket Hypothesis [2]? Can S$\mu$Par improve the final performance of IMP? [2] Frankle, Jonathan, and Michael Carbin. "The lottery ticket hypothesis: Finding sparse, trainable neural networks." arXiv preprint arXiv:1803.03635 (2018). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations and broader impacts in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed and constructive feedback, and your kind words regarding the paper’s writing, experimental results, and core idea. ``` S$\mu$Par is discussed and evaluated only with the random unstructured sparsity pattern, which is hardly used in practice to speed up DNN training and inference. ``` It is true that the current adoption rate of unstructured sparsity in training and inference is low, as we acknowledge on Lines 35-36 and 73-75 of our submission. While we currently mostly motivate unstructured sparse training on Lines 27-28 and 279-286, we will note in the paper introduction that unstructured sparsity remains a very important technique and intense focus of research for ourselves and other labs, as accelerators such as CPUs, Cerebras, GraphCore, and SambaNova can take advantage of unstructured sparsity during training and inference. Furthermore, GPU and TPU systems support 2:4 block structured sparsity, which is quite similar to 50\% unstructured sparsity. Research has shown that the less structure imposed on sparsity masks, the better the performance, making this setting relevant to study in training and inference. For example, SparseGPT [Frantar and Alistarh(2023)], the SOTA LLM pruning technique, creates unstructured sparse masks that require hardware acceleration to realize an inference benefit. They report a CPU inference speedup, demonstrating the potential for real-world impact of unstructured sparsity. Additionally, in Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal", we test RigL and GMP dynamic sparse training and show that none of SP, $\mu$P, or S$\mu$Par achieve transfer of optimal learning rate across sparsity level. We hope this helps address your concerns around applying S$\mu$Par outside the context of random unstructured sparsity. ``` The theoretical contribution is somewhat weak. 
The proposed parameterization method is a simple extension of the maximal update parameterization to the sparse setting, in which I don't see significant technical improvement. ``` It is true that S$\mu$Par is a simple extension of $\mu$P. However, we note that we are the first to acknowledge and propose a solution to the systematic misuse of dense hyperparameters in sparse training. On Lines 90-91 and Footnote 1 (page 3) from our submission, we demonstrate how widespread the issue is by providing numerous examples of sparse training works which use dense hyperparameters for sparse training. So, regardless of the perceived significance of technical improvement, the work is important and the potential for impact is fairly high. ``` Why does S$\mu$Par stabilize sparse network training even after multiple training steps? How is S$\mu$Par technically different from previous sparse weight initialization methods? ``` We take your point, and indeed we discuss this on Lines 267-273. $\mu$P and S$\mu$Par differ from pure initialization-based methods due to the presence of a learning rate correction. The combination of an initialization and learning rate correction allows S$\mu$Par to stabilize sparse network training even after multiple training steps, as seen in Figure 5 of our main submission. In Section 1 "Individual ablations of S$\mu$Par initialization and learning rate corrections" from our "Global Author Rebuttal", we also show that the S$\mu$Par initialization adjustment alone is not sufficient to achieve transfer of optimal HPs across sparsity levels. Additionally, please see Section 3 "Downstream Task Evaluations" from our "Global Author Rebuttal", where we provide accuracy comparisons between SP, $\mu$P, and S$\mu$Par. We also think the comparison across PaI methods is somewhat orthogonal to our work because we are not studying the optimal sparse training recipe (e.g., PaI, DST, prune-retrain), but instead how to best control sparse training dynamics. 
``` Is the proposed parameterization method generalizable to any structured sparsity patterns (e.g., channel pruning) for the density-invariant hyperparameters? As mentioned in the paper, the unstructured sparsity is extremely hard to speed up. Discussing S$\mu$Par on more structured sparsity patterns, such as channel pruning, will make the paper stronger. ``` Great question! On Lines 160-161 we remark that S$\mu$Par should easily extend to random 2:4 structured sparsity as it closely resembles random unstructured sparsity. S$\mu$Par would also work for random structured pruning of entire neurons at initialization because this case simply reduces to training with a narrower dense model. However, S$\mu$Par does not transfer to dynamic sparse training because weights become non-Gaussian. Please see Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal" for a discussion on the limitations of S$\mu$Par in dynamic sparse training. We will update the main body to more thoroughly discuss these possible extensions. Thank you for the feedback. ``` Can S$\mu$Par improve the final performance of IMP? ``` This is an interesting question. S$\mu$Par improves random static unstructured sparse training and IMP rewinds weights back to their initial values while maintaining the same mask. If the IMP mask at initialization still allows the non-zero weights to have a Gaussian distribution, then S$\mu$Par would apply to this case. Therefore, S$\mu$Par *could* prevent ``hyperparameter detuning'' in later IMP iterations, and *potentially* improve IMP losses. Given the popularity of IMP/LTH, we think this discussion would make an excellent addition to Section 5 of our paper. Thank you for bringing this up! ``` DeltaX vs DeltaW notation ``` $\Delta W$ refers to the weight update whereas $\Delta \mathbf{X}$ refers to "the effect of the weight update”. We will update Line 113 accordingly. [Frantar and Alistarh(2023)] Elias Frantar and Dan Alistarh. 2023. 
SparseGPT: Massive language models can be accurately pruned in one-shot. In International Conference on Machine Learning. --- Rebuttal Comment 1.1: Comment: Thanks for your thorough response to my review. My primary concern was the practicality of S$\mu$PaR and unstructured sparse weights in general, which was effectively resolved by the authors' response. Also, the additional experimental results are impressive. I am inclined to raise my score.
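The initialization and learning-rate corrections discussed across these rebuttals can be illustrated with a small numpy sketch. This is a simplified reconstruction under our own assumptions (Gaussian init and an Adam-style 1/fan_in hidden-layer learning-rate rule, both rescaled by the effective fan-in `fan_in * density`); the exact S$\mu$Par rules are given in Table 1 of the paper, not here.

```python
import numpy as np

def sparse_init_std(fan_in, density, base_std=1.0):
    # With a random static mask, only about fan_in * density weights are
    # non-zero, so scaling the init std by the effective fan-in keeps the
    # forward-pass activation scale roughly density-invariant.
    return base_std / np.sqrt(fan_in * density)

def sparse_hidden_lr(base_lr, fan_in, density):
    # muP-style Adam learning rate for hidden weights shrinks with fan-in;
    # under sparsity the effective fan-in is fan_in * density.
    return base_lr / (fan_in * density)

def masked_preactivation(fan_in, density, rng):
    # One unit's pre-activation under a random static unstructured mask.
    mask = (rng.random(fan_in) < density).astype(float)
    w = rng.normal(0.0, sparse_init_std(fan_in, density), size=fan_in) * mask
    x = rng.normal(0.0, 1.0, size=fan_in)
    return w @ x
```

Across densities, the variance of `masked_preactivation` stays near 1, which is the kind of scale-invariance that lets one set of hyperparameters transfer across sparsity levels instead of being "de-tuned" as sparsity changes.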
Summary: The paper introduces Sparse Maximal Update Parameterization (SµPar), a novel approach designed to address challenges in training sparse neural networks. Sparse networks, despite their computational advantages, often struggle with signal propagation and optimal hyperparameter (HP) tuning across different sparsity levels. SµPar aims to stabilize training dynamics by ensuring that activations, gradients, and weight updates are scale-invariant with respect to sparsity. This method allows for consistent optimal HPs across varying model widths and sparsity levels, thus reducing the tuning costs associated with sparse models. The empirical results demonstrate that SµPar outperforms standard parameterization (SP) and maximal update parameterization (µP) in maintaining stable optimal HPs and achieving better loss metrics across different sparsity levels and model scales. Strengths: 1. The idea of this paper is interesting. The authors select a trending topic with very clear and strong motivation. The preliminary experimental results are very clear and helpful to the readers. 2. The discussion about the forward/backward pass and weight updates is very clear. Figures are also clear and easy to read. 3. The writing is good and the paper is well structured. Weaknesses: 1. This paper has many supporting experiments reporting the training loss, validation loss, and transfer loss for SµPar. However, loss values are not always reliable for evaluation. Accuracy is more convincing for demonstrating performance. I wonder what the accuracy is for the experiments. It would make it easier for readers to compare against other methods. 2. What is the method used to prune the network? There are many methods, such as SNIP and GraSP, for static sparse training. However, this part is unclear in the paper. 3. The authors claim SµPar is a holistic solution. However, from what the paper has demonstrated (discussion, experiments), the reviewer cannot be convinced. 
Per weakness 2, we don’t know what pruning method is used in sparse training, and according to the paper, the authors also do not apply SµPar to different sparse training methods. Therefore, we don’t know whether SµPar works for different sparse training algorithms. Furthermore, I assume this paper only discusses static sparse training (such as SNIP), without discussing performance on dynamic sparse training (DST), which is a more reliable and promising sparse training domain (e.g., better accuracy, more flexible). Therefore it is not suitable to claim SµPar is a holistic solution. Even if the authors don’t make such a claim, without the experiments the reviewer mentioned in this part, this paper still lacks fundamental evidence to demonstrate effectiveness. Technical Quality: 2 Clarity: 3 Questions for Authors: Please refer to weaknesses. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Please refer to weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the helpful comments, and for your kind words regarding the paper's key idea, motivation, and clear and well-structured presentation. ``` This paper has many supporting experiments reporting the training loss, validation loss, and transfer loss for SµPar. However, loss values are not always reliable for evaluation. Accuracy is more convincing for demonstrating performance. I wonder what the accuracy is for the experiments. It would make it easier for readers to compare against other methods. ``` Thank you for suggesting this. Please see Section 3 "Downstream Task Evaluations" from our "Global Author Rebuttal". We hope this addresses your concerns. ``` What is the method used to prune the network? There are many methods, such as SNIP and GraSP, for static sparse training. However, this part is unclear in the paper. ``` Thank you for bringing this lack of clarity to our attention! In this work we solely used random static unstructured pruning at initialization, as stated on Lines 107, 512, 528, 549, and 564. To improve clarity we will update our abstract, intro, and conclusion to further emphasize our choice of pruning method. As we noted on Lines 27-28, recent work has shown random sparsity to be a surprisingly effective strategy for sparse training, so we adopted this in our work. ``` The authors claim SµPar is a holistic solution. However, from what the paper has demonstrated (discussion, experiments), the reviewer cannot be convinced. Per weakness 2, we don’t know what pruning method is used in sparse training, and according to the paper, the authors also do not apply SµPar to different sparse training methods. Therefore, we don’t know whether SµPar works for different sparse training algorithms. 
Furthermore, I assume this paper only discusses static sparse training (such as SNIP), without discussing performance on dynamic sparse training (DST), which is a more reliable and promising sparse training domain (e.g., better accuracy, more flexible). Therefore it is not suitable to claim SµPar is a holistic solution. Even if the authors don’t make such a claim, without the experiments the reviewer mentioned in this part, this paper still lacks fundamental evidence to demonstrate effectiveness. ``` Thank you for raising these points. Please see Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal". This section shows that none of SP, $\mu$P, or S$\mu$Par achieve transfer of optimal learning rate across sparsity level for RigL and GMP. We hope this addresses your concerns around applying S$\mu$Par to dynamic sparse training methods. Also, we agree that ``holistic'' is a vague term, as it might imply S$\mu$Par solves challenges with other forms of sparse training, such as dynamic sparse training. We should have been clearer that S$\mu$Par holistically controls the scale of the forward, backward, and weight update operations with respect to model width and level of unstructured static random sparsity -- some of these operations have been handled \emph{in isolation} in prior work, but not all together. We will also update our introduction to make it clear that S$\mu$Par does not extend to pruning or dynamic sparse training because these methods can cause the unmasked/non-zero weight distribution to become non-Gaussian (as noted currently only in L274-L278). --- Rebuttal Comment 1.1: Title: Thank you for your rebuttal Comment: Dear authors, Thank you for your rebuttal and additional experiments. It seems that the performance on DST is not that good. But most of my concerns are addressed. I increase my score to 5
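The pruning setup the authors describe in this exchange (random static unstructured pruning at initialization) amounts to sampling a fixed Bernoulli mask once, before training, and never updating it. A minimal numpy sketch, with function names of our own choosing:

```python
import numpy as np

def random_static_mask(shape, sparsity, seed=0):
    # Bernoulli(1 - sparsity) mask drawn once at initialization; for static
    # sparse training it stays fixed for the whole run (unlike DST methods
    # such as RigL, which prune and regrow connections during training).
    rng = np.random.default_rng(seed)
    return (rng.random(shape) >= sparsity).astype(np.float32)

def apply_mask(weights, mask):
    # Masked weights; in practice gradients to the zeroed entries are also
    # masked, so pruned connections never reactivate.
    return weights * mask
```

For large layers the realized sparsity concentrates tightly around the target, which is why a single random draw suffices for this setting.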
Summary: This paper studies the effect of various hyperparameters on static sparse training while taking a holistic approach. It highlights that there is a correlation between their settings and neural network performance (in practice, loss function values). The experiments are performed on smaller and larger models, including a large-scale language model. Strengths: This is an interesting study with a relatively low level of originality (in my opinion). It practically puts together and discusses a bit further various findings from various papers. Also, the paper structure and clarity could have been improved. Currently, it is not easy to understand. If the paper were more mature, it might have a fairly significant impact in the community. Weaknesses: I believe that one main weakness is that the networks' performance is reported only with loss values. For a more comprehensive view and deeper understanding, other performance metrics (task-dependent) should also be presented for all experiments. With respect to the problem motivation, I believe that the paper exaggerates a bit the difficulty of training static sparse neural networks. Perhaps, for static sparsity, the effect of hyperparameters on network performance is more pronounced, but the alternative, dynamic sparsity, is quite robust to hyperparameter choice (e.g., sparsity level) and can quite easily obtain performance on par with (or even better than) dense neural networks while having much lower theoretical computational requirements, as reported in a number of related works. For a fair overview, the paper should also comparatively discuss the performance of static sparsity against dynamic sparsity and pruning, or alternatively, better motivate its settings. The paper's readability can be improved with a better structure, more clarity in arguments and qualitative discussions, and clearer statements about the paper's novel contributions. (minor) The source code should be provided to ensure easy reproducibility. 
Technical Quality: 2 Clarity: 1 Questions for Authors: I don't have questions which may change my evaluation. Still, I consider this study promising, and I hope that my comments will help the authors to improve the next version of the paper. Confidence: 3 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very useful suggestions, your positive comments regarding the promising/interesting findings, and your note on the potential for impact in the community. ``` This is an interesting study with a relatively low level of originality (in my opinion). It practically puts together and discusses a bit further various findings from various papers. Also, the paper structure and clarity could have been improved. Currently, it is not easy to understand. If the paper were more mature, it might have a relatively fair impact in the community. ``` We take your point regarding the level of originality; however, we note that we are the first to acknowledge and propose a solution to the systematic misuse of dense hyperparameters in sparse training. On Lines 90-91 and Footnote 1 (page 3) of our submission, we demonstrate how widespread the issue is by providing numerous examples of sparse training works which use dense hyperparameters for sparse training. So, regardless of the perceived level of innovation, the work is important and the potential for impact is fairly high, as you noted. ``` I believe that one main weakness is that the network's performance is reported just with loss values. For a more comprehensive view and deeper understanding, other performance metrics (task dependent) should also be presented for all experiments. ``` Thank you for suggesting this. Please see Section 3 "Downstream Task Evaluations" from our "Global Author Rebuttal". We hope this addresses your concerns. ``` With respect to the problem motivation, I believe that the paper exaggerates a bit the difficulty of training static sparse neural networks. 
Perhaps, for static sparsity, the effect of hyperparameters on network performance is more pronounced, but the alternative, dynamic sparsity, is quite robust to hyperparameter choice (e.g., sparsity level) and can quite easily obtain performance on par with (or even better than) dense neural networks while having much lower theoretical computational requirements, as reported in a number of related works. For a fair overview, the paper should also comparatively discuss the performance of static sparsity against dynamic sparsity and pruning, or alternatively better motivate its settings. ``` This is very perceptive, and a point that we did not fully appreciate previously. First of all, we should have done a better job of motivating static random sparsity, the topic of the paper, to begin with. We will note in the paper that static sparsity remains a very important technique and an intense focus of research for ourselves and other labs, not least because it enjoys native hardware support (e.g., 2:4 sparsity in NVIDIA GPUs, full unstructured sparsity in Cerebras chips). That being said, investigating hyperparameter shift in DST methods is insightful. Please see Section 2 "Dynamic sparsity hyperparameter transfer" from our "Global Author Rebuttal". In this section we show that the optimal learning rate changes significantly across sparsity levels for both RigL and GMP, contradicting your claim that dynamic sparsity is quite robust to hyperparameter choice and further highlighting the importance of developing systematic solutions like S$\mu$Par. As we discussed on Lines 274-278 of our submission, S$\mu$Par is not correct for dynamic sparse methods, leaving an impactful direction for future work. We really appreciate your suggestion here, as these interesting findings definitely help expand the paper's contribution! ``` (minor) The source code should be provided to ensure easy reproducibility. ``` Unfortunately we do not have an open-source repository for S$\mu$Par. 
However, we do provide Table 1 in our submission, which outlines the simple implementation changes required to implement $\mu$P and S$\mu$Par. Based on this, it should be straightforward to extend existing implementations of $\mu$P to implement S$\mu$Par (e.g. https://github.com/microsoft/mup). --- Rebuttal Comment 1.1: Title: Thank you for the rebuttal Comment: Dear authors, Thank you for considering my comments and for preparing an extensive rebuttal. I still believe that all of this (including your answers to the other reviewers) has to be integrated into a new version of the paper, and that version will still need a new peer-review process. Therefore, I have to keep my original rating. Nevertheless, I take your point about the systematic misuse of dense hyperparameters in sparse training as novel indeed, and for this reason, I will not put myself against acceptance if the other reviewers and the area chair unanimously support the acceptance of this paper. Best wishes, Reviewer F4Ay --- Reply to Comment 1.1.1: Title: Thank you for the response Comment: Thanks a lot for engaging in this discussion and for being open to acceptance based on the novel contributions. Given that the other reviewers are now unanimous in recommending acceptance, it would probably clarify things for the AC if your score also indicated you were no longer standing against acceptance. However, what eventually matters is the review text, not the final score, so thanks again either way!
Rebuttal 1: Rebuttal: We thank all reviewers for taking the time to read our submission and provide helpful feedback. Please find attached our 1 page PDF containing additional results. We believe these additions help address many reviewer concerns and strengthen our submission. Here we provide a discussion of the results in our 1 page Rebuttal PDF. # Section 1: Individual ablations of S$\mu$Par initialization and learning rate corrections Thank you to reviewer Grts for suggesting we individually ablate the effect of the S$\mu$Par initialization and the S$\mu$Par learning rate. In Figure 1 of our Rebuttal PDF (blue and orange boxes), we show that using only the S$\mu$Par initialization in conjunction with $\mu$P ("$\mu$P + S$\mu$Par init only") does not allow for transfer of optimal initialization standard deviation or optimal learning rate across sparsity levels. This result also helps address some of the feedback from reviewer hqe9. We also show that using only the S$\mu$Par learning rate in conjunction with $\mu$P does not achieve transfer either ("$\mu$P + S$\mu$Par LR only"). Therefore, both the S$\mu$Par initialization and learning rate corrections are required to achieve optimal hyperparameter transfer across sparsity levels. # Section 2: Dynamic sparsity hyperparameter transfer Every reviewer mentioned dynamic sparsity in some capacity, which motivated us to study it more closely in the context of S$\mu$Par. In Figure 1 of our Rebuttal PDF (green box), we test the transfer of optimal learning rate across sparsity levels for two popular dynamic sparse training methods: Rigging the Lottery (RigL) [Evci et al.(2020)] and Gradual Magnitude Pruning (GMP) [Zhu and Gupta(2017)]. We show that none of SP, $\mu$P, or S$\mu$Par achieve transfer of optimal learning rate across sparsity levels. For SP and $\mu$P we see that higher sparsity levels have higher optimal learning rates. 
This is because sparsity reduces activation and gradient scales such that a larger learning rate is needed to counteract this. S$\mu$Par sees the opposite trend, where higher sparsity levels have lower optimal learning rates, indicating that S$\mu$Par is ``overcorrecting''. On Lines 274-278 of our submission we mention that dynamic sparse methods can make updates to the weight mask such that the distribution of unmasked/non-zero weights changes to something non-Gaussian, which prevents S$\mu$Par from being mathematically correct. Compared to random pruning, a mask obtained from magnitude pruning will better preserve the size of activations and gradients seen in the dense network. Since S$\mu$Par assumes weights are drawn from a Gaussian distribution, S$\mu$Par ends up ``overcorrecting'' the initialization and learning rate. In future work it would be impactful to develop a parameterization which generalizes S$\mu$Par to work for an arbitrary sparse training algorithm. # Section 3: Downstream Task Evaluations Thank you to reviewers F4Ay and 6wqj for pointing out the limitations of relying on loss alone for evaluating LLMs. We recognize this shortcoming, since ultimately LLMs are being trained for use in downstream tasks. Following their suggestion, in Table 1 of the Rebuttal PDF, we evaluated the models from Figure 1 of our submission to provide a head-to-head comparison between SP, $\mu$P, and S$\mu$Par. Results across pretraining loss and average downstream task accuracy consistently show that S$\mu$Par models achieve superior performance compared to SP and $\mu$P. We measured accuracy on five downstream tasks: ARC-easy, LAMBADA, RACE, PIQA, and BoolQ, which collectively test for common sense reasoning, world knowledge, and reading comprehension. We also specifically chose tasks that are easy enough for even extremely sparse models to significantly outperform random chance. 
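As a small, hedged illustration of the scale argument in this rebuttal (our own sketch, not code from the paper): under a static random mask with Gaussian weights, rescaling the initialization variance by the mask density keeps the pre-activation scale matched to the dense network, which is the kind of correction the rebuttal describes. The function name and the exact 1/(density * fan_in) variance below are our assumptions for illustration only.

```python
import numpy as np

def sparse_gaussian_init(fan_in, fan_out, density, rng):
    """Static random mask + density-corrected Gaussian init (illustrative).

    Assumption (ours): scaling the init variance by 1/density compensates
    for the fraction of connections zeroed out by the mask.
    """
    mask = rng.random((fan_in, fan_out)) < density
    std = 1.0 / np.sqrt(density * fan_in)
    return rng.normal(0.0, std, size=(fan_in, fan_out)) * mask

rng = np.random.default_rng(0)
fan_in, fan_out = 4096, 4096
x = rng.normal(size=fan_in)

w_dense = rng.normal(0.0, 1.0 / np.sqrt(fan_in), size=(fan_in, fan_out))
w_sparse = sparse_gaussian_init(fan_in, fan_out, density=0.25, rng=rng)

# Despite 75% of the weights being zero, pre-activation scales roughly match.
print(np.std(x @ w_dense), np.std(x @ w_sparse))
```

This also hints at why dynamic methods break the correction: once a mask is chosen by weight magnitude rather than at random (as in RigL or GMP), the surviving weights are no longer a Gaussian sample, consistent with the "overcorrection" reported in Section 2.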
[Evci et al.(2020)] Utku Evci, Trevor Gale, Jacob Menick, Pablo Samuel Castro, and Erich Elsen. 2020. Rigging the lottery: Making all tickets winners. In International conference on machine learning. PMLR, 2943–2952. [Zhu and Gupta(2017)] Michael Zhu and Suyog Gupta. 2017. To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878 (2017). Pdf: /pdf/a7f1c101051c12c80c51dd267fa6e8bbeca945dc.pdf
NeurIPS_2024_submissions_huggingface
2024
Can large language models explore in-context?
Accept (poster)
Summary: In this study, the authors investigate the impact of diverse prompt designs, including environmental descriptions, summarizing interaction history, and chain-of-thought reasoning, on the exploration behavior of three distinct LLMs in a multi-armed bandit task. The results demonstrated that none of the models engaged in exploration without substantial interventions. The only configuration that demonstrated efficacy was the GPT-4 model, which was prompted with chain-of-thought reasoning and an externally summarized interaction history. The authors demonstrate that the typical failure mode for these models is either a suffix failure, wherein the LLM fails to select the optimal arm even once in the later rounds, or uniform sampling of all options. In conclusion, the authors raise concerns about the external validity of the working configuration, noting that it may not be scalable to more complex task settings. They also speculate on the potential algorithmic interventions that could be employed to encourage LLMs to explore in such settings. Strengths: Whether or not LLMs can explore in-context in a reinforcement learning task, and if so how, is a question worth investigating, and the approach to studying it, including the paradigm used, is tried and tested. Weaknesses: The research question and paradigm used to study that question have been previously investigated in the literature [1-5]. However, in contrast to previous studies, the authors have conducted a more comprehensive search of prompt designs, included a greater number of options in the multi-armed bandit task, and analyzed the failure modes of the models using two novel measures. Nevertheless, I have significant concerns regarding the work's novelty and the strength of its contribution. While the results are interesting, the main contribution of the work is somewhat limited, as the subset of evaluations performed has already been studied by previous works (see questions and limitations for details). 
Additionally, the authors have not adequately cited related works [1-8]. [1] Binz, M., & Schulz, E. (2023). Using cognitive psychology to understand GPT-3. Proceedings of the National Academy of Sciences, 120(6), e2218523120. [2] Schubert, J. A., Jagadish, A. K., Binz, M., & Schulz, E. (2024). In-context learning agents are asymmetric belief updaters. arXiv preprint arXiv:2402.03969. [3] Hayes, W. M., Yax, N., & Palminteri, S. (2024). Large language models are biased reinforcement learners. arXiv preprint arXiv:2405.11422. [4] Coda-Forno, J., Binz, M., Akata, Z., Botvinick, M., Wang, J., & Schulz, E. (2023). Meta-in-context learning in large language models. Advances in Neural Information Processing Systems, 36, 65189-65201. [5] Hayes, W. M., Yax, N., & Palminteri, S. (2024). Relative value biases in large language models. arXiv preprint arXiv:2401.14530. [6] Coda-Forno, J., Binz, M., Wang, J. X., & Schulz, E. (2024). CogBench: a large language model walks into a psychology lab. arXiv preprint arXiv:2402.18225. [7] Hagendorff, T. Machine psychology: Investigating emergent capabilities and behavior in large language models using psychological methods. [8] Lampinen, A. K., Dasgupta, I., Chan, S. C., Matthewson, K., Tessler, M. H., Creswell, A., ... & Hill, F. (2022). Can language models learn from explanations in context? arXiv preprint arXiv:2204.02329. Technical Quality: 3 Clarity: 3 Questions for Authors: What factors within the model (size, architecture, RLHF, internal representation), its training dataset, and training protocol (at a specific point during training) could be causing LLMs to fail at exploration? It may be beneficial for the authors to consider investigating some of these factors further, which could potentially enhance the quality of the paper. It would be beneficial to investigate the emergence of exploration behavior in LLMs by examining smaller models, such as GPT-2/LLaMa-2. 
It has been demonstrated that LLMs can be fine-tuned for reinforcement learning tasks, enabling them to function as decision-makers capable of generalizing to novel tasks with high efficiency [1-2]. These findings suggest that, despite their limited capacity for exploration when tested off-the-shelf, LLMs can be effectively fine-tuned to enhance their ability to explore. It would be valuable for the authors to investigate whether fine-tuning LLMs (such as GPT-2/LLaMa-2) on simple RL tasks could facilitate their exploration capabilities. [1] Cao, Y., Zhao, H., Cheng, Y., Shu, T., Liu, G., Liang, G., ... & Li, Y. (2024). Survey on large language model-enhanced reinforcement learning: Concept, taxonomy, and methods. arXiv preprint arXiv:2404.00282. [2] Carta, T., Romac, C., Wolf, T., Lamprier, S., Sigaud, O., & Oudeyer, P.-Y. (2023). Grounding large language models in interactive environments with online reinforcement learning. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The failure mode of an agent following an epsilon-greedy policy with a high epsilon value is analogous to that of a suffix failure. Conversely, KMinFrac is analogous to the failure mode of an agent following a random policy or an epsilon-greedy policy with a low epsilon value. Therefore, I believe the metrics themselves are quite trivial and do not constitute a novel contribution. Rather than relying on a qualitative comparison between the LLM's choices and those of alternative bandit algorithms, the authors may find it beneficial to fit models of exploration to the LLM's choices and conduct a more quantitative evaluation [1]. [1] Gershman, S. J. Deconstructing the human algorithms for exploration. Cognition, 173:34-42, 2018. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *Additionally, the authors have not adequately cited related works [1-8].* Thanks for the references! We actually became aware of some of these between the submission and now and have already incorporated appropriate discussions into the manuscript. But we’ll be sure to discuss them in a revision. Let us comment on [1-8], as per your list. [2,3,5,6] are concurrent, unpublished works (from 2024), and should not be considered as novelty issues. (Note that our paper has appeared on arxiv since March 2024.) We’ll provide a detailed comparison in a revision*. Of the remaining papers, [1,4] consider bandit tasks at a very small scale (T=10, 2 arms), which is really insufficient to distinguish between good and bad algorithms, and [7,8] have no bandit tasks at all. All of these bandit studies except [4] focus on LLM vs. human comparisons, whereas we focus on LLM vs. algorithms. We believe this is an important distinction. *Two brief notes, though: [2] only experiments with Claude v1.3, which is considered roughly on the level of GPT-3.5 and hence probably not indicative of GPT-4 performance; [5] only considers bandit tasks at a very small scale. > [Q1] *What factors within the model (size, architecture, RLHF, internal representation), its training dataset, and training protocol (at a specific point during training) could be causing LLMs to fail at exploration? It may be beneficial for the authors to consider investigating some of these factors further, which could potentially enhance the quality of the paper. It would be beneficial to investigate the emergence of exploration behavior in LLMs by examining smaller models, such as GPT-2/LLaMa-2.* Thanks for the suggestion. We did experiment with LLaMA-2 in the paper (results are summarized in Fig 3 and reported with more details in the appendix) and found that it is much worse than GPT-3.5-Turbo, which itself is much worse than GPT-4. 
In a sense, this already demonstrates that exploration behavior is emergent, and that it perhaps can be improved further by scaling. > [Q2] *It has been demonstrated that LLM models can be fine-tuned for reinforcement learning tasks, enabling them to function as decision-makers capable of generalizing to novel tasks with high efficiency [1-2]. These findings suggest that, despite their limited capacity for exploration when tested off-the-shelf, LLMs can be effectively fine-tuned to enhance their ability to explore. It would be valuable for the authors to investigate whether fine-tuning LLMs (such as the GPT-2 /LLaMa-2 model) on simple RL tasks could facilitate their exploration capabilities.* Thanks for these references. In our related work section (Appendix A), we discussed some works that train language models from scratch on trajectory data from another agent, resulting in suitable decision-making (and exploratory) behavior. Based on this, it’s not too surprising that LLMs can be fine-tuned to improve their decision-making capabilities; we’ll definitely include the citations. However, it’s worth emphasizing that fine-tuning requires expertise & money and/or infrastructure & access. Many practitioners are using (or will use) off-the-shelf LLMs for decision-making tasks without fine-tuning, and so it is critical to understand what is possible with standard training. > *The failure mode of an agent following an epsilon greedy policy with a high epsilon value is analogous to that of Suffix Failure. Conversely, KMinFrac is analogous to the failure mode of an agent following a random policy or an epsilon greedy policy with a low epsilon value. Therefore, I believe the metrics themselves are quite trivial and do not constitute a novel contribution. 
Rather than relying on a qualitative comparison between LLM's choices and those of alternative bandit algorithms, the authors may find it beneficial to fit the models of exploration to LLM's choices and conduct a more quantitative evaluation* Both approaches seem perfectly reasonable to us but we respectfully disagree that one is “qualitative” while the other is “quantitative.” Our approach is completely quantitative: we suggest quantitative statistics to measure, demonstrate that they correlate strongly with performance (measured via reward over a much longer time scale), and quantitatively compare to state-of-the-art methods. Compared with the suggested modeling approach, our statistics allow for direct measurement of exploratory behavior without imposing any modeling assumptions (which would likely be wrong/erroneous given the erratic behavior of LLMs that we witnessed in our experiments). --- Rebuttal 2: Title: Response to rebuttal Comment: Thank the authors for their detailed responses to our queries. Re. Q1: Can the authors include a mixed-effects regression analysis, where they perform a regression from the abovementioned factors onto the surrogate measures to see how much each factor determines the extent of exploration and lack thereof? This allows us to understand the contributions of the factors at a more fine-grained level. Re. Q2: We agree with the authors that expertise & money and/or infrastructure prevent fine-tuning to be a plausible option for everyone. But we differ from them in that there is a sufficient audience who would be interested if there is a simple fine-tuning protocol that makes LLMs explore optimally. We think smaller models can be a good test bed for such approaches and are quite confident that the protocol can even be standardized if shown to provide sufficient gains. Re. 
external validity concerns as raised by two other reviewers: We agree with the authors that the bandit task provides a well-studied, controlled setting to study exploration in LLMs. However, given that these models are typically made to solve tasks vastly different from the bandit setting, we still have concerns about whether the findings will hold for those settings. If the authors find the same effects in coding or some problem-solving tasks with the help of their prompting strategy, that would greatly increase the potential impact of the findings. Keeping these points in mind, I am happy to increase my score from 3 to 5. --- Rebuttal Comment 2.1: Comment: Thanks for the helpful feedback. We would be happy to include such a mixed-effects regression analysis in our next revision.
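The surrogate statistics debated in this thread can be made concrete. Below is a minimal sketch (our own hedged reconstruction from the descriptions in the reviews and rebuttals, not the authors' code) of the two measures: a suffix failure means the best arm is never chosen from some round onward, while KMinFrac scales the least-played arm's selection frequency so that a value near 1 indicates uniform-like play.

```python
import numpy as np

def suffix_failure(pulls, best_arm, t):
    """True if the best arm is never chosen in rounds t, t+1, ..., T-1."""
    return best_arm not in pulls[t:]

def k_min_frac(pulls, n_arms):
    """n_arms * (selection frequency of the least-played arm).

    Near 1 for uniform-like play; 0 when some arm is ignored entirely.
    """
    counts = np.bincount(pulls, minlength=n_arms)
    return n_arms * counts.min() / len(pulls)

# A greedy agent that locks onto arm 2 early exhibits a suffix failure
# with respect to best arm 0:
greedy_pulls = [0, 2, 2, 2, 2, 2, 2, 2]
print(suffix_failure(greedy_pulls, best_arm=0, t=1))  # True

# A uniform agent over 4 arms drives KMinFrac to 1:
uniform_pulls = [0, 1, 2, 3] * 2
print(k_min_frac(uniform_pulls, n_arms=4))  # 1.0
```

In this form the statistics are directly measurable from logged arm choices, with no behavioral model fitted to the LLM's decisions, which is the distinction the rebuttal draws against the model-fitting approach.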
Summary: This paper investigates whether large language models (LLMs) like GPT-3.5, GPT-4, and LLAMA2 can perform exploration in reinforcement learning settings without additional training. The authors focus on in-context learning, where the environment description and interaction history are provided entirely within the LLM prompt. They evaluate LLMs on multi-armed bandit problems, comparing their performance to standard bandit algorithms like Thompson Sampling (TS) and Upper Confidence Bound (UCB). The key findings are: (1) Most LLM configurations fail to robustly explore, exhibiting either "suffix failures" (converging to a suboptimal arm) or "uniform-like failures" (selecting arms uniformly at random). (2) Only one configuration (GPT-4 with chain-of-thought reasoning and externally summarized history) showed satisfactory exploratory behavior comparable to TS. (3) LLM performance is highly sensitive to prompt design and varies significantly across model versions. The authors conclude that non-trivial algorithmic interventions may be necessary for LLMs to function effectively as decision-making agents in complex settings. Strengths: - The study uses multiple LLMs, various prompt designs, and compares against established baselines. The use of both easy and hard multi-armed bandit instances provides a comprehensive evaluation. - The introduction of "suffix failure frequency" and "MinFrac" as surrogate statistics for identifying exploration failures is innovative and allows for more efficient evaluation of LLM performance. - The work addresses an important question in the rapidly evolving field of LLM capabilities, with implications for using LLMs in decision-making tasks. Weaknesses: - While the focus on multi-armed bandits provides a clean experimental setup, it may not fully represent the challenges of more complex reinforcement learning problems. The generalizability of the findings to broader settings is unclear. - The paper is predominantly empirical. 
A theoretical framework for understanding why LLMs struggle with exploration could provide deeper insights and guide future research. - While the authors attempt to explore reasons for LLM failures in Section 3.3, this analysis feels underdeveloped. A more systematic investigation of failure modes could yield valuable insights. - While the authors compare summarized vs. raw history, more systematic ablations of different prompt components could help isolate the factors contributing to successful exploration. Technical Quality: 2 Clarity: 3 Questions for Authors: None. Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *While the focus on multi-armed bandits provides a clean experimental setup, it may not fully represent the challenges of more complex reinforcement learning problems. The generalizability of the findings to broader settings is unclear.* Failures on a fundamental special case such as MAB plausibly carry over to the broader scenarios. But we completely agree that this is a concern for the positive findings. However, this would be the case for any experiment setup: the findings do not necessarily generalize beyond the precise experiment setting. Among all possible experiment setups, we argue that simple/clean/fundamental special cases are preferable. MAB is precisely such a setup. > *The paper is predominantly empirical. A theoretical framework for understanding why LLMs struggle with exploration could provide deeper insights and guide future research.* Indeed! But we view it as a next step in this line of research. Our contribution is the prerequisite to this: we identify that there is a need for such a framework. Nevertheless, we hope that the suffix failure and MinFrac statistics provide some hint as to how one might develop a deeper understanding. > *While the authors attempt to explore reasons for LLM failures in Section 3.3, this analysis feels underdeveloped. A more systematic investigation of failure modes could yield valuable insights.* Agreed! We included section 3.3 primarily to point out the difficulty of such investigation, particularly of using single-round experiments to isolate long-horizon issues. We can emphasize this in a revision. > *While the authors compare summarized vs. raw history, more systematic ablations of different prompt components could help isolate the factors contributing to successful exploration.* Again we agree. For this we were primarily limited by cost for GPT-4. 
Note that we have reported results for all 24 prompt configurations for GPT-3.5-Turbo in the appendix, so “systematic ablations” on a weaker LLM are included in the manuscript.
Summary: This paper investigates the exploration capabilities of contemporary large language models. The authors use LLMs as agents in multi-armed bandit environments, describing the environment and interaction history in-context without any training interventions. Their experiments involve a variety of configurations and models, and they conclude that only GPT-4 with chain-of-thought reasoning and an externally summarized interaction history demonstrates satisfactory exploratory behavior. Strengths: - The paper comprehensively analyzes the LLMs in various configurations and prompt choices to understand their exploratory capabilities. Weaknesses: - In line 161, it is mentioned that no parameter tuning is performed for the baselines with tunable parameters. In this case, we are not sure that the baselines are performing to their highest capacity and therefore, the comparison might not be fair. - Longer horizons T and replicates N might be required to obtain statistically significant results. Technical Quality: 2 Clarity: 3 Questions for Authors: - Why isn't parameter tuning performed for the baselines with tunable parameters? - In line 97, it is mentioned that all the arms, except for the best arm, have the same mean reward. What is the reason for this choice? - If the summary is not available for more complex situations, can few-shot examples help instead? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: - Longer horizons T and replicates N might be required to obtain statistically significant results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *In line 161, it is mentioned that no parameter tuning is performed for the baselines with tunable parameters. In this case, we are not sure that the baselines are performing to their highest capacity and therefore, the comparison might not be fair.* (Also, Q1) This is rather standard in bandit experiments. It puts the baselines at a disadvantage, and even then they (for the most part) outperform the LLMs. So even without parameter tuning there is strong evidence that LLMs do not effectively explore. > *Longer horizons T and replicates N might be required to obtain statistically significant results.* This is a fair concern. As we discuss in Section 2 ("Scale of the experiments"), our scale (T x N x #configs x #instances) is near-prohibitive given the high cost of invoking modern LLMs, yet insufficient if one relies on standard statistics such as accumulated rewards. This necessitates a more subtle study built around surrogate statistics that capture suffix and uniform-like failures. We note that most/all follow-up or concurrent studies of LLMs in bandit problems were smaller-scale (e.g. [68]; [1-6] mentioned by Reviewer Jht5; Wu et al., "A benchmark for LLMs as intelligent agents", ICLR'24; Park et al., "Do LLM agents have regret? A case study in online learning and games", arXiv:2403.16843, 2024). > [Q2] *In line 97, it is mentioned that "all the arms, except for the best arm, have the same mean reward". What is the reason for this choice?* This choice (a) is the standard lower bound construction for MABs, and (b) has the fewest degrees of freedom, leading to a simpler and more defensible experiment setup. Of course other designs are also reasonable. > [Q3] *If the summary is not available for more complex situations, can few-shot examples help instead?* We suspect so, and mention few-shot prompting as a natural direction for future work in Section 4.
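For concreteness, here is a minimal sketch of the kind of instance the Q2 answer describes (one best arm, all others sharing a common lower mean), run with an untuned UCB1 baseline. The specific gap, number of arms, and horizon are our illustrative choices, not the paper's experimental settings.

```python
import numpy as np

def run_ucb1(means, horizon, rng):
    """Untuned UCB1 on Bernoulli arms; returns the list of pulled arms."""
    k = len(means)
    counts = np.zeros(k)
    sums = np.zeros(k)
    pulls = []
    for t in range(horizon):
        if t < k:
            arm = t  # pull each arm once to initialize
        else:
            bonus = np.sqrt(2.0 * np.log(t) / counts)
            arm = int(np.argmax(sums / counts + bonus))
        reward = float(rng.random() < means[arm])
        counts[arm] += 1
        sums[arm] += reward
        pulls.append(arm)
    return pulls

# Lower-bound-style instance: one best arm, all others share a common mean.
k, gap = 5, 0.5  # illustrative "easy" gap chosen for a fast demo
means = [0.5 + gap / 2] + [0.5 - gap / 2] * (k - 1)

rng = np.random.default_rng(0)
pulls = run_ucb1(means, horizon=2000, rng=rng)
print(pulls.count(0) / len(pulls))  # fraction of rounds spent on the best arm
```

Even without parameter tuning, UCB1 spends the bulk of its pulls on the best arm at this gap and horizon, which is the sense in which untuned baselines already set a strong bar for the LLMs.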
Summary: This paper investigates whether popular LLMs can engage in exploration in an in-context manner (all experiences are stored as context/prompt). To achieve this, the paper deploys LLMs (GPT-3.5, GPT-4, LLAMA2) as agents in multi-armed bandit environments, using various prompt designs to specify the environment description and interaction history entirely in-context. The results show that only GPT-4 with chain-of-thought reasoning and an externally summarized interaction history exhibited satisfactory exploratory behavior. Other configurations failed to robustly explore. The authors suggest that non-trivial algorithmic interventions, such as fine-tuning or dataset curation, may be required for LLMs to effectively explore in more complex settings. Strengths: The paper investigates an interesting question: can LLMs explore in-context? Humans hold the ability to explore in-context, with their self-summarized histories or abstractions. The results may indicate that the intelligence embedded in LLMs is still not aligned with humans. The findings have meaningful implications for the use of LLMs in decision-making tasks (especially when directly employing an LLM as the agent). The identification of the need for external summarization and potential training interventions makes sense. Weaknesses: As an experimental article, the findings are naive and obvious: the LLMs are not designed for solving decision-making tasks. It is consistent with intuition that LLMs can explore in-context when using CoT with a summarized history. I feel that experiment results on more challenging tasks are necessary, such as some textual interactive games. In practice, the scenarios humans encounter and explore are more complex than a bandit. More discussion of related work would be appreciated. There is no systematic analysis of which specific prompt design elements are most critical for successful exploration; this could help in understanding the sensitivity of LLMs to prompt variations. 
Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Only one configuration (GPT-4 with chain-of-thought reasoning and an externally summarized interaction history) resulted in satisfactory exploratory behavior. Why was this specific configuration successful while others were not? Are there any insights into the underlying mechanisms? 2. The paper concludes that external summarization is essential for desirable LLM behavior in simple environments. How do the authors envision scaling this approach to more complex settings where external summarization might be infeasible? Are there any preliminary ideas or experiments that could address this limitation? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: I do not find any limitations discussed in the main body. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > *As an experimental article, the findings are naive and obvious: the LLMs are not designed for solving decision-making tasks. It is consistent with intuition that LLMs can explore in-context when CoT with summarized history.* Whether LLMs are “designed” for decision-making tasks is not very relevant to our study. LLMs are already being deployed as decision-making agents (e.g. [40, 54, 63, 48, 5, 71]), and so it is crucial to understand their capabilities as such. Besides, LLMs are not explicitly “designed” for many tasks that they are good at. We do not understand how LLMs not being designed for (or failing at) decision-making is consistent with the success of CoT and history summarization. (Indeed, it was unclear a priori whether CoT would help much, and LLMs are quite capable of performing the simple arithmetic required for history summarization.) Re. “naive and obvious”: we emphasize that the scale of our experiments (as discussed in Section 2) was already near-prohibitive given the LLM costs, but still insufficient with the standard statistics (cumulative rewards), and hence necessitated considerable subtlety (e.g., tracking suffix and uniform-like failures). The “prompt space” also required a somewhat careful design. > *I feel that experiment results on more challenging tasks are necessary, such as some textual interactive games. In practical, the scenarios human encounters and explores are more complex than a bandit.* Real-world decision-making problems are indeed much more complicated than ours. However, we emphasize that our MAB setting distills the essence of exploration (as justified by decades of theoretical research) and embeds as a special case into essentially all RL/decision-making formulations. If an agent cannot solve a simple task, what hope does it have for more complex ones? Also: simpler settings require the experimenter to make fewer arbitrary choices, and can lead to more generalizable findings.
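The surrogate statistics mentioned above (suffix failures and uniform-like failures) can be sketched roughly as follows. The cutoff fraction and tolerance below are illustrative assumptions, not the paper's exact definitions:

```python
def suffix_failure(choices, best_arm, frac=0.5):
    # Flag a run in which the best arm is never pulled in the last
    # `frac` fraction of rounds -- one plausible reading of the
    # suffix-failure statistic (the cutoff is an assumption here).
    cutoff = int(len(choices) * (1 - frac))
    return best_arm not in choices[cutoff:]

def uniform_like(choices, n_arms, tol=0.05):
    # Flag a run whose empirical arm frequencies stay within `tol`
    # of uniform, i.e. the agent never commits to any arm
    # (again an illustrative criterion, not the exact definition).
    total = len(choices)
    freqs = [choices.count(a) / total for a in range(n_arms)]
    return all(abs(f - 1 / n_arms) <= tol for f in freqs)
```

Both statistics can be computed per run and aggregated across replicates, which is cheaper, in terms of horizon length, than waiting for cumulative-reward differences to become statistically significant.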
> *More discussions about related work are appreciated.* You might have missed our detailed related work section, deferred to Appendix A due to the page limit. Nevertheless, we'd be happy to add any specific references you'd like. > *There lacks a more systematic analysis of which specific prompt design elements are most critical for successful exploration? This could help in understanding the sensitivity of LLMs to prompt variations.* We already consider a non-trivial space of prompt designs, see Fig. 2. (And we note that it is at least comparable to that considered in concurrent work on bandit tasks for LLMs.) We fully agree that a more systematic analysis of prompt designs could be very useful. However, it is also extremely challenging because the prompt space (even just for simple bandit problems) is already massive. It might be more tractable to do this in simpler (non-sequential) settings, but LLM sensitivity suggests it is difficult to generalize the findings. > [Q1] *Only one configuration (GPT-4 with chain-of-thought reasoning and an externally summarized interaction history) resulted in satisfactory exploratory behavior. Why this specific configuration was successful and others were not? Are there any insights into the underlying mechanisms?* An advanced LLM (GPT-4) and summarization are obvious benefits. The latter avoids occasional arithmetic errors that LLMs are known to make (and which we found in our logs). CoT is known to help in many other tasks, but the reason is not obvious. One hypothesis (from other work) is that CoT gives the model access to more compute, which improves reasoning capabilities. In our experiment logs, CoTs often did describe correct algorithm behavior, such as the correct posterior or upper confidence bound computations. We can describe this in more detail in the final version. > [Q2] *The paper concludes that external summarization is essential for desirable LLM behavior in simple environments.
How do the authors envision scaling this approach to more complex settings where external summarization might be infeasible? Are there any preliminary ideas or experiments that could address this limitation?* This is a fundamental open question for future work. One perhaps naive idea is via “orchestration”: we invoke the LLM once and ask it to summarize the history, then we use this “self-produced” summarization in a second invocation where we request a decision. While not provably correct, such summaries seem simple enough to experiment with. > *The results may indicate the intelligence embedded in LLMs is still not aligned with human.* True. However, we focus on comparing LLMs to algorithms rather than humans. --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: It seems that the major concerns raised by other reviewers are about the complexity of the evaluation tasks. While I still expect to see the performance of LLM in-context exploration on challenging tasks, I agree with the authors that the MAB setting is significant for studying exploration. Exploration in simple tasks is the foundation for more complicated tasks. Overall, I feel that LLM in-context exploration is an interesting direction. Thus I raise my score from 6 to 7.
Rebuttal 1: Rebuttal: Thanks for taking the time to review our submission. To summarize our contributions, we perform a systematic analysis of the extent to which LLMs are capable of exploration, a core component of reinforcement learning/decision-making, by deploying LLMs as agents in multi-armed bandit environments. Out of all configurations we tried, we found that only one configuration resulted in satisfactory exploratory behavior. Along the way, we develop several statistics which allow us to measure the degree to which LLMs can explore in a sample-efficient manner. We hope our responses have adequately addressed your concerns. If not, we are happy to engage further in the discussion period.
NeurIPS_2024_submissions_huggingface
2024
CoMat: Aligning Text-to-Image Diffusion Model with Image-to-Text Concept Matching
Accept (poster)
Summary: The paper introduces CoMat, an end-to-end diffusion model fine-tuning strategy for text-to-image generation that addresses misalignments between text prompts and generated images. This method integrates a novel image-to-text concept activation module and an attribute concentration module, aimed at improving text-to-image alignment. Strengths: 1. **Innovative Integration of Image-to-Text Models**: Utilizing an image-to-text model for concept activation in a diffusion-based approach is a novel application that enhances the alignment between generated images and text prompts. 2. **Comprehensive Modules for Fine-Tuning**: The attribute concentration module and the concept activation module are well-designed to tackle specific challenges in text-to-image generation, such as concept mismapping and concept omission. Weaknesses: 1. **Complexity of Implementation**: The integration of multiple components such as image-to-text models, segmentation models, and the fine-tuning strategy may introduce significant complexity and computational overhead. 2. **Insufficient Comparative Analysis**: Although differences from methods like TokenCompose [1] are discussed, there is a notable lack of direct quantitative comparisons with state-of-the-art methods including TokenCompose [1], Structure Diffusion [2], and Attend-and-Excite [3]. This comparison is crucial to establish the relative performance and advancements over existing techniques. 3. **Resource Intensiveness of Loss Computation**: The loss computation method $L_{i2t}$ may be resource-intensive and inefficient as it requires the diffusion model to undergo multiple denoising steps to generate an image before the loss can be calculated, potentially impacting scalability and practical usage. [1] Wang, Zirui, et al. "TokenCompose: Text-to-Image Diffusion with Token-level Supervision." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Feng, Weixi, et al. 
"Training-free structured diffusion guidance for compositional text-to-image synthesis." arXiv preprint arXiv:2212.05032 (2022). [3] Chefer, Hila, et al. "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models." ACM Transactions on Graphics (TOG) 42.4 (2023): 1-10. Technical Quality: 2 Clarity: 3 Questions for Authors: Can the image-to-text model **effectively** offer pixel-level optimization guidance to the diffusion model using $L_{i2t}$? Additionally, are there any ablation studies that demonstrate the effects of using $L_{i2t}$ for training? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The proposed CoMat model incorporates several sophisticated components, including an image-to-text concept activation module and an attribute concentration module. While these additions are innovative, they significantly increase the model's complexity and computational demands. This complexity may limit the scalability of the approach, especially in resource-constrained environments or when handling large-scale datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Complexity of Implementation** Thank you for your feedback. Indeed, we acknowledge the intricate nature of our approach. However, we argue that the complex design is necessary. Our method's multiple components work as a whole to address the misalignment problem while maintaining high-quality image generation: 1. Misalignment Solution: a) Concept Activation Module: Utilizes an image-to-text model to activate individual concepts within the prompt. b) Attribute Concentration: Employs a segmentation model to guide the activated concepts to their appropriate spatial locations in the generated image. 2. Quality Preservation: a) Fidelity Preservation Module: Incorporates a pre-trained diffusion model as a discriminator to mitigate potential quality degradation resulting from the image-to-text model's guidance. b) Mixed Latent Strategy: Leverages ground truth images to provide additional guidance during the diffusion process. As for the resource overhead, instead of full finetuning, our method uses LoRA to reduce the computational cost. Please refer to **the response to Comment 3** for further discussion. **Comment 2. Insufficient Comparative Analysis** Thanks for your advice. We provide more quantitative comparisons with state-of-the-art methods like TokenCompose [1], Structure Diffusion [2], and Attend-and-Excite [3] on the T2I-CompBench benchmark, as shown in Table 1 in the author rebuttal PDF. We will include these results in our final draft. **Comment 3. Resource Intensiveness of Loss Computation** Sorry for the confusion caused. We respectfully disagree for the following two reasons: 1. **The denoising process is a must for all the training losses, not only $\mathcal{L}_{i2t}$.** Our training process indeed needs multiple denoising steps to generate the image.
However, we add multiple supervision signals during this process to encourage the model to align the generated image with the prompt. The multi-step denoising process starts from pure noise $x_T$, which we iteratively denoise to produce an image $\mathcal{I}$. Of all the denoising steps, we uniformly sample $K$ steps to enable the gradient. We supervise the attention maps during denoising in these steps, which corresponds to $\mathcal{L}\_\text{pos}$ and $\mathcal{L}\_\text{neg}$. The gradient of $\mathcal{L}\_{i2t}$ and $\mathcal{L}\_{adv}$ is back-propagated through the image to the LoRA for the sampled $K$ steps. 2. **Proven more efficient and beneficial in previous works** Compared with directly supervised fine-tuning of the diffusion model, our training process is identical to the inference process of the diffusion model. This training-test alignment contributes to a more efficient learning process. In fact, our method only needs 2000 iterations to achieve good performance, while direct finetuning typically requires many more iterations (please refer to the table in Limitation 1). This training paradigm has been proven more efficient and beneficial in previous works like DPOK [5]. **Question 1. Can the image-to-text model offer effective pixel-level guidance?** We provide a visualization of the gradient of $\mathcal{L}\_{i2t}$ on the generated image in Fig. 1 in the author rebuttal PDF. **Question 2. Ablation studies that demonstrate the effects of using $\mathcal{L}_{i2t}$ for training** We conduct ablation studies on T2I-CompBench to verify the effectiveness of the $\mathcal{L}\_{i2t}$ loss in Table 4 in the paper. The $\mathcal{L}\_{i2t}$ loss greatly enhances the text-image alignment of the diffusion model. Please refer to Section 5.4 for more details. **Limitation 1. Increased model complexity and computational demands** Thanks for your feedback.
Although our method does introduce multiple components, these components bring abundant supervision that fosters fast training and excellent performance. Besides, we list the performance on T2I-CompBench and the training cost compared with GORS [4]:

| Model | Iteration | GPU Num | Color$\uparrow$ | Shape$\uparrow$ | Texture$\uparrow$ | Spatial$\uparrow$ | Non-Spatial$\uparrow$ | Complex$\uparrow$ |
| ----------- | ------------ | ------- | --------------- | --------------- | ----------------- | ----------------- | --------------------- | ----------------- |
| GORS (SDv2) | 50000-100000 | 8 | 0.6603 | 0.4785 | 0.6287 | 0.1815 | 0.3193 | 0.3328 |
| Comat-SD1.5 | 2000 | 8 | **0.6734** | **0.5064** | 0.6243 | **0.2073** | 0.3166 | **0.3575** |

Our method well balances training iterations and performance. Comat-SD1.5 generally achieves better performance with roughly 2% of the iterations. Besides, all these modules are removed during inference. Therefore, no inference cost is introduced. [1] Wang, Zirui, et al. "TokenCompose: Text-to-Image Diffusion with Token-level Supervision." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. [2] Feng, Weixi, et al. "Training-free structured diffusion guidance for compositional text-to-image synthesis." arXiv preprint arXiv:2212.05032 (2022). [3] Chefer, Hila, et al. "Attend-and-excite: Attention-based semantic guidance for text-to-image diffusion models." ACM Transactions on Graphics (TOG) 42.4 (2023): 1-10. [4] Huang, Kaiyi, et al. "T2i-compbench: A comprehensive benchmark for open-world compositional text-to-image generation." Advances in Neural Information Processing Systems 36 (2023): 78723-78747. [5] Fan, Ying, et al. "Reinforcement learning for fine-tuning text-to-image diffusion models." Advances in Neural Information Processing Systems 36 (2024). --- Rebuttal Comment 1.1: Comment: Thank you for your response. Most of my concerns have been addressed.
However, I'm still unsure whether the image-to-text model provides effective optimization guidance as presented in Fig. 1 of the author rebuttal PDF. Additionally, I have some concerns about the complexity of the method design. Despite these reservations, I believe this is solid work. Therefore, I've decided to raise my score to borderline accept in recognition of your efforts. Thank you for your hard work. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for the kind support of our work! **A reasonable explanation for the effective optimization guidance of the image-to-text model** The image-to-text model is trained to predict a caption that is highly aligned with the given image. Therefore, once the generated image is not aligned with the prompt, the image-to-text model will be "reluctant" to output the prompt as the caption for the generated image. This "reluctance" comes from the misalignment of certain concepts in the image. The image-to-text model views the image, discovers the misaligned areas, and finally gives a high $\mathcal{L}\_{i2t}$. We leverage this property and try to minimize this "reluctance". The gradient of $\mathcal{L}\_{i2t}$ mainly peaks at the areas where misalignment occurs, as shown in Fig. 1 in the author rebuttal PDF. This gradient is further back-propagated to the diffusion model to fix the misalignment in this area. That is where the pixel-level guidance comes from. We will definitely make this point clearer in the revised version!
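The "reluctance" discussed in this thread is simply the captioner's negative log-likelihood of the prompt. A minimal sketch of this scoring idea, with the captioner's per-token probabilities stubbed in as plain numbers (a real implementation would obtain them from a model such as BLIP):

```python
import math

def i2t_loss(token_probs):
    # Negative log-likelihood of the prompt's tokens given the image:
    # L_i2t = -sum_t log p(w_t | image, w_<t). The probabilities here
    # stand in for a captioning model's output, keeping the sketch
    # self-contained.
    return -sum(math.log(p) for p in token_probs)

# An aligned image makes every prompt token likely ...
aligned = i2t_loss([0.9, 0.8, 0.85])
# ... while a misaligned concept drags one token's probability down.
misaligned = i2t_loss([0.9, 0.1, 0.85])
assert misaligned > aligned  # higher loss flags the misalignment
```

Minimizing this loss with respect to the image (and, through it, the diffusion model's weights) is what yields the pixel-level gradient signal described above.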
Summary: The paper breaks the misalignment problem in T2I down into two parts, concept ignorance and concept mismapping, and proposes a fine-tuning strategy to enhance prompt understanding and following. The method includes two modules: a concept activation module to maximize the posterior probability, and an attribute concentration module for positive and negative mapping. Strengths: This paper proposes a general framework for supervising image generation, including an image-to-text model scoring method and a prior preservation loss for better text alignment without losing fidelity, and an attribute concentration method that utilizes an open-vocabulary segmentation method to constrain the attention modification area. Weaknesses: 1. The training cost is disproportionate to this complicated framework. The framework includes diffusion training - full-step inference - MLLM judging - grounding segmentation for each entity ... However, the training cost is only ten hours on 8 A100s, only 2k iterations are needed to converge, and a rank-128 LoRA is enough. I doubt that such limited training is enough for the SDXL model to attain attribute-learning ability, and therefore I question the soundness of this paper. More training information is needed to verify that the experiments are sufficient and genuine. 2. The real training cost is a concern since the training involves manipulating diffusion results in the pixel space and inference with an MLLM. 3. The proposed method seemingly cannot assign attributes to multiple same-name objects, such as an Asian girl with an Indian girl: the segmentation model cannot differentiate the two girls and therefore cannot assign attributes. The rest are questions, not weaknesses; please see the questions part. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How is an image-to-text LVLM used to provide a matching score? Most LVLM models are developed for VQA and captioning, so scoring is usually not a skill for them. 2.
The description of the Mixed Latent Strategy is not sufficient. After reading twice, I didn't get the meaning of "in addition to the latent starting from pure noise"; T2I training never starts from pure noise but from $x_0 + \epsilon_{t}$. 3. Since the training details concern me, if the authors could kindly open-source the code, the paper's soundness would no longer be a concern. Would you open-source the code? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes, to a certain extent. The paper doesn't contain a limitation section in the main paper. However, in the supplementary material, concerns about training cost and MLLM usage are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
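The reviewer's $x_0 + \epsilon_{t}$ point refers to the standard DDPM forward (noising) process, which is also what the rebuttal's "noisy latent from the GT images" uses. A minimal sketch under that standard formulation (illustrative, not the paper's code):

```python
import math
import random

def add_noise(x0, alpha_bar):
    # Standard DDPM forward process:
    #   x_tau = sqrt(alpha_bar) * x0 + sqrt(1 - alpha_bar) * eps,
    # with eps ~ N(0, I) and alpha_bar the cumulative noise-schedule
    # term at timestep tau. x0 is a (toy) latent given as a list.
    a, b = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    return [a * x + b * random.gauss(0.0, 1.0) for x in x0]
```

As `alpha_bar` approaches 1 the noisy latent stays near the GT latent, and as it approaches 0 the latent approaches pure noise, which connects the two latent types that the Mixed Latent Strategy combines.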
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Training cost is disproportionate** Thank you for your feedback. We will open-source our training code for reproducibility. We tested more training iterations in our pilot study but observed no apparent gain, so we keep 2000 iterations as the default. We also tested different settings for the training parameters: rank 128 and 256 for LoRA, and full-parameter finetuning. We found no apparent gain from a larger rank or full-parameter finetuning, and therefore chose LoRA with rank 128 for efficient training. A LoRA of relatively small rank can already endow the model with different abilities; various previous works adopt this setting and achieve good performance [1, 2]. **Comment 2. Real training cost** Thank you for your feedback. 1. Pixel space supervision. During the full-step inference, we uniformly sample $K$ steps to perform the pixel space supervision, namely calculating the $\mathcal{L}\_\text{pos}$ and $\mathcal{L}\_\text{neg}$ losses. $K$ is 5 in our experiments. Therefore, only 10% of the inference steps require gradients and pixel space supervision. This may partially account for the fast training speed. 2. MLLM inference. In practice, we leverage BLIP trained on captioning tasks instead of an MLLM (please refer to Appendix A.3 in the main paper for detailed information). This choice accelerates the computation of the $\mathcal{L}\_{i2t}$ loss. **Comment 3. Assign attributes to multiple same-name objects** This is a really interesting question. Currently, it is difficult to assign attributes to multiple same-name objects using only the segmentation model. However, we argue that an image-to-text model with advanced image-text understanding ability may distinguish the attribute for each same-name object and offer valid guidance. As shown in Fig.
1 in the author rebuttal PDF, the image-to-text model can also provide pixel-level supervision for the image. We will work on solving this question in future work. **Question 1. How to use an image-to-text LVLM to provide a matching score?** Sorry for the confusion caused. The scoring is conducted by calculating the log-likelihood of the LVLM outputting the prompt as the caption, given the generated image. In practice, we leverage an LVLM specifically trained for captioning tasks (e.g., BLIP) to do the scoring. Their scoring capability comes from the nature of their training: these captioning models are trained with the negative log-likelihood loss, i.e., the model needs to maximize the probability of generating the caption given the corresponding image. Therefore, whenever the generated image does not align with the text prompt, the LVLM will output a low log-likelihood, i.e., $\mathcal{L}\_{i2t}$ is large. We treat $-\mathcal{L}\_{i2t}$ as the alignment score of the LVLM and try to maximize it, i.e., minimize $\mathcal{L}\_{i2t}$. **Question 2. More details about the Mixed Latent Strategy** Sorry for the confusion caused. We detail our mixed latent strategy below. The mixed latent strategy contains two types of latents in the fine-tuning procedure, i.e., the latent starting from pure noise and the noisy latent from the GT images. 1. **Latent starting from pure noise** This serves as the main branch in our pipeline. Our fine-tuning process shares the same procedure for generating an image as the diffusion model does at inference time. We uniformly sample $K$ steps from all the inference steps to enable the gradient. Therefore, the latent is sampled from pure noise $\mathcal{N}(0,I)$. We iteratively denoise it to obtain the generated image. The image is then used to calculate the $\mathcal{L}\_{i2t}$ and $\mathcal{L}\_{adv}$ losses.
It is also sent to the segmentation model to provide the object mask for computing $\mathcal{L}\_\text{pos}$ and $\mathcal{L}\_\text{neg}$. The latent starting from pure noise corresponds to the upper left part of Fig. 4 in the main paper. Please refer to [2] for how to receive the gradient from the loss. 2. **Noisy latent from the GT images** We also aim to inject information from the GT images to stabilize the fine-tuning process. We randomly sample a timestamp $\tau$ from a pre-defined range $[T\_1, T\_2]$. Then we obtain $x\_{\tau}$ by adding the timestamped noise $\epsilon\_{\tau}$ to the latent of the GT image $x_0$. We also iteratively denoise this noisy GT latent to get $\hat{x}\_0$, as we do for the latents starting from pure noise. This $\hat{x}\_0$ is only used to calculate the $\mathcal{L}\_{i2t}$ loss. The latent starting from the noisy GT corresponds to the upper left part of Fig. 4 in the main paper. **Question 3. Will the code be open-sourced?** Thanks for your feedback. We will open-source the training code. [1] Sun, Jiao, et al. "Dreamsync: Aligning text-to-image generation with image understanding feedback." Synthetic Data for Computer Vision Workshop @ CVPR 2024. 2023. [2] Wu, Xiaoshi, et al. "Deep Reward Supervisions for Tuning Text-to-Image Diffusion Models." arXiv preprint arXiv:2405.00760 (2024). --- Rebuttal Comment 1.1: Title: Thanks for your detailed reply; I'll keep my score as Borderline accept Comment: This paper required a lot of effort to build the whole pipeline, and the results' soundness is credible, which leads me to maintain the accept score. On the other hand, the paper's innovation lies more in the engineering part, with the code not open-sourced. So I decided to maintain my score as Borderline accept. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their support and valuable feedback on our work!
We appreciate your suggestions, particularly regarding the clarification of the LVLM scoring usage and the mixed latent strategy. These insights will significantly strengthen our paper, and we will incorporate detailed explanations in our revision. Additionally, we will work to address the issue of assigning attributes to multiple objects with the same name. We believe the revisions above will substantially enhance the quality and contribution of our paper. We are committed to open-sourcing all of our training code to ensure full reproducibility of the training process upon acceptance. Given this commitment and our planned revisions, we respectfully ask if you would consider reevaluating our work and raising the score.
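The K-of-T gradient scheme discussed in this rebuttal thread (K=5, with roughly 10% of inference steps carrying gradients, implying about 50 steps) can be sketched as follows; the total step count is inferred, and the schedule is an illustrative sketch rather than the released code:

```python
import random

def sample_grad_steps(total_steps=50, k=5):
    # Uniformly sample the K of T denoising steps that keep
    # gradients enabled during the full-step inference.
    return sorted(random.sample(range(total_steps), k))

def denoise_schedule(total_steps=50, k=5):
    grad_steps = set(sample_grad_steps(total_steps, k))
    # Each entry says whether that step runs with gradient enabled;
    # only these steps back-propagate L_i2t / L_adv through the LoRA
    # and supervise the attention maps (L_pos / L_neg).
    return [(t, t in grad_steps) for t in range(total_steps)]
```

Keeping gradients on for only 10% of the steps is what keeps the memory and compute of differentiating through a full sampling trajectory manageable.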
Summary: This paper proposes a fine-tuning strategy for text-to-image diffusion models to improve the alignment of generated images to text prompts. The solution components are summarized as follows: 1. Concept Activation Module: This module helps the model focus on ignored text concepts by leveraging an image-to-text model to supervise the generation process. 2. Attribute Concentration Module: This module aims to improve the localization of attributes within images, ensuring that characteristics like color or texture are correctly applied to the right parts of the image. Strengths: 1. An end-to-end fine-tuning strategy is employed to address various text-image misalignment issues in a unified manner, making it easy to deploy. Weaknesses: 1. Functionally, the Fidelity Preservation proposed in this paper is similar to the Class-specific Prior Preservation Loss introduced in reference [1]. Please provide a detailed analysis of the differences between the two. 2. The model presents a potential risk of overfitting to the characteristics of the training data, which may affect its generalizability to various real-world scenarios. Therefore, additional experiments related to generalization are necessary. For instance, it should be evaluated whether the model fine-tuned on dataset A can demonstrate improvements when tested on dataset B. 3. The misalignment between generated images and textual concepts is a complex issue with potentially intricate underlying causes. Therefore, it is recommended to use the methods proposed in reference [2] to identify any remaining misalignment issues in the fine-tuned model. [1] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [2] Du, Chengbin, et al. "Stable diffusion is unstable." Advances in Neural Information Processing Systems 36 (2024).
Technical Quality: 4 Clarity: 4 Questions for Authors: See weakness. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: Already discussed in the paper Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Differences between Class-specific Prior Preservation Loss and Fidelity Preservation** Thank you for your advice. Indeed, our Fidelity Preservation (FP) module shares a similar high-level idea with the Class-specific Prior Preservation Loss (CPP Loss) introduced in [1], i.e., preserving generation quality while finetuning. However, our method differs in the following two aspects: 1. **Target Task and Preserved Domain** DreamBooth seeks to personalize image generation for specific objects. While the introduced CPP Loss primarily maintains generative capabilities within a narrow domain (specifically, the object class present in the training data), our proposed FP module operates within the context of text-image alignment. FP aims to preserve general generative capabilities by computing an adversarial loss across the entire training dataset, encompassing a diverse range of text prompts. 2. **Methodology** DreamBooth finetunes the diffusion model with the pretraining loss, i.e., the squared-error denoising loss at a certain timestamp, and the CPP Loss follows the same form. In contrast, our fine-tuning procedure simulates the inference process of the diffusion model to conduct full-step inference. We aim to directly supervise the generated image to achieve training-test alignment. Therefore, we propose the novel FP module, which leverages a discriminator to adversarially preserve quality. The discriminator is also updated along with the fine-tuning process, enabling finer control of image quality. We will add this discussion to the final draft for clarification. **Comment 2. Potential risk of overfitting the training data** Thank you for your feedback. We conduct zero-shot evaluation on the long and complex prompts in DPG-Bench, as shown in Table 2 in our main paper.
All of our training data contains only short text prompts similar to COCO captions: the sentences in the training set are 12.13 words long on average, while the average prompt length in DPG-Bench is 78.23 words. As shown in Table 2 in the main paper, our method significantly enhances alignment by over 10 points on SD1.5 and over 2 points on SDXL. This result shows that our proposed method improves the general prompt-following capability rather than overfitting to the training data. A qualitative example on DPG-Bench is shown in Fig. 11 in the Appendix. Besides, TIFA in Table 2 in the main paper is another alignment benchmark, with which our training data also does not overlap. We observed a 7.4-point improvement for Comat-SD1.5 and a 1.6-point improvement for Comat-SDXL. This further demonstrates generalization. **Comment 3. Identify remaining misalignment issues in the fine-tuned model** Thank you for your advice. This is a very interesting suggestion. We investigate our method with the approach proposed in [2]. We will add an extra section in the appendix to discuss our findings. [1] Ruiz, Nataniel, et al. "Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation." Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2023. [2] Du, Chengbin, et al. "Stable diffusion is unstable." Advances in Neural Information Processing Systems 36 (2024). --- Rebuttal Comment 1.1: Comment: Since my concerns have not been fully addressed, I have decided to keep the score unchanged. For example, the author did not provide a positive response to Q3, not even a brief experimental analysis. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. We sincerely apologize for the concise response and any misunderstanding it may have caused. **We initiated testing of our method using the referenced work [1] at the start of the rebuttal period.
However, due to time constraints and limited GPU resources, we were unable to complete the experiments before the rebuttal phase concluded.** Consequently, we plan to present the full extent of our findings in the final draft. Below, we provide details of the experiments conducted and **our current observations** to offer insights into the remaining misalignments we've identified with the assistance of [1]. First of all, we briefly summarize [1]: the paper introduces an attack method to automatically find out what prompts cause the Stable Diffusion model to generate misaligned images. Based on the successful attack prompts, [1] summarizes four prompt patterns where the Stable Diffusion model often fails to generate aligned images. The four patterns are: a) Variability in Generation Speed; b) Similarity of Coarse-grained Characteristics; c) Polysemy of Words; d) Positioning of Words. Our experiment and analysis contain the following 3 parts: **Experiment 1. Do the discovered four patterns of the original impairment of the Stable Diffusion model still exist in CoMat?** Based on the example four patterns in [1], we manually create 20 prompts for patterns (a), (b), and (c), and 5 meta patterns for pattern (d), which gives 30 prompts in total. Then we use them as prompts for the original Stable Diffusion model and CoMat. The generated images are evaluated by humans. We show the ratio of successful generation below: | Model | Pattern (a) | Pattern (b) | Pattern (c) | Pattern (d) | | --------------------- | ----------- | ----------- | ----------- | ----------- | | Stable Diffusion v1.5 | 45.0% | 35.0% | - | 13.3% | | CoMat-SD1.5 | 65.0% | 50.0% | - | 30.0% | These results align with the enhancements targeted by CoMat. Patterns (a), (b), and (d) all involve multiple objects in the prompt. The Concept Activation Module contributes to the object's existence.
Besides, the Attribute Concentration Module restricts the diffusion model to only attend to one object's token for each object area in the image. This further prevents the object-combining problem found in pattern (b). Pattern (c) concerns the polysemy of words. We argue that without sufficient contextual information, it is infeasible to evaluate the generation result. This experiment demonstrates that CoMat greatly mitigates the original vulnerabilities discovered in Stable Diffusion. However, we acknowledge that there remains a gap between CoMat's performance and perfect alignment, particularly in addressing pattern (c) where prompts contain more than two objects. We are actively working to address these remaining challenges. We apologize that the visualization results cannot be provided in the discussion stage; we will include them in our final draft. **Experiment 2. Can the learned Gumbel Softmax distribution of Stable Diffusion v1.5 still effectively attack CoMat-SD1.5?** We implement the method of [1] using its open-sourced code on GitHub. Following the setting in [1], we learn a Gumbel Softmax distribution for each class from ImageNet-1K [2]. Due to the time limit, instead of generating 50 images for each class, we generate 5 images for each class and calculate the attack success rate. Then, we directly use this learned Gumbel Softmax distribution to attack CoMat-SD1.5. The result is below: | Model | Short Prompt Success | Long Prompt Success | | --------------------- | -------------------- | ------------------- | | Stable Diffusion v1.5 | 47.6% | 51.1% | | CoMat-SD1.5 | 39.8% | 45.3% | As the results show, CoMat-SD1.5 suffers less from the attack. This result is reasonable since CoMat has gone through the fine-tuning process, so the distribution learned for SD1.5 may not be valid for CoMat-SD1.5. We will include the result following the original setting in [1], where 50 samples are generated for each class, in our final draft. **Experiment 3.
How does CoMat act under the auto-attack method proposed in [1]?** Finally, we directly apply the method introduced in [1] to CoMat-SD1.5. Again, due to the time and resource limits, we only test on 5 samples for each class. The result is below: | Model | Short Prompt Success | Long Prompt Success | | ----------- | -------------------- | ------------------- | | CoMat-SD1.5 | 45.2% | 52.3% | Our experiments reveal that CoMat demonstrates superior robustness compared to SD1.5 in scenarios involving short prompt attacks. However, we observed a decline in performance when dealing with long prompts. This discrepancy may be due to the lack of long prompts in CoMat's training dataset. --- Rebuttal 2: Comment: Thanks for your response. We also conduct zero-shot evaluations on DPG-Bench, which consists of long and complex prompts. Our method exhibits excellent zero-shot performance both qualitatively and quantitatively. Please refer to our rebuttal for Comment 2 for details. We believe this could address the concern about generalization ability to long prompts. Besides, we visualize some successful attacks, and we find that the generated prompts contain many meaningless letters, which could largely affect the generation process. An example is: "these small, colorful fiddler crab are known for their distinctively asymmetric claws, with one being much larger than the other. males use their enlarged claw to attract mates and defend their territory., hair, joheat col< troll < shepherds a < ; children "" < lake rest <". Therefore, we argue that the result in Experiment 3 may not fully reflect our method's generalization ability. --- Rebuttal 3: Comment: There is some controversy regarding the issue of generalization, but this is not a critical flaw.
The experimental results with short prompts do show that the model fine-tuning has improved text-image alignment to some extent (although it is difficult to determine the extent of this improvement given the limited amount of data). Therefore, I have decided to raise my score to a Borderline accept.
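For background on the attack used in Experiments 2 and 3 above: the method of [1] learns a Gumbel Softmax distribution over candidate tokens so that discrete prompt choices remain differentiable during the attack search. A minimal, library-free sketch of Gumbel-Softmax sampling (illustrative only; not the actual implementation of [1], and the function name and temperature are assumptions):

```python
import math
import random

def gumbel_softmax_sample(logits, tau=1.0):
    """Relaxed categorical sample: perturb each logit with Gumbel(0, 1)
    noise, then apply a temperature-scaled softmax. As tau -> 0 the
    output approaches a one-hot vector over the categories."""
    noisy = []
    for logit in logits:
        # Clamp the uniform draw away from 0 and 1 to keep the logs finite.
        u = min(max(random.random(), 1e-12), 1.0 - 1e-12)
        gumbel = -math.log(-math.log(u))
        noisy.append((logit + gumbel) / tau)
    # Numerically stable softmax over the perturbed logits.
    m = max(noisy)
    exps = [math.exp(s - m) for s in noisy]
    total = sum(exps)
    return [e / total for e in exps]
```

Because the output is a smooth probability vector rather than a hard token choice, gradients from the generation loss can flow back into the learned distribution, which is what makes per-class attack distributions learnable in the first place.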
Summary: This paper studies the prompt-following issues within text-to-image generation models and proposes a very simple yet effective solution by supervising the diffusion models with recognition models like BLIP for image captioning and Grounded-SAM for image segmentation. The authors also propose a fidelity preservation strategy and a mixed latent strategy to address overfitting risks. They fine-tune the SDXL or SD1.5 models on 20K complex text prompts and demonstrate encouraging results on various benchmarks that focus on evaluating attribute binding and object relationship accuracy. Strengths: - The idea of fine-tuning text-to-image generation models with image recognition models, which act as a reward model, is reasonable and has been explored in previous works such as AlignProp [1] and DRaFT [2]. [1] Aligning Text-to-Image Diffusion Models with Reward Backpropagation, arXiv 2023 [2] Directly Fine-Tuning Diffusion Models on Differentiable Rewards, ICLR 2024 - The experimental results effectively verify the proposed method's effectiveness. Weaknesses: - The technical novelty is limited, as similar ideas have been explored in both AlignProp [1] and DRaFT [2]. The key difference lies mainly in the different choices of text prompts and the additional supervision on the attention maps. - The authors fail to provide a thorough user study to justify the effectiveness of the proposed approach from the user's perspective. - Although the proposed approach achieves encouraging results, these results focus primarily on fine-grained text-following capability rather than more fundamental challenges such as counting-aware prompts and spatial-aware arrangement prompts. Consequently, the actual contribution of this paper is relatively weak, as the presented results focus on improving benchmark performance rather than addressing the core challenges within text-to-image generation models. 
Technical Quality: 3 Clarity: 3 Questions for Authors: Refer the Weaknesses Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Refer the Weaknesses Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your valuable comments. We found them extremely helpful in improving our draft. We address each comment in detail, one by one below. **Comment 1. Limited technical novelty** Thank you for your feedback. We respectfully disagree with the reviewer's assessment. Indeed, our method follows the basic reward fine-tuning pipeline explored in [1, 2]. **We argue that we only adopt this basic paradigm for fine-tuning the diffusion model. We introduce a variety of novel designs in our procedure to specifically solve the underlying problems in the image-text alignment issue. It is inappropriate to simply classify our method as the same approach as AlignProp and DRaFT.** As stated in the strengths section of reviewer 9f7Y, it is innovative to introduce the image-to-text model to solve the image-text alignment task. And the other modules are well-designed for tackling specific challenges in the image-text alignment task. We summarize our technical novelty as follows: 1. **Image-to-text model as reward model** We investigate how to use the hidden knowledge of the pre-trained image-to-text model to supervise the diffusion model for alignment. We extend the basic reward fine-tuning pipeline to the image-text alignment task and prove that the pre-trained image-to-text model is a suitable reward model for a large-scale training setting. 2. **Attribute Concentration module** Our visualization results reveal that the activated concepts often fail to map to the correct area in the image, which causes the misalignment. We introduce the novel Attribute Concentration module to supervise at the pixel level to address this issue. Previous works like [1, 2] only supervise the final generated image, omitting guidance at the intermediate generation steps. How to add supervision for finer-grained control in these steps has not been explored before our method. 3.
**Fidelity Preservation module** [1, 2] either focus on the aesthetic task or are limited to a very small setting (tens of prompts for the training data). How to effectively preserve the generation capability in a larger setting remains unexplored. We introduce the novel Fidelity Preservation module, which leverages the underlying knowledge of the pre-trained diffusion model to serve as the discriminator. This adversarial design effectively solves the image corruption problem. 4. **Mixed Latent Strategy** We also propose the novel strategy of mixing the noisy GT latent with latents initialized from pure noise. This design incorporates the information from the GT image and stabilizes the fine-tuning procedure, while [1, 2] only start from pure noise. **Comment 2. Thorough user study** Thanks for your advice. We extend our original user preference study introduced in Appendix A.1. We categorize the alignment metric into the following aspects as in DSG1K: entities, attributes, relations, and global. Entities contain the whole object and the part of the object. Attributes contain color, type, etc. Relations contain spatial relations and actions. Global contains other aspects like illumination, etc. 5 participants are asked to evaluate the 100 image pairs generated by SDXL and CoMat-SDXL. For each alignment metric, we set 1 as aligned and 0 as not aligned. We show the result in Table 2 in the author rebuttal PDF. Our method enhances the baseline model on all the metrics, with the most significant improvement on entities. **Comment 3. Weak Contribution** Sorry for the confusion caused. We respectfully disagree with the reviewer's assessment for the following reasons: 1. **No fundamental challenges solved** First of all, we argue that the fundamental challenges of the text-following ability span various domains. Both the reviewer's mentioned tasks (i.e., the counting-aware prompts and spatial-aware prompts) and other tasks like object existence and attribute binding are of the same importance.
If the object concept in the prompt is not activated, the object will not even appear in the image, let alone satisfy spatial or counting constraints. Therefore, it is inappropriate to prioritize the counting-aware or spatial-aware tasks over the other tasks, and hence to diminish the importance of our work. Besides, the results in Table 1 in the main paper reveal that our method successfully addresses the spatial-aware prompts and brings significant improvements to both SD1.5 and SDXL, where a nearly 80% increase is witnessed for SD1.5. We also add the evaluation result for the counting-aware prompts (numeracy) introduced in [3]. The result is shown in Table 3 in the author rebuttal PDF. Although our training data does not contain prompts specifically designed to solve the counting-aware task, the improvement further testifies to the effectiveness of our method. We will investigate how to specifically address the spatial-aware and counting-aware tasks in future work. 2. **Focus on improving benchmark performance** We provide both quantitative and qualitative evaluation of the zero-shot performance on long and complex text prompts in Table 2 and Fig. 11 in the main paper. The huge improvement proves our method's generalizability to various scenarios in the alignment tasks. Besides, we provide visualization results in Fig. 12 to 14 in the main paper to prove our method improves the prompt-following ability across different tasks. These results include various challenges like spatial-aware prompts (the bottom of Fig. 13), attribute-aware prompts (the spider and the cabin in Fig. 12), complex prompts (the lighthouse and the man in Fig. 12), etc. [1] Aligning Text-to-Image Diffusion Models with Reward Backpropagation, arXiv 2023 [2] Directly Fine-Tuning Diffusion Models on Differentiable Rewards, ICLR 2024 [3] T2I-CompBench++: An Enhanced and Comprehensive Benchmark for Compositional Text-to-image Generation
Rebuttal 1: Rebuttal: Overall author rebuttal: We thank all reviewers for their thoughtful comments. We greatly appreciate all the reviewers' acknowledgment that our method is **effective and achieves excellent results**. We have added new evaluations and visualization in our author rebuttal PDF. The main concerns raised by the reviewers revolve around technical details, experiment settings, and contributions. We make a global response to the pixel-level guidance provided by the image-to-text model, i.e., the $\mathcal{L}\_{i2t}$ loss here: As shown in Fig. 1 in the author rebuttal PDF, the image-to-text model offers pixel guidance by attending to the place for the wrong generation, e.g., the eraser, the street name, and the jersey. It also leaves the correct generation untouched, e.g., the yellow pencil, the number 21, and the silver tie. For other individual comments or questions, we answer all of them under each review. We are committed to incorporating these improvements and addressing all the raised concerns in our final draft. Pdf: /pdf/c73470cbd5e0d5de17c0a57bf14bef66d1dcfb5b.pdf
NeurIPS_2024_submissions_huggingface
2024
HLM-Cite: Hybrid Language Model Workflow for Text-based Scientific Citation Prediction
Accept (poster)
Summary: This paper proposes a Hybrid Language Model workflow for citation prediction, where core citations are predicted from superficial citations and non-citations rather than using a simple binary classification approach. This method can handle candidate sets of up to 100K papers and demonstrates better performance compared to previous methods. Strengths: 1. The paper expands the citation prediction task from binary citation prediction to distinguishing core citations, superficial citations, and non-citations. This distinction is important as core citations form the main foundation of a paper. Core citations are defined through a citation network. 2. The method uses two modules: a retrieval module and an LLM agentic ranking module. This design enables the method to handle large candidate sets efficiently. 3. The paper employs a one-shot example as a guide and compares the effectiveness of one-shot and few-shot methods. The one-shot example of the Transformer is notable. 4. The method is tested on real-world data through various experiments, demonstrating good performance. Weaknesses: 1. The paper mentions in the appendix that they keep the top candidate unchanged and only rank the remaining candidates in the decider. This should also be mentioned in the main text to avoid confusion, especially in Figure 3, regarding the omission of the top 3 candidates. 2. In the ablation study of the curriculum stage, the performance without Stage 1 is very close to the full curriculum, especially in Social Science. Moreover, the Prec@3 of Social Science is actually better without the full curriculum. This suggests that Stage 1 might not be that effective, especially considering its computational cost. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How is the number of unchanged candidates in the decider stage selected? 2. The paper uses a text-based method in the first stage to retrieve candidates. 
How does this compare to previous binary citation prediction methods? For example, what if binary citation prediction is used to retrieve citations first, then the LLM agentic ranking module is used to further distinguish between core citations and superficial citations? 3. How are the numbers of tq1 and tq2 selected? Would this affect performance? Additionally, since superficial citations should be more than core citations in real cases, why are tq1 and tq2 kept equal? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The paper discusses limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer 7Ub3 **Q1.** *Design of unchanged candidates in the LLM decider.* **Response:** Thank you for carefully reading through the appendix. We apologize for not clearly explaining the LLM decider's working process. Here, we explain the detailed designs. With the retrieval size of $r_q$ and $t^1_q$ core citations in the candidate set, the decider needs to select the $t^1_q$ candidates most likely to be core citations from the $r_q$ retrieved candidates. One approach is to directly rerank the $r_q$ retrieved candidates and select the top $t^1_q$. However, there are two shortcomings: (1) reranking all the $r_q$ retrieved candidates leads to a long input context for the LLM decider, harming the performance; (2) directly reranking the $r_q$ retrieved candidates does not utilize any information from the embedding model. In other words, the $r_q$ retrieved candidates have different inner products under the embedding model, which are informative as predictions of their core-citation likelihood from the retrieval module. We utilized the likelihood prediction from the retrieval module, regarding the $(2t^1_q-r_q)$ retrieved candidates with the largest inner products as safe core citations and exempting them from reranking. Then we reranked the remaining $2(r_q-t^1_q)$ retrieved candidates with the LLM and selected the top $(r_q-t^1_q)$ ones, resulting in $t^1_q$ selected candidates in total. For example, we used $t^1_q=5$ and $r_q=8$ in our main results. Here, the number of fixed top candidates is 2, and we reranked the remaining 6 retrieved candidates and selected the top 3. Also, we kept $t^1_q=5$ and tested $r_q=10$ in Section 4.5.2. Here, the number of fixed top candidates is 0, and we reranked all 10 retrieved candidates with the LLM agent and selected the top 5. We will add these details in the main text of our paper.
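For illustration, the selection rule described above can be sketched in a few lines (a simplified sketch; `llm_rerank` is a hypothetical stand-in for the LLM decider, and we assume $t^1_q \le r_q \le 2t^1_q$ as in the examples):

```python
def select_core_candidates(retrieved, t1, llm_rerank):
    """Pick t1 likely core citations from r retrieved candidates.

    `retrieved` is assumed sorted by embedding inner product, descending.
    The top (2*t1 - r) candidates are kept as safe core citations; the
    remaining 2*(r - t1) are reranked by the LLM decider, and the top
    (r - t1) of those are added, giving t1 selections in total.
    """
    r = len(retrieved)
    n_fixed = 2 * t1 - r                       # candidates exempt from reranking
    fixed = retrieved[:n_fixed]                # safest candidates, kept as-is
    reranked = llm_rerank(retrieved[n_fixed:]) # rerank the remaining 2*(r - t1)
    return fixed + reranked[:r - t1]
```

With $t^1_q=5$ and $r_q=8$ this fixes 2 candidates and reranks 6, matching the numbers above; with $r_q=10$, `n_fixed` is 0 and all 10 candidates are reranked.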
To avoid confusion, we will also modify Figure 3 accordingly, highlighting the unchanged top candidates and the reranking of the remaining ones. **Q2.** *Ablation study of the curriculum stage 1.* **Response:** Thanks for pointing this out. It is true that the performance without Stage 1 is very close to the full curriculum, especially in social science. To examine this, we use a t-test to compare the full design with the ablation versions. Please refer to "result.pdf" in the global response for detailed results. The results give us a deeper understanding of the role of each part in our designs. Generally, all parts of our designs are valid with significance ($p<0.01$ or $p<0.1$ in overall performance through the t-test). Moreover, when focusing on social science papers, which only comprise a small proportion of all papers, Stage 1 of curriculum finetuning is only slightly beneficial. Therefore, when applying the method only to social science papers, it is an alternative for users to skip Stage 1 if they want to save computational cost at the cost of a slight performance drop. In contrast, when applying it to natural science papers, it is necessary to keep Stage 1 for better performance. **Q3.** *Comparison to previous binary citation prediction methods.* **Response:** Thanks for this question. Actually, the baseline embedding models we compared are binary citation prediction methods. In the retrieval stage, both the baselines and our models encode the query and candidates into vectors and then calculate each candidate's score via inner products. The only difference is that the baseline methods have not been trained to distinguish core/superficial citations. Therefore, they only try to ensure the score ordering cited papers > non-citations (binary citation prediction). In contrast, our model learned to distinguish citations and non-citations in the curriculum Stage 1 and further learned to distinguish core/superficial citations in the curriculum Stage 2.
Therefore, our model can try to ensure the score ordering core citations > superficial citations > non-citations. Also, the ablation study of curriculum Stage 2 is the case where our model only learned binary citation prediction. The significant performance drop indicates the limitation of merely using a binary citation prediction model in our core citation prediction task. We will add the explanations to the revised version of our paper. Hopefully, our interpretations can help readers understand the relationships and differences between our model and the binary citation prediction methods. **Q4.** *Details of the numbers of $t_q^1$ and $t_q^2$.* **Response:** Thanks for this question. In this paper, the values of $t_q^1$ and $t_q^2$ are manually set to be 5. We used this setting to align with SciDocs, which is a widely applied benchmark dataset in scientific text embedding and citation prediction [1]. In SciDocs, each query is provided with a candidate set of 5 cited papers. Therefore, we extended it into a candidate set of 5 core citations ($t_q^1$) and 5 superficial citations ($t_q^2$). Our model can directly adapt to testing sets with different values of $t_q^1$ and $t_q^2$. In all parts of our designs, our model did not fit to specific values of $t_q^1$ and $t_q^2$ but only learned to rank the candidates in the order core citations > superficial citations > non-citations. Therefore, its performance would not be affected by the values of $t_q^1$ and $t_q^2$. Besides, according to our statistics over 12M papers across 19 scientific fields in the Microsoft Academic Graph (MAG), on average, each paper has 10.207 ($\sigma=$0.007) core citations and 10.727 ($\sigma=$0.007) superficial citations. Therefore, it is reasonable to set $t_q^1=t_q^2$. We will add the explanations and statistics to the revised version of our paper. Hopefully, our interpretations will make it easier for readers to understand our experimental settings.
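For intuition, the three-way score ordering described above (core citations > superficial citations > non-citations) could be enforced with a standard pairwise margin loss. The sketch below illustrates that kind of training signal only; it is not the paper's actual objective, and the margin value is an assumption:

```python
def ordering_margin_loss(core, superficial, negatives, margin=0.1):
    """Average hinge penalty over all score pairs that should be ordered:
    every core score should exceed every superficial score by `margin`,
    and every superficial score should exceed every negative score."""
    pairs = [(hi, lo) for hi in core for lo in superficial]
    pairs += [(hi, lo) for hi in superficial for lo in negatives]
    # Each pair contributes 0 when correctly ordered with enough margin.
    return sum(max(0.0, margin - (hi - lo)) for hi, lo in pairs) / len(pairs)
```

The loss is zero exactly when every required pair is separated by at least the margin, so a binary-only model (which never separates core from superficial scores) would keep incurring penalties on the first group of pairs.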
[1] SPECTER: Document-level Representation Learning using Citation-informed Transformers. *ACL 2020*. --- Rebuttal Comment 1.1: Title: Reminder to reply Comment: Dear reviewer, if you have not clicked the reply button, please don't forget to do so as the deadline is approaching. Your input is important to the authors! - AC
Summary: The paper proposes a framework, HLM-Cite (Hybrid Language Model) for scientific citation prediction based on incorporating generative language models embeddings and LLMs as agents. The pretrained text embeddings are used to retrieve high likelihood core citations (a term the paper introduces to define as more meaningful citations, compared to superficial or non-citations). Three different LLM agents are then used to rank the papers by reasoning over them. Strengths: - This is an interesting use of language models, both smaller in scale and LLMs, for a task relevant to researchers. In particular, this type of work is very relevant and could benefit the open science community. - The system created is well designed, the choices are justified (e.g. the three agents) - The experiments are extensive, and multiple models are compared, which grounds the analyses scientifically - The results are strong and performance surpasses other models, showing that the proposed framework is indeed well designed - The paper is clearly written and the work is well presented Weaknesses: - The idea of a "core" citation is not necessarily novel, as citation classification systems have been around for a while. While there isn't one classification system the scientific community agrees upon, there have been multiple proposed. The paper should acknowledge this body of work around citation classification systems, especially if proposing a new one: "core", "superficial" and "non-citations" - which is essentially a binary classification on the citation type ("core" vs "superficial"). Some references: - [1] Cohan, Arman, et al. "Structural scaffolds for citation intent classification in scientific publications." arXiv preprint arXiv:1904.01608 (2019). - [2] Garfield, Eugene. "" Science Citation Index"—A New Dimension in Indexing: This unique approach underlies versatile bibliographic systems for communicating and evaluating information." Science 144.3619 (1964): 649-654. 
- [3] Jurgens, David, et al. "Measuring the evolution of a scientific field through citation frames." Transactions of the Association for Computational Linguistics 6 (2018): 391-406. - [4] Moravcsik, Michael J. "Citation context classification of a citation classic concerning citation context classification." Social Studies of Science 18.3 (1988): 515-521. - [5] Nicholson, Josh M., et al. "scite: A smart citation index that displays the context of citations and classifies their intent using deep learning." Quantitative Science Studies 2.3 (2021): 882-898. - [6] Teufel, Simone, Advaith Siddharthan, and Dan Tidhar. "Automatic classification of citation function." Proceedings of the 2006 conference on empirical methods in natural language processing. 2006. - While the paper provides an interesting and well designed system, it is definitely geared towards applications of LLMs and models, rather than novel scientific contributions. Might be more appropriate for a specialized workshop. This is why my recommendation is borderline. - Minor notes: - Lines 13-15 are a bit hard to understand, could be rephrased e.g. [...an LLM to predict citations, which leads to results ..] - Line 88 -> notations instead of notifications? Technical Quality: 3 Clarity: 4 Questions for Authors: - Since you are using MAG to finetune the various models, how are you annotating the entries in the graph with the three citation classes you proposed: "core", "superficial" and "non-citations", especially given the size of this knowledge graph? - In Section 3.3.2, it is mentioned that the analyses are manually reviewed and revised, to make sure that they correctly reveal the expected behavior (lines 193-195). How many papers is this done for? The entire set of papers? - In Section 4.1, it is mentioned that 5 core citations and 5 superficial citations are sampled for each of the 450k queries. 
Similarly to the first question - how are these citations initially annotated with one of the "core" vs "superficial" classes? - In several places in the paper (e.g. Section 4.5) you mention "Without loss of statistical significance". How was this computed? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: Authors mention LLM limitations, such as hallucinations, in their conclusion. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer b1Uz **Q1.** *Novelty of the core citation idea.* **Response:** Thanks for the literature. We agree that citation classification is not new. Existing works classified citations with traditional ML [2,3,5] and DL [1,4] according to the roles in the context (background, method, etc.) [1,2] and the author's attitude towards the citations (supporting, contrasting, etc.) [3-5]. However, they all focused on the semantics of each individual paper, which requires manual annotation, limiting the data size to thousands and consuming up to years of manual work [4]. In contrast, we defined core citations on the citation network among vast papers. For a query paper $q$ and its citations $S_q$, if there exists a subsequent paper that cites $q$ and one of $q$'s citations $s_q\in S_q$ at the same time, then we regard $s_q$ as a core citation of $q$ (Section 2.1). Instead of subjective semantic labels by manual work, our design objectively reflects the collective behavior of the scientific community, i.e., the subsequent citations of $q$ indicate the degree of recognition of $q$ and its citations $S_q$ by the community. Thus, we classified citations from a novel perspective. Also, because our classification criteria are objective, we require no subjective manual annotations and can label large datasets by automatic structural analysis on the citation network. After obtaining the labels from the citation network, we trained our model to predict them purely based on the texts. Hopefully, we have clarified the novelty of our core citation idea compared to existing works. **Q2.** *Annotation of core and superficial citations.* **Response:** Yes, we labeled citations in the entire MAG automatically and required no manual annotations (See Q1). **Q3.** *Scientific contributions and compliance with NeurIPS's scope.* **Response:** Thanks for recognizing our contribution to the application. 
Here, we want to clarify that our work is not merely an application of LLMs and models but includes novel scientific contributions: - **Definition and identification of core citation.** As we mentioned in Q1, our definition of core citation classifies citations from a novel aspect, which exhibits statistical rationality. Instead of massive human work, we can label large-scale datasets by automatic network structure analysis based on our idea. - **Hybrid workflow combining small and large language models.** More importantly than the finetuning and prompting details, we proposed the hybrid framework incorporating small and large LMs. Small models are computationally efficient but lack reasoning capability, only capturing textual similarities, while large models have text reasoning ability but are computationally expensive and limited in context length. Our framework combines the advantages of both models, ensuring computational efficiency in large-scale tasks and maintaining reasoning ability beyond simple textual similarities. Besides, as mentioned in the NeurIPS call for papers, the conference's topics include: - Applications (e.g., language, etc.) - Machine learning for sciences (e.g., social sciences, etc.) In this paper, we first proposed a novel and meaningful NLP task in social sciences, and then designed an applicable ML workflow that reached good performance on the proposed task, which we believe is within the scope of NeurIPS. Also, we notice that a number of papers with similar scopes and contributions have been published in NeurIPS. For example: - [6] designed a communicative LLM agent framework via prompting, which can generate conversational data for studying. - [7] designed two failure modes to guide the jailbreak of LLMs, which informed the safe training of LLMs. Considering these, we believe that our paper is suitable for NeurIPS, and hopefully, our interpretations can better illustrate the novelty and value of our work.
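The automatic labeling rule referenced above (from the Q1 response: a citation $s_q$ of query $q$ is core if some subsequent paper cites both $q$ and $s_q$) amounts to a set intersection on the citation graph. A minimal sketch, with illustrative function and variable names:

```python
def label_core_citations(query, citations, cited_by):
    """Split a query paper's citations into core and superficial.

    `cited_by[p]` is the set of papers that cite paper p. A citation s
    is core iff some subsequent paper cites both the query and s.
    """
    followers = cited_by.get(query, set())          # papers citing the query
    core = {s for s in citations
            if followers & cited_by.get(s, set())}  # shared follower exists
    superficial = set(citations) - core
    return core, superficial
```

Because the rule needs only set intersections over the citation graph, it scales to millions of papers without any manual annotation, which is the point of contrast with semantics-based citation classification schemes.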
**Q4.** *Details about manual review and revision.* **Response:** Thanks for the question. We only manually reviewed and revised one query paper (the selected Transformer-XL paper) and several of its candidates. We used the manually reviewed and revised texts as the one-shot example for the analyzer and decider in processing the other papers in the dataset, so no further manual work is needed. Please refer to Appendix A.6 for the full texts we manually reviewed and revised. **Q5.** *Statistical significance.* **Response:** Thanks for the question. We used the two-tailed t-test for statistical significance of the performance differences between different models. In detail, we tested: - **Overall performance.** Ours VS the strongest baseline: $p<0.001$ averaging all fields. - **Ablation studies.** The full design VS different ablation versions: $p<0.01$ or $p<0.1$ averaging all fields. - **Other analyses.** One-shot VS few-shot, GPT models VS other models. Here, we only tested 10\% of the testing set to reduce computational consumption, but the small $p$-values indicate that this 10\% sample is sufficient to reflect the performance differences with statistical significance. **Q6.** *Minor typos.* **Response:** Thanks. We will check and correct these. [1] Structural scaffolds for citation intent classification in scientific publications. *NAACL 2019*.\ [2] Measuring the evolution of a scientific field through citation frames. *Transactions of the Association for Computational Linguistics 2018*.\ [3] Citation context classification of a citation classic concerning citation context classification. *Social Studies of Science 1988*.\ [4] Scite: A smart citation index that displays the context of citations and classifies their intent using deep learning. *Quantitative Science Studies 2021*.\ [5] Automatic classification of citation function. *EMNLP 2006*.\ [6] CAMEL: Communicative Agents for "Mind" Exploration of Large Language Model Society. 
*NeurIPS 2023*.\ [7] Jailbroken: How Does LLM Safety Training Fail? *NeurIPS 2023*. --- Rebuttal Comment 1.1: Title: Reminder to reply. Comment: Dear reviewer, if you have not clicked the reply button, please don't forget to do so as the deadline is approaching. Your input is important to the authors! - AC
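The network-based labeling rule described in Q1 above (a citation $s_q$ of query $q$ is core if some subsequent paper cites both $q$ and $s_q$) can be sketched in a few lines. This is an illustrative sketch only, not the authors' actual MAG pipeline; the dict-based network representation and the function names are assumptions.

```python
# Illustrative sketch of the core-citation rule from Q1: s_q in S_q is a core
# citation of query q if some subsequent paper p cites both q and s_q.
# The dict-based citation network here is an assumption, not the MAG pipeline.

def core_citations(q, cites):
    """cites maps each paper id to the set of paper ids it cites."""
    S_q = cites.get(q, set())
    core = set()
    for p, S_p in cites.items():
        if q in S_p:           # p is a subsequent paper citing q ...
            core |= S_q & S_p  # ... so citations shared by q and p are core
    return core

def superficial_citations(q, cites):
    return cites.get(q, set()) - core_citations(q, cites)

# Toy network: q cites a and b; p later cites both q and a.
toy = {"q": {"a", "b"}, "p": {"q", "a"}}
```

On the toy network, `a` comes out as a core citation of `q` and `b` as superficial, mirroring the textual rule: the label is derived purely from structure, so no manual annotation is needed.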
Summary: The authors introduce the concept of core citations to distinguish important citations from superficial ones and non-citations. This shifts the citation prediction task from simple binary classification to the more subtle approach of identifying core citations. They then propose HLM-Cite, a hybrid language model workflow that combines embedding and generative language models. This two-stage pipeline first retrieves high-likelihood core citations from a vast set of candidates using a pretrained text embedding model and then ranks them using a logical reasoning process conducted by LLMs. With the two-stage pipeline, the authors show that they can scale the candidate sets to 100K papers, thousands of times larger than existing works. They evaluate HLM-Cite on a dataset across 19 scientific fields, demonstrating a 17.6% performance improvement compared to SOTA methods. Strengths: The problem is an interesting one, and right at the beginning of the paper, the authors show that the idea of core citations is an important one by empirically studying the Microsoft Academic Graph. The paper makes a substantive contribution to the problem. The authors employ the GTE-base pretrained model [15], one of the top models on the Massive Text Embedding Benchmark (MTEB) leaderboard. Additionally, they have a curriculum finetuning module. To complete the architecture, the LLM Agentic Ranking Module aims to improve the accuracy of core citation prediction by incorporating LLMs’ textual reasoning capability to rectify the ranking of papers retrieved in the previous stage by core-citation likelihood. Experimentally, the authors are able to scale the candidate sets to 100K papers, thousands of times larger than existing works. The two-stage approach (called HLM-Cite) is evaluated on a dataset across 19 scientific fields, demonstrating a 17.6% performance improvement compared to SOTA methods. The method could have positive broader impacts. 
Accurate citation prediction can help reveal information hidden in the link space of citation networks, which is valuable for citation-based computational social science studies. These studies may investigate the patterns of paper publication and scientific innovation, pointing researchers toward efficient research approaches and advancing modern science. Weaknesses: My main comment is that the terminology in Section 2.1 is overly dense. There is way too much use of superscripts and subscripts for relatively simple concepts. If possible, the authors should try to make the symbols simpler. On the technical front, I still feel that the approach the authors are taking might be overly complex, and I'm unsure what impact it will have on actual computational science practice. I'm also wondering whether a simpler approach based on a core-periphery algorithm from network science, rather than text analysis, might not be useful for identifying core citations in existing papers. Obviously, this is less useful for prediction, but I am not sure there is any impact here in prediction; people writing papers already know what the core citations in their field are. Technical Quality: 3 Clarity: 2 Questions for Authors: Given that keyword overlap seems to be highly predictive of whether a paper is a core citation or not, I would have liked to see some comparisons to simpler baselines than what the authors ultimately showed. Also, is text analysis even necessary, or can network science be used here, as in so much of science of science research? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors consider a technical limitation, namely "LLMs' hallucination problem." LLMs may output unfaithful analysis under certain circumstances and poison specific samples. When researchers want high-likelihood citation suggestions in preparing manuscripts, these samples may cause confusion. 
The authors note that verifying the output of LLM agents and improving the reliability of their hybrid workflow could be pursued in the future. Also, the curriculum finetuning process requires a certain amount of computational resources. Lightening this load also merits further investigation. There are no significant social/ethical limitations of this work, including negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer 3KQH **Q1.** *Overly dense terminology in Section 2.1.* **Response:** Thanks for pointing this out. In this paper, we follow previous computational social science studies [1,2] and define the core citations of a query paper from the citation network. We apologize for making the formulas too complicated when trying to express the intuitive definition mathematically. Here, we rephrase the terminology: *For a query paper $q$ and its citations $S_q$, if there exists a subsequent paper that cites $q$ and one of $q$'s citations $s_q\in S_q$ at the same time, then $s_q$ is among the core citations of $q$. Therefore, $q$'s core citations can be expressed as:* $$ \tilde{S}_q=\\{s_q\in S_q\mid\exists p, q\in S_p, s_q\in S_p\\}. $$ Hopefully, this simplified version will be easier for readers to understand. **Q2.** *Necessity and impact of text-based (rather than network science-based) core citation prediction.* **Response:** Thank you for this question. You are right that network science is helpful for identifying core citations of existing papers. As mentioned in Q1, we did utilize knowledge in network science and science of science to define the core citations of a given query paper $q$ based on subsequent papers that cite $q$ in the citation network. Then, we classify citations in the Microsoft Academic Graph (MAG) as core/superficial. However, this paper did not focus on analyzing these classifications of existing citations. Instead, we treat the network-based classifications as labels and train a text-based model, which can predict the core citations of $q$ without knowing any subsequent papers that cite $q$. Here are two representative scenarios where the subsequent papers citing $q$ are unknown for the given query $q$, illustrating the meaningful applications of our text-based model: - **Paper recommendation (for ongoing papers).** Our model can recommend suitable citations to researchers who are writing papers. 
You are right that researchers should know what the core citations in their field are. However, the rapid development of modern science has brought exponential growth in publications, and researchers may find an overwhelming number of publications that match their interests but are largely irrelevant to their needs in writing a specific paper [4]. The so-called "burden of knowledge" [2,3] makes manual identification of papers to read or cite challenging and heavily dependent on an individual researcher's ability. Therefore, much work has been done on paper recommendation systems [4,5]. Our model can serve as such a system, where users input their draft of the abstract as the query and set a group of possibly related papers as the candidates. Then, our model can help identify papers that are most suitable to cite in the ongoing paper. In this scenario, it is impossible to identify the core citations of the ongoing paper based on subsequent papers citing it. - **Science of science study (for existing papers).** As you mentioned, there exists abundant science-of-science research that utilizes citation networks to analyze published papers [1,2]. However, recent studies have recognized that the subsequent citations of a paper require three to five years after publication to accumulate, and a proportion of papers, known as "sleeping beauties", may experience a burst of attention after up to ten years [6-8]. Several famous studies, e.g., measuring disruption, have reported the impact of delayed citation on their analyses [1]. Therefore, although identifying core citations based merely on the citation network is feasible for old papers that have accumulated sufficient subsequent citations, it is not applicable to the latest papers that have not yet been cited. In contrast, our text-based method can identify the core citations of a newly published paper without relying on subsequent papers citing it. 
This allows us to analyze the latest science using the idea of core citation, which is meaningful for understanding cutting-edge trends in scientific research. Hopefully, these interpretations can better illustrate the potential impact and applications of our study. **Q3.** *Simpler baseline that directly utilizes keyword overlap.* **Response:** Thanks for proposing this simplified approach. Here, we test using the overlap ratio between candidate keywords and query keywords to predict the likelihood of a candidate being a core citation. ||PREC@3|PREC@5|NDCG@3|NDCG@5| |:-:|:-:|:-:|:-:|:-:| |PATTON|0.253|0.205|0.271|0.234| |Keyword Overlap|0.336|0.269|0.304|0.264| |SPECTER|0.454|0.460|0.570|0.504| |HLM-Cite (GPT-3.5)|0.725|0.644|0.735|0.677| |**HLM-Cite (GPT-4o)**|**0.736**|**0.655**|**0.743**|**0.686**| The results confirm that the keyword overlap ratio is indeed predictive of core citations, achieving performance similar to mid-tier baselines. However, its performance is still far behind that of advanced text-based methods, including our design. This illustrates the advantage and necessity of predicting core citations based on the full texts rather than keywords alone. [1] Large teams develop and small teams disrupt science and technology. *Nature 2019*.\ [2] Papers and patents are becoming less disruptive over time. *Nature 2023*.\ [3] The burden of knowledge and the “death of the renaissance man”: Is innovation getting harder? *The Review of Economic Studies 2009*.\ [4] Scientific Paper Recommendation: A Survey. *IEEE Access 2019*.\ [5] Scientific paper recommendation systems: a literature review of recent publications. *International Journal on Digital Libraries 2022*.\ [6] Defining and identifying Sleeping Beauties in science. *PNAS 2015*.\ [7] Modeling citation dynamics of “atypical” articles. *Journal of the Association for Information Science and Technology 2018*.\ [8] New directions in science emerge from disconnection and discord. 
*Journal of Informetrics 2022*. --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your valuable suggestions. Following your suggestions, we have rephrased the dense terminology and clarified the potential impact of our citation prediction task. Hopefully, our revision can help the readers better understand the contributions and potential applications of our study. Please let us know if you have any questions. We are looking forward to further discussion. The authors
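The keyword-overlap baseline tested in Q3 above can be sketched in a few lines. This is an illustrative sketch under assumptions: the rebuttal does not state the exact normalization of the overlap ratio, so a Jaccard-style ratio is used here, and all names and toy inputs are hypothetical.

```python
# Sketch of the simple keyword-overlap baseline from Q3: rank candidate papers
# by the fraction of keywords they share with the query. The Jaccard-style
# normalization is an assumption; the rebuttal does not specify the exact ratio.

def overlap_score(query_kw, cand_kw):
    q, c = set(query_kw), set(cand_kw)
    return len(q & c) / len(q | c) if q | c else 0.0

def rank_candidates(query_kw, candidates):
    """candidates: dict of candidate id -> keyword list; ids sorted by score."""
    return sorted(candidates,
                  key=lambda cid: overlap_score(query_kw, candidates[cid]),
                  reverse=True)
```

Such a lexical baseline captures topical closeness but no reasoning about the roles papers play, which is consistent with the rebuttal's finding that it lands near the mid-tier baselines and well below text-based methods.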
Summary: This paper studies text-based citation prediction by exploring the varying roles of paper citations from foundational to superficial. The authors introduce the concept of core citations, emphasizing the most important citations over superficial ones. Then, they propose HLM-Cite, a hybrid language model workflow combining embedding and generative language models. The approach involves a two-stage pipeline: first, a pre-trained embedding model coarsely retrieves high-likelihood core citations, and then a large language model ranks these candidates through reasoning. Experiments on the Microsoft Academic Graph across 19 scientific fields demonstrate the effectiveness of HLM-Cite in comparison with scientific pre-trained language models, text-rich network representation learning methods, and general-domain language models for text embedding. Strengths: + The proposed concepts of core citations and superficial citations are intuitive. + The adopted retrieval-reranking paradigm is well-motivated, combining the strengths of pre-trained embedding models and large language models. The usage of curriculum finetuning is also reasonable in the retrieval module. + Besides performance comparisons, the authors conduct various additional experiments to validate their motivation and design choices, including ablation studies, hyperparameter analyses, and the effect of examples/LLMs. Weaknesses: - Some scientific language model baselines are missing. To be specific, SPECTER 2.0 [1] and SciMult [2] have shown superior performance over SciNCL on various tasks including citation prediction, but neither of them is compared in the experiments. - The dataset used in this paper is constructed by the authors. It is unclear whether the proposed model works for other datasets. For example, the SciDocs dataset from the SPECTER paper [3] is a widely used benchmark dataset for citation prediction, but it is not considered in the experiments. 
Also, it is unclear why the authors cluster the 19 scientific fields into 2 big areas (i.e., natural science and social science). Would it be possible to directly show the results in each field (e.g., like [4])? - Statistical significance tests are not conducted. It is not clear whether the gaps between HLM-Cite and the baselines/ablation versions are statistically significant or not. In fact, in Tables 2 and 3, some gaps are subtle, therefore multiple runs are needed and the p-value should be reported. [1] SciRepEval: A Multi-Format Benchmark for Scientific Document Representations. EMNLP 2023. [2] Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding. Findings of EMNLP 2023. [3] SPECTER: Document-level Representation Learning using Citation-informed Transformers. ACL 2020. [4] The Effect of Metadata on Scientific Literature Tagging: A Cross-Field Cross-Model Study. WWW 2023. Technical Quality: 3 Clarity: 3 Questions for Authors: - Could you compare HLM-Cite with SPECTER 2.0 and SciMult? - Could you report the results on the SciDocs benchmark dataset (or explain the reasons why you did not use it)? - Could you conduct a statistical significance test (e.g., two-tailed t-test) to compare your model with the strongest baseline/ablation version in Tables 2 and 3? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors discuss the limitations and potential negative societal impact of their work in the Checklist. I suggest they further consider the limitations mentioned in my review above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: # Response to reviewer tEdQ **Q1.** *Missing baselines, e.g., SPECTER 2.0 and SciMult.* **Response:** Thank you for providing these up-to-date baselines for scientific texts. We test SPECTER 2.0 [1] and SciMult (both vanilla and MoE) [2] on our tasks as suggested. Also, we investigate new baselines for general text embedding to complement the existing ones. We include GTE-v1.5, the upgraded version of GTE [3], the latter being our previous best baseline. We find the new baselines surpass our existing ones as expected, and still, our method outperforms all baselines. These new baselines provide stronger evidence of the advantage of our method, and they also indicate the possibility of updating the GTE model, which we used for initialization, to stronger pretrained LMs to further improve the performance of our method. Here, we show the overall performance for your convenience. Please refer to "result.pdf" in the global response for details. |Model|PREC@3|PREC@5|NDCG@3|NDCG@5| |:-:|:-:|:-:|:-:|:-:| |SPECTER|0.545|0.460|0.570|0.504| |SciMult-vanilla|0.569|0.485|0.593|0.529| |SciMult-MoE|0.579|0.496|0.603|0.539| |SPECTER 2.0|0.602|0.515|0.627|0.560| |GTE-base|0.639|0.556|0.659|0.597| |GTE-base-v1.5|0.641|0.557|0.662|0.599| |GTE-large-v1.5|0.649|0.563|0.671|0.606| |HLM-Cite (GPT-3.5)|0.725|0.644|0.735|0.677| |**HLM-Cite (GPT-4o)**|**0.736**|**0.655**|**0.743**|**0.686**| **Q2.** *SciDocs dataset.* **Response:** Thanks for mentioning SciDocs, which is a widely applied benchmark dataset in scientific text embedding. The major obstacle to testing our method on SciDocs is the lack of ground-truth core/superficial citation labels. As reported in the SPECTER paper [4], SciDocs provides approximately 30K papers in total for the citation-prediction task, including 1K queries with a candidate set of 5 cited and 25 uncited papers for each query. However, the cited papers are treated equally, without more fine-grained labels. 
In our design, the core citations are identified on the citation network based on subsequent papers citing them. Regretfully, the 30K papers in SciDocs alone are not enough to build a citation network for identifying core citations, because the citations among them are too sparse. To solve this dilemma, we built a citation network with hundreds of millions of research papers in the Microsoft Academic Graph (MAG), identifying core/superficial citations in it. Since there is no direct mapping that bridges SciDocs and MAG, we cannot locate the raw 30K papers in MAG, and thus, we directly sample new queries and candidates from MAG for testing (Section 4.1). We compare our testing dataset and SciDocs from the following aspects: - **Data scale.** Our testing dataset is much larger in scale than SciDocs. We include 50K queries with a set of 10 cited (5 core and 5 superficial) papers for each query. - **Field coverage.** Like SciDocs, our testing dataset covers multiple scientific fields. We randomly sampled papers in MAG, where the sampled papers covered all 19 fields in MAG. - **Testing format.** Our testing uses the format of query-candidates, which is the same as the citation-prediction task in SciDocs. Considering these, we believe that the testing results on the new dataset we constructed are convincing enough to compensate for SciDocs's absence. Still, we agree that it is meaningful to use SciDocs to test how various methods perform on the core-citation prediction task we proposed. To enable this in the future, we will work on adding core/superficial labels to the cited papers provided in SciDocs. We will locate SciDocs's 30K papers in MAG by comprehensively matching the titles, authors, and keywords. This way, we can label the citations in SciDocs as core/superficial, providing our task with a more widely acknowledged dataset, as well as contributing an additional application scenario to SciDocs. 
**Q3.** *Results in each field.* **& Q4.** *Statistical significance.* **Response:** Thanks for these suggestions. Like [5], we show the detailed performance in each field. Also, we conduct statistical significance tests to better compare our model with the strongest baseline/ablation version. We used the two-tailed t-test between the performance of (1) our model VS the strongest baseline and (2) the full design VS ablation versions. Please refer to "result.pdf" in the global response for details. The results provide us with a deeper understanding of the performance and validity of our designs, illustrating that: - **Overall performance (our model VS strongest baseline, Table 2 in paper).** Our method surpasses the top baselines in all fields ($p<0.01$ in most individual fields, and $p<0.001$ averaging all fields through t-test), demonstrating its general applicability to a wide range of fields. - **Ablation studies (full design VS ablation versions, Table 3 in paper).** Generally, all parts of our designs are valid with significance ($p<0.01$ or $p<0.1$ in overall performance through t-test). Moreover, we notice that when focusing on social science papers, which only comprise a small proportion of all papers, Stage 1 of curriculum finetuning is only slightly beneficial. Therefore, when applying the model only to social science papers, users may alternatively skip Stage 1 to save computational cost at the price of a slight performance drop. In contrast, when applying to natural science papers, it is necessary to keep Stage 1 for better performance. [1] SciRepEval: A Multi-Format Benchmark for Scientific Document Representations. *EMNLP 2023*.\ [2] Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding. *Findings of EMNLP 2023*.\ [3] Alibaba-NLP/gte-large-en-v1.5. *Hugging Face 2024*.\ [4] SPECTER: Document-level Representation Learning using Citation-informed Transformers. 
*ACL 2020*.\ [5] The Effect of Metadata on Scientific Literature Tagging: A Cross-Field Cross-Model Study. *WWW 2023*. --- Rebuttal Comment 1.1: Title: Last chance to reply to the authors! Comment: Dear reviewer, if you have not clicked the reply button, please don't forget to do so as the deadline is approaching. Your input is important to the authors! - AC --- Rebuttal 2: Comment: Dear reviewer, Thanks again for your insightful suggestions. Following your advice, we have incorporated the up-to-date baselines and clarified all other issues. Hopefully, our revision can help the readers better understand the contributions and potential impact of our study. Please let us know if you have any questions. We are looking forward to further discussion. The authors --- Rebuttal 3: Comment: I thank the authors for their detailed responses and additional experiments, which address some of my concerns. After reading the responses and the other reviews, I decided to increase my rating from 6 to 7. Meanwhile, I strongly suggest the authors include the additional results in their revised version. --- Rebuttal Comment 3.1: Comment: Thanks for your valuable suggestions again. We will include the additional results and improve our paper accordingly.
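The two-tailed significance testing described in the Q3 & Q4 reply above can be sketched with the standard library alone. Two hedged assumptions: the test is treated as a paired test on matched test samples (the rebuttal does not say paired vs. independent), and the p-value uses a normal approximation to the t distribution, which is reasonable for the large sample sizes involved; in practice one would likely use `scipy.stats.ttest_rel`.

```python
import math
from statistics import NormalDist

# Sketch of a two-tailed t-test comparing per-sample scores of two models.
# Assumptions: a paired test on matched samples, and a normal approximation
# to the t distribution (valid for large n), to avoid a scipy dependency.

def paired_two_tailed_test(scores_a, scores_b):
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    p = 2 * (1 - NormalDist().cdf(abs(t)))  # normal approx. to the t dist
    return t, p
```

With scores from two models on the same test queries, a small p (e.g. below 0.001, as reported above for the overall comparison) indicates the gap is unlikely to be sampling noise.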
Rebuttal 1: Rebuttal: Dear Reviewers, Thanks for taking the time to review our paper. We greatly appreciate your valuable feedback and insightful suggestions. Please find our detailed one-on-one responses to the raised questions in 'Rebuttal' following the reviews. In addition, we have attached here 'result.pdf', which includes the supplementary results suggested by the reviews. When we mention it in the responses to some specific questions, please refer to that file accordingly. Once again, we want to express our gratitude for your efforts and professional advice. We are committed to incorporating all your suggestions to enhance the quality of our paper. Sincerely,\ The Authors Pdf: /pdf/ec742168edeba78751d3888a4bbb94c156750848.pdf
NeurIPS_2024_submissions_huggingface
2024
Perplexity-aware Correction for Robust Alignment with Noisy Preferences
Accept (poster)
Summary: The paper proposes Perplexity-Aware Correction (PerpCorrect) to improve the alignment of large language models (LLMs) by detecting and correcting noisy preferences (NPs). PerpCorrect identifies NPs using the differences in perplexity (PPLDiff) between chosen and rejected responses. The method involves aligning a surrogate LLM with clean validation data, refining it with reliably clean and noisy data, and correcting NPs based on PPLDiff. Experiments demonstrate that PerpCorrect enhances alignment performance, is practical with a small amount of validation data, and is compatible with various alignment techniques. Strengths: 1. This paper innovatively addresses the impact of noisy preferences on alignment from the perspective of correcting noisy data. 2. The experiments are comprehensive, covering existing online, offline, and robust alignment methods, such as PPO, DPO, cDPO, and rDPO. The results show that PerpCorrect achieves state-of-the-art robust alignment performance evaluated with different LLMs and datasets. Besides, PerpCorrect can be further combined with other loss-based robust alignment methods. Weaknesses: 1. The standard deviation of reward accuracy is not reported in Tables 3, 4, and 6. Technical Quality: 3 Clarity: 3 Questions for Authors: I have two minor questions about the experiments: 1. As shown in Tables 1 and 3, the reward accuracy of other DPO family alignment methods evaluated with Llama2-7B is better than that evaluated with phi-2 (2B). However, PerpCorrect performs better when evaluated on phi-2 (2B). Could you give an explanation for this phenomenon? 2. As shown in Table 6, PerpCorrect shows significant improvement when combined with other methods, such as SLiC, IPO, and cDPO, but less improvement when combined with rDPO. Could you give some analysis of this phenomenon? 
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper explains its limitations in the appendix, such as time efficiency and the requirement for a validation dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
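The PPLDiff idea summarized above can be sketched from token log-probabilities. This is an illustrative sketch only: treating PPLDiff as a difference of log-perplexities and using a fixed swap threshold are assumptions made here for clarity; PerpCorrect derives its decision rule from the surrogate LLM aligned on clean validation data, and all names below are hypothetical.

```python
# Sketch of the PPLDiff idea: under the surrogate LLM the chosen response
# should be less perplexing than the rejected one, so a large positive
# difference hints at a noisy preference. Using log-perplexities and a fixed
# threshold here is an assumption for illustration, not PerpCorrect's rule.

def log_perplexity(token_logprobs):
    """Mean negative log-probability of the response tokens (log of PPL)."""
    return -sum(token_logprobs) / len(token_logprobs)

def ppl_diff(chosen_logprobs, rejected_logprobs):
    return log_perplexity(chosen_logprobs) - log_perplexity(rejected_logprobs)

def correct_preference(chosen_logprobs, rejected_logprobs, threshold=0.0):
    """Swap chosen/rejected when PPLDiff exceeds the threshold (likely noisy)."""
    if ppl_diff(chosen_logprobs, rejected_logprobs) > threshold:
        return "swap"
    return "keep"
```

Swapping rather than discarding is what makes this a correction method: a detected NP is turned back into a usable clean preference instead of being dropped from the training set.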
Rebuttal 1: Rebuttal: Many thanks for your comments! Please find our replies below. ``[Reply to W1] Please refer to our general author rebuttal [R3]. `` ``[Reply to Q1] We provide a discussion about this phenomenon as follows. `` We found that the two models have similar proportions of NPs in the denoised dataset using PerpCorrect. The table below reports the proportion of NPs before and after PerpCorrect with phi-2 and Llama2, respectively. These empirical results suggest that PerpCorrect (Stage II) is not the main reason why PerpCorrect-DPO performs better when evaluated on phi-2 (2B). | Before | After (Llama2) | After (phi-2) | | ------------- | -------------- | ------------- | | 9.86% (~10%) | 6.05% | 6.41% | | 19.93% (~20%) | 9.76% | 7.33% | | 29.94% (~30%) | 14.14% | 14.5% | | 40.06% (~40%) | 13.98% | 16.06% | As shown in Table 3, the reward accuracy of vanilla DPO on phi-2 is better than that on Llama2-7B. We conjecture that the phi-2 model has better robustness, which is why PerpCorrect performs better when evaluated on phi-2. ``[Reply to Q2] The superior performance of rDPO and the gap between the estimated and actual proportion of NPs are the two main reasons.`` **The superior performance of rDPO:** Under different proportions of NPs (ranging from 10% to 40%), rDPO demonstrated superior performance, achieving greater than 90% reward accuracy. This performance surpasses that of other methods such as cDPO, SLiC, and IPO, leaving less room for improvement. **The gap between the estimated and actual proportion of NPs:** One reason why PerpCorrect-rDPO is not as effective as rDPO for certain proportions of NPs (10% and 20%) is that there is a gap between the estimated and actual proportion of NPs in the denoised dataset based on Eq. 15. We present the estimated and actual proportions of NPs in the denoised dataset in the table below. 
| The Estimated Proportion of NPs | The Actual Proportion of NPs | | ------------------------------- | ---------------------------- | | 4.36% | 6.05% | | 8.34% | 9.76% | --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response, which has resolved my concerns.
Summary: This paper presents a novel method called PerpCorrect for robust alignment in large language models, particularly in the presence of noisy preferences in training data. PerpCorrect addresses noisy preferences by evaluating the perplexity difference (PPLDiff) between selected and rejected responses. By leveraging PPLDiff to identify and rectify these noisy preferences, PerpCorrect generates a denoised training dataset. Experimental results demonstrate that PerpCorrect achieves state-of-the-art alignment performance with only a small amount of validation data. Strengths: 1. The paper is well-written, offering a comprehensive explanation of the proposed method, including detailed descriptions, mathematical formulas, and pseudocode. 2. The proposed method can be integrated with both existing online and offline alignment methods, enhancing their robustness without requiring significant modifications to their core methodologies. 3. PerpCorrect demonstrates strong performance with only a modest number of clean validation data points, which is beneficial in scenarios where such data is scarce or expensive to obtain. 4. The proposed method has been empirically tested on several datasets and models, consistently showing superior performance over traditional methods in the presence of noisy data. Weaknesses: 1. Tables 3 and 4 do not include the standard deviations for reward accuracy. 2. Despite requiring fewer validation data points, its performance and accuracy still heavily depend on the availability of a high-quality clean validation dataset to initially train the surrogate LLM. Therefore, it is necessary to discuss potential solutions to this limitation. 3. The process of calculating perplexity differences (PPLDiff) for each data point and aligning a surrogate LLM multiple times could be computationally intensive and time-consuming. Therefore, this additional cost needs to be discussed. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. 
Will the PPLDiff calculated on a new dataset, using models similar to those used with the other data (e.g., Llama-2-7b-chat), result in a normal distribution with a mean of 0? 2. What are the reasons that make PerpCorrect-IPO perform better than PerpCorrect-cDPO and PerpCorrect-rDPO? Please analyze this phenomenon. Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: As discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your comments! Please find our replies below. ``[Reply to W1] Please refer to our general author rebuttal [R3].`` ``[Reply to W2] Please refer to our general author rebuttal [R2].`` ``[Reply to W3] Please refer to our general author rebuttal [R1].`` ``[Reply to Q1] We validated this assumption by conducting experiments on other datasets using various models. `` We randomly selected 10,000 data points from each dataset and calculated PPLDiff using different models. The datasets and LLMs were downloaded from the Hugging Face website. The following table reports the mean of PPLDiff values. | Model | HH-RLHF [1] | SafeRLHF [2] | SHP [3] | WebGPT [4] | Avg. | | :----------------------: | :---------: | :----------: | :-----: | :--------: | :----: | | Qwen2-1.5B | -0.140 | -0.002 | 0.040 | -0.018 | -0.030 | | Qwen2-1.5B-Instruct | -0.149 | -0.009 | 0.046 | -0.021 | -0.033 | | Yi-1.5-6B | -0.158 | -0.103 | 0.105 | -0.036 | -0.048 | | Yi-1.5-6B-Chat | -0.159 | -0.054 | 0.069 | -0.040 | -0.046 | | gemma-2-2b | -0.140 | -0.051 | 0.001 | -0.024 | -0.053 | | gemma-2-2b-it | -0.163 | -0.053 | 0.063 | -0.028 | -0.045 | | falcon-7b | -0.113 | -0.001 | 0.037 | -0.016 | -0.023 | | falcon-7b-instruct | -0.121 | 0.021 | 0.039 | -0.017 | -0.019 | | Mistral-7B-v0.3 | -0.133 | -0.045 | 0.048 | -0.026 | -0.039 | | Mistral-7B-Instruct-v0.3 | -0.201 | -0.058 | 0.065 | -0.038 | -0.058 | | glm-4-9b | -0.134 | -0.019 | 0.045 | -0.027 | -0.034 | | glm-4-9b-chat-1m | -0.135 | -0.019 | 0.049 | -0.028 | -0.033 | | Llama-2-7b-hf | -0.133 | -0.052 | 0.051 | -0.028 | -0.040 | | Llama-2-7b-chat-hf | -0.139 | -0.041 | 0.063 | -0.032 | -0.037 | The following table reports the standard deviation of PPLDiff values. | Model | HH-RLHF [1] | SafeRLHF [2] | SHP [3] | WebGPT [4] | Avg. | | :----------------------: | :---------: | :----------: | :-----: | :--------: | :---: | | Qwen2-1.5B | 0.764 | 0.475 | 0.367 | 0.361 | 0.492 | | Qwen2-1.5B-Instruct | 0.798 | 0.491 | 0.385 | 0.369 | 0.511 | | Yi-1.5-6B | 1.119 | 0.614 | 0.564 | 0.500 | 0.699 | | Yi-1.5-6B-Chat | 0.855 | 0.493 | 0.417 | 0.373 | 0.534 | | gemma-2-2b | 0.885 | 0.517 | 0.702 | 0.373 | 0.619 | | gemma-2-2b-it | 0.975 | 0.535 | 0.467 | 0.378 | 0.588 | | falcon-7b | 0.631 | 0.400 | 0.324 | 0.325 | 0.420 | | falcon-7b-instruct | 0.648 | 0.436 | 0.342 | 0.344 | 0.442 | | Mistral-7B-v0.3 | 0.719 | 0.411 | 0.332 | 0.313 | 0.444 | | Mistral-7B-Instruct-v0.3 | 0.981 | 0.482 | 0.385 | 0.343 | 0.548 | | glm-4-9b | 0.709 | 0.454 | 0.357 | 0.350 | 0.467 | | glm-4-9b-chat-1m | 0.719 | 0.456 | 0.367 | 0.356 | 0.474 | | Llama-2-7b-hf | 0.670 | 0.390 | 0.322 | 0.309 | 0.423 | | Llama-2-7b-chat-hf | 0.709 | 0.411 | 0.360 | 0.339 | 0.455 | ``[Reply to Q2] We conjecture the main reason is that IPO performs better than cDPO and rDPO under a low proportion of NPs. `` The proportion of NPs in the dataset corrected by our method is very low (**~10%**). We provide the proportion of NPs before and after using our method on the Golden HH dataset with the Llama-2 model in the table below. | Before | After | $\Delta$ | | ------------- | ------ | -------- | | 9.86% (~10%) | 6.05% | 3.81% | | 19.93% (~20%) | 9.76% | 10.17% | | 29.94% (~30%) | 14.14% | 15.80% | | 40.06% (~40%) | 13.98% | 26.08% | According to Table 6, the performance of IPO under low NPs (10%) is the main reason for the good performance of PerpCorrect-IPO. PerpCorrect-cDPO and PerpCorrect-rDPO are not as effective as PerpCorrect-IPO because both cDPO and rDPO are robust alignment methods that cannot fully utilize clean preferences (CPs). [1] Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback. ArXiv 2022 [2] Safe RLHF: Safe Reinforcement Learning from Human Feedback. 
ICLR 2024 [3] Understanding Dataset Difficulty with V-Usable Information. ICML 2022 [4] WebGPT: Browser-assisted question-answering with human feedback. ArXiv 2021 --- Rebuttal Comment 1.1: Comment: Thank you for your reply, my concerns have been resolved.
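The PPLDiff statistics reported in the rebuttal above can be sketched in a few lines of plain Python. This is a hypothetical simplification, assuming PPLDiff for a preference pair is the difference between the perplexities of the chosen and rejected responses, each computed as the exponential of the mean negative token log-probability; the `token_logprobs` inputs stand in for values a causal LM would produce.

```python
import math

def perplexity(token_logprobs):
    """Perplexity of one response: exp of the mean negative token log-probability."""
    return math.exp(-sum(token_logprobs) / len(token_logprobs))

def ppldiff(chosen_logprobs, rejected_logprobs):
    """PPLDiff for one preference pair: PPL(chosen) - PPL(rejected).

    For a clean preference (CP) this tends to be negative (the chosen
    response is less surprising to the model); for a noisy preference
    (NP) it tends to be positive.
    """
    return perplexity(chosen_logprobs) - perplexity(rejected_logprobs)

def ppldiff_stats(pairs):
    """Mean and standard deviation of PPLDiff over (chosen, rejected) pairs,
    matching the per-dataset statistics reported in the tables above."""
    vals = [ppldiff(c, r) for c, r in pairs]
    mean = sum(vals) / len(vals)
    std = math.sqrt(sum((v - mean) ** 2 for v in vals) / len(vals))
    return mean, std
```

In this form, the reviewer's question about the distribution of PPLDiff amounts to asking whether `ppldiff_stats` over a fresh dataset yields a mean near 0.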
Summary: The paper proposed a novel method to mitigate the preference noise in alignment. The authors first provide insights into how the PPLDiff can recognize the noisy preferences and then use the PPLDiff to select and correct noise preferences. Extensive experiments demonstrate that the method can significantly improve the robustness under high noise ratios. Strengths: * The paper proposed a novel detect and correct paradigm for noise-robust alignment. * Novel insights were revealed for the failure with the noise preferences. Weaknesses: * Some writing is not clear. - In Line 50, "However, Mitchell [15] and Chowdhury et al. [6] overlooked the essential, differences between noisy and clean preferences, which is critical for mitigating the issue of NPs." What is the overlooked difference here? The logic is not clear. Technical Quality: 3 Clarity: 2 Questions for Authors: * In Line 50, "However, Mitchell [15] and Chowdhury et al. [6] overlooked the essential, differences between noisy and clean preferences, which is critical for mitigating the issue of NPs." What is the overlooked difference here? The logic is not clear. Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: No obvious limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your comments! Please find our replies below. ``[Reply to Q1] The overlooked difference is that NPs have incorrect labels, which can be identified using PPLDiff.`` Both cDPO (ref to Eq. 9) and rDPO (ref to Eq. 10) use a universal loss to treat CPs and NPs. They overlooked the difference between CPs and NPs, that is, NPs have incorrect labels, while CPs have correct labels. This results in the underutilization of CPs and the negative impact of NPs. In contrast, our approach, PerpCorrect, addresses this issue more effectively by using PPLDiff to identify and correct NPs from a data perspective rather than a loss perspective. --- Rebuttal Comment 1.1: Comment: Thank you for the response which addressed my concerns.
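The detect-and-correct idea described in this reply (identify NPs from PPLDiff and fix them at the data level, rather than down-weighting them in the loss) can be sketched as follows. This is a minimal sketch under assumed simplifications: a fixed threshold and a `ppldiff_fn` callback stand in for the paper's iteratively retrained surrogate LLM.

```python
def correct_preferences(dataset, ppldiff_fn, threshold=0.0):
    """Relabel suspected noisy preferences (NPs) instead of discounting them.

    dataset: list of (prompt, chosen, rejected) triples.
    ppldiff_fn: callable returning PPLDiff for a pair under a surrogate model;
                values above `threshold` are treated as NPs.
    Returns a denoised dataset where suspected NPs have their
    chosen/rejected responses swapped.
    """
    corrected = []
    for prompt, chosen, rejected in dataset:
        if ppldiff_fn(prompt, chosen, rejected) > threshold:
            # Suspected NP: the "chosen" response is more surprising than
            # the "rejected" one, so swap the labels.
            corrected.append((prompt, rejected, chosen))
        else:
            corrected.append((prompt, chosen, rejected))
    return corrected
```

The corrected dataset can then be passed unchanged to any downstream alignment method (DPO, IPO, etc.), which is why the approach composes with them.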
Summary: The paper introduces Perplexity-aware Correction (PerpCorrect), a method for robust alignment of large language models (LLMs) with noisy preferences (NPs). PerpCorrect detects and corrects NPs by analyzing the perplexity difference (PPLDiff) between chosen and rejected responses. The approach involves aligning a surrogate LLM with clean validation data, iteratively refining with reliable training data, and ultimately producing a denoised training dataset. Experiments demonstrate that PerpCorrect significantly improves alignment performance and is compatible with various alignment techniques. Strengths: 1. The problem of aligning LLMs with human preferences while effectively handling noisy data is a significant challenge in the field of AI, and this paper provides a valuable contribution towards solving it. 2. This paper introduces PPLDiff, a novel metric for distinguishing between clean and noisy preferences, enhancing the precision of corrections. The proposed method effectively handles noisy preferences (NPs), improving the reliability of LLM alignment. 3. PerpCorrect obtains significant improvements compared to previous methods, requiring only a modest amount of clean validation data, making it practical for real-world applications. Weaknesses: 1. Implementing PerpCorrect involves additional steps of calculating and analyzing perplexity differences (PPLDiff), which can add complexity to the alignment process. Can the authors discuss the complexity and additional computation time of the method? 2. The method requires a clean validation dataset to align the surrogate LLM initially. The need for manually annotated validation data can be labor-intensive and may not always be feasible in large-scale applications. 3. The method assumes that the perplexity of clean and noisy preferences will differ consistently, which may not always hold true across all datasets and model configurations. Technical Quality: 4 Clarity: 3 Questions for Authors: See Weaknesses. 
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed in Conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Many thanks for your comments! Please find our replies below.

``[Reply to W1] Please refer to our general author rebuttal [R1].``

``[Reply to W2] Please refer to our general author rebuttal [R2].``

``[Reply to W3] We provide a discussion of the assumption as follows.``

**Technically**, this assumption is inspired by the loss function of DPO (as shown in Eqs. 5-6). According to Rafailov et al. [1], the gradient can be written as:

$$ \nabla\_\theta\mathcal{L}\_{\mathrm{DPO}}(\pi\_\theta;\pi\_{\mathrm{ref}})=-\beta\mathbb{E}\_{(x,y\_w,y\_l)\sim\mathcal{D}}\bigg[\sigma(\beta\log\frac{\pi\_\theta(y\_l|x)}{\pi\_\text{ref}(y\_l|x)}-\beta\log\frac{\pi\_\theta(y\_w|x)}{\pi\_\text{ref}(y\_w|x)})\bigg[\nabla\_\theta\log\pi(y\_w| x)-\nabla\_\theta\log\pi(y\_l| x)\bigg]\bigg], $$

where $\nabla\_\theta\log\pi(y\_w| x)$ increases the likelihood of $y_w$ and $\nabla\_\theta\log\pi(y\_l| x)$ decreases the likelihood of $y_l$. Furthermore, for CPs, where $(x,\tilde{y_w},\tilde{y_l}) = (x,y_w,y_l)$, the gradient drives the decrease of $\mathrm{PPL}([x;\tilde{y}\_{w}];\theta)$ and the increase of $\mathrm{PPL}([x;\tilde{y}\_{l}];\theta)$. This ultimately leads to the reduction of $\mathrm{PPLDiff}(x,\tilde{y}\_w,\tilde{y}\_l;\theta)$. Conversely, for NPs, the gradient has the opposite effect. As a result, the PPLDiff values of CPs and NPs will consistently differ.

**Empirically**, the results of our method, PerpCorrect, using different LLMs (including phi-2 and Llama-2) on various datasets (such as the Golden HH and OASST1 datasets) have verified our assumption.

[1] Direct Preference Optimization: Your Language Model is Secretly a Reward Model. NeurIPS 2023
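The gradient argument above can be checked numerically with a minimal per-pair DPO loss. This is a toy scalar sketch (sequence log-probabilities as plain floats), not the authors' implementation: lowering the loss requires raising the chosen response's log-probability relative to the rejected one, which for a CP drives its PPLDiff down.

```python
import math

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """DPO loss for one pair: -log sigmoid(beta * implicit reward margin).

    logp_w / logp_l: policy log-probs of the chosen / rejected response.
    ref_logp_w / ref_logp_l: the same under the frozen reference model.
    """
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -math.log(1.0 / (1.0 + math.exp(-beta * margin)))

# With no margin, the loss is log(2); increasing logp_w and decreasing
# logp_l (i.e., pushing PPL(chosen) down and PPL(rejected) up, hence
# PPLDiff down) strictly reduces the loss. For a mislabeled pair (NP),
# the same update pushes its true PPLDiff in the opposite direction.
baseline = dpo_loss(-1.0, -1.0, -1.0, -1.0)
improved = dpo_loss(-0.5, -1.5, -1.0, -1.0)
```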
Rebuttal 1: Rebuttal: We thank all reviewers for their insightful comments and suggestions. Please find our replies below.

``[R1] We discuss the complexity and additional computation time of the method as follows.``

The additional computation time is primarily due to PerpCorrect. Both the theoretical and practical times are reported in the table below (using $X$ to represent the theoretical time required for robust alignment), followed by a detailed analysis.

| Stage | Theoretical Time | Practical Time |
| ------------------------------------------------- | ---------------------- | -------------- |
| PerpCorrect (Stage II) | $\frac{T}{3} \times X$ | ~12 hours |
| Robust alignment (Stage III and baseline methods) | $X$ | ~12 hours |
| Total | $(1+\frac T3) X$ | ~24 hours |

**In theory**, during the PerpCorrect process, we need to calculate PPLDiff and train the surrogate model in each epoch. The computation time introduced by PerpCorrect (Stage II) is approximately $\frac{T}{3}$ that of the robust alignment (Stage III and baseline methods).

- The calculation of PPLDiff in each epoch requires only $\frac{1}{3}$ of the time needed for robust alignment (Stage III). The primary computational load in robust alignment arises from forwarding and back-propagation, while gradient updates and parameter updates are relatively cheap. Back-propagation takes twice as long as forwarding, whereas the calculation of PPLDiff requires only forwarding.
- For surrogate model training, PerpCorrect utilizes data points representing $t\times \alpha$ of the total dataset during epoch $t$. Since both $\alpha$ and $t$ are small, the time required for surrogate model training is approximately negligible.

**In practice**, our entire robust alignment pipeline (**~24h**) takes only **twice** as long as the baseline (**~12h**). We set $T=5$ and $\alpha=2\%$, and used the AdamW optimizer.
PerpCorrect is efficient in practice because the AdamW optimizer uses fp32 precision, which increases the GPU computation time of the robust alignment process itself, so the forward-only overhead of PerpCorrect is comparatively small.

``[R2] We provide a discussion about the need for clean datasets and potential solutions as follows.``

We only need 50 clean data points, which constitutes **less than 0.5%** of the entire dataset. In practice, such small-scale annotation is feasible. Reviewing and labeling data are essential steps to enhance data quality. For instance, Scale AI utilizes a large workforce to review and ensure the high quality of their data [1]. To address the challenge of obtaining sufficient clean datasets, one potential approach is to employ the LLM-as-a-judge method, which allows models to self-annotate and thereby reduces reliance on manually curated clean data. This method has been discussed in recent works [2] [3] [4] and could serve as a promising solution to this limitation.

``[R3] We repeated the experiments three times with the same settings for Tables 3, 4, and 6; the average reward accuracy and standard deviation are reported in the PDF file.``

[1] Scale AI, https://scale.com/docs
[2] Self-Rewarding Language Models, ICML 2024
[3] Constitutional AI: Harmlessness from AI Feedback, ArXiv 2024
[4] TrustLLM: Trustworthiness in Large Language Models, ICML 2024

Pdf: /pdf/1885912ecbc04d8b9802028e742dc793084eed80.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
CLUES: Collaborative Private-domain High-quality Data Selection for LLMs via Training Dynamics
Accept (poster)
Summary: The paper proposes a novel data quality control technique to enhance the quality of data from different private domains in collaborative training settings, such as model merging. The technique scores each training sample by tracing gradients of low-rank adapters and filters out low-quality data locally. The model parameters are then adaptively merged based on individual training. The method is evaluated on heterogeneous medical domain data, demonstrating that training on high-quality data selected by this method outperforms other data selection methods across diverse private domain datasets. The paper claims performance improvements and offers a general, efficient way to identify high-quality data for large language models. Strengths: s1. The use of training dynamics to score and filter data for collaborative fine-tuning is a good idea. s2. The method provides a general pipeline applicable to various models without the need for task-specific adjustments. s3. The structure of the article is complete and easy to understand Weaknesses: w1. The paper's evaluation is confined to medical datasets. To demonstrate the generalizability and robustness of the proposed method, experiments should be conducted on a wider range of datasets from different domains. w2. The paper generates 40% of the data as low-quality, which may not accurately reflect real-world scenarios. It would be beneficial to experiment with varying proportions of low-quality data to understand the method's effectiveness across different levels of data quality. w3. The method uses only a small amount of public data (10 samples in the paper) as anchor data to calculate the global threshold. The reliability of selecting just 10 samples is questionable, and the basis for this choice should be clarified. A more rigorous justification or experimentation with different numbers of anchor samples could strengthen the method's credibility. w4. 
Table 1 in the paper shows that the proposed method performs better than the Oracle method, which uses only high-quality data. The paper does not provide an analysis or explanation for this superior performance. Understanding and explaining why this method outperforms the Oracle method could provide valuable insights and strengthen the overall argument. Technical Quality: 3 Clarity: 4 Questions for Authors: Q1. Please explain how the proposed method performs on datasets from different domains beyond the medical field. Have you considered conducting experiments with datasets from various other domains to validate the generalizability of your approach? (w1) Q2. Please clarify if 40% is representative of real-world scenarios. Would you consider testing with different proportions of low-quality data to assess the robustness and effectiveness of your method under varying conditions? (w2) Q3. Have you tested the impact of using different numbers of anchor samples on the performance and reliability of the method? (w3) Q4. Please explain why the proposed method performs better than Oracle and why the other three data evaluation methods perform so poorly in Table 1. (w4) Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: 1. The authors' work is similar to federated learning. Should it be compared with recent federated learning methods? Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1 & Q1**:

> Please explain how the proposed method performs on datasets from different domains beyond the medical field.

As mentioned in s1, our method by nature can be adapted to various models 'without the need for task-specific adjustments'. We conduct comprehensive evaluations on different medical and healthcare datasets, which cover different domains within medicine. Our assumptions are not limited to any specific domain, and our selection of anchor data is not domain-specific. We chose medical data because of the availability of high-quality public datasets in this field.

We are running experiments on the financial dataset [FiQA](https://huggingface.co/datasets/LLukas22/fiqa). We randomly sampled 2000 data samples for each of the 4 clients from the FiQA dataset and polluted them with low-quality data ratios of 80%, 20%, 10%, and 50%, respectively. Once we get the results, we'll update our response.

**W2 & Q2**

> Experiment with varying proportions of low-quality data to understand the method's effectiveness across different levels of data quality.

We have been running experiments with different proportions of low-quality data: 80% and 100%. Once we get the results, we'll update our response. We report the estimated running time of our collaborative fine-tuning pipeline (for each proportion of low-quality data on a given dataset, on an A40). Due to time limits, we are still running the experiments and expect to have additional experimental results within the discussion period.
| Step | Estimated Time |
| ---- | ----- |
| Collaborative finetuning with low-quality data | 9 hours |
| Calculate gradients | 3 hours |
| Compute scores and select | 5 hours |
| Collaborative finetuning with selected data | 9 hours |
| Prediction on test set | 8 hours |
| GPT-4 evaluation | 0.5 hour |

**W3 & Q3**

> Impact of using different numbers of anchor samples on the performance and reliability of the method

We show the ablation study of the anchor data selection and validation set selection in Figure 2 in our PDF, which demonstrates the robustness and unbiasedness of the selection in our experiments. Fig. 2 shows that the threshold remains stable as the number of anchor data increases, demonstrating the robustness of the anchor data selection. We would also like to highlight that the number of anchor data does not affect the order of the selected high-quality data; the only impact is the number of data selected.

**W4 & Q4**

> Why does our method perform better than Oracle?

In our experimental setup, the global threshold is $0.48$, and the total number of selected data is slightly more than the number of oracle data; considering more data diversity potentially leads to better performance. We also discuss this in the future work part of our paper.

> Why do the other three data evaluation methods perform so poorly in Table 1?

IFD assumes that all data in the dataset is of good quality when it selects informative data samples. It cannot effectively handle scenarios with poor-quality data, where IFD tends to select complex, difficult-to-learn noise patterns. PPL tends to select longer data samples. Neither IFD nor PPL is well-equipped to deal with situations where malicious or intentionally corrupted data is present in the dataset. The DataInf method is only related to the optimal $\theta$ point and does not consider whether the validation loss actually decreases. This may result in selecting bad data that could increase the loss.
In contrast, our approach can better handle settings with bad data. Additionally, DataInf makes several strong assumptions, which lead to further limitations such as:

1. The total approximation error is bounded by $O\left(\sum_{l=1}^L d_l^2\right)$, and the approximation error is tolerable only when $d$ is small.
2. The effectiveness of adapting DataInf to LoRA is limited to cases where the number of learnable parameters is small, such as rank 1, 2, or 4. The correlation coefficient of DataInf generally decreases as the rank increases.

These limitations highlight the challenges in developing influence measurement functions and data selection, especially when dealing with diverse, heterogeneous data, whereas our proposed method handles broader scenarios well.

--- Rebuttal 2: Comment: We have completed all the requested experiments and are pleased to share the results.

Regarding generalizability beyond the medical domain, we conducted experiments using the FiQA dataset, which focuses on financial question answering.

*Dataset:* We have four clients, each with 2000 training samples. We split the data into a training set (8000 samples in total), a validation set (also called the anchor set, 100 samples), and a test set (500 samples).

*Evaluation metric:* Responses from the model fine-tuned on our dataset are rated by GPT-4 on a scale from 1 to 10, reflecting criteria including relevance, accuracy, and fluency. To address potential positional bias, we send our response along with the benchmark output to GPT-4 twice, in different orders. We then calculate the average of these scores as the final performance score.

**Experiment I: Quality Heterogeneity**

To evaluate our data selection method's effectiveness in scenarios with **heterogeneous data quality**, we simulated varying levels of data pollution across clients. Specifically, we randomly polluted 80%, 20%, 10%, and 50% of the training set for each of the four clients, respectively.
$\textcolor{blue}{Table 1}$ presents the results of our model merging experiments. These findings demonstrate that our method significantly enhances data quality even when clients have differing proportions of low-quality data:

$\textcolor{blue}{Table 1:}$

| | GPT-4 Scoring |
| ------------- | ------------- |
| Low-qual Data | 3.6495 |
| Oracle | 3.7480 |
| **Ours (w/ linear merging)** | 3.7755 |
| **Ours (w/ weighted merging)** | **3.8550** |

We conducted an **ablation study** on our global threshold to further validate our approach. $\textcolor{blue}{Table 2}$ illustrates the advantage of using a global threshold determined by our anchor data for data selection in this heterogeneous setting, compared to selection based on an average ratio or pre-determined scores. These results demonstrate that our approach successfully balances the identification of positive cases with the minimization of false positives, offering a robust and superior solution.

$\textcolor{blue}{Table 2:}$

| | Precision | Recall | F1 Score | Accuracy |
| -------------------------- | ---------- | ---------- | ---------- | ---------- |
| select by ratio | 79.17% | 79.17% | 79.17% | 75% |
| select by static score | 92.77% | 99.13% | 95.84% | 95.00% |
| select by global threshold (**Ours**) | **97.44%** | **99.38%** | **98.39%** | **97.91%** |

Additionally, we performed a **qualitative analysis** by manually comparing the outputs generated by models fine-tuned on our selected high-quality data versus the original low-quality data. This comparison ($\textcolor{blue}{Table 3}$) provided insights into the tangible improvements in model performance and output quality.

$\textcolor{blue}{Table 3:}$

| Questions | Model Response (w/o data selection) | Model Response (w/ our data selection) |
| ----------------- | ------------- | --------- |
| How does a 2 year treasury note work? | It is a 2 year treasury note. | Treasury notes are a short-term government security that matures in two years. They are issued by the U.S. Treasury Department to help finance the nation's deficit. |
| What causes a stock to drop in price? | Investors are afraid to buy stocks because they are afraid the stock will drop in price. | If the stock price is falling, it's because there's a lot of supply, but not enough demand. If there's a lot of demand, but not enough supply, the price will rise. |

--- Rebuttal 3: Comment: **Experiment II: Varying Levels of Low-Quality Data**

To evaluate the robustness of our data selection method under different data quality conditions, we conducted a series of experiments with varying proportions of low-quality data. We maintained a consistent proportion of low-quality data across all clients for each experiment, ranging from 0% to 100%, including 20%, 50%, and 80% pollution levels.

$\textcolor{blue}{Table 4}$ presents the performance of models trained with and without our data selection method across these different proportions. The results demonstrate that our method effectively enhances data quality across all scenarios in terms of **GPT-4 scoring**. In terms of the **accuracy** of the data selection, our method consistently selected over 99% of the high-quality data across different proportions of low-quality data.

$\textcolor{blue}{Table 4:}$

| | GPT-4 Scoring (w/o data selection) | GPT-4 Scoring (w/ our data selection) | Accuracy | Global Threshold (score) |
| ---------------------- | ---------------------------------- | ------------------------------------- | -------- | ------------------------ |
| 0% bad data (raw data) | 3.7400 | ——— | ——— | ——— |
| 20% bad data | 3.4735 | 3.6270 | 99.3125% | 0.0033 |
| 50% bad data | 2.9625 | 3.7175 | 99.7250% | 0.0017 |
| 80% bad data | 2.9155 | 3.8335 | 99.9375% | 0.0012 |
| 100% bad data | 2.2860 | ——— | ——— | ——— |

To understand the adaptability of our global threshold, we analyzed how the global threshold changes with different proportions of low-quality data.
$\textcolor{blue}{Table 4}$ illustrates that our **global threshold adjusts across varying levels of data quality**.

---

> Limitation: The author's work is similar to federated learning. Should it be compared with the recent federated learning?

Our work covers federated learning as one of the collaborative training paradigms, and our evaluation was conducted in a federated learning setting. For a detailed discussion of empirical results, please refer to [our response to Reviewer ngAe regarding W2 & L1](https://openreview.net/forum?id=OU1uqd1vyw&noteId=rALL4TsjnZ). If you're referring to specific recent developments in federated learning that warrant additional comparison, we'd appreciate more details and elaboration.

---

To summarize, these results collectively demonstrate the robustness and effectiveness of our method across a wide spectrum of data quality scenarios, reinforcing its potential for real-world applications.

--- Rebuttal Comment 3.1: Title: Looking forward to your responses and further discussion! Comment: We would like to thank you once again for your time and effort in providing feedback and comments. As there are fewer than 12 hours remaining for further discussion, we would greatly appreciate it if you could let us know of any remaining concerns or points of confusion. We would be more than glad to discuss, address, and resolve them with you!

--- Rebuttal 4: Title: W4 & Q4 Further experimental results Comment:

> Why do the other three data evaluation methods perform so poorly in Table 1?

We compared different data scoring methods on their accuracy in selecting high-quality data out of the mixed-quality datasets in the FiQA Experiment I setting mentioned above. Our method demonstrates significant advantages:

| | Accuracy |
| -------- | -------- |
| PPL | 58.44% |
| IFD | 58.90% |
| **Ours** | **97.91%** |
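The selection metrics used in the ablation tables above (precision, recall, F1, and accuracy of high-quality data selection) can be computed from two boolean masks. This is a generic sketch with hypothetical inputs, not the authors' evaluation code; "positive" means a sample was kept by the selector.

```python
def selection_metrics(selected, is_high_quality):
    """Precision/recall/F1/accuracy of a data-selection mask.

    selected: list of bools, True if the sample was kept by the selector.
    is_high_quality: list of bools, ground-truth quality labels.
    """
    tp = sum(s and q for s, q in zip(selected, is_high_quality))
    fp = sum(s and not q for s, q in zip(selected, is_high_quality))
    fn = sum((not s) and q for s, q in zip(selected, is_high_quality))
    tn = sum((not s) and not q for s, q in zip(selected, is_high_quality))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    accuracy = (tp + tn) / len(selected)
    return precision, recall, f1, accuracy
```

Note that a high recall with a lower precision (as with "select by static score" in Table 2) means most good data is kept but noise leaks through, which is exactly the trade-off the global threshold is meant to balance.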
Summary: This work proposes a method for finetuning LLMs in federated or collaborative training scenarios that selects the most informative examples for each client to train on such that their local parameter updates are likely to improve the global, merged model's performance on a public (shared) test set. The proposed two phase solution first scores local samples by computing the inner products between a gradient computed on a training sample and a gradient computed on a limited number of shared test samples, and only performs the next episode of local training on points that are above a specific quality threshold determined and distributed by the server in the previous round. Then, the server combines the parameter updates sent from each client, and using the new global parameters, iterates over the same shared anchor set to compute a new quality threshold to send back to the clients. By iterating this approach, in some settings they achieve performance improvements on medical domain tasks as compared to existing data valuation methods and baselines of training on curated subsets or all available data, without federation. Strengths: - The diagrams are well done and help the reader understand the method in an intuitive manner. - The method is principled, and relatively simple to implement requiring only gradients (rather than higher order computations), it is grounded well in prior works on data valuation and active learning, and is amenable to efficient implementations when using adapter training methods as demonstrated by their LoRA based experiments. - The ablations chosen are interesting, particularly the effect of layer selection for the gradient comparisons. Weaknesses: - The precise ordering of the algorithmic steps is not clear. 
Section 3.1-3.3 would greatly benefit from an Algorithm definition that clearly defines the order of operations regarding what gradients are computed on what data using what version of the parameters throughout the training process. It is hard to interpret the experimental results due to the lack of clarity in the section that introduces them. - Unless the reviewer is misunderstanding something about the evaluation, Table 1, 2, and 3 are all misleadingly bolded. The row label "Ours" is sufficient to tell that the row indicates the proposed method, however, "Ours" is bolded in all three tables, and all columns, regardless of whether or not the method outperforms other valuation criteria, or baseline training settings. On a quick glance this suggests the method offers uniform improvement in all settings, but this is not the case. Please only bold the best score in each column and note the convention in the captions. - The ablations (while appropriately chosen) are not convincing. The differences reported in 4.3 are likely not statistically significant, and in 4.4, the "hit rate" is not defined, which hurts because this part is of technical interest. Section 4.5 is confusing and the final sentence claims that the anchor score leads to the lowest proportion of low quality data, but this is exactly the opposite of Fig 4(c)? Technical Quality: 2 Clarity: 2 Questions for Authors: - It seems like the table and figure reference numbering might be off in a few places, please check the \ref commands. - I would like to better understand the dataset construction process. There are 3 raw sources listed, but what is the exact sampling used to build the training dataset? How are the n=20 client local datasets selected (are they uniformly sampled from the full training or grouped in some way?) Also, critically, how are the test evaluation samples (the GPT4 evaluation 200) sampled, and what is the relation to the _anchor_ data? 
Are the anchor data held out Medalpaca, just like the test set? - If the above description is somewhat correct, followup question is what are the statistics for which samples were selected in each round by each client as "above the quality threshold"... were these locally filtered subsets more often than not the medalpaca training questions? (and when running the other baseline scores, was this not true?) My current hypothesis is that this grad x grad product is a similarity score between the local training data and the global test/anchor samples. Thus, it biases each client to train on the test-like data, and not the other training data. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: - The primary limitation is in the fact that the method does not improve over the baselines or other scores in many settings evaluated. In Table 2 and 3, while "Ours" beats the low quality baseline, it doesn't beat the oracle or raw baselines often. - Similarly in Table 1, "Ours" only beats the other quality criteria in the GPT4 based evaluation. Depending on the clarification about the construction of the anchor set, I worry that the method only works well when the test set and the anchor set are closely related which limits the real world applicability of the method. I would be happy to engage with the authors to help improve the explication of the method and the presentation of the results, but my concerns with the strength of the empiricals are not insignificant. Clarifications would need to demonstrate a misunderstanding on my part of the evaluation setup or the results themselves in order for my score to improve. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
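The scoring rule this review describes (inner products between a training sample's adapter gradient and gradients on the shared anchor set, filtered by a server-provided threshold) can be sketched with flattened gradient vectors. Shapes, names, and the use of the mean anchor gradient are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def quality_scores(train_grads, anchor_grads):
    """Score each local sample by the inner product of its (flattened)
    adapter gradient with the mean gradient over the anchor set.

    train_grads: (n_train, d) array, one gradient per local sample.
    anchor_grads: (n_anchor, d) array, gradients on the shared anchor data.
    Returns an (n_train,) array of scores.
    """
    anchor_mean = anchor_grads.mean(axis=0)  # (d,)
    return train_grads @ anchor_mean         # (n_train,)

def select_local_data(train_grads, anchor_grads, threshold):
    """Keep only the samples whose score exceeds the server's global threshold."""
    scores = quality_scores(train_grads, anchor_grads)
    return np.flatnonzero(scores > threshold)
```

Written this way, the reviewer's hypothesis is easy to state: samples whose gradients point in the same direction as the anchor-set gradient are, by construction, the ones most "test-like", which is why the relationship between anchor and test data matters.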
Rebuttal 1: Rebuttal: Thanks so much for your valuable comments and feedback. **W1**: > Detailed algorithm definition Thank you for your valuable suggestions! Following the reviewer's constructive feedback, we have included our pseudo-code algorithm in the PDF. **W2 & L1**: > Evaluation We appreciate your questions on the bolded numbers! We have fixed them all in our uploaded PDF. To further clarify our evaluation setup and clear up misunderstandings: Our preliminary results (Table 1 and Table 2 in the uploaded PDF) show severe performance degradation when mixing our constructed bad data into the raw data. This demonstrates the large impact of low-quality data in collaborative settings, which motivates the importance and pressing challenges for data selection methods in such settings. Our objective is to achieve better performance compared to the low-quality baseline, not the raw data performance. Oracle serves as the theoretical upper bound. To further highlight our empirical results: 1. For both pre-trained models and tasks, with other settings remaining the same, model merging performs better than federated learning. This indicates that loose communication between the local model and the server, compared to frequent communication, might lead to better generalization. 2. The performance boost with selected data in the federated setting is larger than in the model merging setting. This might be because during federated learning, we calculate the data score based on the global model (instead of the local model in the model merging setting) at each timestamp, which can better trace and regularize the training trajectory to the optimal location. 3. In both federated and model merging settings, our data selection can achieve over 96% and over 91% of the theoretical upper bound performance, respectively. 4. Our method outperforms the other centralized data selection baselines under the GPT4 Scoring metrics. 
Compared to the other methods which cause severe forgetting during instruction tuning, the performance of our method on the Knowledge-based benchmark remains within an acceptable range. This shows that our methods are able to improve domain-specific tasks without forgetting knowledge injected during pretraining. **Q1**: We appreciate your detailed suggestions on the table and figure reference numbering! We have fixed them all and will update in our final version. **Q2**: We split the whole dataset into the training set, the validation set (and anchor data), and the test set. They are all from the same data sources. The anchor data/validation set is in the same distribution as the training data, which is essential to ensure that the measurement of high-quality data aligns both during the selection (validation set and anchor set) and in the evaluation (test set). > Dataset construction process: For the Medical QA task: We use 16k samples in total, with 8k samples randomly sampled from PMC-LLama and Medalpaca-flashcards each. We uniformly partition the total samples into 20 clients in this task. For the Multilingual QA task: We use 6312 samples randomly sampled from MMedBench and 1052 samples per language for each of 6 clients. (Domain Heterogeneity setting) For our newly added Financial QA task: We randomly sampled 2000 data samples for each of 4 clients from the FiQA dataset, and polluted each of them with low-quality data ratios of 80%, 20%, 10%, and 50% respectively. (Quality Heterogeneity setting) We provided examples of raw data and low-quality data with their scores calculated by our proposed method in our uploaded PDF. > Test set: Test data are sampled from the same data sources. For open-ended GPT4 evaluation, the test samples are randomly sampled from Medalpaca-flashcards and MMedBench for the medical QA task and multilingual QA task, respectively. 
For knowledge-based evaluation, the test samples are multiple-choice tasks, which use accuracy as the main metric. This additional evaluation is to see if fine-tuning will degrade or forget the knowledge obtained from pretraining. Following convention, we adopt three prominent medical benchmarks for evaluation. > Anchor data and validation data: We followed the convention of existing data selection/valuation methods, which requires a clean validation set drawn from the target distribution. This setup has been adopted and proven to work well in many previous data selection methods, including Data Shapley and influence functions [1]. It doesn't limit the real-world applicability of the method, because there is always some public high-quality data available; otherwise, we couldn't clearly define what high-quality data looks like. [1] LAVA: Data Valuation without Pre-Specified Learning Algorithms, ICLR 2023. --- Rebuttal Comment 1.1: Title: Response to Rebuttal Comment: I appreciate the authors' inclusion of the algorithm pseudocode. I have some follow-up questions on the order of operations. 1) Is the new global threshold computed _before_ aggregating the client models from the previous round? This seems like it introduces a lag, or gap, between choosing the new threshold based on the state of the meta-model from the prior iteration and the state of the model after incorporating the most recent client updates. 2) Is there supposed to be a "for each checkpoint t in T" loop in the "On the Server" section? Section 3.2 "Step One" is the section in the manuscript that describes checkpoints being used, so it seems like this only applies in the local phase. Maybe I still don't understand the algorithm fully though. The clarifications provided are encouraging. Allow me to reiterate them to confirm my new understanding of the key points of the work. 
A) (preamble in rebuttal) You create a testbed scenario where clients have training datasets with some significant corruption/low-quality data within them. You first show the impact of your intervention by comparing the results of training on the raw data versus your corrupted data, and also include a baseline of training on just the remaining clean data, a reasonable upper bound on the expected performance of an optimal filtering method. (This is made clearer either as a separate table, as you show in the supplement, or by more clearly referring to "Raw Data", "Low-qual Data", and "Oracle" as baseline measurements, or something other than "methods", because this is slightly confusing.) B) (point 4. in your rebuttal) You compare your data selection algorithm to others in Table 1 in the original draft (Table 4 in the supplement) and show that it performs better than PPL, IDF, and DataInf in 2 out of 4 scenarios, with the other strongest method being PPL. Please bold the strongest method in each column, even when it is not your own. Using Table 1 in the manuscript (the horizontal line separating baselines from selection methods is good), this means bolding Ours in the Mistral GPT-4 column, bolding PPL in the Mistral Knowledge column, bolding Ours in the Llama GPT-4 column, and bolding PPL in the Llama Knowledge column. The caption should say "the best performing selection method (row) in the lower subtable is bolded for each benchmark evaluation (column)." C) (points 1,2,3 in rebuttal) On Federated versus Model Merging... Can you explain how these are different with respect to your newly formalized algorithm in the supplement? I am not able to fully understand the reiterations of results in the rebuttal that discuss the differences because the draft does not make precise how the federated update or the model merging happens in the context of the data selection process. 
(The model merging step is now clarified in the supplement as a weighted average of the sent parameters $\theta'_k$ from clients. How would the federated update differ?) Point 2. in particular speculates that this is because of a key difference between how the data valuation scores are computed in the two settings - this is completely missing in the manuscript? --- Rebuttal 2: Comment: **Evaluation:** Thank you for your suggestions on presenting the experimental results. Since we cannot directly modify the original PDF at this stage, we will implement the following changes in the updated version of the paper: * We will bold the best-performing selection method in each column of Table 1 (Table 4 in the supplement), as you suggested. This includes bolding "Ours" in the Mistral GPT-4 and Llama GPT-4 columns, and "PPL" in the Mistral Knowledge and Llama Knowledge columns. * We will update the table caption to read: "The best performing selection method (row) in the lower subtable is bolded for each benchmark evaluation (column)." If you have any further questions or concerns about the experimental setup or results, we would be more than glad to continue the discussion on OpenReview. ----- **Threshold:** We do not use dynamic thresholds, nor is there any description related to dynamic thresholds in our submitted paper. We appreciate the opportunity to clarify this misunderstanding and address the confusion: The threshold is calculated only once throughout the entire pipeline. For details and proof on determining the global threshold, please refer to [our response to R1](https://openreview.net/forum?id=OU1uqd1vyw&noteId=xKx1wdiFS3). To clarify our method, here's a detailed breakdown of the data selection pipeline in collaborative fine-tuning, summarizing Section 3.1 of our original paper and the pseudocode in our manuscript: * Stage 1 (On each client): Local fine-tuning with low-quality data; save model checkpoints. 
* Stage 2 (On each client): Calculate gradients, compute scores for each training sample, send scores to the server. * Stage 3 (On server): Calculate gradients, compute scores of anchor data, determine the global threshold using anchor data scores and client scores. * Stage 4 (On each client): Select data with scores not lower than the global threshold. * Stage 5 (On each client): Local fine-tuning with high-quality data, then send model parameters to the server. * Stage 6 (On server): Merge client model parameters to obtain the final global model. Your understanding that *"in expectation, grad x grad products between local training samples and local validation samples will be large for clean samples and small for corrupted samples"* is correct! However, we want to clarify that we do not use "a contextual threshold based on the current loss." As mentioned earlier, the threshold is calculated only once. ----- **Federated versus Model Merging** We integrate both federated learning and model merging into a unified algorithmic framework because they fundamentally involve local training followed by model aggregation in the parameter space. Since weights in neural networks are updated through optimization algorithms based on accumulated gradients, our gradient-based method influences the model's weight space through selective gradient use. This underlying intuition allows our method to be applicable to both collaborative learning scenarios. The only difference between federated and model merging algorithms is: In the federated setting, there's periodic communication between the server (global model, or $\theta\_{merge}$ in our paper) and clients (local models). Clients perform local training between communication rounds. At each round, clients send their current model parameters to the server, which aggregates them to form the current global model, then sends it back to all clients. After each communication round, server and client models are aligned. 
In contrast, in the model merging setting, there's no periodic communication during local training; local models train independently throughout the entire process. Parameters are sent to the server only once, at the end of training, to form the final global model. To clarify **Point 2** further: In the federated setting, the checkpoints we save for each local client are based on the global model updated from the last model aggregation on the server, which typically occurs after a fixed number of local training epochs or iterations. This process incorporates information from other clients' models. Crucially, at the end of each communication round, the model parameters on the server and all participating clients are synchronized. This ensures that all participating clients start the next round from the same point, incorporating collective knowledge. This synchronization can help our data selection method better trace and regularize the training trajectory toward the optimal location. We don't calculate gradients or scores during the FL process itself; instead, we compute the scores only after the entire training is complete. --- Rebuttal 3: Comment: (contd) We express the relationship and differences between federated learning and model merging using mathematical formulas as follows: **Model merging:** let $f_\theta \in \mathcal{F}$ denote the language model and $D\_k \in \mathcal{D}$ denote the training dataset on client $k$. Given the training datasets $D_k$, we can define a model merging operator $\mathcal{M}\_K( \cdot ; D_k , k \in K=\{1, \cdots, n\}): \mathcal{F} \rightarrow \mathcal{F}$. The model merging process can be expressed as $$ f\_{merging}=\mathcal{M}\_K(f) $$ **Federated Learning:** Based on the notation of model merging, the federated averaging process can be expressed as $$ f\_{fed}= (\prod\_{t=1}^T \mathcal{M}\_{S\_t(K)})(f) $$ where $T$ is the number of rounds. $S_t(K)$ is the index set of selected datasets to train on each client at round $t$. 
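The contrast formalized above can be sketched with toy scalar "parameters". Local training is stubbed as adding a fixed per-client delta per round; these helpers are illustrative stand-ins, not the paper's implementation:

```python
# Toy contrast between the two schemes: model merging applies the
# aggregation operator once after independent training, while federated
# averaging interleaves aggregation with local rounds.

def average(thetas):
    # Uniform parameter averaging; weighted merging would use
    # per-client coefficients here instead.
    return sum(thetas) / len(thetas)

def model_merging(theta0, deltas, rounds):
    # Each client trains independently for all rounds; merge once at the end.
    finals = [theta0 + rounds * d for d in deltas]
    return average(finals)

def federated(theta0, deltas, rounds):
    # Merge after every round and broadcast the global model back,
    # so all clients start the next round from the same point.
    theta = theta0
    for _ in range(rounds):
        theta = average([theta + d for d in deltas])
    return theta
```

With these linear toy updates the two schemes coincide, since averaging commutes with linear steps: `model_merging(0.0, [1.0, 3.0], 2)` and `federated(0.0, [1.0, 3.0], 2)` both return `4.0`. With real, nonlinear local training they diverge, which is why the rebuttal reports different behavior between the two settings.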
------ **Ablation Studies:** Thank you for giving us the opportunity to clarify our approach! 1. Merging Techniques (Section 4.3): Figure 4(a) in our paper demonstrates that different weighted merging or aggregation techniques lead to varying performance. Notably, the performance of our data selection method with the Linear Merging technique doesn't even reach the performance of low-quality data with the TIES merging technique, highlighting the **significant impact of weighted merging techniques** on overall performance. To make the comparison clearer, we present it in the following $\textcolor{blue}{Table 1}$:

$\textcolor{blue}{Table 1:}$

| | GPT-4 Scoring Performance of Model Merging |
| ---------------------------- | --------------------------------------- |
| Raw Data | 0.505 |
| Low-qual Data | 0.485 |
| Oracle | 0.490 |
| **Ours (w/ linear merging)** | 0.470 |
| **Ours (w/ TIES merging)** | **0.487** |

2. Hit Rate and Low-Quality Data Selection (Sections 4.4 and 4.5): We apologize for the confusion. The "hit rate" in 4.4 and the "number of selected low-quality data" in 4.5 refer to the same metric: the proportion of bad data being filtered out. We presented this in two forms (percentage and data size) to provide different perspectives. A higher value indicates more effective filtering of bad data by our algorithm. 3. Additional Experiments: (Since the start of the rebuttal period, we have been conducting additional experiments to perform ablation studies. We now have comprehensive results that we believe are both convincing and robust. We are pleased to report these findings as follows.) To provide a more comprehensive ablation study, in addition to the experiments in the "domain heterogeneity" setting shown above, we conducted additional experiments in a "quality heterogeneity" setting using the FiQA dataset, which focuses on financial question answering. 
We randomly polluted the training sets of four clients with varying degrees of low-quality data: 80%, 20%, 10%, and 50%, respectively. The following $\textcolor{blue}{Table 2}$ illustrates the advantage of using a global threshold determined by our anchor data for data selection:

$\textcolor{blue}{Table 2:}$

| | Precision | Recall | F1 Score | Accuracy |
| -------------------------- | ---------- | ---------- | ---------- | ---------- |
| select by ratio | 79.17% | 79.17% | 79.17% | 75.00% |
| select by hardcode score | 92.77% | 99.13% | 95.84% | 95.00% |
| select by global threshold (**Ours**) | **97.44%** | **99.38%** | **98.39%** | **97.91%** |

These results demonstrate that our approach successfully balances the identification of positive cases with the minimization of false positives, offering a robust and superior solution. We have also conducted additional ablation studies on the global threshold, which can be found in Figure 2 of the updated manuscript. Furthermore, we conducted the ablation study of different merging techniques on the FiQA dataset, demonstrating the importance of weighted merging, shown in $\textcolor{blue}{Table 3}$. We use pairwise evaluation here, where responses from the model fine-tuned on our dataset are rated by GPT-4 on a scale from 1 to 10, reflecting criteria including relevance, accuracy, and fluency.

$\textcolor{blue}{Table 3:}$

| | GPT-4 Scoring |
| ---------------------------- | --------------------------------------- |
| Low-qual Data | 3.650 |
| Oracle | 3.748 |
| **Ours (w/ linear merging)** | 3.776 |
| **Ours (w/ weighted merging)** | **3.855** |

Title: (contd) --- Rebuttal Comment 3.1: Title: Looking forward to your responses and further discussion! Comment: We would like to thank you once again for your time and effort in providing feedback and comments. As there are less than 12 hours remaining for further discussion, we would greatly appreciate it if you could let us know of any remaining concerns or points of confusion. 
We would be more than glad to discuss, address, and resolve them with you!
Summary: This paper proposes a novel approach for data quality control in collaborative fine-tuning of large language models (LLMs), particularly in settings where data cannot be directly shared between different silos due to privacy concerns. The authors introduce a method that scores training samples based on tracing gradients of low-rank adapters, filters out low-quality data locally, and then adaptively merges model parameters. They evaluate their approach on medical and multilingual question-answering datasets, demonstrating significant performance improvements compared to baselines. Strengths: 1. The authors have presented their ideas in a well-organized and logical structure, making the paper easy to follow. 2. The paper addresses an important problem in collaborative LLM training, where data quality control is challenging due to privacy constraints. The proposed method shows promising results and could have significant implications for improving the performance of collaboratively trained language models across various clients. 3. The method leverages training dynamics, providing a more fine-grained and comprehensive approach to data quality measurement compared to traditional methods which only looked at the last model checkpoint. 4. The use of LoRA and layer selection in the method of tracing gradients cleverly makes it particularly well-suited for large language models, addressing the computational challenges often associated with training such models. Weaknesses: 1. Although the methodology is thoroughly described, the authors could improve the clarity of Figure 3 by providing a more intuitive explanation of how training dynamics enhance collaborative training processes. 2. The paper contains some grammatical errors and typos. 
For example, in Table 3, "Performance of Data quality" should be "Performance of data quality", and on line 290, "the data influenced calculated via the last layer weights prone to a cancellation effect" should be corrected to "the data influence calculated via the last layer weights is prone to a cancellation effect". Technical Quality: 3 Clarity: 3 Questions for Authors: Please refer to weaknesses part Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the insightful and positive feedback! **W1 and Q1**: > Although the methodology is thoroughly described, the authors could improve the clarity of Figure 3 by providing a more intuitive explanation of how training dynamics enhance collaborative training processes. Figure 3 demonstrates an optimized training approach for collaborative learning of multiple models. By selecting high-quality training data for each local model, we are selecting gradients that positively impact loss trajectories. These trimmed gradients are accumulated, leading to an improved position in the weight space. By considering the interference of $\Delta\theta'\_{1}$ and $\Delta\theta'\_{2}$ during our data selection (gradient selection), we reduce the interference of weight updates from different models. After parameter aggregation, the merged model $\Delta\theta_{merged}$ can be moved to an improved position in the weight space, represented by $\Delta\theta_{targeted}$. **W2 and Q2**: > The paper contains some grammatical errors and typos. For example, in Table 3, "Performance of Data quality" should be "Performance of data quality", and on line 290, "the data influenced calculated via the last layer weights prone to a cancellation effect" should be corrected to "the data influence calculated via the last layer weights is prone to a cancellation effect". We appreciate your detailed suggestions on the typos! We have fixed them all and will update them in our final version. --- Rebuttal 2: Comment: I thank the authors for providing further information and contributing great work to the community! I have read the other reviewers' comments and the rebuttal information. I would like to maintain my rating of 'Accept.' --- Rebuttal Comment 2.1: Comment: We are grateful for the reviewer's endorsement! 
We would also like to highlight that we have additional results on FiQA with different settings that all support our claims and conclusions, further strengthening our contribution. We plan to add all of these results to the final version of our paper and the appendix. For more details, please refer to the global comment and our response to Reviewer V7za.
Summary: This paper proposes a data quality control technique for the collaborative training of large language models from filtered private heterogeneous data domains via a quality score function that tracks the gradient of each training sample. The proposed framework is tested in medical and multilingual settings to demonstrate its effectiveness. Strengths: - This paper studies an important issue in private data quality control for collaborative model development, especially for the applications of LLMs. - The proposed gradient-based score function is straightforward and effective. Weaknesses: - The assumption of homogeneity for the model architecture, the use of low-rank adaptation, and the reliance on anchor data with a global threshold limit the overall applicability of the proposed framework. - The experimental settings/details and the ablation study are limited. Technical Quality: 2 Clarity: 2 Questions for Authors: - Why use a global threshold as the unified standard of data quality? How were the anchor data selected? Would the usage of anchor data (validation set) introduce an additional bias toward the overall data quality (some datasets could potentially be all excluded) and harm the generalization ability? It would be better to see the performance comparison between global and local thresholds. - How is the per-sample gradient computed from Eq. (1) line 172? Could the authors provide a complexity analysis on this part? - Line 196: does the "sparse nature of low-rank gradients" refer to the trainable weight matrices being low rank, or to the obtained per-sample gradient being sparse and low-rank? - It would be great to show the ablation study of directly applying the proposed methods to raw data (supposed high-quality data), given the federated results reported in Table 2. - Could the authors provide several examples of low-quality data samples as presented in Table 4? 
Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: See W1-2, Q1-2, 4 Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1** > The assumption of homogeneity for model architecture We clarify that we follow the commonly agreed assumption of previous work on collaborative learning (federated learning, model merging) [1, 2, 3]: in order to aggregate models in the parameter space, they must share the same architecture or the same adapter. It is impossible to perform model aggregation if the models have different architectures. Exploring different model architectures, for example in multimodal settings, may be beneficial for other applications but is not the focus of our paper. > Usage of low-rank adaptation We want to clarify that the setup and motivation for collaborative fine-tuning is to gather knowledge in an efficient manner. We focus on situations where individual clients don't have enough data or compute resources to pre-train a large language model or even perform full parameter fine-tuning. Low-rank adaptation is a commonly used parameter-efficient fine-tuning method for such scenarios. Therefore, the usage of low-rank adaptation is not a limitation but a way to achieve broader applicability. It is also regarded as a strength by other reviewers (ngAe and igLM). **W2** > The experimental settings/details and the ablation study are limited. We have provided more ablation studies and experimental details in the uploaded PDF. We would really appreciate hearing your feedback regarding the limitations in more detail! ------- **Q1** > Global threshold as the unified standard, versus local thresholds: It is necessary to have a global threshold to deal with the heterogeneity of data quality. Since the ratio of local low-quality data is unknown, and the ratios differ across clients, local thresholds are not able to adapt to diverse low-quality data ratio scenarios. We show the comparison between global and local thresholds in Figure 1 of our PDF. 
There, the local optimal threshold represents the optimal thresholds for each client and is unknown in practice. The global threshold can be close to the local optimal, while local thresholds are far from the optimal thresholds. > Anchor data selection and biases The anchor data is selected as a held-out high-quality dataset from the same data source as the test data. We show the ablation study of the anchor data selection and validation set selection in Figure 2 in our uploaded PDF, which demonstrates the robustness and unbiased selection in our experiments. In Figure 2 (Left), the global threshold remains stable and robust with different numbers of anchor data selected. In Figure 2 (Right), the box plot shows the data scores for 20 training samples randomly sampled from our training dataset over 5 different validation sets of equal size. Although the scores vary depending on the validation set used, the variance is within an acceptable range, ensuring that the order of their scores remains stable and robust. Thus, bias is not a concerning factor in our paper. We acknowledge that, in other cases, validation set selection will significantly affect the data selection process, and this is a common issue for all data selection work using validation sets. **Q2** Per-sample gradients are calculated for each training sample from the checkpoint saved during the model training. 
For SGD: $\boldsymbol{\theta}^{t+1}-\boldsymbol{\theta}^t=-\eta_t \nabla \ell\left(\boldsymbol{z} ; \boldsymbol{\theta}^t\right)$. For Adam: $\boldsymbol{\theta}^{t+1}-\boldsymbol{\theta}^t=-\eta_t \mathcal{L}\left(\boldsymbol{z}, \boldsymbol{\theta}^t\right)$, where $\mathcal{L}\left(\boldsymbol{z}, \boldsymbol{\theta}^t\right) \triangleq \frac{\boldsymbol{m}^{t+1}}{\sqrt{\boldsymbol{v}^{t+1}}+\epsilon}$. For AdamW: $\boldsymbol{\theta}^{t+1}-\boldsymbol{\theta}^t=-\eta_t \mathcal{L}\left(\boldsymbol{z}, \boldsymbol{\theta}^t\right)$, where $\mathcal{L}\left(\boldsymbol{z}, \boldsymbol{\theta}^t\right) \triangleq \frac{\boldsymbol{m}^{t+1}}{\sqrt{\boldsymbol{v}^{t+1}}+\epsilon}+\lambda \boldsymbol{\theta}^t$. - Overall compute complexity, where $N$ is the number of checkpoints and $d$ is the gradient dimension: $$ \mathcal{O}\left(N \cdot |\mathcal{D}| \cdot\left|\mathcal{D}_{\text {val }}\right| \cdot d\right) $$ - Overall storage complexity: $$ \mathcal{O}(|\mathcal{D}| \cdot N \cdot d + |D_{val}| \cdot N \cdot d) $$ **Q3** We use LoRA to reduce the number of trainable parameters, which freezes the pre-trained weights and adds a low-rank adapter to linear layers throughout the network. This means that, by nature, the trainable matrices are low rank. While our method is based on LoRA, it would also be easy to adapt it to training where per-sample gradients are sparse and low-rank, for example, GaLore [4]. Our method is orthogonal to different training methods, and we believe this is a promising direction for future work. **Q4** > ablation study of directly applying the proposed methods to raw data (supposed high-quality data). As we clarified in our global response, raw data is not one of the baselines we compare our data selection method with, since the scope of our paper is to conduct data selection to filter out the low-quality data mixed in with the high-quality data. 
However, we agree it would be interesting to see how our method performs on the dataset without low-quality data. We are currently running these experiments and will update the results once we have them. **Q5** > Examples of low-quality data samples Sure, we have provided low-quality and high-quality data samples with their scores in our PDF for your kind review. [1] TIES-Merging: Resolving Interference When Merging Models, NeurIPS 2023. [2] Editing Models with Task Arithmetic, ICLR 2023. [3] Communication-Efficient Learning of Deep Networks From Decentralized Data, Artificial Intelligence and Statistics, 2017. [4] GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection, ICML 2024. --- Rebuttal Comment 1.1: Comment: I thank the authors for the response and clarification. The newly added results certainly help us understand the proposed framework. - Regarding W1, I wanted to highlight the potential use case of mixed fine-tuning with other PEFT methods and LoRA with different parameters (e.g., rank). - Q1: I suggest that the authors incorporate the discussion of the necessity and justification for global thresholds and their implications. Also, the current conclusion is based on limited observations from the selected datasets. - Q3: Here, do the authors use macro-batching to obtain the per-sample gradient? If that is the case, then I retain my scalability concerns about the proposed framework. In recognition of the authors' rebuttal efforts, I increase my score to 5. 
--- Rebuttal 2: Comment: First of all, we would like to clarify the following definition and notations: **Definition 1.1** (Data Quality on Specific Domain $k$)**.** Given a model architecture $\theta$, a training configuration (optimizer, etc.), and a validation set $D_{val}$ in a specific domain $k$, the quality of training data $z$ is defined as follows: for $z\_1, z\_2 \in \mathcal{D}\_{train}$, if $\mathcal{L}\_{val}(\theta(z\_1), D\_{val}) < \mathcal{L}\_{val}(\theta(z\_2), D\_{val})$, then the quality of $z_1$ is considered higher than that of $z_2$. Here, $\mathcal{L}_{val}$ denotes the validation loss. In other words, the lower the validation loss, the higher the data quality. **Definition 1.2** (Data Quality on Collaborative Private Domains)**.** Given a model architecture $\theta$, a training configuration (optimizer, etc.), and a validation set $D\_{val}=\{\mathcal{D}^{(1)}\_{val}, \mathcal{D}^{(2)}\_{val}, \ldots, \mathcal{D}^{(K)}\_{val}\}$ for all $K$ tasks, the quality of training data $z$ is defined based on the validation loss of **the global model** $\theta_{merged}$ on $D_{val}$. Specifically, for $z\_1, z\_2 \in \mathcal{D}^{(k)}\_{train}$, if $\mathcal{L}\_{val}(\theta\_{merged}(z\_1), D\_{val}) < \mathcal{L}\_{val}(\theta\_{merged}(z\_2), D\_{val})$, then the quality of $z_1$ is considered higher than that of $z_2$. As in the single-domain case, lower validation loss indicates higher data quality. **Remarks** (Enhancing Data Quality on Collaborative Private Domains)**.** In the collaborative learning framework, the ratio and distribution of low-quality data are unknown a priori. Only the server has access to the global distribution of both high-quality and low-quality data, while individual users cannot infer the global distribution from their local distributions due to statistical heterogeneity. The server can infer the distribution of high-quality data from public anchor data. 
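As a concrete toy instance of Definition 1.1, consider a one-parameter least-squares model where one candidate sample has a corrupted label. The model, data, and single-step "training" below are illustrative stand-ins, not the paper's setup:

```python
# Toy instance of Definition 1.1: data quality is the ordering induced
# by validation loss after training on a sample. Here "training" is a
# single gradient step on a 1-D least-squares model y ~ theta * x.

def train_on(theta, sample, lr=0.1):
    x, y = sample
    # One gradient step on the loss 0.5 * (theta * x - y)^2.
    return theta - lr * (theta * x - y) * x

def val_loss(theta, val_set):
    return sum(0.5 * (theta * x - y) ** 2 for x, y in val_set) / len(val_set)

val_set = [(1.0, 2.0), (2.0, 4.0)]  # consistent with the target theta = 2
z_clean = (1.0, 2.0)                # label agrees with the target
z_noisy = (1.0, -2.0)               # corrupted label

theta0 = 0.0
loss_clean = val_loss(train_on(theta0, z_clean), val_set)
loss_noisy = val_loss(train_on(theta0, z_noisy), val_set)
```

Training on `z_clean` yields the lower validation loss (4.05 vs. 6.05 here), so Definition 1.1 ranks `z_clean` as higher quality than `z_noisy`.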
Our objective is to select data points that most significantly reduce the validation loss of the global model, rather than optimizing each local model independently. It is important to note that the scope of this study does not consider new models joining during training or continual learning paradigms. -------- Below are our responses to each follow-up question. We appreciate your insightful comments and the time and effort you have invested in helping us clarify our proposed method. We look forward to continuing the discussion with you on the OpenReview system if you have any remaining concerns or questions, or if you feel there is any misunderstanding on our part! **1. Mixed Model Architectures:** As per **Definition 1.1** above, data quality is specific to each model architecture. Different architectures naturally lead to the selection of different high-quality data points. Under the current definition of data quality, it is not reasonable to assume uniform data selection across models with varying configurations. For example, the data quality standard would differ significantly between LoRA models with rank $r=16$ and those with $r=4$. When it comes to individual models or groups of models from different domains with identical configurations, our proposed method is orthogonal and can be broadly applied. It is compatible with any model that allows gradient computation and can seamlessly integrate with state-of-the-art PEFT techniques such as AdaLoRA or Prompt Tuning. Given that current collaborative learning and distributed training scenarios predominantly focus on models with homogeneous architectures, we believe our algorithm has broad applicability. While we acknowledge the potential for mixed fine-tuning with various PEFT methods, our current focus is on providing a SOTA solution for the most common use cases in the field. --- Rebuttal 3: Comment: **2. 
Global Threshold:** > I suggest that the authors incorporate the discussion of the necessity and justification for global thresholds and their implications According to **Definition 1.2** and the **Remarks** above, global thresholds are essential in our framework to effectively handle the **domain heterogeneity** discussed in our paper. As we highlight in the paper, local models can interfere with each other when merged in weight space. Since our objective is to reduce the validation loss of the global model rather than of the local models, data selected by local thresholds does not necessarily yield an optimal global model after merging. Global thresholds are necessary to coordinate the different local distributions, thereby avoiding potential conflicts or interference during model merging or aggregation. > The current conclusion is based on limited observation from the selected datasets. We have also conducted experiments on the FiQA dataset with different settings (see more details in our response to Reviewer V7za). They all demonstrate the same pattern: the global threshold minimizes the sum of the distances to each locally optimal threshold, while local thresholds cannot lead to the globally optimal solution. According to the instructions for the discussion period, we are not allowed to include new figures; however, we will incorporate figures showing the score distribution under different thresholds in the final version of our paper. In addition, we provide details and a proof sketch of how we determine the global threshold as follows. Suppose the score distribution of all data (including both good and bad data) is denoted by $F(x)$, and the score distributions of good and bad data are denoted by $f(x)$ and $g(x)$, respectively. The relationship between these distributions can be expressed as: $$ F(x) = \alpha f(x) + (1-\alpha) g(x) $$ where $\alpha$ is the proportion of good data in the dataset.
We can estimate $F(x)$ using the scores from all clients: $$ \hat F(x) = \sum_{z_i \in D_{train}} \text{kernel}(x-S(z_i)) $$ Similarly, we estimate $f(x)$ using the scores from anchor data: $$ \hat f(x) = \sum_{z_i^* \in D_{anchor}} \text{kernel}(x-S(z^*_i)) $$ Assuming $\alpha$ is known (it can be estimated from the quality of the global data), we can estimate the bad-data distribution as follows: $$ \hat g(x)=\frac{\hat F(x)-\alpha \hat f(x)}{1-\alpha} $$ The optimal global threshold $\tau^*$ is then obtained by: $$ \tau^*= \arg\max_{\tau}\left\{\frac{\int_{\tau}^{+\infty}\alpha\hat f(x) \, dx}{\int_{\tau}^{+\infty}(1-\alpha)\hat g(x) \, dx}+\lambda\int_{\tau}^{+\infty}\hat F(x) \, dx\right\} $$ where $\lambda$ is a hyper-parameter that balances the amount of selected data against the good-data ratio. For a specific submodel $k$, the algorithm can be further refined. We denote the estimated good-data distribution for a specific data source as $\hat{f}^{(k)}(x)$: $$ \hat f^{(k)}(x) = \sum_{z_i^* \in D^{(k)}_{anchor}} \text{kernel}(x-S(z^*_i)) $$ The threshold for the specific submodel is then determined by: $$ \tau^*= \arg\max_{\tau}\left\{\frac{\int_{\tau}^{+\infty}\alpha\hat f^{(k)}(x) \, dx}{\int_{\tau}^{+\infty}(1-\alpha)\hat g(x) \, dx}+\lambda\int_{\tau}^{+\infty}\hat F(x) \, dx\right\} $$ ------ **3. Per-sample gradient calculation:** We appreciate the opportunity to clarify our approach to calculating sample gradients from a checkpoint: For a given checkpoint with parameters $\theta_t$, we compute the gradient of a single sample $z_i$ as follows: $$ \nabla\ell\left(\boldsymbol{\theta}_t;\boldsymbol{z}_i\right) =\nabla_{\theta_t} \text{LOSS}(f_{\theta_t}(x_i),y_i) $$ where $x_i$ and $y_i$ are the input and label of $z_i$, respectively.
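As a toy illustration of this per-sample scoring at a frozen checkpoint, the scalar linear-regression model and values below are our own assumptions, not the actual implementation:

```python
# Minimal sketch of per-sample gradient computation from a single fixed
# checkpoint, with no parameter updates. Model and values are illustrative.

def per_sample_grad(w, z):
    """Gradient of the loss (w*x - y)^2 w.r.t. w for one sample z = (x, y)."""
    x, y = z
    return 2 * (w * x - y) * x

checkpoint_w = 1.0                      # frozen checkpoint parameters
train_set = [(1.0, 1.0), (2.0, 1.0), (0.5, 2.0)]

# Because the checkpoint never changes, each sample's gradient is an
# independent computation and can be evaluated in parallel.
grads = [per_sample_grad(checkpoint_w, z) for z in train_set]
print(grads)  # [0.0, 4.0, -1.5]
```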
To enhance computational efficiency, we can focus on specific parts of $\theta$, such as individual layers or LoRA modules (e.g., q, k, v, o). For instance, if we concentrate solely on the gradients of the parameters in layer 0, we adjust the gradient calculation to: $$ \nabla\ell\left(\boldsymbol{\theta}_t;\boldsymbol{z}_i\right) =\nabla_{\theta_t^{layer0}} \text{LOSS}(f_{\theta_t}(x_i),y_i) $$ It is important to note that this gradient computation is based on a single checkpoint, with no parameter updates occurring throughout. This allows us to process each training data point in parallel, significantly improving computational efficiency. Regarding your concern about scalability, did you mean "micro-batching" here? In our scoring step, we use a micro-batch size of 1. Since there are no model updates during this process, scalability is not an issue. We would be glad to open-source our code implementation for more details. --- Rebuttal Comment 3.1: Comment: Thanks for the response. As for point 3, it should be 'micro-batching.' And I understand that the computation of the per-sample gradient is based on a single checkpoint. But I'm not sure what the authors mean here by "allow us to process each training data point in parallel". Popular auto-differentiation libraries such as PyTorch provide the aggregated gradient for a batch but not per-sample gradients. So, to get the per-sample gradient, micro-batching is generally used to run backpropagation once per data sample, where the benefit of parallelism is lost. So, could the authors further elaborate on why this part's scalability is **not** an issue? --- Rebuttal 4: Comment: We appreciate your insightful question regarding the scalability of our method! From an implementation perspective, the per-sample gradient is a quantity widely encountered in differential privacy and meta-learning.
There are many tools we can use to compute per-sample gradients efficiently if needed, such as: (1) the Opacus library, which extends PyTorch for differential privacy, or (2) functorch, where composing *vmap* and *grad* provides a significant speedup. In general, vectorization with *vmap* should be faster than running a function in a for-loop and competitive with manual batching. Moreover, it is important to clarify the context of our work: we focus on collaborative fine-tuning scenarios where clients have limited resources and data, which naturally constrains the scale of per-client data selection; this setting rarely involves extremely large datasets on individual clients. In multi-GPU setups, we can leverage parallel computation across GPUs to enhance efficiency. For clients with extremely large datasets or only a single GPU, in addition to using tools orthogonal to our method such as Opacus or functorch, we can also adapt the algorithm itself to address scalability: we compute a coarse-grained per-batch score, select all samples in batches exceeding a per-batch threshold, and apply per-sample gradient computation only to the lower-scoring batches. This extension balances efficiency and precision in data selection, addressing potential scalability issues in large-scale scenarios from the algorithmic perspective. --- Rebuttal 5: Comment: We would like to thank you once again for your time and effort in providing insightful feedback and comments. As only a few hours remain for further discussion, could you kindly let us know if you have any remaining concerns or points of confusion? We would be more than happy to discuss, address, and resolve them with you! If our responses have addressed your follow-up concerns, we would greatly appreciate your considering an increase in your rating to reflect these new improvements and clarifications, which we will incorporate in our updated paper.
Thank you for your continued engagement and support in strengthening our research! Title: Looking forward to your feedback and further discussion! --- Rebuttal 6: Title: Experimental results on raw data Comment: **Q4** > ablation study of directly applying the proposed methods to raw data (supposed high-quality data). We have completed all the requested experiments and are pleased to share the results. Regarding generalizability beyond the medical domain, we conducted experiments using the FiQA dataset, which focuses on financial question answering. For other results, please refer to our discussion with Reviewer V7za. *Dataset*: We have four clients, each with 2000 training samples. We split the data into a training set (8000 samples in total), a validation set (also called the anchor set, 100 samples), and a test set (500 samples). *Evaluation metric*: Responses from the model fine-tuned on our dataset are rated by GPT-4 on a scale from 1 to 10, reflecting criteria including relevance, accuracy, and fluency. To address potential positional bias, we send our response along with the benchmark output to GPT-4 twice, in both orders, and report the average of the two scores as the final performance score. The table below presents the performance of models trained with and without our data selection method across different proportions of low-quality data. The results demonstrate that our method effectively enhances data quality in all scenarios under **GPT-4 scoring**. In terms of the **accuracy** of data selection, our method consistently selected over 99% of the high-quality data across the different proportions of low-quality data.
| | GPT-4 Score (w/o data selection) | GPT-4 Score (w/ our data selection) | Accuracy | Global Threshold (score) |
| ---------------------- | --- | --- | --- | --- |
| 0% bad data (raw data) | 3.7400 | — | — | — |
| 20% bad data | 3.4735 | 3.6270 | 99.3125% | 0.0033 |
| 50% bad data | 2.9625 | 3.7175 | 99.7250% | 0.0017 |
| 80% bad data | 2.9155 | 3.8335 | 99.9375% | 0.0012 |
| 100% bad data | 2.2860 | — | — | — |

To understand the adaptability of our global threshold, we also analyzed how it changes with different proportions of low-quality data: the last column shows that our **global threshold adjusts across varying levels of data quality**.
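For concreteness, the kernel-density threshold estimation from our earlier response on global thresholds can be sketched as follows. This is a toy, stdlib-only sketch: the Gaussian kernel bandwidth, normalized densities, grid resolution, numerical floors, and all score values are our own illustrative assumptions:

```python
import math

# Sketch: estimate the overall score density F from all clients' scores and
# the good-data density f from anchor scores, back out the bad-data density
# g = (F - alpha*f) / (1 - alpha), then grid-search the global threshold tau.

def kde(points, x, h=0.05):
    """Gaussian kernel density estimate at x."""
    z = h * math.sqrt(2 * math.pi) * len(points)
    return sum(math.exp(-0.5 * ((x - p) / h) ** 2) for p in points) / z

def tail(density, tau, grid):
    """Numerical integral of density over [tau, +inf) on the grid."""
    dx = grid[1] - grid[0]
    return sum(density(x) for x in grid if x >= tau) * dx

def global_threshold(all_scores, anchor_scores, alpha, lam=0.01):
    grid = [i / 100 for i in range(101)]                 # scores in [0, 1]
    F = lambda x: kde(all_scores, x)
    f = lambda x: kde(anchor_scores, x)
    g = lambda x: max((F(x) - alpha * f(x)) / (1 - alpha), 0.0)
    def objective(tau):
        good = tail(lambda x: alpha * f(x), tau, grid)
        bad = max(tail(lambda x: (1 - alpha) * g(x), tau, grid), 1e-6)
        return good / bad + lam * tail(F, tau, grid)
    return max(grid, key=objective)

# Bad-data scores cluster low, good-data scores cluster high; the anchor
# set mirrors the good cluster. The chosen threshold lands above the
# low-score (bad) cluster.
bad = [0.10, 0.15, 0.20, 0.25, 0.30]
good = [0.70, 0.75, 0.80, 0.85, 0.90]
anchors = [0.72, 0.78, 0.82, 0.88]
tau = global_threshold(bad + good, anchors, alpha=0.5)
print(tau)
```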
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable feedback and appreciate the great efforts made by all reviewers, ACs, SACs, and PCs. We are grateful that the reviewers noted multiple positive aspects of our work, including:  * *[Motivation]* **studies an important issue** (Xfnf, igLM)  * *[Method]* **straightforward and effective** (Xfnf), **principled, and relatively simple to implement requiring only gradients (rather than higher order computations)** (ngAe), and **provides a general pipeline applicable to various models without the need for task-specific adjustments** (V7za) * *[Results]* **promising results and could have significant implications** (igLM) * *[Presentation]* **diagrams are well done and help the reader understand the method in an intuitive manner** (ngAe), **presented their ideas in a well-organized and logical structure, making the paper easy to follow** (igLM), and **the structure of the article is complete and easy to understand** (V7za). Firstly, we would like to restate and clarify our assumptions for data selection in collaborative fine-tuning to help readers better understand the scope of our work: * We maintain the same setup as previous works: to enable collaborative fine-tuning, each model shares the same architecture or the same LoRA adapter to adapt to different target tasks. To improve the efficiency of assigning utilities to each data point, we only calculate gradients for the parameter-efficient QLoRA modules. * Evaluation setup: For all methods, we use the same setup following previous literature. Our raw training set, hold-out validation set, and test set are IID; in our experimental setup, they come from the same data source. For the medical QA task, we split the MedAlpaca dataset into a raw training set, hold-out validation set, and test set. For the multilingual task, we split the MMedBench dataset into a raw training set, validation set, and test set.
For the low-quality training set, we pollute the raw training set by mixing in data from other domains; more details are in the PDF. In our paper, we focus on in-domain evaluation and leave out-of-domain or cross-domain settings (where the training set and test set come from different data sources) for future work. Secondly, we want to highlight the contributions drawn from our method design and experimental results: * To the best of our knowledge, we are the first to propose a data selection method for large language models in a collaborative setting, whereas previous work has mainly focused on traditional centralized settings. We offer the insight of viewing federated learning and model merging within the same framework, incorporate different experimental setups, and unify federated learning and model merging methods, making our approach broadly applicable. * Our method performs well on generation datasets and accounts for scenarios with bad data, whereas previous work has not considered downstream domain-specific generation tasks for large language models. Our method also does not require repeated training. We once again express our gratitude to all reviewers for the time and effort devoted to evaluating our work. We eagerly anticipate your further responses and hope for a favorable consideration of our revised manuscript. Pdf: /pdf/92ef809e251440a92525cd907b970c439b41cc53.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
AMAGO-2: Breaking the Multi-Task Barrier in Meta-Reinforcement Learning with Transformers
Accept (poster)
Summary: This paper proposes a method for training transformer-based policies for multi-task meta-RL settings, where the task distribution has multiple tasks each of which has parametric variation and the training and test sets both include all tasks. Focus is on developing a method that handles the scale variation in returns between different tasks. Strong empirical results. Strengths: - The method description is clear and well motivated. - The empirical results are strong and clearly presented. Weaknesses: - Framing issue: The considered setting is interesting and a fine target for research. However, parametric variation is not the grand challenge in meta-RL and the paper, in my opinion, does not do enough to clearly delineate the subproblem it tackles. - The ultimate goal of meta-RL is training an adaptive agent that can learn any new task it is presented with, not just parametric variation of existing tasks. This paper does not explore generalization to unseen non-parametric task variants. This should be more clearly stated early in the paper. - The test set of Meta-World ML45 benchmark is mentioned in the experiments, and reasonably ignored. However, since ML45 is already mentioned in the abstract, it would be better to mention that the paper only considers the training set. - Minor framing issue: Line 27 says meta-RL is about task identification at its core. While many meta-RL tasks can be solved via methods that reduce to identifying the task and then doing multi-task learning, meta-RL as a whole isn't limited to that. The paper adopts a broader viewpoint in the scaling beyond multi-task barrier paragraph. It would be better to update the introduction to match this perspective. Technical Quality: 4 Clarity: 3 Questions for Authors: - This is out of scope for the review, but I'm curious, what do you think is the best bet for a meta-RL benchmark that would enable investigating the non-parametric generalization ml45 fails to provide? 
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed briefly in the conclusion. Especially limitations concerning the non-parametric generalization mentioned in weaknesses should be discussed more extensively. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and for the constructive comments on limitations and writing. We will discuss your suggestions below. > Framing issue: The considered setting is interesting and a fine target for research. However, parametric variation is not the grand challenge in meta-RL and the paper, in my opinion, does not do enough to clearly delineate the subproblem it tackles. Yes, we fully agree that the goal should be adaptation to new tasks. Our perspective is that non-parametric variation will be possible if we can scale up our training set to keep that variation within distribution. This will likely take far more tasks than are available in benchmarks like Meta-World, and we generally think that attempts to measure or improve non-parametric variation on Meta-World, Atari, Procgen, and similar benchmarks can risk drawing conclusions from too small of a training set. The challenge is that we still struggle to overfit these small task sets and mostly leave this issue to be tackled by MTRL techniques where solutions might not necessarily be transferable to larger benchmarks. A first step might be to address the key MTRL challenges while maintaining the label-free meta-RL perspective. We will add more discussion to the paragraph on lines 42-55 of the introduction. >The test set of Meta-World ML45 benchmark is mentioned in the experiments, and reasonably ignored. However, since ML45 is already mentioned in the abstract, it would be better to mention that the paper only considers the training set. Yes, we can do that. We are not aware of any results that demonstrate strong transfer to the test set without introducing some restriction on the environment or policy to regularize training or without adding additional knowledge like language, goal states, or demonstrations. 
The reviewer is clearly familiar with Meta-World, but for any other readers who may not be, the train set of 45 tasks can still serve as a reasonable test of parametric variation because the environments are carefully designed to make these changes unobservable. > Minor framing issue: Line 27 says meta-RL is about task identification at its core … The paper adopts a broader viewpoint in the scaling beyond multi-task barrier paragraph. It would be better to update the introduction to match this perspective. We will rephrase to align this with the more detailed explanation given in Section 2. This line was meant to be a fast summary of the meta-RL perspective where adaptation reduces to exploring the environment to infer missing details of the unobserved parameters of the current task, which lets us treat multi-task RL problems as a special case. > This is out of scope for the review, but I'm curious, what do you think is the best bet for a meta-RL benchmark that would enable investigating the non-parametric generalization ml45 fails to provide? Meta-RL is facing a difficult benchmarking challenge at the moment, and it’s the reason our experiments need to drift into long-term memory and multi-task domains. A more affordable subset of XLand 2 [1] would probably be the best large-scale benchmark if it were accessible. There have been some recent efforts to create an open-source grid world version of this domain. One interesting direction for large-scale experiments (that is open-source) could be Gym Retro, which supports thousands of ALE-like games. Arcade games and platformers are a natural fit because they often give the player multiple lives/attempts before a reset. Meta-learning was the original goal of Retro, but this is obviously challenging and there hasn’t been much progress made. [1] *Human Timescale Adaptation in an Open-Ended Task Space*, Adaptive Agent Team, 2023 --- Rebuttal Comment 1.1: Comment: Thanks for the thoughtful response. 
I believe the paper would be a welcome contribution to meta-RL.
Summary: In this paper, the authors investigate utilizing the Transformer architecture for multi-task RL by giving the trajectories from multiple previous episodes as input tokens. More specifically, they address the issue of imbalanced rewards across tasks (e.g., in Atari, some games give very high-valued rewards while others do not). This has typically been resolved by utilizing additional knowledge such as task labels or by normalizing rewards; unlike previous works, they address it without such additional task knowledge. Through this, they show their modeling works effectively across diverse multi-task environments, especially when the reward distributions of the tasks differ greatly. Strengths: - Their problem setting is well positioned. As they point out, many meta-RL environments are mainly designed to evaluate whether the agent can handle small differences between tasks, such as different positions within a single task, but we should aim at a more extensive range of tasks to build a generalist agent. To that end, the imbalanced-reward issue is important to address, even though it has previously been handled with additional knowledge such as task labels. - This paper is well written with respect to its motivation and proposal, which are presented very clearly and are easy to understand. - They report strong experimental analyses. They run experiments on diverse environments such as Meta-World, POPGym, Procgen, Multi-Game Atari, and Gym Retro, comparing their modeling to ablated versions with and without its components. The results show the strength of their modeling (better handling of imbalanced rewards without task labels). Additionally, they analyze in-context learning performance on the Procgen environment. Weaknesses: - Some experimental analyses are hard to understand.
I will ask about them in the Questions section. Technical Quality: 3 Clarity: 3 Questions for Authors: - Questions on the experimental results - For the Multi-Game Procgen analysis, especially the result in Figure 6 (right), I am not sure I understood correctly. You write that "the policy update reduces to IL on a dynamic percentage of the replay buffer, but the ability to ignore a fraction of the dataset automatically allows for self-improvement in a way that standard IL does not." My understanding is that your agent can reduce the policy update to imitating the given actions, so the y-axis value should increase as training progresses. But it differs across environments — is that what you mean by "ignore a fraction of the dataset automatically"? - I did not clearly understand the Gym Retro results. Why did you run this experiment? As I understand it, it is meant to evaluate a larger scale of difference between tasks; why does Gym Retro provide a setting for this, and how did you set up the test? What do SuperMario1 and 3 refer to? What do SuperMario2japan and SuperMarioWorld refer to? How should I interpret the experimental results? - For Multi-Game Atari, can you compare these results with single-task agent results such as DreamerV2 [1] or state-of-the-art results on this environment? I would like to compare the relative performance of your model with a single-task agent, as shown in Figure 5. - For the results in Figure 7, can you test with diverse context lengths? I am interested in what happens if a very large context is given. [1] Hafner, Danijar, et al. "Mastering Atari with discrete world models." arXiv preprint arXiv:2010.02193 (2020). Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Toward a generalist agent, their investigation can be seen as limited to the homogeneous environment setting.
For example, one could extend it to train a single agent for multiple environments with different action spaces, as done in [1]. However, as this can be seen as beyond their scope, I do not consider it a critical limitation of their study. [1] Reed, Scott, et al. "A generalist agent." arXiv preprint arXiv:2205.06175 (2022). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the review and comments on the experimental results. The evaluation domains in our work are computationally expensive, so it was not possible for us to fully address your questions with finalized results during the rebuttal window. We have begun work on these experiments and will update the paper when the results are complete. > For Multi-Game Procgen experimental analysis, especially the result in Figure 6 right, I am not sure that I understood correctly. The actor loss is performing supervised learning on a filtered subset of (obs, action) pairs from the replay buffer that are estimated to have a positive advantage by the critics. This update would be equivalent to imitation learning if the filter were to give a weight of one to every example, and we are relying on an accurate critic to mask low quality actions from the loss. Figure 6 right measures the percentage of each batch that passes the filter (A(s, a) > 0) and is used to compute the supervised loss. When the critic network is initialized it approves approximately half of the actions. As training continues, the curve changes according to a combination of the rate of experience collection, environment difficulty, gradient steps per environment step, and the total size of the replay buffer. > For Multi-game Atari, can you compare this results with the single task agent results such as DreamerV2 [1] or state-of-the-art results on this environment? I want to compare the relative performance of your model with the single task agent like shown in Figure 5. Atari is a highly optimized domain and we are not claiming state-of-the-art results here by any means. Our goal with this experiment is to use Atari as an example of a standard multi-task setting without the multi-episode adaptation or partial observability of the preceding results. 
We are measuring whether the change in training objective is enough to make progress without the established Atari trick of clipping rewards in [-1, 1] and multi-task RL techniques that rely on per-task gradients or network outputs. We want to measure a direct ablation of the same basic method that does not add orthogonal details that have led to much of the sample efficiency gains on the ALE. We will work to aggregate the results of single-task versions of the method using the default scale-dependent update (in green) for additional context. > For results in Figure 7, can you test it with diverse length of context? I am interested what happens if very large number of context is given. The trade-off between context length and performance depends on the episode length of the environment. The high frame rate of pixel-based domains can create much longer episodes than typical toy meta-RL experiments. Ideally, the context would span the entire length of the 2-episode rollout, but computational constraints (GPU memory) prevent this. It is worth noting that although we are not using as long of a context as we would like, 768 images is still an unusually long input sequence when training online RL from scratch. > their investigation can be seen as being limited in the homogenous environment setting. For example, extending it to train a single agent for multiple environments where action spaces are different as did in [1] … I didn't think it is a critical limitation of their study. Yes we are padding the observation/action spaces to the same shape across environments when necessary. A variable-length token encoding like Gato would probably be the first step in attempting this kind of heterogeneous setting. 
We avoided this because it adds significant computational cost in terms of sequence length per timestep of policy memory, and we felt that the main point of return scale variance is best demonstrated in domains where the observation and actions are similar but the reward functions are different. > I didn't understand clearly the Gym Retro results. Why did you do this experiment? This experiment is meant to be an extension of the Atari setting where rewards are on very different scales. The difference between the games we’ve chosen here and Atari is that the Mario levels are similar enough where some kind of zero-shot transfer might be realistic. The SuperMario names refer to the titles of the Mario video game series that are supported by the Gym Retro environment. We are working to add more context to these results by including the other filtered imitation learning ablation (“Ind. Actor / Dep. Critic” - the red curve in Figure 3) as another reference. The "Dep. Actor / Dep. Critic" comparison is not possible here due to the action space of the environment. These runs began before the reviews were released but are expensive to collect and have not finished. They will be added to Figure 9. --- Rebuttal 2: Title: Reply to the rebuttal Comment: Thank you the authors for their rebuttal. They substantially addressed my concerns, so I maintain my score.
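The advantage-filtered supervised actor update described in the Figure 6 discussion above can be sketched as follows. This is a toy discrete-action setup; the policy, advantage values, and all names are illustrative assumptions, not the paper's implementation:

```python
import math

# Sketch of filtered behavioral cloning: (obs, action) pairs contribute to
# the supervised loss only when the critic estimates a positive advantage.
# The filter rate is the quantity plotted in Figure 6 (right).

def filtered_bc_loss(batch, policy_probs, advantage):
    """Return (loss, filter_rate) for one batch.

    batch: list of (obs, action); policy_probs(obs) -> action distribution;
    advantage(obs, action) -> critic's advantage estimate A(s, a).
    """
    kept = [(s, a) for s, a in batch if advantage(s, a) > 0]
    filter_rate = len(kept) / len(batch)
    if not kept:
        return 0.0, filter_rate
    loss = -sum(math.log(policy_probs(s)[a]) for s, a in kept) / len(kept)
    return loss, filter_rate

# Uniform 2-action policy; only the first two samples have A(s, a) > 0,
# so the third action is masked out of the imitation loss.
policy = lambda s: {0: 0.5, 1: 0.5}
adv = lambda s, a: {("s0", 0): 1.0, ("s1", 1): 0.5, ("s2", 0): -0.3}[(s, a)]
batch = [("s0", 0), ("s1", 1), ("s2", 0)]
loss, rate = filtered_bc_loss(batch, policy, adv)
print(round(loss, 4), round(rate, 4))  # 0.6931 0.6667
```

If the filter passed every pair (weight one everywhere), this reduces to standard imitation learning, which is the equivalence noted in the response above.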
Summary: This paper addresses the challenge of scaling meta-reinforcement learning (meta-RL) to handle multiple tasks without explicit task labels. It introduces a method where both the actor and critic objectives are converted to classification terms, decoupling optimization from the scale of returns. This approach builds upon recent Transformer-based meta-RL techniques. The effectiveness of this method is demonstrated through large-scale comparisons on several benchmarks, including Meta-World ML45, Multi-Game Procgen, and Multi-Task POPGym. The results show significant improvements in online multi-task adaptation and memory problems. Strengths: ### S1. Novel Approach and Problem Identification The paper addresses the critical issue of imbalanced training losses in multi-task RL, which often arise due to uneven return scales across tasks. By converting the actor and critic objectives to classification terms, the method effectively decouples optimization from return scales, which is a novel and practical approach. ### S2. Empirical Validation The method is validated through comprehensive experiments on multiple benchmarks, demonstrating significant improvements in task adaptation and memory problems. The results are clearly presented, with detailed comparisons to existing methods, highlighting the advantages of the proposed approach. ### S3. Clarity of Presentation The paper is well-structured, with clear and concise explanations of the proposed method. Figures and tables are effectively used to illustrate results, enhancing the clarity of the presentation. Weaknesses: ### W1. Lack of Theoretical Motivation and Analysis The paper focuses heavily on empirical results but lacks a thorough theoretical analysis of why the proposed method outperforms existing approaches. A deeper exploration of the theoretical foundations and a comparison with existing theoretical frameworks would strengthen the paper. ### W2. 
Limited Generalization Analysis While the method is validated on several benchmarks, the paper would benefit from a broader analysis of its generalization capabilities. Additional experiments on more diverse datasets and environments would provide a more comprehensive assessment of the method's applicability. For example, meta-world to multi-game procgen. ### W3. Assumption of Return Scale Variability Although the paper discusses that return scales across tasks can vary significantly, the proposed method does not explicitly address how to handle extreme variations in return scales beyond the classification transformation. A more detailed discussion on how the method can be adapted or modified for tasks with highly diverse return scales would be valuable. Technical Quality: 3 Clarity: 3 Questions for Authors: ### Q1. Scalability What are the computational requirements and scalability of the proposed method when applied to very large-scale environments and datasets? Are there any practical limitations or considerations? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have acknowledged the limitations related to the variability of return scales and the focus on specific benchmarks. However, a more detailed discussion on potential negative societal impacts and how to mitigate them would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your review. We will try to address your questions and would be happy to continue the discussion.

> By converting the actor and critic objectives to classification terms, the method effectively decouples optimization from return scales, which is a novel and practical approach.

To clarify, the classification loss terms are an implementation detail that appears in recent work we have cited and discussed in Sections 2 and 3. We do not want to claim the objectives themselves are novel. Instead, we show that while these ideas may have various motivations and minor benefits in other settings, they provide a major improvement to a key problem in the area of meta-learning and generalization without task labels.

> The paper focuses heavily on empirical results but lacks a thorough theoretical analysis of why the proposed method outperforms existing approaches.

We would be open to suggestions about how we can expand our analysis. Prior works have studied the challenges of multi-task optimization (Section 2). We do not think our work adds to the understanding of this issue, but it does help us remove an extra challenge where RL methods unintentionally place the objective of each task on a different scale (Section 3). We have added a more thorough explanation of this effect in a new figure attached to the rebuttal material.

> While the method is validated on several benchmarks, the paper would benefit from a broader analysis of its generalization capabilities … For example, meta-world to multi-game procgen.

We agree that transfer between domains like meta-world → multi-game procgen would be very interesting. This kind of transfer between different observation/action spaces would require a specialized input/output format for multi-modal trajectories of different shapes. Gato [1] is a good example of how we might go about this, but their method stretches input sequence lengths in a way that makes it expensive to evaluate memory.
> Although the paper discusses that return scales across tasks can vary significantly … A more detailed discussion on how the method can be adapted or modified for tasks with highly diverse return scales would be valuable.

In our experience, the extreme limits of value classification can be managed by globally rescaling all the rewards to a more stable range. The more challenging issue is the way variations in task difficulty drive uneven changes in values even when the initial and optimal returns are fairly well-bounded. For example, Multi-task POPGym and Meta-World ML45 both have bounded returns, but the benefits of the classification-style loss are much clearer on ML45 than POPGym. Reducing the number of classification bins may help improve performance in tasks where returns rapidly shift from the lower bound to the upper bound. We have added a demonstration of this on a POPGym task in the extra rebuttal material.

> What are the computational requirements and scalability of the proposed method when applied to very large-scale environments and datasets? Are there any practical limitations or considerations?

The learning update itself scales with the forward/backward pass of a single sequence model policy. It is comparable to imitation learning because the relative cost of the extra critic term decreases as the size of the shared Transformer increases. It does not directly depend on the task count or dataset size (replay buffer). The classification-style critic loss can be slower than standard regression because we have increased the dimension of the critic networks’ outputs to B bins. Relative to typical online RL experiments, we are evaluating large policies (10-20M+ parameters) and large datasets (20M+ timesteps). All the experiments were conducted on NVIDIA A5000 GPUs. The model is trained on one GPU while an extra GPU can be used for data collection to reduce wall-clock time. Please see our global reply for a discussion of limitations.
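As a rough illustration of the classification-style critic with B bins discussed above, the following is a minimal sketch of a common "two-hot" discretization of scalar return targets (an assumed implementation for illustration; the exact details in our code may differ). The key property is that the encoding is lossless inside the bin range, while training reduces to a cross-entropy loss whose gradients do not scale with the magnitude of the returns:

```python
import numpy as np

def two_hot_targets(returns, v_min, v_max, num_bins):
    """Encode scalar return targets as 'two-hot' distributions over num_bins
    evenly spaced bin centers in [v_min, v_max], so the critic can be trained
    with cross-entropy instead of scale-sensitive regression."""
    centers = np.linspace(v_min, v_max, num_bins)
    clipped = np.clip(returns, v_min, v_max)
    # lower bin index for each return (the top edge is folded into the last pair)
    low = np.clip(((clipped - v_min) / (v_max - v_min) * (num_bins - 1)).astype(int),
                  0, num_bins - 2)
    frac = (clipped - centers[low]) / (centers[low + 1] - centers[low])
    probs = np.zeros((len(returns), num_bins))
    rows = np.arange(len(returns))
    probs[rows, low] = 1.0 - frac       # mass on the bin at or below the return
    probs[rows, low + 1] = frac         # remaining mass on the bin above
    return probs, centers

# decoding with the bin centers recovers the original returns exactly
probs, centers = two_hot_targets(np.array([-0.8, 0.0, 0.3]),
                                 v_min=-1.0, v_max=1.0, num_bins=5)
decoded = probs @ centers
```

Globally rescaling rewards, as mentioned above, simply keeps the targets inside `[v_min, v_max]` so the clip is rarely active.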
> a more detailed discussion on potential negative societal impacts and how to mitigate them would be beneficial

Sequence-based RL on off-policy data is a flexible approach to memory, adaptation, and generalization settings. The aim of our work is to make this technique easier to use by making more complex multi-task benchmarks realistic for research. We do not think our method adds negative societal impacts beyond the existing risks of RL systems, but we would be happy to discuss this further if you have concerns.

[1] *A Generalist Agent*, Reed et al., 2022

---

Rebuttal Comment 1.1:

Title: Acknowledgement

Comment: I appreciate the authors for the detailed response. Most of my concerns are addressed and I think this is a good paper to be accepted, so I maintain my assessment.
Summary: This paper studies multi-task reinforcement learning using a context-based Transformer policy without task labels. To address optimization difficulties caused by imbalanced losses across different tasks, it proposes replacing actor and critic losses with classification losses. Ablation studies on several benchmarks show that the proposed classification-based actor-critic losses outperform the original algorithm.

Strengths:
1. The paper is well-focused on the problem of balancing losses in multi-task RL. The proposed classification losses are well-motivated and make sense.
2. Diverse benchmarks are used to evaluate the method.

Weaknesses:
1. Multi-task RL using in-context adaptation without task labels is not new. Some recent works [1,2] learn in-context meta-RL policies that adapt to diverse tasks, rather than adapting to variations within a single task.
2. In the setting of online RL, task labels are actually accessible in the simulator. All environments used in the experiments, such as ML45 and multi-game Atari, can provide task labels for each task. Therefore, balancing multi-task losses without task labels is not necessary. Additionally, there are methods that first learn a multi-task label-conditioned policy online and then distill it into a context-based policy [3].
3. The technical contribution is not strong. Compared to AMAGO [2], which proposes the multi-task RL framework using context-based Transformers and already achieves great results on these benchmarks, the contribution of this paper (a classification-based actor-critic loss, which already exists in the multi-task RL literature) is incremental.
4. According to the literature, the paper lacks comparisons with many baseline methods. The experiments only present ablation studies of the proposed classification loss.
5. Open access to the code is claimed in the checklist, but no supplementary material or external link is provided to access the code.
I also cannot find a discussion of limitations, which is claimed in the checklist. [1] Generalization to New Sequential Decision Making Tasks with In-Context Learning, 2023 [2] AMAGO: Scalable In-Context Reinforcement Learning for Adaptive Agents, 2024 [3] In-Hand Object Rotation via Rapid Motor Adaptation, 2022 Technical Quality: 2 Clarity: 3 Questions for Authors: Please see the Weaknesses. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 1 Limitations: I cannot find a discussion of limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1:

Rebuttal: Thank you for your review. We will discuss your concerns below.

> Multi-task RL using in-context adaptation without task labels is not new. Some recent works [1,2] learn in-context meta-RL policies that adapt to diverse tasks

We are focused on end-to-end online RL while Raparthy et al. belongs to a different category of in-context imitation learning methods that is cited on line 90 of our paper. To expand on this discussion, there are two main differences between (supervised) “in-context” imitation learning (ICIL) and context-based meta-RL. 1) ICIL typically relies on a dataset of expert demonstrations collected by single-task RL agents. This removes the opportunity for positive transfer between tasks during data collection. Many examples of large-scale generalist policy methods distill single-task experts into a multi-task policy, which highlights the difficulty of online multi-task RL even after many years of study. 2) ICIL approaches in-context learning much like few-shot prompting in language models, where we select a prompt demonstration and ask the model to continue this behavior over the following inputs. Meta-RL agents begin from a blank sequence and collect examples that will maximize the overall return. IL on aggregated single-task datasets can make it challenging to explore a new environment by creating a mismatch between the task prior of the multi-task policy and the single-task policies it is trying to imitate [1]. In other words, the multi-task policy may be imitating behavior that is not adaptive, but it lets us generalize from demonstrations. AMAGO is not a multi-task method, which we will discuss below.

[1] *Offline Meta Reinforcement Learning - Identifiability Challenges and Effective Data Collection Strategies*, Dorfman et al., 2021

> In the setting of online RL, task labels are actually accessible in the simulator … Therefore, balancing multi-task losses without task labels is not necessary.
This perspective treats current benchmarks as an end goal instead of a step towards a more general method. The ultimate goal of adaptive RL is to automatically generalize to very large numbers of tasks. Methods that rely on task labels scale with the number of tasks (gradient editing, separate task heads, etc.). So while it’s true that we can always find the ground-truth task label in the gym environment of common benchmarks like ML45 and Atari, our work ignores this information in hopes of researching a more general method that can extend past popular benchmarks. We do not want to rely on knowing the identity of a new task at test-time, because this removes chances of generalization to unseen tasks. And we would like to scale to more open-ended domains where the total task set is so diverse it is unclear how to one-hot label / count tasks at train-time. Meta-learning without task labels is capable of these goals in theory while multi-task RL with labels is not. This argument is covered in the main text, including in lines 52-55, 135-140, and 255-261, but we will add additional discussion in Section 2.

> AMAGO [2], which proposes the multi-task RL framework using context-based Transformers and already achieves great results on these benchmarks, the contribution of this paper … is incremental.

**Following the terminology in this paper, AMAGO is not a multi-task method**. It learns from many procedurally generated versions of a single task, and uses its Transformer to identify and adapt to those subtle variations. AMAGO evaluates on single-task POPGym, single-task Meta-World (ML1) and so on. It does not attempt ML45 or the multi-task POPGym setting introduced here. The green curves in our figures (“Dep. / Dep.”) represent an AMAGO-style optimization step.
We do not claim our method is fundamentally new, but we make an observation that an increasingly common implementation detail in recent work has a simple justification in an online multi-task setting, which unlocks the (very non-incremental) performance improvement in our results.

> the paper lacks comparisons with many baseline methods. The experiments only present ablation studies of the proposed classification loss.

The high-level approach of memory-based meta-RL is simple but does not allow for many changes, so most of the differences between methods in this area are implementation-specific. We are evaluating changes in training objectives and ensure a fair ablation of those changes by holding all other details fixed. Our experiments cover a variety of domains, model sizes, and sequence lengths. We provide external reference scores for context where possible (Figures 3, 4, and 5), but some of these settings are rarely (if ever) attempted. Tuning hyperparameters to measure state-of-the-art performance across different conceptual approaches on these benchmarks is not the goal of our work.

---

Rebuttal Comment 1.1:

Comment: Dear reviewer Ypn5,

We have received a reply from the other three reviewers, who maintain their scores and recommend accepting our paper. If you have any additional questions or feel that we can do more to address your initial concerns, please let us know, and we will do our best to get back to you before the end of the author discussion period.

Thanks,
Authors

---

Rebuttal Comment 1.2:

Comment: Thank you for your response and the clarifications regarding the differences between offline meta-imitation learning and online meta-RL. I appreciate the explanation that AMAGO is not a multi-task method. Both AMAGO and the proposed method are online meta-RL methods. It appears that this work builds on AMAGO, with the novelty primarily in the modification of the actor-critic loss. Given this, my initial concern about baseline selection persists.
If the primary contribution of this work is the novel actor-critic loss, it would be valuable to demonstrate its efficacy on more commonly used architectures in online meta-RL, not just within the AMAGO framework. There are numerous online meta-RL methods [1,2,3] that utilize simpler architectures without relying on the specific Transformer-based design shown in Figure 2. On the other hand, if the contribution lies in the combination of the loss function and the Transformer architecture, it becomes crucial to compare with these online meta-RL methods to clearly highlight the advantages of your approach. I believe that the implementation of these baselines is not overly specific, as they have been successfully applied across diverse domains in their original papers.

References:

[1] VariBAD: A Very Good Method for Bayes-Adaptive Deep RL via Meta-Learning, 2020.
[2] Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables, 2019.
[3] Improving Context-Based Meta-Reinforcement Learning with Self-Supervised Trajectory Contrastive Learning, 2021.

---

Reply to Comment 1.2.1:

Comment: Thank you for the reply. We hope we can give you a bit more context on this concern:

- We are only using AMAGO as an example of the simplest RL^2 pure memory strategy that relies entirely on the RL update to drive meta-learning. This framework adds the fewest extra meta-RL components and lets us focus on the bottleneck of the actor/critic objective without introducing other factors. We would not say this technique relies on a specific architecture. If anything the architecture in Figure 2 is simpler than many context-based meta-RL techniques: all we have is a sequence model and two outputs. Transformers are a strong choice for memory if we can train them stably, and AMAGO happens to include some extra details that make training the larger models used here more practical.
- We are trying to push this technique beyond the familiar toy continuous control benchmarks. Our results immediately jump to Meta-World ML45, which is essentially the upper limit of established meta-RL benchmarks in terms of scale (the methods you have cited here are not evaluating ML45). From there we move to problems that require longer memory (POPGym) and pixel-based learning (Procgen). This is why we cannot provide many comparisons with prior work, although we’ve included or run extra baselines when possible.
- It’s possible in theory to add the actor/critic change to the codebases of several more specialized meta-learning approaches and run the same direct ablation we have done with RL^2/AMAGO. Aside from the computational cost of repeating the comparison in every domain, there are some technical and practical reasons we have not done this.
  - Official implementations of these methods can be quite focused on continuous control and the classic meta-RL locomotion benchmarks.
  - variBAD and most variants of RL^2 use an on-policy base RL update (usually TRPO or PPO). The classification-style value net can still be used but the details would be different. We would also need a different policy update to make the actor resistant to the scale of returns.
  - The PEARL [2] (or TCL-PEARL [3]) task exploration strategy is not necessary in these domains because we can adapt quickly within the context of a single episode using memory. When you take this aspect of PEARL away and replace the sequence model with something that is not invariant to temporal order you get off-policy RL^2, which we’ve thoroughly evaluated.
  - It is very non-trivial to extend the dynamics modeling of methods like variBAD (and others) to pixel-based environments. We don’t doubt that would work (and it would be interesting to find out what extra details are needed to bring accurate pixel modeling to meta-RL / MTRL) but any effort to do this fairly would quickly turn into a Dreamer-like engineering problem and might be a paper on its own.
  - Adding Transformers to older meta-RL methods designed to train small RNNs is a significant change, even if the final architecture is not much more complex. The AMAGO paper describes this in detail. The model sizes evaluated in our existing results are much larger than those used in these other codebases.

**In summary:** We’ve chosen to evaluate our change in the vanilla sequence model framework where it is most directly impactful. The scale and novelty of our experimental domains limits how many reference comparisons we can include in the results, so we focus on creating a fair self-comparison. It’s possible to repeat the same comparison on top of other codebases, but in practice this involves much more than replacing one loss function with another.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their comments, and we will respond to individual questions and concerns below.  Several reviewers asked for an expanded discussion of our method’s limitations, which we will add to our conclusion. The main technical limitation of our technique is that it does not address all of the challenges of optimizing distinct task objectives. These challenges are not unique to RL and are the focus of many prior works, as discussed in Section 2. Instead, we are minimizing an additional challenge that value-based RL algorithms can introduce, which is that we are unintentionally rescaling our learning objective(s) throughout training according to a schedule that is difficult to predict or control. We demonstrate a simple approach that lets us manage this issue without introducing explicit task knowledge that would make our agents less applicable to general settings. Another limitation is that our experiments evaluate the ability to generalize across unseen variants of many different tasks but do not evaluate the ability to generalize to entirely new tasks.  This kind of non-parametric task variation requires large training sets of unique tasks that are beyond the reach of many current benchmarks. We think the path towards this goal requires the ability to learn from the small disjoint task sets currently available. Existing methods struggle to do this without multi-task optimization techniques that rely on our ability to label tasks. We aim to provide a simple way to scale a standard meta-learning approach to increasingly diverse training sets and enable new research on complex domains beyond typical toy problems. Pdf: /pdf/1e2dfa545b2e6e173f6e0e5148bb65556e46f0ba.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
On the Power of Small-size Graph Neural Networks for Linear Programming
Accept (poster)
Summary: This paper explores the effectiveness of small-sized GNNs in solving LP problems, addressing the discrepancy between theoretical predictions requiring large GNNs and empirical observations showing small GNNs' capability. The authors provide a theoretical foundation for the success of compact GNNs by proving that GNNs with polylogarithmic depth and constant width can solve specific LP classes, such as packing and covering LPs, by simulating a variant of the gradient descent algorithm. They introduce a novel GNN architecture, GD-Net, which outperforms traditional GNN models with fewer parameters. This understanding could lead to more efficient MILP solvers and reduce computational resources needed for solving LPs. Strengths: 1. The paper successfully establishes a novel and intriguing link between the AK algorithm and GNNs. This connection is not only innovative but also has the potential to spark interest and pave the way for future research in the field. 2. The introduction of the unrolled GNN architecture represents a contribution to GNN-based LP and MILP solvers. Weaknesses: There are several areas where improvements could significantly enhance its contributions. Addressing these points satisfactorily would make a strong case for elevating the paper's status to "Accept". 1. The presentation of Theorems 2-4, given that GD-Net is an unrolled version of the AK algorithm, appears somewhat straightforward (Question 1,2). 2. To convincingly demonstrate the performance superiority of the proposed methods, it is crucial to include more realistic benchmarks and baselines for comparison (Question 3, 4). 3. Given that unrolled neural networks typically enjoy several advantages over more general neural architectures, it would be highly beneficial for the paper to explicitly verify these benefits in the context of the proposed GD-Net (Question 5). Technical Quality: 3 Clarity: 3 Questions for Authors: 1. 
The paper claims that it closes the gap between theoretical and empirical perspectives in GNN-based LP solvers by illustrating the relationship between GD-Net and the AK algorithm. However, it has been observed that previous empirical studies predominantly utilized general-purpose GNNs, such as GCNs. Could similar theorems be achieved with general GNN backbones?
2. The demonstration that polylogarithmic-depth and constant-width GNNs can approximately solve LP problems is compelling. To better understand the significance of this result, could the authors discuss how close these findings are to the theoretical lower bound of what is required to solve LP problems?
3. This paper relaxes MILP problems to LPs and uses them as benchmarks, which may not fully reflect the practical significance of the proposed method. I suggest adopting more standard LP benchmarks and comparing GD-Net with heuristic algorithms.
4. In the specific setting described by [23], it is noted that GCNs underperform compared to GEN. To provide a comprehensive evaluation landscape, would it be possible for the authors to include GEN as a baseline in Table 1? This addition would help readers better assess the relative performance enhancements introduced by GD-Net.
5. The paper highlights the unique advantage of unrolling methods in achieving exceptional size generalization. However, the current presentation in Table 2 might not fully capture this advantage. Would the authors consider augmenting their experimental setup to include:
- Training on medium-scale datasets and testing on both small and large-scale datasets;
- Introducing the performance of GCNs as baselines for a direct comparison.

Confidence: 5
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1:

Rebuttal:

> **Q1: It has been observed that previous empirical studies predominantly utilized general-purpose GNNs, such as GCNs. Could similar theorems be achieved with general GNN backbones?**

A1: Yes, we can establish similar theorems with general-purpose GNNs. We can still derive polylog depth in the bound, but constant width is no longer guaranteed. By the universal approximation theorem, the ELU activation function can be simulated with arbitrary precision by a 2-layer and sufficiently wide perceptron. So if we replace each occurrence of ELU in GD-Net with this 2-layer perceptron, we then obtain a GCN. Note that this GCN still has polylog depth but the required width is no longer constant in $\epsilon$. Thanks for the comment. We will add this discussion in the next version.

> **Q2: Could the authors discuss how close these findings are to the theoretical lower bound of what is required to solve LP problems?**

A2: The polylogarithmic depth is also the lower bound, and thus our bound is tight. Due to the space limit, we kindly refer the reviewer to **Part 5: Tightness of Our Bound** in "Author Rebuttal" for further elaboration. Thanks for the great comment. We will add this lower bound in the next version.

> **Q3: This paper relaxes MILP problems to LPs and uses them as benchmarks, which may not fully reflect the practical significance of the proposed method. I suggest adopting more standard LP benchmarks and comparing GD-Net with heuristic algorithms.**

A3: We consider Bipartite Maxflow, a common model formulation applied to areas such as wireless communication [R1]. In our dataset, each bipartite graph is obtained by deleting all edges between $V'$ and $U'$ from a fully connected bipartite graph, and then randomly sampling the remaining edges with a probability of 60%, where $V'$ (and $U'$, resp.) is a random subset consisting of half of the left nodes (and the right nodes).
|        |          | GD-Net |           |       | GCN     |        |
| ------ | -------- | ------ | --------- | ----- | ------- | ------ |
| #Nodes | Obj      | A. Gap | R. Gap    | Obj   | A. Gap  | R. Gap |
| 1200   | 35398.62 | 429.8  | **1.20%** | 31206 | 4622.41 | 12.89% |
| 2000   | 58844.8  | 943.33 | **1.58%** | 52085 | 7703.14 | 12.88% |

According to the result, GD-Net consistently obtains better predictions compared to GCN, with only a 1% optimality gap from optimal solutions. We also conducted comparisons against the Ford-Fulkerson heuristic, specifically designed for solving maximum network flow problems. The table below presents the time taken to achieve the same optimality as GD-Net:

| #Nodes | GD-Net Obj | GD-Net time | Ford-Fulkerson time |
| -: | -: | -: | -: |
| 1200 | 35398.62 | 0.592s | 2.152s |
| 2000 | 58844.80 | 1.691s | 9.184s |

GD-Net is significantly faster than the Ford-Fulkerson heuristic in achieving high-quality solutions, demonstrating its efficiency and effectiveness.

> **Q4: In the specific setting described by [23], it is noted that GCNs underperform compared to GEN. To provide a comprehensive evaluation landscape, would it be possible for the authors to include GEN as a baseline in Table 1? This addition would help readers better assess the relative performance enhancements introduced by GD-Net.**

A4: We implemented the GEN [R2] model using the DGL package. The GEN model has 4 layers and a width of 64, the same as the implemented GD-Net. Experiment results show that GEN struggles with high training loss and consequently yields significantly lower-quality predictions compared to GD-Net and GCN:

| Model  | #Params. | V. Err (Cover) | V. Err (Packing) |
| ------ | -------- | -------------- | ---------------- |
| GD-Net | 1,656    | 2.19E-6        | 2.18E-4          |
| GCN    | 34,306   | 1.81E-6        | 1.69E-4          |
| GEN    | 18,177   | 0.021          | 0.024            |

Note that we have tuned hyper-parameters of GEN such as learning rate and dropout level, but GEN still under-performs GD-Net and GCN.
This seems to contradict the results in [23] where GEN performs similarly or better than GCN. A potential reason for such discrepancy is that [23] applies GEN based on a tripartite modeling of LP, while our paper adopts the conventional bipartite modeling in [7].

> **Q5: The paper highlights the unique advantage of unrolling methods in achieving exceptional size generalization. However, the current presentation in Table 2 might not fully capture this advantage. Would the authors consider augmenting their experimental setup to include: training on medium-scale datasets and testing on both small and large-scale datasets; introducing the performance of GCNs as baselines for a direct comparison?**

A5: We train GD-Net on instances with 500 variables and generalize it to instances with 50 and 1,000 variables. The results are presented in the table below.

|         |        |       | GD-Net |            |       | GCN    |            |
|:-|-:|-:|-:|-:|-:|-:|-:|
| Ins     | #Vars. | Obj   | A. Gap | R. Gap     | Obj   | A. Gap | R. Gap     |
| Cover   | 50     | 5.009 | 1.631  | **48.30%** | 5.027 | 1.649  | 48.80%     |
|         | 1000   | 6.186 | 2.788  | 83.60%     | 6.056 | 2.722  | **81.70%** |
| Packing | 50     | 0.399 | 2.959  | **88.10%** | 0.331 | 3.028  | 90.20%     |
|         | 1000   | 3.001 | 0.333  | **10.00%** | 3.001 | 0.333  | **10.00%** |

We see that while GD-Net requires far fewer parameters, it still achieves comparable or even better generalization than GCN. The results might not be satisfactory for scenarios with huge differences in problem sizes. However, our method can still accelerate the solving of non-negative LPs where training and testing distributions are similar. To our knowledge, no existing work in L2O has tackled the generalization problem across such a huge size gap. Resolving this issue is an interesting research topic for the future.

[R1] Li Wang, Huaqing Wu, Wei Wang, and Kwang-Cheng Chen. Socially enabled wireless networks: Resource allocation via bipartite graph matching. IEEE Communications Magazine, 53(10):128–135, 2015.
[R2] Guohao Li, et al. DeeperGCN: All you need to train deeper GCNs, 2020.
--- Rebuttal Comment 1.1: Comment: Thanks for your detailed response. The response has addressed my concerns and I have raised my rating to 7. This work establishes a link between the AK algorithm and GNNs and develops the lower bound matching GNNs for LP problems. This connection is not only innovative but also has the potential to spark interest and pave the way for future research in the field. For instance, it may inspire future GNN works in MILP, which is a more challenging and significant area. --- Reply to Comment 1.1.1: Comment: We greatly value your suggestions and recommendations. We will certainly incorporate the new results into our revised manuscript, as they not only strengthen our argument but also bridge the gap between theoretical bounds and practical applications.
Summary: This paper examines the expressive power of GNNs in representing linear programs (LPs). The authors first introduce a first-order iterative algorithm for packing and covering LPs, conceptualizing this algorithm as a GNN called GD-Net, applied to these LP types. They then provide a convergence rate of the proposed first-order algorithm, deriving a complexity upper bound for GNNs representing LPs. Finally, numerical experiments compare the performance of GD-Net with a general GCN.

Strengths: The writing is clear and easy to follow. The mathematical aspects of the paper are technically correct.

Weaknesses: The paper lacks significant contributions.

- The idea of conceptualizing iterative algorithms as GNNs is not novel:
  - Yang et al. "Graph Neural Networks Inspired by Classical Iterative Algorithms." ICML 2021.
  - Zhang and Zhao. "Towards Understanding Graph Neural Networks: An Algorithm Unrolling Perspective." 2022.
- The convergence rate of first-order algorithms for LPs is not new, even though GD-Net is not exactly the same as existing algorithms. For example:
  - Wang and Shroff. "A New Alternating Direction Method for Linear Programming." NeurIPS 2017.
  - Applegate et al. "Faster First-Order Primal-Dual Methods for Linear Programming using Restarts and Sharpness." Mathematical Programming 2023.

  Note that these papers provide exact linear convergence rates, which are better than GD-Net's sublinear rate that only converges to an approximate solution. Additionally, the sublinear rate itself is not new, as it is provided by Awerbuch and Khandekar.
- The empirical performance of GD-Net is not convincing. The paper only compares GD-Net with GCN, without showing advantages over traditional algorithms like ADMM or PDHG. Furthermore, comparing with commercial LP solvers like CPLEX or Gurobi would be a more ambitious but necessary goal to demonstrate real-world applicability.

Technical Quality: 2
Clarity: 4

Questions for Authors: Refer to "Weaknesses".
Confidence: 4 Soundness: 2 Presentation: 4 Contribution: 1 Limitations: This theoretical paper appears to have no potential negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1: The idea of conceptualizing iterative algorithms as GNNs is not novel. The convergence rate of first-order algorithms for LPs is not new. Given existing linear rates, the sublinear rate of GD-Net is not good enough.** Thanks for raising this insightful comment. We would like to make several clarifications on our contributions: (1) Our complexity bound for GD-Net focuses on better **dependency (polylog) on the problem sizes** $(m, n)$ rather than **better dependency on the accuracy** $\epsilon$. (The latter is the focus of most convergence-rate analyses of optimization algorithms.) In terms of dependency on problem sizes, our result improves the best-known bound for L2O networks **from polynomial to polylogarithmic**. We remark that the polylogarithmic dependency on $n$ is a little surprising, as it means that for packing/covering LPs, which are **global optimization** problems, GD-Net can find a near-optimal solution based on **very local information**. (2) Our bound is established by unrolling a carefully chosen algorithm (i.e., the AK algorithm). We are not aware of any other existing first-order algorithm such that unrolling it would lead to a polylog-depth GNN. For example, consider the LP instance $$\max_{x\geq 0} \quad 1^T x$$ $$\mbox{s.t.} \quad Ax \leq 1,$$ where $A$ is an $n\times n$ Boolean matrix containing three ones in each row and each column. Observe that the optimal solution is $(1/3, 1/3, \ldots, 1/3)$. For any constant $\epsilon$, the algorithm proposed in the paper "A New Alternating Direction Method for Linear Programming" needs $\Omega(n)$ iterations (since $R_x=\sup_k |x^k|=\Omega(n)$); in contrast, GD-Net only needs $\mathrm{polylog}(n)$ layers, which is exponentially better than $\Omega(n)$. --------------------------------------------------- > **Q2: The empirical performance of GD-Net is not convincing. The paper only compares GD-Net with GCN, without showing advantages over traditional algorithms like ADMM or PDHG. 
Furthermore, comparing with commercial LP solvers like CPLEX or Gurobi would be a more ambitious but necessary goal to demonstrate real-world applicability.** A2: We conducted experiments comparing GD-Net with PDLP [R1] and Gurobi. The table below presents the time taken by each method to achieve the same optimality as our GD-Net:

| Ins. | #Vars. | Opt | GD-Net Obj | GD-Net time | Gurobi | PDLP |
| :------ | -----: | ------: | ---------: | ----------: | -------: | -----: |
| | 1,000 | 3.334 | 3.701 | **0.105s** | 0.244s | 0.919s |
| Cover | 5,000 | 100.667 | 130.931 | **0.218s** | 9.401s | 0.921s |
| | 10,000 | 407.386 | 546.666 | **0.335s** | 103.322s | 1.001s |
| | 1,000 | 3.334 | 3.018 | **0.095s** | 0.208s | 0.746s |
| Packing | 5,000 | 100.88 | 78.994 | **0.216s** | 3.980s | 0.756s |
| | 10,000 | 406.946 | 302.04 | **0.314s** | 8.593s | 0.809s |

It shows that GD-Net consistently outperforms both Gurobi and PDLP in terms of speed while achieving comparable or superior objective values. More complementary experiments are presented in the Author Rebuttal to bolster the robustness of our methods. [R1] David Applegate, et al. Practical large-scale linear programming using primal-dual hybrid gradient, 2022. --- Rebuttal Comment 1.1: Title: Response to authors Comment: I sincerely appreciate the authors' efforts during the rebuttal stage, including their responses to my questions and the additional experiments provided. - The polylog complexity. I overlooked this aspect in my initial review. Thanks for pointing this out! I would like to adjust my score based on this point. - Novelty. As I mentioned earlier, the concept of unrolling a first-order method as a GNN is not new. The two papers I referenced are just examples on this topic. I strongly recommend that the authors thoroughly review the references within these papers, as well as those that cite them. A comprehensive discussion of the existing literature would be beneficial. 
- Additional experiments. The results still appear unconvincing, as the "GD-Net Obj" is significantly higher than "Obj." Does this indicate that the objective/solution obtained by GD-Net is far from optimal? What are the accuracy (stopping tolerance) parameters for Gurobi and PDLP? --- Reply to Comment 1.1.1: Comment: Thanks for your valuable feedback. > **Q1: Novelty. As I mentioned earlier, the concept of unrolling a first-order method as a GNN is not new. The two papers I referenced are just examples of this topic. I strongly recommend that the authors thoroughly review the references within these papers, as well as those that cite them. A comprehensive discussion of the existing literature would be beneficial.** **A1**: Thanks for the comment and the helpful advice. **Existing works on unrolling iterative algorithms as GNNs**: We acknowledge the reviewer's point that unrolling first-order methods as GNNs has been explored in prior research. We will incorporate a comprehensive literature review in the revised version, as presented below. Please let us know if any other relevant works are missing from this review. > The design of GD-Net is based on the concept of unrolling iterative algorithms as GNNs. Indeed, there is a body of research that has explored this approach. For example, Velickovic et al. [R1] investigated solving basic graph problems (e.g., shortest path, minimum spanning tree) with GNNs. By unrolling classical graph algorithms (e.g., breadth-first search, Prim's algorithm) as GNNs, they suggest that message-passing neural networks with a maximization aggregator may be best suited for such graph problems. Aiming to mitigate the oversmoothing, long-range dependency, and spurious-edge issues of GNNs, Yang et al. [R2] proposed a new family of GNN layers by unrolling and integrating the update rules of two classical iterative algorithms, namely proximal gradient descent and iteratively reweighted least squares. 
References [R3, R4] showed that many existing GNN models (such as GCN, GAT, APPNP) can be viewed as unrolling gradient descent on specific graph signal denoising problems. Chen et al. [R5] proposed new GNNs to improve graph signal denoising by unrolling sparse coding and trend filtering algorithms. References [R6, R7, R8] bridge the gap between graph convolution and iterative algorithms by providing a unified optimization framework for GNNs. [R1] Petar Velickovic, et al. Neural Execution of Graph Algorithms. ICLR 2020. [R2] Yongyi Yang, et al. Graph Neural Networks Inspired by Classical Iterative Algorithms. ICML 2021. [R3] Yao Ma, et al. A unified view on graph neural networks as graph signal denoising. CIKM 2021. [R4] Zepeng Zhang and Ziping Zhao. Towards Understanding Graph Neural Networks: An Algorithm Unrolling Perspective. KDD 2022. [R5] Siheng Chen, et al. Graph unrolling networks: Interpretable neural networks for graph signal denoising. IEEE Transactions on Signal Processing, 2021. [R6] Xuran Pan, et al. A unified framework for convolution-based graph neural networks. Pattern Recognition, 2024. [R7] Meiqi Zhu, et al. Interpreting and unifying graph neural networks with an optimization framework. Proceedings of the Web Conference, 2021. [R8] Hongwei Zhang, et al. Revisiting graph convolutional network on semi-supervised node classification from an optimization perspective. arXiv preprint arXiv:2009.11469, 2020. --- Reply to Comment 1.1.2: Comment: **Contribution of our paper**: Despite the above discussion, we would like to kindly provide further comment on our paper's contribution. Our main contribution is not merely "proposing another new GNN architecture by unrolling another iterative algorithm." Instead, our major contribution is twofold. 1. We present a new **theoretical explanation** of the empirical phenomenon that small-size GNNs can solve LPs. 
Specifically, we show that polylog-depth constant-width GNNs are expressive enough to solve a broad class of LPs. Moreover, **this polylog bound is also tight** (please see Part 5: Tightness of Our Bound in the "Author Rebuttal" for details). 2. For **practical use**, we propose a parameter-efficient and interpretable GNN architecture, namely GD-Net, for LPs. Experiments verify its efficiency and effectiveness. Notably, GD-Net generates better solutions with an order of magnitude fewer parameters than GCN. We believe the above contribution is not straightforward for the following reasons. 1. This work is motivated by observing a significant gap between empirical phenomena and theoretical explanations in L2O (learning to optimize). Precisely, in practice, a GNN with modest width and fewer than ten layers often suffices to achieve good performance in approximating LPs with hundreds of nodes and constraints. However, the best-known theoretical explanation [R9] requires the depth of the GNN to grow polynomially with the problem size. 2. The selection of the AK algorithm is nontrivial. The AK algorithm can be viewed as applying a variant of gradient descent to a potential function. The design of the potential function is careful and nontrivial, since it is required to be differentiable and convex; more importantly, any stationary point of the potential function should be a nearly optimal solution. These properties give the AK algorithm low computational complexity. We are not aware of any other existing first-order algorithm such that unrolling it would lead to a polylog-depth GNN. [R9] Chendi Qian, et al. Exploring the power of graph neural networks in solving linear optimization problems. AISTATS 2024. --- Reply to Comment 1.1.3: Comment: > **Q2: Additional experiments. The results still appear unconvincing, as the "GD-Net Obj" is significantly higher than "Obj." Does this indicate that the objective/solution obtained by GD-Net is far from optimal? 
What are the accuracy (stopping tolerance) parameters for Gurobi and PDLP?** **A2:** In our previous experiments, 1. The parameter $\epsilon$ in GD-Net is set to 0.2, which means GD-Net is expected to output a $(1+\epsilon)=1.2$-approximation solution. 2. We run Gurobi and PDLP with the default relative tolerance (precisely $10^{-6}$ for Gurobi and $10^{-9}$ for PDLP), and stop them once they achieve the same objective value as the output of GD-Net. We acknowledge that, in general, an L2O network may not achieve the very high precision of $10^{-6}$ or $10^{-9}$ that traditional optimization algorithms can reach. This is because traditional algorithms can run for an arbitrary number of iterations, given sufficient resources, while the layers of a neural network are fixed after training. **However, L2O networks can still help accelerate the solving of optimization problems that require higher precision.** Specifically, as demonstrated in many related studies [R10, R11, R12], the L2O approach is often used to quickly generate feasible solutions or to provide strong initial warm starts for traditional optimization algorithms. **Combining L2O networks with traditional optimization algorithms can achieve the desired precision faster than using traditional optimization algorithms alone.** To demonstrate the practical utility of GD-Net, we conducted experiments to assess how effectively GD-Net can serve as a warm start to enhance the efficiency of PDLP. We measured the time it takes to solve problems via PDLP using warm starts provided by GD-Net, aiming to meet predefined stopping criteria. The relative tolerance was set at 1e-3, which is a typical precision requirement in real-world applications. We also recorded the time PDLP alone required to solve the instances, for comparison. Additionally, we calculated the improvement ratio by comparing the time taken by PDLP alone with the time taken when using GD-Net's output as a warm start. 
| dataset | size | time for PDLP alone (s) | time for GD-Net + PDLP (s) | Improvement Ratio |
| -: | -: | -: | -: | -: |
| Packing | 5000 | 5.461 | 4.920 | 9.906% |
| Covering | 5000 | 6.893 | 5.619 | 18.483% |

The experimental results show that utilizing GD-Net as a warm start can reduce the solving time by about 10% and 18% for the Packing and Covering datasets respectively, compared to using PDLP alone. We thank the reviewer again for the valuable comments and suggestions. We will include the above results and discussions in the manuscript. [R10] Qinyu Han et al. A GNN-guided predict-and-search framework for mixed-integer linear programming. ICLR, 2023. [R11] Rajiv Sambharya et al. Learning to Warm-Start Fixed-Point Optimization Algorithms. JMLR, 2024. [R12] Rajiv Sambharya et al. End-to-end learning to warm-start for real-time quadratic optimization. Learning for Dynamics and Control Conference, 2023.
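As a side note on the Boolean-matrix packing instance from Q1 above: its claimed optimum of $n/3$ can be verified without any solver via an LP-duality certificate. The sketch below is purely illustrative; the circulant construction of $A$ is one hypothetical way to place three ones in each row and each column.

```python
n = 9
# Circulant Boolean matrix with three ones in each row and each column
A = [[1 if (j - i) % n in (0, 1, 2) else 0 for j in range(n)] for i in range(n)]

x = [1.0 / 3.0] * n  # candidate primal solution for max 1^T x s.t. Ax <= 1, x >= 0
y = [1.0 / 3.0] * n  # candidate dual solution for min 1^T y s.t. A^T y >= 1, y >= 0

# Primal feasibility: each row of A has three ones, so (Ax)_i = 1 <= 1
assert all(sum(A[i][j] * x[j] for j in range(n)) <= 1 + 1e-9 for i in range(n))
# Dual feasibility: each column of A has three ones, so (A^T y)_j = 1 >= 1
assert all(sum(A[i][j] * y[i] for i in range(n)) >= 1 - 1e-9 for j in range(n))

primal_obj = sum(x)
dual_obj = sum(y)
# Matching primal and dual objectives certify optimality by weak LP duality;
# both equal n/3, as claimed in the rebuttal
assert abs(primal_obj - dual_obj) < 1e-9
assert abs(primal_obj - n / 3) < 1e-9
```

Since the primal and dual objectives coincide at $n/3$, the point $(1/3, \ldots, 1/3)$ is indeed optimal for this instance.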
Summary: The paper investigates the capability of small-sized Graph Neural Networks (GNNs) to solve linear programming (LP) problems, specifically focusing on polylogarithmic-depth, constant-width GNNs. It provides both theoretical proofs and empirical evidence demonstrating that these GNN architectures can effectively solve packing and covering LPs, which are common in various optimization contexts. The introduction of a novel GNN architecture, termed GD-Net, is highlighted, showing superior performance in terms of parameter efficiency and problem-solving capability compared to traditional GNNs. Strengths: 1. **Theoretical Foundation**: The paper successfully bridges the gap between theoretical predictions and empirical results by proving that small-sized GNNs can efficiently solve packing and covering LPs. 2. **Innovative Architecture**: The introduction of GD-Net, which significantly outperforms existing GNN models using fewer parameters, provides practical insights into the design of efficient neural network architectures for optimization problems. 3. **Comprehensive Experiments**: Extensive empirical evaluations demonstrate the effectiveness of the proposed approach across different datasets and problem sizes, which solidifies the paper’s claims. 4. **Impactful**: As you noted, the results are inspiring and provide valuable guidance for the L2O community on GNN structure and size design. Weaknesses: **Scope of Applicability**: The paper primarily focuses on packing and covering LPs. The generalizability of the proposed methods to other types of LPs or optimization problems is not addressed. Technical Quality: 4 Clarity: 3 Questions for Authors: 1. Could the authors elaborate on potential modifications or extensions of the GD-Net architecture that might enable it to handle a broader range of LPs or even mixed integer linear programming problems? 2. 
What are the limitations in terms of scalability when dealing with extremely large datasets or more complex network architectures? Confidence: 2 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: **Generalizability**: The techniques are validated primarily on packing and covering LPs, and there is limited discussion on their effectiveness for other classes of LPs or more complex optimization tasks. Although I am not an expert in this specific area, the encouraging results give the L2O community some guidance on designing GNN architectures and sizes. I defer to other reviewers for a more detailed critique. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > **Q1: Could the authors elaborate on potential modifications or extensions of the GD-Net architecture that might enable it to handle a broader range of LPs or even mixed integer linear programming problems?** A1: Intuitively, GD-Net can be viewed as unrolling gradient descent on a carefully selected potential function. This potential function is differentiable and convex; more importantly, any stationary point of the potential function is a nearly optimal solution. For a broader class of LPs, and even MILPs, a natural modification of GD-Net is to design another potential function that still enjoys the above properties. --- > **Q2: What are the limitations in terms of scalability when dealing with extremely large datasets or more complex network architectures?** A2: If we understand the question correctly (please correct us if not), "extremely large datasets" means datasets of large problem dimensions, say, LPs with 10,000 variables or more. When dealing with such datasets, we need more complex network architectures with more parameters. There are two main limitations in terms of scalability that need to be addressed. **Computational resources**: Handling large datasets requires longer training time, more memory, and higher infrastructure costs. Systems with limited resources would struggle to handle large datasets, resulting in performance issues and inefficiencies. In that sense, designing more parameter-efficient L2O networks is crucial for scaling up and enhancing performance in resource-constrained systems. Our GD-Net, for example, offers an improvement in parameter efficiency compared to traditional GCNs. Besides, efficient training and inference methods are also needed to reduce computational costs. **Number of training examples**: As the problem dimension grows and the network becomes more complex (with more parameters), the risk of overfitting (a.k.a. poor generalization) also increases. 
To mitigate overfitting, the number of training examples should grow along with the problem dimension and the number of model parameters. This poses a challenge for data collection. Efficient data-generation methods or self-supervised training paradigms can be leveraged to address this challenge. Thanks for the inspiring comments. We will add the above discussion on Q1 and Q2 in the next version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed and thoughtful rebuttal. I appreciate the insights you've provided on extending the GD-Net architecture to handle a broader range of LPs and potentially MILPs, as well as the considerations for scalability with large datasets and complex network architectures. I'm genuinely excited to see how your work progresses and its potential application in the MILP domain. I will maintain my score, and I look forward to the future developments of your research. --- Reply to Comment 1.1.1: Comment: Thank you for your encouraging and insightful feedback. We greatly appreciate your recognition of our efforts. Your insights are valuable as we advance our work. We will definitely incorporate these new results into our revised manuscript to further enhance our work.
null
null
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable comments. We provide detailed responses to each reviewer individually. Note that more than one reviewer suggested additional numerical experiments to bolster the robustness of our findings. We conducted these experiments and summarize them below. --- **1. Comparison with Traditional Algorithms and Commercial Solvers:** We include two additional baselines: the traditional algorithm PDLP [R2] and the commercial solver Gurobi. The table below presents the time taken by each method to achieve the same optimality as our GD-Net:

| Ins. | #Vars. | Opt | GD-Net Obj | GD-Net time | Gurobi | PDLP |
| :------ | -----: | ------: | ---------: | ----------: | -------: | -----: |
| | 1,000 | 3.334 | 3.701 | **0.105s** | 0.244s | 0.919s |
| Cover | 5,000 | 100.667 | 130.931 | **0.218s** | 9.401s | 0.921s |
| | 10,000 | 407.386 | 546.666 | **0.335s** | 103.322s | 1.001s |
| | 1,000 | 3.334 | 3.018 | **0.095s** | 0.208s | 0.746s |
| Packing | 5,000 | 100.88 | 78.994 | **0.216s** | 3.980s | 0.756s |
| | 10,000 | 406.946 | 302.04 | **0.314s** | 8.593s | 0.809s |

GD-Net consistently outperforms both Gurobi and PDLP in terms of speed while achieving comparable or superior objective values. --- **2. Comparison with GEN [R3]** We implemented the GEN model using the DGL package. The GEN model has 4 layers and a width of 64, the same as the implemented GD-Net. Experimental results show that GEN struggles with high training loss and consequently yields significantly lower-quality predictions compared to GD-Net and GCN:

| Model | #Params. | V. Err (Cover) | V. Err (Packing) |
| ------ | -------- | -------------- | ---------------- |
| GD-Net | 1,656 | 2.19E-6 | 2.18E-4 |
| GCN | 34,306 | 1.81E-6 | 1.69E-4 |
| GEN | 18,177 | 0.021 | 0.024 |

--- **3. More Practical Setting** We consider Bipartite Maxflow, a common model formulation applied to areas such as wireless communication [R1]. 
In our dataset, each bipartite graph is obtained by deleting all edges between $V'$ and $U'$ from a fully connected bipartite graph and then randomly sampling the remaining edges with probability 60%, where $V'$ (resp. $U'$) is a random subset consisting of half of the left (resp. right) nodes.

| #Nodes | GD-Net Obj | A. Gap | R. Gap | GCN Obj | A. Gap | R. Gap |
| -----: | ---------: | -----: | --------: | ------: | ------: | -----: |
| 1200 | 35398.62 | 429.8 | **1.20%** | 31206 | 4622.41 | 12.89% |
| 2000 | 58844.8 | 943.33 | **1.58%** | 52085 | 7703.14 | 12.88% |

According to the results, GD-Net consistently obtains better predictions than GCN, with only about a 1% optimality gap from optimal solutions. We also conducted comparisons against the Ford-Fulkerson heuristic, which is specifically designed for solving maximum network flow problems. The table below presents the time taken to achieve the same optimality as GD-Net:

| #Nodes | GD-Net Obj | GD-Net time | Ford-Fulkerson time |
| -: | -: | -: | -: |
| 1200 | 35398.62 | 0.592s | 2.152s |
| 2000 | 58844.80 | 1.691s | 9.184s |

GD-Net is significantly faster than the Ford-Fulkerson heuristic in achieving high-quality solutions, demonstrating its efficiency and effectiveness. --- **4. Size Generalization** We train GD-Net on instances with 500 variables and generalize it to instances with 50 and 1,000 variables. The results are presented in the table below.

| Ins | #Vars. | GD-Net Obj | A. Gap | R. Gap | GCN Obj | A. Gap | R. Gap |
| :------ | -----: | ---------: | -----: | ---------: | ------: | -----: | ---------: |
| Cover | 50 | 5.009 | 1.631 | **48.30%** | 5.027 | 1.649 | 48.80% |
| | 1000 | 6.186 | 2.788 | 83.60% | 6.056 | 2.722 | **81.70%** |
| Packing | 50 | 0.399 | 2.959 | **88.10%** | 0.331 | 3.028 | 90.20% |
| | 1000 | 3.001 | 0.333 | **10.00%** | 3.001 | 0.333 | **10.00%** |

We see that GD-Net, while requiring far fewer parameters, still achieves generalization comparable to or even better than that of GCN. To the best of our knowledge, no existing work in L2O has tackled generalization across such a large size gap. Improving large-gap size generalization is an interesting topic for future research. --- **5. Tightness of Our Bound** Our main result says that "polylogarithmic-depth constant-width GNNs are expressive enough to solve packing/covering LPs." Here, we remark that the polylogarithmic dependency of the depth is also necessary. Specifically, Kuhn et al. showed (first paragraph on page 6 in [R4]) that for the fractional maximum matching problem, a special kind of packing LP, every constant-factor-approximation distributed algorithm requires at least $\Omega(\sqrt{\log n/\log\log n})$ rounds. Moreover, since one layer of a GNN can be naturally simulated by one round of a distributed LP algorithm (see the second paragraph on page 5 in [R4]), we conclude that GNNs need at least $\Omega(\sqrt{\log n/\log\log n})$ layers. [R1] Li Wang, Huaqing Wu, Wei Wang, and Kwang-Cheng Chen. Socially enabled wireless networks: Resource allocation via bipartite graph matching. IEEE Communications Magazine, 53(10):128-135, 2015. [R2] David Applegate, et al. Practical large-scale linear programming using primal-dual hybrid gradient, 2022. [R3] Guohao Li, et al. DeeperGCN: All you need to train deeper GCNs, 2020. [R4] Fabian Kuhn, et al. Local Computation: Lower and Upper Bounds. J. ACM 63(2): 17:1-17:44, 2016.
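For readers unfamiliar with the Ford-Fulkerson baseline used in Part 3 above, here is a minimal Edmonds-Karp variant (Ford-Fulkerson with BFS-chosen augmenting paths) on a toy bipartite instance. The graph, node names, and capacities below are made up for illustration and are not the benchmark instances.

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: repeatedly augment along shortest paths in the residual graph.

    `cap` is a dict-of-dicts of residual capacities, mutated in place.
    """
    flow = 0
    while True:
        # BFS for a shortest augmenting path from s to t
        parent = {s: None}
        q = deque([s])
        while q and t not in parent:
            u = q.popleft()
            for v, c in cap[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    q.append(v)
        if t not in parent:
            return flow  # no augmenting path left: flow is maximum
        # Bottleneck capacity along the path
        v, bottleneck = t, float("inf")
        while parent[v] is not None:
            u = parent[v]
            bottleneck = min(bottleneck, cap[u][v])
            v = u
        # Augment: decrease forward residuals, increase reverse residuals
        v = t
        while parent[v] is not None:
            u = parent[v]
            cap[u][v] -= bottleneck
            cap[v].setdefault(u, 0)
            cap[v][u] += bottleneck
            v = u
        flow += bottleneck

# Toy bipartite instance: source -> left nodes -> right nodes -> sink
cap = {
    "s":  {"l0": 3, "l1": 2},
    "l0": {"r0": 2, "r1": 2},
    "l1": {"r0": 1},
    "r0": {"t": 2},
    "r1": {"t": 2},
    "t":  {},
}
print(max_flow(cap, "s", "t"))  # prints 4 (the sink's total incoming capacity)
```

The BFS path choice gives the standard $O(VE^2)$ bound; the benchmark comparison above presumably uses a tuned implementation, so this sketch only conveys the algorithmic idea.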
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Parallel Backpropagation for Shared-Feature Visualization
Accept (spotlight)
Summary: This paper presents an innovative method for explaining why an IT neuron that is supposed to be tuned to a particular object class or category would respond to out-of-category stimuli. The proposed method uses parallel backpropagation to highlight the features in the out-of-category stimuli that are shared with the stimuli in the tuned class. The method helps discover some shared features between highly activating bodies and objects in IT neurons, but fails in other cases. Strengths: The method is original, innovative, sound, and very clearly presented. It can be useful for IT neurophysiological research to resolve some mysteries in the neural codes in IT. Weaknesses: The paper focuses on the presentation of the method. The presentation of the scientific results is not very systematic. They are shown as anecdotal illustrations of some success and failure cases. Hence, the scientific insights provided are a bit preliminary, and the significance of the results is perhaps limited. The contribution is solid but perhaps a bit thin. Technical Quality: 3 Clarity: 4 Questions for Authors: Perhaps the usefulness of the method is more general than the authors envisioned. Can the method be applied to neurons in V4 and V1? Can it be used to analyze artificial deep networks to understand "neural codes" in CNNs? Also, the assumption of the approach is that neurons are tuned to one single category, but could neurons be tuned to multiple categories or multiple features? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: There are no concerns regarding limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the valuable comments and the encouraging feedback. Regarding questions/criticisms: > The presentation of the scientific results is not very systematic [... and] a bit preliminary. We agree that the results on feature selectivity in macaque body patches can be expanded on to get a more complete picture of their tuning properties for both bodies and non-body objects. As you stated, we put the focus of the paper more on a rigorous introduction of the visualization method. While this approach has its drawbacks, we argue that it also has advantages. Ideally, we would like to see the proposed method applied not only to body patches but to other category-selective regions as well, to shed light on the general concept of category-selective processing in the brain. To make that feasible, we aimed to introduce the procedure in a detailed fashion to facilitate easy use by other researchers. Further, focusing on the class-agnostic parts of the paper may make it interesting for the broader audience at NeurIPS. We would also like to point out that this is, to the best of our knowledge, the first work on visualizing the features underlying category selectivity in IT cortex. That being said, we agree that a more systematic approach would be of interest, and future work will focus on that. > Can the method be applied to neurons in V4 and V1? This is an intriguing question, since semantic-category selectivity is mostly studied in IT. One interesting approach could be to replace the semantic selectivity by selectivity for a specific shape/texture descriptor. As a naive example, a neuron that responds to semicircles could be classified as a (seemingly) circle-selective neuron, as the semicircle feature is more common in the image distribution of circle images than in the global distribution. 
Applying the proposed method should then highlight that the neuron responds to semicircles because they are a subfeature of the more general shape descriptor. While this example is somewhat obvious, application of our method might be of use when considering the complex feature tunings in V4. > Can it be used to analyze artificial deep networks? We would argue that the method can indeed be used for studying neural codes in CNNs. One interesting case is an application to a classification model's readout layer. Here, category selectivity is vitally important, since any response to out-of-category images is interpreted as evidence for an incorrect hypothesis. Consider an image $x$ which is incorrectly classified as category $c$. One could then use a pool of images from category $c$, find the most similar image to $x$ according to $s(\cdot,\cdot)$, and visualize the shared features. This could yield insight into why $x$ is misclassified, and why the model uses those specific features as evidence for class $c$. One could also study category selectivity in hidden layers of vision models. We provide a proof-of-concept for this case in the appendix (A.1.1), where we compute visualizations for artificially created category-selective units. > Could neurons be tuned to multiple categories? The question of a neuron being tuned to multiple categories is an interesting one, as definitions of a category can be quite fuzzy (consider, for example, the case of face neurons and body neurons). However, let's assume that a neuron is tuned to two semantically rather different categories. In that case, our work, together with previous papers, would suggest that this is because there are shared features that are common in the image distributions of both categories. In that case, the proposed method could still be used to visualize these shared features. One could do this by sampling images from the two categories and feeding pairs of them through the visualization pipeline. 
> Could neurons be tuned to multiple features? We identify two avenues in which a neuron's possible tuning for multiple different features relates to the proposed method. The first is the possibility that a neuron prefers different features across categories, i.e., feature A for category 1 and feature B for category 2, where features A and B are the results of different computational processes. We employ two approaches to ensure that the method does not highlight these as shared features. First, we train the neurons' readout model only on within-category images and then test on out-of-category images. Good performance on out-of-category images implies that the features driving the responses are (at least partially) shared between the two categories. Second, the visualization method is able to detect cases in which features are not shared between two images. Considering the similarity metric in equation (2), the activation vectors $a_1$ and $a_2$ will be dominated by different dimensions if the features between the two images are not shared, yielding a small dot product and thus a small similarity $s$. Since the norm of the attribution map is bounded by the image similarity $s$ (equation (7)), the attribution map will have low intensity, reflecting that the two images drive the neuron due to different features. The second relevant avenue is the situation in which a neuron is activated by multiple within-class features (e.g., arms and torsos for bodies). If the out-of-category stimulus to be visualized contains only one of the features, this should be reflected by the method. The weighting for feature $i$ in the attribution map, as given in equation (5), is $a_{\text{in}}^{(i)} a_{\text{out}}^{(i)} (w^{(i)})^2$, meaning that feature $i$ will be ignored if it is absent in either of the images, even if the neuron is tuned to it. Conversely, if the neuron is tuned to multiple features and these features are apparent in both images, they will all be highlighted. 
Therefore, we argue that the proposed method is able to deal with cases in which a neuron is tuned to a variety of different features. --- Rebuttal Comment 1.1: Title: Responses to Authors' rebuttal Comment: I would like to thank the authors for their interesting discussion of the questions I raised. I do think this is a valuable contribution.
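To make the weighting argument in the rebuttal concrete, here is a tiny numerical sketch of the logic behind equations (2) and (5), assuming the similarity $s$ takes the form of a readout-weighted dot product of the two activation vectors; all activation values and weights are made up for illustration.

```python
# Hypothetical unit activations for a within-category and an out-of-category image,
# plus linear readout weights (all values are illustrative, not real data)
a_in = [0.9, 0.0, 0.5, 0.2]
a_out = [0.8, 0.7, 0.0, 0.3]
w = [1.0, 0.5, 0.5, 2.0]

# Similarity as the readout-weighted dot product (assumed form of Eq. (2))
s = sum(wi ** 2 * ai * aj for wi, ai, aj in zip(w, a_in, a_out))

# Per-feature attribution weights, as in Eq. (5): a_in^(i) * a_out^(i) * (w^(i))^2
contrib = [wi ** 2 * ai * aj for wi, ai, aj in zip(w, a_in, a_out)]

# A feature absent in either image receives zero weight, even if the unit is tuned to it
assert contrib[1] == 0.0 and contrib[2] == 0.0
# Under this assumed form, the attribution weights decompose the similarity exactly,
# consistent with the bound relating the attribution map's norm to s (Eq. (7))
assert abs(sum(contrib) - s) < 1e-12
```

If the two images drove the unit through disjoint features, every product $a_{\text{in}}^{(i)} a_{\text{out}}^{(i)}$ would vanish, giving $s \approx 0$ and a near-empty attribution map, which is exactly the failure-detection behavior described above.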
Summary: A deep-learning-based approach is proposed in the paper "Parallel Backpropagation for Shared-Feature Visualization" to visualize shared visual features in high-level visual brain regions, which are typically thought to respond selectively to particular categories like bodies or faces. Despite this selectivity, these neurons occasionally respond to stimuli outside of their category, possibly due to shared visual characteristics. By backpropagating activations to the pixel level and enhancing shared features while attenuating non-shared ones, the authors present a strategy for highlighting these shared features using a deep neural network to model neural responses and identify relevant features. Strengths: The text is very well structured, and the questions and methodology are written in a very clear and detailed manner. This work follows an innovative approach which presents a novel and exciting way to use DL in order to gain insights into single-neuron selectivity across the visual hierarchy. The method's steps—which include parallel backpropagation, determining neuron-specific picture similarity, and training a linear readout on top of a CNN—are fully explained. Finally, this work includes an original experimental component that differentiates it from a lot of other applications of DL on already existing datasets. A strategy for visualizing shared elements that drive brain reactions to out-of-category stimuli is presented in this research, which shows promise. The method has the potential to further our understanding of how the brain processes visual information, even though there is still room for improvement. Weaknesses: In this work, the method is heavily dependent on the deep neural network used, and how effectively it can fit the neural data. Fortunately, the cases where the tuning properties are not captured by the model can be identified, as mentioned at the end of the results section.
The ability to generalize to other categories such as face patches was not tested and remains to be tested. The technique is predicated on the idea that within-category and out-of-category photos share a significant number of attributes. It is not clear what we expect the model to do, and how useful the outcomes will be, in situations where there is little overlap. Finally, while single-neuron tuning specificity is the main topic of this work, it remains to be seen how a similar approach using population representations would compare. Technical Quality: 4 Clarity: 4 Questions for Authors: Subsequent research ought to concentrate on enhancing the underlying models and attribution methods as well as verifying the approach in various scenarios, such as with different categories and in a closed-loop manner. The authors already acknowledge these directions. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The authors have a limitation section that addresses their work's assumptions. Flag For Ethics Review: ['No ethics review needed.'] Rating: 10 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. We highly appreciate your confidence in our work. We agree that the mentioned limitations are largely addressed in the paper, but for the sake of completeness we post some remarks here. > The ability to generalize to other categories such as face patches was not tested and remains to be tested. We agree that this is a highly interesting next step and are planning to apply this method to study tuning properties of both body and face patches in more depth in future work. > The technique is predicated on the idea that within-category and out-of-category photos share a significant number of attributes. It is not clear what we expect the model to do and how useful the outcomes will be in situations where there is little overlap. If a neuron is driven by a feature set $A$ for category stimuli and a disjoint feature set $B$ for out-of-category stimuli, we agree that the proposed method would be unable to generate meaningful explanations, since the tuning for features $B$ simply cannot be explained (solely) by studying the tuning for features $A$. However, in that case the CNN activation vectors $a_{\text{in}}$ and $a_{\text{out}}$ should have distinct activation patterns, leading to a low image similarity $s(\cdot,\cdot)$ (equation (2)), which then implies an attribution map with low intensity (equation (7)). This allows the user to detect such cases. Furthermore, while we certainly don't claim that such neurons don't exist, the majority of neurons we recorded coded for similar features across categories (subsection 5.1). > Finally, while single-neuron tuning specificity is the main topic of this work, it remains to be seen how a similar approach using population representations would compare. This would indeed be an interesting comparison.
At the population level, we hypothesize that most if not all features of the within-category image activate the population, due to variations in the tuning properties of single neurons. For an out-of-category stimulus, the method should then highlight the subset of features present in that image, since only those features will be weighted according to equation (5). --- Rebuttal Comment 1.1: Comment: I would like to thank the authors for the response to my comments.
Summary: The paper shows that a hypothesis for the brain (category-selective neurons are actually selective to generic lower-level features that are present in those categories, not necessarily features specific to those categories) can be reproduced and visualized in a ResNet trained to predict neural activity. If I understand correctly, it is capable of generating hypothetical pairs of images with similar features (as in A.1.1). One could then test if an actual neuron responds similarly to the pair as the model predicts. Or find a real neuron, record, then based on responses, use the model to get a stream of stimuli that should elicit similar responses (because of similar low-level features) and show it to the real neuron to see if it responds as we expect. Strengths: This is a clear and straightforward method that contributes to tools available for analyzing what drives individual units in regression-to-neural-data deep learning models. Comprehensive and clear background/related work; very clear writing overall and description of methods on high and low levels. I got an intuition pretty quickly for what the method was even before the equations. Weaknesses: The results are interesting and intriguing to look at (Figure 4), however at the end of the day it comes across as a bit anecdotal as opposed to giving us insight into general principles of shape or object representation. This is not a strong criticism though - I accept it is a good starting point, and still a worthwhile contribution. Also, I note that all the objects are presented on blank backgrounds. I wonder how things would change when the objects or bodies are presented in their natural context within a visual scene? other comments: 111: Maybe this is standard terminology I'm not familiar with, but "readout vector" confused me at first because it makes it sound like it's logits or something? But it's just the weights from the latent representation to a single output unit. 
Line 131 more clearly calls it "learned weight vector" 145: should be a_2=f(x_2)? It would be interesting to see more mathematical/geometric analysis of the shared features (beyond stubby vs spiky) in the future, e.g. joints at a specific angle, combinations of contours in a certain way, etc. Technical Quality: 4 Clarity: 4 Questions for Authors: see above Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: see above Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for reviewing the manuscript, and for your positive feedback. > The results are interesting and intriguing to look at (Figure 4), however at the end of the day it comes across as a bit anecdotal as opposed to giving us insight into general principles of shape or object representation. This is not a strong criticism though - I accept it is a good starting point, and still a worthwhile contribution. We understand this criticism, as we steered the focus of the manuscript towards introducing the method in a clear and class-agnostic fashion to allow others to easily apply it to their own recordings of category-selective neurons. We acknowledge that further work should attempt to get a more systematic overview of the shared coding properties. > Also, I note that all the objects are presented on blank backgrounds. I wonder how things would change when the objects or bodies are presented in their natural context within a visual scene? We decided on neutral backgrounds to allow for good control of low-level image features. Further, we wanted the model to pick up on as few spurious image statistics as possible to enable it to generalize across categories. Since it seems like similarity between highly activating bodies and objects is primarily driven by local features / parts, we would now hypothesize that the tuning properties would be preserved when presenting stimuli as parts of naturalistic scenes. We agree that it would be worthwhile to test this hypothesis. > Maybe this is standard terminology I'm not familiar with, but "readout vector" confused me at first because it makes it sound like it's logits or something? But it's just the weights from the latent representation to a single output unit. Line 131 more clearly calls it "learned weight vector" 145: should be a_2=f(x_2)? Thank you for pointing out the unclear language - we have adjusted these parts in the text.
> It would be interesting to see more mathematical/geometric analysis of the shared features (beyond stubby vs spiky) in the future, e.g. joints at a specific angle, combinations of contours in a certain way, etc. We agree that it would be of high interest to find quantitative feature descriptors to go along with qualitative visualizations. We are concurrently working on further ways of characterizing the features driving these neurons. --- Rebuttal Comment 1.1: Title: Thanks for your response. Comment: Good paper, I have nothing further to add.
Summary: The authors proposed a deep-learning-based method to visualize shared features in neurons that are selective to specific categories, such as faces or bodies, when they respond to out-of-category stimuli. The method identifies visual features driving the selectivity of neurons by modeling responses to images based on latent activations of a deep neural network. The paper highlights the application of this method to body-selective regions in the macaque IT cortex, demonstrating that neurons encode overlapping visual features for bodies and objects. This approach provides insights into why certain non-body objects activate body-selective neurons and offers a more fine-grained understanding of neural responses. Strengths: The paper is well-structured, with a clear abstract, introduction, methodology, and results sections. While the proposed method is based on well-established deep learning techniques, specifically leveraging latent activations of a deep neural network to model neuron responses, the application to novel recordings from body-selective regions in macaque IT cortex demonstrates its practical utility and empirical soundness. I think the primary contribution lies in providing a tool that allows for a more fine-grained investigation of neuron responses in visual neuroscience. By revealing why certain non-body objects activate body-selective neurons, the paper contributes valuable insights to the understanding of neural selectivity and visual processing. Weaknesses: The paper primarily applies existing deep learning techniques rather than introducing new machine learning algorithms or models. This might be seen as a limitation for those expecting significant advancements in machine learning methods, especially readers from NeurIPS. Technical Quality: 3 Clarity: 3 Questions for Authors: Can the authors clarify the novel contributions of the proposed method in comparison to existing visualization techniques?
How does the approach provide unique insights that are not achievable with current methods? How generalizable is the method to other types of neurons or different brain regions beyond body-selective areas in the macaque IT cortex? Can the proposed approach be adapted to study other semantic categories or species? What are the broader implications of these findings for the field of visual neuroscience? We knew from previous studies that out-of-category stimuli can also activate neurons coding for in-category features. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have addressed the limitations. Flag For Ethics Review: ['Ethics review needed: Data privacy, copyright, and consent'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and comments. We hope to be able to increase your confidence in recommending the manuscript for publication. > The paper primarily applies existing deep learning techniques rather than introducing new machine learning algorithms or models. This might be seen as a limitation for those expecting significant advancements in machine learning methods, especially readers from NeurIPS. It is correct that we do not introduce new models or learning algorithms. Our contribution is more in line with efforts towards explainability, as we propose a novel method to study the behaviour of a trained model with fixed weights. We do believe that explainability methods can be of interest to a relatively broad community of people working on vision. Further, we argue that applications to neuroscience are of special importance to the community at NeurIPS, in order to foster a relationship between the two related fields. > Can the authors clarify the novel contributions of the proposed method in comparison to existing visualization techniques? Our method can be understood as an approach to turn any (single-image) attribution method into a (multi-image) similarity-attribution method by reweighting feature-wise attribution maps. In doing so, it is completely agnostic to the underlying visualization technique used to compute these feature-wise attributions. Previous work on similarity attribution methods (see related work) focuses on output layers to visualize global image similarity, while we are interested in local features that drive responses of neurons in the brain. Therefore, our approach is applicable to (functions of) hidden units. > How does the approach provide unique insights that are not achievable with current methods? We are interested in why an out-of-category (ooc.) stimulus $x_{\text{out}}$ activates an otherwise category-selective neuron.
One could use an existing visualization technique to study the neuron's preference for $x_{\text{out}}$. This would provide an attribution map over the image, highlighting the visual features driving the model neuron's response. However, it does not yield insight into *why* the image would activate a category-selective neuron, specifically. We argue here, in line with previous work, that neural preference for these ooc. images is due to shared features with the preferred category. Therefore, a proper visual explanation should also include how the driving features in $x_{\text{out}}$ occur among images of the preferred class, to understand why preference for these features has developed. The proposed method therefore explains neural responses to an ooc. image by providing a strongly driving category image, highlighting the preferred features in the category image, and then highlighting the corresponding features in the ooc. stimulus. > How generalizable is the method to other types of neurons or different brain regions beyond body-selective areas in the macaque IT cortex? Can the proposed approach be adapted to study other semantic categories or species? Yes, the method is completely category-agnostic and therefore can be used for any semantically selective brain region, as well as selective units in artificial networks. Future work could apply the method to, e.g., face cells as another category in more depth; as a preliminary example, consider Fig. 4 panel (4,1), where we visualize an object driving a face-preferring neuron. For details regarding category agnosticism, we kindly refer to section 3 of the manuscript. > What are the broader implications of these findings for the field of visual neuroscience? We knew from previous studies that out-of-category stimuli can also activate neurons coding for in-category features. Regarding broad implications, we first provide additional evidence for the hypothesis of category-agnostic tuning properties in macaque IT cortex.
There is still conflicting evidence regarding this question [1], necessitating further work. Furthermore, given the finding that out-of-category stimuli can activate category-selective neurons, the interesting question becomes why this is the case. Our work supports the notion that such stimuli exhibit features which are also common in images of instances from the preferred category. This is, to the best of our knowledge, the first attempt at visually characterizing these features. Our findings suggest that the objects activating body-selective neurons have local parts that are visually similar to local body parts. We hypothesize that this finding transfers to other category-selective areas. For example, [2] found that simple shape descriptors like 'roundness' are not enough to explain face cells' responses to objects. In our experiments, we also found cells that fire in response to the head; their best-driving objects are not round, but our method finds parts which resemble a head (Fig. 4 (4,1), (4,2), (5,3)). This yields insight into *why* these objects might activate the neurons, beyond descriptors like 'roundness'. [1] Shi Y, Bi D, Hesse JK, Lanfranchi FF, Chen S, Tsao DY. Rapid, concerted switching of the neural code in inferotemporal cortex. bioRxiv [Preprint]. [2] Kasper Vinken et al., The neural code for "face cells" is not face-specific. Sci. Adv. 9. --- Rebuttal Comment 1.1: Title: Thanks for the response Comment: I went through the answers from the authors and am happy with them. However, I am still concerned about the technical novelty after going through the paper again. I will keep my score.
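To make the reweighting idea from this rebuttal thread concrete, here is a generic sketch (hypothetical arrays and shapes; any single-image attribution method could supply the per-feature maps, and this is not the authors' released code):

```python
import numpy as np

def similarity_attribution(feature_maps, a_in, a_out, w):
    """Reweight per-feature attribution maps into a similarity-attribution map.

    feature_maps: array (F, H, W), one spatial attribution map per latent
        feature, produced by any single-image attribution method (assumption).
    a_in, a_out: activation vectors (F,) for the category / ooc. image.
    w: learned readout weights (F,) of the model neuron.
    """
    weights = a_in * a_out * w**2                        # cf. equation (5)
    # Weighted sum over the feature axis yields one (H, W) map.
    return np.tensordot(weights, feature_maps, axes=1)

rng = np.random.default_rng(0)
F, H, W = 8, 4, 4
maps = rng.random((F, H, W))
a_in, a_out, w = rng.random(F), rng.random(F), rng.random(F)
attr = similarity_attribution(maps, a_in, a_out, w)
print(attr.shape)  # (4, 4)
```

Note the design consequence discussed above: if either activation vector is zero on a feature, that feature contributes nothing to the final map, regardless of the neuron's tuning to it.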
Rebuttal 1: Rebuttal: We would like to thank all reviewers for their time and their valuable insights. We were glad to see that all reviewers found the presentation of the work clear and understandable, which is also reflected by the fact that all summaries clearly capture the main points of the paper. We also did not identify any factual errors in the reviews. Further, we appreciate that all reviewers suggest that the manuscript should be accepted for publication, albeit with varying confidence. We hope that we were now able to clarify remaining questions, and properly addressed the raised issues. Even though there was partial overlap between comments, we answered all of them individually to make the rebuttals for each reviewer self-contained. If some of our answers were not to the point and you would like to discuss further, we are happy to do so over the next couple of weeks.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Ordering-Based Causal Discovery for Linear and Nonlinear Relations
Accept (poster)
Summary: This paper studies the causal discovery problem in mixed functional relations data, where both linear and non-linear relationships exist in the causal graph. The author presents a Jacobian score-based method (essentially a score-matching method) to identify leaf nodes and thereby recover the causal order. The experimental results demonstrate the efficiency of the proposed methods. Strengths: 1. The paper is clearly written and well-organized. 2. The setting of mixed functional relations data is interesting and may be important for real-world scenarios. 3. The author proposes a Jacobian score-based method, which is an extension of the score-matching method for non-linear Additive Noise Models (ANM). Weaknesses: 1. The non-decreasing variance of noises assumption is too strong and restrictive. Typically, in ANM, the noise term is assumed to be mutually independent. 2. It appears that the primary difference between the score-matching method for ANM and the proposed method is the introduction of Assumption 1. 3. If I use an independent residuals-based method, it seems to work in your setting. So, what are the advantages of the proposed method? For example, is the proposed method capable of handling large-scale structures? If so, the experimental results should demonstrate this. Technical Quality: 3 Clarity: 3 Questions for Authors: See Weaknesses. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: NAN Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Your insightful advice is greatly valued, as it can contribute to the progress of our work. We hope that the answers provided below have resolved your inquiries. **Q4.1:** non-decreasing variance too strong. **A4.1:** Before answering this question about assumptions, we want to highlight some facts about ANM. For an ANM $y=f(x)+\epsilon$, we have to make some assumptions on $f$ or $\epsilon$ due to the problem of the backward model (see Prop. 23 in ref. [9]). This paper does not make any additional assumptions on $f$ (linear or nonlinear); therefore, some assumptions on $\epsilon$ are inevitable. To the best of our knowledge, ours is the weakest assumption that works well under both linear and nonlinear ANM. Then, it is necessary to emphasise that CaPS works under (i) **or** (ii) in Assumption 1. So, our assumption is **weaker** than non-decreasing variance. CaPS is able to work well using condition (ii) if non-decreasing variance does not hold. For example, considering a variance-unsortable scenario with $\sigma^2\sim U(1,10)$ and causal effect greater than 0.9, CaPS can also work well because the sum of parent scores is greater than the given lower bound in condition (ii). In other words, non-decreasing variance is just one of the scenarios in which CaPS works well, and CaPS can also handle many variance-unsortable scenarios. **Q4.2:** The primary difference between SCORE and CaPS is Assumption 1. **A4.2:** We understand your concerns, but SCORE and CaPS are fundamentally different for the following reasons: (1) More generalized scenarios. CaPS aims to handle both linear and nonlinear, and possibly mixed, relations, while SCORE only handles nonlinear relations. CaPS is the first ordering-based method to deal with this scenario. (2) Different theoretical ideas.
As mentioned in A4.1, SCORE is derived through the properties of nonlinear $f$, while CaPS derives Theorem 1 from the properties of $\epsilon$, which is a totally different idea from a theoretical perspective (see details in App. A.2). (3) New concept for identifiability and post-processing. This paper proposes the new concept of "parent score", which gives a new lower bound for identifiability with causal effect (see Corollary 1). This concept can be used to accelerate the pruning process (see Fig. 8) and correct inaccurate predictions in the pruning step (see Fig. 4 and 7). **Q4.3:** Would an independent residuals-based method work in this setting? What are the advantages of CaPS (e.g. handling large-scale structures)? **A4.3:** Thanks for the kind reminder to compare with conditional independence (CI) test-based methods. About the independent residuals-based method, we're not sure exactly which method you mean, but we guess it's ReCIT [1, 2]. Since the source code of ReCIT is not available for comparison, we adopt KCIT [3], the CI-based baseline closest to ReCIT. Yes, as you mentioned, although these two CI test-based methods do not explicitly rely on linear or nonlinear assumptions, they have high computational complexity. They need to search for the solution on a fully connected DAG, while ordering-based methods only need a smaller search space given the topological ordering. This makes CI test-based methods usually exponential in complexity and hard to apply to large-scale structures. In Table 5, we show the performance of CaPS vs KCIT, and there are two conclusions we can learn from these experimental results. (1) Although KCIT does not significantly degrade in linear and nonlinear performance, it consistently underperforms CaPS in all scenarios with different linear rates. (2) Regarding the capability of handling large-scale structures, CaPS is consistently better than KCIT, with significantly better metrics and faster training speeds (e.g.
18 seconds vs 2 hours in Table 5). In addition, even compared to ordering-based approaches, CaPS is better able to handle large-scale structures (see Fig. 3, App. C.5 and Fig. 8). *Since KCIT and ReCIT are compatible with CaPS, KCIT and ReCIT can use CaPS to largely reduce their search space. This comparison further shows the advantages of CaPS, and we will add this discussion of ReCIT, KCIT and CaPS to our related work and experiments in the latest version. [1] Zhang H et al. (2018) Measuring conditional independence by independent residuals: theoretical results and application in causal discovery. [2] Zhang H et al. (2019) Measuring conditional independence by independent residuals for causal discovery. [3] Zhang K et al. (2011) Kernel-based Conditional Independence Test and Application in Causal Discovery. --- Rebuttal Comment 1.1: Title: Thank you for your response Comment: Regarding the independent residuals-based method, I am referring to the traditional nonlinear Additive Noise Model (ANM), where the causal direction is identified by testing the independence between the residual and its parent variables. Additionally, I am satisfied with the responses to the other questions, so I am raising my score accordingly. By the way, it seems that during the author-rebuttal stage, it is not allowed to use the 'official comment' button. For example, the author can use the general rebuttal button to submit supplementary experiments. I overlooked this issue here. --- Reply to Comment 1.1.1: Title: Thanks for your feedback Comment: Thank you for revising your score and for your thoughtful review. Sorry for the inappropriate use of the 'official comment', and thank you again for overlooking this issue. We greatly appreciate your recognition of our efforts. About the independent residuals-based method, we get your point with your additional interpretation. This method is often used to identify the direction of causal sub-structures, e.g.
chain, fork and collider, which is similar to CI test-based methods. As you already pointed out, although this method may not significantly degrade in linear and nonlinear performance, it is not capable of handling large-scale structures. As paper [4] states, and as our additional experiments in Table 4 show, the daunting cost of checking every candidate sub-structure is intolerable. Thus, CaPS will consistently handle large-scale structures better than this method. And, as stated in the rebuttal above, the complexity of this method can be greatly reduced by CaPS, since this method is compatible with CaPS. [4] He Y et al. (2021) DARING: Differentiable Causal Discovery with Residual Independence --- Rebuttal 2: Title: Table 4: comparison of CaPS and CI-based methods Comment:

| Dataset | Linear rate | KCIT SHD | KCIT SID | KCIT F1 | CaPS SHD | CaPS SID | CaPS F1 |
|---------|-------------|----------|----------|---------|----------|----------|---------|
| SynER1 (d=10) | 0 | 4.2±0.4 | 18.8±9.8 | 0.617±0.050 | 0.6±0.8 | 4.2±7.9 | 0.958±0.061 |
| | 0.25 | 4.8±0.4 | 19.6±8.1 | 0.541±0.126 | 0.8±0.7 | 5.6±7.6 | 0.944±0.057 |
| | 0.5 | 4.8±0.4 | 16.2±7.9 | 0.573±0.070 | 0.6±0.8 | 1.6±2.7 | 0.961±0.055 |
| | 0.75 | 5.4±2.1 | 20.0±13.0 | 0.546±0.166 | 0.8±1.1 | 3.0±4.2 | 0.924±0.098 |
| | 1 | 4.8±1.9 | 17.6±12.2 | 0.588±0.146 | 1.2±1.1 | 3.6±4.0 | 0.901±0.090 |
| | Training time (s) | 308.2±191.3 | | | 8.02±1.08 | | |
| SynER1 (d=20) | 0 | 13.0±4.14 | 102.6±49.9 | 0.444±0.171 | 0.8±0.40 | 3.4±2.87 | 0.981±0.010 |
| | 0.25 | 11.4±2.3 | 91.0±38.6 | 0.528±0.137 | 1.2±1.17 | 7.6±10.25 | 0.960±0.039 |
| | 0.5 | 10.8±2.5 | 75.4±20.7 | 0.596±0.103 | 4.2±3.05 | 17.0±11.48 | 0.949±0.046 |
| | 0.75 | 12.4±2.1 | 81.4±15.2 | 0.538±0.112 | 1.6±1.62 | 11.8±10.91 | 0.949±0.046 |
| | 1 | 11.0±2.5 | 69.2±16.3 | 0.593±0.137 | 2.0±2.28 | 12.4±12.14 | 0.937±0.060 |
| | Training time (s) | 6551.6±1570.6 | | | 15.84±3.36 | | |
| SynER1 (d=50) | 0 | 40 | 549 | 0.37 | 7.2±4.44 | 56.6±49.01 | 0.914±0.062 |
| | 0.25 | 39 | 549 | 0.377 | 11.4±1.85 | 68.4±18.63 | 0.857±0.033 |
| | 0.5 | 40 | 550 | 0.367 | 11.4±4.49 | 72.8±47.52 | 0.865±0.054 |
| | 0.75 | 26 | 325 | 0.62 | 8.2±3.06 | 35.8±18.92 | 0.900±0.039 |
| | 1 | 40 | 422 | 0.444 | 6.2±2.85 | 36.8±21.87 | 0.923±0.035 |
| | Training time (s) | $\geq 12$ h | | | 319.85±98.82 | | |

Due to the long training time of KCIT at 50 nodes, we only report its performance with one trial in Table 4. Other results are reported with 5 trials.
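For readers who want to reproduce the flavor of the mixed-relation setting discussed in this thread, a minimal data generator might look as follows (our own toy sketch with an assumed three-node chain and hand-picked coefficients, not the CaPS benchmark code):

```python
import numpy as np

def sample_mixed_anm(n=1000, seed=0):
    """Toy 3-node chain x1 -> x2 -> x3 with one linear and one nonlinear
    mechanism; Gaussian noise variances (1.0, 1.44, 2.25) are chosen
    non-decreasing along the causal order, matching condition (i) of
    Assumption 1 in the discussion above."""
    rng = np.random.default_rng(seed)
    e1 = rng.normal(0.0, 1.0, n)
    e2 = rng.normal(0.0, 1.2, n)
    e3 = rng.normal(0.0, 1.5, n)
    x1 = e1
    x2 = 0.9 * x1 + e2        # linear relation
    x3 = np.tanh(x2) + e3     # nonlinear relation
    return np.stack([x1, x2, x3], axis=1)

X = sample_mixed_anm()
print(X.shape, X.var(axis=0))  # marginal variances grow along the order here
```

An ordering-based method run on `X` should recover the order (x1, x2, x3) before any pruning step; with these coefficients the marginal variances happen to be sortable as well.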
Summary: This paper proposes an ordering-based causal discovery method for the case where the underlying causal model has both linear and nonlinear causal relationships. Starting with a method to iteratively find leaf nodes, this paper proposes to use the parent score for better pruning. Results show that the proposed method outperforms baselines. Strengths: 1. Paper is written well and easy to understand. 2. Theoretical motivations are clearly explained and the proofs are adequately provided. 3. Experiments are extensive and cover all theoretical aspects. Weaknesses: 1. Results are not great on real-world datasets. 2. Topological divergence is a popular metric for evaluating the topological order. Very few results on this metric are presented in the supplementary material. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see weaknesses section. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive feedback, as it can help us improve our work. We trust that the following answers have clarified your questions. **Q3.1:** Results are not great on real-world datasets. **A3.1:** Yes, CaPS achieves the best performance on the Sachs dataset but only the second best on Syntren. We have analyzed the source of the discrepancy in Appendix C.1; the pattern of Syntren is very special, containing many star networks with short topological dependency paths. This pattern is not friendly to ordering-based methods (see details in Fig. 5 and lines 524-528). Despite this unfavourable dataset, CaPS achieves the second best performance among all baselines and the best performance among all ordering-based methods. We believe, in a sense, this fact reinforces the effectiveness of CaPS. **Q3.2:** Few results of topological divergence **A3.2:** We already show the order divergence of SynER1 and SynER4 in App. C.6. To further address your concerns, we show more results on this metric for different datasets in Table 4. The conclusion is that CaPS consistently achieves the best order divergence on datasets with different DAG types, sparsity, scale and sample sizes. We will update this in the latest version of our manuscript. --- Rebuttal 2: Title: Table 4: more results of order divergence Comment: **Table 4. 
More results of order divergence.**

| Dataset | linear_rate | sortnregress | SCORE | DiffAN | CAM | CaPS |
|---------|-------------|--------------|-------|--------|-----|------|
| SynSF1 | 0 | 1.4±1.4 | **0.0±0.0** | 3.2±2.7 | 0.2±0.4 | 0.2±0.4 |
| | 0.25 | 1.6±1.3 | **0.2±0.4** | 3.0±2.6 | 1.4±0.4 | **0.2±0.4** |
| | 0.5 | 1.6±1.0 | 1.2±1.1 | 3.0±2.0 | 1.2±0.7 | **0.0±0.0** |
| | 0.75 | 2.0±1.6 | 1.4±1.3 | 2.6±1.6 | 2.4±1.8 | **0.2±0.4** |
| | 1 | 2.2±1.3 | 2.2±2.2 | 2.6±1.4 | 4.4±1.9 | **0.2±0.4** |
| SynSF4 | 0 | 6.6±1.6 | 0.6±1.2 | 4.4±2.6 | 2.0±1.6 | **0.4±0.8** |
| | 0.25 | 4.8±1.9 | **0.6±1.2** | 4.4±1.8 | 3.8±4.0 | 1.0±1.2 |
| | 0.5 | 6.2±1.9 | 2.2±1.4 | 4.6±1.6 | 6.0±2.8 | **1.4±1.0** |
| | 0.75 | 4.2±1.7 | 2.8±1.4 | 4.6±1.7 | 9.6±2.8 | **1.8±1.1** |
| | 1 | 3.2±0.9 | 2.8±1.1 | 7.6±2.5 | 13.0±3.5 | **1.2±1.1** |
| SynER1 (d=20) | 0 | 3.6±2.0 | 0.2±0.4 | 2.4±1.4 | 0.6±0.4 | **0.0±0.0** |
| | 0.25 | 3.2±1.4 | **0.2±0.4** | 3.2±1.1 | 2.4±1.3 | 0.6±0.4 |
| | 0.5 | 3.2±0.7 | 1.2±1.6 | 5.0±1.4 | 2.6±0.8 | **1.2±1.1** |
| | 0.75 | 2.6±0.4 | 1.2±0.7 | 3.4±1.3 | 6.4±1.4 | **0.8±0.7** |
| | 1 | 2.4±1.0 | 2.0±0.6 | 3.8±1.1 | 7.8±1.9 | **1.0±0.8** |
| SynER1 (n=1000) | 0 | 1.6±0.4 | **0.0±0.0** | 1.8±1.1 | **0.0±0.0** | **0.0±0.0** |
| | 0.25 | 1.0±0.6 | 0.2±0.4 | 2.8±1.1 | 1.0±0.6 | **0.0±0.0** |
| | 0.5 | 1.0±1.0 | 1.4±1.0 | 2.0±0.8 | 2.2±1.7 | **0.4±0.4** |
| | 0.75 | 1.4±1.4 | 1.0±0.8 | 2.8±1.9 | 1.2±0.7 | **0.6±0.8** |
| | 1 | 1.4±1.4 | 1.6±0.8 | 2.6±1.0 | 1.8±1.1 | **0.6±0.7** |

--- Rebuttal Comment 2.1: Comment: I thank the authors for their response. I've read their response and I will stay with my score. --- Reply to Comment 2.1.1: Title: Thanks for your feedback Comment: Thank you very much for your valuable suggestions and insightful comments on our manuscript. We would like to express our sincere gratitude for your recognition of our efforts. If you have any further questions, please feel free to reach out.
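For readers unfamiliar with the metric in Table 4: order divergence (topological divergence) counts the true edges that a predicted causal ordering makes unrecoverable, i.e. edges i -> j where j precedes i in the ordering. A minimal sketch (assuming the standard definition from the ordering-based literature, not the authors' exact implementation):

```python
import numpy as np

def order_divergence(order, adj):
    """Count edges i -> j in the true DAG (adj[i, j] == 1) that the
    predicted ordering places in the wrong direction (j before i)."""
    pos = {node: k for k, node in enumerate(order)}
    d = 0
    for i in range(adj.shape[0]):
        for j in range(adj.shape[1]):
            if adj[i, j] and pos[j] < pos[i]:
                d += 1
    return d

# True DAG: the chain 0 -> 1 -> 2
A = np.array([[0, 1, 0],
              [0, 0, 1],
              [0, 0, 0]])
print(order_divergence([0, 1, 2], A))  # 0: ordering consistent with the DAG
print(order_divergence([2, 1, 0], A))  # 2: both edges violated
```

A divergence of 0 means a perfect ordering; any remaining error in the final graph then comes from the pruning step alone, which is why the metric isolates the quality of the ordering stage.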
Summary: The authors propose an ordering-based causal discovery algorithm designed to handle both linear and nonlinear causal relations in an SEM. In contrast to existing methods that assume purely linear or nonlinear relations, CaPS introduces a unified criterion for topological ordering and a new "parent score" to quantify the average causal effect, which aids in pruning and correcting predictions. Experimental results show that CaPS outperforms several state-of-the-art methods on synthetic data with mixed linear and nonlinear relations and demonstrates competitive performance on real-world data. Strengths: * CaPS provides a new approach that can handle both linear and nonlinear causal relationships, addressing a relevant gap in current causal discovery methods. * The introduction of the parent score is interesting and provides a quantitative measure of causal strength, which improves the pruning process and prediction accuracy. * The authors present a new criterion for distinguishing leaf nodes using the expectation of the Hessian of the data log-likelihood and provide sufficient conditions for the identifiability of the causal graph, inspired by SCORE and LiSTEN. Weaknesses: * All noises are assumed to be Gaussian. * The identifiability conditions rely on assumptions such as non-decreasing variance of noises, which are hard to satisfy in practical scenarios. * Some more recent methods are not compared against. Technical Quality: 4 Clarity: 3 Questions for Authors: * Looking at the derivations, the approach seems difficult to generalize to more general noises. What are your thoughts on this? * Both conditions in Theorem 1 seem impossible to verify; is this sentiment correct? * Did you assume equal variances in the experiments? I think experimenting on settings where noise variances are random might make sense in this case. * I think a couple of more recent methods such as DAGMA (Bello et al. 2022) and TOPO (Deng et al. 2023) are known to outperform both NOTEARS and GOLEM.
I think it could be worth comparing against those methods. Bello et al. (2022), "DAGMA: Learning DAGs via M-matrices and a Log-Determinant Acyclicity Characterization". Deng et al. (2023), "Optimizing NOTEARS objectives via topological swaps" * Line 52: "creterion" should be "criterion." Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: There are some limitations not "explicitly" stated such as assumptions on causal sufficiency and Gaussianity of noises. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable suggestions, as they help us enhance our work. We hope that the responses below address your concerns. **Q2.1:** Non-decreasing variance of noise. **A2.1:** The first thing we need to emphasise is that CaPS works under (i) **or** (ii) in Assumption 1. So, our assumption is **weaker** than non-decreasing variance. CaPS can also work well using condition (ii) when non-decreasing variance does not hold. For example, in a variance-unsortable scenario with $\sigma^2\sim U(1,10)$ and causal effects greater than 0.9, CaPS still works well because the sum of parent scores exceeds the given lower bound in condition (ii). In other words, non-decreasing variance is just one of the scenarios in which CaPS works well, and CaPS can also handle many variance-unsortable scenarios. **Q2.2:** Gaussian assumption; the derivations seem difficult to generalize to more general noise. **A2.2:** Before answering this question about assumptions, we want to highlight some facts about ANMs. Without interventional data, all ANM-based models ($y=f(x)+\epsilon$) have to make some assumptions on $f$ or $\epsilon$ due to the existence of backward models (see Prop. 23 in ref. [9]). Some papers make a purely linear or nonlinear assumption on $f$, e.g., SCORE and LISTEN. This paper does not make any additional assumption on $f$ (linear or nonlinear); therefore, some assumptions on $\epsilon$ are inevitable. To the best of our knowledge, ours is the weakest assumption that works well under both linear and nonlinear ANMs. Theoretically, the Gaussian assumption is necessary; it is used in Eq. 10 to prove Theorem 1. However, experimentally, the results in App. C.7 show that CaPS performs consistently well under non-Gaussian noise (Gumbel and Laplace), which demonstrates the potential of CaPS to generalize to other noises.
We are still working to find a more elegant way to relax $\epsilon$ to non-Gaussian noise. However, we believe that CaPS, as the first ordering-based method capable of handling both linear and nonlinear relations, contributes sufficiently to the field. **Q2.3:** Both conditions in Theorem 1 seem impossible to verify. **A2.3:** We understand your concerns, and this sentiment is right, but most such conditions are unverifiable without the ground-truth SEM. On synthetic data, we can easily verify our conditions since we have the real parameters of $f$ and $\epsilon$. In real-world applications, as shown in Fig. 1, we cannot even verify whether $f$ is linear or nonlinear since we do not have the ground-truth SEM. Thus, we need a method whose conditions are likely to be close to real-world patterns even though many conditions are usually unverifiable. CaPS solves this problem better than existing works for the following reasons: (1) CaPS works well in linear, nonlinear, and most plausibly mixed cases, which are widespread in real-world data. (2) CaPS gives a new lower bound for identifiable causal effects in Assumption 1(ii), which can be used to discover non-weak causal relations in real-world data. (3) Experimentally, CaPS is an outperforming ordering-based method across different types of synthetic datasets and real-world datasets. In particular, our experimental results show that CaPS can effectively support non-Gaussian noise (Gumbel and Laplace) in addition to the theoretically proven Gaussian case. **Q2.4:** Did you assume equal variances in the experiments? **A2.4:** We consider both equal-variance and unequal-variance scenarios in our manuscript. The results for unequal variances are given in App. C.7. Following the most popular settings in previous works, for each variable $x_i$, the noise is $\epsilon_i\sim N(0, \sigma_i^2)$, where the $\sigma_i^2$ are independently sampled uniformly from $U(0.4, 0.8)$.
Under this random-variance scenario, CaPS still achieves the best or competitive performance. **Q2.5:** Comparison with DAGMA and TOPO. **A2.5:** Thank you for sharing these two strong continuous-optimization baselines. With their open source code, we successfully implemented them in our experimental scenarios. Since this paper considers linear, nonlinear, and mixed scenarios, for a fair comparison we compare the performance of CaPS against their linear (-L) and nonlinear (-N) versions, respectively, in Table 3. All parameters and settings of these two baselines follow their original manuscripts or source code. Yes, as you mentioned, DAGMA and TOPO are stronger baselines than NOTEARS and GOLEM. We can draw two conclusions from Table 3: (1) Compared with DAGMA and TOPO, CaPS consistently achieves the best performance on both synthetic and real-world data with different sparsity and linear rates. (2) Similar to other baselines, DAGMA and TOPO also suffer significant performance losses when the linear/nonlinear assumption is mismatched. This comparison is important but does not affect any conclusions of this paper. We will include it in the latest version of our manuscript in the related work and experiments. --- Rebuttal 2: Title: Table 3: additional baselines DAGMA and TOPO Comment: **Table 3.
Additional baselines.**

| dataset | Linear rate | Metrics | DAGMA-L | DAGMA-N | TOPO-L | TOPO-N | CaPS |
|---------|-------------|---------|-------------|-------------|-------------|-------------|--------------|
| SynER1 | 0 | SHD | 6.0±1.4 | 4.0±1.7 | 6.8±1.4 | 22.6±11.8 | **0.6±0.8** |
| | | SID | 16.0±6.4 | 11.4±6.2 | 16.8±7.9 | 17.4±6.2 | **4.2±7.9** |
| | | F1 | 0.477±0.118 | 0.684±0.183 | 0.446±0.107 | 0.282±0.069 | **0.958±0.061** |
| | 0.25 | SHD | 5.2±2.0 | 4.6±1.8 | 5.6±1.4 | 23.2±12.1 | **0.8±0.7** |
| | | SID | 15.4±7.4 | 16.2±4.9 | 14.6±9.83 | 16.8±8.7 | **5.6±7.6** |
| | | F1 | 0.580±0.166 | 0.589±0.170 | 0.582±0.120 | 0.312±0.033 | **0.944±0.057** |
| | 0.5 | SHD | 4.2±2.3 | 3.4±2.2 | 3.6±2.3 | 18.4±14.1 | **0.6±0.8** |
| | | SID | 12.0±7.8 | 11.2±7.1 | 10.6±7.9 | 14.6±10.3 | **1.6±2.7** |
| | | F1 | 0.681±0.194 | 0.708±0.161 | 0.739±0.175 | 0.429±0.161 | **0.961±0.055** |
| | 0.75 | SHD | 3.2±1.9 | 3.0±2.2 | 3.4±1.8 | 8.8±4.3 | **0.8±1.1** |
| | | SID | 8.8±8.9 | 9.0±9.8 | 8.8±8.9 | 16.6±6.8 | **3.0±4.2** |
| | | F1 | 0.778±0.135 | 0.760±0.155 | 0.768±0.129 | 0.452±0.058 | **0.924±0.098** |
| | 1 | SHD | 2.4±1.4 | 3.4±2.05 | 2.4±1.4 | 8.0±4.8 | **1.2±1.1** |
| | | SID | 7.6±9.4 | 12.4±9.9 | 7.6±9.4 | 16.2±6.5 | **3.6±4.0** |
| | | F1 | 0.844±0.097 | 0.719±0.172 | 0.844±0.097 | 0.506±0.105 | **0.901±0.090** |
| SynER4 | 0 | SHD | 31.6±1.0 | 32.4±1.4 | 31.6±1.3 | 28.6±4.1 | **14.6±2.4** |
| | | SID | 67.6±3.9 | 75.8±7.1 | 69.8±5.1 | 70.2±10.6 | **26.2±2.7** |
| | | F1 | 0.138±0.050 | 0.087±0.024 | 0.129±0.020 | 0.298±0.194 | **0.728±0.040** |
| | 0.25 | SHD | 27.2±4.7 | 25.6±4.8 | 22.8±2.4 | 25.4±4.4 | **12.4±1.9** |
| | | SID | 60.0±7.7 | 60.6±16.0 | 57.2±6.9 | 65.8±7.4 | **26.8±3.5** |
| | | F1 | 0.313±0.185 | 0.354±0.214 | 0.476±0.108 | 0.470±0.133 | **0.763±0.046** |
| | 0.5 | SHD | 25.2±2.2 | 22.6±2.3 | 13.0±4.3 | 24.4±2.0 | **10.8±2.6** |
| | | SID | 61.8±8.9 | 58.6±6.8 | 37.6±9.6 | 72.0±4.6 | **27.2±10.2** |
| | | F1 | 0.398±0.110 | 0.482±0.098 | 0.755±0.088 | 0.440±0.055 | **0.791±0.059** |
| | 0.75 | SHD | 14.2±4.8 | 17.4±2.8 | 13.0±4.3 | 23.6±2.4 | **6.6±3.5** |
| | | SID | 46.6±14.0 | 57.0±12.2 | 37.6±9.6 | 67.2±10.5 | **20.0±10.5** |
| | | F1 | 0.719±0.102 | 0.624±0.078 | 0.755±0.088 | 0.488±0.064 | **0.876±0.069** |
| | 1 | SHD | 9.6±3.8 | 14.0±3.5 | 8.4±2.4 | 21.2±3.0 | **3.2±1.8** |
| | | SID | 38.4±9.3 | 51.6±13.1 | 33.0±9.0 | 65.6±6.5 | **14.6±9.5** |
| | | F1 | 0.817±0.089 | 0.709±0.076 | 0.855±0.056 | 0.524±0.099 | **0.936±0.035** |
| sachs | / | SHD | 13.0±0.0 | 17.0±0.0 | 21.6±0.4 | 47.4±2.3 | **11.0±0.0** |
| | | SID | 46.0±0.0 | 53.0±0.0 | 44.0±0.0 | **38.6±5.08** | 42.0±0.0 |
| | | F1 | 0.370±0.0 | 0.0±0.0 | 0.303±0.0 | 0.211±0.064 | **0.5±0.0** |

Even under the purely linear/nonlinear settings, the experimental results differ from the original manuscripts because of different synthetic data generation settings. One difference is that our nonlinear function is "gp", while theirs is "mlp". Another is the DAG weights, which we set to $[-1, -0.1]\cup[0.1, 1]$ while theirs are $[-2, -0.5]\cup[0.5, 2]$. In our setup, DAGs are more difficult to identify due to the weaker causal effects. --- Rebuttal Comment 2.1: Comment: I thank the authors for their response. The additional experiments are helpful. I am not so convinced about A2.2. "To the best of our knowledge, our assumptions is the weakest assumption that works well under both linear and nonlinear ANM." As far as I can tell, LiNGAMs are identifiable, and so are nonlinear models with Gaussian and non-Gaussian noises. Thus, ANMs with non-Gaussian noises should also be identifiable. I will keep my score for now. --- Rebuttal 3: Title: Thanks for your feedback Comment: Thank you for your valuable comments and recognition of our efforts, but there seems to be some misunderstanding regarding A2.2.
What we are trying to convey is that the assumption of CaPS is the weakest assumption that **can handle linear, nonlinear and even mixed relations simultaneously**. As you point out, LiNGAM can work under non-Gaussian noises. However, it can **only handle purely linear causal relations**. To further address your concerns, we provide a comparison of CaPS and LiNGAM on SynER1 with Gumbel noise in Table 6. We can draw two conclusions from Table 6: (1) LiNGAM suffers a significant decrease with increasing nonlinear ratio because it only works in the linear & non-Gaussian setting. (2) The empirical results of CaPS are consistently better than those of LiNGAM under non-Gaussian settings, which shows that CaPS can effectively support non-Gaussian noise in addition to the theoretically proven Gaussian case. **Table 6. CaPS vs LiNGAM under SynER1 with Gumbel noise.**

| Linear rate | DirectLiNGAM | | | CaPS | | |
|-------------|--------------|-----------|-------------|---------|---------|--------------|
| | SHD | SID | F1 | SHD | SID | F1 |
| 0 | 8.2±1.7 | 25.8±11.2 | 0.125±0.174 | 0.6±0.8 | 0.6±0.8 | 0.966±0.044 |
| 0.25 | 6.8±1.7 | 20.8±11.9 | 0.358±0.159 | 1.2±1.6 | 1.8±2.4 | 0.925±0.1 |
| 0.5 | 6.0±1.4 | 18.6±11.1 | 0.467±0.109 | 2.4±1.9 | 3.8±3.8 | 0.842±0.130 |
| 0.75 | 3.6±2.1 | 10.4±8.8 | 0.735±0.174 | 1.2±1.1 | 2.0±1.8 | 0.927±0.075 |
| 1 | 2.4±1.5 | 7.6±9.4 | 0.844±0.097 | 0.8±0.7 | 2.0±2.6 | 0.944±0.050 |
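As a reference point for the SHD numbers exchanged above: structural Hamming distance counts the extra, missing, and reversed edges between the estimated and true DAGs. A minimal sketch under the common convention that a reversed edge counts once (our own illustration, not the code used in the paper):

```python
import numpy as np

def shd(true_adj, est_adj):
    """Structural Hamming distance between two 0/1 DAG adjacency matrices."""
    diff = np.abs(true_adj - est_adj)
    # A reversed edge produces mismatches at both (i, j) and (j, i);
    # counting over the upper triangle of the symmetrized mismatch
    # matrix charges it as a single error.
    mismatched = (diff + diff.T) > 0
    return int(np.triu(mismatched, k=1).sum())

true = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]])  # 0 -> 1 -> 2
est = np.array([[0, 0, 0], [1, 0, 0], [0, 0, 0]])   # only 1 -> 0
# One reversed edge (0 -> 1) plus one missing edge (1 -> 2) gives SHD 2.
```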
Summary: This paper addresses the challenge of ordering-based causal discovery, which involves first determining the topological ordering of variables (typically by recursively identifying sub-leaf nodes) and then identifying the parent set for each variable. Existing methods often focus on either nonlinear or linear relationships. For instance, SCORE relies on a constant score Jacobian, which fails in the absence of nonlinear relationships, whereas LISTEN employs a precision matrix, which makes no sense in nonlinear contexts. This work proposes an ordering-based method that accommodates both linear and nonlinear relationships. Specifically, it identifies the topological ordering using the expectation (instead of the variance) of the score's Jacobian, under a sortability assumption on the exogenous noise components. Subsequently, average treatment effect estimation is extended to identify the parent sets. Strengths: 1. The application of ordering-based causal discovery methods to models with both linear and nonlinear relationships is novel to me. 2. The theorems and mathematical details appear to be correct, though I haven't checked all the details. 3. The experimental results are comprehensive, covering various competitors, different settings, and cases where assumptions are violated (e.g., C.7). Weaknesses: 1. **Assumptions are too strong:** - For linear relationships in the ANM, additional assumptions are required for identifiability. This paper adopts assumptions similar to those in LISTEN, namely, that the variances of exogenous additive noise components follow the same topological ordering of the causal DAG, akin to VAR-sortability assumptions (Reisach et al., 2021). These assumptions are overly stringent, impractical, and lack testability. More discussions regarding this can be referred to "Structure Learning with Continuous Optimization: A Sober Look and Beyond". - The authors also assume that all additive noise components are zero-mean Gaussian. 
It is unclear if this assumption is utilized throughout the paper or why it was mentioned if not. Also, are these assumptions testable? - Regarding the Gaussian assumption, if it is not used for any proof, the authors might consider assuming non-Gaussian noise. This would allow the DAG to be identifiable even with linear relationships. Then, with score matching (which still works, as in Sec. 4.3 of the SCORE paper) and some straightforward processing (to preserve non-Gaussianity/residual independence), the problem might still be solvable in a much more elegant way. 2. **Insufficient motivation for "parent score":** Once the topological ordering of the DAG is identified, one could use conditional independence tests between variables and all preceding variables to determine each edge's existence (as in most permutation-based methods), or employ sparse regression, as suggested in the original CAM paper. The authors need to justify the necessity of proposing a "parent score," which seems over-complicated with the average treatment effect estimation framework. Are there any advantages (e.g., in terms of time complexity or finite sample guarantee, as in the LISTEN paper)? 3. **Lack of technical novelty:** The technical contributions mainly combine ideas from SCORE and LISTEN, making the results and derivations (e.g., from constant variances to expected value of variances) straightforward extensions of previous work. While novelty is not a primary concern for me, it is worth noting as a minor weakness. Technical Quality: 3 Clarity: 2 Questions for Authors: My major concerns and questions are listed above in the "Weaknesses" section. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 1 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
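The reviewer's contrast between the expectation and the variance of the score's Jacobian can be seen numerically: for linear Gaussian data the log-density is quadratic, so the Hessian of the log-density equals the constant $-\Theta$ (negative precision matrix) at every sample. Its variance across samples is then zero, which is why a variance-based leaf criterion carries no signal in the purely linear case while the expectation remains informative. A small sketch on an assumed two-node linear SEM (our illustration only, not the paper's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed linear Gaussian SEM: x0 -> x1 with weight 0.8, unit noise variances.
n = 10_000
x0 = rng.normal(0.0, 1.0, n)
x1 = 0.8 * x0 + rng.normal(0.0, 1.0, n)
X = np.column_stack([x0, x1])

# For zero-mean Gaussian data, log p(x) = -0.5 * x^T Theta x + const,
# so the score's Jacobian is -Theta everywhere: constant in x.
precision = np.linalg.inv(np.cov(X, rowvar=False))
hessian_of_log_density = -precision  # identical at every sample
```

For this SEM the true precision diagonal is roughly (1.64, 1.0); since the Hessian is the same at every sample, only sample-independent statistics of it (such as its expectation) can distinguish nodes here.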
Rebuttal 1: Rebuttal: **Preliminaries** Thank you very much for your valuable comments. Before answering the three questions about assumptions, we want to highlight some facts about ANMs. Without interventional data, all ANM-based models ($y=f(x)+\epsilon$) have to make some assumptions on $f$ or $\epsilon$ due to the existence of backward models (see Prop. 23 in ref. [9]). Some papers make a purely linear or nonlinear assumption on $f$, e.g., SCORE and LISTEN. This paper does not make any additional assumption on $f$ (linear or nonlinear); therefore, some assumptions on $\epsilon$ are inevitable. To the best of our knowledge, ours is the weakest assumption that works well under both linear and nonlinear ANMs. **Q1.1:** The VAR-sortability assumption is too strong. **A1.1:** We need to emphasise that CaPS works under (i) **or** (ii) in Assumption 1. So, our assumption is **weaker** than VAR-sortability. CaPS can also work well using condition (ii) when VAR-sortability does not hold. For example, in a VAR-unsortable scenario with $\sigma^2\sim U(1,10)$ and causal effects greater than 0.9, CaPS still works well because the sum of parent scores exceeds the given lower bound in (ii). Experimentally, the results in App. C.7 also support the conclusion that CaPS works well even without VAR-sortability. **Q1.2:** Is zero-mean Gaussian noise utilized? Is it testable? **A1.2:** Zero-mean Gaussian noise is a de facto standard in ANM-based methods (see refs. [9,11,13,...]), as this setting does not lose any generality. Any ANM with non-zero-mean Gaussian noise $\epsilon_n \sim N(\mu,\sigma)$ is equivalent to an ANM with zero-mean Gaussian noise $\epsilon_z \sim N(0,\sigma)$, because $f(x)+\epsilon_n=f(x)+\mu+\epsilon_z=F(x)+\epsilon_z$. Non-zero-mean Gaussian noise can be considered as $f$ with a different bias in the ANM. As we already provided the performance under zero-mean Gaussian noise in Fig.
2, we additionally test the performance under non-zero-mean Gaussian noise in Tables 1 and 2 to further address your concerns. The conclusion is that $\mu$ does not significantly affect the relative performance of all compared baselines, and CaPS still achieves the best performance. **Q1.3:** A non-Gaussian proof, as in Sec. 4.3 of SCORE? **A1.3:** As mentioned in the preliminaries of this rebuttal, SCORE can extend to non-Gaussian noise because it makes a strong assumption on $f$ (purely nonlinear) and relies heavily on nonlinear properties for the proof. This paper handles a scenario with more general relations without any additional linear or nonlinear assumption, so we need to use some properties of $\epsilon$. Theoretically, the Gaussian assumption is necessary; it is used in Eq. 10 to prove Theorem 1. However, experimentally, App. C.7 shows that CaPS performs consistently well under non-Gaussian noise (Gumbel and Laplace), which shows the potential of CaPS to generalize to non-Gaussian noises. Thanks for your inspiring suggestion. We are still working to find a more elegant way to relax $\epsilon$ to non-Gaussian noise. However, we believe that CaPS, as the first ordering-based method capable of handling both linear and nonlinear relations, contributes sufficiently to the field. **Q1.4:** Motivation for the parent score. **A1.4:** Before we recall the motivation of the parent score, we want to note that the parent score can be directly decoupled from the score's Jacobian using Algorithm 1, which does not introduce additional computational complexity. The average treatment effect framework you mentioned is only used for theoretical interpretation. Here are the advantages of introducing the parent score. (1) Clearer theoretical interpretation. Why is condition (ii) a lower bound on identifiable causal effects? What is the physical meaning of Theorem 1? These are hard to explain without the parent score and the given average treatment effect framework.
However, we can clearly answer these questions with the parent score in Corollary 1 (see details in lines 189-201 and App. A.5): the left-hand side of condition (ii) is a sum of causal effects, and Theorem 1 finds a leaf node with the weakest causal effect. (2) Better performance and speed. The parent score helps accelerate the pruning process and correct inaccurate predictions in the pruning step. With this post-processing, performance is further improved on real-world datasets (12.6% on Sachs and 3.6% on Syntren in F1). More importantly, it can largely accelerate the pruning process, especially as the number of nodes increases (78.8% for 50 nodes and 49.41% for 20 nodes; see Fig. 8). This makes CaPS more capable of handling large-scale structures. Therefore, the parent score is not an over-complicated design. **Q1.5:** Lack of novelty; does CaPS merely combine ideas from SCORE and LISTEN? **A1.5:** We understand your concerns, but CaPS is totally different from SCORE and LISTEN for the following reasons: (1) More generalized scenarios. CaPS aims to handle linear, nonlinear, and most plausibly mixed relations, while SCORE only handles nonlinear relations and LISTEN only handles linear relations. CaPS is the first ordering-based method to deal with this scenario. (2) Different theoretical ideas. As mentioned in the preliminaries of the rebuttal, SCORE and LISTEN are derived through the properties of $f$, while CaPS derives Theorem 1 from the properties of $\epsilon$, which is a totally different idea from a theoretical perspective (see details in App. A.2). Therefore, although the theorems differ only in "expectation" versus "variance", the derivations are totally different and cannot be straightforwardly extended from either SCORE or LISTEN. (3) A new concept for identifiability and post-processing. This paper proposes the new concept of a "parent score", which gives a new lower bound for identifiability in terms of causal effect (see Corollary 1).
This concept can be used to accelerate the pruning process (see Fig. 8) and correct inaccurate predictions in the pruning step (see Figs. 4 and 7). --- Rebuttal 2: Title: Tables 1 & 2: experiments with non-zero-mean Gaussian noise Comment: **Table 1. SynER1 with $\epsilon \sim N(1,\sigma^2)$**

| Dataset | SynER1 ($\mu$=1) | | | | | |
|-------------|---------------|-------------|-------------|-------------|-------------|--------------|
| Linear rate | Metrics | GOLEM | CAM | SCORE | DiffAN | CaPS |
| 0 | SHD | 6.4±0.8 | **0.4±0.4** | 0.6±0.8 | 2.4±1.2 | 0.6±0.8 |
| | SID | 19.4±8.2 | **1.6±2.7** | 2.6±4.7 | 10.2±5.3 | 4.2±7.9 |
| | F1 | 0.403±0.134 | 0.967±0.044 | **0.969±0.040** | 0.788±0.073 | 0.958±0.061 |
| 0.25 | SHD | 5.6±1.8 | 1.0±1.5 | 1.0±0.63 | 2.6±1.0 | **0.8±0.7** |
| | SID | 19.0±10.4 | **2.4±3.4** | 5.0±5.7 | 13.6±8.3 | 5.6±7.6 |
| | F1 | 0.489±0.215 | 0.911±0.119 | 0.919±0.092 | 0.760±0.114 | **0.945±0.057** |
| 0.5 | SHD | 4.6±3.1 | 2.2±1.3 | 2.4±1.9 | 2.6±1.6 | **0.8±0.7** |
| | SID | 20.2±17.3 | 6.8±3.9 | 9.8±8.7 | 7.0±4.5 | **2.4±2.7** |
| | F1 | 0.582±0.275 | 0.799±0.108 | 0.788±0.155 | 0.788±0.112 | **0.932±0.064** |
| 0.75 | SHD | 4.2±1.73 | 3.4±1.9 | 3.2±2.9 | 3.8±3.7 | **0.8±1.1** |
| | SID | 22.0±10.1 | 16.8±12.1 | 7.6±8.2 | 8.2±7.7 | **3.0±4.2** |
| | F1 | 0.612±0.291 | 0.663±0.198 | 0.739±0.252 | 0.740±0.213 | **0.924±0.098** |
| 1 | SHD | 3.4±3.0 | 4.0±2.0 | 4.2±3.4 | 5.0±3.1 | **1.0±1.0** |
| | SID | 10.6±10.5 | 18.0±11.8 | 12.0±10.6 | 12.0±6.2 | **3.2±4.1** |
| | F1 | 0.746±0.201 | 0.625±0.199 | 0.655±0.286 | 0.631±0.180 | **0.913±0.091** |

**Table 2.
SynER1 with $\epsilon \sim N(10,\sigma^2)$**

| Dataset | SynER1 ($\mu$=10) | | | | | |
|---------|----------------|-------------|-------------|-------------|-------------|--------------|
| Linear rate | Metrics | GOLEM | CAM | SCORE | DiffAN | CaPS |
| 0 | SHD | 6.6±0.8 | **0.4±0.4** | **0.4±0.4** | 3.2±0.7 | 0.6±0.8 |
| | SID | 19.6±8.4 | **1.6±2.7** | 1.8±3.1 | 12.8±3.8 | 4.2±7.9 |
| | F1 | 0.386±0.118 | 0.967±0.044 | **0.978±0.025** | 0.684±0.063 | 0.958±0.061 |
| 0.25 | SHD | 5.8±1.9 | 1.0±1.5 | 1.0±0.6 | 4.0±1.6 | **0.8±0.7** |
| | SID | 19.2±10.5 | **2.4±3.4** | 5.2±5.1 | 15.6±10.2 | 5.6±7.6 |
| | F1 | 0.472±0.212 | 0.911±0.119 | 0.900±0.088 | 0.623±0.166 | **0.945±0.057** |
| 0.5 | SHD | 4.6±3.0 | 2.2±1.3 | 2.2±1.6 | 4.6±1.0 | **0.6±0.8** |
| | SID | 20.2±17.3 | 6.8±3.9 | 8.8±6.4 | 10.0±2.8 | **1.6±2.7** |
| | F1 | 0.582±0.27 | 0.799±0.108 | 0.808±0.119 | 0.654±0.046 | **0.961±0.055** |
| 0.75 | SHD | 4.3±0.80 | 3.4±1.9 | 2.0±1.7 | 3.4±2.2 | **0.8±1.1** |
| | SID | 25.0±18.4 | 16.8±12.1 | 7.4±7.5 | 10.4±5.8 | **3.0±4.2** |
| | F1 | 0.611±0.162 | 0.663±0.198 | 0.808±0.184 | 0.751±0.110 | **0.924±0.098** |
| 1 | SHD | 3.0±3.7 | 4.0±2.0 | 3.2±2.9 | 5.4±2.4 | **1.2±1.1** |
| | SID | 11.2±8.2 | 18.0±11.8 | 11.4±10.4 | 13.0±5.9 | **3.8±4.0** |
| | F1 | 0.769±0.290 | 0.625±0.199 | 0.716±0.248 | 0.613±0.130 | **0.892±0.093** |

--- Rebuttal 3: Comment: I thank the authors for the detailed response. For the assumptions, technically yes, they are slightly weaker than VAR-sortability with another condition allowed. However, the other condition, similar to VAR-sortability, is mainly human-crafted for the framework. It lacks any natural theoretical interpretability or practical testability. For the zero-mean Gaussian assumption, thanks for the additional experiments, but "$\mu$ does not significantly affect the relative performance" -- could the authors please confirm whether zero-mean is needed for the asymptotic identifiability guarantee?
For the Gaussian noise assumption, "it is used in Eq. 10 to prove Theorem 1" -- however, I couldn't see from the proof where specifically a Gaussian distribution is needed. Instead, I can only see the use of general forms of means and variances, together with the assumptions. But in any case, if only Gaussian noise is allowed, it would be a further shortcoming for this work: the identifiability would be unclear, and the method would be less practical. For the motivation of the parent score, thanks for reminding me that it does not introduce additional computational complexity. Though not theoretically interesting, it indeed offers empirical gains. I have adjusted my score to reflect this point. --- Rebuttal Comment 3.1: Title: Thanks for your feedback Comment: Thank you for your insightful comments and for adjusting the score. We would like to clarify a few points where we believe there might have been a misinterpretation of our work. For condition (ii), we already provide its theoretical interpretability in Corollary 1 and App. A.5. This condition is a straightforward lower bound on identifiable causal effects, which demonstrates **for the first time** how strong causal effects can be recognized. For the practical testability of condition (ii), as noted in Q&A 2.3 of Reviewer 8ao9, **most such conditions are unverifiable without the ground-truth SEM (even simple conditions like linearity or nonlinearity)**. Therefore, to provide further evidence of practical testability, additional experiments are given on synthetic data. We test different settings of the sum of causal effects to show the performance of CaPS when condition (ii) is perfectly satisfied / likely satisfied / likely unsatisfied. In order to accurately control the causal effect and the lower bound in condition (ii), we use the linear SynER1 with noise standard deviations in $U(0.4, 0.8)$.
Under these settings, condition (ii) is perfectly satisfied when the minimal causal effect is greater than $\sqrt{0.8^2(\frac{1}{0.4^2}-\frac{1}{0.8^2})}=\sqrt{3}$, because the node with the weakest SATE and a single child then exceeds the theoretical lower bound. The experimental results are shown in Table 5, which provides practical testability and shows that **CaPS works well when condition (ii) is perfectly or likely satisfied**. **Table 5. Practical testability of condition (ii) on SynER1.**

| condition (ii) | causal effect | SHD | SID | F1 |
|---------------------|---------------|---------|-----------|--------------|
| perfectly satisfied | $U(1.8, 2.0)$ | 0.0±0.0 | 0.0±0.0 | 1.000±0.000 |
| likely satisfied | $U(1.6, 1.8)$ | 0.0±0.0 | 0.0±0.0 | 1.000±0.000 |
| likely satisfied | $U(1.4, 1.6)$ | 0.2±0.4 | 0.8±1.6 | 0.975±0.050 |
| likely satisfied | $U(1.2, 1.4)$ | 0.4±0.4 | 1.4±1.7 | 0.964±0.049 |
| likely unsatisfied | $U(0.6, 0.8)$ | 2.6±1.0 | 10.0±5.6 | 0.772±0.065 |
| likely unsatisfied | $U(0.4, 0.6)$ | 4.2±1.4 | 8.8±6.8 | 0.689±0.082 |
| likely unsatisfied | $U(0.2, 0.4)$ | 4.0±1.0 | 15.8±11.7 | 0.655±0.103 |

For zero-mean Gaussian noise, this setting is widely used in previous work (refs. [9,11,13,19,...]) because $\epsilon$ is usually considered the residual of $f(pa(x))$. As LISTEN has pointed out, "without loss of generality, we assume that $E(X_i)=E(N_i)=0$". This is because non-zero-mean and zero-mean noise are equivalent in an ANM, which we already explained in Q&A 1.2. Thus, any non-zero-mean ANM with $\epsilon_n \sim N(\mu,\sigma)$ can be transformed into a zero-mean ANM with $\epsilon_z \sim N(0,\sigma)$ that then follows the same derivation. That is why $\mu$ does not significantly affect the relative performance empirically in Tables 1 & 2. For the Gaussian noise assumption, to be precise, we use the Gaussian pdf to derive Eq. 2, and Eq. 10 follows from Eq. 2.
Since CaPS **handles linear, nonlinear and even mixed relations simultaneously**, we put almost no restrictions on $f$ in the ANM. Under this premise, as CaPS handles both types of relations for the first time, it seems too strict to require a solution for both linear & nonlinear and Gaussian & non-Gaussian at the same time. Although this is the weakest condition we could derive for the mixed linear and nonlinear scenario, the experimental results in App. C.7 are encouraging in cases of non-Gaussian noise. We are striving to broaden it to more relaxed conditions. Once again, thanks for your careful review and the time you have dedicated to our manuscript. We hope that our responses address your concerns.
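As a quick arithmetic check on the Table 5 setup in the rebuttal above: with noise standard deviations drawn from $U(0.4, 0.8)$, the quoted condition (ii) threshold evaluates to exactly $\sqrt{3} \approx 1.732$, which is why causal effects drawn from $U(1.8, 2.0)$ satisfy the condition perfectly.

```python
import math

# Threshold quoted in the rebuttal:
# sqrt(sigma_max^2 * (1 / sigma_min^2 - 1 / sigma_max^2))
sigma_min, sigma_max = 0.4, 0.8
bound = math.sqrt(sigma_max**2 * (1 / sigma_min**2 - 1 / sigma_max**2))
# 0.64 * (6.25 - 1.5625) = 3.0, so bound = sqrt(3) ≈ 1.732
```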
NeurIPS_2024_submissions_huggingface
2024
DynaMITE-RL: A Dynamic Model for Improved Temporal Meta-Reinforcement Learning
Accept (poster)
Summary: The paper presents DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach designed to handle environments with evolving latent states. The authors propose the Dynamic Latent Contextual Markov Decision Process (DLCMDP) model to capture the temporal structure of episodes where the latent state changes at varying rates. DynaMITE-RL incorporates three key components: session consistency, latent belief conditioning, and session reconstruction masking. The proposed method is validated through experiments in various domains, including discrete Gridworld environments, continuous-control tasks, and simulated robot assistive tasks. The results demonstrate significant improvements in policy adaptation and performance over state-of-the-art meta-RL baselines.

Strengths:

### S1. Novel Problem Formulation
The paper addresses a critical issue in meta-RL by introducing the DLCMDP model to capture temporal dynamics in latent states. Although this work is somewhat motivated by VariBAD, this formulation is novel and provides a more realistic representation of many real-world environments where latent factors change over time.

### S2. Effective Empirical Validation
DynaMITE-RL is thoroughly validated through experiments in diverse environments. The results show significant improvements in learning efficiency and policy performance compared to existing meta-RL methods. The detailed experimental setup and comprehensive evaluation enhance the credibility of the proposed approach.

### S3. Clear Presentation
The paper is well-structured and clearly explains the proposed method and its advantages. The use of figures, such as the graphical model of DLCMDP and learning curves, effectively supports the presentation of results. The detailed pseudocode and model architecture diagrams further aid in understanding the implementation.

Weaknesses:

### W1. Theoretical Analysis
The paper lacks a detailed theoretical analysis of why the proposed DynaMITE-RL framework works effectively. While the empirical results are strong, a deeper theoretical exploration of the underlying mechanisms and potential limitations would strengthen the paper.

### W2. Scalability Concerns
The scalability of DynaMITE-RL to more complex, large-scale environments is not thoroughly discussed. While the method shows promising results in the tested environments, a broader analysis of its scalability and practical utility in more complex scenarios is needed.

### W3. Computational Complexity
The computational complexity of the proposed method, particularly in terms of training and inference, is not explicitly addressed. Understanding the computational requirements and potential limitations in terms of resources and execution time would provide a more comprehensive evaluation of its applicability.

Technical Quality: 2
Clarity: 3

Questions for Authors:

### Q1. Scalability to Complex Environments
How does DynaMITE-RL scale to more complex, large-scale environments? Are there any specific challenges or limitations that need to be addressed for practical deployment in such scenarios?

### Q2. Computational Requirements
What are the computational requirements for training and deploying DynaMITE-RL? How does the method perform in terms of execution time and resource consumption compared to existing meta-RL methods?

Confidence: 4
Soundness: 2
Presentation: 3
Contribution: 2
Limitations: The authors acknowledge the limitations related to the assumption of Markovian latent dynamics and the focus on specific benchmark environments. However, a more detailed discussion on potential negative societal impacts and strategies to mitigate them would be beneficial.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 6
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive feedback regarding the formulation of DLCMDP and its applicability to many real-world scenarios compared to prior methods. We are pleased that the reviewer found the experimental setup to be “detailed” and “comprehensive” and, as a result, that it enhanced the credibility of our approach.

**W1: Theoretical Analysis**
We agree with the reviewer regarding the lack of more rigorous theoretical analysis. However, as the reviewer pointed out, we provide comprehensive experimental results with several state-of-the-art baselines across multiple environments, highlighting the effectiveness of DynaMITE-RL in practice. More intuitively, we introduce the inductive bias of slowly evolving latent contexts and design a simple algorithm in DynaMITE-RL to exploit this, making the learning problem more tractable. We are excited to incorporate more detailed theoretical analysis in future extensions of DynaMITE-RL.

**W2: Scalability to More Complex Environments**
We appreciate that the reviewer is excited by future directions in extending DynaMITE-RL to complex, large-scale domains. This is certainly possible as our algorithm does not make any restrictive assumptions about the environment’s observation and action spaces. However, there are some potential challenges to deploying DynaMITE-RL in real-world environments which are interesting to consider but outside the scope of this work. For one, in visual environments, we need to handle noisy pixel-based observations with partial information. In our simulated environments, we assume access to the full low-level environment state. One potential direction could be to use modern video architectures such as ViViT [1] and Video State Space Models [2], designed to process long image sequences, to learn our belief model. Such an approach will require using larger Transformer-based architectures, which have been shown to be more effective in video understanding tasks. We are eager to explore using more advanced model architectures to scale task inference to more challenging visual domains. Further, for many problems, we cannot assume access to a hand-crafted dense reward function (e.g., recommendation systems). We may only have a binary signal of task success. Prior work [3] proposes an exploration bonus based on novelty to improve meta-exploration in the augmented state space, which we could incorporate directly into DynaMITE-RL for sparse-reward environments. To summarize, **we believe it is possible to scale DynaMITE-RL to more complex domains and are interested in pursuing this for future work.** ***Our work is the first work that formulates this problem, proposes a simple yet effective meta-RL solution in DynaMITE-RL, and conducts proof-of-concept experiments.***

**W3: Compute Resources and Runtime**
We provide a description of the compute resources and runtime of DynaMITE-RL and each baseline method in Appendix A.6, which is provided in the supplementary material. We paste the text from the general response below. All experiments can be run on a single Nvidia RTX A6000 GPU. Our implementation is written completely in JAX. The following lists the average run-time for DynaMITE-RL and each baseline method for the online RL experiments with the HalfCheetah and ScratchItch environments. These numbers vary depending on the environment; JAX-based environments (e.g., Reacher and HalfCheetah) are highly parallelized and the runtimes are orders of magnitude lower than ScratchItch. We also run multiple experiments on the same device, so runtimes may be overestimated.

- RL2: 4 hours, 16 hours
- VariBAD: 3 hours, 8 hours
- BORel: 3 hours, 8 hours
- SecBAD: 3 hours, 10 hours
- ContraBAR: 2.5 hours, 7 hours
- DynaMITE-RL: 3 hours, 8 hours

**References:**
[1] Arnab, Anurag, et al. "ViViT: A video vision transformer." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[2] Li, Kunchang, et al. "VideoMamba: State space model for efficient video understanding." arXiv preprint arXiv:2403.06977 (2024).
[3] Zintgraf, Luisa M., et al. "Exploration in approximate hyper-state space for meta reinforcement learning." International Conference on Machine Learning. PMLR, 2021.

---
Rebuttal Comment 1.1:
Title: Response to authors' rebuttal
Comment: I appreciate the authors for the detailed explanation about the complexity. Most of my concerns are well addressed, therefore I'm raising my score.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the content and effort of our rebuttal! We greatly appreciate you raising the score of our paper.
Summary: The authors introduce DynaMITE-RL, a meta-reinforcement learning (meta-RL) approach formulated as a dynamic latent contextual MDP (DLCMDP). This framework allows the latent context of an episode to change multiple times and at varying rates within a single episode, making it more general than both POMDPs and latent MDPs as a control framework. The authors show three criteria for an algorithm to be successful in solving a DLCMDP: consistency of latent information within sessions, session masking, and prior latent conditioning. DynaMITE-RL addresses the limitations of existing meta-RL methods in handling dynamic latent contexts that evolve over time, which is crucial for many real-world applications. The algorithm is tested on a range of meta-RL benchmarks, including discrete Gridworld environments, continuous control tasks (Reacher, HalfCheetah), and assistive robot tasks (Assistive Itch Scratch). These environments have been altered to allow for latent context switching, which can occur stochastically. The proposed algorithm outperforms other meta-RL algorithms in the more general DLCMDP setting, in both offline and online scenarios. The results show significant improvements over state-of-the-art benchmarks in terms of training trajectories and final performance. The algorithm performs optimally when a transformer is used to encode the belief model in the offline setting. It is shown using ablations that session consistency, latent belief conditioning, and session reconstruction masking are all important to the model's performance.

Strengths:

Originality
A novel meta-RL approach, DynaMITE-RL, is introduced as a DLCMDP. This is innovative as it allows the latent context of an episode to change multiple times at varying rates within a single episode, making it more general than both POMDPs and latent MDPs. This is clearly an important contribution, as such context switching is evident in many real-world applications that previous methods could not handle effectively.

Quality
Both the theoretical and experimental aspects of the paper bring rigour to the work and make it of high quality. The experimental setup includes a good range of benchmarks, and the ablation studies highlight the effectiveness and necessity of each of the three components of the proposed method. The results show that DynaMITE-RL outperforms state-of-the-art meta-RL algorithms.

Clarity
The paper is clearly written and well-structured. The problem statement and motivation are clear, and the authors provide a detailed explanation of the DLCMDP framework and of the DynaMITE-RL algorithm. The experimental results and accompanying discussion are also clearly presented.

Significance
The DynaMITE-RL framework highlights an important set of tasks for more adaptive RL systems. This is particularly important for applications where the environment or task dynamics change over time. The ability of DynaMITE-RL to handle both online and offline settings further shows its practical relevance and applicability. The proposed approach advances the state-of-the-art in meta-RL as well as defining a novel direction for future research.

Weaknesses:
- While hyperparameters are provided, the code itself is not. This would be relatively easy to do and would allow for very simple verification of results.
- There is no discussion given to the computational complexity of the approach and how well it will scale to higher-dimensional environments.
- It would be useful to have an ablation study of the effects of different architectures for the belief module as well as different types of latent context dynamics.
- It would be useful to have a section on real-world considerations, such as handling noisy observations or dealing with sparse rewards within this framework.

Technical Quality: 3
Clarity: 3

Questions for Authors:
- Why is BORel, the meta-RL algorithm that mainly investigates offline meta-RL, used in the online RL experiments?
- In order to test the novel algorithm, new combinations of environments needed to be created. Why do the authors think that no such environments already exist?

Confidence: 4
Soundness: 3
Presentation: 3
Contribution: 3
Limitations: Yes, in the sense that there is discussion of future work regarding non-Markovian latent dynamics and applications to real-world problems.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive feedback regarding the DLCMDP model and its applicability in many real-world scenarios. We are especially grateful for the reviewer’s remarks stating that our “theoretical and experimental aspects … make it [the paper] of high quality”. Moreover, the reviewer found our experiments and ablation studies to be useful in highlighting the importance of each component of DynaMITE-RL. We will provide the code for the environments and algorithms in the camera-ready version of the paper.

**W1: Compute Resources and Runtime**
We provide a description of the compute resources and runtime of DynaMITE-RL and each baseline method in Appendix A.6, which is included in the supplemental material. All experiments can be run on a single Nvidia RTX A6000 GPU. Our implementation is written completely in JAX. The following lists the average run-time for DynaMITE-RL and each baseline method for the online RL experiments with the HalfCheetah and ScratchItch environments. These numbers vary depending on the environment; JAX-based environments (e.g., Reacher and HalfCheetah) are highly parallelized and the runtimes are orders of magnitude lower than ScratchItch.

- RL2: 4 hours, 16 hours
- VariBAD: 3 hours, 8 hours
- BORel: 3 hours, 8 hours
- SecBAD: 3 hours, 10 hours
- ContraBAR: 2.5 hours, 7 hours
- DynaMITE-RL: 3 hours, 8 hours

**W2: Ablation study of different architectures for the belief model**
We agree with the reviewer that it would be interesting to have a more in-depth analysis of different architectures for the belief module. We do have some results using a Transformer-based encoder in the belief module instead of the standard recurrent network used in prior work. We find that this does yield an improvement in our offline results. However, because of computational constraints, we were unable to test this in an online setting. We hypothesize that the Transformer encoder would be more beneficial in a long-horizon context in which system identification requires attending to timesteps very far in the past. We are very keen to explore the problem of belief estimation and Bayes-RL for such long-horizon applications in future work. We hope to conduct a few more experiments comparing LSTM and Transformer belief models for the camera-ready paper.

**W3: Real-world considerations of DynaMITE-RL**
We are pleased that the reviewer is excited by future directions in extending DynaMITE-RL to complex, large-scale domains. This is certainly possible as our algorithm does not make any restrictive assumptions about the environment's observation and action spaces. However, there are some potential challenges to deploying DynaMITE-RL in real-world environments which are interesting to consider but outside the scope of this work. For one, in visual environments, we need to handle noisy pixel-based observations with partial information. In our simulated environments, we assume access to the full low-level environment state. One potential direction could be to use modern video architectures such as ViViT [1] and Video State Space Models [2], designed to process long image sequences, to learn our belief model. Such an approach will require using larger Transformer-based architectures, which have been shown to be more effective in video understanding tasks. We are eager to explore using more advanced model architectures to scale task inference to more challenging visual domains. Further, for many problems, we cannot assume access to a hand-crafted dense reward function. We may only have a binary signal of task success. Prior work [3] proposes an exploration bonus based on novelty to improve meta-exploration in the augmented state space, which we could incorporate directly into DynaMITE-RL for sparse-reward environments. To summarize, we believe it is possible to scale DynaMITE-RL to more complex domains and are interested in pursuing this for future work. ***Our work is the first work that formulates this problem, proposes a simple yet effective meta-RL solution in DynaMITE-RL, and conducts proof-of-concept experiments.***

**Q1: Why is BORel, the meta-RL algorithm that mainly investigates offline meta-RL, used in the online RL experiments?**
The reviewer correctly points out that BORel [4] primarily investigates the offline approach to meta-RL. While BORel focuses on the offline-RL setting, the authors also propose an off-policy Soft Actor-Critic variant of their algorithm. However, we find that using an off-policy RL algorithm only provides sample-efficiency improvements over vanilla VariBAD, as it allows us to reuse experience from the behavior policy, but it nevertheless fails to adapt to changing latent contexts in the DLCMDP environments.

**Q2: In order to test the novel algorithm, new combinations of environments needed to be created. Why do the authors think that no such environments already exist?**
To ensure a fair comparison, we conduct our experiments on existing meta-RL benchmarks such as MuJoCo continuous control, which have been extensively studied in prior literature. We made very minor modifications to adapt these tasks to have changing latent contexts. We further included a new itch-scratching task to demonstrate a more realistic scenario where DLCMDPs may arise in the real world. Following the reviewer's suggestion, we will look into existing open-source environments which exhibit the DLCMDP structure and include those in our experiments if possible.

**References:**
[1] Arnab, Anurag, et al. "ViViT: A video vision transformer." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[2] Li, Kunchang, et al. "VideoMamba: State space model for efficient video understanding." arXiv preprint arXiv:2403.06977 (2024).
[3] Zintgraf, Luisa M., et al. "Exploration in approximate hyper-state space for meta reinforcement learning." International Conference on Machine Learning. PMLR, 2021.

---
Rebuttal Comment 1.1:
Title: Acknowledgement of receipt of response from authors
Comment: I thank the authors for these responses, and hope that with some additional comments based on this discussion it will make the paper an even stronger contribution.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the content and effort of our rebuttal! We will incorporate the feedback from this discussion to improve the final version of our paper.
Summary: The paper proposes a special variant of non-stationary MDPs, the DLCMDP, where the latent context information changes according to an unknown transition function. Then the authors present DynaMITE-RL, a meta-RL approach to handle environments with evolving latent context variables. Experiments are conducted on GridWorld, two MuJoCo continuous control tasks, and Assistive Itch Scratch.

Strengths:
1. The paper targets a more general non-stationary MDP setting than meta-RL with a fixed latent context.
2. Empirical results show that the algorithm achieves better performance than other meta-RL baselines, and ablation studies justify the effect of different components of the algorithm.
3. The figures and diagrams are good and make the paper easier to understand.

Weaknesses:
About DLCMDPs. The definition is a little strange. I admit it's more general than HiP/latent MDPs, but I would expect the dynamics to always depend on m; the equation after line 125 shows that when switching latent context, the state is directly sampled from a fixed initial distribution without dependency on the previous states and actions. Therefore, for now I disagree with the argument "letting dt=1, a DLCMDP reduces to a general POMDP with state space M", as a POMDP's transition is always conditioned on the previous states and actions except for the initial state. Also, is there any concrete example showing that this kind of decision process exists in the real world?

The algorithm section is a little vague to me. More intuition and explanation would be helpful. Specifically:
1. Line 164: why is the generative model the probability distribution of states and rewards given the actions (is it non-stationary?)? Usually it's the inverse form for RL inference, right?
2. How do you get $Z$ and $\Omega$ during training? If I understand correctly, they are all hidden variables.

About the experiments. In Tables 1 and 2, although the results clearly show that the proposed method achieves better average reward than the baselines, many of them are still negative values (e.g., HC-vel, -146.0). Is there any evidence showing that the agent indeed learns some meaningful policies for these tasks instead of some weird behaviors that cause an increase in reward? Also, I would expect to see results on one more domain of MuJoCo continuous control tasks; HalfCheetah and Reacher are relatively the simplest ones.

Technical Quality: 2
Clarity: 2
Questions for Authors: See above
Confidence: 3
Soundness: 2
Presentation: 2
Contribution: 2
Limitations: See above
Flag For Ethics Review: ['No ethics review needed.']
Rating: 5
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive feedback and that they found the figures and diagrams helpful for understanding the paper.

**W1: DLCMDP Definition**
The reviewer correctly points out that, according to the equation after line 125, when the latent context changes, the next state is sampled from a fixed initial distribution which is not history-dependent; in other words, there is no causal link between the final state of a session and the first state of the next session. In practice, this formulation follows prior meta-RL works where, between trials, the agent is reset to the same initial state; this makes it important for the agent to maintain a belief over the latent variables across trials. However, we note that **there is a causal dependency between consecutive sessions through the latent context variable $m$**. We argue that our method should still work if the initial state distribution is history-dependent. The trajectory history should be fully contained in the latent $m$ because the LSTM encodes each timestep of the trajectory history to infer $m$. DLCMDPs are as general and expressive as POMDPs. POMDPs can have additional dependencies between observations, but that is not needed; rather, it helps make the distinction between what is latent and a sufficient statistic, and what is observed. That said, the **specific temporal structure of DLCMDPs allows us to devise an efficient algorithm that exploits the transition dynamics of the latent context, thereby improving learning efficiency**. This, we believe, is one of the critical, and valuable, features of DLCMDPs, and an important contribution of our work.

**W2: DynaMITE-RL Algorithm Clarification**
In our DLCMDP setting, the latent contexts $\mathcal{Z}$ and session terminations $\Omega$ are unobserved by the agent. Following the probabilistic graphical model shown in Figure 1 of the paper, the latent variable is responsible for the generative process. Here we are not trying to learn a policy, but rather we want a model of the observed variables conditioned on the unobserved latent context variables. During training, the latent variable and session termination are both predicted from the hidden state of the recurrent network. At each timestep, the recurrent network encodes a (state, action, reward) tuple, producing a new hidden state from which we infer the parameters of the posterior belief (Gaussian) and session termination (Bernoulli). The learning signal comes from the decoder network, which predicts the ground-truth trajectory (states and rewards) conditioned on the posterior belief and termination information. During training, we first collect on-policy DLCMDP episodes. We then alternate between training the policy using any on-policy RL algorithm (e.g., PPO) and training the posterior belief model by maximizing the ELBO objective derived in our paper. At inference time, we can use the trained belief model to estimate the posterior belief at each timestep given new observations and roll out the policy conditioned on the state and the current belief.

**Q1: In Tables 1 and 2, although the results clearly show that the proposed methods achieve better average reward than the baselines, many of them are still negative values.**
We emphasize that the rewards achieved by the baselines in our setting are not directly comparable to those from the original papers. Under a DLCMDP, the latent dynamics cause the reward function to change between consecutive sessions. Depending on the rate at which the latent context (e.g., the reward function) changes, the maximum return an agent can achieve will differ. If the latent context changes frequently, it is more difficult for the agent to infer the latent and act Bayes-optimally, consequently achieving a lower return.
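To make the recurrent belief update described in the W2 response above concrete, here is a minimal, self-contained sketch. It is not the paper's implementation: the dimensions are arbitrary and the randomly initialized weights merely stand in for a trained encoder and its output heads.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes; the paper's actual dimensions are not specified here.
OBS_DIM, ACT_DIM, HID_DIM, LATENT_DIM = 4, 2, 8, 3

# Random weights stand in for a trained recurrent encoder and output heads.
W_in = rng.normal(scale=0.1, size=(OBS_DIM + ACT_DIM + 1, HID_DIM))
W_h = rng.normal(scale=0.1, size=(HID_DIM, HID_DIM))
W_mu = rng.normal(scale=0.1, size=(HID_DIM, LATENT_DIM))
W_logvar = rng.normal(scale=0.1, size=(HID_DIM, LATENT_DIM))
W_term = rng.normal(scale=0.1, size=(HID_DIM, 1))


def belief_step(hidden, state, action, reward):
    """Encode one (state, action, reward) tuple into a new hidden state,
    then read off the Gaussian posterior-belief parameters over the latent
    context and a Bernoulli session-termination probability."""
    x = np.concatenate([state, action, [reward]])
    hidden = np.tanh(x @ W_in + hidden @ W_h)
    mu = hidden @ W_mu                                    # posterior mean
    logvar = hidden @ W_logvar                            # posterior log-variance
    p_term = 1.0 / (1.0 + np.exp(-(hidden @ W_term)[0]))  # P(session ends here)
    return hidden, mu, logvar, p_term


# Roll the belief forward over a short dummy trajectory.
hidden = np.zeros(HID_DIM)
for _ in range(5):
    s = rng.normal(size=OBS_DIM)
    a = rng.normal(size=ACT_DIM)
    r = float(rng.normal())
    hidden, mu, logvar, p_term = belief_step(hidden, s, a, r)
```

In the full method, `mu` and `logvar` feed a trajectory decoder trained with the ELBO, the termination head is shaped by session reconstruction masking, and the policy is conditioned on the belief; none of that machinery is reproduced in this sketch.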
**Q2: Is there any evidence showing that the agent indeed learned some meaningful policies for these tasks instead of some weird behaviors that cause an increase in reward?**
We have qualitative videos for each environment that highlight the differences between the resulting policies learned by DynaMITE-RL and the baseline methods. Here are links to videos comparing agents trained with VariBAD (left) and DynaMITE-RL (right). We will include these qualitative visualizations in the supplemental material. Qualitative results: https://shorturl.at/rO917

**Additional Experiment on the More Complex MuJoCo Ant Walking Task**
Following the reviewer's suggestion, we provide additional results on the Ant task in MuJoCo. The Ant task is to navigate a four-legged ant following targets that are placed along a semi-circle path, where the targets for each session change according to a predefined transition function. The action space is 8-dimensional and the observation space is 27-dimensional, representing each body part, including 3D position, orientation, joint angles, and joint velocities. Given the time constraints, we were only able to complete experiments for VariBAD, DynaMITE-RL, and SecBAD. We provide results averaged over 3 seeds and 25 evaluation rollouts. We plan to provide the complete set of results for the camera-ready paper.

| Method | Evaluation Return |
|------------|---------------|
| VariBAD | -80.4 $\pm$ 4.3 |
| SecBAD | -63.2 $\pm$ 6.2 |
| DynaMITE-RL | -25.6 $\pm$ 4.2 |

---
Rebuttal Comment 1.1:
Comment: I thank the authors for the additional results and clarifications. I have increased my score to 5 correspondingly. But I'm still not fully convinced that DLCMDP is as general as POMDP.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the content and effort of our rebuttal! We greatly appreciate you raising the score of our paper. We will incorporate the feedback from this discussion and clarify the distinction between DLCMDPs and other variants of MDPs in the final version of the paper.
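As a toy illustration of the session structure at issue in the W1 exchange above: the latent context m is piecewise-constant within a session, transitions according to a latent dynamics matrix at session boundaries, and the state is then resampled from a fixed, history-independent initial distribution. All constants, dynamics, and the reward below are hypothetical, not the paper's environments.

```python
import numpy as np

rng = np.random.default_rng(7)

N_CONTEXTS = 3
# Hypothetical latent-context transition matrix (each row sums to 1).
CONTEXT_T = np.array([[0.8, 0.2, 0.0],
                      [0.1, 0.8, 0.1],
                      [0.0, 0.2, 0.8]])
SWITCH_PROB = 0.2  # chance that the current session ends at any step


def reset_state():
    # Fixed initial-state distribution, independent of history, matching
    # the state reset at session boundaries that the reviewer questions.
    return rng.normal(size=2)


def step(state, action, m):
    # Within a session, the reward (and dynamics) condition on the latent m;
    # at a session boundary, m transitions via CONTEXT_T and the state is
    # resampled from the initial distribution.
    reward = -float(np.linalg.norm(state - m))  # toy context-dependent reward
    if rng.random() < SWITCH_PROB:              # session terminates
        m = int(rng.choice(N_CONTEXTS, p=CONTEXT_T[m]))
        state = reset_state()
    else:
        state = state + 0.1 * action + 0.01 * rng.normal(size=2)
    return state, reward, m


state, m = reset_state(), 0
contexts = [m]
for _ in range(50):
    state, reward, m = step(state, rng.normal(size=2), m)
    contexts.append(m)
# The latent context only changes at session ends, so the context sequence
# is piecewise-constant; the agent never observes m or the session boundary.
```

The causal link between consecutive sessions runs only through m here, which is exactly the point the authors make in W1: the state resets, but the latent context carries over via its transition dynamics.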
Summary: This paper introduces a meta-RL method for environments with evolving latent variables. To this end, the authors introduce the notion of dynamic latent contextual MDPs, a generalization of POMDPs, which they use to model the environments. The basic idea is to have latent variables that are sampled and remain fixed throughout "sessions". The approach is evaluated on a range of tasks and is shown to outperform related state-of-the-art methods.

Strengths:
The paper is well written and pleasant to read. Departures from VariBAD are well motivated and shown to be vital through the ablation study. I like the use of color to distinguish the different methods used for comparison!

Weaknesses:
Somewhat incremental work. Appendix is missing?

Minor Mistakes:
- L 146: $\max Q(s^{+'}, a')$ instead of $\max Q(s^{+'}, a)$
- L 179: "remain(s)"
- L 244: There is no Figure 8, is the Appendix missing?

Technical Quality: 3
Clarity: 4

Questions for Authors:
- Figure 4: Why is the reward achieved by VariBAD on HalfCheetah-Vel worse than what is reported in the original paper?
- What do the $\Delta_{x}$ refer to on Line 113f?

Confidence: 2
Soundness: 3
Presentation: 4
Contribution: 3
Limitations: Limitations are not explicitly discussed but it is not necessary here in my opinion.
Flag For Ethics Review: ['No ethics review needed.']
Rating: 7
Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer’s positive feedback that the paper is “well-written and pleasant to read”. We are glad that the reviewer finds the DLCMDP problem setting “well-motivated” and that our extensive set of baseline comparisons and ablation studies strongly supports our technical claims. We further appreciate the reviewer for pointing out mistakes in the paper and will certainly incorporate these edits into the camera-ready version. The Appendix is included in the supplemental material zip folder.

**Q1: Why is the reward achieved by VariBAD on HalfCheetah-Vel worse than what is reported in the original paper?**
We paste the text from the general response below. We emphasize that the rewards achieved by the baselines in our setting are not directly comparable to those from the original papers. Under a DLCMDP, the latent dynamics cause the reward function to change between consecutive sessions. Depending on the rate at which the latent context (e.g., the reward function) changes, the maximum return an agent can achieve will differ. If the latent context changes frequently, it is more difficult for the agent to infer the latent and act Bayes-optimally, consequently achieving a lower return.

**Q2: What does the $\Delta_x$ refer to on Line 113f?**
In Line 113, we intend to write that $R: \mathcal{S} \times \mathcal{A} \times \mathcal{M} \rightarrow [0, 1]$ is the reward function and $T: \mathcal{S} \times \mathcal{A} \times \mathcal{M} \times \mathcal{S} \rightarrow [0, 1]$ is the transition kernel. We will fix this in the camera-ready version.

---
Rebuttal Comment 1.1:
Comment: I acknowledge the authors' response and feel justified to remain at my current overall rating.
---
Reply to Comment 1.1.1:
Comment: Thank you for acknowledging the content and effort of our rebuttal! We will incorporate the feedback from this discussion to improve the final version of our paper.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive feedback. We appreciate positive comments regarding the novel problem formulation of slowly evolving latent context variables in DLCMDPs, clarity of writing and presentation, and strong, comprehensive empirical results of DynaMITE-RL against state-of-the-art meta-RL baselines. The reviewers raise interesting points regarding the scalability of DynaMITE-RL to real-world problems which we are excited to explore further in future work. We will make sure to incorporate each reviewer’s feedback and clarification discussions in the camera-ready paper. Below, we reiterate a few points from the individual reviewer responses that are important to clarify. ### **On real-world scenarios that exhibit DLCMDP structure** Many real-world applications can be modeled with a DLCMDP, from assistive robotics to recommendation systems and self-driving cars. In our paper, we extend the itch-scratching task in the Assistive Gym benchmark which demonstrates a real-world scenario in which a DLCMDP is a suitable model. Another interesting application that we are actively studying is recommendation systems. Consider a movie recommendation agent. Depending on unobserved latent factors, such as the user’s mood, their location, and environmental factors, the user might have different preferences for movie genres. For example, if the user is with their partner on a date, they might prefer romance movies while if they’re with friends they might prefer action or thriller genres. Importantly, their preferences do not change abruptly. There are multiple timesteps where the latent context remains constant (we refer to these as sessions) after which context will change gradually according to some latent dynamics. ### **Scalability of DynaMITE-RL to complex, real-world applications** We are pleased that all reviewers are excited by future directions in extending DynaMITE-RL to complex, large-scale domains. 
This is certainly possible as our algorithm does not make any restrictive assumptions about the environment’s observation and action spaces. However, there are some potential challenges to deploy DynaMITE-RL in real-world environments which are interesting to consider but outside of the scope of this work. For one, in visual environments, we need to handle noisy pixel-based observations with partial information. In our simulated environments, we assume access to the full low-level environment state. One potential direction could be to use modern video architectures such as ViViT [1] and Video State Space Models [2] designed to process long image sequences to learn our belief model. Such an approach will require using larger Transformer-based architectures which have been shown to be more effective in video understanding tasks. We are eager to explore using more advanced model architectures to scale task inference to more challenging visual domains. Further, for many problems, we cannot assume access to a hand-crafted dense reward function. We may only have a binary signal of task success. Prior work [3] proposes an exploration bonus based on novelty to improve meta-exploration in the augmented state space which we could incorporate directly into DynaMITE-RL for sparse-reward environments. To summarize, we believe it is possible to scale DynaMITE-RL to more complex domains and are interested in pursuing this for future work. ***Our work is the first work that formulates this problem, proposes a simple yet effective meta-RL solution in DynaMITE-RL, and conducts proof-of-concept experiments.*** ### **Computational Complexity and Resources** We provide a description of the compute resources and runtime of DynaMITE-RL and each baseline method in Appendix A.6 with the supplemental material. All experiments can be run on a single Nvidia RTX A6000 GPU. Our implementation is written completely in JAX. 
The following are the average run-times for DynaMITE-RL and each baseline method for the online RL experiments with the HalfCheetah and ScratchItch environments. These numbers vary depending on the environment; JAX-based environments (e.g., Reacher and HalfCheetah) are highly parallelized and their runtimes are orders of magnitude lower than ScratchItch.

- RL2: 4 hours, 16 hours
- VariBAD: 3 hours, 8 hours
- BORel: 3 hours, 8 hours
- SecBAD: 3 hours, 10 hours
- ContraBAR: 2.5 hours, 7 hours
- DynaMITE-RL: 3 hours, 8 hours

### **Negative returns in HalfCheetah experiments**

We emphasize that the rewards achieved by the baselines in our setting are not directly comparable to those from the original paper. Under a DLCMDP, the latent dynamics cause the reward function to change between consecutive sessions. Depending on how quickly the latent context (e.g., the reward function) changes, the maximum return an agent can achieve will differ. If the latent context changes frequently, it is more difficult for the agent to infer the latent and act Bayes-optimally, consequently achieving a lower return.

### **Additional Experiment on More Complex MuJoCo Ant Walking task**

Following Reviewer hpwN's suggestion, we provide additional results on the MuJoCo Ant task. The Ant task is to navigate a four-legged ant following targets that are placed along a semi-circle path. We average evaluation results over 3 random seeds and 25 rollouts.

| Method | Evaluation Return |
|------------|---------------|
| VariBAD | -80.4 $\pm$ 4.3 |
| SecBAD | -63.2 $\pm$ 6.2 |
| DynaMITE-RL | -25.6 $\pm$ 4.2 |

### **References**

[1] Arnab, Anurag, et al. "ViViT: A video vision transformer." Proceedings of the IEEE/CVF International Conference on Computer Vision. 2021.
[2] Li, Kunchang, et al. "VideoMamba: State space model for efficient video understanding." arXiv preprint arXiv:2403.06977 (2024).
[3] Zintgraf, Luisa M., et al. "Exploration in approximate hyper-state space for meta reinforcement learning."
International Conference on Machine Learning. PMLR, 2021.
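As a toy illustration of the session structure described in this rebuttal (the latent context stays fixed within a session and drifts between sessions according to latent dynamics), here is a hypothetical sketch; the session length, discrete context space, and adjacent-drift transition are illustrative assumptions, not the paper's actual environments:

```python
import random

def rollout_dlcmdp_latent(num_steps, session_len, num_contexts, seed=0):
    """Simulate the latent-context trajectory of a toy DLCMDP: the context
    is constant within each session and moves to a neighboring context
    (simple latent dynamics) at session boundaries."""
    rng = random.Random(seed)
    context = rng.randrange(num_contexts)
    trajectory = []
    for t in range(num_steps):
        if t > 0 and t % session_len == 0:
            # Latent dynamics: drift to an adjacent context between sessions.
            context = (context + rng.choice([-1, 1])) % num_contexts
        trajectory.append(context)
    return trajectory

traj = rollout_dlcmdp_latent(num_steps=12, session_len=4, num_contexts=5)
# Within each block of session_len steps the context stays constant.
```

In this toy model, a Bayes-optimal agent would need to re-infer the context shortly after each session boundary, which is the inference problem the belief model in the rebuttal is meant to handle.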
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Data Augmentation with Diffusion for Open-Set Semi-Supervised Learning
Accept (poster)
Summary: This paper aims to address the challenge of utilizing unlabeled data in SSL, especially when there is a mismatch in class distributions between labeled and unlabeled data. The authors propose to leverage diffusion models to convert unlabeled data, especially out-of-distribution data, into in-distribution samples. Besides, a discriminator is jointly trained to filter out potentially irrelevant data. Experiments on image classification tasks show that the proposed method can greatly improve the performance of existing SSL methods, especially in scenarios with large class distribution mismatch. Strengths: 1. The paper proposes a novel approach to tackle the class distribution mismatch problem in SSL by leveraging diffusion models to convert unlabeled or OOD samples to in-distribution samples. This is a creative combination of ideas from generative modeling and SSL and is well-motivated. 2. The results demonstrate clear improvements over state-of-the-art SSL methods, especially in challenging scenarios with large distribution mismatch. The proposed method can also serve as a plug-in approach to enhance existing SSL methods. 3. The paper provides good visualizations that help in understanding the generated samples, the effectiveness of the method, and the discriminator's scores. Weaknesses: 1. The use of diffusion models introduces significant computational costs compared to standard SSL methods. While this is acknowledged, a more detailed analysis of the trade-offs between performance gains and computational cost would be valuable. 2. More comparisons with related generative augmentation in SSL are needed. The paper only compares with the two closely related works, DPT and DA-Fusion, on one dataset (ImageNet-30). Comparisons to advanced data augmentation techniques used in SSL on more datasets would provide more context. 3. The experiments are limited to only two to three relatively small-scale datasets (the SixAnimal dataset is sampled from CIFAR-10).
It's unclear how well the approach would scale to larger, more complex datasets. 4. The paper focuses primarily on successful cases. A discussion of scenarios where DWD might not work well or potential limitations would provide a more balanced view. 5. While the discriminator is a key component, the presentation of the discriminator and the related positive-unlabeled learning is a bit confusing. More explanation on how the discriminator is trained and why the PU learning is adopted would be helpful. Also, have you considered alternative designs for the discriminator? 6. The method introduces several hyper-parameters, such as the $\alpha$, $\mu$. A more thorough discussion or analysis of the sensitivity to these parameters would strengthen the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. The computation cost for different generative augmentation methods can be varied. How do you ensure the comparisons are fair? 2. How does the number of labeled data affect the effectiveness of the proposed method? Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Some limitations are discussed in Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive feedback.

**W1.** To investigate trade-offs between performance gains and computational cost, we conducted additional experiments on the CIFAR-10/100 tasks, varying U-Net depths $d$ (i.e., the number of residual blocks per downsample) in the diffusion model. The results are summarized in the table below:

| | $d = 1$ | $d = 2$ | $d = 4$ |
|------------------------------------|----------|----------|-----------|
| **FixMatch + DWD-UT Accuracy (%)** | 80.7 | 83.8 | 84.2 |
| **Model parameters (M)** | 50.0 | 70.1 | 110.2 |
| **Elapsed time (Hours)** | 15.6 | 21.5 | 32.3 |

As expected, increasing the depth of the U-Net led to further performance improvements. Thus, by adjusting the depth according to available computational resources, one can effectively balance the trade-off between model performance and computational cost. Additionally, the architecture of diffusion models (e.g., U-ViT [A], DiT [B]) can also influence computational cost and performance, which we plan to explore in future work.

**W2.** First of all, we would like to highlight that DPT and DA-Fusion demonstrate state-of-the-art performance among generative data augmentation techniques. To the best of our knowledge, other existing generative approaches based on VAEs [C] or GANs [D] are limited to low-resolution data, which is why we did not include them as baselines. Additionally, we remark that some of our baselines, such as FixMatch, MixMatch, and Fix-A-Step, can also be considered data augmentation techniques. Therefore, our experimental results demonstrate that DWD outperforms the well-known data augmentation techniques used in SSL. If you could suggest a manageable set of other advanced augmentation methods in SSL, we would be more than happy to conduct further experiments during the author-reviewer discussion period.
**W3, Q2.** To verify DWD’s scalability to larger datasets and investigate the impact of the size of labeled data on DWD, we conducted additional experiments on the ImageNet-100 dataset, varying the size of labeled data. To construct the ImageNet-100 dataset, we sub-sampled 100 classes from ImageNet, as described in [E]. We divided these classes equally into 50% ID and 50% OOD classes, following alphabetical order. From each ID class, we selected a small portion (10% or 30%) as labeled data, with the remaining data forming the unlabeled set. We report the results in Table B in the PDF. As shown in the table, DWD-SL outperforms the baselines, and DWD-UT significantly enhances baseline performance. From these results, we can conclude that DWD can be effectively applied to larger datasets. In addition, DWD proves effective with varying amounts of labeled data, performing well with both 10% and 30% sampling ratios. Notably, the performance gain is greater with the 10% sampling ratio compared to the 30% ratio, as the benefit of data augmentation is more pronounced when the dataset is smaller. However, it is important to note that DWD’s effectiveness is expected to be marginal when the amount of labeled data is extremely small, as the diffusion model struggles to accurately capture the labeled data distribution. We remark that this limitation is also common to other generative augmentation approaches. **W4.** We acknowledge the importance of discussing potential limitations to provide a more balanced view. To investigate scenarios where DWD might not perform well, we extended Figure 3 in our paper to include a lower degree of class distribution mismatch ($\zeta$=25%). We report the results in Table A in the PDF. Since DWD is designed to resolve class distribution mismatch, its effectiveness is expected to decrease with a lower mismatch ratio. 
However, we observed that DWD was still able to improve the baseline method in the $\zeta$ = 25% case, although the performance gain was relatively small. As previously discussed in response to [W3, Q2], another limitation of DWD is its limited applicability when the amount of labeled data is extremely small. Although DWD can benefit from the diversity of unlabeled data, it must first learn the distribution of the labeled data.

**W6.** We conducted additional experiments on the SixAnimal task ($\zeta$ = 75%) using DWD-SL to assess DWD's sensitivity to the hyper-parameters.

1) **$\alpha$** : We varied $\alpha$ across {1, 3, 5, 10}, and the results are presented in the table below:

| | $\alpha = 1$ | $\alpha = 3$ | $\alpha = 5$ | $\alpha = 10$ |
|----------------|----------------|--------------|---------------|---------------|
| Accuracy (%) | 84.0 | 85.9 | 83.8 | 83.5 |

2) **$\mu$** : We varied $\mu$ across {0.125, 0.25, 0.33, 0.5}, and the results are presented in the table below:

| | $\mu = 0.125$ | $\mu = 0.25$ | $\mu = 0.33$ | $\mu = 0.5$ |
|----------------|-----------------|--------------|--------------|-------------|
| Accuracy (%) | 84.5 | 85.9 | 85.3 | 84.7 |

We observed that a wide range of $\alpha$ and $\mu$ values successfully outperforms most of the baselines (refer to Table 1 in our paper). Regarding $\alpha$, an extremely small value may cause the diffusion model training to focus excessively on the labeled data, failing to reflect the diversity of the unlabeled data and potentially leading to overfitting. Conversely, an extremely large value may cause the training to skew towards the unlabeled data, failing to properly capture the labeled data distribution. In our experiments, an $\alpha$ value around 3 achieves an appropriate trade-off. Regarding $\mu$, the optimal value is near the true ratio $1-\zeta$, as expected.
--- Rebuttal 2: Title: Response to [Q1] Comment: As you pointed out, even when the types of diffusion models and training processes are standardized, the total computational cost can vary across different generative augmentation methods. This is because each method includes unique auxiliary processes (e.g., training a classifier with partially labeled data and extracting features in DPT, training a discriminator in DWD) in addition to the common computational costs associated with training the diffusion model. To ensure that the additional processes introduced in DWD do not compromise the fairness of comparisons, we conducted supplementary experiments to measure the extra cost of DWD compared to the extra cost of DPT on the CIFAR-10/100 tasks. The results are as follows:

| | Diffusion model training | Extra cost of DWD | Extra cost of DPT |
|--------------------------|-------------|------------------------|----|
| **Elapsed time (Hours)** | 21.8 | 1.7 | 1.1 |
| **Memory (GB)** | 8.0 | 2.9 | 2.0 |

[Machine specification] GPU: NVIDIA GeForce RTX 3090 Ti, CPU: Intel(R) Core(TM) i9-10980XE

The results showed that the increase in computational cost due to the additional processes in DWD was similar to that of DPT. Therefore, we can ensure that the fairness of the comparison is not compromised.

**References**
[A] Bao et al., "All are worth words: A ViT backbone for score-based diffusion models", NeurIPS Workshop on Score-Based Methods, 2022.
[B] Peebles et al., "Scalable diffusion models with transformers", ICCV, 2023.
[C] Li et al., "Max-margin deep generative models for (semi-)supervised learning", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017.
[D] Li et al., "Triple generative adversarial networks", IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021.
[E] Cao et al., "Open-world semi-supervised learning", ICLR, 2022.
[F] Chen et al., "Semi-supervised learning under class distribution mismatch", AAAI, 2020.
[G] Huang et al., “They are not completely useless: Towards recycling transferable unlabeled data for class-mismatched semi-supervised learning”, IEEE Transactions on Multimedia, 2022. [H] You et al., “Diffusion Models and Semi-Supervised Learners Benefit Mutually with Few Labels”, NeurIPS, 2023. --- Rebuttal Comment 2.1: Comment: I thank the authors for their detailed responses and efforts to conduct additional experiments. I am inclined to accept this paper. --- Reply to Comment 2.1.1: Title: Official Comment by Authors Comment: Thank you very much for the score improvement and your constructive feedback. We will further polish the paper in the final revision. Thank you!
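The ImageNet-100 open-set split protocol described in the responses above (classes divided alphabetically into 50% ID / 50% OOD, then a 10% or 30% portion of each ID class taken as labeled data) can be sketched as follows; the class and sample names are placeholders, not the actual dataset files:

```python
def build_open_set_split(class_to_samples, labeled_ratio):
    """Split classes alphabetically into 50% ID / 50% OOD, then take the
    first `labeled_ratio` fraction of each ID class as labeled data; all
    remaining ID samples plus every OOD sample form the unlabeled set."""
    classes = sorted(class_to_samples)
    half = len(classes) // 2
    id_classes, ood_classes = classes[:half], classes[half:]
    labeled, unlabeled = [], []
    for c in id_classes:
        samples = class_to_samples[c]
        k = max(1, int(len(samples) * labeled_ratio))  # at least one labeled sample
        labeled += [(x, c) for x in samples[:k]]
        unlabeled += samples[k:]
    for c in ood_classes:
        unlabeled += class_to_samples[c]  # OOD data is never labeled
    return id_classes, ood_classes, labeled, unlabeled

# Hypothetical toy dataset: 4 classes, 10 samples each.
data = {f"class_{i:02d}": [f"img_{i:02d}_{j}" for j in range(10)] for i in range(4)}
ids, oods, lab, unlab = build_open_set_split(data, labeled_ratio=0.1)
```

Selecting a deterministic prefix of each class keeps the sketch reproducible; in practice the labeled subset would typically be sampled at random.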
Summary: The paper proposes an approach that leverages a diffusion model to enrich labeled data using both labeled and unlabeled samples, aiming to address the failure of traditional SSL methods in real-world scenarios, i.e., when a large number of irrelevant instances in the unlabeled data do not belong to any class in the labeled data. Specifically, the authors combine diffusion model training with a discriminator and convert irrelevant instances into relevant ones. Empirically, the data augmentation approach proposed by this paper significantly enhances the performance of SSL methods. Strengths: 1. The motivation is clear, and the method is effective and easy to follow. 2. The combination of DWD-UT and semi-supervised methods is interesting, and the results are impressive. 3. The results in the appendix address many of my questions, and the analysis is comprehensive. Weaknesses: 1. There is a lack of analysis regarding the structure and training methods of the diffusion model networks, such as DiT and DDPM. Is DiT also effective for DWD-UT and DWD-SL? 2. The ImageNet-30 dataset is small. It remains to be verified whether the method can perform well in an open semi-supervised setting. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. As mentioned in Weakness 2, can the method be applied effectively to larger datasets? 2. Can this method be applied to other generative models, such as GANs or flow models? 3. How is the generation performance? Can the authors evaluate common generation metrics such as FID and IS? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your constructive feedback.

**W1.** Thank you for bringing up the different diffusion model network structures and training methods for further improving our work. Given that DPT [A], which is the closest prior work to our methodology, has demonstrated remarkable performance with U-ViT [B], we anticipate that DiT [C], which is also based on ViT, will perform well with DWD. However, due to current limitations in computational resources and time, we plan to explore this aspect in future research.

**W2, Q1.** To verify DWD's scalability to larger datasets, we conducted additional experiments on the ImageNet-100 dataset, which is larger than ImageNet-30. To construct the ImageNet-100 dataset, we sub-sampled 100 classes from ImageNet, as described in [D]. We divided these classes equally into 50% ID and 50% OOD classes, following alphabetical order. From each ID class, we selected a small portion (10% or 30%) as labeled data, with the remaining data forming the unlabeled set. We report the results in Table B in the PDF. As shown in the table, DWD-SL outperforms the baselines, and DWD-UT significantly enhances baseline performance. From these results, we can conclude that DWD can be applied effectively to larger datasets.

**Q2.** Our training scheme, which uses the discriminator to obtain importance weights for unlabeled data, can be applied to other generative models since such importance sampling does not impose any specific restrictions on the types of generative models. However, since the proposed data generation method starts from a partially noised unlabeled image, it cannot be applied to generative models whose generation process does not involve a noisy image as an intermediate step.

**Q3.** We acknowledge the importance of validating generation performance. We therefore measured the FID and IS scores of the generated samples.
Additionally, we included the intra-cluster pairwise LPIPS distance [E], which represents the degree of overfitting, as FID and IS scores might not capture overfitting issues in domains with limited data. To investigate whether DWD improves generation performance, we assessed these three scores under different training schemes: (a) with labeled data only, (b) with both labeled and unlabeled data using objective (6) in our paper, and (c) with DWD.

| | FID score (↓) | IS score (↑) | LPIPS (↑) |
|---------------------------|---------------|--------------|-----------|
| (a) Labeled data only | 60.1 | 21.4 | 0.167 |
| (b) Labeled and unlabeled | 69.2 | 20.4 | 0.170 |
| **(c) DWD** | **43.6** | **23.9** | **0.177** |

We observed that case (b) results in a better intra-cluster pairwise LPIPS distance but worse FID and IS scores compared to case (a). This implies that while utilizing unlabeled data helps mitigate overfitting problems, irrelevant unlabeled data makes it difficult for the diffusion model to learn the distribution of the labeled data. On the other hand, DWD achieves the best scores across all three metrics. This indicates that DWD effectively addresses both overfitting issues and the problems caused by irrelevant unlabeled data, leading to better generation performance. We appreciate the suggestions for more diverse validation of our method, and hope that the response above addresses your comments.

**References**
[A] You et al., "Diffusion models and semi-supervised learners benefit mutually with few labels", NeurIPS, 2023.
[B] Bao et al., "All are worth words: A ViT backbone for score-based diffusion models", NeurIPS Workshop on Score-Based Methods, 2022.
[C] Peebles et al., "Scalable diffusion models with transformers", ICCV, 2023.
[D] Cao et al., "Open-world semi-supervised learning", ICLR, 2022.
[E] Ojha et al., "Few-shot image generation via cross-domain correspondence", CVPR, 2021.

--- Rebuttal Comment 1.1: Comment: Thank you for your response.
This addresses my main concerns. I am now inclined to accept this paper and have increased my score to a weak accept. I look forward to seeing the final version. good luck! --- Reply to Comment 1.1.1: Title: Official Comment by Authors Comment: We're pleased to know that your concerns have been resolved. We will integrate all the points we talked about with you into the updated manuscript.
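The response to Q2 above notes that generation starts from a partially noised unlabeled image. A minimal sketch of that forward-noising step, assuming the standard DDPM closed form $x_t = \sqrt{\bar\alpha_t}\,x_0 + \sqrt{1-\bar\alpha_t}\,\epsilon$ with a linear beta schedule (the schedule values and the omitted class-conditional denoiser are assumptions, not DWD's exact configuration):

```python
import math
import random

def partial_noise(x0, t, T=1000, beta_min=1e-4, beta_max=0.02, seed=0):
    """Diffuse a flattened image x0 to an intermediate timestep t < T:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * eps,
    where alpha_bar_t is the cumulative product of (1 - beta_i)."""
    rng = random.Random(seed)
    alpha_bar = 1.0
    for i in range(t):
        beta = beta_min + (beta_max - beta_min) * i / (T - 1)
        alpha_bar *= 1.0 - beta
    signal, noise = math.sqrt(alpha_bar), math.sqrt(1.0 - alpha_bar)
    return [signal * v + noise * rng.gauss(0.0, 1.0) for v in x0], alpha_bar

x0 = [0.5] * 8                      # toy "image" as a flat vector
xt, abar = partial_noise(x0, t=300)
# A class-conditional reverse process started from x_t (rather than from
# pure noise) would pull the sample toward the labeled-data distribution
# while retaining low-frequency content of the unlabeled guide image.
```

The smaller the chosen t, the more of the guide image survives; the larger the t, the more strongly the class condition dominates the result.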
Summary: This paper proposes DWD, a new OSSL method that trains a diffusion model to transform OOD unlabeled data into ID images for SSL. DWD can mitigate the class mismatch problem in the OSSL task, which affects SSL performance. Strengths: While previous OSSL methods attempted to distinguish between ID and OOD data through OOD detection training methods, DWD offers a novel perspective by addressing the OSSL problem at the data generation level, which leads to state-of-the-art OSSL performance. Weaknesses: 1. The experimental results are expected to be more extensive, for instance, using 1) different ratios of ID and OOD classes and 2) different ratios of labeled and unlabeled data on the same dataset to verify the method's effectiveness. Such settings are common in recent OSSL methods like [1,2]. 2. DWD does not truly enable the model to distinguish between ID and OOD data; instead, it eliminates the OOD samples from the data. Thus, DWD cannot handle potential unseen OOD samples in real-world scenarios. 3. DWD relies on the ratio of OOD samples in the unlabeled dataset as prior knowledge to train the discriminator. Such prior knowledge is not always available in real-world scenarios. [1] Li Z, Qi L, Shi Y, et al. IOMatch: Simplifying open-set semi-supervised learning with joint inliers and outliers utilization, ICCV 2023. [2] Saito K, Kim D, Saenko K. OpenMatch: Open-set consistency regularization for semi-supervised learning with outliers, NeurIPS 2021. Technical Quality: 3 Clarity: 3 Questions for Authors: For the two usage scenarios of generated data in Sec 4.2, the authors claim that the performance is better when the generated data is used as unlabeled data. This slightly contradicts my intuition. If the generated data is reliable, why does removing labels for training result in better performance? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Please refer to the weaknesses.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your thoughtful feedback.

**W1.** We agree that a more extensive set of experiments can help validate DWD's effectiveness. We thus conducted additional experiments that are common in recent OSSL methods: 1) different ratios of ID and OOD classes, and 2) various sizes of labeled data.

1) Different ratios of ID and OOD classes: As shown in Figure 3 of our paper, we conducted experiments on the SixAnimal task with different ratios of ID and OOD classes ($\zeta$ = 50%, 75%, 100%). These experiments demonstrated DWD's ability to enhance baseline SSL performance under various class distribution mismatch scenarios. To further expand the results, we conducted additional experiments to include $\zeta$ = 25% (below 50%). We report the results in Table A in the PDF. Since DWD is designed to resolve the class distribution mismatch, its effectiveness is expected to decrease with a lower mismatch ratio. However, we observed that DWD was still able to improve the baseline method in the $\zeta$ = 25% case, although the performance gain was relatively small.

2) Varying the size of labeled data: To provide a more extensive set of experiments, we conducted additional experiments on the ImageNet-100 dataset, varying the size of labeled data. To build the ImageNet-100 dataset, we sub-sampled 100 classes from ImageNet, as described in [A]. We divided these classes equally into 50% ID and 50% OOD classes following alphabetical order. From each ID class, we selected a small portion (10% or 30%) as labeled data, with the remaining data forming the unlabeled set. We report the results in Table B in the PDF. As shown in the table, DWD still proves to be effective with varying amounts of labeled data, performing well with both 10% and 30% sampling ratios. Notably, the performance gain is greater with the 10% sampling ratio compared to the 30% ratio, as the benefit of data augmentation is more pronounced when the dataset is smaller.
However, it is important to note that DWD’s effectiveness is expected to be marginal when the amount of labeled data is extremely small, as the diffusion model struggles to accurately capture the labeled data distribution. We remark that this limitation is also common to other generative augmentation approaches. **W2.** Although we didn’t explicitly handle OOD samples in the paper, it is important to note that our approach involves training a separate discriminator to distinguish between ID and OOD data during the diffusion model training. By integrating this discriminator with the trained classifier, we can straightforwardly reject OOD samples and only classify ID samples during inference time. **W3.** We remark that we did not assume any prior knowledge of $\mu$ in our experiments. Instead, we treated $\mu$ as a hyper-parameter and systematically tuned it using a validation set, following the standard protocol for hyper-parameter tuning in the SSL literature. **Q1.** We understand the possible ambiguity introduced by the last sentence in Section 4.2. The sentence can be interpreted either way: (1) DWD-UT demonstrates better results than DWD-SL and/or (2) applying DWD-UT to baseline SSL methods improves the baseline performance. We intended to claim the latter rather than the former interpretation. We will carefully rephrase the corresponding sentence to mitigate this ambiguity. Still, as discussed in Section 5.2, we found that utilizing the DWD-UT with baseline SSL methods outperformed DWD-SL in some cases. We suspect this is because there are variations in the qualities of the generated images, since they are generated without considering the relevance between the class condition and the guide image. If the class condition is highly relevant to the guide image (e.g. very similar class), a high-quality image would be generated; otherwise, the quality of the generated image can be relatively low. 
When DWD-UT is combined with sophisticated data selection mechanisms commonly present in SSL methods (e.g., thresholding used in pseudo-labeling, weighting functions in filtering-based methods), it can discern high versus low quality samples, thereby enhancing performance compared to DWD-SL. **Reference** [A] K Cao et al., “OPEN-WORLD SEMI-SUPERVISED LEARNING.”, ICLR, 2022. --- Rebuttal Comment 1.1: Comment: I appreciate the author's rebuttal and the efforts made to address my concern. I have no other problems. --- Rebuttal 2: Title: Official Comment by Authors Comment: We're glad to hear that your concerns have been addressed, and we sincerely appreciate your positive review! We will incorporate all the findings discussed with you into the revised manuscript.
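The response to W3 above treats the class prior $\mu$ as a hyper-parameter of the positive-unlabeled discriminator. For readers unfamiliar with PU learning, here is a sketch of the standard non-negative PU risk estimator (Kiryo et al., 2017), in which such a prior enters; this illustrates the general recipe with a hinge-style surrogate loss, not necessarily the paper's exact discriminator objective:

```python
def nnpu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk: prior * R_p^+ + max(0, R_u^- - prior * R_p^-),
    where R_p^+/R_p^- are the positive/negative-label losses on positive
    (here: labeled ID) data and R_u^- is the negative-label loss on
    unlabeled data. `prior` plays the role of the ID ratio (e.g., mu)."""
    loss_pos = lambda s: max(0.0, 1.0 - s)  # hinge-style surrogate losses
    loss_neg = lambda s: max(0.0, 1.0 + s)
    r_p_plus = sum(loss_pos(s) for s in scores_pos) / len(scores_pos)
    r_p_minus = sum(loss_neg(s) for s in scores_pos) / len(scores_pos)
    r_u_minus = sum(loss_neg(s) for s in scores_unl) / len(scores_unl)
    # Clamping the implied negative-class risk at zero prevents the
    # estimator from going negative and overfitting.
    return prior * r_p_plus + max(0.0, r_u_minus - prior * r_p_minus)
```

Because the unlabeled set mixes positives and negatives, the prior is what lets the estimator subtract the positives' contribution from the unlabeled risk; this is why an (estimated or tuned) value such as $\mu$ is needed at all.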
null
null
Rebuttal 1: Rebuttal: ## **General response**

We appreciate all the reviewers for taking the time to provide constructive feedback on our paper. We are very encouraged that the reviewers have recognized the following strengths in our work:

1) Proposition of a novel perspective by addressing the class distribution mismatch problem at the data generation level. (Reviewer KfDe, 5EHv)
2) Presentation of a creative methodology with clear motivation. (Reviewer 5EHv, 9H8V)
3) Demonstration of strong empirical performance. (Reviewer KfDe, 9H8V, 5EHv)
4) Provision of effective visualizations that aid in understanding. (Reviewer 5EHv)
5) Inclusion of comprehensive analysis with additional experiments in the appendix. (Reviewer 9H8V)

Below, we summarize the main concerns raised:

1) Necessity of assessing DWD on larger datasets. (Reviewer 9H8V, 5EHv)
2) Necessity of evaluating DWD with different ratios of ID and OOD classes and varying sizes of labeled data. (Reviewer KfDe, 5EHv)
3) Lack of analysis on hyper-parameter sensitivity, generation performance, and trade-offs between performance gains and computational cost. (Reviewer 9H8V, 5EHv)

We have addressed the comments with individual responses. If you have any questions or require further clarification, please let us know, and we will be glad to address them during the discussion period. Additionally, please refer to our one-page PDF, which includes the following results:

* Table A: SixAnimal task with different ratios of ID and OOD classes. (Reviewer KfDe, 5EHv)
* Table B: ImageNet-100 task with varying sizes of labeled data. (Reviewer KfDe, 9H8V, 5EHv)

Pdf: /pdf/f6090f7dc50f30cfb355bf8a2bf2bb4eb8b42d3c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Suitable is the Best: Task-Oriented Knowledge Fusion in Vulnerability Detection
Accept (poster)
Summary: This paper proposes KF-GVD, a novel Knowledge Fusion-based Graph Neural Network (GNN) model designed for detecting vulnerabilities in C/C++ source code. KF-GVD employs Code Property Graphs (CPGs) to represent the structure and semantics of the source code. Based on the CPGs, KF-GVD flexibly extracts features and tailors them to specific tasks. Through this approach, KF-GVD is able to fine-tune its detection mechanisms. The experimental results demonstrate that KF-GVD is effective in Vulnerability Detection (VD), outperforming state-of-the-art static analysis- and ML-based VD methods. Notably, KF-GVD successfully identified nine 0-day vulnerabilities, indicating its potential impact on software security. Strengths:

- The paper is well written, and I enjoyed reading it.
- The authors conducted a rigorous and comprehensive comparison of KF-GVD against the state of the art, spanning from traditional static analysis tools to recently emerging machine learning-based approaches.
- KF-GVD significantly outperforms SOTAs, achieves notable improvement in vulnerability detection, and exposes nine 0-day vulnerabilities.

Weaknesses:

- The paper does not sufficiently justify the use of the CPG to represent vulnerability semantics, neglecting to discuss potential alternatives such as the Control Flow Graph (CFG).
- The paper misses details on feature extraction.
- The description of statement-level VD is ambiguous.

Technical Quality: 3 Clarity: 4 Questions for Authors:

## Q.1 What is the motivation behind using CPG to represent vulnerability semantics?

While KF-GVD demonstrates impressive performance in vulnerability detection (VD), it remains unclear why the authors employ the CPG instead of alternatives such as the CFG. From my understanding, the Code Property Graph (CPG) can be regarded as a fusion of Abstract Syntax Trees (AST), Control Flow Graphs (CFG), Call Graphs (CG), and Program Dependence Graphs (PDG).
Although it is undoubtedly powerful, building the CPG for large-scale projects like the Linux kernel could be extremely expensive, particularly in terms of computing program dependence.

- **Q.1.1) for authors to respond:** During the feature extraction process, have there been any instances where constructing the CPG failed? If so, how do the authors manage these construction failures?
- **Q.1.2) for authors to respond:** Could it be possible to use the CFG or other alternatives in place of the CPG? Can the authors justify the superiority of using the CPG for this task?

Additionally, I'm curious about the decision not to use LLVM Intermediate Representation (IR) to analyze the C/C++ projects in KF-GVD. Typically, the workflow for analyzing C/C++ projects involves the following steps:

- Compile the source code into a kind of IR (not limited to LLVM IR).
- Build corresponding graph-based structures (e.g., CFG or PDG) according to the requirements.
- Pass them to downstream applications (either a static analyzer or a neural network).

Since LLVM IR offers a rich, standardized structure that provides detailed information on control and data flow, I believe using it could potentially enhance the capability of KF-GVD. Moreover, employing LLVM IR instead of solely depending on the source code (in the representation of the AST) can minimize noise. For instance, macros are expanded and machine-dependent code is removed.

- **Q.1.3) for authors to respond:** Can the authors justify the reason for excluding LLVM IR?

## Q.2 How are the code statements determined to be related to vulnerabilities?

I find the explanation of statement-level vulnerability detection (VD) in the paper to be somewhat ambiguous. For clarity, could the authors explain how to determine which lines of code are marked as vulnerable?
Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: - The size of the dataset used in the study is relatively small, which could impact the generalizability of the KF-GVD model. However, I also understand the reality that it is hard to obtain high-quality and open-source vulnerability datasets. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the detailed review and the professional opinions you have provided, which are highly valuable to us. Below are our responses to the questions you raised, and we hope they address your concerns.

# Answer for Q.1

+ **A.1.1)** In most cases, Joern can successfully build the CPG. During the data processing phase, we found that some functions could not be converted into CPGs. According to our observations, these instances are usually related to missing compilation dependencies, complex macro definitions, nested preprocessor directives, and so on. However, the proportion of such cases among the target files involved at this stage is very small, so we simply exclude them.
+ **A.1.2)** We use CPG instead of CFG alone because CPG can model most types of vulnerabilities, which has been proven by the proposer of Joern. For KF-GVD, in order to let the pre-trained general model adapt more flexibly to a wider range of downstream target tasks and achieve more efficient vulnerability detection, it is critical to consider more comprehensive vulnerability features in the modeling stage. The following tables show the F1-score (%) comparison of vulnerability detection using CFG, PDG, and CPG for vulnerability feature extraction on the target tasks corresponding to S119 and S416.

| | $S_{119}$ | Fs | Drivers | Net | Include | CWE-125 | CWE-787 |
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| CFG | 54.4 | 46.8 | 50.8 | 45.2 | 49.3 | 54.6 | 50.3 |
| PDG | 61.8 | 60.7 | 57.9 | 52.3 | 56.8 | 59.6 | 56.5 |
| CPG | 86.7 | 95.7 | 92.3 | 82.5 | 88.0 | 67.9 | 82.1 |

| | $S_{416}$ | Net | Fs | Drivers | Kernel | Block | Include |
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| CFG | 42.9 | 38.1 | 42.7 | 36.3 | 44.8 | 30.2 | 32.7 |
| PDG | 69.6 | 66.2 | 67.9 | 57.6 | 62.4 | 59.9 | 64.3 |
| CPG | 88.0 | 87.3 | 82.9 | 78.7 | 88.6 | 74.1 | 87.5 |

It can be observed that the CPG-based code representation achieves the best vulnerability detection results on the different target tasks compared with any single code property.
While it is true that the CPG-based method is slower than CFG during the model training stage, we consider this trade-off justified and acceptable. Specifically, in our experiments, the CPG-based method takes an average of 11.5 seconds to analyze a sample, whereas using CFG alone takes about 9.6 seconds.
+ **A.1.3)** The reasons KF-GVD does not choose LLVM IR are as follows. _Loss of High-Level Semantic Information:_ LLVM IR is closer to the machine-code level and focuses on instruction-level operations. Such a standardized representation is fine-grained and detailed, but it can, to a certain extent, lose the high-level logic structure and semantic information of the source code, thereby degrading vulnerability analysis performance. The higher-level representation of the CPG generated by Joern is typically more effective in the vulnerability identification process. _Compilation Overhead:_ LLVM IR requires the source code to be compiled first, a phase that is time-consuming and makes large-scale vulnerability detection difficult. VulDeeLocator, a fine-grained vulnerability detector based on LLVM IR, is a typical example: it takes about 30.7 seconds to analyze a single sample, roughly three times that of KF-GVD. _Tool Support for Security Applications:_ While LLVM IR is a powerful tool for compiler optimization and low-level code analysis, it lacks tool support for security applications. Joern, on the other hand, is designed for security analysis and vulnerability mining, with better ease of use and convenient query and analysis interfaces, which enables the subsequent fine-grained localization of code associated with graph nodes and efficient vulnerability data management.

# Answer for Q.2

KF-GVD achieves fine-grained localization of potential vulnerability statements based on the graph self-attention mechanism adopted in the model.
The graph self-attention mechanism calculates an attention weight for each pair of neighboring nodes, reflecting the importance of a specific neighbor when aggregating information for a node. Through these attention weights, we can understand which neighboring nodes contribute more to the features of the current node, thereby providing a certain level of interpretability. Specifically, for an instance (a CPG), KF-GVD uses the graph self-attention mechanism to identify the graph nodes with higher attention scores in the current graph. Since the attention score reflects the degree to which a node influences the model's current decision (vulnerable/benign code), when a sample is predicted as vulnerable, KF-GVD considers the source code statements associated with these high-score nodes as potential vulnerability statements. In practice, the graph association file generated by Joern realizes the mapping from CPG nodes to source-file code statements, thereby achieving fine-grained localization of vulnerability statements. --- Rebuttal Comment 1.1: Comment: Thanks for your rebuttal. My comments are below. > A.1.1) In most cases, Joern can successfully build CPG. The response addresses my concerns on the CPG construction. > A.1.2) We use CPG instead of CFG alone because CPG can model most types of vulnerabilities, which has been proven by the proposer of Joern. While I appreciate the authors' clarification and acknowledge that using CPG achieves far better performance than using CFG alone in vulnerability detection, I am still curious about the reasons why CPG is proven to be better than CFG in vulnerability detection tasks. From my understanding, CPG is essentially built upon CFGs, since both data and control dependencies are derived from the CFG. Consequently, my opinion is that using CFG with task-specific auxiliary information can be more efficient and offer better flexibility, as dependency information can be selectively integrated as needed.
The response is overall adequate and clear to me; I would just like to offer another perspective on using CFG. > A.2) KF-GVD achieves fine-grained localization of potential vulnerability statements based on the graph self-attention mechanism adopted in the model. The graph self-attention mechanism calculates an attention weight for each pair of neighbor nodes, reflecting the importance of a specific neighbor when aggregating information for a node. Through these attention weights, we can understand which neighbor nodes contribute more to the features of the current node, thereby providing a certain level of interpretability. The response makes sense and addresses my concerns. --- Reply to Comment 1.1.1: Comment: I believe that CPG has been proven to be better than CFG in vulnerability detection tasks because CFG can only represent control dependencies, while CPG can capture control dependencies, data dependencies, and the syntactic structure of programs at the same time. Data dependencies are crucial to understanding the interactions between different variables and memory areas in a program, especially for vulnerabilities such as buffer overflows and uninitialized variables. Moreover, syntactic details such as variable declarations and function calls that are missing in the CFG are also relevant to some specific types of vulnerabilities, such as SQL injections and cross-site scripting attacks. In addition, the graph structure of the CPG allows more complex pattern matching algorithms, such as subgraph matching and path matching, which makes it possible to mine potential vulnerability patterns with techniques beyond deep learning, rather than relying solely on control-flow paths. Your perspective is more comprehensive and insightful.
In fact, in the early stages of method design, we attempted to use only CFG or PDG for vulnerability detection to make the feature embedding process more efficient, but we found that methods based on a single property performed worse than CPG, so we chose CPG without further investigation. It is also feasible and meaningful to use CFG with task-specific auxiliary information to selectively integrate dependency information as needed. We greatly appreciate your comments, which will help us further explore and improve our method.
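For illustration, the attention-based statement localization discussed in this thread (rank CPG nodes by attention score, then map the top-scoring nodes back to source lines via a node-to-line association) can be sketched in a few lines. This is a hypothetical toy sketch, not KF-GVD's actual implementation; all node IDs, logits, and line numbers are made up.

```python
# Toy sketch of attention-based vulnerability-statement localization.
# All names and values are hypothetical illustrations.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def localize(node_scores, node_to_line, top_k=2):
    """Rank CPG nodes by (softmaxed) attention score and map the
    top-k nodes back to source lines via a node->line association."""
    weights = softmax(list(node_scores.values()))
    ranked = sorted(zip(node_scores.keys(), weights),
                    key=lambda kv: kv[1], reverse=True)
    return [node_to_line[n] for n, _ in ranked[:top_k]]

# Hypothetical raw attention logits per CPG node, plus the
# node -> source-line mapping (in practice parsed from a graph
# association file such as the one Joern produces).
scores = {"n1": 0.2, "n2": 2.1, "n3": 1.4, "n4": -0.5}
lines = {"n1": 10, "n2": 42, "n3": 43, "n4": 7}
print(localize(scores, lines))  # source lines of the two highest-attention nodes
```

Since softmax is monotonic, the ranking is the same as ranking raw logits; the softmax step only matters if the weights themselves are reported.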
Summary: The paper proposes KF-GVD, a vulnerability detection method that integrates task-specific vulnerability knowledge into a graph neural network model. KF-GVD aims to guide the model to learn vulnerability patterns tailored to the target task, rather than relying solely on a generalized approach. Experiments show KF-GVD outperforms baseline methods, especially on tasks involving specific code modules or vulnerability subtypes. KF-GVD also provides improved interpretability by identifying the most influential source code statements related to predicted vulnerabilities. While KF-GVD demonstrates the benefits of incorporating task-specific knowledge, the paper acknowledges limitations such as its dependence on the availability of relevant knowledge and weaker performance on cross-domain tasks compared to the gains on target-specific tasks. Strengths: Quality: The technical implementation of the KF-GVD framework appears to be reasonably well-designed, with a solid GNN architecture and a knowledge fusion process. The extensive experiments across multiple target tasks, including function-level and statement-level evaluation, demonstrate a thorough empirical assessment of the approach. The authors' attention to interpretability through the self-attention mechanism is a positive aspect. Clarity: The paper is technically dense but reasonably well-written. The authors provide clear descriptions of the key components of the KF-GVD framework, including the feature representation, knowledge extraction, and vulnerability detection and interpretation. The experimental setup and results are presented in a structured and comprehensible manner. Significance: Improving the effectiveness of vulnerability detection, especially for specialized tasks and target code, remains an important and relevant problem in software security. 
The KF-GVD paper's exploration of leveraging task-specific knowledge for vulnerability detection could contribute to ongoing efforts in this area, even if the core ideas are not entirely novel. Weaknesses: Lack of Novelty: The core idea of integrating task-specific vulnerability knowledge into a GNN-based detection model is not novel, as the "Interpreters for GNN-Based Vulnerability Detection" paper has already explored a similar approach. The authors' failure to identify and properly differentiate their work from this closely related prior research is a major weakness. Insufficient Literature Review: The paper lacks a thorough review of the existing literature on vulnerability detection, particularly in the context of GNN-based techniques and the use of task-specific knowledge. The authors should have conducted a more comprehensive survey to situate their work within the broader research landscape and identify the specific contributions they are making. Unclear Uniqueness of Contributions: Without a clear and compelling unique contribution, the overall significance and impact of the KF-GVD paper are diminished. The authors need to re-evaluate the paper's focus and either identify a novel aspect of their approach or reposition the work to highlight its incremental advances over prior research. Potential Overlap with Existing Work: The significant similarities between the KF-GVD paper and the "Interpreters for GNN-Based Vulnerability Detection" paper raise concerns about potential overlap or duplication of efforts. The authors should carefully examine the extent of this overlap and ensure that their work is not merely reiterating or incrementally building upon existing research. Lack of Rigorous Comparative Analysis: The paper does not provide a rigorous comparative analysis between KF-GVD and other closely related GNN-based vulnerability detection methods, particularly the "Interpreters for GNN-Based Vulnerability Detection" paper. 
A more comprehensive and systematic comparison would be necessary to demonstrate the unique merits and contributions of the KF-GVD approach. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. How does the KF-GVD approach differ from the "Interpreters for GNN-Based Vulnerability Detection" paper in terms of its core technical contributions and unique aspects? Please provide a more detailed analysis to clearly differentiate the novelty of your work. 2. Could you please conduct a more rigorous comparative evaluation between KF-GVD and other state-of-the-art GNN-based vulnerability detection methods, including the "Interpreters for GNN-Based Vulnerability Detection" paper? This would help better demonstrate the clear advantages of the KF-GVD approach across multiple performance metrics and target tasks. 3. What was the rationale behind choosing the Word2Vec embedding technique over code-specific embedding methods, such as code2vec or code2seq? Could you provide a comparative analysis of the performance using different code embedding techniques to justify the choice made in the KF-GVD framework? 4. To enhance the clarity and organization of the paper, could you restructure and simplify the technical descriptions to improve the overall accessibility? Could you also consider adding visual aids, such as diagrams or illustrations, to better explain the key components and workflow of the KF-GVD framework? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: 1. The paper does not provide a thorough discussion of the computational efficiency and scalability of the KF-GVD approach. As the size and complexity of software projects increase, the computational requirements and the ability of the method to handle large-scale code bases should be addressed. 2. The paper does not explore the potential limitations of the code embedding technique used (Word2Vec) and how it may impact the overall performance of the vulnerability detection. 
A more comprehensive analysis comparing different code embedding methods would strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate your insightful comments. We have carefully considered the issues you raised and provide our responses below, hoping to address your concerns.

# Answer for Q.1

Based on the content of the paper "Interpreters for GNN-Based Vulnerability Detection" (hereafter referred to as "that paper" for brevity), we believe that the work of KF-GVD does not overlap with it. We analyzed and compared the two papers from the following aspects: **1) Research Objectives:** That paper aims to evaluate the capability of different GNN interpreters to explain VD models. Our research is application-oriented, aiming to adopt the most suitable vulnerability pattern learning and identification strategies for different VD tasks, so that the model can achieve more efficient and scalable VD on a variety of target objects. **2) Contributions:** That paper is the first empirical study to evaluate the ability of GNN interpreters on VD models; it proposes principled guidelines to assess the quality of interpretation approaches for GNN-based vulnerability detectors. Our research proposes a VD framework, KF-GVD, which is the first task-oriented GNN-based VD model. KF-GVD adopts a simple and efficient task-oriented knowledge fusion method that enables the general-purpose model to be flexibly adapted to VD on various target tasks in practical applications while maintaining generalization performance on the source task. **3) Evaluation:** That paper evaluates the quality of the interpretations of different GNN interpreters on VD models based on its proposed stability, robustness, and effectiveness metrics; KF-GVD evaluates VD performance against SOTAs based on precision, recall, F1-score, and MAP. **4) Interpretability:** That paper introduces GNN interpreters to achieve the interpretation of VD models.
For KF-GVD, the interpretability of our method comes from the graph self-attention mechanism adopted by the model rather than the introduction of a special interpreter, which is more efficient and intuitive on large-scale data. Furthermore, KF-GVD achieves fine-grained vulnerability localization by leveraging the inherent interpretability of the model to better serve practical applications. In other words, interpretability is a property of KF-GVD rather than its purpose.

# Answer for Q.2

According to the three evaluation criteria proposed in that paper, we applied the three GNN interpreters discussed in that paper to explain KF-GVD and compared these methods with the self-attention mechanism (Self-Att) adopted by KF-GVD. Due to word count limitations, the following provides partial evaluation results and a concise analysis on the source task S119 and its corresponding target task datasets.

**1) Effectiveness:**

| | S119 | Fs | Drivers | CWE-125 | CWE-787 |
|:-----|:----:|:----:|:----:|:----:|:----:|
| GradCAM | 48.6 | 41.5 | 50.4 | 39.7 | 39.5 |
| GNNExplainer | 33.9 | 32.1 | 33.7 | 30.6 | 32.4 |
| DeepLIFT | 44.2 | 35.4 | 47.6 | 36.5 | 38.9 |
| Self-Att | 54.4 | 42.7 | 51.8 | 38.9 | 44.8 |

The effectiveness criterion IoU (%) reflects how accurately the interpreter's results match the ground truth. It can be seen that for KF-GVD, vulnerability localization based on the self-attention mechanism is the most effective compared with the other interpretation methods.

**2) Stability:**

| | S119 | Fs | Drivers | CWE-125 | CWE-787 |
|:-----|:----:|:----:|:----:|:----:|:----:|
| GradCAM | 36.9 | 37.3 | 38.5 | 41.8 | 35.5 |
| GNNExplainer | 68.7 | 69.2 | 66.1 | 64.0 | 68.9 |
| DeepLIFT | 5.6 | 3.3 | 1.2 | 4.7 | 2.2 |
| Self-Att | 57.2 | 54.6 | 33.8 | 34.5 | 37.9 |

It can be observed that adjustments to the model parameters directly impact the stability of the self-attention mechanism, since it is part of the model itself. However, compared with the decomposition-based interpretability method DeepLIFT, the results are still relatively better.
**3) Robustness:**

| | S119 | Fs | Drivers | CWE-125 | CWE-787 |
|:-----|:----:|:----:|:----:|:----:|:----:|
| GradCAM | 43.7 | 41.0 | 43.3 | 38.6 | 42.5 |
| GNNExplainer | 13.6 | 14.3 | 12.1 | 11.8 | 12.4 |
| DeepLIFT | 44.9 | 37.6 | 42.3 | 40.7 | 39.2 |
| Self-Att | 62.4 | 58.9 | 60.9 | 49.7 | 43.1 |

The robustness criterion reflects the quality of an interpreter's explanations on examples related to code cloning. As part of the model structure, and benefiting from KF-GVD's adaptability to different tasks, the self-attention mechanism is more robust than GradCAM and DeepLIFT.

# Answer for Q.3

The following table shows the F1-score (%) comparison of vulnerability detection using different code embedding techniques on S119 and its corresponding target task datasets.

| | S119 | Fs | Drivers | Net | Include | CWE-125 | CWE-787 |
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| Code2Vec | 84.2 | 92.8 | 94.6 | 83.8 | 88.9 | 68.3 | 79.6 |
| Code2Seq | 67.5 | 70.9 | 68.4 | 63.3 | 66.1 | 52.8 | 61.4 |
| Word2Vec | 86.7 | 95.7 | 92.3 | 82.5 | 88.0 | 67.9 | 82.1 |

It can be observed that the Code2Vec and Word2Vec embedding methods yield very similar KF-GVD performance, and both are superior to the Code2Seq embedding method. KF-GVD chooses the Word2Vec embedding method for the following reasons: Code2Vec maps code snippets into vector space through path embeddings based on the AST during the encoding process. However, since the CPG already contains the AST property, and the source code corresponding to each node is a code snippet obtained from CPG parsing, the AST parsing step of Code2Vec is redundant and also incurs additional time at the embedding stage. In contrast, Word2Vec is simpler and more efficient. The Code2Seq embedding method incorporates an attention mechanism and has advantages in processing complex code structures and long-distance dependencies. However, in the feature embedding stage of KF-GVD, the complex structure and dependencies of source code files are already represented as a whole through the CPG.
The source code snippets corresponding to CPG nodes are usually very short, which is why the Code2Seq embedding method performs relatively poorly with KF-GVD. --- Rebuttal 2: Comment: We apologize for any inconvenience caused. Could you please confirm whether our response has fully addressed your concerns? We would greatly appreciate your prompt feedback.
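For reference, the IoU effectiveness criterion used in the interpreter comparison above has a simple set-overlap form: the intersection of the flagged statements and the ground-truth vulnerable statements, divided by their union. A minimal sketch (our own illustration with hypothetical line numbers, not the paper's evaluation code):

```python
def iou(predicted, ground_truth):
    """Intersection-over-union between two sets of flagged statement lines."""
    p, g = set(predicted), set(ground_truth)
    if not p and not g:
        return 1.0  # convention: two empty sets agree perfectly
    return len(p & g) / len(p | g)

# Hypothetical example: the interpreter flags lines 10, 42, 43;
# the ground truth marks lines 42, 43, 44.
print(iou([10, 42, 43], [42, 43, 44]))  # 2 shared / 4 total = 0.5
```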
Summary: This paper proposes KF-GVD, a Graph Neural Network (GNN) model that integrates specific vulnerability knowledge into its feature learning process to enhance vulnerability detection accuracy in source code. Unlike traditional deep learning methods that optimize for general performance, KF-GVD uses knowledge fusion to tailor its detection capabilities to specific tasks, effectively addressing diverse functional modules and vulnerability subtypes. Extensive experiments demonstrate KF-GVD's superiority over state-of-the-art methods, with notable improvements in precision and recall. Additionally, KF-GVD discovered nine previously undisclosed vulnerabilities in open-source C/C++ projects, showcasing its practical effectiveness in real-world applications. Strengths: 1. The paper presents a novel approach to vulnerability detection by integrating task-oriented knowledge into a GNN model. 2. The results are robust, showing significant improvements in precision and recall across various datasets and tasks. The paper includes detailed explanations of the methodology, thorough evaluations, and a case study that validates the practical effectiveness of KF-GVD in real-world applications. I love the case studies. 3. The paper is well presented. 4. This paper shows the potential to substantially improve vulnerability detection in software development. By tailoring the detection process to specific tasks and contexts, KF-GVD addresses a critical gap in existing methods, which often fail to capture the nuanced patterns of different vulnerability types and functional modules. 5. The comparison of results is comprehensive, including static analysis tools and LLM-assisted tools. Weaknesses: 1. KF-GVD relies heavily on the quality and relevance of the task-specific vulnerability knowledge integrated into the model.
The process of defining and extracting this knowledge is not extensively discussed, and the paper does not address potential challenges or limitations in obtaining accurate and comprehensive knowledge. More details on how this knowledge is curated, validated, and updated over time would be beneficial. 2. The limitation section is in the appendix; I think it should be moved to the main content. 3. While the paper mentions that KF-GVD achieves high transparency and interpretability through self-attention mechanisms, it does not provide sufficient details on how these aspects are practically implemented and evaluated. 4. Could you please provide PDF-editable figures? Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Is it possible that some SOTA static analysis tools could also detect such vulnerabilities? For example, why not try writing CodeQL rules to detect vulnerabilities for the corresponding CWE types, or can such tools not be applied in your setting? 2. Have you considered evaluating KF-GVD on a wider range of software projects and vulnerability types beyond CWE-119, CWE-416, and the Linux kernel modules? 3. If I understand correctly, according to Section 3.2.1 the CPG is constructed at the function level, so how do you handle vulnerabilities that involve interprocedural issues? Do you need to construct an iCFG (maybe an iCPG here) to let the GCN learn their features? And why don't you use the CFG to capture the features? Would it have worse results than the current CPG-based method? In most cases, a Control Flow Graph may be enough. CPG combines more features and may be slower than CFG, but it is not certain which would yield better results. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper does acknowledge some limitations and challenges, particularly in the context of applying the proposed KF-GVD model, but leaves this in the appendix; I think this should be mentioned in the main paper.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough review and valuable feedback on our paper, which is very helpful to us. Below are the answers to the questions you have raised, and we hope they address your concerns.

# Answer for Q.1

Some SOTA static analysis tools may indeed detect such vulnerabilities. In our comparative experiments, we also considered rule-based commercial static code analysis tools such as Cppcheck and Flawfinder. Based on the implementation principles of these rule-based static analysis tools, they can be applied to most static vulnerability detection scenarios. However, these rule-based methods have the following disadvantages: first, they rely on vulnerability rules pre-defined by developers and need to be continuously updated and maintained to ensure their effectiveness and completeness. In addition, when a limited set of static pattern-matching rules is applied to complex and diverse detection targets, the result is a high false positive and false negative rate. In contrast, DL-based VD methods can automatically learn vulnerability feature patterns without relying on manually written matching rules. Moreover, the task-oriented knowledge fusion method proposed in our paper can realize the customized updating and application of existing vulnerability knowledge at a lower maintenance cost.

# Answer for Q.2

With the continuous expansion of vulnerability data and the demands of practical application scenarios, we will continue to evaluate KF-GVD on a wider range of software projects and vulnerability types, which will help to continuously improve and verify the effectiveness of the method in serving practical applications. In addition, we are pleased to share that in a recent research project, KF-GVD was applied to vulnerability detection in BusyBox version 1.36.1 and discovered two undisclosed vulnerabilities, which have since been confirmed. Thus, a more comprehensive evaluation in the future will be beneficial and necessary.
# Answer for Q.3

Currently, the CPG generated by Joern for KF-GVD corresponds to individual functions. Therefore, as mentioned in the Threats to Validity section in Appendix F, the method does not consider vulnerabilities with interprocedural issues. Your comments are very valuable and will guide the further improvement and optimization of the method in our future research. We use CPG instead of CFG alone because CPG can model most types of vulnerabilities, which has been proven by the proposer of Joern. For KF-GVD, in order to let the pre-trained general-task model adapt more flexibly to a wider range of downstream target tasks and achieve more efficient vulnerability detection, it is critical to consider more comprehensive vulnerability features in the modeling stage. The following tables show the F1-score (%) comparison of vulnerability detection using CFG, PDG, and CPG for vulnerability feature extraction on the target tasks corresponding to S119 and S416.

| | $S_{119}$ | Fs | Drivers | Net | Include | CWE-125 | CWE-787 |
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| CFG | 54.4 | 46.8 | 50.8 | 45.2 | 49.3 | 54.6 | 50.3 |
| PDG | 61.8 | 60.7 | 57.9 | 52.3 | 56.8 | 59.6 | 56.5 |
| CPG | 86.7 | 95.7 | 92.3 | 82.5 | 88.0 | 67.9 | 82.1 |

| | $S_{416}$ | Net | Fs | Drivers | Kernel | Block | Include |
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|:----:|
| CFG | 42.9 | 38.1 | 42.7 | 36.3 | 44.8 | 30.2 | 32.7 |
| PDG | 69.6 | 66.2 | 67.9 | 57.6 | 62.4 | 59.9 | 64.3 |
| CPG | 88.0 | 87.3 | 82.9 | 78.7 | 88.6 | 74.1 | 87.5 |

It can be observed that the CPG-based code representation achieves the best vulnerability detection results on the different target tasks compared with any single code property. While it is true that the CPG-based method is slower than CFG during the model training stage, we consider this trade-off justified and acceptable. Specifically, in our experiments, the CPG-based method takes an average of 11.5 seconds to analyze a sample, whereas using CFG alone takes about 9.6 seconds.
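To make the CFG/PDG/CPG comparison above concrete: a code property graph can be thought of as a single node set shared by several typed edge relations (AST, CFG, PDG), which is exactly what lets CPG-based features combine syntax, control flow, and dependence. The toy sketch below is our own hypothetical illustration, not Joern's actual data model.

```python
from collections import defaultdict

class MiniCPG:
    """Toy code property graph: one node set, several typed edge layers."""
    def __init__(self):
        self.nodes = {}                 # node id -> source snippet
        self.edges = defaultdict(list)  # edge type -> [(src, dst)]

    def add_node(self, nid, code):
        self.nodes[nid] = code

    def add_edge(self, kind, src, dst):
        self.edges[kind].append((src, dst))

    def neighbors(self, nid, kinds=("AST", "CFG", "PDG")):
        """Successors of a node, restricted to the requested edge layers."""
        return sorted({d for k in kinds for s, d in self.edges[k] if s == nid})

# Hypothetical two-statement fragment: a buffer declaration followed by
# an unchecked strcpy, linked by both control flow and data dependence.
g = MiniCPG()
g.add_node("n1", "char buf[8];")
g.add_node("n2", "strcpy(buf, src);")
g.add_edge("CFG", "n1", "n2")  # execution order
g.add_edge("PDG", "n1", "n2")  # data dependence on buf
print(g.neighbors("n1"))                   # ['n2']
print(g.neighbors("n1", kinds=("AST",)))   # [] -- no AST edge was added
```

Restricting `kinds` to a single layer is the moral equivalent of the CFG-only or PDG-only ablations in the table above.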
# Answer for Weakness

1) In general, task-specific vulnerability knowledge comes from the matching rules of existing static analysis tools, relevant academic research results, and public historical vulnerability information, most of which is publicly available and easily accessible. In most application scenarios, historical vulnerability information related to the target software is one of the primary and most direct sources of vulnerability knowledge. The maintenance of these vulnerability data depends on vulnerability databases and software developers. In the process of realizing task-oriented knowledge fusion, the retrieval, collection, and preprocessing of this information are automated and efficient. In addition, the continuous expansion of the various kinds of vulnerability knowledge will also help continuously improve the generalization of the source-task model. 2) Thank you very much for your suggestions. Due to page limitations at this stage, we have placed the limitations in the appendix. The subsequent version will be adjusted, and the current figures will be updated to PDF-editable figures. 3) The high transparency and interpretability of KF-GVD are achieved through the model's self-attention mechanism, with the aim of achieving fine-grained vulnerability localization. The attention weights that the graph neural network assigns to different graph nodes in each instance (CPG) reflect the importance of the nodes to the current model decision (vulnerable/benign code). On this basis, we consider the code statements associated with these nodes as potential vulnerability-introducing statements for fine-grained vulnerability localization.
In the experiments, the MAP metric is used to evaluate the average precision of the model's top-K confidence predictions, specifically assessing whether the code associated with the nodes that have the top-K highest attention scores corresponds to real vulnerability lines, thus evaluating the model's fine-grained vulnerability localization performance and the rationality of its decisions. --- Rebuttal 2: Comment: Thanks for your rebuttal. The answers have addressed my concerns
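The MAP evaluation described above can be illustrated with a small sketch: per sample, average precision is computed over the top-K ranked lines (precision@i accumulated at each hit), then averaged across samples. The data below is hypothetical, and the exact AP convention used in the paper may differ.

```python
def average_precision(ranked_lines, true_lines, k):
    """AP over the top-k ranked lines: precision@i averaged over the hits."""
    hits, score = 0, 0.0
    for i, line in enumerate(ranked_lines[:k], start=1):
        if line in true_lines:
            hits += 1
            score += hits / i
    return score / hits if hits else 0.0

def mean_average_precision(samples, k):
    """samples: list of (ranked_lines, ground_truth_lines) pairs."""
    return sum(average_precision(r, t, k) for r, t in samples) / len(samples)

# Hypothetical example: two vulnerable functions, lines ranked by
# attention score, with their ground-truth vulnerable lines.
samples = [
    ([42, 10, 43], {42, 43}),  # hits at ranks 1 and 3 -> AP = 5/6
    ([7, 8, 9], {8}),          # hit at rank 2         -> AP = 1/2
]
print(mean_average_precision(samples, k=3))  # (5/6 + 1/2) / 2 = 2/3
```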
Summary: This paper introduces KF-GVD, a knowledge fusion-based vulnerability detection method integrating specific vulnerability knowledge into the Graph Neural Network (GNN) feature learning process. Traditional VD methods apply uniform feature learning, which can miss diverse vulnerability types or functional modules. KF-GVD integrates specific vulnerability knowledge into the feature learning process, enhancing accuracy across different modules and subtypes without sacrificing general performance. Strengths: - Novelty: A novel technique that combines the generalizability of pre-trained vulnerability detection knowledge with task-specific knowledge. - Task-Specific Knowledge Integration: Integrating task-specific knowledge makes the system more specialized in its domain. - Comprehensive Methodology: The methodology is presented comprehensively. Weaknesses: - More statistics are needed (only precision and recall are given), such as confidence intervals or p-values. Including these would provide a more robust evidence base for the claims and enhance the credibility of the results. - Broader dataset evaluation (the evaluation is primarily conducted on the Linux kernel and a few C/C++ open-source projects). - More discussion is needed of limitations and future work. Technical Quality: 3 Clarity: 3 Questions for Authors: See the weakness section. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate your detailed review and valuable suggestions. We have taken your comments into consideration and respond to the raised points below. **1) P-values Evaluation** We evaluated the improvements obtained by the model fine-tuning strategy (GVD-ft) and the knowledge-fusion-based vulnerability detection method (KF-GVD) in the target task scenarios corresponding to S119 and S416, respectively, compared with the same general task model. The following tables show the p-values of these two main improvement methods on each dataset. More detailed p-value statistics for the other comparative experiments will be promptly added to the paper.

|$S_{119}$|Fs|Drivers|Net|Include|CWE-125|CWE-787|
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|
| GVD-ft | 0.058 | 0.062 | 0.046 | 0.051 | 0.088 | 0.079 |
| KF-GVD | 0.0036 | 0.0048 | 0.0032 | 0.0043 | 0.0087 | 0.006 |

| $S_{416}$ | Net | Fs | Drivers | Kernel | Block | Include |
|:-----|:----:|:----:|:----:|:----:|:----:|:----:|
| GVD-ft | 0.077 | 0.042 | 0.13 | 0.071 | 0.094 | 0.055 |
| KF-GVD | 0.0028 | 0.0035 | 0.002 | 0.0047 | 0.0022 | 0.0029 |

It can be observed that the task-oriented knowledge fusion method proposed in this paper yields statistically significant improvements: KF-GVD achieves significantly better detection performance than the baseline model on the target datasets corresponding to S119 and S416. **2) Broader Dataset Evaluation** As vulnerability data continue to accumulate and practical application scenarios expand, we will continue to evaluate KF-GVD on a wider range of software projects and vulnerability-type datasets, which will help to further validate and improve the method for practical applications. 
In this paper, we choose to evaluate KF-GVD mainly on CWE-119 and CWE-416 and the Linux kernel modules for the following reasons: First, the two types of vulnerabilities and their subtypes, CWE-125 and CWE-787, are included in the 2023 CWE Top 25 Most Dangerous Software Weaknesses. Since KF-GVD is application-oriented, it is important to focus on these more prevalent types of vulnerabilities. Additionally, the Linux kernel and projects such as FFmpeg and OpenSSL used in Appendix E.2 are among the most common vulnerability detection targets in this field of research. Therefore, evaluating KF-GVD on these projects provides a more intuitive comparison. Furthermore, for the evaluation objects employed in our paper, we currently have relatively complete datasets, ensuring the objectivity and validity of the experiments. We will also make this data publicly available in the hope of aiding research in this field. **3) Limitations and Future Work** In the data processing stage, the time spent generating CPG data from source files accounts for more than 75% of the total processing time. More efficient data generation tools and processing strategies should be considered in the future. During the feature embedding stage, we truncated the $V_{code}$ that exceeded the feature vector length threshold, which led to a certain degree of semantic information loss. Furthermore, although KF-GVD has achieved significant improvements in statement-level vulnerability localization compared to SOTAs, there is still considerable room for improvement in fine-grained localization performance compared to function-level detection results. In the next phase, combining the powerful representation and generation capabilities of current large code models to obtain more comprehensive and efficient source code features will be a future research direction to further improve vulnerability localization performance. 
Finally, since a CPG represents a single function, KF-GVD cannot detect vulnerabilities that involve cross-function or cross-file dependencies. Considering more comprehensive code call-relationship extraction schemes in the future will help efficiently discover more complex and varied types of vulnerabilities.
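The paired significance testing behind the p-values in point 1) of the rebuttal above can be illustrated with a minimal sketch. This is our illustration, not the authors' procedure: it assumes per-run metric pairs for the baseline and KF-GVD, and uses a normal approximation to the t distribution, which overstates significance for small samples:

```python
import math
import numpy as np

def paired_t(baseline, improved):
    """Paired t statistic with a normal-approximation two-sided p-value."""
    d = np.asarray(improved) - np.asarray(baseline)  # per-run differences
    n = len(d)
    t = d.mean() / (d.std(ddof=1) / math.sqrt(n))
    # erfc-based normal tail; a proper test would use the t distribution with n-1 dof
    p = math.erfc(abs(t) / math.sqrt(2))
    return t, p
```

With clearly improved per-run scores the statistic is large and the p-value small, matching the pattern reported in the rebuttal's tables.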
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
A Fast Convoluted Story: Scaling Probabilistic Inference for Integer Arithmetics
Accept (poster)
Summary: The authors study the scalability of probabilistic inference in the integer arithmetic setting. Their method is based on representing the probability mass functions (PMFs) of integer-valued random variables as vectors and the observation that the PMFs of operations applied to said random variables can be expressed as convolutions of the PMF vectors, computed efficiently by leveraging fast Fourier transforms. Strengths: The method and the mathematical background underlying the proposed approach for probabilistic inference over integer-valued random variables are well detailed. The Fast Log-Conv-Exp trick and related computational aspects are properly discussed. The PLIA framework and its implementation seem fairly simple, allowing the authors to scale probabilistic inference to larger integer problems and neurosymbolic learning, surpassing previous state-of-the-art approaches. Weaknesses: - The experimental part is a bit limited and could be extended to show more clearly the potential and limitations of the method Technical Quality: 3 Clarity: 4 Questions for Authors: - More tests to understand the limits of the approach in terms of scalability could be useful. How far can you push the size of MNIST addition by allowing, e.g., 1 hour of computational time? Confidence: 2 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: - Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their supportive review. Please allow us to address your experimental concern below. Our experimental evaluation tackled two of the most prominent neurosymbolic benchmarks, where PLIA$_t$ outperformed both exact and approximate state-of-the-art methods for neurosymbolic learning. It demonstrates that the field of neurosymbolic AI is in need of more challenging benchmarks, as we have shown that it is now possible to solve both MNIST addition and visual sudoku quickly and accurately. The goal of our paper is to scale probabilistic inference on linear integer arithmetic, which we illustrate by outperforming the state-of-the-art in inference (exact method) by 100 000x and the state of the art in learning (approximate method) by 100x in terms of run time. Hence, we fully agree it is high time for more challenging benchmarks, yet proposing new benchmarks is not the aim of this paper. Further scaling the MNIST addition task to more digits is limited by the size of the dataset, as there are only 60 000 images available for training and 10 000 for testing. Each of these images can only be used once in the construction of the various sums, meaning increasing the number of digits leads to an overall reduction of available sums for training and testing. For example, our case of $N = 50$ requires 100 images to construct a single sum data sample. Consequently, the test dataset only contains 100 samples, which was the lowest number of samples we were still comfortable with to compute test statistics over. We did run learning experiments with $N = 100$ and even $N = 1000$, but we do not deem their test statistics faithful enough to be reported. --- Rebuttal Comment 1.1: Comment: Thanks for your comments
Summary: The paper presents a framework, PLIA_t, to solve the generally intractable problem of probabilistic inference using the fast Fourier transform (FFT) for integer-valued random variables. The paper shows how to use the log-sum-exp trick to solve the numerical stability issue in the FFT setting and defines arithmetic operations on integer-valued random variables. It also defines how to compute expected values and how to handle probabilistic branching. Experiments show that the gains can be realised in an implementation for the given examples. Strengths: - The paper tackles a central problem, probabilistic inference, from a new perspective to the best of my knowledge. - The results are promising. Weaknesses: - Even though the paper plans to introduce PLIA_t as a new framework, the framework itself remains rather opaque regarding its inputs, outputs, representations, algorithms, or whatever they consider part of PLIA_t. It almost reads as if a section is missing between Section 4 and 5. As such, I am also not so sure about the reproducibility of the experiments. (I have formulated a corresponding question. This is one of the main reasons for my overall score that I am willing to revise given a convincing rebuttal.) Technical Quality: 3 Clarity: 2 Questions for Authors: - Can you please characterise / define PLIA_t formally as a framework in the rebuttal? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their time and effort, and for their suggestion of formalizing PLIA$_t$. For the camera-ready version we propose to add the following subsection at the end of Section 2: ### 2.4. Formalizing PLIA$_t$ **Definition** *(probabilistic linear integer arithmetic expression) Let $\{X_1, \dots, X_N\}$ be a set of $N$ independent integer-valued random variables with bounded domains. A probabilistic linear integer arithmetic expression $X$ is a bounded integer-valued random variable of the form $$ X = \sum_{i=1}^{N} f_i (X_i). $$ In the equation above, the functions $f_i$ denote operations performed on the specified random variables, which can be either one of the operations specified in Section 3 or compositions thereof.* PLIA$_t$ is concerned with computing the parametric form of the probability distribution of a linear integer arithmetic expression. It does so by representing random variables and linear combinations thereof (cf. the definition of probabilistic linear integer arithmetic expressions) as tensors whose entries are the log-probabilities of the individual events in the sample space of the random variable being represented. Note that operations within PLIA$_t$ are closed. That is, performing any of the operations delineated in Section 3 will again result in a bounded probabilistic integer representable as a tensor of log-probabilities and an offset parameter indicating the value of the smallest possible event (cf. Section 2.3). In Section 4 we also equip PLIA$_t$ with probabilistic inference primitives that allow it to compute certain expected values efficiently as well as to perform probabilistic branching. <END OF SECTION 2.4> **Reproducibility concerns.** We would like to stress that the complete implementation of PLIA$_t$, including all operations discussed in Sections 3 and 4, is provided in the supplementary material of our submission. 
This code also contains our complete experimental setup and automatically installs all necessary dependencies in an easy-to-run script as documented in the provided README file. PLIA$_t$ is a hyperparameter-free framework for inference tasks (cf. Section 5.1). For the neurosymbolic learning tasks (cf. Section 5.2) we provide all our optimal parameters in Appendix E. Upon acceptance, we will also make this implementation publicly available. In this regard we believe we follow best practices concerning reproducibility. We hope our answers have addressed your concerns and allow for an increase in score. Please let us know if there are any further concerns and we will gladly try to address them.
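The representation sketched in the formalization above — a PMF tensor plus an offset marking the smallest domain value — can be made concrete with a minimal linear-space example. The framework itself works with log-probability tensors and the FFT; the function and variable names here are purely illustrative, not the authors' API:

```python
import numpy as np

def pmf_sum(p1, o1, p2, o2):
    """Distribution of X1 + X2 for independent bounded integer variables.

    Each variable is a (pmf, offset) pair, the offset being the smallest
    value in its domain; summing variables convolves the PMF vectors and
    adds the offsets."""
    return np.convolve(p1, p2), o1 + o2

# Two independent fair bits over {0, 1}; their sum lives on {0, 1, 2}.
p, off = pmf_sum(np.array([0.5, 0.5]), 0, np.array([0.5, 0.5]), 0)
# p -> [0.25, 0.5, 0.25], off -> 0
```

Replacing the naive convolution with an FFT-based one is what yields the log-linear runtime discussed in the paper.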
Summary: The paper addresses key challenges in applying neurosymbolic AI techniques, specifically in integer arithmetic. Leveraging the power of tensor operations and the fast Fourier transform (FFT), the authors propose a novel approach to perform probabilistic inference on integer-valued random variables. Central to their method is the tensor representation of the distributions of bounded integer-valued random variables, enabling efficient computation of distribution sums using the FFT, thus significantly reducing computational complexity from quadratic to log-linear time. The study validates the effectiveness of this approach through experiments showing substantial improvements in inference and learning times, pushing the state of the art by several orders of magnitude. This contribution not only enhances computational efficiency but also facilitates gradient-based learning, which was previously challenging due to the discrete nature of integers. Strengths: 1. The paper introduces a creative approach to tackling the computational challenges in probabilistic inference for integer arithmetic. The innovative use of tensor operations and the fast Fourier transform (FFT) to represent and compute the distribution of integer-valued random variables is a notable advancement. This method is original in its application of the FFT in the log domain for probabilities, a sophisticated adaptation not extensively explored in previous studies within the neurosymbolic AI domain. 2. The paper is exceptionally well written, with a clear structure and logical flow that enhances readability. Technical concepts and methods are explained with precision, making the complex content accessible to readers who may not be intimately familiar with the technical details of probabilistic inference or the FFT. Diagrams and examples further aid understanding and illustrate key points effectively. 3. The significance of this work is manifold. 
Firstly, it addresses a critical bottleneck in neurosymbolic AI, opening up new possibilities for applying these techniques to more complex, real-world problems. The ability to perform differentiable operations on discrete data structures could have profound implications for the field, potentially influencing future research directions and applications. Weaknesses: 1. While the paper briefly mentions the inherent #P-hardness of probabilistic inference, there is scant elaboration on how this impacts the scalability or applicability of the proposed method in practical, large-scale scenarios. A more detailed discussion of the practical limitations, potential computational bottlenecks, or scenarios where the method might not perform as expected would be invaluable for readers and practitioners looking to apply these techniques. 2. The clarity and depth provided in the theoretical formulation and experimental setup are excellent. However, the paper could be enhanced by including more detailed information on the implementation, such as the specifics of the software environment, libraries used, and parameter settings. This addition would aid reproducibility and allow other researchers to more easily validate and build upon the work. 3. The paper could further detail the algorithmic complexity of the proposed method beyond the FFT application. While it highlights the reduction in time complexity, there is minimal discussion of space complexity or the computational overhead introduced by tensor manipulations and log-domain calculations. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. Could the authors provide additional details about the computational environment, versions of libraries used, and precise parameter settings? 2. How do the authors address potential underflow or overflow problems during computations, and what impact might these issues have on the method's accuracy and stability? 3. 
Are there plans or ongoing work to test the applicability of this approach in other fields that heavily rely on integer arithmetic, such as cryptographic algorithms or complex financial computations? Insights into such applications could significantly enhance the paper's impact. Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: One potential limitation is the paper's reliance on specific assumptions for the efficiency of FFT in the log-domain. While these assumptions are necessary for the mathematical framework, there is limited discussion on how sensitive the results are to deviations from these assumptions. Exploring the robustness of the proposed method under less ideal conditions or discussing potential modifications that could accommodate more general cases would strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their thorough review and appreciate the raised concerns. We will start below by answering the main questions: 1. Our precise parameter settings are detailed in Appendix E for the neurosymbolic experiments (cf. Section 5.2), namely the learning rate, the number of training epochs, and the neural architecture. As PLIA$_t$ itself is a hyperparameter-free framework, there are no other parameters to set when just performing inference in PLIA$_t$ (cf. Section 5.1). All other implementation details are provided in the supplementary code material (submitted together with the paper), where the precise computational environment, including the exact versions of all necessary libraries, is given. Moreover, our complete implementation of PLIA$_t$, its experiments, and the precise parameter settings can also be found in the supplementary code material for ease of reproducibility. Finally, we will publicly release PLIA$_t$ upon acceptance. 2. The issue of numerical overflow or underflow is addressed in Section 2.2. There, we introduce a new variant of the well-known log-sum-exp trick for FFT computations in the log domain, called the fast log-conv-exp trick. Using the fast log-conv-exp trick, PLIA$_t$ runs without any numerical stability issues. 3. We are currently looking into many different application domains for PLIA$_t$, as the scalability results from our experiments, where we observe orders of magnitude better performance, seem to indicate it is an enabling technology. We did not yet consider applications in cryptography or finance, but these are indeed promising directions! We thank the reviewer for bringing these to our attention. As for the weaknesses, we would like to address your raised concerns about limitations, complexity, and computational overhead. **Limitations.** We acknowledge that the discussion on the practical applicability of PLIA$_t$ could be extended. 
We propose to add the following discussion to the camera-ready version of the paper. The inherent #P-hardness of probabilistic inference does remain the main bottleneck for PLIA$_t$ and can be observed in our experiments. During the comparison between PLIA$_t$ and the state-of-the-art approach of Cao et al. [1] in Figure 5, we can see that PLIA$_t$ also starts to take a significant amount of time when the domain of the probabilistic integers grows. Hence, if the explicit computation of probabilistic integers with a very large domain is required, then PLIA can still struggle. However, PLIA$_t$ does scale orders of magnitude further (5 to be precise) than what was possible so far and considerably pushes the envelope on the state of the art. **Space complexity.** It is well-known that the FFT has, apart from the $N \log N$ runtime complexity, also $N \log N$ theoretical space complexity, again in contrast to a $N^2$ space complexity of the naive approach. Moreover, we performed an additional empirical comparison between Dice [1] and PLIA$_t$ with respect to memory allocation (see attached PDF in global rebuttal), where we indeed confirmed the better scaling behaviour of PLIA$_t$. Hence, PLIA$_t$ not only outshines Dice (the current state of the art) in terms of runtime but also in terms of memory usage. We will add this comparison to the appendix and mention it in the main paper in Section 5.1. **Computational overhead.** Finally, the computational overhead, including construction of the computational graphs for PLIA$_t$, is already included in the time measurements of all our experiments. For smaller domains, it is indeed possible that PLIA$_t$ on CPU or even Dice can outperform PLIA$_t$ on the GPU (Figure 5, bitwidth strictly below 5). However, it is clear that this computational overhead has a diminishing effect, as PLIA$_t$ clearly scales better than Dice for larger, more practical domain sizes (Figure 5, bitwidth above 5). 
Moreover, much of this overhead can be avoided by running PLIA$_t$ on the CPU for smaller domain sizes, where it performs on par or better than Dice (Figure 5, bitwidth strictly below 5). We will also make this point clearer in our experimental discussion in Section 5.1. [1] Cao, W. X., Garg, P., Tjoa, R., Holtzen, S., Millstein, T., & Van den Broeck, G. (2023). Scaling integer arithmetic in probabilistic programs. In Uncertainty in Artificial Intelligence (pp. 260-270). PMLR. --- Rebuttal Comment 1.1: Comment: Thank you for your response. I have decided to keep my score.
Summary: This paper presents an approach to compute probability distributions over sums of random variables. To achieve this, the authors replace a slow quadratic computation with a Fourier transformation and a Hadamard product that can be computed in log-linear time. The authors introduce the log-sum-exp trick to improve the stability of the computations. In this process, the authors note that their approach is differentiable, which allows them to connect with work on neurosymbolic AI. The authors present metrics that can be computed, and finally the authors present an empirical evaluation which demonstrates the accuracy and speed improvements of their approach. Strengths: The connection to neurosymbolic AI is very interesting and in my opinion the most important part of the paper. The performance of the method, both in accuracy and speed, is significant. The implementation using GPUs and DNN libraries is valuable for the community. Weaknesses: My only concern with this paper is that their work shares many similarities with the work in [1]. The authors do not cite this paper, which includes the known convolutions and Fourier transform motivation, as well as the use of the log-sum-exp trick. Providing a citation and a more thorough differentiation would be extremely valuable and maybe even required. [1] Parker, M. R., Cowen, L. L., Cao, J., & Elliott, L. T. (2023). Computational efficiency and precision for replicated-count and batch-marked hidden population models. Journal of Agricultural, Biological and Environmental Statistics, 28(1), 43-58. Technical Quality: 4 Clarity: 3 Questions for Authors: Do you do any special treatment for the case where the pmf is zero, i.e., where the log is -inf? Or do you simply allow it and let the exponential handle that? A small paragraph on that would be interesting. 
Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: no issues Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your review and for bringing the work of [1] to our attention! Firstly, we would like to clarify that we do not introduce the log-sum-exp trick, as this is indeed a well-known trick to avoid numerical instabilities when summing probabilities in log space. Instead, we introduce a similar trick, dubbed the fast log-conv-exp trick, that avoids numerical instabilities when applying the FFT to probabilities in log space. Second, we would like to stress the differences between PLIA$_t$ and [1]. Both utilise the FFT to improve the computational efficiency of probabilistic inference. 1. PLIA$_t$ and the method of [1] use the FFT for probabilistic inference on different applications. PLIA$_t$ focuses on scaling probabilistic inference in linear integer arithmetic, while [1] uses the FFT for N-mixture models and applies it to the open-population model of [2]. 2. While [1] does discuss the numerical instability of computing convolutions in log space, their proposed solution differs from our fast log-conv-exp trick in the following way. As can be seen from Algorithm 2 in [1], their approach is to use the traditional log-sum-exp trick inside each iteration of the FFT and inverse FFT, which prevents using any off-the-shelf FFT algorithm and adds computational overhead (note that FFT algorithms are extremely hard to implement efficiently). In contrast, our trick exploits the linearity of the Fourier transform to show that it is possible to simply rescale the in- and outgoing log probabilities by their maximum to prevent numerical instabilities, just as the linearity of summation does for the log-sum-exp trick. This means we go to and from linear space only once instead of at every iteration of the FFT. Our approach has the advantage of being implementation-agnostic, allowing off-the-shelf, highly optimised FFT implementations to be used. 3. 
The proposed tensorised representations and operations of PLIA$_t$ are crucial for the measured increase in performance. Additionally, they also lead to out-of-the-box differentiability. Neither of these aspects is discussed or examined in [1]. Moreover, [1] only reports an improvement of 6x-30x in computation time for two specific population models, while we observed improvements on the order of 100 000x in our benchmarks compared to the state of the art. 4. The work of [1] only proposes a solution for computing numerically stable convolutions (discussed in points 2 and 3 above). We provide further operations that can be performed efficiently, e.g. multiplication by scalars and probabilistic branching. This is not discussed at all in [1]. We will add a reference to [1] to the paper and also include the above discussion. Third, to answer your question, our implementation does allow -infinity in case the PMF is zero. We utilise the numpy implementation np.inf, which adheres to the IEEE 754 [3] industry standard for representing infinity as a floating-point number. Moreover, this implementation is compatible with all operations of our deep learning backend, TensorFlow. We hope to have clearly differentiated PLIA$_t$ from the cited work and to have addressed all your concerns. If so, we hope you are open to the idea of raising your score. If not, we will gladly discuss any remaining concerns. [1] Parker, M. R., Cowen, L. L., Cao, J., & Elliott, L. T. (2023). Computational efficiency and precision for replicated-count and batch-marked hidden population models. Journal of Agricultural, Biological and Environmental Statistics, 28(1), 43-58. [2] Dail, D., & Madsen, L. (2011). Models for estimating abundance from repeated counts of an open metapopulation. Biometrics, 67(2), 577–587. [3] IEEE Standard for Binary Floating-Point Arithmetic, 1985. --- Rebuttal 2: Comment: Thanks to the authors for the detailed response. 
I acknowledge I have read the rebuttal and I have raised my score accordingly.
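The fast log-conv-exp trick discussed in this thread — rescale the log-PMFs by their maximum once, convolve with any off-the-shelf routine, then rescale back — can be sketched roughly as follows. This is our illustration, not the authors' implementation, and `np.convolve` stands in for the FFT-based convolution:

```python
import numpy as np

def log_conv_exp(logp1, logp2):
    # Shift each log-PMF by its maximum before leaving log space (done once,
    # not inside every FFT iteration), convolve in linear space, then shift
    # back; linearity of the convolution makes the rescaling exact.
    m1, m2 = logp1.max(), logp2.max()
    conv = np.convolve(np.exp(logp1 - m1), np.exp(logp2 - m2))
    with np.errstate(divide="ignore"):  # log(0) = -inf is the intended encoding
        return np.log(conv) + m1 + m2
```

Even when the raw probabilities would underflow to zero in linear space, the shifted computation stays finite.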
Rebuttal 1: Rebuttal: We thank the reviewers for their valuable comments and suggestions to improve the paper. Our rebuttals to the individual reviews can be found directly under each review. We also attached a plot (as PDF) used to address Reviewer Ltsw's concern regarding memory usage; we discuss this issue just below. We hope the rebuttal clears up some of the concerns and allows the reviewers to reconsider their initial scores. Please let us know if anything remains imprecise or unclear. #### **Space Complexity** We thank Reviewer Ltsw for raising the concern of space complexity and memory usage, and acknowledge this aspect was not properly covered. While it is well known that the space complexity of the FFT is also $N \log N$, just like its runtime, an empirical comparison was still missing. To this end, we analysed the memory consumption of PLIA$_t$ for the inference tasks in Section 5.1 and compared it to the state of the art, Dice [1]. The result can be seen in the attached PDF, showing that PLIA$_t$ indeed scales better and appears more efficient in terms of memory usage compared to the state of the art across all inference benchmarks. We will add the attached figure to the appendix and refer to it during our experimental discussion in Section 5.1. [1] Cao, W. X., Garg, P., Tjoa, R., Holtzen, S., Millstein, T., & Van den Broeck, G. (2023). Scaling integer arithmetic in probabilistic programs. In Uncertainty in Artificial Intelligence (pp. 260-270). PMLR. Pdf: /pdf/923bb6e73bc585e6df6a3550ba16215fe7ff009c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
When is Multicalibration Post-Processing Necessary?
Accept (poster)
Summary: The paper investigates the necessity and effectiveness of multicalibration post-processing across data of different modalities and machine learning models. It finds that models which are calibrated out of the box tend to be multicalibrated without further post-processing, while multicalibration can help inherently uncalibrated models. Traditional calibration methods, such as Platt scaling, sometimes achieve multicalibration implicitly. The study, conducted on a wide range of datasets (tabular, image, language) and models (from decision trees to large language models), reveals that empirical risk minimization (ERM) often achieves multicalibration. It also notes that multicalibration post-processing is sensitive to hyperparameters and works best with large datasets. Additionally, traditional calibration methods can sometimes match the performance of multicalibration algorithms. Strengths: The writing is excellent and the paper is well organized, easy to read, and understandable without effort. The message is clear and there are actionable recommendations. The topic is timely and the experiments done by the authors are fairly comprehensive; lots of models for tabular data, fewer for language and vision. This is one of the largest sets of experiments I've seen in the literature. Overall, the work has the potential to have a meaningful impact on the community. Weaknesses: Not all the conclusions from the authors are well supported by their empirical findings. More details below. Regarding observation 3: * The association between the accuracy and the calibration fraction is somewhat clear for HMDA, but it's definitely unclear for the other datasets (Appendix F.3). Accuracy seems to even increase as the calibration fraction increases on certain datasets. Thus, the authors' conclusion in observation 3 is only based on HMDA. I suggest dropping it or providing more support for it. The current experimental results do not support it. 
* Max smECE of ERM is about twice as large as the max smECE after calibration in case of Naive Bayes, SVM, and decision trees. Then how do the authors justify their observation? * Along the same lines, observation 7 suggests that there is a substantial difference between the results for language/vision data and tabular data because in the latter we do not see such gains. As mentioned right above, Figure 2 shows such gains. It just depends on which algorithms we look at. Other comments: * Observation 5 about the robustness of HJZ to hyperparameter choices does not hold for language and vision data? How do the authors explain this finding? Minor details: * What are the error bars in Figure 3? Standard deviations? * Do the \pm values in Figure 3 represent standard error or deviation? Technical Quality: 2 Clarity: 4 Questions for Authors: I hope that the authors can solve my concerns noted above. Confidence: 3 Soundness: 2 Presentation: 4 Contribution: 3 Limitations: Limitations are discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We address the weaknesses and questions below. >**W1**: Regarding observation 3. The association between the accuracy and the calibration fraction is somewhat clear for HMDA but it’s definitely unclear of the other datasets (Appendix F.3). Accuracy seems to even increase as the calibration fraction increases on certain datasets. Thus, the authors conclusion in observation 3 is only based on HMDA. I suggest dropping it or providing more support for that. The current experimental results do not support it. Note that in Observation 3, we say “a practitioner utilizing multicalibration post-processing **may** have to trade off worst-group calibration error and total accuracy.” We do not claim that this tradeoff exists in all cases. This is an important note to practitioners, in particular because they may not have been previously aware that they may need to make such a tradeoff. For example, we would not want to make the claim that “a practitioner utilizing multicalibration post-processing may NOT have to trade off worst-group calibration error and total accuracy.”, even though such a claim is supported in some cases by the data. The tradeoff we observe here tends to occur for larger calibration fractions. We do often observe accuracy improve as calibration fraction moves from 0 to 0.4, but that accuracy then deteriorates as calibration fraction goes from .4 to 1.0. See the following examples in Appendix F.3: MLP on Credit Default (p. 33), MLP on MEPS (p. 33), MLP on HMDA (p. 33), Decision Tree on Credit Default (p. 34), Decision Tree on Bank Marketing (p. 34), Random Forest on Bank Marketing (p. 35), SVM on MEPS (p. 37), Naive Bayes on Credit Default (p. 38). We thank the reviewer for raising this point, however, and we will make all this discussion more explicit in the next version. >**W2**: Regarding observation 3. Max smECE of ERM is about twice as large as the max smECE after calibration in case of Naive Bayes, SVM, and decision trees. 
Then how do the authors justify their observation? We assume that the reviewer is referring to “after [multi]calibration” for NB, SVM, and DTs. Indeed, we agree with the reviewer that multicalibration can be helpful; we mention this in Observation 2: “HKRR or HJZ post-processing can help un-calibrated models like SVMs or Naive Bayes”, where we also provide empirical justification. In Observation 1, we point out that models which are calibrated out of the box also tend to be multicalibrated (on tabular data), but the converse also has merit: mis-calibrated models are also likely mis-multicalibrated. We observe that Naive Bayes, SVMs, and decision trees tend to be miscalibrated out of the box (see Fig. 2, Appendix F.2), and that multicalibration post-processing can help them. We will improve clarity and expand this discussion in the next version. >**W3**: Regarding observation 3. Along the same lines, observation 7 suggests that there is a substantial difference between the results for language/vision data and tabular data because in the latter we do not see such gains. As mentioned right above, Figure 2 shows such gains. It just depends on which algorithms we look at. We apologize for the confusion, and will work to improve clarity. As you observe, the NNs trained on vision/language data really do benefit from multicalibration post-processing. Some models trained on tabular data also benefit from multicalibration post-processing. Note, however, that the MLPs trained on tabular data do not tend to benefit from multicalibration post-processing in a statistically meaningful way. In any case where we draw a direct comparison between vision/language models and tabular models, we meant to compare only the NNs trained on these different types of data. More broadly, we mean to convey that multicalibration outcomes are both model- and data-dependent. We will improve the clarity in the next draft and remove this confusion for future readers. 
Thank you for pointing this out! >**W4**: Other comments. Observation 5 about the robustness of HJZ to hyperparameter choices does not hold for language and vision data? How do the authors explain this finding? We kindly refer to our global response for this discussion. >**W5**: Minor details. What are the error bars in Figure 3? Standard deviations? Do the \pm values in Figure 3 represent standard error or deviation? The error bars and $\pm$ values throughout our paper always represent standard deviation. We will make this explicit in the next version. We thank the reviewer for their time, helpful comments, and thoughtful feedback on our work; we will clarify the discussion above in the next version. If the pressing concerns have been sufficiently addressed, we kindly ask the reviewer to consider adjusting their score by taking the rebuttal into account. Thank you! --- Rebuttal Comment 1.1: Comment: Thanks for the reply. I still believe that the paper, in its current version, presents several takeaways about algorithms/procedures that "can"/"may" or "cannot"/"may not" be helpful. However, in most cases, it's unclear why and when these statements hold true. While I acknowledge that the authors have conducted extensive experiments, more are needed, and I am concerned that practitioners may misinterpret these takeaways. Therefore, I suggest the authors carefully reconsider how each statement is phrased. I also agree with the other reviewers' comments regarding the lack of discussion on the specific nature of the groups, which were chosen arbitrarily. Different groups might lead to different conclusions about multicalibration. Although I don't disagree with the authors' response on this topic, I believe more should be said, and other groups should be explored on the same datasets to check if the results are consistent. This further adds to my concerns about how readers might interpret the paper's takeaways. 
Regarding W1: I believe the finding is not significant and could mislead practitioners, who are likely to skip the Appendix and focus only on the main body of the paper—or worse, just the takeaway without noticing the "may." As I mentioned in my initial review, this takeaway seems to overfit a specific dataset. If the trade-off only exists for larger calibration fractions, as highlighted in the examples, this should be clearly discussed in the takeaway. If the takeaway remains (and I don't think it should), the authors should emphasize the "may" in bold and explicitly state that the findings are supported by just one dataset or larger calibration fractions. This concern applies to all other questions/answers as well. The statements need to be crystal clear. Regarding W5: The authors likely mean standard error, not standard deviation. --- Reply to Comment 1.1.1: Comment: We sincerely thank the reviewer for their engagement and for acknowledging that we have conducted extensive experiments. You requested additional experiments with “more complex” groups, which we have been running since your last response and hope to have uploaded by the end of today. You previously had some input for the phrasing of our Observations. We will modify them to make the takeaways clearer. In particular, you mentioned Observations 3, 5, and 7. Here are the old and newly proposed versions: >Observation 3: A practitioner utilizing multicalibration post-processing may have to trade off worst group calibration error and overall accuracy. New: A practitioner utilizing multicalibration post-processing could potentially be faced with a trade-off between worst group calibration error and overall accuracy. This is most salient in high calibration fraction regimes (40-80%). >Observation 5: There is no clearly dominant algorithm between HKRR and HJZ on tabular data but HJZ is more robust to choice of hyperparameters. 
New: When considering statistical significance, there is no clearly dominant algorithm between HKRR and HJZ on tabular data. However, HJZ is more robust to our hyperparameter sweep. This may allow practitioners to find good solutions faster than using HKRR when, for example, post-processing Naive Bayes or Decision Tree predictors. >Observation 7: Multicalibration post-processing can improve worst group smECE relative to an ERM baseline by 50% or more. New: On language and vision data, multicalibration post-processing can improve worst group calibration error relative to neural network ERM baselines by 50% or more. This stands in contrast to multicalibration post-processing for neural network ERM on tabular data, where we found no statistically significant improvements.
Summary: The authors provide a large and broad set of evaluations of how useful it is to supplement empirical risk minimization procedures with multicalibration (and/or calibration) post-processing. Their experiments span a wide variety of settings and datasets, broadly falling into the tabular data, image data, and language data categories. For all these categories of tasks, they evaluate whether or not one/both/neither of two existing multicalibration post-processing methods (referred to as HKRR and HJZ) are useful to apply to an ERM-trained predictor in order to ensure the resulting predictor is calibrated over a pre-specified family of subgroups in the data; and they do the same with respect to post-processing by applying simpler, marginal calibration methods such as isotonic regression. They distill the findings into many practical insights that should be useful to future practitioners wishing to apply multicalibration in their settings. In broad strokes, it is found that in many cases, especially for tabular data, full multicalibration post-processing may not make as much practical sense, as pure ERM (possibly with marginal calibration) may already get you close enough to calibration; but at the same time, in image/language processing settings, the usefulness of multicalibration post-processing generally increases and may become worth it. Along with these general insights, the authors provide many other guidelines and observations on aspects such as hyperparameter tuning for multicalibration, the comparison in performance between HKRR and HJZ, and other empirical tradeoffs. Strengths: In general, the multicalibration literature has suffered from a lack of well-concerted assessments of the empirical validity of its indisputably significant theoretical achievements (both when it comes to its “multigroup fairness” motivation and in other applications). 
Such assessments are even more needed given that known theoretical bounds on multicalibration’s sample complexity, as well as the overfitting effects observed in limited empirical assessments, point to potentially pessimistic empirical performance (especially compared to non-multigroup, classic calibration algorithms like Platt or temperature scaling or isotonic regression). So one important merit of this paper is that it can serve as a good starting point for further empirical evaluation of multicalibration methods. The setup itself is clean and reasonable: examine how to partition available data into training and calibration sets (plus a validation set as necessary to tune the multicalibration algorithms’ hyperparameters), and ask whether or not the calibration data would be better used for the training / ERM step; the question is especially pertinent given the many hints found in recent prior work suggesting that ERM procedures may already encapsulate some (multi)calibration guarantees even without explicitly enforcing them. In all, the authors have indeed managed to execute, to a fair degree (and modulo some concerns below), on their promise of a first comprehensive evaluation of multicalibration post-processing. As listed in the summary section above, they have also provided various useful rules of thumb along the way, which should benefit practitioners and also serve as conversation starters, and in several cases as cautionary notes, when it comes to applying multicalibration in practice. Weaknesses: The prior work results about ERM being sufficient for obtaining various guarantees are mentioned, but not discussed in enough depth to provide the full context for the question being asked in the present manuscript. 
For instance, an important thing that [Blasiok et al, 2023] note in their intro, before providing their result on NNs giving multicalibration, is that one might imagine that ERM-based solutions might a-priori need to rely not just on a larger-sized NN, but also on a certain larger family of subsets that encapsulate/imply the subgroups that the practitioner wants to calibrate their data on — and their results show that in fact, aside from increasing the size of the neural net, it is (in that setting) not necessary to design any special enlarged family of subsets. So, put simply, the current manuscript needs to note much more explicitly that designing an enlarged group family for the ERM could a-priori be an important design choice for practitioners, but may not be as important (as enlarging the size of the model), as hinted at by prior work. Furthermore, in the same vein as, and aside from, [Kirichenko et al, 2023], please also cite/mention [Rosenfeld et al, 2023] — and for both these works, please discuss the overlap between robustness/multigroup fairness considerations. Group families with respect to which to multicalibrate, especially for tabular datasets, are defined in various “intuitive” ways, such as demographic groups. In that way, consistent with observations from [Kirichenko et al] and [Rosenfeld et al], it appears that explicit multicalibration not being necessary may be due to the fact that the predictors already learn the requisite features that are both prediction-relevant and also significantly correlate with the chosen protected groups. The extent to which this is an important factor — and that multicalibration may indeed be necessary for more “unusual” groups (worst-case, one can almost always hand-craft groups that, at least on training data, show miscalibration) — is not studied in this manuscript. 
For instance, the worst-group calibration reported only ranges over chosen groups of interest, but of course other groups not considered may have much worse calibration guarantees. So I’m not convinced the results that say multicalibration may not be necessary are robust to defining groups that are somehow non-standard. Another example of potential discrepancy in what is being studied here is the smECE vs ECE distinction — to my knowledge, the HKRR and HJZ algorithms were developed prior to the smECE paper, and can in their various instantiations explicitly target metrics like ECE or L2 or L_infty calibration, rather than smECE. Thus, at least to a less-familiar reader, this must at least be clarified e.g. in the form of a statement that conclusions based on smECE performance are of a purely empirical character. There are other presentation issues from my perspective. Besides not enunciating group choices in the main part as just discussed, the algorithms that are actually used are not explained in any sort of pseudocode, nor are their objectives/claimed bounds ever stated. This is an important writing issue given that HKRR and HJZ are introduced in their respective papers using very different notation, with differing guarantees, and with finer distinctions, such as e.g. that HJZ can output both deterministic and non-deterministic recalibrated predictors. Technical Quality: 2 Clarity: 2 Questions for Authors: Please see the “Weaknesses” section above; some concerns I listed there (notably as far as writing/presentation is concerned) could be resolved within a revision. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
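The ECE/smECE distinction raised above can be made concrete. Below is a minimal sketch of standard binned ECE (illustrative only: `binned_ece` is a hypothetical helper, not code from the paper, and smECE replaces the hard bin boundaries used here with a kernel-smoothed reliability estimate):

```python
def binned_ece(preds, labels, n_bins=10):
    """Standard binned expected calibration error: the bin-mass-weighted
    gap between the average prediction and the empirical label rate."""
    bins = [[] for _ in range(n_bins)]
    for p, y in zip(preds, labels):
        # clamp so that a prediction of exactly 1.0 lands in the top bin
        bins[min(int(p * n_bins), n_bins - 1)].append((p, y))
    n, ece = len(preds), 0.0
    for b in bins:
        if not b:
            continue
        avg_pred = sum(p for p, _ in b) / len(b)
        avg_label = sum(y for _, y in b) / len(b)
        ece += (len(b) / n) * abs(avg_pred - avg_label)
    return ece
```

Because the bin boundaries are hard cutoffs, this quantity can change discontinuously as predictions cross a boundary; roughly, that is the instability smECE was designed to smooth out, which is why conclusions drawn under one metric need not transfer to the other.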
Rebuttal 1: Rebuttal: We agree that recent theoretical consequences of multicalibration have provided more than sufficient motivation for an empirical analysis. In light of this observation, we point out that another contribution of our work is the benchmarking repository containing all experimental code (submitted with the supplemental material). We believe this will lower several barriers to future work in this direction. We address the weaknesses and questions below. We agree that a more thorough treatment of [1] is needed; this will be included in the next version. The connection to our work is subtle. In [1], the authors consider performing ERM over some C’ (superset of C), so that the resulting model is multicalibrated with respect to C. In their case, C’ is a family of neural networks (NNs), and C is some family of smaller NNs. In many of our experiments, we indeed perform (approximate) ERM over some family of NNs, and one might expect this to result in multicalibration with respect to our finite collection of groups G, which are easily computable by some class of smaller NNs. We find this to largely be the case in our tabular experiments (Observation 1). This is, as you said, one unique aspect of neural networks vs other model families: for NNs, it is therefore possible to obtain multicalibration guarantees over arbitrarily-complex groups by simply taking "large enough" networks (without making design choices specific to the group structure). We will clarify these points in our revision. We also agree that a more faithful account of connections to robustness is in order, especially in light of recent theoretical work drawing some connections between these areas [2, 3]. We note that [2] considers corruptions in subgroups of the training data rather than general distribution shift. In general, we view multi-group robustness as a notion of robustness which, like multicalibration, aims to “respect” multiple groups simultaneously. 
Perhaps the closest connection between multigroup robustness and multicalibration is that they intuitively may share similar mechanisms (as the reviewer noted). That is, the mechanistic question in both cases is to understand which groups can be easily computed from the predictor's "features" – i.e., which groups are easy for the predictor to distinguish. We will elaborate on this point (and the relations to the mentioned works) in our revision. Apart from this similarity, we think it is necessary to delineate practical multicalibration concerns from practical robustness concerns. As noted in Appendix C.2, it is common for empirical works in subgroup robustness to consider groups of a fixed label, and to consider worst group accuracy. These groups alone are not meaningful from the perspective of multicalibration, however, as multicalibration requires more refined probability estimates even when expected labels lie in (0,1). We will explain this relationship and cite [4] (with further discussion) in the next version of our paper. [1]: Błasiok et al. Loss minimization yields multicalibration for large neural networks. [2]: Hu et al. Multigroup Robustness. [3]: Wu et al. Bridging multicalibration and out-of-distribution generalization beyond covariate shift. [4]: Rosenfeld et al. (Almost) provable distribution shift via Disagreement Discrepancy. >**W2**: Findings may not hold for "non-standard" group definitions. We give a detailed discussion of this in our **global reviewer response**, to which we kindly refer. Briefly, the fact that our results may not hold for non-standard groups is inherent to the problem formulation. It is impossible for a fixed predictor to be multicalibrated with respect to all groups (as you point out), and thus we must decide which groups to consider when studying multicalibration. Certain theory works chose to consider "all groups computable by a small circuit/DNN" (e.g. [Blasiok et al, 2023] and [Hébert-Johnson et al. 2017]). 
Since our focus is on the application side, we instead chose groups that are relevant to fairness applications, which are often "intuitive" groups as you said. At a high level, the prior theory works and our empirical work use group definitions at two ends of the spectrum (from "all circuits" to "intuitive groups"). We could hope to eventually understand exactly *which* groups our algorithms are multicalibrated on, but this goal does not yet appear in reach; our work is just the first step. We will be sure to clarify these limitations, and the importance of group definitions, in our revision. >**W3**: Discrepancy in HKRR / HJZ guarantees and calibration measurement. We agree that the distinction between the theoretical guarantees of HKRR / HJZ (via ECE / L-p calibration) and what we measure (smECE) is important. We report both ECE and smECE for this reason, but will do better to specify which of our takeaways / observations are particular to the metric of interest. >**W4**: Other issues: "Besides not enunciating group choices in the main part as just discussed, the algorithms that are actually used are not explained in any sort of pseudocode, nor are their objectives/claimed bounds ever stated" We intend to add two more discussions to our next version: (1) a clear high-level description of the two families of multicalibration algorithms and their guarantees (including what kind of calibration metric they guarantee performance on), as well as pseudocode for the algorithms (most likely relegated to the appendix); and (2) a description of how and why we selected the groups for each dataset. We will also include a more explicit discussion about the limitations of our work, regarding the questions you posed in W2. We thank the reviewer for their time and thoughtful feedback. If the pressing concerns have been sufficiently addressed, we kindly ask the reviewer to consider adjusting their score by taking the rebuttal into account. Thank you! 
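To preview what such a high-level description might look like, here is a heavily simplified, illustrative sketch of HKRR-style iterative patching (not the authors' implementation: it omits the density threshold λ, holdout splitting, and other details of the actual algorithm, and `patch_multicalibrate` is a hypothetical name):

```python
def patch_multicalibrate(preds, labels, groups,
                         alpha=0.05, n_buckets=10, max_rounds=100):
    """Repeatedly find a (group, prediction-bucket) cell whose average
    prediction deviates from its average label by more than alpha, and
    shift all predictions in that cell toward the empirical label mean."""
    preds = list(preds)
    for _ in range(max_rounds):
        updated = False
        for g in groups:  # each g is a boolean membership mask
            for b in range(n_buckets):
                cell = [i for i in range(len(preds))
                        if g[i] and min(int(preds[i] * n_buckets),
                                        n_buckets - 1) == b]
                if not cell:
                    continue
                avg_pred = sum(preds[i] for i in cell) / len(cell)
                avg_label = sum(labels[i] for i in cell) / len(cell)
                if abs(avg_pred - avg_label) > alpha:
                    shift = avg_label - avg_pred
                    for i in cell:
                        preds[i] = min(1.0, max(0.0, preds[i] + shift))
                    updated = True
        if not updated:  # no violated cell remains
            break
    return preds
```

Each round searches for a cell with calibration error above alpha and patches it; the loop stops once no violated cell remains (or after `max_rounds`, since this naive version need not converge when groups overlap). For example, if two disjoint groups share the prediction 0.5 but have labels all-1 and all-0 respectively, one round moves their predictions to 1.0 and 0.0.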
--- Rebuttal Comment 1.1: Title: Response to Authors Comment: Thank you for your responses to me and to the other reviewers, as well as for the general response. There, you gave several more experiments that are good to see, and provided several detailed discussions of the various moving parts involved in evaluating the benefits of multicalibration on various kinds of data; I agree with most of your points there. One "fine-print" point that I would like to highlight is that when you say that you will clarify that your work focuses on feature-defined groups that would be useful to practitioners --- as opposed to various potentially more convoluted group families such as ones that might predict which region in the data the predictor might be miscalibrated on --- it is important to further stress that these two categories of group families are not entirely disconnected from each other. Indeed, for some less-fancy (and even some fancy) predictors that people may be using in practice, it could well be the case that one might be able to "eyeball" (or deduce on a small holdout set) some feature-defined groups that would also explicitly target the model's inaccurate predictions region --- e.g. one might suspect that the model mispredicts a lot on some unprotected but still natural demographic group, and include that group in the family. Such a scenario would fall squarely into the "practical" category of applications of multicalibration, and thus cannot be discounted. Therefore, I urge you to be careful in your formulation of what kinds of groups your paper considers --- given my above point, I would refrain from saying that your statements are made about the general class of intuitive feature-based groups that practitioners might use, and steer it more into the direction of saying that you consider group families that may seem inherently important to protect regardless of which predictor is used on the data. 
(Again, to reiterate, my point is: just like the groups you experiment with, groups that capture inefficiencies of the predictor don't have to be complex or unnatural; they may well be feature-defined and relatively natural, and practitioners may intentionally or unintentionally come across such groups.) Also, I would place extra emphasis in the revision on the fact that "practitioners" refers to something like "fairness practitioners"; indeed, empirical research in other areas may lead to other groups being used that this paper was not focusing on. Finally, regarding raising my score, as it stands, there are many key updates needed in terms of the phrasing of the paper's setting, its takeaways, its literature connections, etc. While I believe the authors are on the right track in this regard, judging by the rebuttal discussions, the result of incorporating so many of these updates will inherently be high-variance in terms of the ensuing readability, focus, etc. of the updated paper. In this way, I am feeling more positive about this paper now but I am still in the "borderline" region; so I hope this explains why I would prefer to keep my current score. --- Reply to Comment 1.1.1: Comment: Thank you again for these points and all of your thoughtful feedback. They will certainly help improve the clarity of discussion and limitations in our next draft!
Summary: This work presents an empirical analysis of multicalibration post-processing applied to a variety of models for binary classification, ranging from decision trees to transformers. They perform experiments with tabular, image, and text data on datasets of varying sizes. They compare model group calibration to overall calibration after (a) ERM only, (b) calibration, and (c) multicalibration. They observe that, while multicalibration post-processing does improve multicalibration, in many cases, simple calibration improves it similarly. Additionally, they find that models that are already calibrated after ERM are often multicalibrated as well. They observe differences in metrics used to calculate multicalibration, varying with dataset size, that have implications for the situations in which ECE and smECE are appropriate to use. Finally, they find that for large datasets, such as those for vision and language, HJZ shows bigger improvements over HKRR. Strengths: This is a very expansive and thorough empirical study that has lots of practical takeaways for the ML fairness community that could be very valuable. Particularly in the context of larger models, where post-processing can be very costly, knowing what methods are likely to work is very important. The paper's organization is clear, results are easy to understand, and relevant takeaways are clearly highlighted. The paper builds well on prior work while providing novel key insights about how different methods perform in different settings. Weaknesses: The paper could be more self-contained. While it's understandable for some results to be in the appendix given the number of experiments run in this paper, I worry that too much may have been pushed to the appendix (though this may be unavoidable). Some details, such as how groups are chosen for language tasks, are also not clear when consulting the appendix, particularly for readers who are not familiar with how these groups are normally chosen. 
Details like this would help readers to better understand the contribution of the paper. Technical Quality: 4 Clarity: 4 Questions for Authors: 1. What are the groups for text experiments? 2. Figure 2 claims that MLPs are among the models that do not benefit from post-processing, however the table appears to show better multicalibration after post-processing. Is this claim correct? Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: The limitations of the paper seemed well addressed to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and feedback. We will address the weaknesses and questions. >**W1**: The paper could be more self-contained. While it's understandable for some results to be in the appendix given the number of experiments run in this paper, I worry that too much may have been pushed to the appendix (though this may be unavoidable). We agree with this point, and will include more figures in the body of our next version, and experiment with compressing some of the data presented in the main paper. Under each figure, we will also include a direct reference to related appendices. We believe that we can further compress some of the experimental results and (especially) figures in order to include more in the main paper. >**W2**: Some details, such as how groups are chosen for language tasks, are also not clear when consulting the appendix, particularly for readers who are not familiar with how these groups are normally chosen. Details like this would help readers to better understand the contribution of the paper. We also agree with this point. We will add a clear description of our group selection criteria to the body, as well as a high-level description (and pseudocode in the appendix) of the multicalibration algorithms considered. We determined groups by “sensitive” attributes—individual characteristics against which practitioners would not want to discriminate. In many cases, such attributes naturally included race, gender, and age, and then varied with available information. This included all tabular datasets as well as the CelebA dataset (image). On the other datasets, samples are not necessarily in correspondence with individuals (e.g., postings on an internet platform, or slides of tissue). 
On datasets where samples are not in correspondence with individuals—Camelyon17, Amazon Polarity, and Civil Comments—we define groups based on available information that can be viewed as “sensitive” with respect to the underlying task. In other words, we define groups such that an individual or institution using a predictor which is miscalibrated on this group may be seen as discriminating against the group. Ideally, a social media service will not be underconfident when predicting the toxicity of posts mentioning a minority identity group; such predictions may allow hate speech to remain on the platform. Similarly, we would want a shopping platform to promote product listings fairly: a predictor which outputs mis-multicalibrated product ratings may boost listings for technology products proportionally to their true ratings but unfairly ignore the positive reviews of book listings. Aside from these heuristics, we also required that groups composed at least a 0.005-fraction (.5%) of the underlying dataset. This imposed a degree of uniformity in our group selection across all datasets. >**Q1**: What are the groups for text experiments? The text datasets are Amazon Polarity and Civil Comments. Groups for these datasets can be seen in Appendix C.6 (page 19) but briefly can be described as the presence of certain relevant words or phrases. We will also improve the group explanations in the main text to avoid confusion. >**Q2**: Figure 2 claims that MLPs are among the models that do not benefit from post-processing, however the table appears to show better multicalibration after post-processing. Is this claim correct? In the table of Figure 2, in the first group of 4 rows, it can be seen that MLP HJZ obtains a max smECE of 0.076 ± 0.018, while MLP ERM obtains a max smECE of 0.086 ± 0.015. 
While we agree that the max smECE of HJZ is less than that of ERM here, this difference is not statistically meaningful: each of these means is within its standard deviation of the other. We give a more detailed discussion of this point in response to Q4 of reviewer Zyam, to which we kindly refer. --- Rebuttal Comment 1.1: Comment: Apologies for the late reply. Thank you for these clarifying points! I'll be keeping my score, but believe that adding the additional experiments and improved group descriptions to the paper would greatly improve the clarity.
Summary: The paper investigates the effectiveness of multicalibration post-processing across various datasets and machine learning models. The study finds that models which are inherently calibrated often exhibit multicalibration without post-processing, while uncalibrated models benefit from multicalibration techniques. Traditional heuristic calibration methods can sometimes achieve outcomes similar to multicalibration algorithms. The research provides empirical evidence and practical insights for applying multicalibration in real-world scenarios, emphasizing the sensitivity of multicalibration algorithms to hyperparameter choices and the varying necessity of multicalibration depending on the model and dataset. Strengths: 1. The writing is clear: The introduction effectively establishes the need for this empirical study, and the algorithms and results are presented clearly and concisely. 2. This is the first comprehensive study to evaluate different multicalibration methods. 3. The experimental results are thoroughly discussed, and the observations are listed in detail. Weaknesses: 1. The paper does not introduce any new algorithms, datasets, or theoretical analysis, but instead focuses on evaluating existing algorithms. 2. The experimental results are primarily observations and lack a deeper discussion of the reasons behind the observed phenomena. 3. Including experiments for all datasets in the main paper would help readers better understand the common findings and differences. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. For lines 179-181, in what sense is HKRR sample inefficient? I thought multicalibration for binary problems was known to be sample efficient. 2. Could you provide more reasons why HKRR outperforms HJZ in language and vision datasets? 3. What is the rationale behind choosing a large fraction of data for calibration? From my understanding, it should be about 10% or even smaller when the overall dataset is small. 4. 
How do the results differ across different datasets for tabular data? I noticed that in the HMDA dataset, the post-processing algorithms perform strictly better than ERM. Do MC algorithms work better for large datasets? Could you add more experiments if this hypothesis makes sense? 5. Did you try to create more complicated groups for the tabular experiments? It is possible that the groups are simple enough that even ERM works well. 6. For Figure 3, could you explain why the max group smECE increases when the calibration fraction gets larger? Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: Yes, the authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for the detailed comments and feedback. Please kindly refer to the global response for **Q2** and **Q5**. We address the other weaknesses and questions below. **W1**: We agree with the reviewer that we do not introduce any new algorithms. The expressed purpose of our work is to empirically study existing algorithms and attempt to determine their potential for practical use in a host of important (existing) regimes. There are already a number of theoretical works examining and proposing multicalibration (MCB) algorithms which we cite and discuss (Hébert-Johnson et al., Gopalan et al. 2022a/b, Bastani et al. 2022 etc.). We believe that determining when and where these algorithms are useful in practice is an extremely important next step for the research area. More broadly, we respectfully disagree that focusing on existing algorithms is a weakness of papers in machine learning. It is worth noting the existence of several important empirical studies whose main contributions were not novel methods or theoretical results. For example, the main contributions of the important works [1, 2, 3] are almost purely focused on examining existing algorithms from an empirical lens. Such evaluation research is especially important for algorithms involved in high-stakes decision making in the real world (including multicalibration e.g., Barda et al. 2021). [1]: Grinsztajn et al., Why do tree-based models still outperform deep learning on typical tabular data? [2]: Zhang et al., Understanding Deep Learning Requires Rethinking Generalization. [3]: Minderer et al., Revisiting the Calibration of Modern Neural Networks. **W2**: We agree that we do not provide much explanation for the reasons behind the observed phenomena. However, we envision our work as an important step to understand when and where to focus additional study in order to better understand the applicability of MCB algorithms. 
Before our work, it was not at all obvious when and where MCB may or may not be helpful to the standard machine learning practitioner. Importantly, we believe that we already provide numerous helpful insights and words of caution. **W3**: We agree that many experiments are relegated to the appendix. This was due to space constraints: our goal was to have a cohesive message from over 40K MCB runs in the main body of the paper. We hoped to distill practical take-aways and best-practices for practitioners wishing to apply and measure the effectiveness of MCB post-processing algorithms. If accepted, we plan to better compress experimental results from the appendix and include them in the main body, with in-line references to appendix links in the captions. **Q1**: HKRR / HJZ are efficient in that the number of samples required is polynomial in the parameters of the algorithm. However, the degree of this polynomial may be large. For example, sample complexity bounds of the HKRR algorithm are $O(1/(\alpha^4 \cdot \lambda^{1.5}))$. Unfortunately, this quickly grows to the order of millions of samples for moderately small choices of $\alpha$ (multicalibration error) and $\lambda$ (bucket width). We view our work as investigating whether we really need this large number of samples to empirically provide reasonable multicalibration levels in practice. **Q3**: We vary the amount of (hold-out) calibration data widely due to the sample complexity issues discussed in Q1. In particular, the required samples to guarantee theoretical convergence is very large, and a priori, it was not clear at all how much data should be given to MCB algorithms in practice. This is because MCB algorithms generally require many more samples than simple calibration (which often performs well with only 10% of the data). Indeed, we find that in some cases for tabular data, MCB performs best with 40-80% sized calibration sets. 
More generally, this is due to a trade-off between the effectiveness of MCB post-processing and the (latent) multicalibration properties of ERM. We will include this discussion in the revision. **Q4**: While results differ in minor ways between the different tabular datasets, we observe that models which are calibrated “out of the box” tend to not benefit from multicalibration post-processing in a statistically meaningful way (Observation 1). On four of the five tabular datasets, looking only at the max-smECE metric of MLP ERM, the standard deviations overlap with the opposing mean (MEPS, ACS Income), or (mean ± sd) values come within 0.003 of each other (Bank Marketing, Credit Default). As you point out, on HMDA (the remaining fifth dataset), max-smECE of MLP ERM is improved drastically by post-processing. On HMDA, however, MLP ERM is poorly calibrated to begin with: relative to the smECE of MLP HKRR (0.005 ± 0.001), or even the smECE of MLP Temp (0.022 ± 0.006), the smECE of MLP ERM (0.049 ± 0.006) is quite large. Therefore, observation 1 does not apply (as strongly) to the MLPs trained on HMDA. On the four other tabular datasets, MLP ERM achieves smECE much closer to that of the post-processed MLPs. Outside of MLPs, we find that miscalibrated models (e.g. Naive Bayes, SVMs) tend to benefit from multicalibration post-processing throughout our tabular experiments (Observation 2). We can provide specific examples if necessary (omitted here due to space). **Q6**: Note that an increase in calibration fraction implies less data is used to train the base predictor, and more data saved for post-processing. As you point out, max smECE increases as we increase the calibration fraction in Fig 3. This is likely since the base predictor has poorer generalization when trained on less data, and this performance is not rectified by multicalibration post-processing. Please also see **Q3**. We thank the reviewer for their time and thoughtful feedback. 
If the pressing concerns have been sufficiently addressed, we kindly ask the reviewer to consider adjusting their score by taking the rebuttal into account. Thank you! --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response! Regarding Q1: I believe it would be beneficial to make this point clear in the final version. Regarding Q3: Yes, I definitely think it would be useful to include this in the discussion. Regarding Q4: It's important to be careful when stating observations and to list results across all datasets in the main paper so that readers can easily verify them. I agree with reviewer rHYn’s point that "However, in most cases, it's unclear why and when these statements hold true," and "Therefore, I suggest the authors carefully reconsider how each statement is phrased." I'm glad to see the authors provided a detailed response to this. Overall, while some of the reasoning behind the observations remains unclear, given that this paper is the first comprehensive study to evaluate different multicalibration methods and includes a large number of experiments, and considering that the authors plan to include additional discussion based on feedback from the reviewers, I will raise my score to 5. --- Reply to Comment 1.1.1: Comment: Thank you for your response and engagement! To address your Q4, we will better compress the results from all datasets in the main body as discussed in M2Di weakness 1. We appreciate the feedback and believe it will result in improvements to the presentation!
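The sample-complexity point from Q1 above, that the bound $O(1/(\alpha^4 \cdot \lambda^{1.5}))$ grows to millions of samples for moderately small parameters, is easy to verify numerically. The hidden constant is taken to be 1 here, which is an assumption purely for illustration.

```python
def hkrr_sample_bound(alpha, lam):
    # O(1/(alpha**4 * lam**1.5)) with the hidden constant taken as 1 (assumption)
    return 1.0 / (alpha ** 4 * lam ** 1.5)

# alpha: target multicalibration error, lam: bucket width
print(f"{hkrr_sample_bound(0.05, 0.1):.2e}")  # about 5.06e+06, i.e. millions of samples
```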
Rebuttal 1: Rebuttal: **Global author response** We thank all the reviewers for the thorough reviews and detailed comments. We respond to two common reviewer comments here. 1. Firstly, reviewers **Zyam Q5**, **nkHR W2**, and **M2Di W2** all had questions and comments about group selection. We have combined our discussion here. We also have some additional discussion in the response to **nkHR W2**. There are (at least) two important properties of groups: group size and group “complexity”. We will certainly provide further discussion of both for all datasets in the next version of the paper. The minimum group size is a parameter which has implications for the overall sample complexity of multicalibration (in particular, it introduces a $1/\gamma$ factor into known sample complexity upper / lower bounds. $\gamma \in [0,1]$ is the size --- as a fraction of the dataset --- of the smallest group in the collection). For this reason, we restricted ourselves to groups which were >0.5% of the entire dataset. Note that we consider groups spanning 0.5% all the way to 70-80% of the data. We deem this variety (details in appendix C.4) sufficient to capture the varying sizes. Group “size” can also be thought of as somewhat correlated with group complexity: one can imagine that some (but not all) sufficiently small groups are defined by more complex (boolean) functions of the features. More broadly, group “complexity” is more difficult to capture. As reviewer **nkHR** points out, we can (nearly) always construct groups for which our predictor is not multicalibrated against. These groups may be as complex as the underlying predictor (since they can, for example, capture where the predictor misclassified test points). However, it is not clear whether this is a meaningful set of groups to multicalibrate against. 
To avoid such discussion, we intentionally determine groups by available features which we hypothesize practitioners may deem important or “sensitive” to the underlying prediction task. In our tabular data and CelebA, such attributes naturally include race, gender, and age, and further vary with the available features already in the data. On the other datasets, however, samples are not necessarily in correspondence with individuals (e.g., postings on an internet platform, or slides of biological tissue). In these cases, we define groups such that an individual or institution using a predictor which is miscalibrated on this group may be seen as discriminating against the group. As a consequence of these selection criteria, our results only apply to these sensitive groups about which a practitioner would be concerned in practice. We will make this limitation clearer in the next draft of the paper. It is entirely possible that defining more complex — but perhaps less practically motivated — groups would yield different results than ours, and this remains an interesting direction for future work. We do not believe, however, that such possibilities detract from our overall message. We will further emphasize in our next draft that our findings hold only with respect to how we have defined groups "simply" or "intuitively". 2. Secondly, reviewers **Zyam Q2** and **rHYn** have questions about 1) Why HKRR outperforms HJZ in language and vision datasets; and 2) Why HJZ is not robust to hyperparameter choices for language and vision data (potentially counter to Observation 5). These are both questions about the dynamics of the multicalibration post-processing algorithms, and we address them with the following discussion. The dynamics of both of these algorithms are complex, and an answer to these questions is difficult to support from our results alone. In particular, the performance of the algorithms depends on _at least_ the following parameters: 1. 
Distribution of initial predictions output by the models (i.e. input to post-processing algorithm). 2. Choice of hyperparameters for HKRR and HJZ. 3. “Complexity” or “expressiveness” of the groups. 4. Amount of samples. In our work, we focus mainly on (2) and (4). We discuss (3) at length above, but teasing apart exactly how (1) and (3) contribute to the performance of the multicalibration algorithm is certainly an interesting avenue for future work. We believe that part of the reason for the superiority of HKRR in language/vision data may be explained within the lens of (2) and (4). As stated in the paper (line 296), due to computational constraints and the added dimension of choosing how much data to save for calibration, we search a large — but not all-encompassing — collection of hyperparameters for each of the multicalibration algorithms tested. With regards to dataset size, three of the four vision/language datasets are noticeably larger than the tabular datasets (by at least 100k samples). It is possible that HKRR generally performs better on such dataset sizes, or that the optimal hyperparameters for HJZ change significantly in this larger sample regime. Stability of HJZ is a more challenging question to answer, since it likely has to do with internal game dynamics in the learning algorithm. In particular, by choosing various online learning algorithms, HJZ implements a _family_ of multicalibration post-processing. We test all algorithms from this family with varying parameters. It is possible that the family of algorithms itself somehow has a shift in stability as we scale to a large data regime (4). However, as analyzing even a singular algorithm (e.g. HKRR) is challenging, we are not sure that speculating about the stability of a family of algorithms is currently possible, and hence, leave this to future work. 
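For readers unfamiliar with how these post-processing algorithms operate, here is a heavily simplified, HKRR-flavoured patching loop. This is our own sketch, not the published algorithm: real HKRR involves careful statistical estimation, bucket handling, and convergence guarantees that this toy version omits. It repeatedly finds a (group, bucket) cell whose mean prediction deviates from the mean label by more than a tolerance and shifts predictions in that cell.

```python
import numpy as np

def patch_multicalibrate(preds, labels, groups, alpha=0.02, n_buckets=10, max_rounds=100):
    """HKRR-flavoured patching sketch (illustrative only): while some
    (group, prediction-bucket) cell is miscalibrated by more than alpha,
    shift the predictions in that cell toward the cell's label mean."""
    p = preds.astype(float).copy()
    for _ in range(max_rounds):
        updated = False
        for g in groups:
            buckets = np.minimum((p[g] * n_buckets).astype(int), n_buckets - 1)
            for b in range(n_buckets):
                cell = g[buckets == b]
                if cell.size == 0:
                    continue
                gap = labels[cell].mean() - p[cell].mean()
                if abs(gap) > alpha:
                    p[cell] = np.clip(p[cell] + gap, 0.0, 1.0)
                    updated = True
        if not updated:
            break
    return p
```

Even this toy version shows why the calibration-set size matters: every (group, bucket) cell needs enough samples for its label mean to be estimated reliably.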
We thank all the reviewers for the helpful feedback, and we will certainly incorporate additional discussions (as indicated) in future versions of our work. Thank you!
NeurIPS_2024_submissions_huggingface
2024
Stochastic Extragradient with Flip-Flop Shuffling & Anchoring: Provable Improvements
Accept (poster)
Summary: The paper studies unconstrained (strongly-)monotone finite-sum minimax problems. The authors introduce a new scheme called SEG-FFA which: i) for a given epoch runs stochastic extragradient (SEG) with possibly two different stepsizes ii) uses flip-flop shuffling per epoch iii) uses the average of the epoch initialization and the last iterate as the initialization for the next epoch With $K$ being the number of epochs and $n$ being the number of finite-sum components, they show: - For the monotone case, a $\tilde{\mathcal O}(1/K^{1/3})$ rate for the squared norm of the operator. - For the $\mu$-strongly-monotone case, a $\mathcal O(\exp(-\mu K^\epsilon)\Vert z^0_0 - z^\star \Vert^2 + \tfrac{1}{nK^{4-5\epsilon}})$ rate, where $\epsilon \in (0,2/3)$. Strengths: The paper reads very well in terms of laying out what problem is being addressed and the current landscape of the literature (including the extended discussion in the appendix). The reasoning behind their algorithmic construction is also very clear. Weaknesses: My main concern is with the strength of the results: - Regarding the monotone case (Thm. 5.4): The argument seems to be that one _full epoch_ of SEG-FFA can accurately approximate one step of deterministic EG. In other words, the rate has no dependency on $n$. Why not instead simply run full-batch EG, which would attain the much faster $\mathcal O(1/K)$ rate? (e.g., using gradient accumulation to make it memory efficient) - Regarding the strongly-monotone case (Thm. 5.5): There is a tradeoff in the rate. If we want a fast rate for the stochastic term $\mathcal O(\tfrac{1}{nK^{4-5\epsilon}})$ we need to take $\epsilon \rightarrow 0$, in which case the linear rate $\mathcal O(\exp(-\mu K^\epsilon)\Vert z^0_0 - z^\star \Vert^2)=\mathcal O(\exp(-\mu)\Vert z^0_0 - z^\star \Vert^2)$ cannot be made small. Even if we ask to just match the lower bound $\mathcal O(\tfrac{1}{nK^{3}})$ on SGDA-RR and SEG-RR we run into trouble. 
Pick $\epsilon=1/5$ to get $\mathcal O(\exp(-\mu K^{1/5})\Vert z^0_0 - z^\star \Vert^2 + \tfrac{1}{nK^{3}})$. If we plot the two terms (ignoring the constants) we will see that the linear-rate term dominates the polynomial term even for very large $K$ (100M+). In other words, it is unfair to compare only the polynomial terms in this case, and the benefit of SEG-FFA over e.g. SEG-RR is much less clear when the linear term is taken into account. Other concerns: - Regarding Thm. 4.1: It seems surprising that it is possible to show divergence for e.g. SEG-US (with two stepsizes), since this is the scheme that [24] shows converges almost surely. How is it possible for these two results to coexist? - Currently the rates do not state dependencies on the noise parameters. It is good practice to include $\rho$ and $\sigma$ in the rates, as we should expect different dependencies. - Table 1 currently does not include the dependency on the initialization in the rate for SEG-FFA, which can be misleading. I suggest also including the rate $\mathcal O(\exp(-\mu^2 K)\Vert z^0_0 - z^\star \Vert^2 + \tfrac{1}{nK^2})$ of e.g. SEG-RR [17, Thm. 2.1] to make the smaller dependency on $K$ in the linear rate apparent. Minor: - SEG-RR/SEG-FF is already introduced using two stepsizes, so it is slightly confusing when EG+ is introduced in (7) and contrasted with EG as being a version with two stepsizes. You might also want to briefly comment that, strictly speaking, the method you consider is not an instance of [16], as you consider a _larger_ second stepsize as opposed to a smaller second stepsize. Related work: - Apart from mentioning KM iterations, it might be worth referencing the Lookahead algorithm for minimax https://arxiv.org/pdf/2006.14567 (LA–ExtraGrad, for which two different stepsizes are also considered, is essentially equivalent to SEG-FFA but without flip-flop). Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. 
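The claim that the linear-rate term dominates even for very large $K$ is easy to sanity-check numerically; the values $\mu = 1$, $n = 100$, and a unit initialization norm below are arbitrary illustrative assumptions, not constants from the paper.

```python
import math

def linear_term(K, mu=1.0):
    # exp(-mu * K**eps) with eps = 1/5, initialization norm taken as 1
    return math.exp(-mu * K ** 0.2)

def poly_term(K, n=100):
    # 1 / (n * K**3)
    return 1.0 / (n * K ** 3)

# at K = 10**8 the "fast" linear-rate term is still the larger of the two
print(linear_term(1e8) > poly_term(1e8))  # True
```

With these (assumed) constants the polynomial term only takes over around $K \approx 10^9$, consistent with the "100M+" remark above.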
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper lacks appropriate discussion on the rates obtained in the main results of Thm. 5.4-5.5 (see weaknesses). Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
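The scheme summarized in the review can be sketched in a few lines on a toy linear monotone operator $F_i(z) = M_i z$. The stepsize choice $\alpha = \eta/2$, $\beta = \eta$ matches the description of SEG-FFA's inner iteration; the test problem, the matrices, and all names below are our own illustrative assumptions, not the paper's code.

```python
import numpy as np

def seg_ffa(mats, z0, eta, num_epochs, rng):
    """Sketch of SEG-FFA on F_i(z) = M_i z (so the solution is z* = 0).

    Per epoch: same-sample SEG steps with alpha = eta/2 and beta = eta,
    flip-flop shuffling (a random permutation followed by its reverse),
    then anchoring: average the epoch's first and last iterates."""
    n = len(mats)
    z = z0.astype(float).copy()
    for _ in range(num_epochs):
        z_start = z.copy()
        perm = rng.permutation(n)
        for i in np.concatenate([perm, perm[::-1]]):  # flip-flop order
            w = z - 0.5 * eta * mats[i] @ z           # extrapolation (alpha = eta/2)
            z = z - eta * mats[i] @ w                 # update (beta = eta)
        z = 0.5 * (z_start + z)                       # anchoring
    return z
```

On a strongly monotone toy instance, e.g. $M_i = a_i I + c_i J$ with $a_i > 0$ and $J$ a rotation, the iterates contract toward the solution for small $\eta$.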
Rebuttal 1: Rebuttal: We appreciate the effort made by the reviewer in inspecting our manuscript. 1. **Monotone Case: Why Not Use Full Batch? (W1):** This point, raised by the reviewer, is valid. However, in practice, shuffling-based stochastic methods are prevalent; it is not an exaggeration to say that they are now the *de facto* standard. While shuffling-based stochastic minimization is now relatively well understood, there has been limited progress on studies that theoretically support using shuffling-based stochastic minimax optimization methods. In particular, it was unknown whether shuffling is beneficial in the monotone setting. The main purpose of our study is to enhance our understanding of this already widely used sampling scheme. Just to add our two cents: in minimization problems, recall that SGD is asymptotically slower than GD, as SGD only converges sublinearly on strongly convex problems while GD enjoys linear convergence. In spite of that, SGD and its variants are used everywhere in practice, whereas full-batch GD appears rarely. We believe the machine learning community has reached a consensus that using SGD has benefits beyond textbook convergence rates, such as better generalization properties and implicit regularization; see for example `[A]` and `[B]`, and we refer to `[C, Chapter 9]` for an in-depth discussion. It is natural to expect similar benefits in minimax problems if stochastic methods are used, but to reach that state, we first have to understand the convergence properties of stochastic minimax methods. Thus, we believe our work is an important step forward in this direction. 2. **Tradeoff in the Convergence Rate for Strongly Monotone Problems (W2):** We have realized that the convergence rate in the original submission was derived suboptimally regarding $\varepsilon$, and a slight modification in the final few steps of the proof makes the exponent of the exponentially decaying term depend on $K$ instead of $K^\varepsilon$. 
Please refer to the general comments for the details. 3. **Theorem 4.1 vs. Hsieh et al. (W3):** Hsieh et al. [24] use independent-sample SEG, while we consider same-sample SEG. Thus, the two results can coexist. Our focus lies on the same-sample strategy as it combines more naturally with shuffling-based schemes. Please refer to the paragraph on lines 57–65, and the footnote on the same page. 4. **Dependencies on Noise Parameters (W4):** Please refer to the general comments. 5. **On How Rates are Demonstrated in Table 1 (W5):** Considering that an improved rate (regarding the exponentially decaying term) has now been obtained, we can now safely say that omitting the exponentially decaying term does not give a decisive advantage to us. In fact, as we can see from Table 1 in [17] as an example, it is customary to hide the logarithmic dependencies when one summarizes the convergence rates into a table, while deferring the exact formulae to the theorem statements. We hope that the reviewer will agree that, given the improved exponential term, omitting the exponential terms in the table is no longer misleading. 6. **On Using Two Stepsizes (W6):** We are indeed using the term EG in a broad sense, allowing it to use two stepsizes. Hence we introduced the update rule of EG using two stepsizes from the beginning, in (2). However, while we admittedly have not made this point clear enough, we do not assume any explicit constraints on $\alpha_k$ and $\beta_k$, unlike [16] which only considers when $\alpha_k \geq \beta_k$. Also, while the inner iteration of SEG-FFA does use $\alpha = \beta/2$, thanks to the anchoring step, the overall update of the epoch sums up to the "standard" EG as introduced by Korpelevich [27], modulo a small noise. Hence, our method is not *completely* different from [16]. We understand that these might cause slight confusion. We will try to find a way to present this more clearly in our revision. 7. 
**Additional Related Work (W7):** Thank you for notifying us about the related work. We agree with you that there is an interesting resemblance between SEG-FFA and the mentioned Lookahead methods, and we will add a comment on it in our revision. We hope that our response resolves your concerns. We would greatly appreciate it if you could consider re-evaluating our paper. Thank you. > [A] On Large-Batch Training for Deep Learning, Keskar et al. (2017) > > [B] On the Generalization Benefit of Noise in Stochastic Gradient Descent, Smith et al. (2020) > > [C] Understanding Deep Learning, S.J.D. Prince (2023) --- Rebuttal Comment 1.1: Comment: I thank the authors for the rebuttal, which addresses my concern regarding the strongly monotone case, where there now seems to be a significant improvement in terms of the rate. Trusting that the derivation is correct and that the constant $b$ does not have strange dependencies (like e.g. on $\Vert z^0-z^* \Vert^2$), I have increased my score. I still consider the results in the monotone case somewhat unsatisfying, since the rate $\mathcal O(1/K^{1/3})$ is worse than the $\mathcal O(1/K)$ rate of EG. In other words, there seems to be no reason for running SEG-FFA if we know that only monotonicity holds: - For this reason I would downplay the monotone case by instead focusing on SEG-FFA provably benefiting the strongly monotone case, while providing some kind of "fallback" guarantee in the monotone case. - Since we are in the finite-sum case it is possible to run deterministic methods (e.g. EG) even with the same memory footprint by simply accumulating gradients. I suggest comparing against EG in Table 1. > "there is an interesting resemblance between SEG-FFA and the mentioned Lookahead methods" I would argue that it's more than a resemblance, since Lookahead has also been connected with the Krasnosel’skii-Mann iteration (see e.g. https://arxiv.org/pdf/2310.13459). 
Compared with Lookahead with EG+ as the base optimizer, the main differences seem to be: - SEG-FFA picks a smaller extrapolation stepsize (so that the inner loop is no longer strictly the EG+ scheme of [16] – this is why I think it is important to differentiate SEG-FFA from EG+ regarding the inner loop. Your modification is crucial!) - SEG-FFA uses flip-flop shuffling Both are important modifications, so I am not trying to argue against the contributions, but I think making the connection precise might be valuable. Especially considering that both Lookahead for minimax and the original Lookahead algorithm for minimization (https://arxiv.org/pdf/1907.08610) introduce the scheme in order to reduce variance. --- Reply to Comment 1.1.1: Comment: We appreciate your thoughtful reevaluation of our work. We agree that the strongly monotone case results now deserve more emphasis than in the original manuscript given the improvement, and that the current convergence rate in the (star-)monotone case is not ideal. While we fully respect your perspective, we also hope you understand our assertion that the monotone case result is worth some highlighting, as it effectively demonstrates our second-order matching framework and offers new theoretical explanations on how the SEG variants behave in the more practical setting where stochastic gradients are used throughout the optimization procedure. In the revision, we will adjust our tone, taking into account what deserves more attention and what remains relatively limited. Regarding the connection between our work and the Lookahead method, we thoroughly agree with you; we condensed our viewpoint into the phrase “interesting resemblance” because of the character limit. Your additional suggestions will help us make a more detailed comparison in our revision. Please feel free to add any further comments or questions.
Summary: In this paper the authors study stochastic extragradient methods for solving unconstrained minimax convex-concave problems with a finite-sum structure. In particular, various shuffling schemes (random reshuffling without replacement, flip-flop, uniform sampling with replacement) are investigated, and it was shown that stochastic extragradient with them can lead to divergent behavior when $f$ is merely convex-concave. The authors proceed to propose stochastic extragradient with flip-flop + anchoring, which successfully converges in convex-concave problems with rate $O(1/K^{1/3})$ and rate $O(1/nK^{4-5\epsilon})$ for $\epsilon < 2/3$ when $f$ is strongly-convex-strongly-concave. This is supplemented with a lower bound on random-reshuffling-based stochastic gradient descent-ascent and stochastic extragradient in the same setup, demonstrating the advantage of the proposed algorithm. The analysis is based on controlling the degree of approximation to a known convergent method. Some numerical experiments are presented to illustrate the practical performance. Strengths: The results in this paper removed several limitations from the previous work and shed light on algorithm design and the choice of appropriate sampling schemes for stochastic min-max problems. I believe the contributions are of great interest to the community, and the resulting proposal is conceptually simple and easy to implement with minimal modification of known algorithms. The exposition is clear, with a nice explanation of the design principle (second-order matching to a known deterministic convergent method), and the relevant literature is thoroughly surveyed and compared against. The technical claims are sound with supporting numerical evidence. Weaknesses: I've listed a few questions in the section below, but I don't think there are major weaknesses with the paper. 
Minor comments: - In the second set of equations in (2), I'd recommend changing the subscript in nabla to u and v instead, or some other notation to avoid confusion. - In the definition of Assumption 3.3, it might make more sense to call each $F_i$ $L$-Lipschitz (corresponding to (i)) and each $F_i$ $M$-smooth (corresponding to (ii)) - this would possibly make things more consistent with the convex optimization literature. In Line 178, should it be $f$ being $L$-smooth or $F$? Technical Quality: 3 Clarity: 3 Questions for Authors: - It seems like this gap of $1/nK^3$ for SEG-RR vs. $1/nK^{4-5\epsilon}$ for SEG-FFA only exists in a very narrow range of $\epsilon$. Is $\epsilon$ tied to other parts of the results? If not, I'd recommend just optimizing over $\epsilon$ and stating the final complexity instead. - Is there any special meaning behind the $1/2$ in (12)? Would other convex combinations of the iterates work as well? Or could other anchor points, or even more than one anchor point, help with higher-order matching? - In Theorem 5.4, can one afford a constant (and larger) stepsize if, instead of asking for a guarantee on $\min_k ...$, we do tail averaging? - The parameters $\rho$ and $\sigma$ from Assumption 3.4 do not appear in Theorems 5.4 and 5.5? - What's the difficulty of generalizing this framework from same-sample to independent-sample extragradient methods? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, it is adequately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the positive feedback and thoughtful comments. 1. **On the Comments on the Notations Used (W1, W2):** The reviewer has made valid points; let us share our thoughts on them. The second set of equations in (2) may cause a bit of confusion, as the reviewer notes. However, considering the problem formulation in Eq. $(1)$, the subscripts $x$ and $y$ are to represent which argument we are taking the derivative with respect to. In contrast, $u$ and $v$ in Eq. $(2)$ are actual points we evaluate the gradients at. This notation is consistently used throughout the paper. Regarding the second suggestion, we realized that saying that $F_i$ is $L$-Lipschitz and $M$-smooth indeed sounds more natural and is more consistent with the existing literature. We will make the necessary changes in the revision. In line 178, as the first equation in Asmp. 3.3 asserts that $F$ is $L$-Lipschitz, we would have $f$ being $L$-smooth, as in the original submission. 2. **Existence of $\varepsilon$ in the Convergence Rate (Q1):** Please refer to the general comments. 3. **Other Possible Variants of SEG-FFA (Q2):** The weights $1/2$ in the convex combination $(12)$ are judiciously chosen so that SEG-FFA achieves second-order matching, under the choice of $\alpha_k = \eta_k /2 $ and $\beta_k = \eta_k$. Our Prop. D.1 demonstrates what happens when we choose other convex combinations. It is indeed possible to choose $\alpha_k$, $\beta_k$, and the weight $\theta$ in the convex combination differently, as long as Eq. (29) achieves an error of $\mathcal{O}(\eta^3)$ à la Prop. 5.3. 4. **On the Idea of Tail Averaging (Q3):** Based on our current understanding, the benefits of the averaging technique (in minimization problems) mostly come from applying Jensen's inequality. However, in minimax problems where a possibly nonconvex function $||F(\cdot)||$ is used as an optimality measure, we are not sure how averaging could be applied. 
We would also like to remark that this inability to apply Jensen's inequality to $||F(\cdot)||$ was overlooked in [17], leading to an invalid convergence result. 5. **Appearance of the Noise Parameters $\rho$ and $\sigma$ (Q4):** Please refer to the general comments. 6. **Extending to Independent Sampling (Q5):** As we work in the context of random reshuffling schemes, the correct update rule for the independent-sample setting is less clear than in the same-sample setting. The main challenge in generalizing our framework to independent-sample SEGs thus lies in interpreting it in terms of random reshuffling, rather than in technical difficulties. Nonetheless, should we aim to devise an independent-sample method, the simplest way would be to use two independently chosen permutations, one for the extrapolation step and the other for the update step. For such a method, Eq. (8) reads $w_i^k = z_i^k - \alpha T_i^k z_i^k$, $z_{i+1}^k = z_i^k - \beta T_{\pi(i)}^k w_i^k$ for some permutation $\pi:[n]\to[n]$. Accordingly, the right hand side of Eq. (11) then becomes ${\alpha\beta} \sum_{j=0}^{N-1} DT_{\pi(j)}^k (z_0^k) T_j^k z_0^k + {\beta^2} \sum_{ i < j } DT_{\pi(j)}^k (z_0^k) T_i^k z_0^k$. It is not difficult to check that the exact same choices of $\alpha$, $\beta$, and $\theta$ used in the same-sample SEG-FFA will also achieve second-order matching in the independent-sampling case. As our convergence analyses are valid for generic methods that achieve the second-order matching property (recall the statements of Theorems F.5 and G.4), we expect that it is possible to derive analogous results for the independent-sample variant of SEG-FFA, but there remain details to be checked of course. --- Rebuttal Comment 1.1: Comment: Thanks for the explanation. I intend to keep my rating. --- Reply to Comment 1.1.1: Comment: Thank you for your time and effort in reviewing our work. If you have anything you would like to discuss further, please feel free to leave additional comments.
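The independent-sample variant described in the response above can be written out in code. Everything below is a hypothetical reading of the update $w_i^k = z_i^k - \alpha T_i^k z_i^k$, $z_{i+1}^k = z_i^k - \beta T_{\pi(i)}^k w_i^k$ on toy linear operators $F_i(z) = M_i z$, with flip-flop shuffling and anchoring kept as in SEG-FFA; the names and the test problem are our own assumptions.

```python
import numpy as np

def independent_sample_epoch(mats, z, eta, rng):
    """One epoch of a hypothetical independent-sample SEG-FFA variant:
    the extrapolation step uses one flip-flopped permutation, the update
    step uses an independently drawn one, and the epoch ends with anchoring."""
    n = len(mats)
    z = z.astype(float).copy()
    z_start = z.copy()
    p, q = rng.permutation(n), rng.permutation(n)
    extra_order = np.concatenate([p, p[::-1]])    # components T_i for extrapolation
    update_order = np.concatenate([q, q[::-1]])   # components T_{pi(i)} for updates
    for i, j in zip(extra_order, update_order):
        w = z - 0.5 * eta * mats[i] @ z           # w_i = z_i - alpha T_i z_i, alpha = eta/2
        z = z - eta * mats[j] @ w                 # z_{i+1} = z_i - beta T_{pi(i)} w_i, beta = eta
    return 0.5 * (z_start + z)                    # anchoring
```

On a strongly monotone toy instance this variant also contracts toward the solution for small $\eta$, in line with the expectation stated above that the same stepsize choices carry over.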
Summary: This paper proposes a new algorithm, SEG-FFA, which converges for the convex-concave minimax problem, while existing algorithms like SEG-RR and SEG-FF diverge for the same class of problem. Moreover, the authors show that SEG-FFA can better approximate EG in comparison to SEG-RR and SEG-FF. Strengths: - It proposes a new stochastic algorithm, SEG-FFA, which simultaneously incorporates flip-flop shuffling and anchoring to prove convergence. To the best of my knowledge, this is the first work to successfully combine these two techniques. - Usual Halpern iterations take a convex combination with the initial iterate, while SEG-FFA takes a convex combination with the initial iterate of the epoch. This idea is interesting and certainly paves the way for further research. - I find section 5.1 very interesting, where the authors explain how well different algorithms can approximate EG. Weaknesses: - Lipschitz Hessian in Assumption 3.3 is too restrictive, and the authors agree. I don't know if practical ML applications satisfy such assumptions. - In each epoch, SEG-FFA and SEG-FF make $4n$ oracle calls compared to $2n$ oracle calls made by SEG-RR and SEG-US and $n$ oracle calls by SGDA-US and SGDA-RR. Therefore, the comparison made in the second plot of Figure 1 is unfair. I recommend having the number of oracle calls on the $x$-axis to compare these algorithms fairly. - In Theorem 5.4, authors propose a decreasing step size of the order $\mathcal{O} \left( 1/ k^{1/3} \log k \right)$ while in plot 1 of the experiment, they use a different step size. Why not use the step size proposed in the Theorem? Technical Quality: 3 Clarity: 3 Questions for Authors: - What is $\eta_k$ in Theorem 5.4? The step sizes of SEG-FFA are $\alpha_k$ and $\beta_k$ in Eq 5. - Is it possible to relax Assumption 3.3? - Multiple papers show the convergence of SEG and Stochastic Past Extragradient [1] method for monotone problems with increasing batch sizes. 
Moreover, a work [2] proposes BCSEG (bias-corrected SEG), which proves convergence without increasing batch sizes. I recommend comparing these works with your Theorem 5.4 in terms of the number of oracle calls to achieve a given accuracy $\epsilon$. [1] "Single-Call Stochastic Extragradient Methods for Structured Non-monotone Variational Inequalities: Improved Analysis under Weaker Conditions." [2] "Solving stochastic weak Minty variational inequalities without increasing batch size." Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: NA Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
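The contrast this review describes, plain simultaneous stochastic steps diverging on convex-concave problems while extragradient converges, can be reproduced on the textbook bilinear example $f(x, y) = xy$; this is a generic illustration with our own toy stepsizes, not the paper's SEG variants.

```python
import numpy as np

def gda_step(z, eta):
    """Simultaneous gradient descent-ascent on f(x, y) = x*y,
    i.e. z - eta * F(z) with F(x, y) = (y, -x)."""
    x, y = z
    return np.array([x - eta * y, y + eta * x])

def eg_step(z, eta):
    """Extragradient: evaluate F at the extrapolated point z - eta * F(z)."""
    wx, wy = gda_step(z, eta)
    x, y = z
    return np.array([x - eta * wy, y + eta * wx])

z_gda = np.array([1.0, 1.0])
z_eg = np.array([1.0, 1.0])
for _ in range(100):
    z_gda = gda_step(z_gda, 0.1)   # norm grows by sqrt(1 + eta^2) per step
    z_eg = eg_step(z_eg, 0.1)      # norm shrinks by sqrt(1 - eta^2 + eta^4) per step
```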
Rebuttal 1: Rebuttal: We appreciate the effort made by the reviewer in inspecting our manuscript. 1. **On the Lipschitz Hessian Assumption (W1):** As both our manuscript and the reviewer have mentioned, the Hessian Lipschitz assumption is a somewhat unusual one, not widely assumed in the literature. Still, it is not too difficult to find machine learning problems that satisfy the Hessian Lipschitz condition. As a basic example, consider the logistic loss function $\ell(w;a) = \log(1+e^{aw})$. It is not difficult to verify that $\ell'''(w;a) = \frac{a^3 e^{aw} \left(1 - e^{aw}\right)}{\left(1 + e^{aw}\right)^3} = \frac{a^3}{1+e^{aw}} \cdot \frac{e^{aw}}{1+e^{aw}} \cdot \tanh(-\frac{1}{2}aw)$ and thus $|\ell'''(w;a)| \leq |a|^3$. Now, when given a dataset $(x_i, y_i), i=1, \dots, n$, recall the typical form of the loss function $\frac{1}{n} \sum_{i=1}^n \ell(w;-y_ix_i)$. Since $|\ell'''(w;-y_ix_i)|$ is upper bounded by a constant $\max_{i=1,\ldots,n} |y_i x_i|^3$ for all $i$, the Hessian Lipschitzness follows. 2. **Relaxations of Assumption 3.3 (Q2):** While we unfortunately do not have a definite answer to this question, let us at least share our intuitions on this assumption. The reason we impose the Hessian Lipschitzness assumption originates from the analysis of the flip-flop sampling scheme (*cf.* [41]). In short, the desideratum is that the change in the Hessian of $f$ remains controllable throughout an epoch, so that after aggregating the updates we get a good enough approximation of a deterministic EG step. If such control is possible with other assumptions then one would be able to replace Assumption 3.3 with it. We would also like to make a final remark that as we discussed in lines 199–202, the lower bound results that are obtained under our set of assumptions also serve as valid lower bounds for weaker settings that are implied by ours, so having the Lipschitz Hessian assumption is not a weakness in this scope. 3. 
**Comparing Methods on the Same Plot (W2):** We would like to remark that, in the plot(s), we do account for the fact that SEG-FF and SEG-FFA make twice as many oracle calls as the other methods. That is why for those two methods we plot the values of $\|\|Fz_0^{t/2}\|\|$ instead of $\|\|Fz_0^{t}\|\|$; please refer to the captions below the plots. 4. **Choice of Stepsizes in the Experiments (W3):** We thank the reviewer for bringing up this point. We realized that the rationale behind our choice of the $\mathcal{O}(1/k^{0.34})$ stepsize schedule has not been thoroughly discussed in the submission, so please allow us to elaborate further. Strictly speaking, to make use of the convergence result of Thm. 5.4—or more precisely, Thm. G.4—one should choose $\eta_0$ so that it satisfies both equations $(121)$ and $(122)$. However, if one closely examines the proof of Thm. G.4, it is not difficult to realize that choosing the stepsizes so that $\eta_k = \Omega({1}/{k^q})$ for $\frac{1}{3} < q < \frac{1}{2}$ while $\eta_k \leq \frac{\eta_0 \sqrt[3]{2} \log 2}{(k+2)^{1/3} \log (k+2) }$ still yields convergence to a stationary point, at the cost of a slightly slower rate of convergence $\mathcal{O} ({1}/{K^{1-2q}} )$. But then, since the decay of $1/k^q$ is faster than that of $1/(k^{1/3}\log k)$, simply taking $\eta_k = {\eta_0^*}/{k^q}$ for a suitably small $\eta_0^*$ will suffice, as there will exist some $K_0$ such that $k \geq K_0$ implies the inequality $\eta_k \leq \frac{\eta_0 \sqrt[3]{2} \log 2}{(k+2)^{1/3} \log (k+2) }$, and we may simply ignore the first $K_0$ iterates to get a convergence guarantee. We will add a more formal version of the discussion above in our revision. 5. **Notation in Thm. 5.4 (Q1):** Please refer to the general comments. 6.
**Additional Comparisons with the Existing Works (Q3):** If we were to compare the numbers of gradient oracle calls at face value, SEG-FFA would in fact require $\tilde{\Omega}(1/\epsilon^3)$ calls, as its rate of convergence is $\tilde{\mathcal{O}} (1/K^{1/3})$, whereas, for example, SPEG would require $\Omega(1/\epsilon^2)$ calls as claimed on p. 34 of their paper. However, we would like to raise the question of whether it is fair to compare our SEG-FFA in its current form to methods that allow increasing batch sizes. SEG-FFA works with a constant batch size (of 1), which is common in practice, by directly coping with the noise induced by the gradient variance. Conversely, increasing the batch size is not only rarely done in practice but also essentially sidesteps the complexities of dealing with gradient variance, as a batch size of $b$ reduces the variance by a factor of $b$. Please recall Thm. H.4, where we showed that methods known to converge with increasing batch sizes may no longer converge if the batch size is kept constant. Thus, if we really were to compare SEG-FFA to such methods, SEG-FFA should be allowed to have increasing batch sizes as well. This change not only reduces the RHS of Assumption 3.4 by the size of the batch but also reduces the number of passes in an epoch—which is $n$ for single-sample-in-a-batch SEG-FFA—according to how the batches are formed. Consequently, the size of the noise term $\|\|r^k\|\|$ will be reduced, and SEG-FFA would enjoy a much better convergence rate. We believe that convergence analyses incorporating this change should be conducted as future work. On a different note, as we have listed in Table 2, the work on BC-SEG+ ([39] in our paper) assumes uniformly bounded gradients. This, as discussed in lines 190–196, is too strong to be a realistic assumption, so we have not included a thorough review of it in the current submission.
Still, we agree that supplementing additional comparisons with [39], and with the related work by Choudhury et al. suggested by the reviewer, will make clearer how our paper is positioned within the existing literature. We will modify Section B.1 accordingly. --- Rebuttal Comment 1.1: Comment: Thank you for the reply. Please add the above details in the updated version. I will raise my score to 6. --- Reply to Comment 1.1.1: Comment: We are pleased that your concerns have been resolved. We will ensure that the additional details discussed in the comments are well incorporated into the revision. If you have any further questions or comments, we would be happy to address them.
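The third-derivative bound for the logistic loss discussed in point 1 of the rebuttal above can be checked numerically; the script below is a sanity check (the grid of $(w, a)$ values is an arbitrary choice of ours), verifying the factorized closed form of $\ell'''$ against a finite difference and confirming $|\ell'''(w;a)| \leq |a|^3$.

```python
import numpy as np

def ell(w, a):
    """Logistic loss l(w; a) = log(1 + exp(a*w))."""
    return np.log1p(np.exp(a * w))

def ell_3rd(w, a):
    """Closed form of l'''(w; a) in its factorized form:
    a^3 * 1/(1+e^{aw}) * e^{aw}/(1+e^{aw}) * tanh(-a*w/2)."""
    s = 1.0 / (1.0 + np.exp(a * w))
    return a**3 * s * (1.0 - s) * np.tanh(-0.5 * a * w)

# |l'''(w; a)| <= |a|^3, since each of the three factors has magnitude at most 1.
bound_holds = all(
    abs(ell_3rd(w, a)) <= abs(a) ** 3 + 1e-12
    for a in (-2.0, -0.5, 0.7, 3.0)
    for w in np.linspace(-5.0, 5.0, 201)
)

# Cross-check the closed form against a central third-order finite difference.
h, w0, a0 = 1e-3, 0.4, 1.3
fd_3rd = (ell(w0 + 2 * h, a0) - 2 * ell(w0 + h, a0)
          + 2 * ell(w0 - h, a0) - ell(w0 - 2 * h, a0)) / (2 * h**3)
```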
Summary: The paper studies various same-sample SEG algorithms under different shuffling schemes, including SEG-US, SEG-RR and SEG-FF. All three algorithms can diverge when $f$ is convex-concave. Furthermore, the authors discuss the underlying cause of the nonconvergence of the three algorithms. Moreover, the authors propose a novel stochastic extragradient method named SEG-FFA, which is SEG amended with the flip-flop shuffling scheme and anchoring. The proposed algorithm enjoys improved convergence guarantees, and the convergence rate is derived under different conditions on $f$. Finally, the authors conduct numerical experiments to verify the convergence of SEG-FFA. Strengths: 1. The paper proposes a novel algorithm, SEG-FFA, for minimax problems, which achieves a better convergence rate when $f$ is strongly-convex-strongly-concave. Furthermore, the algorithm converges when $f$ is convex-concave, while other baseline algorithms diverge in this setting. 2. The paper provides comprehensive proofs of the divergence of the other baseline algorithms when $F$ is merely monotone. 3. The proposed algorithm SEG-FFA can match extragradient up to second-order terms and get an error of $\mathcal{O}(\eta)$, where $\eta$ is the stepsize. As a result, SEG-FFA achieves convergence on monotone problems. 4. Sufficient numerical experiments verify the convergence results and the superiority of SEG-FFA. Weaknesses: 1. Due to the presence of the anchoring step, the initial point seems to be more important, which implies that the algorithm is more dependent on the selection of the initial point. 2. How should the parameter $\epsilon$ concerning convergence in Table 1 and Theorem 5.5 be selected for practical applications and specific problems? 3. Table 1 does not provide the lower bound for SEG-FFA in the convex-concave setting. Is there such a lower bound on the iteration complexity of SEG-FFA? Technical Quality: 3 Clarity: 3 Questions for Authors: 1.
Section 4 explains the reasons why simple shuffling is not enough for convergence on monotone problems. However, why the added anchoring step makes SEG-FFA converge is not sufficiently demonstrated in the proof, especially compared with SEG-FF. 2. What is the difference between the stepsizes $\alpha_k, \beta_k$ of SEG-FF and $\eta_k$ of SEG-FFA? 3. In Figures 4(a) and 4(b), the convergence result of SEG-FFA is better than that of the other baseline algorithms. However, the convergence rate appears slower. 4. There are a few typos in the main text and appendix. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: 1. The length of the paper is excessive, especially the appendix section. 2. The experiments could consider demonstrating the performance of SEG-FFA on more large-scale problems. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We appreciate the reviewer for the constructive feedback and thoughtful comments. 1. **Importance of the Initial Point due to the Anchoring Step (W1):** For any optimization method, its behavior is more or less influenced by the choice of the initial point, and our SEG-FFA is not an exception. However, as we have shown (in Prop. 5.3) that the total update of SEG-FFA made over an epoch is equal to an update made by a deterministic EG up to a small $\mathcal{O}(\eta^3)$ noise, we think the dependency of SEG-FFA on the initial point is as minimal as that of the deterministic EG. 2. **On $\varepsilon$ in the Convergence Rates (W2):** Please refer to the general comments. 3. **Lower Bounds on SEG-FFA (W3):** Unfortunately, at the moment we are not aware of lower bound complexities for SEG-FFA. To the best of our knowledge, our work is the first to introduce the idea of flip-flop + anchoring, which has not been investigated even in the context of minimization problems. A search for an explicit lower bound would also be an interesting direction for future work. 4. **Why SEG-FFA is better than SEG-FF (Q1):** The explanations of how SEG-FFA outperforms SEG-FF are detailed thoroughly in Section 5 through the lens of Taylor expansion matching, with a summary in lines 257–261, rather than in the proofs. To summarize, SEG-FFA can achieve a second-order matching to the deterministic EG, leaving an error as small as $\mathcal{O}(\eta^3)$. In contrast, SEG-FF is at best a first-order matching method, since an attempt to make it a second-order matching method leads to it approximating a nonconvergent method. Please refer to Section 5.1.2, and the related Appendices D and E, for the details. 5. **Difference in the Notations for Stepsizes across the Methods (Q2):** Please refer to the general comments. 6. **Interpretations of Fig. 4 (Q3):** The reviewer has made a correct observation.
However, as discussed in lines 1519–1521, these results do not counter our theoretical analyses. Indeed, the convergence results in the strongly monotone setting are established under a fixed choice of the time horizon $K$ (see Thm. 5.5). More precisely, the results for strongly monotone $F$ concern how small $\|\| F z \|\|$ can eventually be made after running for $K$ epochs. So, for the initial few steps, the *speed* of convergence may be slightly suboptimal. 7. **Typos (Q4):** Thank you for notifying us about the typos. We will try our best to fix them. We would also greatly appreciate it if you could point out some of them to help us correct them. 8. **Paper is Too Long (L1):** It is true that the paper is longer than average, but we hope that the reviewer understands the length of the paper as an unavoidable side effect of encompassing multiple rigorous proofs with technical details. 9. **Additional Experiments on Large-Scale Problems (L2):** Due to the nature of our paper focusing more on the theoretical results, we admittedly did not give much consideration to additional experiments. Still, we thank the reviewer for the suggestion, and will consider which additional experiments would be most beneficial to our paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for providing detailed replies to all my concerns. The authors' discussions on the practical algorithms and theoretical investigation are thoughtful. --- Reply to Comment 1.1.1: Comment: We are glad that our explanations have cleared up your concerns. If there are further comments or questions, we would be glad to continue our discussion.
Rebuttal 1: Rebuttal: We thank all the reviewers for their constructive feedback. Here, we would like to discuss some important issues commonly raised by the reviewers. 1. **On the Convergence Rate in the Strongly Monotone Setting ([k7BS] W2, [hmmd] Q1, [19M7] W2):** One of the common questions raised by the reviewers was about the parameter $\varepsilon$ in the convergence rate derived in the strongly monotone setting. Admittedly, in our original submission, we were more or less satisfied with getting a rate that is provably faster than the $\Omega(1/nK^3)$ lower bound of SEG-RR, hence the rate we derived retained an additional obscure parameter $\varepsilon$. However, reading the comments from the reviewers, and following the suggestion of one of the reviewers, we revisited our analyses to see if we can optimize the rate with respect to $\varepsilon$. Surprisingly, with a slight modification in the proofs, we were able to remove the dependency on $\varepsilon$ and derive a rate of $\mathbb{E} \|\| z_0^K - z^*\|\|^2 \leq \exp \left( - \frac{b \mu^2 K}{L^2} \right) \|\|z_0^{0} - z^*\|\|^2 + \mathcal{O}\left(\frac{\left(\log(n^{1/4} K)\right)^{4}}{ n K^{4}} \right)$ for a constant $b$ that does not depend on $\mu$, $L$, $n$, and $K$. Since we cannot make a revision of our paper during the discussion phase, please allow us to briefly explain below how our proofs can be modified to obtain the new rate of $\tilde{\mathcal{O}}(1/nK^4)$. We are more than happy to take follow-up questions if anyone wishes for further verifications/clarifications. First we modify Lemma F.4, so that the number of epochs $K$ is no longer involved: more precisely, we show that using a constant stepsize (possibly independent of $K$) it holds that $\|\| F z^k\|\| \leq \|\| F z^0\|\| + V_1 / (\mu L)$ for all $k = 0, 1, \dots$. The trick is simple. In the current form of Lemma F.4 we show that $\|\|Fz^{k+1}\|\|$ is bounded by $\|\|Fz^k\|\|$ plus some error term; see Eq. (95).
However, by choosing a smaller stepsize it is not difficult to improve this bound to $(1-\frac{\mu \eta n}{10}) \|\|Fz^k\|\|$ plus some error term. This "exponential decay with an error" type of recurrence also arises in the analysis of SGD on strongly convex problems (see, *e.g.*, Thm. 5.8 in arXiv:2301.11235), so we can use standard techniques with mathematical induction to unroll this recurrence and get the claimed bound. With this modification, in Thm. F.5, we then have that the inequality right below line 1057 holds whenever $\eta$ is sufficiently small, regardless of the choice of $K$. Again unravelling this "exponential decay with an error" type of recurrence, one gets $\mathbb{E} \|\| z^K - z^* \|\|^2 \leq \left(1 - \frac{1}{2} \mu \eta n \right)^K \|\| z^{0} - z^* \|\|^2 + \frac{4+2\mu \eta n}{\mu^2} \cdot \eta^{2a-2} n^{2a-3} V_2$, for any $K \geq 0$. Compare this with the first line of Eq. (101), which was obtained by a slightly looser unravelling of the same recurrence. However, unlike Eq. (101), where $\eta$ was already constrained to be $\mathcal{O}(1/K^{1-\varepsilon})$ due to (the unmodified) Lemma F.4, we have no dependency of $\eta$ on $K$ yet. So we can now choose $\eta$ to have the best dependency on $K$ that minimizes the bound. We set $\eta$ to be $\mathcal{O}(\log(K) / K)$, inspired by the convergence analyses made on p. 26 of [17]. It is then easy to show the $\tilde{\mathcal{O}}(1/nK^4)$ rate claimed above. 2. **Dependence of Convergence Rates on the Noise Parameters $\rho$ and $\sigma$ ([k7BS] Q4, [hmmd] Q4):** As mentioned by the reviewers, it is indeed good practice to write down the dependencies on the problem-dependent constants as explicitly as possible. However, the dependencies of the convergence rates on $\rho$ and $\sigma$ are quite involved (see, *e.g.*, Theorems E.9 and E.13), so we decided to highlight the concise versions of the rates in the main text, while deferring the exact details to the appendices.
We hope the reviewers generously understand this as our effort to find the sweet spot between readability and rigor. 3. **What is $\eta_k$ in Theorem 5.4? ([jRnf] Q1, [19M7] Q2):** We admit that the relationship between the step sizes $\alpha_k,\beta_k$ and $\eta_k$ for the SEG-FFA was not clearly stated. Following Prop. 5.3, regarding equations (5) and (6) in the case of SEG-FFA, we selected step sizes $\alpha_k = \beta_k/2$ and $\beta_k = \eta_k$ for some sequence $\eta_k$, $k \geq 0$. This was implicitly assumed in Thm. 5.4 (and only written explicitly in Alg. 4). We will clarify this by adding that we choose $\alpha_k = \beta_k/2$ and $\beta_k = \eta_k$ in the statement of Thm. 5.4.
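The "exponential decay with an error" recurrence used in the modified proofs of this global rebuttal, $a_{k+1} \leq (1-c)\,a_k + e$, unrolls via a geometric series to $a_K \leq (1-c)^K a_0 + e/c$; the short script below checks this numerically with arbitrary illustrative constants, not the paper's actual quantities.

```python
# Unroll a_{k+1} = (1 - c) * a_k + e (the tight, equality case) and compare with
# the closed form and the standard bound a_K <= (1 - c)^K * a_0 + e / c.
c, e, a0, K = 0.1, 0.01, 5.0, 200

a = a0
for _ in range(K):
    a = (1.0 - c) * a + e

# Geometric series: a_K = (1-c)^K a_0 + e * (1 - (1-c)^K) / c.
closed_form = (1.0 - c) ** K * a0 + (e / c) * (1.0 - (1.0 - c) ** K)
bound = (1.0 - c) ** K * a0 + e / c
```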
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Enhancing Zero-Shot Vision Models by Label-Free Prompt Distribution Learning and Bias Correcting
Accept (spotlight)
Summary: This paper proposes a method named Frolic to improve the zero-shot performance of vision-language models like CLIP. The method focuses on two key challenges: enhancing prompt distribution learning and correcting inherent label bias in pre-trained models without relying on labeled data. Experimental results across 16 datasets demonstrate performance improvements of the method. Strengths: + The paper is well-written and the method is easy to follow. + The method effectively addresses label bias inherent in pre-trained models, improving the robustness and accuracy of zero-shot predictions. + Ablation experiments are comprehensive. Weaknesses: - Does the paper utilize the validation dataset in experiments? If not, this should be clarified. - How does the proposed method compare with the methods [1][2] in the few-shot settings? - Although Frolic is described as training-free, there may still be hyperparameters involved in the method; how should these parameters be chosen? - How does the prompt distribution learning contribute to the performance? Does it outperform the original CLIP? - Can the method be performed with a few labeled samples? [1] Black box few-shot adaptation for vision-language models. ICCV 2023 [2] Tip-Adapter: Training-free Adaption of CLIP for Few-shot Classification. ECCV, 2022 Technical Quality: 4 Clarity: 3 Questions for Authors: The discussion of the limitations of the methods is unclear. The authors should discuss it in depth. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Please refer to the weakness and Questions parts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1]** Does the paper utilize the validation dataset in experiments, if not, this should be clarified. **[A1]** We do not use a validation set because our method involves no hyperparameter search. *** **[Q2]** How does the proposed method compare with the methods [1][2] in the few-shot settings? **[A2]** The methods [1][2] refer to LFA and Tip-Adapter. These methods boost CLIP's generalization using labeled training samples. In contrast, our Frolic doesn't require any labeled samples. We evaluate our method against LFA and Tip-Adapter on ImageNet and its variants, where LFA and Tip-Adapter only utilize the labeled samples from the ImageNet dataset. The results below show that our method achieves the best performance, except compared to LFA on ImageNet.

| | ImageNet | ImageNet-A | ImageNet-V2 | ImageNet-R | ImageNet-Sketch |
| --- | :---: | :---: | :---: | :---: | :---: |
| LFA | 72.6 | 51.5 | 64.7 | 76.1 | 48.0 |
| Tip-Adapter | 70.5 | 49.8 | 63.1 | 76.9 | 48.1 |
| Frolic | 70.9 | 60.4 | 64.7 | 80.7 | 53.3 |

*** **[Q3]** Although Frolic is described as training-free, there may still be hyperparameters involved in the method, and how to choose these parameters? **[A3]** The only hyperparameter in our method is the tolerance $\epsilon = 0.01$, which governs the precision of numerical calculations rather than being a traditional model hyperparameter. As illustrated in Figure 4, when the iteration extends over 20 steps, $\epsilon$ approaches zero. While theoretically a lower $\epsilon$ is preferable for finer precision, we have selected $\epsilon = 0.01$ to ensure a practical balance between computational efficiency and convergence reliability. *** **[Q4]** How does the prompt distribution learning contribute to the performance? Does it outperform the original CLIP?
**[A4]** As compared in Table 3 ($f_{\rm c}$ and $f_{\rm g}$), the prompt distribution learning ($f_{\rm g}$, shown in the third row) significantly outperforms the original CLIP ($f_{\rm c}$, shown in the first row). For example, $f_{\rm g}$ increases the performance from 65.1% to 68.8% on the 10 datasets and from 68.7% to 69.8% on ImageNet. *** **[Q5]** Can the method be performed with a few labeled samples? **[A5]** Our method can effectively incorporate labeled samples by replacing the class descriptions when estimating ${\bf z}_1$ to ${\bf z}_K$ in Equation (3). To demonstrate this capability, we have conducted additional experiments in a 16-shot setting using the ViT-B/16 backbone; the results below show that the 16-shot setting achieves better performance than the class descriptions.

| | Pets | Flowers | Aircraft | DTD | EuroSAT | Cars | Food | SUN | Caltech | UCF |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| Frolic | 92.9 | 74.8 | 31.5 | 56.1 | 58.5 | 69.1 | 87.2 | 70.8 | 95.2 | 75.2 |
| Frolic with labeled samples | 94.5 | 98.3 | 51.4 | 71.7 | 89.6 | 83.5 | 87.9 | 76.6 | 96.3 | 85.2 |

*** **[Q6]** The discussion of the limitations of the methods is unclear. The authors should discuss it in depth. **[A6]** The quality and distribution of the data used in pre-training can significantly impact the performance of pre-trained models. Our method relies on the capabilities of pre-trained models for downstream tasks; if the pre-trained knowledge differs from the downstream tasks, the efficacy of our method may be limited. We have included a detailed discussion of limitations in the revised manuscript. Additionally, as you suggested in [Q5], few-shot learning presents a viable solution to these challenges. We appreciate your insights, which help us to further exploit the potential of our method. --- Rebuttal Comment 1.1: Comment: I'm grateful for your response.
All of my concerns have been resolved, and I now have a deeper understanding of your method. Therefore, I have raised my rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer jbMQ Comment: We do appreciate the reviewer's positive support, and we are pleased to take the reviewer's advice to improve our work.
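The role of the tolerance $\epsilon$ described in [A3] of the rebuttal above, a stopping criterion for a numerical iteration rather than a tuned model hyperparameter, can be illustrated with a generic fixed-point loop; the map used here (cosine) is a stand-in of ours, not Frolic's actual update.

```python
import math

def fixed_point(g, x0, eps=0.01, max_iter=100):
    """Iterate x <- g(x) until successive iterates differ by less than eps.
    eps plays the role of a stopping tolerance, not a model hyperparameter."""
    x = x0
    for k in range(1, max_iter + 1):
        x_new = g(x)
        if abs(x_new - x) < eps:
            return x_new, k
        x = x_new
    return x, max_iter

# Stand-in contraction map: the cosine fixed point x* ~ 0.739085.
x_star, n_iter = fixed_point(math.cos, 1.0, eps=1e-6)
```

Tightening `eps` only increases the number of iterations until the stopping test fires; it does not need to be tuned per dataset.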
Summary: This work aims to enhance the zero-shot performance of pre-trained vision-language models. Specifically, three strategies, including label-free prompt distribution learning, adaptive calibration, and correction of the pre-training label bias, are proposed and work together to improve the performance on different downstream tasks. Experiments on diverse tasks confirm the effectiveness of the proposed method. Strengths: 1. Enhancing zero-shot performance has attracted much attention recently. This work discusses multiple challenges in this process and proposes corresponding algorithms with theoretical analysis. 2. The proposed method is training-free without hyper-parameters, which makes it applicable to real applications. 3. Diverse downstream tasks are included for evaluating the performance of the proposed method. 4. The manuscript is well-written and easy to follow. Weaknesses: 1. The work assumes a uniform prior, i.e., that the classes are balanced. It would be better to discuss the scenario with an imbalanced prior. 2. Besides CLIP, there are many other vision-language pre-trained models, e.g., BLIP. It would be better to include other pre-trained models to evaluate the performance of the proposed method. 3. Why does $e_i$ follow a Gaussian distribution in L446? Technical Quality: 3 Clarity: 3 Questions for Authors: I do not have critical questions about this work. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1]** The work assumes a uniform prior, i.e., that the classes are balanced. It would be better to discuss the scenario with an imbalanced prior. **[A1]** We consider that most datasets, such as ImageNet and its variants, are uniformly distributed, leading us to assume $\pi_j=\frac{1}{K}$ in line 134 as a special case for simplicity. However, if the downstream datasets are imbalanced, we can derive the distribution vector $\bf \pi = Z^{-1}\bf \mu$ as outlined in Equation (5). This strategy allows our method to adapt flexibly to both balanced and imbalanced dataset distributions. *** **[Q2]** Besides CLIP, there are many other vision-language pre-trained models, e.g., BLIP, etc. It is better to include other pre-trained models to evaluate the performance of the proposed method. **[A2]** Thank you for your valuable suggestion. We have conducted additional experiments using BLIP with the ViT-B/16 backbone. The results presented below indicate that our method consistently outperforms BLIP across various datasets.

| | Pets | Flowers | DTD | EuroSAT | Cars | Food | SUN | Caltech | UCF |
| --- | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
| BLIP | 65.1 | 50.4 | 44.1 | 37.0 | 62.6 | 69.0 | 48.4 | 86.1 | 51.5 |
| +Frolic | 79.3 | 56.2 | 58.3 | 50.1 | 70.2 | 74.2 | 67.1 | 92.3 | 64.3 |

*** **[Q3]** Why does $e_i$ follow a Gaussian distribution in L446? **[A3]** The original $\bf x$ follows a Gaussian distribution ${\cal N}{({\bf z}_j, \Sigma)}$. We express the covariance matrix $\Sigma$ of $\bf x$ as an expansion in terms of its eigenvectors, as in Equation (24); we can then interpret the variables $e_i = {\bf u}_i^T(\bf{x}-\bf{z}_j)$ as coordinates in a new coordinate system defined by the orthogonal vectors ${\bf u}_i$. Since these transformations are linear and $\bf x$ is Gaussian, the transformed variables $e_i$ also follow a Gaussian distribution.
Specifically, as the variance of the $i$-th coordinate is $\lambda_i$, $e_i$ follows the Gaussian distribution ${\cal N}(0, \lambda_i)$. --- Rebuttal Comment 1.1: Comment: After reading the rebuttal and comments of other reviewers, I would like to raise my rating. --- Reply to Comment 1.1.1: Title: Response to Reviewer mJSR Comment: Thank you for considering our rebuttal and comments from your fellow reviewers. We appreciate your suggestions, which are crucial to the improvement of our work.
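The claim in [A3] above, that projecting a Gaussian $\mathcal{N}({\bf z}_j, \Sigma)$ onto the eigenvectors of $\Sigma$ yields coordinates $e_i \sim \mathcal{N}(0, \lambda_i)$, can be verified empirically; the covariance, mean, and sample size below are arbitrary choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# An arbitrary symmetric positive-definite covariance and class mean.
d = 4
A = rng.standard_normal((d, d))
Sigma = A @ A.T + d * np.eye(d)
z = rng.standard_normal(d)

# Eigen-expansion Sigma = sum_i lambda_i u_i u_i^T.
lam, U = np.linalg.eigh(Sigma)

# Draw x ~ N(z, Sigma) and project onto the eigenvectors: e_i = u_i^T (x - z).
x = rng.multivariate_normal(z, Sigma, size=200_000)
e = (x - z) @ U                     # column i holds samples of e_i

emp_mean = e.mean(axis=0)           # should be ~0 for every coordinate
emp_var = e.var(axis=0)             # should match the eigenvalues lambda_i
```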
Summary: In this paper, the authors introduce a promising method for enhancing zero-shot vision models through the utilization of prompt distribution learning and bias correction. The method is particularly notable for being training-free and label-free, which greatly simplifies the implementation process. The authors provide comprehensive experimental results, which effectively demonstrate the efficacy of the proposed method. Strengths: 1. This paper is easy to understand. 2. The paper provides an in-depth theoretical discussion and a thorough experimental evaluation demonstrating the effectiveness of the proposed method. Weaknesses: 1. The computation of second-order moments and the covariance matrix from the marginal distribution as discussed in Eq.(3) and (5) might be computationally intensive. 2. The method relies on the pseudo-labels estimated by CLIP. How does this process influence the final results? 3. The setting of results on the ImageNet variants dataset is unclear. 4. The results lack comparison with other prompt distribution methods, such as CoOp and CoCoOp. Technical Quality: 3 Clarity: 4 Questions for Authors: The author stated that their method does not require hyperparameter tuning. How does the proposed method compare to those requiring hyperparameter tuning? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Refer to the weakness and Questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1]** The computation of second-order moments and the covariance matrix from the marginal distribution as discussed in Eq.(3) and (5) might be computationally intensive. **[A1]** We have evaluated the running time, as presented in Table 5. The results show that while Frolic requires slightly more computation time (0.0078 seconds per sample) compared to the original CLIP (0.0072 seconds per sample), it yields improved performance, increasing from 68.7% to 71.1%. *** **[Q2]** The method relies on the pseudo-labels estimated by CLIP. How does this process influence the final results? **[A2]** Our method can improve performance regardless of the quality of the pseudo-labels. As evidenced in Table 4, the original CLIP achieves 92.95% accuracy on the Caltech dataset, indicative of high-quality pseudo-labels, and 24.8% accuracy on the Aircraft dataset, which represents lower-quality pseudo-labels. Our method effectively improves results across these varied quality levels, boosting accuracy from 92.95% to 95.1% on Caltech and from 24.8% to 31.4% on Aircraft. *** **[Q3]** The setting of results on the ImageNet variants dataset is unclear. **[A3]** We utilize the model learned on ImageNet to evaluate its performance across the ImageNet variants. We have provided details about each variant to ensure clarity in the revision: ImageNet-V2: sampled from the original ImageNet, including 10,000 images of 1,000 ImageNet categories. ImageNet-Sketch: including 50,000 images covering 1,000 ImageNet categories. ImageNet-R: containing renditions (e.g., art, cartoons, graffiti) of ImageNet classes, comprising 30,000 images from 200 ImageNet categories. ImageNet-A: collecting real-world images that are misclassified by ResNet-50, totaling 7,500 images from 200 ImageNet categories.
ObjectNet: including 50,000 test images with rotation, background, and viewpoint variations, overlapping 113 classes with ImageNet. These details have been included in the revised manuscript. *** **[Q4]** The results lack comparison with other prompt distribution methods, such as CoOp and CoCoOp. **[A4]** CoOp and CoCoOp require a training procedure with labeled samples, while our method does not involve any training.  To ensure a fair comparison, we compare our Frolic with CoOp and CoCoOp on cross-dataset results, where CoOp and CoCoOp are trained only with the labeled samples from the ImageNet dataset and then directly tested on the remaining datasets. The results shown below demonstrate that our Frolic not only avoids the complexities of training but also exhibits superior generalization performance compared to these methods. | | ImageNet | Caltech | Pets | Cars | Flowers | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | | --- | :---: |:---: | :---: |:---: | :---: |:---: | :---: | :---: | :---: | :---: |:---: | | CoOp | 71.5 | 93.7 | 89.1 | 64.5 | 68.7 | 85.3 | 18.4 | 64.1 | 41.9 | 46.3 | 66.5 | | CoCoOp | 71.0 | 94.4 | 90.1 | 65.3 | 71.8 | 86.0 | 22.9 | 67.3 | 45.7 | 45.3 | 68.2 | | Frolic | 73.3 | 95.4 | 93.6 | 71.7 | 74.3 | 88.2 | 31.8 | 72.8 | 58.0 | 65.3 | 75.9 | --- Rebuttal Comment 1.1: Comment: Thank you for the authors' rebuttal, which has addressed most of my concerns. I have raised my scores. --- Reply to Comment 1.1.1: Title: Response to Reviewer GPPc Comment: We're grateful for your appreciation and endorsement. Your review holds significant value for us, providing insightful feedback that helps enhance our work.
Summary: This paper presents Frolic, a label-free prompt learning method aiming to improve zero-shot visual recognition of vision-language models like CLIP. The method is built upon estimating distributions over prompt prototypes to capture diverse visual representations, followed by a bias correction. Experiment results demonstrate consistent improvement on various benchmarks over prior-art methods. Strengths: 1. The approach does not require access to large-scale datasets for estimation, which is required by much prior work. 2. The approach advances previous distribution estimation methods by removing the need for label information, making this line of work applicable to zero-shot recognition tasks. Weaknesses: While this approach does not need access to large-scale pretraining data, it seems to make some assumptions on downstream tasks. If my understanding is correct, it assumes: 1. the downstream task is balanced (line 134), which does not always hold true in reality. The long-tail nature of the real world does not guarantee that the testing class distribution will be balanced even though the benchmarks do. 2. A decent scale of testing data on downstream tasks is available (e.g., testing data does not come in an online fashion, one sample at a time) for estimating the beta term in bias correction. Would the approach for bias estimation still hold in an online testing scenario? I would assume this is more aligned with reality too. Technical Quality: 2 Clarity: 3 Questions for Authors: 1. The method does not seem to evaluate on all benchmark datasets as previous works do. For example, CIFAR10/100 and RESISC. While some of these datasets have saturated performance, it might still be helpful to include results there. 2. It's not clear to me why [24] is not listed in tables for comparison, as I would assume [24] to be the most direct baseline. It shares the same motivation as this paper, with the requirement of accessing pretraining data. 
Arguably, while LAION has been taken down, [24] already does the job for us. I would not consider an argument like "[24] requires large-scale pretraining data for estimating bias, thus we do not compare with it" valid, because this zero-shot visual recognition task itself is solely working with CLIP-ish models, and estimating bias from datasets like LAION sounds like a fair practice to me. Confidence: 5 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Authors have included such discussion Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **[Q1]** the downstream task is balanced (line 134), which does not always hold true in reality. The long-tail nature of the real world does not guarantee that the testing class distribution will be balanced even though the benchmarks do. **[A1]** We acknowledge that real-world data often exhibits an imbalanced distribution. The balanced assumption in line 134 is a special case for simplicity. In our method, if the downstream datasets are imbalanced, we can derive the distribution vector $\bf \pi = Z^{-1}\bf \mu$ as outlined in Equation (5). This strategy allows our method to adapt flexibly to both balanced and imbalanced dataset distributions. *** **[Q2]**  A decent scale of testing data on downstream tasks is available (e.g., testing data does not come in an online fashion, one sample at a time) for estimating the beta term in bias correction. Would the approach for bias estimation still hold in an online testing scenario? I would assume this is more aligned with reality too. **[A2]** Thank you for your insightful question. Our method can be easily extended to an online scenario. This involves updating the estimation $S$ incrementally as each new test example is processed. Specifically, we first initialize the matrix $S$ with the identity matrix. Suppose we receive the $n$-th test sample with the predicted label $j$ and the predicted probability $\bf p$; we first update ${\mathbf{s}}_j = \frac{n-1}{n}{\mathbf{s}}_j + \frac{1}{n}\mathbf{p}$. Then we compute the estimated $\beta$ and update $f\_{\text{d}}$. We have conducted additional experiments in the online setting to validate this extension, and the results below show that the online Frolic achieves comparable performance to the original Frolic. 
| | Caltech | Pets | Cars | Flowers | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | Average | | --- | :---: | :---: |:---: |:---: |:---: |:---: | :---: |:---: | :---: | :---: | :---: | | Frolic | 95.4 | 93.6 | 71.7 | 74.3 | 88.2 | 31.8 | 72.8 | 58.0 | 65.3 | 75.9 | 72.7 | | Frolic(Online) | 94.3 | 93.1 | 71.4 | 74.5 | 88.1 | 31.5 | 73.0 | 57.4 | 64.5 | 75.1 | 72.3 | *** **[Q3]** The methods did not seem to evaluate on all benchmark datasets like previous works do. For example, CIFAR10/100 and RESISC. While some of these datasets have saturated performance, it might be still helpful to include results there. **[A3]** We have conducted experiments with ViT-B/16 on CIFAR-10, CIFAR-100, and RESISC datasets, and the results are presented below: | | CIFAR-10 | CIFAR-100 | RESISC | | --- | :---: | :---: | :---: | | CLIP | 91.3 | 68.6 | 58.9 | | Frolic | 92.6 | 70.0 | 64.4 | We observe that on these three datasets, our method improves the performance over the original CLIP. *** **[Q4]** It's not clear to me why [24] is not listed in tables for comparison as I would assume 24 to be the most direct baseline. It shares the same motivation of this paper with the requirement of accessing pretraining data. Arguably, while LAION has been taken back, [24] already does the job for us. I would not think an argument like "[24] requires large-scale pretraining data for estimating bias thus we do not compare with it" as a valid argument because this zero-shot visual recognition task itself is solely working with CLIP-ish models and estimating bias from the datasets like LAION sounds like a fair practice to me. **[A4]** Thank you for your suggestions. We acknowledge that [24], referred to as REAL, represents a crucial baseline and shares a similar motivation with our work. We conducted a comparison between REAL, which utilizes the LAION 400M dataset, and our Frolic, using the OpenCLIP (ViT-B/16 model) across several datasets. 
The summarized results below demonstrate that our Frolic clearly outperforms REAL, achieving an average improvement of 1.1% in accuracy. We have included these results in the revised manuscript. | | ImageNet | Flowers | Cars | Aircraft | Pets | Food | DTD | Average | | --- | :---: |:---: | :---:| :---: |:---: | :---: | :---: | :---: | | REAL[24] | 68.1 | 73.1 | 84.0 | 18.8 | 90.5 | 85.2 | 59.8 | 68.5 | | Frolic | 70.3 | 73.9 | 84.6 | 19.9 | 91.6 | 86.9 | 60.1 | 69.6 | --- Rebuttal Comment 1.1: Title: Thanks for the rebuttal Comment: Thanks to the authors for providing the rebuttal and additional results. Most of my concerns are resolved, while a few remain: 1. In [A1], you mentioned "if the downstream datasets are imbalanced, we can derive the distribution vector ". It's not clear to me how this can be achieved when we are working with zero-shot recognition. Under a zero-shot scenario, you wouldn't have class distribution information of downstream tasks beforehand and would have to either make further assumptions or use some sort of estimation. Can you elaborate more on this? 2. I see most results are provided for ViT-B models; what about stronger ones like ViT-L and ViT-L@336? --- Rebuttal Comment 1.2: Title: Response to Reviewer VFZ9 Comment: **We are grateful for your feedback and we agree with your suggestions to enhance the quality of our work.** **A1:** We make an assumption that the mean of the class distribution can be represented by the text features $\mathbf{z}_j$ via prompting. Given unlabeled samples, we can compute their sample mean $\mathbf{m}$ and denote the unknown label priors as $\pi_j$. The sample mean can be represented as a linear combination of $\mathbf{z}_j$: $\mathbf{m} = \int_{\bf x}\sum_{j=1}^K\pi_j\mathcal{N}({\bf x};{\bf z}\_j,\Sigma){\bf x}{\text{d}}{\bf x}=\sum_{j=1}^K \pi_j \mathbf{z}_j$. Let $Z = [\mathbf{z}_1, \ldots, \mathbf{z}_K]$, then the unknown label priors can be solved by $\mathbf{\pi} = Z^{-1}\mathbf{m}$. 
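The label-prior estimate in A1 above can be written in a few lines. This is our own illustrative sketch, not the authors' code: the rebuttal's $\mathbf{\pi} = Z^{-1}\mathbf{m}$ presumes a square, invertible $Z$, so for a general $K \times d$ text-feature matrix we fall back to a least-squares solve; all names are hypothetical.

```python
import numpy as np

def estimate_label_priors(text_features, image_features):
    # text_features: (K, d) matrix Z of class text embeddings z_j, assumed
    # to act as the class means of the feature distribution.
    # image_features: (N, d) unlabeled image embeddings.
    Z = np.asarray(text_features)
    m = np.asarray(image_features).mean(axis=0)  # sample mean of features
    # Solve m = Z^T pi in the least-squares sense (Z is generally non-square,
    # so a pseudo-inverse stands in for the rebuttal's Z^{-1}).
    pi, *_ = np.linalg.lstsq(Z.T, m, rcond=None)
    return pi
```

In practice one might additionally clip negative entries and renormalize `pi` to the simplex; the rebuttal does not specify this step.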
*** **A2:** We have conducted experiments using the stronger models mentioned, and the results are presented below. We find that: - Our Frolic outperforms the original CLIP across various datasets with both ViT-L/14 and ViT-L/14@336. - Our online version of Frolic maintains comparable performance to the original Frolic. - Our Frolic can outperform REAL with the ViT-L/14 backbone. | ViT-L/14 | CIFAR10 | CIFAR100 | RESISC | | :---: | :---: | :---: | :---: | | CLIP | 95.8 | 78.6 | 65.7 | | Frolic | 96.5 | 79.8 | 66.8 | | ViT-L/14@336 | CIFAR10 | CIFAR100 | RESISC | | :---: | :---: | :---: | :---: | | CLIP | 91.3 | 79.2 | 66.7 | | Frolic | 92.6 | 81.1 | 68.1 | | ViT-L/14 | Caltech | Pets | Cars | Flowers | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | Average | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Frolic | 97.3 | 95.4 | 83.5 | 81.8 | 92.4 | 42.1 | 77.3 | 66.9 | 71.0 | 82.2 | 79.0 | | Frolic(Online) | 96.7 | 95.6 | 83.3 | 82.1 | 91.7 | 41.9 | 77.7 | 66.7 | 70.4 | 81.5 | 78.8 | | ViT-L/14@336 | Caltech | Pets | Cars | Flowers | Food101 | Aircraft | SUN397 | DTD | EuroSAT | UCF101 | Average | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | Frolic | 97.7 | 96.2 | 85.1 | 83.2 | 92.8 | 44.3 | 78.5 | 67.6 | 72.9 | 84.3 | 80.3 | | Frolic(Online) | 96.9 | 96.1 | 85.3 | 82.9 | 92.3 | 44.5 | 78.1 | 67.8 | 72.3 | 83.1 | 80.0 | | ViT-L/14 | ImageNet | Flowers | Cars | Aircraft | Pets | Food | DTD | Average | | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | | REAL[24] | 73.7 | 82.4 | 89.6 | 28.2 | 92.8 | 89.4 | 65.7 | 74.5 | | Frolic | 74.2 | 82.9 | 90.5 | 29.1 | 93.3 | 90.1 | 66.6 | 75.2 |
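The online update of $S$ described in the rebuttal's [A2] (for the online-Frolic variant) can be sketched as follows. This is our own illustrative reconstruction in numpy, not the authors' code: the argmax-based choice of the predicted label is our assumption, and the subsequent $\beta$ estimation and $f_{\text{d}}$ update are omitted.

```python
import numpy as np

def online_update(S, n, probs):
    """One step of the online bias estimate sketched in the rebuttal.

    S     : (K, K) running estimate, initialized to the identity matrix.
    n     : 1-indexed count of test samples seen so far.
    probs : predicted class-probability vector p for the n-th sample.
    The predicted label j = argmax(p) selects the row to update:
        s_j <- (n-1)/n * s_j + (1/n) * p
    (Recomputing beta and the debiased classifier f_d from S is omitted.)
    """
    j = int(np.argmax(probs))
    S = S.copy()
    S[j] = (n - 1) / n * S[j] + probs / n
    return S, j
```

Note that, as written in the rebuttal, the weights use the global sample count $n$ rather than a per-class count; we follow that formulation literally here.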
Rebuttal 1: Rebuttal: Dear Program Chair, Senior Area Chair, Area Chair, and Reviewers, First of all, we gratefully thank all the reviewers for their thoughtful comments and feedback. In this paper, we propose a label-free prompt distribution learning and bias correction framework, dubbed Frolic, to boost the performance of zero-shot models. The contributions of this paper are as follows: 1. Simple and Practical Solution: Our method is not only training-free but also circumvents the necessity for hyper-parameter tuning. 2. Comprehensive Evaluation: We demonstrate the effectiveness of our proposed method Frolic by conducting experiments across 16 datasets. 3. Significant Performance Improvement: Our Frolic shows a consistent and significant improvement over existing baselines. For example, it surpasses the state-of-the-art zero-shot models by a margin of 2.6% on average with CLIP ViT-B/16. As our paper received mixed ratings, i.e., three positive (6/6/6) and one negative (4), it would be appreciated if the reviewers could have a look at our responses and revision. We have tried our best to address your concerns in detail in our responses. We hope that our responses have answered your questions. Please let us know at your earliest convenience if you have further questions or concerns. Best regards, Authors of Paper #2527
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Identifying Causal Effects Under Functional Dependencies
Accept (spotlight)
Summary: The paper studies the identifiability of causal effects in the presence of functional dependencies. Section 4 starts with an engaging discussion on the interaction between positivity constraints and functional dependencies, during which it defines the concept of F-identifiability. It then introduces new concepts such as functional elimination and functional projection, which are pivotal for much of the subsequent discussion. Additionally, in this section, it presents several results that reduce D-separation to d-separation. In Section 5, the paper presents various results that reduce F-identifiability to F-identifiability from a simpler graph (Theorems 13 and 15) or to identifiability (Theorems 14, 16, 17 and 18). Strengths: I think that the paper is addressing an intriguing topic, extending identifiability results to encompass functional dependencies, which could prove valuable in numerous applications. The results are eloquently presented, with examples included in the main paper and complete proofs provided in the Appendix, making the paper comprehensible for readers acquainted with the subject matter. Weaknesses: * Explaining how the theoretical results of this paper can be practically applied to real-world datasets would make them more relevant (I think the results are relevant and important for real-world applications). See Questions below. * The paper presents numerous theorems that share similar objectives. I wonder if it's possible to consolidate them into two general, elegant rules: one for reducing F-identifiability to F-identifiability from simpler graphs and one for reducing F-identifiability to identifiability. However, I don't believe this issue alone warrants rejection. Each theorem, although not unified under a single rule, provides unique insights in different contexts, all of which have both theoretical and practical value. 
Technical Quality: 4 Clarity: 3 Questions for Authors: Minor: * Can you add a new section for example titled “Motivating Example” to present a hypothetical case study where knowledge of functional variables leads to successful identification, whereas classical identification methods would have suggested unidentifiability. * I think the sentence at line 216 can be made clearer by avoiding a double "if", for example: If every positivity constraint that mentions W does not mention HW ... * To my knowledge, a Bayesian network is not necessarily a causal graph. Therefore, in certain places such as footnote 3, it might be more accurate to replace "Bayesian network" with "causal Bayesian network" to emphasize that the graph should be causal. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The authors have addressed the limitations, but I believe they could put additional effort into emphasizing them. For example, they could highlight which theorems can provide an identifying formula and which cannot. Furthermore, clearly (in conclusion or in a dedicated paragraph) discussing future work in light of these limitations would be beneficial. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > The paper presents numerous theorems that share a similar objectives. I wonder if it's possible to consolidate them into two general, elegant rules, one one for reducing F-identifiability to F-identifiability from simpler graphs and one for reducing F-identifiability to identifiability. However, I don't believe this issue alone warrants rejection. Each theorem, although not unified under a single rule, provides unique insights in different contexts, all of which have both theoretical and practical value. Thanks for the feedback! We will see if there is a succinct way to summarize these rules. > Explaining how the theoretical results of this paper can be practically applied to real-world datasets can make the theoretical results in this paper more relevant (I think the results are relevant and important for real world applications). See Questions below. > Can you add a new section for example titled “Motivating Example” to present a hypothetical case study where knowledge of functional variables leads to successful identification, whereas classical identification methods would have suggested unidentifiability. Thanks for the suggestion. We will add the following hypothetical example to the introduction. We want to study how the enforcement of speed limits affects the accident rate. According to our knowledge, the legal driving age (A) is functionally determined by the country (C), and both legal driving age and country are causes of driving speed (X). Moreover, the driving speed and legal driving age are causes of accidents (Y). We can construct a causal graph with edges $C \rightarrow A, C \rightarrow X, A \rightarrow X, A \rightarrow Y, X \rightarrow Y$. Suppose we observe variables $\{C, X, Y\}$, the classical identification method will suggest that the causal effect of X on Y is unidentifiable. However, it is in fact F-identifiable given that A is functionally determined by C. 
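To make the rebuttal's hypothetical example concrete, here is a small simulation we wrote (all functional forms and probabilities are our own illustrative assumptions, not from the paper). Because the legal driving age A is a deterministic function of the country C, adjusting for the observed C alone recovers the effect of X on Y, even though A is never recorded.

```python
import numpy as np

rng = np.random.default_rng(0)

def legal_age(c):
    # A is a deterministic function of the country C -- the functional dependency.
    return c

def simulate(n, do_x=None):
    c = rng.integers(0, 2, size=n)                # country
    a = legal_age(c)                              # legal driving age, A = f(C)
    if do_x is None:
        x = (rng.random(n) < 0.3 + 0.3 * c + 0.2 * a).astype(int)  # driving speed
    else:
        x = np.full(n, do_x)                      # intervention do(X = do_x)
    y = (rng.random(n) < 0.2 + 0.4 * x + 0.2 * a).astype(int)      # accident
    return c, x, y

# Observational data: only C, X, Y are recorded (A stays latent).
c, x, y = simulate(200_000)

# Back-door-style adjustment over C alone: sum_c P(c) E[Y | X=1, C=c].
# Conditioning on C also fixes A = f(C), which is why C suffices here.
adjusted = sum((c == cv).mean() * y[(x == 1) & (c == cv)].mean() for cv in (0, 1))

# Ground truth from an explicit intervention do(X=1).
_, _, y_do = simulate(200_000, do_x=1)
print(abs(adjusted - y_do.mean()))  # small -- sampling error only
```

Note that in purely graphical terms {C} is not a back-door set (the path X ← A → Y is not blocked by C); the agreement above relies entirely on the functional dependency A = f(C), which is exactly the phenomenon the paper formalizes.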
> I think the sentence at line 216 can be made clearer by avoiding a double "if", for example: If every positivity constraint that mentions W does not mention HW … OK, will fix it. Thanks! > To my knowledge, a Bayesian network is not necessarily a causal graph. Therefore, in certain places such as footnote 3, it might be more accurate to replace "Bayesian network" with "causal Bayesian network" to emphasize that the graph should be causal. Yes, we are aware that Bayesian networks may not be causal. We tried to save space by omitting (causal), as shown on Line 72, but we will add it back to avoid confusion. > The authors have addressed the limitations, but I believe they could put additional effort into emphasizing them. For example, they could highlight which theorems can provide an identifying formula and which cannot. Furthermore, clearly (in the conclusion or in a dedicated paragraph) discussing future work in light of these limitations would be beneficial. Thank you for the suggestion. Will do. --- Rebuttal Comment 1.1: Comment: Thank you for your response to my minor remarks. After considering the other reviews, I continue to find this paper very interesting and will maintain my original score.
Summary: The paper addresses a novel problem in causal effect identifiability, introducing the concept of functional dependency among variables. It proposes a new elimination approach for removing redundant variables from the graph while preserving the identifiability of the target quantity. The main contribution includes proposing graph conditions that reduce the problem to classic ID settings. Strengths: 1- The paper addresses a novel problem in causal effect identifiability. 2- The paper is well-written, and the authors formulate the problem clearly. 3- An approach called functional elimination has been proposed to remove redundant variables while preserving identifiability. 4- By leveraging functional elimination and certain graph conditions, the theorems demonstrate that F-Identifiability can be reduced to the classic ID problem. Weaknesses: The proposed conditions are not complete; the paper does not provide a complete condition to determine whether a causal effect is id or not. Technical Quality: 4 Clarity: 3 Questions for Authors: 1- I didn’t understand when your theorems fail to recognize whether a causal effect in a graph is id. Can you provide some examples where the conditions of Theorem 15 are not satisfied, but the causal effect is identifiable (or not)? 2- Could you elaborate on corollary 14 and the positivity constraint? It’s worth rechecking the exact conditions of projection operation and stating the corollary in precise form. 3- What about the generalization of results to other variants of the causal effect identification problem, such as c-ID, g-ID, s-ID, and so on? If there is additional space, it would be beneficial to include a brief review of these works. Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I didn’t understand when your theorems fail to recognize whether a causal effect in a graph is id. Can you provide some examples where the conditions of Theorem 15 are not satisfied, but the causal effect is identifiable (or not)? We suspect that Theorem 15 will hold under weaker positivity constraints, but the current condition is the most succinct one we can obtain so far. The difficulty lies in how to formulate the positivity constraints $C’_V$ for the graph resulting from eliminating functional variable $Z$ if the original positivity constraints $C_V$ contain $Z$. For example, we cannot simply set $C’_V$ to be the positivity constraints in $C_V$ that do not mention $Z$. To see why, consider a causal graph with observed variables $A,Z,X,Y$ and edges $A \rightarrow Z, Z \rightarrow X, Z \rightarrow Y, X \rightarrow Y$. Assume that variable $Z$ is functional and the positivity constraint is $Pr(X|Z) > 0$. The causal effect of $X$ on $Y$ is F-identifiable. However, it is no longer F-identifiable in the graph resulting from eliminating $Z$ by Proposition 5 since $C’_V = \emptyset$ if we only keep the positivity constraints that do not mention $Z$. > Could you elaborate on corollary 14 and the positivity constraint? It’s worth rechecking the exact conditions of projection operation and stating the corollary in precise form. In [Tian & Pearl, 2002], the projection operation is stated under the strict positivity $Pr(V) > 0$. We are saying that if projection also operates under a weaker positivity assumption, then we can relax the positivity constraint accordingly in Corollary 14. We will clarify this further. > What about the generalization of results to other variants of the causal effect identification problem, such as c-ID, g-ID, s-ID, and so on? If there is additional space, it would be beneficial to include a brief review of these works. Thank you for the suggestion! We will consider it. 
--- Rebuttal Comment 1.1: Comment: Thank you for your response. After considering the feedback from other reviewers and your replies, I have revised my evaluation to 7.
Summary: Existing causal effect identification algorithms such as the ID algorithm, require strict positivity constraints on the observed distribution that can get violated in cases where some variables (observed or hidden) are deterministic functions of their parents. This paper takes a step towards finding conditions for when causal effects can be identified when such 'functional variables' are present (with quantitative knowledge of functional dependencies being not required). At a high-level, the approach is to eliminate functional variables followed by latent projection that enables using existing ID algorithms on the resulting causal Bayesian nets. Sound conditions for identifiability are proposed that depend on whether the functional variables are hidden or observed. Strengths: It's quite clear that functional variables are ubiquitous in the world and hence causal modeling invariably encounters such variables. The dependence of do-calculus and ID algorithm on strict positivity is a hindrance to identify causal effects in such models that include functional variables. This is a nice first step in causal effect identification in the presence of functional variables. I like the view of identification algorithms that take as input positivity constraints which is further extended to taking the functional variables as input. The paper is clearly written overall with some minor issues mentioned in the weakness section. While, the relationship between positivity constraints and functional dependencies was not fully explored, the necessary condition for identifiability w.r.t. the positivity constraints is a nice addition. The proofs of the main theorems are correct but I have not completely checked the proofs of the intermediate lemmas. This work also opens up multiple avenues for future research into developing complete identification algorithms and perhaps inspires work into ID-style algorithms that take in the weakest possible positivity constraints. 
This already seems an active thread of investigation; see, for example, "On Positivity Condition for Causal Inference," accepted at ICML 2024. Weaknesses: I believe the current version is dense with results, with the page restriction limiting a better style of presentation. There are multiple corollaries, appearing in the middle of the text, that deserve to be highlighted separately. I would also prefer adding proof sketches of the main theorems and cutting out results that don't directly impact the main message of the paper, for example, perhaps limiting the section on positivity constraints since it appears as an interlude in the current version. Technical Quality: 4 Clarity: 3 Questions for Authors: A few questions and typos: 1) Line 169 - the constraint is sufficient for identification under the assumption that there exists an instantiation of Z such that Pr(z)>0. This is also later used in the proof of Proposition 5. Why is this assumption justified? 2) The footnote remark on Page 8 is ambiguous. What in Refs. 25, 26 points to requiring the positivity condition for projection? 3) Proof of Proposition 5 - typo in definition of f^1(z|x,p_Z) Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 4 Limitations: Limitations of the technical content aren't explicitly discussed. Potential negative societal impact is not applicable. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > I believe the current version is dense with results, with the page restriction limiting a better style of presentation. There are multiple corollaries that deserve to be highlighted separately that appear in the middle of the text. I would also prefer adding proof sketches of the main theorems and cutting out results that don't directly impact the main message of the paper, for example, perhaps limiting the section on positivity constraints since it appears as an interlude in the current version. Thank you for your suggestions! We will consider them. > Line 169 - the constraint is sufficient for identification under the assumption that there exists an instantiation of Z such that Pr(z)>0. This is also later used in the proof of Proposition 5. Why is this assumption justified? This is a relaxed version of positivity from the ID algorithm [Shpitser & Pearl, 2006], which requires $Pr(X|P) > 0$, where $P$ are the observed parents of $X$. In this example, we only need the assumption $Pr(X|Z) > 0$ since it is sufficient to make the identifying formula well-defined. This is because $Pr(y|x, z) Pr(z)$ in the formula is equal to zero when $Pr(z)=0$, and is computable when $Pr(z) > 0$ (the conditional probability $Pr(y|x,z)$ is well-defined if $Pr(x|z) > 0$). > The footnote remark on Page 8 is ambiguous. What in Refs. 25, 26 points to requiring the positivity condition for projection? Ref 26 is more relevant in this case. Ref 25 mainly focuses on d-separations. We will fix this. Thank you for pointing it out! > Proof of Proposition 5 - typo in definition of f^1(z|x,p_Z) We will fix it. Thanks! --- Rebuttal Comment 1.1: Title: Response to rebuttal Comment: Thanks for the response. I am happy to maintain my original score.
Summary: This paper investigates the identification of causal effects in the presence of functional dependencies, where some variables are determined by their parents. The study demonstrates that unidentifiable causal effects can become identifiable and that certain functional variables can be excluded from observation without affecting identifiability. The authors introduce a new elimination procedure to remove functional variables while preserving key properties of the causal graph, and show how existing algorithms can be used to test and obtain identifying formulas. This approach can significantly reduce the number of variables needed in observational data. Strengths: 1. The paper is written well and easy to understand 2. Theoretical aspects are explained with the help of examples which makes it easy to follow along. Weaknesses: 1. What are the implications when the treatment/target variable is a functional variable? 2. Assumptions need to be formally written. Currently, many assumptions are not very clear. For example, are functional variables need to be observed or they can also be unobserved? 3. Justification for NeurIPS checklist items is not provided where necessary. Technical Quality: 4 Clarity: 3 Questions for Authors: See weaknesses section Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: Limitations are not discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > What are the implications when the treatment/target variable is a functional variable? If each treatment and target variable has some hidden parent and these are the only functional variables (after perhaps eliminating other functional variables by Theorems 13 & 15), we can reduce F-identifiability to classical identifiability by Theorem 17 (we will include a note to this effect). Otherwise, our current results do not cover this case, which is a subject for future work. > Assumptions need to be formally written. Currently, many assumptions are not very clear. For example, are functional variables need to be observed or they can also be unobserved? We allow both observed and unobserved (hidden) functional variables in the causal graph. This is evident from Theorem 13 & Corollary 14, which eliminate hidden functional variables, and Theorem 15 & Corollary 16, which eliminate observed functional variables. We would really appreciate it if you could point us to any unclear assumptions so that we can clarify them further. > Justification for NeurIPS checklist items is not provided where necessary. We did not see ambiguity in our answers to the checklist items, but we are happy to add justifications to these items. Please let us know in case any of them need particular attention and we will address them properly. --- Rebuttal Comment 1.1: Title: Thank you for your response. Comment: I thank the authors for their response. I've read their response and I will increase my score. Regarding checklist items, I suggest writing some justifications for [YES] or [NA] instead of leaving them as TODO.
Rebuttal 1: Rebuttal: We thank the reviewers for their thoughtful comments and suggestions. Please see individual responses below.
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Mini-batch kernel $k$-means
Reject
Summary: The authors present the first mini-batch algorithm for kernel k-means. The algorithm itself is simple and works the way one would expect mini-batch kernel k-means to work. The authors improve the running time of an iteration of kernel k-means from $O(n^2)$ to $O(n(k+b))$ for the mini-batch version of the algorithm. Additionally, they show that using a specific learning rate function, there is an upper bound on the number of iterations of the algorithm. The main challenge in the design of this algorithm is to keep track of the intermediate centers, which are updated iteratively as in Lloyd's algorithm, since storing the points in feature space is infeasible. For this, the authors design a recursive update rule to keep track of the quantity $\| \phi(x) - C_i^j \|^2$ for each iteration $i$ and each center $j$. They show that in a new iteration this quantity can be updated by considering the distance of each point in the dataset to the centers of mass of the clusters in the mini-batch and the previous centers. Finally, the authors provide an experimental study of the mini-batch algorithm on four datasets and compare it to a non-kernel mini-batch algorithm and the full kernel k-means algorithm. Strengths: - The algorithm offers improved running time bounds that are interesting to practitioners using kernel k-means in practice. It is also the first mini-batch algorithm for kernel k-means. - The main theorem's bound on the number of iterations is nice to have and a good follow-up to paper [26]. - The theoretical analysis is cleanly written and easy to follow. Weaknesses: - The techniques, while elegant, are not particularly novel in terms of theory. The proofs mostly follow from analyzing the inner product terms in the k-means formulation. - A number of the proofs in the main body of the paper could have been moved to the appendix, as they do not give the reader more of an understanding of the big picture and are very detail specific.
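The implicit-center bookkeeping described in the summary can be sketched as follows. This is a hedged reconstruction, not the paper's exact algorithm: the function name, the uniform seeding (standing in for k-means++), and the decaying learning-rate schedule are our own placeholders; the idea of representing each center in feature space as a weight vector over the data points, so that distances reduce to kernel-matrix products, is the part being illustrated.

```python
import numpy as np

def mini_batch_kernel_kmeans(K, k, b, iters, seed=0):
    """Sketch of mini-batch kernel k-means on a precomputed kernel matrix K.

    Each center is kept implicitly as a weight vector w over the n points,
    i.e. c = sum_i w[i] * phi(x_i), so squared feature-space distances are
    ||phi(x_j) - c||^2 = K[j, j] - 2 * (K @ w)[j] + w @ K @ w.
    """
    rng = np.random.default_rng(seed)
    n = K.shape[0]
    # k-means++-style seeding simplified here to distinct uniform seeds.
    seeds = rng.choice(n, size=k, replace=False)
    W = np.zeros((k, n))
    W[np.arange(k), seeds] = 1.0          # center j starts at point seeds[j]

    for i in range(iters):
        batch = rng.choice(n, size=b, replace=False)
        # squared distances of batch points to the k implicit centers
        cross = K[batch] @ W.T                        # (b, k): <phi(x), c_j>
        quad = np.einsum('ij,jk,ik->i', W, K, W)      # (k,): ||c_j||^2
        d2 = K[batch, batch][:, None] - 2 * cross + quad[None, :]
        assign = d2.argmin(axis=1)
        # move each center toward the batch mean of its assigned points
        alpha = 1.0 / (i + 2)             # placeholder decaying learning rate
        for j in range(k):
            members = batch[assign == j]
            if len(members):
                target = np.zeros(n)
                target[members] = 1.0 / len(members)
                W[j] = (1 - alpha) * W[j] + alpha * target
    return W
```

Each iteration touches only the batch rows of `K` plus the center weight vectors, which is the source of the improvement over recomputing all pairwise terms; the paper's recursive update rule avoids even the `quad` recomputation, which this sketch does naively.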
- There is no discussion of the experimental results. - While the authors state the paper is mostly theoretical, I believe this algorithm is mostly interesting to practitioners, and therefore a more thorough focus on the experimental evaluation, with more parameters, additional datasets and a thorough discussion, would strengthen the paper in my eyes. Technical Quality: 3 Clarity: 3 Questions for Authors: Throughout the paper you assume that the computation of the kernel function $\langle \phi(x), \phi(y)\rangle$ can be done in constant time. Is there a particular reason for this? It appears to me as though many kernel functions would take time $\Theta(d)$ to compute. In the plots for the experimental results, it is hard for me to understand the influence of the batch size on the running time; can you elaborate on why we don't see bigger changes in many of the plots, and why there is such a large jump in the har dataset going from 256 to 1024? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors included a checklist in the appendix of the paper, but have not discussed practical limitations in the main body of the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your questions and suggested improvements. We agree with your observation that our main contribution is for practitioners, and that our work can benefit from additional experiments. We ran additional experiments on graph kernels (see our main rebuttal for more details). Please let us know if there are any additional experimental results you would like to see. On your question about the time to evaluate kernel functions, yes, you are correct. For kernels such as the Gaussian kernel it takes time $O(d)$. However, often the kernel matrix is the input to the problem itself. As such, papers often assume oracle access to the kernel matrix. We will add a note about this, along with the runtime when one has to start from scratch. To your question about the lack of difference in runtime across batch sizes: we decided to include the cost of constructing the full kernel matrix in the runtime to be fair to the full kernel k-means algorithm. Had we not included this, the difference in runtime would have been even more stark. In our new experiments we present the kernel construction times separately so the difference is visible. In practice, there are other techniques that can be used to alleviate this up-front cost when starting from scratch, such as composing our algorithm with a coreset or kernel sparsification algorithm. --- Rebuttal Comment 1.1: Comment: Thank you for your clarification and for running the additional experiments.
Summary: This paper proposes the first mini-batch kernel $k$-means algorithm, which significantly reduces running time compared to previous kernel $k$-means methods that rely on the full dataset. With the proposed mini-batch kernel $k$-means algorithm, each iteration can be executed in time $O(n(k+b))$, improving on the $O(n^2)$ complexity of full-batch methods. The authors also provide theoretical guarantees, ensuring that the algorithm terminates (reaches convergence) within $O(\gamma^2/\epsilon)$ iterations with high probability, where $\gamma$ is the bound on the norm of points in the feature space. When initialized with the $k$-means++ seeding method, the algorithm achieves an $O(\log k)$-approximation. Experimental evaluations confirm that the mini-batch kernel k-means algorithm performs significantly faster than its full-batch counterpart while maintaining solution quality. Strengths: The proposed algorithm achieves significant improvements in time complexity compared with full-batch kernel $k$-means methods. The paper provides theoretical analysis, ensuring the algorithm's termination and performance bounds if initialized with the $k$-means++ seeding method. Weaknesses: The techniques used in this paper are largely based on the work of [1]. The mini-batch $k$-means method is not new for the clustering problem. The main contribution of this paper is to combine the idea of mini-batch $k$-means with the kernel $k$-means version. It should be noted that the theoretical bounds given in this paper are not entirely novel. There are some technical issues in the proofs (see the questions for details), potentially undermining the theoretical guarantees. [1] Gregory Schwartzman. Mini-batch $k$-means terminates within $O(d/\epsilon)$ iterations. ICLR 2024.
Technical Quality: 3 Clarity: 2 Questions for Authors: **Question 1:** For mini-batch $k$-means methods, the centers are only updated based on a small sample of the whole dataset, without calculating the exact clustering cost of the whole dataset. Thus, why do the authors claim that "the execution of Lloyd's algorithm following initialization can only improve the solution", and why does the approximation guarantee remain the same for the $k$-means++ method? **Question 2:** In line 8 of Algorithm 1, I think the condition for returning the center set should be "if $f_{B_i}(C_{i}) - f_{B_i}(C_{i+1}) < \epsilon$, then return $C_{i+1}$", since $f_{B_i}(C_{i})$ is intuitively larger than $f_{B_i}(C_{i+1})$. **Question 3:** In the proof of Lemma 7, since $a_{max} - a_{min} \le 4\gamma^2$, the probability bound obtained by using Hoeffding's inequality should be $\Pr[|f_B(C) - f_X(C)| \ge \delta] \le 2e^{-b\delta^2 / 8\gamma^4}$ instead of $2e^{-b\delta^2/2\gamma^2}$. If I am wrong, please point it out. **Question 4:** In Theorem 5, I think a negative sign is missing in the probability bound, i.e., $e^{\Theta(\delta^2 / \sum_{i=1}^{m}a_i^2)}$ should be $e^{-\Theta(\delta^2 / \sum_{i=1}^{m}a_i^2)}$. **Question 5:** In the proof of Lemma 12, I think the sample size is not enough for obtaining a success probability of around $1 - 1/(ntk)$. $b$ should be at least $b = \Omega((\gamma^2/\epsilon)^2\log(nt))$. **Question 6:** In the experiments, how are the kernel functions chosen to achieve the desired performance? **Question 7:** The technical proofs of the termination bounds given in this paper are quite similar to those for mini-batch $k$-means proposed at ICLR 2024 (see [1] above). It seems that the only difference between these two works is the initial bound on the clustering cost difference: $d$ for the ICLR 2024 paper and the maximum norm distance for this work.
Can the authors give a detailed discussion of the differences between these two works and the main technical contribution of the proposed kernel $k$-means method? Minor Comments: On page 6, "The following lemma provide concentration guarantees when sampling a batch" should be "The following lemma provides concentration guarantees when sampling a batch". Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: Although this paper mainly gives theoretical results for clustering problems, it lacks a discussion of broader impact as required by the NeurIPS guidelines. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to comb through our paper. You have been very generous with your time and have given us some new ideas to think about. We understand your concern regarding the novelty of the theoretical analysis. However, as reviewer mEhQ pointed out, our main contribution is in making a big step towards making kernel k-means usable in practice by reducing the running time by an order of magnitude. Furthermore, our theoretical analysis shows that both the number of iterations and the batch size of mini-batch kernel k-means are better than those of (non-kernel) mini-batch k-means for normalized kernels, which we found to be quite surprising. To respond to your questions: - Q1: Since k-means++ gives an $O(\log k)$ approximation ratio in expectation, and whp our algorithm either makes progress or stops, we are guaranteed to match the same approximation in expectation. - Q2: Yes, we have $f_{B_i}(C_{i+1})$ and $f_{B_i}(C_{i})$ the wrong way around. - Q3/Q5: Yes, you are correct. Thanks for spotting this! This means we pick up the extra factor of $\gamma^2$ in the batch size, as you point out in Q5. This does not affect our results for normalized kernels. This extra term is interesting in its own right: a higher-order dependence on $\gamma$ actually helps shrink batch sizes for kernel functions induced by graphs (see our main rebuttal for more details), as they often have $\gamma \ll 1$ and are often more useful in practice compared to the Gaussian/Laplacian kernel functions with $\gamma=1$. See our new results for a comparison. - Q4: Yes, that is a typo, thanks. - Q5: Answered above. - Q6: Choosing a good kernel function is still a bit of a (dark) art. In our submission we followed a heuristic from the literature [Mahoney and Wang, ref 31 in the paper] to pick a reasonable bandwidth parameter for the Gaussian kernel function.
However, we found that using the kernel function induced by a knn-graph is often a lot easier as its parameter, the number of neighbours, is far easier to tune. See our results for a comparison. - Q7: The main technical difference between our theoretical results and those of Schwartzman is indeed in the generalization to Hilbert spaces, the introduction of the parameter gamma, and using Hilbert space valued martingales. We agree that the general proof follows the same outline and that our main contribution is the conceptual idea of using mini-batch methods to speed up kernel k-means. Thanks for spotting the typo on page 6. Regarding impact, please suggest what discussions on broader impact you would like to see.
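For reference, the corrected constant that the reviewer raised in Question 3 and the authors confirm above follows directly from Hoeffding's inequality for an average of $b$ i.i.d. terms, each lying in an interval of length $a_{\max} - a_{\min} \le 4\gamma^2$ (a sketch in the reviewer's notation):

```latex
\Pr\bigl[\,|f_B(C) - f_X(C)| \ge \delta\,\bigr]
  \;\le\; 2\exp\!\left(-\frac{2b\delta^2}{(a_{\max}-a_{\min})^2}\right)
  \;\le\; 2\exp\!\left(-\frac{2b\delta^2}{(4\gamma^2)^2}\right)
  \;=\; 2e^{-b\delta^2/8\gamma^4}.
```

Carrying this $\gamma^4$ (rather than $\gamma^2$) through the union bound is what produces the extra $\gamma^2$ factor in the batch size that the authors acknowledge, and why the higher-order dependence on $\gamma$ is favorable when $\gamma \ll 1$.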
Summary: In this paper, the authors propose the first mini-batch kernel k-means clustering algorithm. It is a variant of Lloyd's algorithm, introduced by Sculley, that takes a random batch of $b$ points instead of the full set of points and uses a weighted average with the current centers when updating the centers. This paper attempts to translate this idea to the *kernel* k-means setting. The resulting algorithm has the same approximation guarantee as the original k-means, but it terminates faster and consumes less time per iteration. Their analysis follows the recipe of Schwartzman, who used an early stopping condition when the improvement on the batch drops below some user-provided parameter. The main challenge in the kernel setting is that the underlying Hilbert space could be large or even infinite-dimensional. This is prohibitive as Schwartzman's bound on the number of iterations depends on the dimension. The authors bypass this by instead giving a bound on the Hilbert norm of the points, which can be bounded in practice, for example using normalized kernels. Strengths: The authors conduct detailed experiments that compare their algorithm favorably with prior work. Specifically, the ARI and NMI scores were noticeably better across a variety of 4 datasets. I liked the paper. I think it has a decent theoretical and experimental contribution. Weaknesses: New ideas are limited. Mostly an adaptation of Schwartzman's work to the kernel setting. Technical Quality: 3 Clarity: 3 Questions for Authors: None. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for taking the time to go through our paper. We understand your concern regarding the novelty of the theoretical analysis. However, as reviewer mEhQ pointed out, our main contribution is in making a big step towards making kernel k-means usable in practice by reducing the running time by an order of magnitude. Furthermore, our theoretical analysis shows that both the number of iterations and the batch size of mini-batch kernel k-means are better than the (non-kernel) mini-batch k-means for normalized kernels, which we found to be quite surprising. We think minibatch methods will be vital for practical kernel k-means. Do you have any suggestions for practical improvements? --- Rebuttal Comment 1.1: Title: Read the rebuttal Comment: I have read the rebuttal from the authors. I don't have any suggestions for practical improvements. I'll maintain my score.
Summary: The article presents the first mini-batch kernel k-means algorithm, which significantly improves running time compared to the full batch kernel $k$-means with only a minor negative effect on solution quality. The proposed algorithm runs in $O(n(k+b))$ time per iteration, as opposed to $O(n^2)$ for the full-batch version. The authors provide theoretical guarantees for the algorithm's performance, demonstrating that it terminates within $O(\gamma^2/\epsilon)$ iterations with high probability when the batch size is $\Omega((\gamma/\epsilon)^2 \log(n\gamma/\epsilon))$. Experimental results confirm the efficiency and effectiveness of the algorithm. Strengths: Improved Efficiency: The mini-batch approach drastically reduces the running time from $O(n^2)$ to $O(n(k+b))$ per iteration, making it feasible to handle large datasets. Theoretical Guarantees: The algorithm includes a thorough theoretical analysis, ensuring termination within a specific number of iterations and providing an approximation ratio when using $k$-means++ initialization. Flexibility with Kernels: The algorithm works well with popular normalized kernels (e.g., Gaussian, Laplacian), making it versatile for various applications. Practical Relevance: Early stopping conditions align with practical machine learning workflows, increasing the algorithm's usability in real-world scenarios. Weaknesses: Approximation Quality: While the solution quality is comparable to the full-batch version, the approximation ratio depends on the batch size and initialization, which may not always guarantee optimal clustering. Parameter Sensitivity: The performance heavily relies on parameters such as batch size and learning rate, which need careful tuning. Complexity in Implementation: Implementing the recursive distance update and maintaining inner products can be intricate, potentially increasing the implementation complexity. 
Potential Issues: Stochastic Nature: The inherent stochasticity of mini-batch algorithms can lead to variations in performance, and convergence to local minima is not guaranteed. Parameter Initialization: Poor initialization of cluster centers can significantly affect the algorithm's performance and convergence speed. Data Dependence: The effectiveness of the algorithm may vary depending on the dataset characteristics, such as the distribution and dimensionality of the data points. Technical Quality: 2 Clarity: 3 Questions for Authors: See Weaknesses above Confidence: 2 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your comments. Regarding your points: - The performance of the algorithm does not require careful tuning of the learning rate as you suggest, since $\alpha_i^j$ is totally determined by the formula given at the end of page 6. Please can you clarify what you meant? - The approximation quality doesn't depend on the batch size as you suggest; we get it for free (in expectation) by initialising with k-means++. Please can you clarify what you meant? - Please can you clarify what you mean by "convergence to local minima is not guaranteed"? We prove our algorithm stops due to early stopping after a constant number of iterations whp. - Poor initialization won't affect convergence speed due to the early stopping condition. Please can you clarify what you meant?
Rebuttal 1: Rebuttal: Following the reviewer comments, we ran additional experiments with graph datasets. Please find the details of the experiments below and a PDF with the results attached. An advantage of kernel k-means compared to (non-kernel) k-means is its ability to handle graph datasets. Specifically, we can take a (potentially sparse) graph as input, compute its heat kernel (Spectral Graph Theory, Fan Chung, chapter 10), and run kernel k-means on top of that. We show experimentally that our algorithm is an order of magnitude faster than the full-batch kernel k-means algorithm, with almost no loss in the quality of the clustering. The heat kernel of a graph is defined as $H(t)\triangleq \exp(-t\mathcal{L})$, where $\mathcal{L}$ is the normalized Laplacian of the input graph and $t$ is a parameter. Datasets: We create graph datasets by constructing $k$-nearest neighbour graphs for each dataset in our paper; then we compute the heat kernel for each. We use $t=8.0$ for all datasets, $k=1000$ for PenDigits, $k=500$ for Letter and $k=250$ for HAR and MNIST. We do not count the time to construct the heat kernel in the runtimes of Figure 1, so the difference in runtime for different batch sizes will be more visible. Constructing the heat kernel took 1.2 seconds for PenDigits, 1.1 seconds for HAR, 19.6 for MNIST and 19.8 for Letter. We recorded the empirical values of gamma to be 0.036 for PenDigits, 0.060 for HAR, 0.055 for MNIST and 0.040 for Letter. We believe this approach to be a more realistic test than the stochastic block model while also being small enough to run the full-batch kernel k-means algorithm on. Any larger and it would simply take too long. Pdf: /pdf/b0a4435a640006c33da2c73425dceae875d5663e.pdf
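The knn-graph heat-kernel construction described in this rebuttal can be sketched for small dense graphs as follows. This is a hedged reconstruction: the function names are ours, and the authors' pipeline may differ (e.g. in how the knn graph is weighted or how the matrix exponential is computed at scale); only the definitions $H(t) = \exp(-t\mathcal{L})$ and the symmetric normalized Laplacian are taken from the text.

```python
import numpy as np
from scipy.linalg import expm

def knn_graph(X, k):
    """Symmetric 0/1 adjacency matrix of a k-nearest-neighbour graph."""
    sq = ((X[:, None] - X[None]) ** 2).sum(-1)
    np.fill_diagonal(sq, np.inf)            # exclude self-loops
    nbrs = np.argsort(sq, axis=1)[:, :k]
    A = np.zeros_like(sq)
    A[np.repeat(np.arange(len(X)), k), nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)               # symmetrize

def heat_kernel(A, t=8.0):
    """H(t) = exp(-t * L), with L the symmetric normalized Laplacian."""
    d = 1.0 / np.sqrt(np.maximum(A.sum(axis=1), 1e-12))
    L = np.eye(len(A)) - d[:, None] * A * d[None, :]
    return expm(-t * L)
```

Since the eigenvalues of $\mathcal{L}$ lie in $[0,2]$, those of $H(t)$ lie in $(0,1]$, so the diagonal entries $H(t)_{ii} = \|\phi(x_i)\|^2$ are at most 1 and typically shrink with larger $t$, which is consistent with the small empirical values of gamma reported above.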
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Deterministic Uncertainty Propagation for Improved Model-Based Offline Reinforcement Learning
Accept (poster)
Summary: To address the high-variance issue of model-based offline RL methods that rely on sampling-based uncertainty estimation, this manuscript proposes MOMBO, a model-based policy optimization method based on moment matching. MOMBO learns a Q-function using moment matching and deterministically propagates uncertainties through the Q-function. Theoretically, using moment matching yields tighter upper bounds on value approximation than using Monte Carlo sampling. The authors evaluate MOMBO's performance across various environments. Strengths: 1. MOMBO is a more stable and sample-efficient approach. 2. There is a tight upper bound on value approximation errors using moment matching compared to Monte Carlo sampling. Weaknesses: 1. There is a clear necessity for the authors to enhance their writing proficiency. 2. The manuscript lists many contributions, which obscures its central thesis. The primary contribution is learning a Q-function using moment matching. However, this approach does not strike me as sufficiently innovative. 3. How can the alignment of moments from one distribution with those from another ensure the approximation or equivalence of the two? 4. OOD data seems more penalized by high variance, whereas IND data does not generally have high variance. Moreover, there is a need for diversification to improve the variance of ensemble RL (cf. EDAC, NeurIPS 2021). The moment matching proposed by the authors is similar to transforming uncertainty/variance. Furthermore, the experimental results do not prove the method's validity due to the lack of comparison with the latest SOTA methods. 5. Section 3, entitled "Limitations of SOTA model-based offline RL approaches," does not quite capture the issue's essence. Ideally, it should be reframed to reflect a specific shortcoming within a methodology, technique, or theoretical framework. Moreover, the term "SOTA" is inherently time-bound, making it an imprudent choice for a section title, as it may quickly become outdated.
6. The penalty $\mu - \beta \cdot \sigma$ that the authors adopt, where $\beta$ is the weight parameter, seems to differ little from most offline RL methods based on uncertainty estimation. 7. The core of the proposed method is based on moment matching; however, the citations for moment matching in the manuscript are too old. Is there any recent work? 8. Testing only on the MuJoCo environments is insufficient. Isn't it common to perform experimental validation on environments like Antmaze, Adroit, etc.? How do the authors explain this? I suspect that the proposed method will not work in these neglected environments. Technical Quality: 2 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for this thorough review. We will address your concerns below. > 1.There is a clear necessity for the authors to enhance their writing proficiency. Could you be specific as to where the writing proficiency is suboptimal? > 2.The manuscript lists many contributions, which obscures its central thesis. Our contribution is three-fold. (i) We propose a novel moment matching and propagation approach for uncertainty-based model-based offline RL. (ii) We highlight theoretical differences between it and a sampling-based approach. (iii) We provide empirical evidence for the usefulness of our proposal. We will restructure the relevant paragraph to make this more clear. > 3.How can the alignment of moments from one distribution with those from another ensure the approximation or equivalence of the two? The Wasserstein 1-distance (Def 1) which we consider in this paper measures closeness between two distributions in terms of their cdf not their moments. However, e.g., Wasserstein 2-distance between two univariate Gaussian distributions can be shown to be the sum of the squared difference between their mean and variance terms, i.e., alignment of the moments minimizes their distance based on the $W_2$ metric. Does that answer your question? > The moment matching proposed by the authors is similar to transforming uncertainty/variance. Indeed, as we propose a sampling-free approach, this necessitates a way to transform uncertainty as it passes throughout the network. Can the reviewer clarify why this analytical transformation is a weakness and not a feature? > Furthermore, the experimental results do not prove the method's validity due to the lack of comparison with the latest SOTA method. EDAC (An et al., 2021) and its improvements, such as SPQR (Lee et al., 2023), focus on model-free learning, whereas we consider model-based algorithms. To our knowledge, MOBILE is the state-of-the-art method for model-based offline learning. 
Most recent follow-up works, such as Luo et al. (2024) and Song et al. (2024), take MOBILE as the base algorithm and complement it with additional features, which can also be applied to our method. In our general answer, we have included further metrics that highlight the difference between MOBILE's performance and ours. > Section 3, entitled "Limitations of SOTA model-based offline RL approaches," does not quite capture the issue's essence. Ideally, it should be reframed to reflect a specific shortcoming within a methodology, technique, or theoretical framework. Thank you for the proposal. We will change the section title to _Limitations of sampling-based uncertainty propagation_ to highlight the specific limitation we are addressing. > The penalty $\mu - \beta \cdot \sigma$ that the authors adopt, where $\beta$ is the weight parameter, seems to differ little from most offline RL methods based on uncertainty estimation. We agree, using a lower confidence bound is a standard practice throughout offline RL. That is not a contribution of ours. What is relevant is how this bound is constructed, i.e., in our case $\hat \sigma$. See also Theorem 2 in Jin et al. (2021), which derives the desired suboptimality bound for a lower confidence bound estimator. > The core of the proposed method is based on moment matching; however, the citations for moment matching in the manuscript are too old. Is there any recent work? There are still minor variations explored in the literature, but these tend to be either only theoretical or too costly, such that there have been no further fundamental new developments. E.g., moment matching with complete covariance matrices is explored in the recent work by Wright et al. (2024). But the cost scales quadratically in the layer width, i.e., it becomes intractable for common network sizes. As the authors of that work acknowledge (see their Section 4.3), using the moment matching developed in our references performs just as well or even better. Similarly, Look et al.
(2023) explored a way to propagate these covariances in neural stochastic differential equations using Stein's lemma. We will clarify the current state of the literature at the beginning of Section 4. > 8. Testing only on the MuJoCo environments is insufficient. Isn't it common to perform experimental validation on environments like Antmaze, Adroit, etc.? How do the authors explain this? I suspect that the proposed method will not work in these neglected environments. We focused on MuJoCo as a popular and extensive benchmark. See our general answer above for more metrics and a preliminary set of Adroit results. _____ An et al., _Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble_ (NeurIPS 2021) Jin et al., _Is Pessimism Provably Efficient for Offline RL?_ (ICML 2021) Lee et al., _SPQR: Controlling Q-ensemble Independence with Spiked Random Model for Reinforcement Learning_ (NeurIPS 2023) Look et al., _A deterministic approximation to neural SDEs_ (PAMI 2023) Luo et al., _Reward-Consistent Dynamics Models are Strongly Generalizable for Offline Reinforcement Learning_ (ICLR 2024) Song et al., _Compositional Conservatism: A Transductive Approach in Offline Reinforcement Learning_ (ICLR 2024) Wright et al., _An Analytic Solution to Covariance Propagation in Neural Networks_ (AISTATS 2024) --- Rebuttal Comment 1.1: Comment: Thanks for the author's response, which has resolved some of my initial concerns. I have thoroughly read the comments from other reviewers and the author's replies, including the general response. Below, I provide explanations for the points raised by the author and list the concerns that are still unclear to me: 1.
Improvement Suggestions for Writing: Here are the areas I believe would benefit from refinement in writing: - Sentences are too long and hard to read: Line 20-23 "Applying traditional …", line 24-27 "Model-based …", line 171-174 "As the input …" - Conflation of expressions: distributional shift & distribution shift, behavioral policy & behavior policy. Refer to literatures MOBILE and COMBO. - Sentence Construction: On lines 89-104, the frequent use of "when" and consecutive "while" at the beginning of sentences, coupled with a near absence of connecting words, raises concerns about the flow and coherence of the writing. - Passive Voice: Upon reviewing lines 121-125, it has been observed that the text relies heavily on the passive voice. While the passive voice can be appropriate in certain contexts, an overreliance on it can obscure the clarity and directness of the writing. - Writing Conventions: The placement of "E.g.," (Line 146) and "I.e.," (Lines 174, 226) at the beginning of sentences is not in accordance with standard academic writing practices. While these abbreviations are commonly used to provide examples or clarify meanings, their position at the sentence's onset can disrupt the flow and formality expected in scholarly articles. Improvement of Pronoun Usage: The manuscript exhibits multiple instances of ambiguous or incorrect pronoun usage, which can lead to confusion for the reader (as seen on line 135; a comprehensive list is not provided here for the sake of brevity). - Refinement of Expressions for Scholarly Rigor: The manuscript frequently employs expressions that may not meet the standards of precision expected in academic writing, such as "via" (the specific instances are not exhaustively listed here). In summary, the numerous writing issues are a significant shortcoming of this paper. 
The abovementioned concerns may only encompass some areas that require attention; thus, the authors must invest further effort into meticulously reviewing the entire manuscript. 2. Clarification and Enhancement of Contribution Statement: The conciseness of the stated contributions needs to be improved. The author's response appears somewhat superficial and has not yet captured the essence of the contributions. If I understand correctly, this paper's core contribution lies in employing a moment-matching method to address the variance issues arising from Monte Carlo sampling. However, the authors have not adequately reflected these critical elements, especially considering using the term "bottleneck" as a substitute. 3. Clarification of Research Motivation: The motivation behind the study is somewhat ambiguous. If I have understood correctly, the paper's logic is as follows: Monte Carlo sampling leads to variance issues, which in turn prompts the proposal of a sampling-free method. It seems the authors aim to improve upon Monte Carlo sampling. However, I have concerns: - Is high variance the only issue with ineffective Monte Carlo sampling? Given that the authors have only analyzed the variance issue, it naturally raises the following concern: - Does the variance problem only arise with Monte Carlo sampling? - Compared to other means of addressing the variance issue, what is the motivation for using a sampling-free approach in this paper? In summary, the current manuscript's analysis of motivation is incomplete. I have not grasped the authors' central focus, which is why solving the variance problem necessitates using a sampling-free approach. --- Reply to Comment 1.1.1: Title: Clarifications about Contribution Statement and Motivation Comment: Thanks for your meticulous review and valuable suggestions. We believe there remains a few points calling for further clarification, but we also fully respect your final decision. > 1. 
Improvement Suggestions for Writing: We agree with your points and appreciate your rigor in pointing them out greatly. We promise to handle them in the revised version of our work. However, we kindly express our view that their scope stays within the limits of a minor revision. > 2. Clarification and Enhancement of Contribution Statement: > 3. Clarification of Research Motivation: We agree about the need for further improvement here. We believe presenting our core contribution in Section 1, `Our contribution` paragraph as follows would address both points 2 and 3, as well as your concern about the central focus of our work: Estimating tight $\xi$-uncertainty quantifiers and using them for reward penalization is a provable recipe for improved offline reinforcement learning (see Theorem 4.2 of Jin et al. (2021)). Reward penalties calculated on the Bellman target are provably tighter $\xi$-uncertainty quantifiers than those calculated on the environment model (see Theorem 3.6 of Sun et al. (2023)). However, the calculation of this quantity necessitates propagating the uncertainty of the estimated next state through a non-linear value function, which is typically modeled by a deep neural net. The commonplace approach is to approximate this quantity by Monte Carlo sampling. 
Our key novelties include: i) detecting the problem of approximation errors on the reward penalty for the first time in the literature, ii) characterizing the problem theoretically by deriving novel concentration inequalities on their commonplace Monte Carlo estimator, iii) proposing an original solution to this problem by using a moment matching approach originally developed for Bayesian neural net inference for the first time in the context of reward penalty calculation, iv) quantifying the approximation error incurred by the Monte Carlo estimate on a large set of benchmarks, showing a significant reduction of this error by our proposed moment matching approach, and demonstrating the impact of the error on convergence speed (improved AUC score). ----- Jin et al., _Is Pessimism Provably Efficient for Offline RL?_ (ICML 2021) Sun et al., _Model-bellman inconsistency for model-based offline reinforcement learning_ (ICML 2023) --- Rebuttal 2: Title: Further Clarifications Comment: 1. We promise once again to implement all of your suggestions, which comprise: 1. Shortening the sentences that are hard to read 2. Unifying the use of terminology, sticking to its usage in the literature 3. Correcting wrongly constructed sentences 4. Rephrasing sentences written in passive voice in active voice in the most understandable way 5. Correcting improperly used writing conventions throughout the text. We do think they are extremely important to improve the quality of our work. Their implementation does require a thorough and careful pass over the text, which is a tedious task, and we are committed to fulfilling it following your guidance. We kindly express our opinion that all these corrections are typically handled during the preparation of the camera-ready version of a conference paper. The corresponding stage in journal publication processes is known as **accept with minor revision**. 2. 
Here are our answers to your questions: We would like to first make sure that we agree on the terminology. We study the **variance** of *the Monte Carlo estimators used in the penalty term calculation of model-based offline RL algorithms*. Using an estimator is inevitable while calculating a penalty term based on the Bellman operator. In mathematical terms, there is uncertainty on the next state $s'$ stemming from the error in the learned environment dynamics. This uncertainty needs to be propagated through the state-action value function $Q(s',a')$, which appears in the Bellman target calculation. This operation is analytically intractable if $Q$ is a neural network. > Is high variance the only issue with ineffective Monte Carlo sampling? We are sure it is not the only issue. Our claim is that it is one of many important issues, as it plays a significant role in training efficiency. We demonstrate why this is the case by developing concentration inequalities for the Monte Carlo estimator and our moment matching based alternative. > Given that the authors have only analyzed the variance issue, it naturally raises the following concern: Does the variance problem only arise with Monte Carlo sampling? As we pose the problem in the first place as the estimator variance of a Monte Carlo estimate, the answer is yes. Reducing estimator variance is a major issue in a wide range of problems in statistics. It is handled using methods such as introducing control variates or Rao-Blackwellization. See for instance: Lemieux (2017). The state-of-the-art approach to approximate the uncertainty of the state-action value of the next state is to use Monte Carlo integration, which is represented in our paper with the MOBILE algorithm (Sun et al., 2023). Sampling-free approaches do exist, but only for penalty terms that are computed directly on the estimated error of the model dynamics. We represent this alternative in our paper by the MOPO algorithm (Yu et al., 2021). 
However, as proven in Theorem 4.3 of the MOBILE paper, this option over-penalizes the reward and causes suboptimal results. The tightness experiment we reported in our global response demonstrates that MOPO applies a much higher penalty than MOBILE, as this theorem suggests, and our MOMBO applies even less penalty, hence bringing about much faster training. > Compared to other means of addressing the variance issue, what is the motivation for using a sampling-free approach in this paper? There does not exist an alternative way to address this issue yet. Our key novelty is that we are the first to detect its existence and prominence for offline reinforcement learning and provide a solution to mitigate it. ----- Lemieux, C., _Control Variates_ (Wiley StatsRef: Statistics Reference Online, 1-8, 2017) Yu et al., _COMBO: Conservative offline model-based policy optimization_ (NeurIPS 2021) Sun et al., _Model-bellman inconsistency for model-based offline reinforcement learning_ (ICML 2023)
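As a toy illustration of the estimator variance discussed in this exchange (the value function and distribution below are our own made-up stand-ins, not from the paper): the spread of a Monte Carlo estimate of $E[Q(s',a')]$ under a Gaussian next-state distribution decays only as $1/\sqrt{N}$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the state-action value at the next state; any
# non-linear function (e.g. a ReLU network) exhibits the same behavior.
def q_next(s):
    return np.maximum(0.0, 2.0 * s - 1.0)

# Toy Gaussian next-state distribution from a learned dynamics model.
mu, sigma = 0.3, 0.8

def estimator_std(n, repeats=2000):
    """Spread of the n-sample Monte Carlo estimate of E[Q(s', a')],
    measured over many repetitions: it decays like 1/sqrt(n)."""
    est = [q_next(rng.normal(mu, sigma, n)).mean() for _ in range(repeats)]
    return float(np.std(est))

print(estimator_std(10), estimator_std(1000))  # the second is ~10x smaller
```

The same spread enters the reward penalty whenever it is computed from sampled Bellman targets, which is the approximation error the rebuttal refers to.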
Summary: This paper proposes a new uncertainty estimation method for model-based offline reinforcement learning, which uses moment matching to deterministically propagate uncertainties through the Q-function, rather than relying on sampling-based uncertainty estimation. The resulting model, Moment Matching Offline Model-Based Policy Optimization (MOMBO), significantly accelerates training compared to its sampling-based counterparts while maintaining performance. Strengths: - The paper is clearly written, allowing readers to follow the main arguments. - It provides a detailed theoretical analysis to demonstrate why moment matching is better than Monte Carlo sampling. Weaknesses: - The novelty of this method seems relatively weak. It appears to be a minor modification of the uncertainty quantification method used in MOBILE [1]. Additionally, as noted by the authors in the limitations section, this method is also limited to certain activation functions. I want to clarify that I do not think minor modifications lack value or novelty. Instead, the novelty of a method should be evaluated along with its empirical performance. However, the experiments in this paper do not show a strong improvement over the baseline (MOBILE). - The advantage claimed by the authors is sample efficiency. Firstly, using “sample efficiency” here is incorrect. This criterion is used to evaluate an online RL method. In offline RL, we train an algorithm on a fixed dataset, so “convergence speed” would be more appropriate. Furthermore, this criterion is not critical in an offline setting, where we are more concerned with the asymptotic performance. - According to PEVI [2], the key factor for an uncertainty-based offline RL method is how tightly it can estimate the Bellman error, which directly determines the performance of the derived policy. The authors do not provide direct evidence (theoretical or empirical) to show that MOMBO can provide a tighter estimation of the Bellman error. [1] Sun et al. 
"Model-Bellman Inconsistency for Model-based Offline Reinforcement Learning" (ICML 2023) [2] Jin et al. "Is Pessimism Provably Efficient for Offline RL?" (ICML 2021) Technical Quality: 3 Clarity: 3 Questions for Authors: To summarize the main points/questions raised in the weaknesses section: - Could the authors evaluate their method on more challenging tasks to show that it can achieve better asymptotic performance compared to MOBILE? - Could the authors provide evidence to demonstrate that MOMBO can provide a more correlated/tighter estimation of the ideal uncertainty, i.e., the Bellman error? I‘m willing to raise my score if all these concerns can be addressed by the authors. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors have thoroughly discussed the limitations of their method in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the detailed review. > The novelty of this method seems relatively weak. It appears to be a minor modification of the uncertainty quantification method used in MOBILE. MOPO, MOBILE, and MOMBO are three algorithms derived from the meta-algorithm known as Pessimistic Value Iteration (Jin et al., 2021). They all attempt to quantify uncertainty to learn a pessimistic value function. Therefore, a high degree of similarity among them is expected. However, we demonstrate that our approach surpasses the SOTA (MOBILE) both analytically and empirically. > as noted by the authors in the limitations section, this method is also limited to certain activation functions. Let us clarify that while this limitation exists, it is less restrictive than it sounds. The derivations required to compute the first two moments of a ReLU activation directly generalize to all other piecewise linear activation functions, e.g., Leaky-ReLU or PReLU. While requiring a more expensive forward pass, the first two moments are also tractable for more modern activation functions such as the ELU and its SELU counterpart. To summarize, while the first two moments cannot be computed for all activation functions, they can be for the most relevant and most popular ones, covering all mainstream architectures used in critic training in modern RL. > The advantage claimed by the authors is sample efficiency. Firstly, using "sample efficiency" here is incorrect. This criterion is used to evaluate an online RL method. In offline RL, we train an algorithm on a fixed dataset, so "convergence speed" would be more appropriate. Furthermore, this criterion is not critical in an offline setting, where we are more concerned with the asymptotic performance. We agree that _sample efficiency_ was poorly worded. We adapt the discussion paragraph in the final version of the paper accordingly to make this clear. (Table 2 refers to the new results provided in this rebuttal.) 
``` Discussion and results. To summarize, our experimental findings are: (i) MOMBO matches the state-of-the-art MOBILE in normalized reward. Additionally, it outperforms other model-based offline RL approaches like COMBO (Yu et al., 2021) with 66.8, TT (Janner et al., 2021) with 62.3, and RAMBO (Rigter et al., 2022) with 67.7 as their respective average normalized reward scores across the same twelve tasks. These numbers are cited from Sun et al. (2023). (ii) MOMBO has a faster convergence speed. It learns faster and reaches its final performance earlier. This is reflected in its AUC score, where MOMBO outperforms the baselines. (iii) MOMBO provides improved uncertainty estimates for the Bellman target. Its moment-matching approach provides it with tighter and more accurate estimates, as shown in Table 2. (iv) MOMBO is more robust. It has a lower standard deviation of the normalized reward in six out of twelve tasks. This indicates better stability compared to MOBILE, which has a lower standard deviation in only three tasks. Note that for hopper-random, our model failed to learn in one repetition, leading to a high standard deviation and low average normalized reward. In conclusion, our MOMBO achieves state-of-the-art performance, exhibiting robustness and fast learning capabilities, aligning with the theory. ``` > However, the experiments in this paper do not show a strong improvement over the baseline (MOBILE). Upon three million gradient steps, both MOBILE and MOMBO perform very similarly, with a tendency for minor improvements in MOMBO. However, the AUC results, being calculated over ten evaluation results throughout the training process, show a clear distinction in the speed of convergence, with MOMBO outperforming MOBILE in all but one environment. Convergence speed is a crucial metric in real applications where it is not feasible to engineer the training duration based on evaluation results, which is the common practice for standard offline RL benchmarks. 
Additionally, this means that for a fixed amount of computational time/training data, we can train more powerful models than MOBILE, which can be expected to then translate into a larger normalized reward as well. Finally, the experiments performed for the rebuttal (see general answer) show improvements in accuracy and tightness indicating a better training signal for our proposed approach. > Could the authors evaluate their method on more challenging tasks to show that it can achieve better asymptotic performance compared to MOBILE? We provide additional performance metrics in our general answer demonstrating the improvements our method provides as well as preliminary results on the adroit domain. > Could the authors provide evidence to demonstrate that MOMBO can provide a more correlated/tighter estimation of the ideal uncertainty, i.e., the Bellman error? We have included an additional experiment in our main response that demonstrates improved performance in terms of accuracy and tightness upon both MOBILE and MOPO. Please let us know if this answer clarifies your concerns. ----- Jin et al., _Is Pessimism Provably Efficient for Offline RL?_ (ICML 2021) Yu et al., _COMBO: Conservative offline model-based policy optimization_ (NeurIPS 2021) Janner et al., _Offline reinforcement learning as one big sequence modeling problem_ (NeurIPS 2021) Rigter et al., _RAMBO-RL: Robust adversarial model-based offline reinforcement learning_ (NeurIPS 2022) Sun et al., _Model-bellman inconsistency for model-based offline reinforcement learning_ (ICML 2023) --- Rebuttal Comment 1.1: Comment: Thank you for your reply, my concerns have been mostly resolved. I have already improved my score.
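For reference, the AUC score invoked in this exchange — the area under the learning curve over the periodic evaluations — reduces to a trapezoidal rule over the recorded scores. A toy sketch with made-up numbers (not the paper's results):

```python
import numpy as np

def auc(steps, rewards):
    """Area under the learning curve via the trapezoidal rule."""
    steps = np.asarray(steps, dtype=float)
    rewards = np.asarray(rewards, dtype=float)
    return float(((rewards[1:] + rewards[:-1]) / 2 * np.diff(steps)).sum())

# Normalized rewards at ten evaluation points (made-up): both runs reach
# the same final score, but the fast one converges earlier.
steps = np.linspace(0.1, 1.0, 10)  # fraction of total gradient steps
slow = [5, 10, 20, 35, 50, 62, 70, 75, 78, 80]
fast = [15, 35, 55, 68, 74, 77, 79, 80, 80, 80]

print(auc(steps, slow), auc(steps, fast))  # fast run has the larger AUC
```

This is how equal asymptotic performance can still yield clearly different AUC scores, which is the distinction the rebuttal emphasizes.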
Summary: **I am unfamiliar with the methods/related works in this paper.** This paper proposes a model-based method for offline RL, based on moment matching. Improved numerical results are presented. Strengths: The method of moment matching is quite novel, aiming to improve the accuracy of the estimation of the first two moments. Weaknesses: I am quite confused about Sec 4. Why are the Gaussian RVs related? I feel the presentation of the paper can be improved. Technical Quality: 3 Clarity: 2 Questions for Authors: The authors mention that the uncertainty-based approach can be overly pessimistic, but some recent papers [1,2] show that uncertainty-based methods achieve the minimax optimal sample complexity under the tabular setting. Can you explain the reason for such a statement? [1] Li, Gen, et al. "Settling the sample complexity of model-based offline reinforcement learning." The Annals of Statistics 52.1 (2024): 233-260. [2] Wang, Yue, Jinjun Xiong, and Shaofeng Zou. "Achieving the Asymptotically Optimal Sample Complexity of Offline Reinforcement Learning: A DRO-Based Approach." Confidence: 2 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: n/a Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your thorough reading despite the unfamiliarity. > I feel the presentation of the paper can be improved. Given the overall feedback provided via the reviews, we updated the section title and presentation for Section 3, and the _discussion and results_ paragraph in Section 5 (see, e.g., answer to BmgH). Please let us know if further parts of the paper could benefit from presentation improvements in your opinion. > Why are the Gaussian RVs related? Assuming that the overall input follows a normal distribution is common in the literature, as the input comes from the learned environment model. Assuming in turn that the pre-activation output of a neural layer is also Gaussian can be justified via a central limit theorem argument. This is because the input to a specific neuron is a sum of random variables, i.e., the central limit theorem holds and the distribution converges to a Gaussian as the width increases. Additionally, when the computation of the first two moments is analytically tractable, the Gaussian distribution has the property of being the maximum entropy distribution, i.e., it imposes the fewest additional assumptions. The fact that Algorithm 1 ignores covariance terms is common practice in moment-matching approaches (see the references in the paper), as the computational complexity and memory cost scale quadratically in the number of neurons otherwise. > The authors mention that the uncertainty-based approach can be overly pessimistic, but some recent papers [1,2] are showing that the uncertainty-based methods achieve the minimax optimal sample complexity, under the tabular setting. Pessimism is required; however, current uncertainty-based approaches, e.g., MOPO and MOBILE, penalize too much. We do not contradict the two references, but show that our uncertainty estimator is tighter than our baselines'. See the additional experiments we provide in the main rebuttal answer. 
Note that the term "samples" in those references refers to the size of the underlying data set, see, e.g., Section 2.2 in Li et al. (2024). _____ Li et al., _Settling the sample complexity of model-based offline reinforcement learning_ (Annals of Statistics, 2024) --- Rebuttal Comment 1.1: Comment: Thank you for the response. The experiments do show some improvement, but I am still not convinced that the previous methods are too pessimistic. Considering the different construction of [Li et al. 2024] and the MOPO baseline, I am wondering whether MOPO is the SOTA of current uncertainty-based approaches. --- Reply to Comment 1.1.1: Title: Clarifications about Pessimism and SOTA Comment: Thank you indeed for your response. > Thank you for the response. The experiments do show some improvement, but I am still not convinced that the previous methods are too pessimistic. We believe the tightness scores we reported in the global response provide direct and strong evidence in favor of our claim that MOMBO applies less penalty than both MOPO and MOBILE. The experiments demonstrate that MOPO applies a significantly higher penalty than MOMBO and MOBILE, which is an expected consequence of Theorem 3.6 of Sun et al. (2023). To remind, the tightness score measures the difference between the penalty applied by a model and the actual error the Bellman estimator makes. See our global response for details. If there still remains an open issue regarding the relative pessimism of the models in comparison, we are happy to discuss further. > Considering the different construction of [Li et al. 2024] and the MOPO baseline, I am wondering whether MOPO is the SOTA of current uncertainty-based approaches. We actually consider MOBILE as the SOTA of the model-based offline RL models, not MOPO, given its publication date (ICML, 2023) and the lack of a widely known method that improves its performance using commensurate resources. 
That said, MOPO is still a reference model representing the option of reward penalization based on environment dynamics. MOBILE improves on MOPO by deriving a penalty term based on a sampling-based estimate of the Bellman target. Our contribution is to identify and characterize the problems this sampling-based approach causes and present a novel solution that addresses these problems both theoretically (see our Theorems 1 and 2, and comparison paragraph between lines 247-258.) and empirically (see our AUC results in Table 1 that demonstrate faster convergence). ----- Sun et al., _Model-bellman inconsistency for model-based offline reinforcement learning_ (ICML 2023)
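For concreteness, the first two moments of a ReLU applied to a Gaussian pre-activation — the building block of the moment-matching propagation discussed in this thread — admit a standard closed form. A self-contained sketch (our own illustration, not the authors' code) checks it against Monte Carlo sampling:

```python
import math
import numpy as np

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def relu_moments(mu, sigma):
    """Closed-form mean and variance of ReLU(z) for z ~ N(mu, sigma^2)."""
    a = mu / sigma
    mean = mu * Phi(a) + sigma * phi(a)
    second = (mu ** 2 + sigma ** 2) * Phi(a) + mu * sigma * phi(a)
    return mean, second - mean ** 2

# Sanity check against a large Monte Carlo sample.
rng = np.random.default_rng(0)
mu, sigma = 0.5, 1.2
samples = np.maximum(rng.normal(mu, sigma, 1_000_000), 0.0)
mm_mean, mm_var = relu_moments(mu, sigma)
print(mm_mean - samples.mean(), mm_var - samples.var())  # both close to zero
```

Propagating a Gaussian layer by layer with such formulas, rather than pushing samples through the network, is the deterministic alternative the rebuttals describe.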
Summary: This work addresses the issue of sampling for uncertainty propagation that is the standard practice in offline RL and identifies the high variance of sampling-based estimates as an obstacle to better performance of uncertainty-aware offline RL methods. As an alternative, the authors propose propagating uncertainty with moment matching, which is a deterministic procedure. The method involves propagating the uncertainty of state transitions into the value function, where the value function is parameterized with a ReLU network. The method, coined MOMBO, is further theoretically supported with tight bounds on value error approximation, and experiments on the standard Mujoco benchmarks indicate that MOMBO is on par with or exceeds existing approaches in offline RL. Strengths: - Uncertainty propagation in RL is a relevant problem to which there is not yet a definitive solution, mainly due to the interplay between uncertainty in state transition and uncertainty in value estimation. This work addresses this issue and proposes a method which is well motivated and theoretically supported. While the method itself does not touch upon the method of uncertainty quantification, rather only the propagation of uncertainty, I believe further investigation into the effect of various UQ methods when used with this propagation could be interesting. - Empirical results are fairly strong - Overall the paper is well written Weaknesses: While the authors conjecture that sampling-based approaches to uncertainty propagation have inherent flaws, it would be nice to see more evidence, e.g. empirical evidence that digs into this phenomenon. As such, the proposed method does not seem like a fundamentally novel approach, rather an improvement in estimation within a given framework. Still, the suggestion of moment matching as a scalable and efficient method of propagating uncertainty seems useful to the community and this paper would fit well alongside existing work. 
Technical Quality: 3 Clarity: 3 Questions for Authors: N/A Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your positive feedback. > While the authors conjecture that sampling-based approaches to uncertainty propagation have inherent flaws, it would be nice to see more evidence, e.g. empirical evidence that digs into this phenomenon We include an additional evaluation in our general answer above to demonstrate both our improvements in accuracy as well as tightness. --- Rebuttal Comment 1.1: Comment: Thank you for the additional experiments. I maintain my original rating, thank you.
Rebuttal 1: Rebuttal: We thank all reviewers for their feedback. We address all the reviewer comments in our individual responses. To summarize the main changes during the rebuttal phase: - We conducted an experiment to compare the quality of uncertainty quantifiers among the baselines and our MOMBO, which supports our thesis. _(see below)_ - We provide additional results on a new benchmark (Adroit). _(see below)_ - We reorganized Section 3, _discussion and results_ paragraph of Section 5, and contributions paragraph. _(see, e.g., answer to BmgH)_ ## Experiment on Uncertainty Quantifiers Per the reviewers' request, we conducted an experiment to provide empirical evidence that MOMBO offers a tighter estimation of the ideal uncertainty. To achieve this, we followed the definition of the $\xi$-uncertainty quantifier (see Definition 4.1 in Jin et al. (2021)): $$| \hat{\mathcal{T}}\hat{Q}(s, a) - \mathcal{T}\hat{Q}(s, a) | \leq \beta \cdot u(s, a)$$ for all $(s, a) \in \mathcal{S} \times \mathcal{A}$. We aimed to measure the quality of uncertainty penalizers and determine their tightness in model-based offline reinforcement learning algorithms, specifically comparing MOPO, MOBILE, and our algorithm, MOMBO. To achieve this, we report two scores: 1. **Accuracy**: For a given state-action pair, we check whether the equation above holds or not. This measures the quality of the uncertainty quantifier. 2. **Tightness score**: For a given state-action pair we calculate $$\text{Tightness score}= \beta \cdot u(s, a) - | \hat{\mathcal{T}}\hat{Q}(s, a) - \mathcal{T}\hat{Q}(s, a) |.$$ This score measures how tightly the uncertainty quantifiers can estimate the possible errors. Here, $\beta \cdot u(s, a)$ represents the penalty applied by the algorithm for a specific state-action pair. 
$\hat{\mathcal{T}}\hat{Q}(s, a)$ is the learned estimated Bellman target, calculated using learned dynamics, while $\mathcal{T}\hat{Q}(s, a) = r(s, a) + \gamma E_{s' \sim T} [E_{a' \sim \pi}[Q(s', a')]]$ is the true Bellman target. The true Bellman target requires calculating the real expectation for the next state and next action. Since the environments we used are deterministic, we used the actual next state from the environment dynamics and the actual reward. For the expectation of the next action, we used a large number of samples to approximate it accurately. ### Experiment Procedure: We load the learned policies and dynamics and generate 10 episode rollouts using the real environment in evaluation mode. From these rollouts, we select state, action, reward, and next state tuples at every 10th step, including the final step. For each of these tuples, we calculate the mean accuracy and tightness scores. This process is repeated for each of the 4 seeds across every task, and we report the mean and standard deviation of the accuracies and tightness scores. 
### Experiment Results #### Accuracy | | MOPO | MOBILE | MOMBO | |---:|:---:|:---:|:---:| | halfcheetah-random | 0.02 ± 0.01 | **0.25 ± 0.02** | 0.24 ± 0.07 | | hopper-random | 0.04 ± 0.01 | **0.85 ± 0.03** | 0.62 ± 0.07 | | walker2d-random | 0.0 ± 0.0 | 0.55 ± 0.04 | **0.74 ± 0.06** | | halfcheetah-medium | 0.05 ± 0.01 | 0.25 ± 0.0 | **0.34 ± 0.04** | | hopper-medium | 0.04 ± 0.01 | 0.41 ± 0.01 | **0.55 ± 0.03** | | walker2d-medium | 0.02 ± 0.0 | 0.16 ± 0.01 | **0.38 ± 0.02** | | halfcheetah-medium-replay | 0.08 ± 0.01 | 0.04 ± 0.0 | **0.16 ± 0.0** | | hopper-medium-replay | 0.02 ± 0.01 | 0.03 ± 0.01 | **0.17 ± 0.01** | | walker2d-medium-replay | 0.08 ± 0.01 | 0.14 ± 0.01 | **0.36 ± 0.02** | | halfcheetah-medium-expert | 0.11 ± 0.01 | 0.36 ± 0.03 | **0.44 ± 0.03** | | hopper-medium-expert | 0.04 ± 0.02 | 0.43 ± 0.02 | **0.51 ± 0.04** | | walker2d-medium-expert | 0.05 ± 0.01 | **0.47 ± 0.02** | 0.45 ± 0.02 | #### Tightness Score | | MOPO | MOBILE | MOMBO | |---:|:---:|:---:|:---:| | halfcheetah-random | -14.92 ± 1.14 | -6.88 ± 0.61 | **-6.21 ± 0.28** | | hopper-random | -1.81 ± 0.16 | -0.64 ± 0.2 | **-0.31 ± 0.65** | | walker2d-random | -98521227.61 ± 91264317.73 | -0.27 ± 0.08 | **-0.14 ± 0.02** | | halfcheetah-medium | -15.76 ± 0.66 | -10.03 ± 0.53 | **-9.06 ± 0.26** | | hopper-medium | -4.08 ± 1.16 | -3.14 ± 0.08 | **-1.3 ± 0.21** | | walker2d-medium | -8.98 ± 0.63 | -5.02 ± 0.52 | **-4.03 ± 0.24** | | halfcheetah-medium-replay | -12.16 ± 1.18 | -11.85 ± 0.41 | **-10.4 ± 0.66** | | hopper-medium-replay | -5.45 ± 0.59 | -3.4 ± 0.04 | **-3.17 ± 0.04** | | walker2d-medium-replay | -4.45 ± 0.43 | -4.56 ± 0.13 | **-3.78 ± 0.39** | | halfcheetah-medium-expert | -21.45 ± 2.79 | -13.22 ± 0.47 | **-11.88 ± 0.5** | | hopper-medium-expert | -7.12 ± 2.6 | -3.5 ± 0.03 | **-3.38 ± 0.02** | | walker2d-medium-expert | -9.72 ± 0.24 | -5.52 ± 0.36 | **-5.29 ± 0.14** | ### Discussion on Results To summarize the experiment results: MOMBO provides improved uncertainty estimates for 
the Bellman target on almost all data sets (9/12 in terms of accuracy, 12/12 in terms of tightness). This improved estimation leads to faster convergence due to clearer gradient signals and results in higher final performance. ## Adroit Per reviewer igqb's suggestion, we conducted an experiment on the Adroit domain of the D4RL dataset, specifically reporting on 'pen-human-v1' and 'pen-cloned-v1'. We skipped the other tasks because MOBILE does not provide their hyperparameters, and conducting an extensive hyperparameter search requires more time than the rebuttal period allows. We report the MOPO and MOBILE results from the MOBILE paper. Even without detailed hyperparameter tuning, we are competitive with MOBILE. (MOBILE provides hyperparameters in the appendix; however, these do not agree with those reported in their repository. We rely on the latter.) | | MOPO | MOBILE | MOMBO | |---:|:---:|:---:|:---:| | pen-human | 10.7 | 30.1 ± 14.6 | **32.0 ± 10.7** | | pen-cloned | 54.6 | **69.0 ± 9.3** | 63.5 ± 19.5 | _____ Jin et al., _Is Pessimism Provably Efficient for Offline RL?_ (ICML, 2021)
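The two scores defined in this global response reduce to a few lines of array arithmetic. A minimal sketch (variable names are ours), given per-pair penalties $\beta \cdot u(s,a)$ and Bellman-target errors $|\hat{\mathcal{T}}\hat{Q}(s,a) - \mathcal{T}\hat{Q}(s,a)|$:

```python
import numpy as np

def quantifier_scores(penalty, bellman_error):
    """penalty[i] = beta * u(s_i, a_i); bellman_error[i] = absolute
    Bellman-target error at pair i. Returns (accuracy, tightness)."""
    penalty = np.asarray(penalty, dtype=float)
    bellman_error = np.asarray(bellman_error, dtype=float)
    accuracy = float(np.mean(penalty >= bellman_error))   # does the bound hold?
    tightness = float(np.mean(penalty - bellman_error))   # closer to zero = tighter
    return accuracy, tightness

# Toy numbers: the quantifier covers two of the three pairs.
acc, tight = quantifier_scores([1.0, 0.5, 2.0], [0.8, 0.9, 1.5])
print(acc, tight)  # approximately 0.667 and 0.1
```

Negative tightness then means the penalty underestimates the actual error on average, which matches the sign convention of the tables above.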
NeurIPS_2024_submissions_huggingface
2024
Scalable Bayesian Optimization via Focalized Sparse Gaussian Processes
Accept (poster)
Summary: The paper presents a new method for scaling Bayesian Optimisation to large datasets by employing sparse GPs in a way that facilitates BO. The proposed method, FocalBO, fits a sparse GP at different scales, optimises the acquisition function for each of them, and selects the next point to query by sampling with probability proportional to its acquisition function value at its scale. The proposed algorithm is evaluated on a number of benchmarks where a large number of data points is available. Strengths: The paper addresses a very important and, in my opinion, understudied topic of scaling BO to large datasets. The proposed algorithm is elegant and does not introduce any new hyperparameters. It also seems to be delivering a significant improvement in performance over baselines. Weaknesses: - Since the paper is mostly empirical I would expect a few more baselines. The simplest way to scale TuRBO to large datasets is simply to keep the $N$ points closest to the optimal solution. It would be great to see how this naive "keep closest N TuRBO" baseline compares to FocalBO TuRBO. - "FocalBO with TuRBO (...) is the first GP-based method to achieve top-tier performance reported by prior MBO works." - this is a strong statement without much justification. Please either directly compare with MBO baselines or quote the exact numbers achieved by prior work on the exact same setting as considered in the paper. Technical Quality: 3 Clarity: 3 Questions for Authors: - In Equation 8, why is there a $-1$ at the end? Since it is a constant it should not affect the optimisation process at all. - I cannot understand why the authors refer to the ELBO as "ELBO Loss", since due to how it is defined, I imagine you would rather want to maximise it (as it corresponds to expected likelihood). Are you maximising the loss function? In that case, I do not think it is proper to call it that name. 
Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Within the paper authors do not discuss the limitations (or at least I cannot find them). Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
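The selection rule summarized in this review — sample the next query point with probability proportional to its acquisition value at its own scale — can be sketched as follows (all names are hypothetical; this is an illustration, not the authors' implementation):

```python
import numpy as np

def select_next_point(candidates_per_scale, acq_per_scale, rng):
    """Pick one candidate across all scales, with probability proportional
    to its acquisition value at its own scale (assumes non-negative values)."""
    cands = np.concatenate(candidates_per_scale)
    acq = np.concatenate(acq_per_scale)
    probs = acq / acq.sum()
    return cands[rng.choice(len(cands), p=probs)]

rng = np.random.default_rng(0)
# Two scales with one optimized candidate each; the second holds all the mass.
x = select_next_point([np.array([0.1]), np.array([0.9])],
                      [np.array([0.0]), np.array([1.0])], rng)
print(x)  # 0.9
```

Because selection is proportional rather than greedy, coarse scales can still be queried even when a fine scale has the highest single acquisition value.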
Rebuttal 1: Rebuttal: Thanks for your appreciation of our problem setting and algorithm design. We address your concerns below. Please find the rebuttal Figures by opening the rebuttal PDF file. **Baselines comparison** In **Figure R4**, we run the "keep closest N TuRBO" baseline (denoted as NN GP TuRBO) on both the robot morphology design and the human musculoskeletal system control task, and find that FocalBO consistently outperforms this simple baseline. Thanks to reviewer SUei, we find that FocalBO also outperforms the original TuRBO implementation on both high-dimensional benchmarks. The reason for TuRBO's poor performance may be that it cannot quickly adapt over the search space when the online evaluation budget is small. In terms of the statement in section 5.2, we find that there is no statistically significant difference between FocalBO and the best performing algorithm reported in the Design-Bench baselines (p value=0.1168). We will extensively compare the optimization performances against MBO baselines in the new version. **The design of $\mathcal{L}_{\text{reg}}$** Indeed, the -1 does not affect the loss, and is primarily for illustrative purposes. Specifically, we can write $\mathcal{L}_{\text{reg}} = \frac{ \sum_{i=1}^t w_i - |X \in S_{c,l}| }{ |X \in S_{c,l}| } = \frac{ \sum_{i=1}^t w_i -\sum_{i=1}^t 1_{x\in S_{c,l}} w_i }{ \sum_{i=1}^t 1_{x\in S_{c,l}} w_i } = \frac{ \sum_{i=1}^t 1_{x\notin S_{c,l}} w_i }{ \sum_{i=1}^t 1_{x\in S_{c,l}} w_i }$, which motivates the regularization term as minimizing the weights of points outside of the search region, while normalizing by the size of the search region (to account for decreasing search region sizes in FocalBO). **ELBO statements** Thanks for pointing out the confusing loss description - it's a typo on our end. We actually maximize the weighted data likelihood term and minimize the regularization term, therefore $\mathcal{L}_{\text{reg}}$ should be multiplied by $-1$. 
We will unify the definitions to the ELBO description to reduce confusion. **Limitations of our work** While FocalBO demonstrates superior performance against existing scalable BO algorithms, its theoretical analysis can be further improved to make it more rigorous. For instance, an analysis of the convergence of FocalBO would enhance our understanding of the algorithm’s mechanism. The currently-used hypercube search region may not align well with the underlying function landscape, potentially wasting computational resources on low-performing regions. A more sophisticated search region design such as [1] may help FocalBO further improve optimization performance. [1] Wang, Linnan, Rodrigo Fonseca, and Yuandong Tian. "Learning search space partition for black-box optimization using monte carlo tree search." Advances in Neural Information Processing Systems 33 (2020): 19511-19522. --- Rebuttal Comment 1.1: Comment: Thank you very much for your rebuttal. Given the new experiments, I am willing to increase my score to 7 (accept). Regarding the novelty issues mentioned by Reviewer SUei, I believe the TuRBO algorithm became universally accepted as the go-to solution when it comes to high-dimensional Bayesian optimisation. As such, in my opinion, the fact that FocalBO is merged with TuRBO to produce a stronger baseline is not really a problem, given that it severely outperforms other TuRBO-based baselines. It is clear that the main component critical to success is the FocalBO strategy. I also believe that FocalBO itself is sufficiently different from the previously proposed sparse-GP frameworks. --- Reply to Comment 1.1.1: Comment: Thank you for your feedback on the new experiments and the novelty of our work! We will improve the presentation of the manuscript by incorporating the new experimental results and highlight our novelty in the revised version.
Summary: Operating in the context of Bayesian optimization, the authors propose to train a surrogate model which focuses on a specific sub-region of the input space by weighting the log-likelihood contributions of each datapoint relative to their distance to that region. They also propose an algorithm for choosing this subregion over the course of optimization, which they demonstrate in numerical experiments. Strengths: The reasoning and motivation at a conceptual level is clear and the article is well-written at a high level. I really like Section 5.4 which gives us an idea of how the algorithm is attacking the problem. The test problems used in Sections 5.2 and 5.3 of the numerical experiments are outstanding, as they represent nontrivial problems and are of high dimension. Weaknesses: The Lreg term of equation 8 seems to come out of left field, and the implications of adding it to the cost in equation 9 are insufficiently discussed; I say more about this in the Questions section. Figure 1 is not super convincing: it's clear that with a different hyperparameter/inducing point setting (perhaps with admittedly more inducing points), SVGP would be able to interpolate as well. The theoretical implications section is underdeveloped, containing neither new math nor computational experiments verifying that the claims made there hold up empirically. Regarding novelty level, I would assess it as medium, as it falls within the existing frameworks of trust-region style optimization spawning from Eriksson et al as well as work about tailoring inducing point methods to consider specific parts of the input space (a different implementation but a similar spirit is [1]), though the combination with Bayesian optimization is in my view novel. [1] Cole, D. Austin, Ryan B. Christianson, and Robert B. Gramacy. "Locally induced Gaussian processes for large-scale simulation experiments." Statistics and Computing 31.3 (2021): 33.
Only for the benefit of the authors, I am attaching here some grammar/clarity issues which did not affect my scoring of this article: 201) shows an comparison -> shows a comparison 232) directly reduces 246-248) sentence spanning these two lines needs attention. 251) from GP -> from the GP 298) both large offline dataset and large number -> both a large offline dataset and a large number 304) inducing variable number -> number of inducing variables 374) "consistently augments" does not make sense in context; I'm not sure what you're trying to say. Technical Quality: 3 Clarity: 3 Questions for Authors: I understand that Lreg (Eq 8) is given such that it is divided by the number of points in the subregion and has one subtracted from it such that its minimum possible value is zero, and since the w's for any points inside S are constant, we are effectively minimizing the sum of kernel functions evaluated at extra-S training points with the projection of that point onto S. Why is this the right thing to do? It's going to bias lengthscale estimates to be small relative to a GP fit to all the data or even one fit only to data inside the subregion. Why is this desirable, and why is Lreg the right way to operationalize this? Am I correct that none of FocalBO TuRBO, SVGP TuRBO, or Vecchia TuRBO represent the actual algorithm originally proposed by Eriksson et al, but rather are re-implementations of TuRBO's strategy in other approximate GP frameworks? I think there are some interesting ideas here, but I regretfully am presently suggesting rejection as a result of the limited scope of the competitors in the numerical experiments, which in my view is required of a methodology paper with this medium level of conceptual novelty appearing in NeurIPS. I think this is a solid paper but does not yet meet the high bar of a NeurIPS article.
Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: You say that you don't introduce any additional hyperparameters, but it might be more accurate to say that you provide defaults for additional hyperparameters: you set the relative strength of Lreg to the inverse of the number of points in the search region (which makes sense for making the minimum of this objective zero, but it's not clear what implications the scaling has) and you have a specific heuristic for choosing the optimization depth. Why is it reasonable to believe this would still be the right thing to do on some other black-box test function outside of the five you've presented here? I think the problems in Sections 5.2 and 5.3 are really interesting problems. But to help compare against prior work, it would be really important to compare against TuRBO itself: though TuRBO uses an "exact" GP, they use numerical analysis tricks to scale to large datasets (glancing at the paper, they do 20,000 evaluations on a 60D problem). So it seems like TuRBO as originally presented by Eriksson et al should have been a competitor in this simulation study. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your detailed review. We are improving the paper based on your suggestions. We would also like to clarify your concerns in the following paragraphs. Please find the rebuttal figures by opening the rebuttal PDF file. **GP comparison in Figure 1** We would like to highlight that our goal in designing focalized GP is to **allocate a fixed set of limited computational resources to obtain better prediction over specific search regions instead of the entire input domain**. This contributes to better acquisition function optimization in BO, as local optimization is usually employed (lines 147-165 in the paper). For any function, there is some number of inducing points for which the sparse GP posterior converges to that of the full GP. In practice, however, this threshold is unknown, and may be prohibitively high for complex functions. Hence, Figure 1 attempts to illustrate that for a **fixed number of inducing points**, there is a failure mode in which attempting to model the full input space results in an overly smooth posterior, and that one possible remedy (ours) is to allocate this representational budget towards a subregion of the input space. **The design of $\mathcal{L}_{\text{reg}}$.** Without this regularization term, we observed that the ELBO loss can lead to arbitrarily high weights, leading to large lengthscales and poor fidelity within the search region. Hence, we designed $\mathcal{L}_{\text{reg}}$ to emphasize prediction within the search region, since we are only optimizing within those subregions in FocalBO.
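The reviewer's reading of the weighting scheme (a kernel evaluated between each point and its projection onto the search region, constant weight inside) can be sketched as follows; the function name, the RBF choice, and the lengthscale default are illustrative assumptions, not the paper's exact formulation:

```python
import math

def region_weights(X, lo, hi, lengthscale=0.2):
    """Illustrative sketch of distance-based data weights for a focalized GP.

    Each point's weight is an RBF kernel value between the point and its
    projection onto the hypercube search region [lo, hi]^d, so points inside
    the region get weight 1 and the weight decays with distance to the region.
    """
    weights = []
    for x in X:
        # Projection of x onto the box: clip each coordinate to [lo, hi].
        proj = [min(max(xi, lo), hi) for xi in x]
        d2 = sum((xi - pi) ** 2 for xi, pi in zip(x, proj))
        weights.append(math.exp(-0.5 * d2 / lengthscale ** 2))  # RBF kernel
    return weights
```

For a point inside the region the projection is the point itself, giving weight 1; the farther a point lies from the region, the smaller its log-likelihood contribution in the weighted ELBO.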
To better illustrate the formulation of $\mathcal{L}_{\text{reg}}$, we can rewrite it as $\mathcal{L}_{\text{reg}} = \frac{ \sum_{i=1}^t w_i - |X \in S_{c,l}| }{ |X \in S_{c,l}| } = \frac{ \sum_{i=1}^t w_i - \sum_{i=1}^t 1_{x_i\in S_{c,l}} w_i }{ \sum_{i=1}^t 1_{x_i\in S_{c,l}} w_i } = \frac{ \sum_{i=1}^t 1_{x_i\notin S_{c,l}} w_i }{ \sum_{i=1}^t 1_{x_i\in S_{c,l}} w_i },$ in which the numerator encourages reducing the weights of points outside the search region (leading to better estimation within the search region), and the denominator scales the value of $\mathcal{L}_{\text{reg}}$ according to the number of data points inside the search region. Using this relative quantity allows us to perform hyperparameter-free regularization across different search region sizes. In our experiments, we observed that adding $\mathcal{L}_{\text{reg}}$ did help avoid overfitting to large lengthscales and enabled better local estimation compared to SVGP (Figures 7 and 8 in the appendix). --- Rebuttal 2: Title: Rebuttal (2/4) Comment: **Theoretical implications of sparse GP approximation** We agree that the theory section can be fleshed out more, and will do so by (1) providing more precise formulae for the additional regret incurred due to too-sparse approximations, and (2) experimental and numerical evidence for the increased approximation fidelity of Focalized GP. For the former, we do not aim to propose any new theorems due to the restrictive nature of common assumptions, the lack of guarantees for ELBO-based methods, and the difficulty of controlling the effective number of inducing points within the local region. However, we will better motivate our method by providing examples of high KL error under reasonable assumptions, e.g. those used in SVGP-TS [1]. In **Figure R2**, we also empirically measure our claim that Focalized GP can significantly reduce approximation error on the search region. We sampled 8000 training points from 2d GP functions to train focalized GP and SVGP.
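For reference, the KL divergence between two Gaussian predictive distributions (shown here in the univariate, per-test-point case) has a standard closed form; this is a generic sketch, not the authors' code, and the joint multivariate version used for such posterior comparisons replaces the variances with covariance matrices:

```python
import math

def gaussian_kl(mu0, var0, mu1, var1):
    """KL( N(mu0, var0) || N(mu1, var1) ) for univariate Gaussians.

    Generic closed form (illustrative, not the authors' code) of the type of
    quantity used when comparing a sparse-GP posterior against the exact-GP
    posterior at test points inside the search region.
    """
    return (
        0.5 * math.log(var1 / var0)
        + (var0 + (mu0 - mu1) ** 2) / (2.0 * var1)
        - 0.5
    )
```

The KL is zero when the two posteriors coincide and grows as the approximate posterior's mean or variance drifts from the exact one, which is why a smaller KL indicates a tighter approximation.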
Over different sizes of the search region, we compare the KL divergence between the GP posterior predictions of the sparse GPs and the exact GP over the search region. We observe that the KL divergence between focalized GP and exact GP is consistently smaller than that between SVGP and exact GP, implying a tighter approximation to the exact GP over the local region. While a rigorous regret bound is hard to derive, we conduct an empirical study where we directly compare the optimization performance between focalized GP and SVGP when combined with TuRBO. In this way we can eliminate the influence of hierarchical acquisition optimization. The optimization performances are shown in **Figure R3**. We observe that focalized GP outperforms SVGP on both high-dimensional problems, which empirically supports our theoretical implication that Focalized GP contributes to reducing regret. [1] Vakili, Sattar, et al. "Scalable Thompson sampling using sparse Gaussian process models." Advances in neural information processing systems 34 (2021): 5631-5643. --- Rebuttal 3: Comment: **The novelty of our work** Here we highlight the novelty of FocalBO in both GP and BO aspects. **We design the first sparse GP model that improves acquisition function optimization**. Our design of focalized GP enables **better local estimation, aiming at improving acquisition function optimization during Bayesian optimization**. We adopt variational inference to derive focalized GP, which is **capable of performing joint posterior inference and sampling given a set of test points, just like an exact GP**. Therefore our proposed GP model is automatically compatible with any acquisition function that is used for exact GPs (Figure 2). The locally induced GP (LIGP) in the cited work [2] is designed for regression tasks: it assigns different inducing points to every test point, aiming to improve the point estimation measured by MSE.
The inability to compute joint posteriors and its discontinuous predictions prevent LIGP from easily being incorporated into the BO framework. **We propose the first scalable BO algorithm that is capable of utilizing large offline datasets for optimization.** Our design of hierarchical acquisition function optimization with focalized GP **searches promising positions over different scales of search regions during one BO iteration, enabling decisions based on both global and local information under a restricted computational budget**. The overall idea of TuRBO [3] is to restrict the search space to a fixed size during one BO iteration, and adjust the search region size based on the optimization results. It achieves scalability by discarding previous samples and restarting when the search region shrinks below a certain threshold. Their use of an exact GP still faces the scalability issue and would be computationally intensive when handling large offline datasets. As mentioned in the last paragraph of Section 4.2, our proposed framework is orthogonal to TuRBO, and we find that FocalBO and TuRBO can have a complementary effect in Sections 5.2 and 5.3. [2] Cole, D. Austin, Ryan B. Christianson, and Robert B. Gramacy. "Locally induced Gaussian processes for large-scale simulation experiments." Statistics and Computing 31.3 (2021): 33. [3] Eriksson, David, et al. "Scalable global optimization via local Bayesian optimization." Advances in neural information processing systems 32 (2019). Title: Rebuttal (3/4) --- Rebuttal 4: Title: Rebuttal (4/4) Comment: **The rationality of FocalBO design** FocalBO has two main components: the focalized GP and the hierarchical acquisition optimization. In the clarification of the regularization term, we explained that the scaling is used to maintain a similar scale across different search region sizes, and our empirical evaluation demonstrates the rationality of our loss function design.
Below we give some empirical results and the intuition behind the design of hierarchical acquisition optimization. After each BO iteration, we use a simple yet effective heuristic to adjust the optimization depth for the next round, without introducing additional hyperparameters. We encourage more exploitation (increase the depth) when finding good points in the smallest search region, and encourage more exploration (decrease the depth) when good points are found in broader search regions. On our set of diverse benchmarks - including synthetic and commonly-used test functions, as well as two challenging high-dimensional problems - this simple heuristic worked without any tuning required. **Comparison with original TuRBO** Our current experiments with TuRBO are all re-implementations to make sure baselines have the same computational cost. We run the original TuRBO implementation (with exact GP and Thompson sampling) on both the robot morphology design and human musculoskeletal system control tasks (**Figure R4**). We observed that FocalBO outperforms TuRBO on both tasks with smaller computational cost. The reason for TuRBO's poor performance may be that it cannot quickly adapt over the search space when the online evaluation budget is small. **Typos and unclear statements** Thanks for pointing out the typos. We will correct them and improve our paper in the new version. --- Rebuttal Comment 4.1: Comment: I’ll reply to all 4 of your rebuttal messages here. Regarding Figure 1: Thanks for challenging me on this and I think you’re right. Regarding Lreg: Thanks for pointing out this alternative characterization of that cost function. I agree that heuristically this sort of makes sense as a way of dealing with some of the problems that arose. But what I’m suggesting is that it’s not convincing that this will work on general problems.
The modification appears ad-hoc and appears not to be motivated by a statistical model, numerical approximation, or otherwise a precise idea of how big that term should be in order to balance against the expected log-likelihood and KL terms. Regarding the theoretical implications: Thanks for these simulations, they are interesting and help to build understanding of the proposed methodology. Regarding novelty of the focal approximation: I agree that there are fundamental differences between LIGP and your work, and I do not know of prior work which uses a “goal oriented” inducing point approximation for BO. I reference LIGP to point out that the idea of teleologically determining inducing points is not novel; it is rather its application to BO which is novel. Regarding Scalability: I must for now push back on the idea that the proposed method’s scalability is novel. The approach used in your article for scaling the GP falls into the meta-approach of somehow approximating the GP, and then doing the linear algebra exactly. It’s true that TuRBO does not use this meta-approach. However, an alternative meta-approach used for GPs is to avoid approximating at the statistical/GP level, and instead to approximate the linear algebra directly. This latter approach is the one taken by Eriksson et al in their original implementation, which leverages PyTorch and fancy linear algebra to avoid direct Cholesky decomposition. Consequently, looking back at Eriksson et al, it seems like their largest dataset is of size 20,000. If I am not mistaken, the largest study in the proposed article in terms of total sample size is the Robot Morphology problem with a total problem size of 10,128, which is actually less. Furthermore, all but the last 128 of these are acquired offline. A priori, I would imagine that the fact that the ELBO has to be re-optimized at every level of depth makes for some amount of overhead.
However, you did mention in your rebuttal that your algorithm had a smaller computational cost than TuRBO, though without providing the numerical results; I understand you may not have been able to fit those results onto the single rebuttal page. But on the basis of the observed experiments, we actually find that TuRBO’s original article contains the larger problems. (Please let me know if I missed or misinterpreted something). Regarding the comparison with the original TuRBO: Thanks very much for running these experiments, those are very helpful. It’s impressive that your method is able to keep ahead of it on those important applications. Regarding the rationality of FocalBO design: As I explained above, I still don’t understand why it would be the case that the Lreg term will always have the right magnitude relative to the Lwll and Lkl terms. But regarding the optimization depth criterion, I think I agree that what you’re doing makes sense. In conclusion: I thank the authors for the additional experiments and for engaging with my review. I think the direct comparisons against TuRBO are convincing, so I’ll raise my score to a 4/borderline reject. Again, I think this is an interesting paper which clearly makes incremental progress on this challenging problem. Regretfully, my opinion is still that this paper does not have the novelty/impact necessary for a 2024 NeurIPS publication (particularly because the large-offline setting, though certainly not contrived, is I would argue somewhat niche), but nor do I feel strongly that this article should be excluded from these proceedings. PS, upon reviewing the article, I noticed on line 188 that it’s not a stationary kernel that you’re looking for; stationary kernels which are periodic form counterexamples to your claim. Rather, it is kernels which decrease monotonically wrt some distance.
(Just pointing it out for your benefit; no need for us to litigate this minor point and it does not impact my score; it's indeed clear that this projection will usually be efficiently computed in practice). --- Reply to Comment 4.1.1: Comment: Thank you for providing detailed feedback and raising the score! Below, we address your concerns regarding the scalability of FocalBO. As mentioned in our novelty rebuttal section, the scalability of TuRBO is primarily achieved through a restart mechanism, which discards all previously sampled data and re-initializes the optimization process. In our reproduction of the experiments from the original TuRBO paper, we found that the average number of evaluations in the rover planning problem (originally reported as 20,000 total evaluations) before restarting was $3198.7\pm649.8$ (mean$\pm$1 std, averaged over 77 restarts). This suggests that the actual data size used for the GP in TuRBO is much smaller than the data size in our addressed problem. Regarding the GP implementation in the original TuRBO, we agree that the conjugate gradient method and Lanczos decomposition in GPyTorch help to accelerate and parallelize computation via GPU. To further demonstrate the computational efficiency of our method, we ran FocalBO and the original TuRBO on an NVIDIA GeForce RTX 2080Ti with 11 GB of memory, instead of an NVIDIA A100 with 80 GB of memory. We observed that FocalBO was still capable of optimizing the robot morphology design task, while TuRBO encountered an out-of-memory error during GP training, even when using approximate computing methods. We believe that data size remains an issue in TuRBO, and this may be one of the reasons why TuRBO introduces a restart mechanism.
Summary: This paper focuses on scalable Bayesian optimization, where the training data size is large and the search space is high-dimensional. This work proposes a new kind of sparse GP model by designing a variational loss function that allows for adaptively focusing on interesting regions throughout the optimization process. By coupling this GP model with common acquisition functions such as EI, TS, and PI, the authors show in experiments the effectiveness of this newly developed GP model. The authors also provide comprehensive algorithm analysis to further decipher the exploration vs. exploitation behavior of the algorithm. Strengths: The scalability issue of GPs has greatly limited the application of BO to real-world problems. This paper introduces a novel sparse GP model and its complementary optimization strategy to solve this issue. The idea is intuitive and elegant, and with the compelling empirical study, I think this is a valuable piece of work for the field. Weaknesses: Minor error: - L130, "more that" -> "more than" Technical Quality: 4 Clarity: 3 Questions for Authors: - One concern is that the search region is always centered around the current best position. In the case when the best point from the initially sampled points is far away from the true global optimum, and the landscape of the objective function is not very smooth, the algorithm will have a hard time attaining the global optimum quickly, because Focalized GP reduces to a regular sparse GP in the broader search region and suffers from inaccurate prediction. This issue might explain why, in Figure 3, "FocalBO TuRBO" has a better performance than "FocalBO EI" (because TuRBO divided up the search space). I wonder if there is any better way of setting the center of the search region to encourage more exploration. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The discussion of limitations is absent in the paper, and it would be better for the authors to discuss the limitations of this work.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation of this work! We address your question below. The rebuttal figures can be found by opening the rebuttal PDF file. **Better way to center the search region** A plausible alternative would be to instead center the search region at the maximum of the chosen acquisition function. We empirically investigate this in **Figure R1 (a)**, which compares different ways of selecting the search region center by measuring the distance from the search region center to the global optimum. We observe that the current best point is consistently the closest to the global optimum, which validates this design choice. For the experiment above, we sampled 2d functions from GPs with a Matern $\frac{5}{2}$ kernel and lengthscale of 0.05 (representing rigid functions), and selected the best point over 10,000 uniformly sampled points as the global optimum. A sparse GP is already more explorative than the full GP, since the smaller representational capacity leads to smoother posteriors. We demonstrate this empirically in **Figure R1 (b)**, where we measure the pairwise distances of 100 Thompson sampling points under exact GP and SVGP (with 50 inducing points). We observe that the sparse GP actually samples more diverse sets compared to exact GPs, i.e. exhibiting more exploration. Therefore, using focalized GPs does not sacrifice exploration, and significantly helps exploitation by performing acquisition function optimization over smaller search regions. Regarding Figure 3, we believe that the better performance of "FocalBO TuRBO" may be because utilizing trust regions helps FocalBO to better exploit at early optimization depths, leading to faster convergence with limited evaluation budgets. **Limitations of our work** While FocalBO demonstrates superior performance against existing scalable BO algorithms, its theoretical analysis can be further improved to make it more rigorous.
For instance, an analysis of the convergence of FocalBO would enhance our understanding of the algorithm’s mechanism. The currently-used hypercube search region may not align well with the underlying function landscape, potentially wasting computational resources on low-performing regions. A more sophisticated search region design such as [1] may help FocalBO to further improve the optimization performance. [1] Wang, Linnan, Rodrigo Fonseca, and Yuandong Tian. "Learning search space partition for black-box optimization using monte carlo tree search." Advances in Neural Information Processing Systems 33 (2020): 19511-19522. --- Rebuttal Comment 1.1: Comment: Thanks for providing empirical evidence for the design choice of the search region center. I still think this work makes an adequate contribution to the field; however, regarding SUei's comment on the novelty / impact of this work, the authors did not offer more evidence, so I would revise the score to 7 (Accept). --- Reply to Comment 1.1.1: Comment: Thank you for your feedback. Below, we clarify the novelty and impact of our work. **The novelty of our work** **We design the first sparse GP model that improves acquisition function optimization**. Our design of focalized GP enables **better local estimation, aiming at improving acquisition function optimization during Bayesian optimization**. We adopt variational inference to derive focalized GP, which is **capable of performing joint posterior inference and sampling given a set of test points, just like an exact GP**. Therefore our proposed GP model is automatically compatible with any acquisition function that is used for exact GPs (Figure 2). Previous works that allocate inducing variables based on test points cannot be directly incorporated into the BO framework. Reviewer SUei also recognized that focalized GP is **fundamentally different** from previous works.
**We propose the first scalable BO algorithm that is capable of utilizing large offline datasets for optimization.** Our design of hierarchical acquisition function optimization with focalized GP **searches promising positions over different scales of search regions during one BO iteration, enabling decisions based on both global and local information under a restricted computational budget**. Besides, FocalBO also demonstrates **superior optimization performance in the large online setting** in our human musculoskeletal system control task. Regarding Reviewer SUei's concerns about scalability, we emphasize that **the focalized GP used in FocalBO shares the same $\mathcal{O}(m^3)$ computational complexity as SVGP**, while the complexity of TuRBO is $\mathcal{O}(n^3)$ due to its use of an exact GP. We addressed this point by investigating the restart mechanism of the original TuRBO and demonstrating a failure case of TuRBO under low computational resources. Our numerical experiment also demonstrates that TuRBO needs significantly more GPU memory than FocalBO even under approximated linear algebra computation (78.9 GB vs. 7.8 GB). Our empirical evidence demonstrates that FocalBO is more scalable than the original TuRBO. **The impact of our work** We argue that our addressed problem setting is not niche. **The problems we tackle in the experiment section include both the large offline and large online settings, where FocalBO consistently demonstrates superior performance.** In the human musculoskeletal system control task, FocalBO outperforms TuRBO with 3000 online evaluations, which we consider to be a substantial number. We emphasize the importance of the large offline setting as no previous scalable BO methods have been able to effectively handle it. We believe our setting is important because the problems addressed by BO are typically expensive to evaluate. Instead of collecting data online from scratch, many problems already have data available from various sources.
FocalBO’s ability to utilize larger datasets enables it to tackle challenging problems where the online evaluation budget is limited.
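The diversity measurement described for Figure R1 (b) above, pairwise distances among Thompson-sampling candidate points, can be sketched as follows (illustrative code, not the authors'):

```python
import math

def mean_pairwise_distance(points):
    """Illustrative sketch of the diversity metric described for Figure R1 (b):
    the mean Euclidean distance over all pairs of Thompson-sampling candidate
    points; larger values indicate that the posterior samples explore more of
    the space."""
    n = len(points)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            d = math.sqrt(sum((a - b) ** 2 for a, b in zip(points[i], points[j])))
            total += d
            pairs += 1
    return total / pairs
```

Under this metric, a sampler whose candidates cluster around one mode scores low, while one whose candidates spread across the domain scores high, which is the sense in which the sparse GP above "samples more diverse sets".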
Rebuttal 1: Rebuttal: We would like to thank the reviewers for helpful and insightful reviews. In addition to addressing the comments below, we will incorporate the suggestions and new figures into the camera-ready version. Our additional experimental results are attached in the rebuttal PDF below. Pdf: /pdf/e6613a2d1008fdb4040a4311ba8e833434e9aaa7.pdf
NeurIPS_2024_submissions_huggingface
2024
Towards Stable Representations for Protein Interface Prediction
Accept (poster)
Summary: This work focuses on protein interface prediction, i.e., determining whether a pair of residues from different proteins interact. It notices the conformational change within the protein upon binding and regards the flexibility as an attack on the trained model. An adversarial training framework named ATProt is introduced to defend against this kind of attack with a theoretical guarantee. Experiments demonstrate consistent improvements. However, I am not completely certain about many unignorable points, such as the experimental setting, the missing up-to-date baselines, etc. I am absolutely fond of the idea of being robust to conformational changes and give a score between 4 and 5 (leaving the decision to other reviewers and the AC). Strengths: (1) This paper pinpoints an important topic in computational biology, that is, the domain shift between bound and unbound structures when evaluating deep learning models. It is a real-world dilemma that scientists do not hold bound-form structures for new proteins. (2) ATProt is proposed to achieve stability regularization. It has a theoretical upper bound associated with the Bernstein basis, and a stability-oriented objective is introduced. The overall framework can be generalized to many sorts of GNNs, which is also a big strength. (3) The authors conducted many different experimental settings, including native unbound structures and structures produced by AF2 and ESMFold. The results showcase the superiority of ATProt and demonstrate the performance drop of DL models when using predicted or native structures as inputs. Weaknesses: (1) The problem formulation is kind of confusing. The authors target classifying all possible pairs of residues from separate proteins. I suppose this may not be the regular version of protein interface prediction (PIP). The standard one would be to classify whether each residue on the ligand and receptor is on the interface or not.
The PIP task in this paper is more challenging than the standard PIP task, and is very close to (but easier than) the protein-protein docking problem, because once the pairwise interactions are recognized, the docking pose is nearly known. I recommend the authors reconsider the problem definition and formulate it more carefully. (2) No adequate ablation study. I believe a more proper ablation analysis is needed to showcase the contribution of each component. For instance, it would be interesting to see ATProt-SGC/CHEBY/LPF's performance without the stability regularization. (3) More baselines are required. I notice that all baselines are pretty old: NEA (2017), dMasif (2021), BIPSPI (2019), SASNet (2019), DTNN (2017), and DCNN (2016). It is not convincing enough to select these old-school algorithms as competitors. There has been a surge of interest in developing DL models for interface identification. Please see below. Moreover, it is necessary to also evaluate ESM-Fold and AlphaFold-2 (if possible, AF3) on this problem, because they can also immediately output the complex structures (so the interface is given) from sequences. These structure prediction models completely do not suffer from structural perturbation. [A] Epitope 3D: a machine learning method for conformational B-cell epitope prediction. Briefings in Bioinformatics, 2022. [B] ScanNet: an interpretable geometric deep learning model for structure-based protein binding site prediction. Nature Methods, 2022. [C] PesTo: parameter-free geometric deep learning for accurate prediction of protein binding interfaces. Nature Communications, 2023. [D] SEMA-2.0: antigen B-cell conformational epitope prediction using deep transfer learning. Frontiers in Immunology, 2022. [E] Pair-EGRET: enhancing the prediction of protein-protein interaction sites through graph attention networks and protein language models. ICLR Workshop, 2023. 
(4) I really appreciate the authors' efforts in combating the detrimental effect of structural change on DL models. However, there is an unavoidable issue that I want to discuss with the authors and AC. If our goal is to bridge the gap between training (bound) and test (native/unbound) datasets, why do the authors not directly train the model on unbound or predicted structures? To be specific, we can use AF2/ESMFold to obtain unbound structures and leverage them as the training data. Moreover, it is also feasible to rely on Rosetta, PyMOL, OpenMM, etc. to relax the bound structures with some force fields and approximate the unbound circumstance. In the AI4S community, I believe the solution space is more flexible and we should consider more simple but effective methods to solve the problem. There have already been some works [A] that mention this representation shift between real structures and predicted structures. [A] Protein 3D Graph Structure Learning for Robust Structure-Based Protein Property Prediction. AAAI 2024. Technical Quality: 3 Clarity: 4 Questions for Authors: (1) In Figure 3, how do the authors increase the structure change of test samples? I agree with Eq. 5's use of \Delta X to represent the structure perturbation. However, I suppose there are some constraints on this \Delta X. Notably, protein flexibility is not entirely random; the atoms' movement needs to follow some physical rules. See [A] for illustration. [A] Fractional denoising for 3D molecular pre-training. ICML 2023. (2) As far as I am concerned, dMasif is a surface point cloud-based mechanism, and it only outputs predictions for surface points instead of residues. How do the authors make the comparison compatible? (3) The conformational change is fancy without doubt, and many studies have been trying to do relevant analysis. For instance, ProtMD [A] pretrains the DL model on MD data to learn protein flexibility in the continuous time domain. 
Can the authors do an additional experiment to see whether this category of pretraining is effective in mitigating the negative effect of conformational change for PIP? [A] Pre-training Protein Geometric Models via Molecular Dynamics Trajectories. Advanced Science, 2022. (4) It looks odd to me that the pretraining on DIPS brings a negative impact on dMasif but positive impacts for all the others. After pretraining, dMasif drops from 0.928 to 0.922 for native-bound, from 0.912 to 0.903 for native-unbound, etc. Can the authors explain this counterintuitive phenomenon? Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes, the limitation is addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer's thorough review and insightful feedback. We also appreciate the reviewer's recognition of the core idea and contribution of this paper. We will respond to these comments point-by-point. **[Cons. 1 Problem Formulation]** Thank you for your professional comments, which we fully agree with. We will make some adjustments to avoid confusion. To address your concerns, we make the following efforts. 1. Firstly, we clarify that existing research related to our task falls into three categories: partner-specific interface prediction (PS-IP), partner-specific binding site prediction (PS-BSP), and partner-agnostic binding site prediction (PA-BSP). **PS-IP** is the problem studied in our paper, which is to predict whether there is an interaction between a pair of residues of two different proteins. In many cases (e.g., NEA, SASNet, BIPSPI, HOPI), this task is referred to as 'protein interface prediction'. **PS-BSP**, as mentioned in your comment, predicts whether the residues belong to the interface. For this task, the interface of a protein is conditioned on its partner. **PA-BSP** predicts all possible binding sites of a protein. It is the main setting of ScanNet and PesTo, which does not assume knowledge of the partner. 2. Although the applications of the three tasks differ, they are strongly related. We will mention the above three tasks and their differences in the abstract and introduction sections. Importantly, in the related work section, we will provide descriptions of the baseline methods belonging to each. **[Cons. 2 More Ablation Study]** Thanks for the valuable suggestion. To address your concerns, we show ablation results on stability regularization (SR) in Table 3,4 in the one-page PDF. We find that: (1) In the Native-Bound scenario, SR is ineffective, while in the remaining (real-world) scenarios, SR significantly improves all four cases. 
(2) The gain of SR on the SGC encoder is relatively small among the four cases, which may be due to the direct reduction of the polynomial order $K$ in SGC. Therefore, the expressive power is explicitly diminished. **[Cons. 3 More Baselines]** Thanks for the constructive feedback. Regarding the five papers provided, we find that, except for Pair-EGRET, they are all designed for the task of 'partner-agnostic binding site prediction'. However, we find that PesTo can perform the PIP task after simple code modifications. **In this way, we consider Pair-EGRET, PesTo, ESM-Fold and AF2 as additional baselines.** The results are shown in Table 7 in the one-page PDF. We find that ATProt can still outperform AF2 (actually AF-Multimer) and ESM-Fold. **[Cons. 4.1 Use AF2 to Generate Training Data]** This is a topic very worthy of discussion. Frankly speaking, this was our initial attempt. Table 7 in the one-page PDF shows the results. The results indicate that it is difficult to achieve SOTA using the suggested methods. Regarding this, our insights are: 1. There may still be a distribution shift between the training and testing sets. This is because the cropping process of AF2 considers monomers and multimers. This results in AF2's training set being a mixture of bound and unbound structures. Thus, using AF2 to generate both training and test sets may not necessarily guarantee a consistent distribution. 2. It is not very practical to use AF2 to generate training structures on a large-scale dataset (e.g., DIPS with over 40,000 dimers), as it would be very time-consuming. 3. Our method allows **training only once** and then testing on **various versions of structures**. This uniformity and wide applicability are also a potential advantage. **[Cons. 4.2 Relax the Bound Structures]** Performing force field optimization based on first principles is a very insightful idea and has been proven effective [1]. However, molecular dynamics (MD) often requires a huge computational cost. 
Thus, applying MD to large-scale datasets is a challenge. [1] Benchmarking Refined and Unrefined AlphaFold2 Structures for Hit Discovery. ACS, 2023. **[Cons. 4.3 Representation Shift]** Thanks for providing this literature called SAO. SAO indicates that there is room for improvement when training and testing both on AF2 structures (i.e., its TonP baseline). Although the tasks are different, the deep motivations of ATProt and SAO are similar: both aim to enable effective inference using structures from various sources. **[Q1. How to Change Structures]** We strongly agree with your point that flexibility cannot be arbitrarily generated. We applied a dynamics-guided tool called ProDy. It can sample structures that differ from the initial structure by a specific value (RMSD). **[Q2. About dMaSIF]** The main point of dMaSIF is to generate point cloud representations. However, the dMaSIF paper also contributes a geometric convolutional model, enabling it to address structure-related downstream tasks. Importantly, the PIP task is implemented in the dMaSIF paper. **[Q3. ProtMD]** Thanks for providing this valuable literature, which is user-friendly and open-source. In Table 7 of the one-page PDF, we present the experimental results of ProtMD. We consider the results of fine-tuning (ProtMD-FT) and linear probing (ProtMD-LP). Surprisingly, its performance slightly exceeds the baselines specifically designed for PIP. **[Q4. Pre-training with dMaSIF]** Thanks for the thorough findings. The quality of the DIPS and DB5.5 datasets differs slightly. The latter is an expert-annotated "gold standard" dataset, while the dimers mined in DIPS are mainly from multimers in the PDB, which do not reflect pure **binary interaction knowledge**. Facing such a gap, negative transfer (where pre-training has a negative impact) is common for some models because they are not designed for the pretraining-finetuning paradigm. 
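The Q1 answer relies on ProDy to sample conformations at a prescribed RMSD from the initial structure; ProDy does this along physically meaningful normal modes. As a minimal stdlib-only sketch of just the RMSD-targeting step (ignoring the mode-based physical constraints, with `perturb_to_rmsd` a hypothetical helper, not ProDy's API), a raw perturbation $\Delta X$ can be rescaled so the perturbed structure sits at an exact target RMSD:

```python
import math
import random

def perturb_to_rmsd(coords, delta, target_rmsd):
    """Rescale a raw perturbation ``delta`` so the perturbed structure lies at
    exactly ``target_rmsd`` from ``coords`` (RMSD taken as the root-mean-square
    per-atom displacement, without realignment)."""
    n = len(coords)
    current = math.sqrt(sum(dx * dx + dy * dy + dz * dz for dx, dy, dz in delta) / n)
    s = target_rmsd / current
    return [(x + s * dx, y + s * dy, z + s * dz)
            for (x, y, z), (dx, dy, dz) in zip(coords, delta)]

random.seed(0)
# Toy C-alpha coordinates and an unconstrained random perturbation.
coords = [(random.gauss(0, 5), random.gauss(0, 5), random.gauss(0, 5)) for _ in range(50)]
delta = [(random.gauss(0, 1), random.gauss(0, 1), random.gauss(0, 1)) for _ in range(50)]
new_coords = perturb_to_rmsd(coords, delta, target_rmsd=1.5)

achieved = math.sqrt(sum((a - b) ** 2
                         for p, q in zip(new_coords, coords)
                         for a, b in zip(p, q)) / len(coords))
print(round(achieved, 6))  # → 1.5
```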
--- Rebuttal Comment 1.1: Title: Update Comment: Thanks for the authors' efforts in answering my questions. I am glad to see that ATProt-Bern outperforms all essential baselines and ranks No. 1 in this binding site prediction task. I am also happy to know that my suggestion helped the authors clarify more clearly the concepts and definitions of interface vs. binding site. I understand that a conference paper cannot totally solve the binding site prediction task and there is still a long way to go, e.g., considering more training data (like using the AlphaFold Database), using MD simulations to relax the structures, designing domain adaptation and generalization algorithms to adapt the methods to predicted structures, etc. Based on this fact, I believe ATProt-Bern is worthy of publication and should be regarded as an adequate contribution to this important challenge. I have raised my score to 6 and recommend that the authors incorporate the updates into the final revisions, such as the ablation studies and the new baselines (PesTo, ProtMD). Last but not least, I am still confused by this reply: "[Q2. About dMaSIF] The main point of dMaSIF is to generate point cloud representations. However, the dMaSIF paper also contributes a geometric convolutional model, enabling it to address structure-related downstream tasks. Importantly, the PIP task is implemented in the dMaSIF paper." As far as I am concerned, the quasi-geodesic convolutional model operates on the point cloud instead of the structure. My question is how the authors mapped the predictions of dMaSIF on the surface points to the residues? --- Reply to Comment 1.1.1: Title: Thank you for raising the score and helping us improve. Comment: Thank you for raising the score. We greatly appreciate your recognition and assistance with our work. Based on your constructive suggestions, we commit to including all the added baselines in the final version of the manuscript. 
Additionally, we will introduce a discussion on AF2 data augmentation (providing experimental results) and the use of MD for relaxation. ***About dMaSIF*** We previously misunderstood your question. Please allow us to explain again. Regarding dMaSIF, we did perform additional processing to make it compatible with the PIP task. In the dMaSIF paper, the predicted elements of the interface are **not residues but points** simulated by the dMaSIF method. This can be found in its source code and paper, for example: *"For interaction prediction, we compute dot products between the feature vectors of both proteins to use them as interaction scores between pairs of points."* This setting is obviously different from the mainstream approach of PIP, as it calculates the link probability between points rather than residues. Additionally, when we reproduced dMaSIF under its default setting, the result was only around 0.85, indicating significantly lower performance. **Therefore, we made modifications by referencing the processing manner in the ScanNet paper.** In the ScanNet paper, to allow for a fair comparison with MaSIF-site, they converted MaSIF's predictions from the surface point level to the residue level. This is described in the ```Baseline methods/masif-site``` section of ScanNet. The approach is to first assign each point to the nearest atom and the corresponding residue. Then for each residue, take the highest probability among all its points as the final result. However, things are slightly different because we need to calculate the residue-residue link probability rather than classify each residue in the binding site prediction task. **Therefore, we made some minor adjustments:** We first assign each point to the nearest atom and the corresponding residue. Then for each **residue-residue pair**, take the highest probability among all its point pairs as the final result. 
For an illustrative example, let's assume two proteins have 1 and 2 residues respectively, labeled as {1} and {1',2'}. dMaSIF generates a total of 6 surface points for these 3 residues. The correspondence between points and residues is: {a,b$\rightarrow$1}, {c,d$\rightarrow$1'}, {e,f$\rightarrow$2'}. dMaSIF outputs the linking probabilities between points: {a,c:0.2},{a,d:0.4},{a,e:0.6},{a,f:0.8},{b,c:0.1},{b,d:0.3},{b,e:0.5},{b,f:0.7}. Finally, for residue pair {1,1'}, we pick the highest value among {a,c:0.2},{a,d:0.4},{b,c:0.1},{b,d:0.3} (i.e., 0.4) as the result. For residue pair {1,2'}, we pick the highest value among {a,e:0.6},{a,f:0.8},{b,e:0.5},{b,f:0.7} (i.e., 0.8) as the result. We tried alternative approaches to ```pick the highest value```, such as taking the average of all relevant probabilities or the average of the top-k probabilities, but they were not as effective as simply using the highest value. Thus, we can reformulate dMaSIF into our PIP setting. Under this setting, its performance can match or even surpass that of several baselines.
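The mapping described in this reply can be sketched in a few lines of Python. This is an illustrative reimplementation using the rebuttal's toy numbers, not the authors' actual code; `residue_pair_scores` is a hypothetical helper name:

```python
from collections import defaultdict

def residue_pair_scores(point_to_residue, point_pair_probs):
    """Aggregate point-level link probabilities to residue-pair scores by
    taking the max over all point pairs mapped to that residue pair
    (the ScanNet-style point-to-residue conversion, adapted to pairs)."""
    scores = defaultdict(float)
    for (p, q), prob in point_pair_probs.items():
        key = (point_to_residue[p], point_to_residue[q])
        scores[key] = max(scores[key], prob)
    return dict(scores)

# The worked example from the rebuttal: points a,b -> residue 1;
# c,d -> residue 1'; e,f -> residue 2'.
point_to_residue = {"a": "1", "b": "1", "c": "1'", "d": "1'", "e": "2'", "f": "2'"}
point_pair_probs = {("a", "c"): 0.2, ("a", "d"): 0.4, ("a", "e"): 0.6, ("a", "f"): 0.8,
                    ("b", "c"): 0.1, ("b", "d"): 0.3, ("b", "e"): 0.5, ("b", "f"): 0.7}
print(residue_pair_scores(point_to_residue, point_pair_probs))
# → {('1', "1'"): 0.4, ('1', "2'"): 0.8}
```

Swapping `max` for a mean or top-k mean reproduces the alternative aggregations the authors say they tried.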
Summary: This paper first identifies a commonly overlooked issue in protein docking: protein flexibility, and proposes an improved method to address it. This approach utilizes an adversarial training framework to enforce Lipschitz continuity. Experimental results demonstrate the effectiveness of this method. Strengths: The overall quality of this paper is very high. The addressed issue - protein flexibility - is highly meaningful. The methods are sound. Therefore, I believe this paper should be accepted. Weaknesses: The writing in one part of the paper is not very clear. I hope the authors can adjust it. The paper discusses a method of adversarial training to ensure Lipschitz continuity, but the specific training objectives of the adversarial training are not explicitly stated. Adversarial training typically refers to having two competing objectives, which are contradictory. However, I did not see this aspect described in the paper. Technical Quality: 3 Clarity: 2 Questions for Authors: None Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Excessive emphasis on ensuring Lipschitz continuity may limit the expressive power of the network ($L_S$). The article is meaningful and interesting. However, excessive restriction of the network's expressive ability could potentially prevent this method from becoming the ultimate solution. Therefore, I believe this article does not reach the level of a spotlight or oral presentation. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Comment: Let me add that I believe the method proposed by the authors is a form of regularization rather than an adversarial training approach. If I am correct, please adjust the relevant writing accordingly. --- Rebuttal 2: Comment: Also, please add the ablation study and baselines (e.g., EBMDock) as reviewer RNrt mentioned. --- Rebuttal 3: Rebuttal: We thank reviewer **RLcQ** for acknowledging our method as "meaningful and interesting", the quality of our paper as "very high", and our methods as "sound". Many thanks for the constructive feedback, especially for the questions about the writing. These all help us to further enhance the readability and quality of our paper. **[Cons. Description of Adversarial Training]** ***Our explanation:*** Thanks for the constructive feedback. We strongly agree with you that adversarial training (AT) typically involves two or more conflicting training objectives. Much of the literature indicates that a more accurate statement is: Lipschitz continuity regularization is a popular and effective method for **Adversarial Robustness (AR)**. From the formulation of the training objectives, the two losses only exhibit an intuitive conflict, as $\mathcal{L}_S$ acts by constraining the slope of the GNN filter, which limits the expressive power of the GNN (as mentioned in the Limitation section). Moreover, from the empirical perspective, the regularization loss slightly reduces the model's performance on the 'Native-Bound' structure, implying that there may be an **implicit conflict** in the training objective. Therefore, we acknowledge that the two objectives do not have explicit or theoretically provable conflicts. In our work, robustness is the model's performance when facing such a flexibility 'attack', that is, when we test on unbound, ESM-Fold or AF2 structures. 
Therefore, while our training objectives ($\mathcal{L}_I$ and $\mathcal{L}_S$) have an implicitly conflicting nature, we believe that AR is a more appropriate expression and promotes readability. ***Our adjustment:*** 1. Firstly, as shown in Table 3,4 in the one-page PDF, we note that the proposed regularization slightly impairs performance in the standard, i.e., Native-Bound, scenario. Thus, we validate the conflicting nature of the objectives from the empirical perspective. 2. Additionally, we will supplement the formulation analysis of the objective conflicts, i.e., how constraining the slope of the GNN filter affects its expressive power. 3. We will adopt the use of "adversarial robustness" instead of "adversarial training", which will involve the Preliminaries section. As a result, the training objective for protein representation stability will have a more rigorous description. **[Cons. Ablation Study]** Thanks for the valuable suggestions. ***About baselines:*** We have added various baselines, and all results can be found in Table 7 in the one-page PDF. We have added the following baselines: **EBMDock, Pair-EGRET, PesTo, ProtMD-FT, ProtMD-LP, ESM-Fold, AF2 and AF2-Train**. ***About regularization:*** We performed an ablation study on the effectiveness of the stability regularization. The results are shown in Table 3,4 in the one-page PDF. **We will include all newly added experiments in our revised manuscript.** --- Rebuttal 4: Comment: Dear RLcQ, Have you had the chance to read the authors' rebuttal, and does it address the concerns you raised, and has it influenced your evaluation of the paper? Best regards, AC --- Rebuttal Comment 4.1: Comment: Dear AC, I have reviewed the authors' response. I have no further questions and stand by my original assessment. I believe the paper has reached the standard for acceptance, but it does not meet the criteria for a spotlight or oral presentation.
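The thread above repeatedly refers to "constraining the slope of the GNN filter". As a hand-rolled sketch of that idea for a Bernstein-basis filter (not the paper's exact $\mathcal{L}_S$; `bern_filter` and `slope_penalty` are illustrative helpers, using the standard fact that a Bernstein polynomial's derivative is controlled by successive coefficient differences):

```python
import math

def bern_filter(thetas, lam):
    """Evaluate a Bernstein-basis spectral filter
    h(lam) = sum_k theta_k * C(K,k) * (lam/2)^k * (1 - lam/2)^(K-k),
    with graph eigenvalue lam in [0, 2] as in BernNet."""
    K = len(thetas) - 1
    t = lam / 2.0
    return sum(th * math.comb(K, k) * t ** k * (1 - t) ** (K - k)
               for k, th in enumerate(thetas))

def slope_penalty(thetas):
    """Sum of squared successive coefficient differences. Since the
    derivative of a Bernstein polynomial is a combination of these
    differences, shrinking this term flattens the filter response,
    i.e., makes the output less sensitive to spectral perturbations."""
    return sum((b - a) ** 2 for a, b in zip(thetas, thetas[1:]))

# Equal coefficients give a constant filter (partition of unity) and zero penalty.
flat = [0.5] * 5
print(round(slope_penalty(flat), 12), round(bern_filter(flat, 0.3), 12))  # → 0.0 0.5
```

Adding such a penalty to the task loss (as in $\beta\mathcal{L}_I + \gamma\mathcal{L}_S$) trades filter expressiveness for stability, which is exactly the implicit conflict discussed above.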
Summary: This paper considers the generalization issue caused by the conformational changes of two proteins before and after binding in the PIP task. The authors view protein flexibility as an attack on the model and aim to defend against it for better generalization. Hence, an adversarial training framework for protein representation is proposed, termed ATProt. ATProt is theoretically proven to ensure protein representation stability under complex protein flexibility. Experiments on the DB5.5 and DIPS datasets further validate the effectiveness of ATProt, especially when using unbound structures as input. Strengths: * The concept of viewing protein flexibility as an attack on the model and utilizing an adversarial training framework to address this issue is both interesting and brilliant. * Experiments on the DB5.5 and DIPS datasets effectively validate ATProt's performance improvement in unbound scenarios. * ATProt provides a novel perspective for robust predictions with real unbound inputs. Combining ATProt with protein structure prediction models such as AlphaFold and ESMFold might aid in advancing sequence-based PIP tasks. Weaknesses: * Although the idea of ATProt is interesting and effective, the model's architecture adopts a combination of existing methods, BernNet and cross-attention. * The ablation experiment targeting 'SR' was conducted only within the 'ATProt-BernNet' combination. * Some visualization experiments are lacking. No examples are presented to show which residue pairs are predicted more accurately before and after incorporating ATProt. * In Supplementary E, some hyperparameters are missing, such as the number of epochs and the number of cross-attention layers. Technical Quality: 3 Clarity: 2 Questions for Authors: * What about the experimental results for the removal of 'SR' in the other three encoders? 
Clearly demonstrating the performance improvement brought by 'SR' in the other three encoders would better validate the effectiveness of the framework. * Visualizing the changes in interface prediction before and after adding 'SR' could more intuitively validate the proposed framework. * Showcasing the hyperparameter tuning process for “the weight of loss $L_{s}$ with BernNet” would be beneficial, as it is the core idea of ATProt. * Providing the number of epochs and the number of cross-attention layers would help readers better reproduce the results. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The authors have thoughtfully addressed the limitations of their study, particularly highlighting the absence of adversarial training-based classifiers in their framework. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely appreciate the reviewer **kBXn**'s recognition of our paper and valuable comments. We will respond to reviewer's insightful suggestions point-by-point. **[Cons 1. Novelty of Model's Architecture]** Thanks for the valuable comment. We sincerely clarify that the novelty of this paper is three-fold. 1. **Various protein graph encoders:** We fully agree with your feedback that we did not pay enough attention to the design of new protein representation models. However, **the proposed method (strategy) is plug-and-play**. Although we only demonstrate four cases, the GNN encoders applied for protein-related tasks are far more numerous [1,2]. The concept of stability regularization (SR) has the potential to be widely applied to **various existing encoders** to address the challenges posed by protein flexibility. 2. **Various protein-related tasks:** Through empirical and theoretical validation, we show that in the protein interface prediction (PIP) task, we can avoid simulating the exact distribution of protein flexibility and instead use **simple yet effective regularizations**. In other protein-related tasks, this strategy might also address distribution shifts caused by structural changes. For example, SR has the potential to resolve the distribution shift between AlphaFold 2 and native structures mentioned in [3]. 3. **New proposition for stable BernNet:** BernNet is a widely applied model with strong expressive power. Here, we introduce the Lipschitz regularization for BernNet for the first time (to our knowledge), achieving state-of-the-art performance on the PIP task. Due to the analytical difficulties caused by the complex polynomial structure of BernNet, we respectfully believe it is a relatively fundamental contribution to the field of stable GNNs. **[1]** Protein representation learning by geometric structure pretraining. ICLR, 2023. **[2]** Structure-based protein function prediction using graph convolutional networks. 
Nature Communications, 2021. **[3]** SaProt: Protein language modeling with structure-aware vocabulary. ICLR, 2024. **[Cons 2. Need More Ablation Experiments]** We really appreciate the reviewer's detailed comments. We include the added experimental results in Table 3,4 in the one-page PDF. From the results, we can see that: (1) In the Native-Bound (ideal and virtual) scenario, the proposed SR is ineffective, while in the remaining (real-world) scenarios, SR significantly improves all four cases of encoders. Furthermore, we have made a new discovery: (2) The gain of SR on the SGC encoder is relatively small among the four cases, which may be due to the direct reduction of the polynomial order $K$ in SGC caused by SR. Therefore, the expressive power is explicitly diminished. **We will add all of these results to our revised manuscript.** **[Cons 3. Residue-Level Visualizations]** Thank you for your insightful comments regarding this issue. To address your concern, we add a residue-level visualization, and the figure is included in the one-page PDF. Due to space constraints, we select two out of the 25 samples from the DB5.5 test set. The baseline method we select is NEA in our manuscript. First, we categorize the samples misclassified by NEA into **false negatives (FN)** and **false positives (FP)**. Due to the imbalanced nature of the PIP problem, we observe that there are notably more FP samples than FN samples. Therefore, in the added figure, we visualize all FN samples and some of the FP samples. Please kindly note that all of the visualized pair samples are misclassified by NEA but correctly classified by ATProt. From the figure we added, it can be seen that the residues involved in the shown samples (especially the FN samples) exhibit significant structural changes. In addition, we quantify the average structural changes of residues corresponding to correct and incorrect predictions for both NEA and ATProt. This is performed with all of the DB5.5 test set samples. 
For NEA, the average structural changes of correctly and incorrectly classified residues are 0.563 and 0.972, respectively. For ATProt, the respective values are 0.589 and 0.597. In simple terms, the structural changes impair the performance of NEA but have little impact on ATProt. This insightful comment highlights the advantages of the proposed method, and **we will include this interesting part in our revised manuscript**. **[Cons 4. Some Hyperparameters are Missing]** Thanks for your very thorough review. Table 5 in one-page PDF is a comprehensive summary of the hyperparameters. The **bolded** rows are newly added. **[Question 1. Effectiveness of SR]** Thanks for the feedback, and we have added more experimental results in the response to **[Cons 2]**. **[Question 2. The Prediction Changes after Using SR]** Thanks for the valuable comment. In the response to **[Cons 3]**, we conducted visualizations and quantitative experiments, both of which show the robustness of ATProt to structural changes. **[Question 3. Hyperparameter Tuning Process for Weight of $\mathcal{L}_S$]** Thank you very much for this question. For the overall training objective $\beta\mathcal{L}_I + \gamma\mathcal{L}_S$, we fix $\beta$ at $1.0$ and search for $\gamma$ in the range of $0.1$ to $10$. For fairness, we ensure that all baselines and our methods conduct hyperparameter search with ```n_trials=50``` in the Optuna API. We provide the results on the DB5.5 dataset in Table 6 in one-page PDF. We observe that all three cases exhibit a similar pattern: with the increase of $\gamma$, **the performance first increases and then decreases**. The difference is that the optimal weight ranges for the three types of encoders are not the same. **[Question 4. More Detailed Hyperparameters]** Thanks for your detailed suggestion, and we have provided more hyperparameter results in the response to **[Cons 4]**. The complete hyperparameter table will be added to our revised appendix. 
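The $\gamma$ sweep described in the answer to Question 3 can be mimicked with a stdlib-only sketch. Here `validation_score` is a purely synthetic stand-in for the real validation metric (the actual search, per the rebuttal, used the Optuna API with `n_trials=50`); it encodes the reported rise-then-fall pattern with a peak whose location is arbitrary:

```python
import math

def validation_score(gamma, peak=1.0):
    """Synthetic stand-in for the validation metric as a function of the
    stability-loss weight gamma (with beta fixed at 1.0): the score first
    rises and then falls, peaking at ``peak``."""
    return 0.9 - (math.log10(gamma) - math.log10(peak)) ** 2

# 50 log-spaced trials over gamma in [0.1, 10], mirroring the described sweep.
candidates = [10 ** (-1 + 2 * i / 49) for i in range(50)]
best_gamma = max(candidates, key=validation_score)
print(f"best gamma ~ {best_gamma:.3f}")
```

A log-spaced grid is the natural choice here because the search range [0.1, 10] spans two orders of magnitude.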
--- Rebuttal 2: Comment: Dear kBXn, Have you had the chance to read the authors’ rebuttal, and does it address the concerns you raised, and has it influenced your evaluation of the paper? Best regards, AC
Rebuttal 1: Rebuttal: Dear reviewers, We sincerely appreciate your valuable time and constructive feedback. **Please see the attached one-page PDF with a summary of the added experimental results.** It contains: Figure 6: Visualization of the changes in interface prediction results before and after using stability regularization (SR). Table 3,4: The ablation study on the effectiveness of SR. Table 5: Hyperparameter choices. Table 6: The tuning process of the weight of $\mathcal{L}_S$. Table 7: Results of the newly added baselines (EBMDock, Pair-EGRET, PesTo, ProtMD-FT, ProtMD-LP, ESM-Fold, AF2 and AF2-Train). In this table, to address the concern of **Reviewer RNrt**, AF2-Train represents using the training structures generated by AF2. Please see our reviewer-specific feedback for more information. Pdf: /pdf/5d3dc9b556325734a8935f8d97650c7554b83e22.pdf
NeurIPS_2024_submissions_huggingface
2024
M$^3$GPT: An Advanced Multimodal, Multitask Framework for Motion Comprehension and Generation
Accept (poster)
Summary: This paper introduces M3GPT, an advanced multimodal, multitask framework for motion comprehension and generation. M3GPT operates based on three fundamental principles. Firstly, it aims to create a unified representation space for various motion-relevant modalities. It employs discrete vector quantization for multimodal control and generation signals, such as text, music, and motion/dance, enabling seamless integration into a large language model (LLM) with a single vocabulary. Secondly, it involves modeling directly in the raw motion space. This strategy circumvents the information loss associated with discrete tokenizers, resulting in more detailed and comprehensive model generation. Thirdly, M3GPT learns to model the connections and synergies among various motion-relevant tasks. Text, being the most familiar and well-understood modality for LLMs, is utilized as a bridge to establish connections between different motion tasks, facilitating mutual reinforcement. Strengths: 1. M3GPT is equipped with multimodal tokenizers capable of compressing raw multimodal data, including motion, music, and text, into sequences of discrete semantic tokens. 2. The paper jointly trains the LLM and motion de-tokenizer, optimizing the LLM in both the discrete semantic space and the raw continuous motion space. 3. The paper is well written and the video results in the supplementary material are well presented. Weaknesses: 1. In Table 3, some comparison results are missing. For the text-to-motion generation task, you could compare your method with 1. MoMask: Generative Masked Modeling of 3D Human Motions; 2. Plan, Posture, and Go: Towards Open-World Text-to-Motion Generation; and 3. Real-time Controllable Motion Generation via Latent Consistency Model. If these comparisons are not suitable, you could explain the reason. 2. The parameterizations (dimensions) of the variables in Section 3 are all missing. 3. In line 167, T5 is utilized as the language model backbone. 
While most previous models utilize CLIP or BERT, an ablation study on the text encoder should be included. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. Is it just a matter of migrating a multimodal model to the motion domain? 2. What new issues have been encountered in the process? 3. What is your innovation in solving these problems? Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have addressed the limitations. This paper focuses on modeling human body movements, excluding hand and face details. Flag For Ethics Review: ['Ethics review needed: Research involving human subjects'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: In Table 3, some comparison results are missing for the text-to-motion task.**

Thanks for your suggestions. We will add the related works mentioned, MoMask [1], PRO-Motion [2] and MotionLCM [3], in the revised version. (1) To ensure a fair comparison, we reproduce **MoMask** and **MotionLCM** on the Motion-X dataset using their official code. As shown in the table below, our method achieves superior performance over MotionLCM, and comparable performance with MoMask. (2) Since **PRO-Motion** is not open-sourced, we were unable to reproduce it on the Motion-X dataset, so its result is not included in Table 3.

| Methods (Text-to-Motion) Motion-X | R TOP1$\uparrow$ | FID$\downarrow$ | Div$\uparrow$ |
| ---- | ----- | ---- | ---- |
| MoMask | **0.668** | **0.074** | 2.241 |
| MotionLCM | 0.658 | 0.078 | 2.206 |
| $M^3$GPT (Instruction-tuned) | 0.615 | 0.093 | 2.253 |
| $M^3$GPT (Instruction-tuned only T2M) | 0.661 | 0.076 | **2.273** |

**W2: The parameterizations (dimensions) of the variables in Section 3 are all missing.**

Thanks for your suggestions. We will add the dimensions of the variables in Section 3 as follows in the revised version.
- $N_m$ (line 128), the size of the motion codebook $\mathcal{B}_m$, is set to 512.
- $d_m$ (line 131), the dimension of raw motion features, is 263.
- $d$ (line 133), the dimension of embeddings in the motion codebook, is set to 512.
- $d_a$ (line 149), the dimension of raw music, is 128.
- $T_m$ (line 131) and $T_a$ (line 149) denote the sequence lengths of motion and music, respectively, so their values depend on the input sequence.
- $L_m$ (line 133), the sequence length of discrete motion tokens, is $T_m/4$. Here, 4 is the downsampling rate of the motion tokenizer.
- $L_a$ (line 150), the sequence length of discrete music tokens, is $T_a/128$. Here, 128 is the downsampling rate of the music tokenizer.

**W3: T5 is utilized as the language model backbone. 
While most previous models utilize CLIP or BERT, an ablation study on the text encoder should be included.**

Following your suggestions, we add an ablation study with CLIP and BERT as text encoders. Specifically, we use T5, CLIP and BERT as text encoders respectively, each combined with the T5 decoder, and conduct experiments on the text-to-motion task under these configurations. As shown in the table below, the T5 encoder achieves the best performance, validating its superiority over CLIP and BERT in our framework.

| Methods (Text-to-Motion) Motion-X | R TOP1$\uparrow$ | FID$\downarrow$ | Div$\uparrow$ |
| ---- | ---- | ---- | ---- |
| CLIP | 0.641 | 0.088 | 2.110 |
| BERT | 0.652 | 0.083 | 2.124 |
| T5 | **0.656** | **0.078** | **2.133** |

**Q1: Is it just a matter of migrating a multimodal model to the motion domain?**

$M^3$GPT is not simply a migration of a multimodal model to the motion domain. Unlike most multimodal models such as BLIP2, LLaVA, and Ferret, which primarily focus on understanding tasks, $M^3$GPT takes a step further by incorporating multimodal tokenizers and de-tokenizers, achieving both motion **comprehension and generation** across the **text, motion/dance and music** modalities. Additionally, $M^3$GPT **jointly optimizes the LLM and the motion de-tokenizer**, further enhancing its capabilities in motion generation.

**Q2: What new issues have been encountered in the process?**

We have encountered two new issues: 1. Within the multi-task framework, the music-to-dance task poses greater challenges than text-to-motion. This difficulty arises because both the input and output modalities of music-to-dance are unfamiliar to LLMs, and there is limited training data available for this task. Therefore, **effectively leveraging the text-to-motion task to enhance the more complex music-to-dance task is a critical issue that needs to be addressed.** 2. The multi-stage training leads to error accumulation in motion generation, due to the combined effects of the LLM's prediction and the de-tokenizer. 
Therefore, **mitigating error accumulation from multi-stage training is another critical issue that needs to be addressed.**

**Q3: What is your innovation in solving these problems?**

1. To solve the first issue outlined in Q2, we design two strategies: (1) a **shared tokenizer** for motion and dance data, and (2) two **auxiliary tasks: music-to-text and text-to-dance**. Through these auxiliary tasks, $M^3$GPT implicitly learns to decompose the complex music-to-dance task into the simpler music-to-text and text-to-dance tasks. Also, the shared tokenizer ensures that the text-to-dance and text-to-motion tasks occupy the same matching space, enabling mutual reinforcement. In this way, $M^3$GPT builds the connection between text-to-motion and music-to-dance, leveraging text-to-motion to enhance its music-to-dance capabilities.

2. To solve the second issue outlined in Q2, we **jointly optimize the LLM and the motion de-tokenizer** during training. This strategy helps mitigate the error accumulation caused by multi-stage training, thereby enhancing the model's motion generation capabilities.

---

Rebuttal 2: Title: Any questions about the rebuttal Comment: Dear Reviewer 2Gpw: As the rebuttal period is ending soon, please let us know whether your concerns have been addressed or not, and if there are any further questions.
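The tokenizer shape bookkeeping from W2 above can be sketched as follows. This is a minimal illustration, not the authors' code: only the downsampling rates (4 for motion, 128 for music) come from the reply; the constant and function names are hypothetical.

```python
# Illustrative sketch of the token-sequence lengths from W2 above.
# Downsampling rates follow the rebuttal: L_m = T_m / 4, L_a = T_a / 128.
# Names are hypothetical, not from the paper's implementation.

MOTION_DOWNSAMPLE = 4    # motion tokenizer downsampling rate
MUSIC_DOWNSAMPLE = 128   # music tokenizer downsampling rate

def token_lengths(t_m: int, t_a: int) -> tuple:
    """Return (L_m, L_a): discrete token sequence lengths for a motion clip
    of t_m frames and a music clip of t_a samples."""
    return t_m // MOTION_DOWNSAMPLE, t_a // MUSIC_DOWNSAMPLE

# e.g. a 200-frame motion clip yields 50 motion tokens
l_m, l_a = token_lengths(200, 12800)
```
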
Summary: The paper presents M3GPT, which creates a unified representation space for different motion-relevant modalities, including text, music, and motion/dance. The framework employs discrete vector quantization for these modalities, enabling integration into a large language model (LLM) with a single vocabulary. The model operates directly in the raw motion space to avoid information loss associated with discrete tokenizers. Extensive experiments highlight M3GPT’s superior performance across various motion tasks and its zero-shot generalization capabilities. Strengths: - The paper introduces a novel approach to integrate multiple modalities (text, music, motion) within a single framework, addressing a significant gap in existing research. The unified representation space and the strategy of using text as a bridge to connect different modalities are theoretically sound and contribute to the field of multimodal learning. - M3GPT handles six core tasks related to motion comprehension and generation, demonstrating versatility and robustness. The experiments show that M3GPT achieves competitive performance across multiple tasks. - The model's ability to perform well in zero-shot scenarios, such as long-term dance generation and music-text conditioned dance generation, is particularly impressive and valuable for practical applications. Weaknesses: - The proposed framework is complex and may require significant computational resources to implement and extend. - While the experimental results are promising, the paper could benefit from a more detailed discussion of the evaluation metrics and benchmarks used to assess performance. A more extensive comparison with a broader range of baselines could strengthen the argument for its superiority. Also, the paper would benefit from a more in-depth discussion on the potential real-world applications and limitations of M3GPT. 
Technical Quality: 3 Clarity: 3 Questions for Authors: - Could you provide more details on the training process, including the computational resources required and the time taken to train M3GPT? - Have you conducted any ablation studies to understand the contribution of each component of M3GPT to the overall performance? If so, could you include those results? - Can you provide more examples or case studies where M3GPT has been applied to real-world scenarios? What challenges were encountered? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: The proposed framework is complex and may require significant computational resources to implement.**

To ensure reproducibility, we will open-source the training and inference code of the proposed $M^3$GPT framework. For computational resources, please refer to our **reply to Q2 in Global Response**.

**W2-1: The paper could benefit from a more detailed discussion of evaluation metrics and benchmarks.**

Thanks for your suggestions. We will add this discussion in the final version.

**(1) Evaluation Metrics.** The table below shows the evaluation metrics for each task.

| Task | Text-to-Motion (T2M) | Motion-to-Text (M2T) | Motion Prediction/In-Between | Music-to-Dance (A2D) | Dance-to-Music (D2A) |
| ---- | ----- | ----- | ----- | ----- | ---- |
| Metrics | FID, R-Precision, Div | R-Precision, Bleu, CIDEr, Rouge | FID, Div, MPJPE | FID$_k$, Div$_k$, BAS | BCS, BHS |

- FID: Measures the distance between the distributions of generated and real motion.
- R-Precision: Measures the retrieval accuracy of generated motion/text relative to the input, reflecting semantic similarity.
- Div: Measures the diversity of generated motions.
- Bleu, Rouge, CIDEr: Measure the accuracy, matching degree and semantic consistency between generated and real text.
- FID$_k$, Div$_k$: FID and Div calculated using kinetic features.
- MPJPE: Measures the mean per-joint position error between generated and real motion.
- BAS: Computes the beat alignment between the generated dance and the input music.
- BCS/BHS: Measure the convergence/alignment between generated and real music.

**(2) More Benchmarks.** The tables below add two benchmarks. From the table below, together with Tables 3/4 of the main paper, we observe that:
- Text-to-Motion: $M^3$GPT achieves comparable FID and the best R-Top1 and Div scores, showing high-quality motion generation with strong semantic alignment and diversity.
- Motion-to-Text: $M^3$GPT achieves the best R-Precision and CIDEr scores, indicating fluent and semantically consistent motion description generation. 
- Motion Prediction and In-between: $M^3$GPT achieves the best FID, Div and MPJPE scores, reflecting superior motion prediction quality, diversity and precision.
- Music-to-Dance (Table 4): $M^3$GPT achieves comparable Div/BAS and the best FID$_k$ score, indicating high-quality dance generation with good beat alignment.
- Dance-to-Music (Table 4): $M^3$GPT achieves comparable BCS and the best BHS score, indicating improved semantic alignment of the music with the input dance.

| Methods on Motion-X | T2M (R TOP1$\uparrow$ \| FID$\downarrow$ \| Div$\uparrow$) | M2T (R TOP3$\uparrow$ \| Bleu@4$\uparrow$ \| CIDEr$\uparrow$) | Motion Prediction (FID$\downarrow$ \| Div$\uparrow$ \| MPJPE$\downarrow$) | Motion In-between (FID \| Div \| MPJPE) |
| ---- | ----- | ----- | ----- | ----- |
| MotionLCM | 0.658 \| 0.078 \| 2.206 | - | - | - |
| MotionGPT | 0.659 \| 0.078 \| 2.166 | 0.840 \| 11.21 \| 31.36 | 0.701 \| 1.818 \| 71.3 | 0.648 \| 1.875 \| 59.9 |
| $M^3$GPT | 0.661 \| 0.076 \| 2.273 | 0.845 \| 11.50 \| 42.93 | 0.682 \| 1.838 \| 54.2 | 0.612 \| 1.900 \| 51.0 |

**W2-2: The paper could benefit from a more in-depth discussion of the potential real-world applications and limitations of $M^3$GPT.**

**(1) Potential applications:** $M^3$GPT has many real-world applications, such as **animation creation, visual digital human control, and music-based dance choreography**. Taking animation creation as an example, traditional manual methods adjust character joints frame by frame, which is extremely time-consuming and labor-intensive. With $M^3$GPT, one can simply provide a textual description of the desired motion, and $M^3$GPT will automatically generate the corresponding animation, largely reducing labor and time costs.

**(2) Limitations:** $M^3$GPT focuses on modeling human body movements, but currently lacks the capability to handle hands. As a result, $M^3$GPT cannot address speech-driven motion generation, which involves gestural rather than body movements. 
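As a hedged illustration of the FID metric listed under W2-1 above, here is a minimal sketch assuming pre-extracted feature arrays; this is the standard Fréchet-distance formula, not the paper's actual evaluation code, and all names are hypothetical.

```python
import numpy as np

def _sqrtm_psd(a: np.ndarray) -> np.ndarray:
    """Matrix square root of a symmetric positive semi-definite matrix."""
    vals, vecs = np.linalg.eigh(a)
    vals = np.clip(vals, 0.0, None)  # guard against tiny negative eigenvalues
    return (vecs * np.sqrt(vals)) @ vecs.T

def fid(real_feats: np.ndarray, gen_feats: np.ndarray) -> float:
    """Frechet distance between Gaussians fitted to (num_samples, feat_dim)
    feature arrays of real and generated motions."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    # Tr((C_r C_g)^{1/2}) via the symmetric form C_r^{1/2} C_g C_r^{1/2}
    s = _sqrtm_psd(cov_r)
    tr_covmean = np.trace(_sqrtm_psd(s @ cov_g @ s))
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r) + np.trace(cov_g) - 2.0 * tr_covmean)
```

Identical feature sets give a distance of (numerically) zero, and shifting the generated features away from the real ones increases the score.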
**Q1: Could you provide more details on the training process, including the computational resources required and the time taken to train $M^3$GPT?** Yes, please refer to our **reply to Q2 in Global Response**.

**Q2: Have you conducted any ablation studies to understand the contribution of each component of $M^3$GPT to the overall performance?**

Yes, in Table 2 of the main paper, we have conducted ablation studies to show the effectiveness of each component of $M^3$GPT. Specifically, $M^3$GPT includes 3 major components designed to improve performance: **(1)** the introduction of the two auxiliary tasks, music-to-text (A2T) and text-to-dance (T2D); **(2)** the joint optimization of the LLM and the motion de-tokenizer; **(3)** instruction tuning. As shown in the table below, each component contributes to the overall performance of $M^3$GPT, especially for the complex music-to-dance task.

| Methods | T2M on Motion-X (R TOP1 \| FID \| Div) | A2D on AIST++ (FID$_k$ \| Div$_k$ \| BAS) |
| ---- | ----- | ----- |
| $M^3$GPT (Pretrained without A2T and T2D) | 0.547 \| 0.104 \| 2.099 | 37.14 \| 7.61 \| 0.200 |
| $M^3$GPT (Pretrained without joint optimization) | 0.598 \| 0.089 \| 2.218 | 32.71 \| 7.43 \| 0.209 |
| $M^3$GPT (Pretrained) | 0.601 \| 0.092 \| 2.251 | 27.65 \| 7.52 \| 0.210 |
| $M^3$GPT (Instruction-tuned) | 0.615 \| 0.093 \| 2.253 | 24.34 \| 7.50 \| 0.205 |
| $M^3$GPT (Instruction-tuned only single task) | **0.661** \| **0.076** \| **2.273** | **23.01** \| **7.85** \| **0.226** |

**Q3: Can you provide more examples or case studies where $M^3$GPT has been applied to real-world scenarios? What challenges were encountered?**

We have applied $M^3$GPT to several real-world scenarios, including animation creation, dance choreography creation, and music creation from dance. The main challenge is generating long sequences (longer than 5 minutes), where the quality of the generated content degrades over time. We leave high-quality long-sequence generation as a future research direction. 
--- Rebuttal 2: Title: Any questions about the rebuttal Comment: Dear Reviewer KMVC: As the rebuttal period is ending soon, please let us know whether your concerns have been addressed or not, and if there are any further questions.
Summary: This paper presents an advanced Multimodal, Multitask framework for Motion comprehension and generation. It aims to create a unified representation space for various motion-relevant modalities and model the connections and synergies among different motion tasks. M3GPT consists of multimodal tokenizers and a motion-aware language model, and is trained through a multimodal pre-training + instruction-tuning pipeline. The model is evaluated on multiple motion-relevant tasks and datasets, and shows competitive performance and strong zero-shot generalization capabilities. Strengths: (1) Unifies multiple motion-related tasks in a single model, achieving bidirectional alignment. (2) Achieves competitive performance across multiple tasks and datasets and strong zero-shot generalization capabilities. (3) Explores the motion-related music2text and text2dance as auxiliary tasks. Weaknesses: (1) For the supplementary videos, it's hard to judge the quality of music-motion bidirectional generations without combining them in one video; also, long-term dance generation on the AIST++ dataset is only a little longer than 5s. (2) The method of jointly optimizing the LLM and motion de-tokenizer lacks important details. In lines 194-195, I wonder how the gradient from the de-tokenizer backpropagates to the LLM if the authors use softmax to choose the top-1 token. (3) For the motion prediction and in-between tasks, feature-level metrics like FID are not enough; pose-level metrics like MPJPE are needed for comparison. Technical Quality: 3 Clarity: 3 Questions for Authors: (1) Did the authors try different LLM architectures or different sizes of T5? What is the conclusion, and what is the training time for T5-base? (2) Did you evaluate the two auxiliary tasks music2text and text2dance quantitatively or qualitatively? (3) For zero-shot tasks, can they be combined for more applications? For instance, generating a long dance while specifying a text prompt for one segment in the middle of the sequence. Also, how about zero-shot text2music? 
(4) For Tab. 4, what is the result of instruction-tuning only on AIST++ or FineDance, a setting similar to the last row in Tab. 3? (5) How did you implement motion in-between in the pretraining stage using Equation (4)? More specifically, is your model conditioned on future target frames? If so, the equation is not precise enough. (6) Is there any special token for motion and music in the unified token vocabulary, like start/end-of-motion/music tokens? If not, how do you ensure the pretrained model generates the desired modality? For example, if I want to directly generate dance conditioned on music, the model might actually generate text tokens. (7) The statistics of the music2text and text2dance paired datasets are missing. I would like to raise the score if the authors can address my concerns and questions well. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N.A Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: For supplementary videos, it's hard to judge the quality of music-motion generations without combining them in one video; long-term dance generation on AIST++ is only a little longer than 5s.**

(1) Due to the rebuttal period restrictions on video uploads, we will present music and dance in one video for clearer evaluation in the revised version. (2) Regarding long-term dance generation, AIST++ comprises music clips that are only slightly longer than 5s, so the generated dances on AIST++ are relatively short. To showcase our model's capability for long-term dance generation, we have included a dance generation of over 90s on FineDance in the supplementary video.

**W2: How does the gradient from the de-tokenizer backpropagate to the LLM?**

In our implementation, we do not directly backpropagate the de-tokenizer gradient to the LLM. Instead, with the goal of minimizing the L1 loss between the predicted and real motion, we search for the motion token sequence that minimizes this L1 loss in the original motion space. As the motion de-tokenizer is continuously optimized, the target motion token sequence, which supervises LLM training, changes dynamically. This dynamic adjustment reduces the L1 loss progressively, achieving joint optimization.

**W3: For motion prediction and in-between tasks, pose-level metrics like MPJPE are needed for comparison.**

In the table below, we add MPJPE as a metric. Our method still outperforms existing works on the MPJPE metric in both tasks.

| Methods on Motion-X | Motion Prediction (MPJPE$\downarrow$) | Motion In-between (MPJPE$\downarrow$) |
| ---- | ----- | ----- |
| T2M-GPT | 80.2 | 63.7 |
| MoMask | 67.9 | 55.2 |
| MotionGPT | 71.3 | 59.9 |
| $M^3$GPT (Instruction-tuned) | **54.2** | **51.0** |

**Q1: Did the authors try different LLM architectures or different sizes of T5? What is the conclusion and the training time for T5?** Yes, please refer to our **reply to Q1 in Global Response**. 
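The dynamic-target search described in W2 above can be sketched as follows. This is a simplified per-position illustration with made-up shapes and names, not the authors' implementation: for each position, it picks the candidate token whose decoded motion is closest in L1 distance to the ground-truth motion, yielding the supervision targets for the LLM.

```python
import numpy as np

# Hedged sketch of the "dynamic target" idea from W2 above: rather than
# backpropagating through the non-differentiable top-1 token choice, select
# the token whose decoding minimizes the L1 loss in the raw motion space and
# use that token sequence as the LLM's training target. Illustrative only.

def dynamic_targets(candidate_decodings: np.ndarray,
                    real_motion: np.ndarray) -> np.ndarray:
    """candidate_decodings: (num_codes, length, dim) decoded motion for each
    candidate token at each position; real_motion: (length, dim).
    Returns the per-position token ids minimizing the L1 distance."""
    l1 = np.abs(candidate_decodings - real_motion[None]).sum(axis=-1)  # (num_codes, length)
    return l1.argmin(axis=0)
```

As the de-tokenizer improves, the decodings change and so do the selected target tokens, which is the dynamic adjustment the reply describes.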
**Q2: Did you evaluate the two auxiliary tasks music2text (A2T) and text2dance (T2D) quantitatively or qualitatively?**

(1) For quantitative evaluation, we report *Bleu@4* and *CIDEr* for A2T, and *R Top1* and *FID* for T2D. As shown in the tables below, our method consistently outperforms single-task training. (2) For qualitative evaluation, **Figure 4 of the uploaded PDF** visualizes text2dance generation. Compared to the single-task model, $M^3$GPT generates more realistic dances aligned with the input.

| Methods (Music-to-Text) AIST++ | Bleu@4$\uparrow$ | CIDEr$\uparrow$ |
| ---- | ---- | ---- |
| A2T (Single task) | 9.24 | 24.55 |
| $M^3$GPT (Instruction-tuned) | **11.95** | **28.88** |

| Methods (Text-to-Dance) AIST++ | R Top1$\uparrow$ | FID$\downarrow$ |
| ---- | ---- | ---- |
| T2D (Single task) | 0.541 | 0.095 |
| $M^3$GPT (Instruction-tuned) | **0.588** | **0.077** |

**Q3: For zero-shot tasks, can they be combined for more applications? For instance, generating a long dance while specifying a text prompt for one segment in the middle of the sequence. How about zero-shot text2music?**

(1) Our model can perform additional zero-shot tasks. It can generate a long dance while a text prompt is specified for one segment, as visualized in **Figure 3 of the uploaded PDF**. This application integrates the music-to-dance, zero-shot **music+dance-to-dance**, and zero-shot **music+dance+text-to-dance** tasks. As shown in Figure 3, the generated dance is coherent and consistently aligns with the input text prompt. (2) Our model can perform zero-shot text-to-music. The table below shows a quantitative evaluation on the text-to-music task. Even without training on the text-to-music task, $M^3$GPT obtains comparable or even superior performance over existing methods. 
| Methods (Text-to-Music) AIST++ | BCS$\uparrow$ | BHS$\uparrow$ |
| ---- | ---- | ---- |
| MusicLDM | **74.5** | 73.8 |
| Mubert | 73.3 | 73.0 |
| $M^3$GPT (Instruction-tuned) | **74.5** | **74.7** |

**Q4: For Tab. 4, what is the result of instruction-tuning only on AIST++ or FineDance?** Please refer to our **reply to Q3 in Global Response**.

**Q5: How did you implement motion in-between in the pretraining stage using Equation (4)?**

Motion in-between can be implemented using Equation 4. Formally, given a motion sequence $\boldsymbol{m} = \left(m_1, \dots, m_N\right)$, the motion in-between task uses the first $n_1$ and last $n_2$ frames, $\boldsymbol{m}_1=\left(m_1, \dots, m_{n_1}, m_{N-n_2+1}, \dots, m_{N}\right)$, to predict the intermediate frames $\boldsymbol{m}_2=\left(m_{n_1+1}, \dots, m_{N-n_2}\right)$. So the future frames in $\boldsymbol{m}_1$ are only used as **input**, and the **target** frames are the middle frames in $\boldsymbol{m}_2$. In implementation, $\boldsymbol{m}_1$ is fed into the motion tokenizer to produce a token sequence serving as the **source input** of the LLM ($\boldsymbol{q}_s$), and $\boldsymbol{m}_2$ is fed into the motion tokenizer to produce a token sequence serving as the **target output** of the LLM ($\boldsymbol{q}_t$). Thus motion in-between can be formulated as $p_{\theta}\left(\boldsymbol{q}_t^i \mid \boldsymbol{q}_t^{<i}, \boldsymbol{q}_s \right)$ (Equation 4).

**Q6: Is there any special token for motion and music in the unified token vocabulary, like start/end-of-motion/music tokens? If not, how do you ensure the pretrained model generates the desired modality?**

Yes. There are special tokens for motion and music, such as *<start_of_motion>* and *<start_of_music>*. These special start and end tokens control the beginning and end of the model's decoding process. Additionally, we use **task-specific instructions** to control the model to generate the desired modality. 
For example, the instruction *Generate a motion for the caption* is used for the text-to-motion task, and the instruction *Generate a music based on the dance* is used for the dance-to-music task.

**Q7: The statistics of the music2text (A2T) and text2dance (T2D) paired datasets are missing.** Please refer to our **reply to Q4 in Global Response**.

---

Rebuttal 2: Title: Any questions about the rebuttal Comment: Dear Reviewer Wkeh: As the rebuttal period is ending soon, please let us know whether your concerns have been addressed or not, and if there are any further questions.
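The source/target split described in Q5 above can be sketched as follows; this is a minimal illustration over frame lists, with hypothetical names rather than the authors' code.

```python
# Sketch of the motion in-between split from Q5 above: the first n1 and last
# n2 frames form the LLM source, the middle frames are the prediction target.
# Names are illustrative, not from the paper's implementation.

def inbetween_split(motion: list, n1: int, n2: int) -> tuple:
    """Split frames m_1..m_N into (source_frames, target_frames)."""
    source = motion[:n1] + motion[len(motion) - n2:]  # first n1 + last n2 frames
    target = motion[n1:len(motion) - n2]              # middle frames to predict
    return source, target

# e.g. 6 frames, condition on the first 2 and last 1, predict the middle 3
src, tgt = inbetween_split([1, 2, 3, 4, 5, 6], n1=2, n2=1)
# src == [1, 2, 6], tgt == [3, 4, 5]
```

In the actual pipeline, both splits would be passed through the motion tokenizer to produce the LLM's source and target token sequences.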
Summary: In this paper, the authors introduce $M^3$GPT, a multimodal multitask framework designed for both motion comprehension and generation. Utilizing discrete vector quantization, $M^3$GPT establishes a discrete semantic representation space for various modalities. To avoid information loss during discrete de-tokenization, $M^3$GPT jointly trains the large language model (LLM) and motion de-tokenizer, thereby optimizing the LLM within both the discrete semantic space and the continuous raw motion space. Furthermore, the authors construct paired text descriptions for music and devise two supplementary tasks, music-to-text and text-to-dance, which facilitate the alignment of music and dance within the text embedding space. The efficacy of this approach is demonstrated through experiments across a range of motion-related tasks, and the model showcases zero-shot generalization capabilities for highly challenging tasks. Strengths: 1. The motivation behind investigating the integration of text, music, and motion for motion comprehension and generation is convincing. 2. The proposed joint optimization of the large language model (LLM) and motion de-tokenizer, along with the synergy learning strategy incorporating auxiliary tasks, demonstrates potential in enhancing multimodal motion generation. Weaknesses: 1. The effectiveness of combining the text, music, and motion modalities in the $M^3$GPT framework is not thoroughly evaluated. Although the authors present $M^3$GPT's capabilities in handling zero-shot tasks, there is still doubt about how text-to-motion or music-to-dance tasks can benefit from the multimodal framework. 2. Some key experiments are missing. a. In Table 3, the authors evaluate the proposed $M^3$GPT on the text-to-motion dataset Motion-X on 4 tasks. However, MotionGPT [1] can also perform these tasks, while a comparison with MotionGPT [1] is missing. b. 
Besides, motion prediction lacks a comparison with models like T2M-GPT [2], since auto-regressive models are better at prediction. c. Motion in-betweening lacks a comparison with state-of-the-art models like MoMask [3], which is capable of temporal inpainting and can handle motion in-betweening tasks. 3. The qualitative comparison of motions generated by the proposed $M^3$GPT framework from text or music is absent. The inclusion of detailed qualitative results is crucial for providing a comprehensive understanding of the naturalness and overall realism of the generated motions. Minor: 1. In Figure 2, there is a drawing error in the Motion Codebook part. [1] Jiang, Biao, et al. "MotionGPT: Human motion as a foreign language." Advances in Neural Information Processing Systems 36 (2024). [2] Zhang, Jianrong, et al. "Generating human motion from textual descriptions with discrete representations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023. [3] Guo, Chuan, et al. "MoMask: Generative masked modeling of 3D human motions." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I am curious about the long-duration dance generation mentioned on Page 6, especially whether the LLM can maintain coherence and consistency in generating long sequences, or whether this long-duration generation is achieved by consistency during de-tokenization. 2. In Table 4, the authors evaluate music-to-dance on two datasets, AIST++ and FineDance; $M^3$GPT achieves state-of-the-art performance on AIST++, but the performance on FineDance is not as competitive. Could the authors provide more insights into the reasons behind this performance gap between the two datasets? 3. Have the authors evaluated different sizes of the $M^3$GPT model? 
Given the added modalities and tasks, it would be interesting to understand how the model scales with respect to performance and computational requirements. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors have discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: **W1: There is still doubt about how text-to-motion or music-to-dance tasks benefit from the multimodal framework.**

We argue that the text-to-motion and music-to-dance tasks benefit from the multimodal framework in two main aspects:

1. **A shared tokenizer for motion and dance data:** The shared tokenizer inherently broadens the scale and diversity of the training data, allowing the tokenizer to learn more generalizable representations for both motion and dance data.
2. **Two auxiliary tasks, music-to-text (A2T) and text-to-dance (T2D):** Through the auxiliary tasks, $M^3$GPT implicitly learns to decompose the complex music-to-dance task into the simpler A2T and T2D tasks. Also, with a shared motion tokenizer, the text-to-dance and text-to-motion tasks reinforce each other within the same matching space. In this way, $M^3$GPT builds synergies between music-to-dance and text-to-motion, facilitating mutual reinforcement.

As shown in the table below, the shared tokenizer and auxiliary tasks generally lead to gains in both tasks, validating their effectiveness.

| Methods | Text-to-Motion on Motion-X (R TOP1$\uparrow$ \| FID$\downarrow$ \| Div$\uparrow$) | Music-to-Dance on AIST++ (FID$_k$$\downarrow$ \| Div$_k$$\uparrow$ \| BAS$\uparrow$) |
| ---- | ----- | ----- |
| Single Task | 0.638 \| 0.095 \| 2.101 | 80.41 \| 5.51 \| 0.1882 |
| Single Task with shared tokenizer | 0.656 \| 0.078 \| 2.133 | 75.47 \| 5.57 \| 0.1884 |
| $M^3$GPT (without T2D and A2T) | 0.547 \| 0.104 \| 2.099 | 37.14 \| 7.61 \| 0.2005 |
| $M^3$GPT (Pre-trained) | 0.601 \| 0.092 \| 2.251 | 27.65 \| 7.52 \| 0.2105 |
| $M^3$GPT (Instruction-tuned only single task) | **0.661** \| **0.076** \| **2.273** | **23.01** \| **7.85** \| **0.2261** |

**W2-a: Comparison with MotionGPT on Motion-X is missing.**

The table below presents the results of MotionGPT on Motion-X. Our method outperforms MotionGPT across all 4 tasks on Motion-X. 
| Methods on Motion-X | Text-to-Motion (R TOP1$\uparrow$ \| FID$\downarrow$ \| Div$\uparrow$) | Motion-to-Text (R TOP3$\uparrow$ \| Bleu@4$\uparrow$ \| CIDEr$\uparrow$) | Motion Prediction (FID$\downarrow$ \| Div$\uparrow$ \| MPJPE$\downarrow$) | Motion In-between (FID \| Div \| MPJPE) |
| ---- | ----- | ----- | ----- | ----- |
| MotionGPT | 0.659 \| 0.078 \| 2.166 | 0.840 \| 11.21 \| 31.36 | 0.701 \| 1.818 \| 71.3 | 0.648 \| 1.875 \| 59.9 |
| $M^3$GPT | **0.661** \| **0.076** \| **2.273** | **0.845** \| **11.50** \| **42.93** | **0.682** \| **1.838** \| **54.2** | **0.612** \| **1.900** \| **51.0** |

**W2-b: Lacking a comparison with T2M-GPT on the motion prediction task.**

The table below presents the results of T2M-GPT on the motion prediction task. Our method largely outperforms T2M-GPT.

| Methods (Motion Prediction) Motion-X | FID$\downarrow$ | Div$\uparrow$ | MPJPE$\downarrow$ |
| ---- | ----- | ----- | ----- |
| T2M-GPT | 0.814 | 1.755 | 80.2 |
| $M^3$GPT | **0.682** | **1.838** | **54.2** |

**W2-c: Lacking a comparison with MoMask on the motion in-between task.**

The table below presents the results of MoMask on the motion in-between task. Our method outperforms MoMask on this task.

| Methods (Motion In-between) Motion-X | FID$\downarrow$ | Div$\uparrow$ | MPJPE$\downarrow$ |
| ---- | ----- | ----- | ----- |
| MoMask | 0.626 | 1.884 | 55.2 |
| $M^3$GPT | **0.612** | **1.900** | **51.0** |

**W3: The qualitative comparison of motions generated by the proposed $M^3$GPT framework from text or music is absent.**

**(1) Figure 1 of the uploaded PDF file** visualizes motions generated from text. We compare our generations with MDM and MoMask. As seen, our model generates motions of higher quality, with unrealistic motions from MDM and MoMask highlighted in red. **(2) Figure 2 of the uploaded PDF file** visualizes dances generated from music. We compare our generations with Bailando. As seen, the dances generated by our $M^3$GPT are more danceable. 
**Q1: Curious about the long-duration dance generation.**

We maintain coherence and consistency in generating long dance sequences during the **LLM prediction phase**. For long-duration dance generation, we sequentially generate 5-second dance segments. Each segment is generated using the corresponding music and the **previously generated dance segments as control conditions**. This strategy ensures that each newly generated dance segment aligns seamlessly with the preceding ones, maintaining overall coherence.

**Q2: Could the authors provide more insights into the reasons behind the performance gap between the AIST++ and FineDance datasets?**

The main reason is that **the training and testing settings on FineDance are inconsistent in our work, whereas they are consistent in other works**. Existing works typically train and test **separately** on AIST++ and FineDance, using 5-second clips for AIST++ and 30-second clips for FineDance. In contrast, we develop a **unified multitask model across datasets**. To maintain consistency, we slice the AIST++ and FineDance data into 5-second clips for unified training, aligning with the 5-to-10-second range of the Motion-X data. During testing, we generate longer 30-second clips on FineDance to compare with existing methods. This inconsistency on FineDance could lead to a performance drop compared to existing models. To further validate our method, we fine-tune $M^3$GPT on FineDance using 30-second clips. As shown in the table below, $M^3$GPT achieves competitive, even superior, performance compared to existing methods, demonstrating its potential on FineDance. 
| Methods (Music-to-Dance) on FineDance | FID$_k$$\downarrow$ | Div$_k$$\uparrow$ | BAS$\uparrow$ |
| ---- | ----- | ----- | ----- |
| EDGE | 94.34 | 8.13 | 0.2116 |
| Lodge | 45.56 | 6.75 | **0.2397** |
| $M^3$GPT | 86.47 | 7.75 | 0.2158 |
| $M^3$GPT (Instruction-tuned with 30-second clips) | **42.66** | **8.24** | 0.2231 |

**Q3: Should evaluate different sizes of $M^3$GPT.** Please refer to our **reply to Q1 in Global Response**.

---

Rebuttal 2: Title: Any questions about the rebuttal Comment: Dear Reviewer ZwnC: As the rebuttal period is ending soon, please let us know whether your concerns have been addressed or not, and if there are any further questions.
Rebuttal 1: Rebuttal: ## **Global Response** We sincerely thank all reviewers and ACs for reviewing our work. Some common questions are answered below.

### **Q1: The evaluations for different sizes of T5 (Reviewers #ZwnC, #Wkeh, #KMVC)**

We conduct experiments on different sizes of T5: T5-small (60M), T5-base (220M), T5-large (770M). As shown in the tables below, as the model size increases, the performance on each task improves, but the training time also increases. Considering the trade-off between performance and computational cost, we use the T5-base model in $M^3$GPT.

| Methods on Motion-X | LLM | Training time | Text-to-Motion (R TOP1$\uparrow$ \| FID$\downarrow$ \| Div$\uparrow$) | Motion-to-Text (R TOP3$\uparrow$ \| Bleu@4$\uparrow$ \| CIDEr$\uparrow$) |
| ---- | ---- | ---- | ---- | ---- |
| $M^3$GPT | T5-small (60M) | 5 days | 0.598 \| 0.096 \| 2.202 | 0.822 \| 10.43 \| 38.22 |
| $M^3$GPT | T5-base (220M) | 7 days | 0.615 \| 0.093 \| 2.253 | 0.845 \| 11.50 \| 42.93 |
| $M^3$GPT | T5-large (770M) | 8 days | **0.619** \| **0.090** \| **2.256** | **0.848** \| **11.64** \| **43.05** |

| Methods on AIST++ | LLM | Training time | Music-to-Dance (FID$_k$$\downarrow$ \| Div$_k$$\uparrow$ \| BAS$\uparrow$) | Dance-to-Music (BCS$\uparrow$ \| BHS$\uparrow$) |
| ---- | ----- | ----- | ----- | ---- |
| $M^3$GPT | T5-small (60M) | 5 days | 28.05 \| 5.96 \| 0.2091 | 89.1 \| 91.2 |
| $M^3$GPT | T5-base (220M) | 7 days | 23.34 \| 7.50 \| 0.2056 | 93.6 \| 94.0 |
| $M^3$GPT | T5-large (770M) | 8 days | **23.26** \| **7.54** \| **0.2061** | **93.8** \| **94.1** |

### **Q2: More details on the training process and computational resources. (Reviewer #KMVC)**

Our training process is divided into three stages: Multimodal Tokenizers Training (Stage 1); Modality-Alignment Pre-training (Stage 2); Instruction Fine-tuning (Stage 3). All experiments are conducted on a machine with 8 A40 (48G) GPUs. Detailed implementation can be found in Lines 276-285 of the main paper.
Specifically:

- Stage 1: We combine Motion-X (64867 motion samples), AIST++ (952 dance samples) and FineDance (177 dance samples) to train a motion tokenizer. This stage involves 50K training steps, with a batch size of 1000 and a learning rate of 1e-4. **Stage 1 takes ~2.5 days to complete.**
- Stage 2: We train on 6 main tasks and 2 auxiliary tasks. The detailed statistics of the training data are shown in the table below. This stage involves 1000K training steps, with a batch size of 56 and a learning rate of 2e-4. **Stage 2 takes ~5.5 days to complete.**
- Stage 3: We use 200 instruction templates to construct instruction samples for finetuning the model. This stage involves 200K training steps, with a batch size of 48 and a learning rate of 1e-4. **Stage 3 takes ~1.5 days to complete.**

| Tasks | T2M | M2T | Motion Prediction/In-between | A2D | D2A | A2T | T2D |
| ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Training dataset | Motion-X | Motion-X | Motion-X/AIST++/FineDance | AIST++/FineDance | AIST++/FineDance | AIST++/FineDance | AIST++/FineDance |
| Training samples number | 64867 | 64867 | 64867/952/177 | 952/177 | 952/177 | 952/177 | 952/177 |

### **Q3: The results of instruction-tuning only on AIST++ or FineDance (Reviewer #Wkeh)**

As shown in the tables below, we add the results of $M^3$GPT instruction-tuned only on AIST++ and FineDance. As shown, instruction-tuning on a single task further improves performance. We will add these results in the final version.
| Methods on AIST++ | Music-to-Dance (FID$_k$$\downarrow$ \| Div$_k$$\uparrow$ \| BAS$\uparrow$) | Dance-to-Music (BCS$\uparrow$ \| BHS$\uparrow$) |
| ---- | ----- | ----- |
| $M^3$GPT (Pre-trained) | 27.65 \| 7.52 \| 0.2105 | 93.4 \| 93.8 |
| $M^3$GPT (Instruction-tuned) | 24.34 \| 7.50 \| 0.2056 | 93.6 \| **94.0** |
| $M^3$GPT (Instruction-tuned for single task) | **23.01** \| **7.85** \| **0.2261** | **94.3** \| **94.0** |

| Methods on FineDance | Music-to-Dance (FID$_k$$\downarrow$ \| Div$_k$$\uparrow$ \| BAS$\uparrow$) | Dance-to-Music (BCS$\uparrow$ \| BHS$\uparrow$) |
| ---- | ----- | ----- |
| $M^3$GPT (Pre-trained) | 92.35 \| 7.67 \| 0.2134 | 83.16 \| 73.65 |
| $M^3$GPT (Instruction-tuned) | 86.47 \| 7.75 \| 0.2158 | 84.10 \| 74.66 |
| $M^3$GPT (Instruction-tuned only A2D) | 65.27 \| 7.83 \| 0.2195 | 84.79 \| 75.20 |
| $M^3$GPT (Instruction-tuned only A2D with 30-second clips) | **42.66** \| **8.24** \| **0.2231** | **86.72** \| **79.64** |

### **Q4: The statistics of the music-to-text (A2T) and text-to-dance (T2D) paired datasets (Reviewer #Wkeh)**

The A2T and T2D datasets are constructed from existing dance datasets (AIST++ and FineDance). Given a music-dance pair, we construct paired textual descriptions for the music/dance data. Specifically, we use the style annotations of the music to create paired texts, such as *a person is dancing Jazz* for the *Jazz* style. The table below shows the statistics of the A2T and T2D datasets. We will add these statistics in the final version.

| Task | Training sample number (AIST++, FineDance) | Testing sample number | Average sequence duration |
| ---- | ---- | ---- | ---- |
| A2T | 1129 (952, 177) | 60 (40, 20) | (13.31s, 135.8s) |
| T2D | 1129 (952, 177) | 60 (40, 20) | (13.31s, 135.8s) |

Pdf: /pdf/1684a1cb859fc7b23ccbe85e44c3a2c788ddcf3c.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Mining and Transferring Feature-Geometry Coherence for Unsupervised Point Cloud Registration
Accept (poster)
Summary: The paper introduces a new unsupervised outdoor point cloud registration method called INTEGER, which dynamically integrates high-level contextual information and low-level geometric information to generate reliable pseudo-labels, addressing the issue of poor performance in complex outdoor environments seen in previous methods reliant on geometric data. By incorporating the Feature-Geometry Coherence Mining module, Anchor-Based Contrastive Learning, and the Mixed-Density Student model, this method not only performs well in scenarios with significant variations in data density but also demonstrates its efficiency and generalizability on standard datasets like KITTI and nuScenes. Overall, the INTEGER method significantly improves in accuracy and generalizability, especially in handling complex and distant outdoor scenes, showing advantages that traditional methods struggle to match. Strengths: Originality:The INTEGER method demonstrates originality by innovatively combining high-level contextual and low-level geometric information for unsupervised point cloud registration. This method creatively addresses challenges in outdoor environments, where previous methods relying solely on geometric data often fail. Quality:The paper showcases high-quality research through detailed methodological execution and robust evaluations on standard benchmarks like KITTI and nuScenes. The proposed mixed-density student model, which learns density-invariant features, ensures robust performance across diverse scenarios. Clarity:The manuscript is clearly written and well-organized, effectively communicating its core ideas, methodologies, and results. It uses figures and tables effectively to aid understanding, making it accessible to readers familiar with point cloud processing. Significance:The work is highly significant, offering potential transformative impacts in autonomous driving and robotics by enhancing unsupervised point cloud registration in outdoor scenes. 
Its competitive performance against supervised methods highlights its practical relevance and potential to reduce reliance on labeled data. Weaknesses: Dependency on Initial Conditions The INTEGER method's performance heavily relies on the quality of the initial teacher model. A poorly initialized teacher model could propagate errors and inefficiencies through the learning process. The paper could explore and discuss alternative strategies for more robust initializations or provide a more detailed analysis of how the initial conditions affect the overall performance. This would give readers a clearer understanding of the method's robustness and potential limitations in real-world scenarios. Comparison with State-of-the-Art Methods While the paper presents comparisons with existing methods, the selection seems limited to those that are closely aligned with the proposed method's framework. Including a broader range of state-of-the-art methods, especially recent unsupervised learning approaches that use different paradigms, could help validate the strengths and uniqueness of INTEGER more convincingly. This comparison could also be extended to include a discussion on how INTEGER performs relative to these methods in terms of computational efficiency and scalability. Discussion on Failure Cases and Limitations The paper could benefit from a more thorough discussion of scenarios where INTEGER might underperform, such as extremely sparse point clouds or highly noisy environments. Understanding the method's limitations and potential failure cases would help in setting realistic expectations and could guide future research to address these specific challenges. Generalizability Across Different Datasets While the paper tests INTEGER on KITTI and nuScenes, additional experiments on datasets from different domains or with different characteristics could help demonstrate the method's adaptability and robustness. 
Exploring performance on datasets with varying densities and noise levels would provide a more comprehensive evaluation of the method's effectiveness across diverse real-world conditions. Technical Quality: 2 Clarity: 3 Questions for Authors: Robustness of Initial Teacher Model: Could you elaborate on the strategies used for the initialization of the teacher model? Given the dependency of the INTEGER method on the quality of the initial teacher, understanding its robustness in adverse conditions or with suboptimal initialization would be informative. Are there specific conditions under which the initial model tends to fail, and how does this impact the overall registration accuracy? Broader Comparisons: The paper currently compares INTEGER with a select group of methods. Could you include comparisons with additional state-of-the-art unsupervised methods that employ fundamentally different approaches? This could help in understanding the unique advantages or potential trade-offs of INTEGER. Additionally, it would be beneficial if you could provide insights into the computational efficiency and scalability of INTEGER compared to both supervised and unsupervised counterparts. Confidence: 4 Soundness: 2 Presentation: 3 Contribution: 2 Limitations: The manuscript does discuss some technical limitations related to the initialization of the teacher model and the conditions under which the method may underperform. However, these discussions could be expanded to include: Scalability: How does INTEGER scale with increasingly large datasets or more complex environments? Including a discussion on computational resources and runtime could provide a clearer scope of applicability. Robustness: More detailed exploration of robustness against different types of noise or outliers in point cloud data would be beneficial. Given the application domains of point cloud registration, such as autonomous driving and robotics, understanding these aspects is crucial. 
Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to thank the reviewer for the valuable comments. **Q1:** *Robustness of Initial Teacher Model: the strategies used for teacher initialization and its robustness in adverse conditions or with suboptimal initialization.* **A:** The initialization strategy is detailed in Sec 3.2. We first split an input point cloud into two partially overlapping parts, apply a random rigid transformation to one part, and use it as the ground truth. To mimic real-world LiDAR scans, we employ a periodic sampling technique. We also apply commonly used data augmentations, such as random scaling and noise. Visualizations of the generated pairs are provided in Sec. A.5 (L594-600). The performance of INTEGER indeed relies on the initial teacher. However, INTEGER remains robust even with suboptimal initialization. Our ablation study (Sec 4.3) shows that INTEGER maintains high-quality pseudo-labels even with EYOC-style initialization (achieving 71.9% IR compared to EYOC's 53.2% IR). This superior robustness may be attributed to the proposed FCEM module, which dynamically adapts the teacher, enhancing it before mining pseudo-labels. To further demonstrate the robustness, we have compared (1) a **F**ully-trained teacher, (2) an **U**nder-trained teacher, and (3) a **R**andomly-initialized teacher. We measured the Inlier Ratio of the teacher in the first epoch to assess the quality of pseudo-labels:

| | **F** | U | R |
| - | - | - | - |
| IR@1st Epoch | **81.2%** | 72.1% | 34.9% |

We observed only a **minor** performance drop when the teacher wasn't fully trained, and the randomly-initialized teacher performed only slightly worse than the **well-trained** EYOC. In this **very extreme** case, FCEM's adaptation process ensures that the teacher **still performs surprisingly well**, highlighting both the superiority of the FCEM design and the robustness of our method.
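The initialization scheme described above (split a scan into two partially overlapping parts, rigidly transform one part, use that transform as the ground-truth pose) can be sketched roughly as follows. This is a minimal illustrative NumPy sketch: the function names, the axis-based split, and the sampling ranges are our own assumptions, and it omits the periodic sampling and augmentations the authors actually use.

```python
import numpy as np

def random_rigid_transform(rng):
    """Random rotation about the z-axis plus a random translation,
    returned as a 4x4 homogeneous matrix (illustrative ranges)."""
    theta = rng.uniform(0.0, 2.0 * np.pi)
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    T[:3, 3] = rng.uniform(-5.0, 5.0, size=3)
    return T

def make_synthetic_pair(points, rng, overlap=0.6):
    """Split `points` (N, 3) into two partially overlapping halves
    along x, then rigidly transform the second half; T serves as the
    ground-truth pose for pretraining the teacher."""
    x = points[:, 0]
    lo, hi = np.quantile(x, [1.0 - overlap, overlap])
    src = points[x <= hi]                        # lower portion of the scan
    tgt = points[x >= lo]                        # upper portion, overlaps src
    T = random_rigid_transform(rng)
    tgt_h = np.c_[tgt, np.ones(len(tgt))] @ T.T  # apply T in homogeneous coords
    return src, tgt_h[:, :3], T
```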
We would also like to highlight that after training for several epochs, **the teacher's IR improves significantly**, with **minimal negative impact** on the student's convergence. Indeed, more complicated registration networks, such as Predator, **slightly** struggle with the current initialization strategies. During training, they show suboptimal performance at the **beginning** of teacher-student training but **eventually catch up** after more training epochs, highlighting the superior effectiveness of the proposed FCEM module. We speculate that the more complicated networks may suffer from overfitting the synthetic data. In future work, we will devise a better strategy to address this issue. Regarding adverse conditions, we plan to investigate common adverse conditions in autonomous driving scenarios, such as extreme weather, in our future work. This is a critical aspect of real-world applications that requires thorough investigation. --- **Q2:** *Broader Comparisons: Additional SOTA unsupervised methods that employ fundamentally different approaches.* **A:** Our INTEGER focuses on **outdoor scenes**; only **a few unsupervised methods**, namely EYOC and RIENet, are applicable in this context. We ***have included both*** in our comparison (see Sec 4.1, Table 1). Among these unsupervised methods, RIENet takes a **fundamentally different approach**. Our INTEGER not only outperforms EYOC, but also surpasses RIENet by a large margin. --- **Q3:** *Insights into the computational efficiency and scalability of INTEGER compared to both supervised and unsupervised counterparts.* **A:** As mentioned in Limitations (Sec. A.1), our method is slightly slower at obtaining pseudo-labels than EYOC because of the proposed iterative method used in the FGC-Branch of the FCEM module. However, with only 33% more time cost on KITTI, our INTEGER has **much more accurate** pseudo-labels with 28.1% higher IR (81.3% vs. EYOC's 53.2%), and **considerable improvements** on RR% of very distant pairs.
(54.2% vs. EYOC's 52.3%). We show additional results for supervised baselines on the nuScenes dataset and generalizability tests in **part 2 of the general response**, which show that our method has **better scalability than SOTA supervised and unsupervised methods.** Please refer to it for details. --- **Q4:** *Scalability with increasingly large datasets or more complex environments and the discussion on computational resources and runtime; Generalizability Across Different Datasets.* **A:** Our method targets autonomous driving scenarios, so we tested it on KITTI and nuScenes, standard datasets in this field. These datasets vary in density, occlusion, and noise levels, enabling us to evaluate INTEGER's scalability and robustness. To **further** demonstrate the scalability of INTEGER, we have also evaluated its performance on ETH, an outdoor dataset with **rural and forest scenes**, which are primarily unstructured and complex, differing from the urban environments of the KITTI and nuScenes datasets. Moreover, we also conducted additional experiments in **indoor** scenes as requested by other reviewers. Please refer to **parts 2 and 3 of the general response** for results. These results also show the superior generalizability of INTEGER. --- **Q5:** *Robustness against different types of noise or outliers.* **A:** In outdoor scenes, registration methods are often challenged by extremely low overlap and density variation, which introduce noise in correspondences, especially for distant pairs. Our method demonstrates superior robustness against these challenges, as shown in Sec 4.1, Table 1. --- **Q6:** *Discussion on Failure Cases and Limitations.* **A:** The robustness of the proposed INTEGER against different types of noise and outliers, including the noise introduced by extreme weather, requires further investigation. These situations are particularly relevant for autonomous driving. We will discuss these limitations and future work in the conclusion.
--- Rebuttal 2: Comment: Dear Reviewer 2VKD, We sincerely thank you for your precious time and efforts in reviewing our paper. We greatly appreciate your insightful and detailed feedback, and we have carefully addressed the concerns in the rebuttal. Please let us know if any aspects remain unclear, and we are happy to provide further clarification. We are looking forward to your reply. Best regards, The authors of submission 1909 --- Rebuttal 3: Comment: The authors' answer here is convincing and resolves some of my previous questions. Although only a few unsupervised methods are compared, it can be seen that the proposed method has advantages over those methods. If possible, it would be better to compare more unsupervised methods. Although the performance of the proposed method is lacking in some aspects compared to other methods, it also shows that the method proposed by the authors has a good effect overall, which is surprising. I keep my original score. --- Rebuttal Comment 3.1: Title: Thanks Comment: Thanks for continuing to recognize our work. Your suggestions on understanding the robustness of the teacher initialization and the scalability across various datasets are very helpful: these experiments effectively highlight the superior robustness and generalizability of our work. Additionally, your suggestion to add more unsupervised baselines enables a more comprehensive comparison between our work and existing methods, and we are very grateful for that. Although we have found that some existing indoor unsupervised methods fail to work in outdoor scenes, we will conduct these experiments and include them in the experiment section. We sincerely appreciate your thorough review, and we will incorporate your suggestions into the final version.
Summary: This paper introduces an unsupervised framework for point cloud registration that generates reliable pseudo-correspondences using both low-level geometric and high-level contextual information. It employs a widely used teacher-student architecture and proposes Anchor-Based Contrastive Learning to facilitate robust feature space development through contrastive learning with anchors. Additionally, the paper introduces a Mixed-Density Student approach to learn density-invariant features, effectively addressing challenges associated with density variations and low overlap in outdoor scenarios. Strengths: This paper focuses on registering point clouds without relying on pose prior datasets to supervise model training. The observation that in the feature space, points of latent new inlier correspondences tend to cluster around respective positive anchors summarizing features of existing inliers is particularly interesting. The FCEM adaptation of the teacher model to a data-specific teacher for the current mini-batch is intriguing. However, the procedure described in the manuscript is complex and challenging to follow. It seems that the teacher model is first updated using the current mini-batch dataset, then correspondences are selected to train the student, followed by an update of the teacher weights using Exponential Moving Average (EMA). This sequence raises questions about the efficiency and effectiveness of such a dynamic updating scheme. The authors need to provide a clearer explanation of this process. Weaknesses: First limitation is that some parts of the framework appears remarkably similar to the established EYOC method, thereby calling into question the novelty of this contribution. Besides, the integration of different levels (low and high) of information for mining pseudo-labels is highlighted as a distinguishing contribution, yet the manuscript lacks a clear explanation of the specific mechanisms. 
If Spatial Compatibility Filtering is employed, it should be noted that this technique is already a component of EYOC. Technical Quality: 2 Clarity: 2 Questions for Authors: This paper introduces an unsupervised framework for point cloud registration that integrates both low-level geometric and high-level contextual information to generate reliable pseudo-labels. While the concept is promising, the manuscript could benefit from additional detail and clarification in several areas: 1. Similarity to Existing Methods: Some parts of the framework appears remarkably similar to the established EYOC method, thereby calling into question the novelty of this contribution. 2. Using low and high level information for mining pseudo-labels is highlighted, yet the manuscript lacks a clear explanation of the specific mechanisms. If Spatial Compatibility Filtering is employed, it should be noted that this technique is already a component of EYOC. 3. The manuscript mentions pairs with distances [d1, d2], but it does not explain the impact of these distances for registration. A detailed explanation would enhance understanding of how these challenge the registration. 4. Dataset Differences and Overlap Ratios: The experimental section discusses the use of KITTI and nuScenes datasets. It is important to delineate the differences between these datasets, especially concerning their overlap ratios. Providing specific overlap ratios for these and other datasets would facilitate a more nuanced discussion of the framework's applicability and robustness across different scenarios. 5. Performance on Various Datasets: The framework's performance is evaluated on outdoor datasets like KITTI and nuScenes, but there is no discussion on its efficacy in indoor settings, such as with the 3DLoMatch dataset. Insights into performance in diverse environments would be particularly valuable, given the unique challenges posed by indoor scenarios. 6. 
Training Time: The manuscript omits details regarding the training duration. Including this information is essential for assessing the practicality of deploying this framework in real-world applications. 7. Comparative Analysis with EYOC: If EYOC also utilizes synthetic pretraining, a direct comparison with the proposed method under similar conditions is necessary. Such a comparative analysis would provide a fairer assessment of the proposed method's strengths and help highlight any genuine advancements. 8. Architecture Details for Predator: Since Predator architecture employs cross-attention and a dual-branch structure but lacks clarity on how the network is divided into teacher and student components. This is non-trivial, and thus, more detailed explanations are required to understand its implementation and functionality fully. 9. Generalization to Indoor Environments: There is a need to assess how networks trained on outdoor datasets perform in indoor point cloud registration tasks. Evaluating the generalization capabilities of the framework would give insights into its versatility and effectiveness across different application settings. Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 3 Limitations: The authors adequately addressed the limitations Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for the valuable comments. **For concerns about the efficiency and effectiveness of FCEM, please refer to the "6. Efficiency and effectiveness of FCEM" part in the general response.** **Q1:** *Similarity to EYOC.* **A:** Our method **significantly differs** from EYOC in several key aspects: (1) **Novel Insight:** We innovatively introduce a new perspective by demonstrating that inlier matches are closer to positive anchors than negative ones, which summarize existing inlier and outlier features, respectively. (2) **Pseudo-Labeling with Multilevel Information:** Our innovative FCEM module dynamically adapts the teacher and integrates low-level geometric and high-level contextual information, whereas EYOC **only uses low-level geometric cues**. (3) **Robust Knowledge Transfer:** We propose ABCont for effective and robust teacher-student knowledge transfer, which is not present in EYOC. (4) **Density Invariance:** We introduce MDS to explicitly address density variation in outdoor scenarios, a concern **overlooked by EYOC**. Experiments show that our method outperforms EYOC on two datasets by a considerable margin. --- **Q2:** *Clarity on pseudo-labeling mechanisms and the use of Spatial Compatibility Filtering (SCF) in relation to EYOC.* **A:** We would like to clarify a potential misunderstanding. While SCF in the GSA-Branch appears to be similar to EYOC regarding rejecting outliers, our intention **completely differs from EYOC**: We aim to **improve the *teacher* for the current data batch**, whereas EYOC uses it for pseudo-label extraction for the ***student***. Moreover, experiments (Sec 4.2, L272-285) show that SCF can be implemented using pose estimators other than the SC2-PCR estimator used by EYOC. Furthermore, we **do not solely rely** on SCF to incorporate both low- and high-level information; this is achieved through the *entire* FCEM, particularly the FGC-Branch (Sec 3.3, L186-216).
We iteratively identify latent inliers using feature-space anchors and exclude outliers via SCF. The contributions are novel and not found in EYOC. In our manuscript (L174-175), we cited EYOC to credit their work. We will add detailed discussions on the differences from EYOC in the revised version. --- **Q3:** *Impact of distances [d1, d2] on registration.* **A:** Our evaluation follows EYOC's existing protocol, focusing on pairs within distances [d1, d2], consistent with our method's design for autonomous driving data. The differences between frames, **indicated by the distance between them**, are crucial for demonstrating robustness and generalizability in real-world applications like collaborative perception. Increased distances mainly present two challenges: (1) Overlap Ratio: Increasing distances reduce overlap ratios, complicating the search for reliable correspondences. (2) Density Variation: Increasing distances often lead to significant point density variations, challenging feature matching without density-invariant features. --- **Q4:** *Dataset Differences and Overlap Ratios.* **A:** **Fig. 2 in the PDF attachment of the general response** illustrates the overlap ratios w.r.t. the distances between frames in the KITTI and nuScenes datasets, indicating that increasing distances reduce overlap ratios. Furthermore, KITTI and nuScenes differ in LiDAR resolution (64 LiDAR beams in KITTI and 32 LiDAR beams in nuScenes), resulting in denser point cloud data in the KITTI dataset. The sparsity of nuScenes point cloud data makes it more challenging for registration, especially for feature extraction. --- **Q5:** *Performance on Various Datasets.* **Q9:** *Generalization to Indoor Environments.* **A:** We address Q5 and Q9 together here. We conduct additional experiments in **indoor** (3DMatch/3DLoMatch dataset) and **non-urban** environments (ETH dataset). Please refer to **parts 2 and 3 of the general response** for details.
--- **Q6:** *Lack of Training Time Details.* **A:** For most experimental results with FCGF, the full training procedure takes approximately 76 hours. For the additional results with Predator presented in the Appendix, the training procedure takes about 140 hours. The inference efficiency is the same as that of the registration network (0.16s for FCGF and 0.30s for Predator per pair) because our method does not introduce any **test-time** modules. Please refer to **part 5 of the general response** for details. --- **Q7:** *Direct Comparison with EYOC if it also uses synthetic pretraining.* **A:** EYOC **doesn't use** synthetic pretraining. Instead, it assumes minimal transformation between two consecutive LiDAR frames, using the identity transformation as the ground truth for the teacher's pretraining. In contrast, our method employs synthetic pretraining to initialize the teacher, as detailed in Sec 3.2 (L149-156). Table 4 of the general response **shows that EYOC's assumption does not hold in real-world data**: it may introduce significant noise, leading to a suboptimal initial teacher for EYOC. For a direct comparison, we evaluate both methods using EYOC's **App**roximation approach and **Syn**thetic pretraining.

| | App. | Syn. |
| - | - | - |
| EYOC | 53.2 | **54.0** |
| **Ours** | 71.9 | **81.2** |

Our method consistently outperforms EYOC even when initialized the same way as EYOC. While synthetic pretraining benefits EYOC, the improvement is limited. --- **Q8:** *Clarification on the Predator architecture and its use in the framework.* **A:** **Predator doesn't employ a dual-branch structure**. It consists of a **single-branch KPConv encoder-decoder structure** with an overlap-attention module in between, and we do not divide the architecture into teacher and student **components**.
Regarding Predator's results in the Appendix, we utilize the **entire** Predator model described in Predator's paper, pre-training it fully as the teacher, and then supervising the student with the **same** architecture. We will clarify this in the revised version. --- Rebuttal Comment 1.1: Title: Post-rebuttal Comment: I appreciate the authors' clarifications. After reading the rebuttal, I still have the following questions: It seems we may have different understandings of the dual-branch structure. Let me restate the concept as described in the original paper: PREDATOR is a two-stream encoder-decoder network, producing point-level features for both the source and target point clouds. How the final loss is calculated remains unclear to me. Did you calculate the loss from the two streams? Again, I thank the authors for their work and time. --- Rebuttal 2: Comment: Dear Reviewer SEpR, We sincerely thank you for your precious time and efforts in reviewing our paper. We greatly appreciate your insightful and detailed feedback, and we have carefully addressed the concerns in the rebuttal. Please let us know if any aspects remain unclear, and we are happy to provide further clarification. We are looking forward to your reply. Best regards, The authors of submission 1909 --- Rebuttal 3: Title: Further Clarification on Loss Calculation of PREDATOR Comment: We sincerely thank you for your precious time and efforts. We are happy that we have addressed most of your concerns. The reviewer is still concerned about the loss calculation for both the source and target point clouds. Indeed, as stated in the PREDATOR paper, PREDATOR is a two-stream encoder-decoder network, which produces point-level features for both the source and target point clouds.
We provide more details about the loss for the source and target point clouds as follows: As elaborated in Sec. 3.4 (L236-238) of our manuscript, the overall training loss for the student is a weighted combination of multiple $\mathcal{L} _ \text{ABCont}$ terms introduced by ABCont (see Sec. 3.1), each defined as $\mathcal{L} _ \text{ABCont} = \mathcal{L} _ \text{reg} + \lambda _ \text{aux} \mathcal{L} _ \text{aux}$. - For the auxiliary loss part $\mathcal{L}_\text{aux}$ **proposed by us**, as stated in L142-143 of the manuscript, we calculate it **from the two streams** (i.e., for **both** the source **and** the target point clouds): $\mathcal{L} _ \text{aux} = \frac{1}{2} (\mathcal{L} _ \text{aux}^\mathbf{P} + \mathcal{L} _ \text{aux}^\mathbf{Q})$ - For the registration loss part $\mathcal{L}_\text{reg}$, as stated in L137 of the manuscript, we **directly adopt the loss of the registration network,** such as PREDATOR and FCGF used in our experiments. Therefore, for experiments involving PREDATOR, we calculate the registration loss in the same way as in their paper. As stated in their paper, they calculate the loss **from the two streams** (i.e., for **both** the source **and** the target point clouds) and average them to form the total registration loss. In summary, we calculate the loss from the two streams in the experiments involving PREDATOR during teacher-student training. As for synthetic pretraining, we also directly adopt the loss of the registration network, and thus calculate the loss **from the two streams** (i.e., for **both** the source **and** the target point clouds) for PREDATOR. We hope this further explanation addresses your questions on the loss calculation. Thank you again for your constructive suggestions. We sincerely appreciate your thorough review, and we will incorporate your suggestions into the final version, including further clarification on the loss calculation.
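As an illustration of the two-stream loss combination described above, a minimal sketch follows. The function names and scalar loss inputs here are hypothetical placeholders; in practice each term is produced by the registration network and the anchor-based module on real features:

```python
def aux_loss_two_stream(aux_P: float, aux_Q: float) -> float:
    """Average the auxiliary loss over the source (P) and target (Q) streams:
    L_aux = 1/2 * (L_aux^P + L_aux^Q)."""
    return 0.5 * (aux_P + aux_Q)


def abcont_loss(reg_loss: float, aux_P: float, aux_Q: float,
                lambda_aux: float = 1.0) -> float:
    """L_ABCont = L_reg + lambda_aux * L_aux, where L_reg is the registration
    network's own loss (itself averaged over both streams in PREDATOR's case)."""
    return reg_loss + lambda_aux * aux_loss_two_stream(aux_P, aux_Q)
```

For example, with a registration loss of 1.0, per-stream auxiliary losses of 0.4 and 0.6, and a weight of 0.5, the combined loss is 1.0 + 0.5 × 0.5 = 1.25.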
--- Rebuttal 4: Comment: Dear Reviewer SEpR, We sincerely appreciate the time and effort the reviewer has dedicated to evaluating the work. As the reviewer-author discussion phase is coming to an end, we hope that our responses and additional experiments have effectively addressed the concerns raised. We respectfully request that the reviewer re-evaluate our work and kindly reconsider the ratings. Thank you for the continued support and understanding. Best regards, The authors of submission 1909 --- Rebuttal Comment 4.1: Title: Reply for discussion Comment: Hi, Thank you for your replies and the well-prepared rebuttal. I almost fully understand what you did in the method section regarding PREDATOR. I just have a small question about the PREDATOR part, though it does not affect my final score. In this case, both \(L_{\text{aux}}\) and \(L_{\text{reg}}\) are implemented as circle losses. For the distillation part, the gradient backpropagation only passes through one decoder and two encoders for either \(L_{\text{aux}}^P\) or \(L_{\text{aux}}^Q\). However, when combined, they will cover both encoders and decoders. The \(L_{\text{reg}}\) loss passes through both encoders and decoders, so it seems that the \(L_{\text{reg}}\) part might be redundant and could potentially be removed. Best, --- Rebuttal 5: Comment: Dear Reviewer SEpR, We sincerely thank you for your precious time. We are happy to see that our work related to PREDATOR has been understood. We would like to clarify a potential misunderstanding of the two parts of our loss, namely the anchor-based auxiliary loss $\mathcal{L} _ {\text{aux}}$ and the registration loss $\mathcal{L} _ {\text{reg}}$. We **proposed** $\mathcal{L} _ {\text{aux}}$ to increase the feature similarity to positive anchors from the teacher and decrease the feature similarity to negative anchors from the teacher. It is **not** the Circle Loss, although they may share some similarities in their mathematical expressions.
The detailed definitions of these anchors are given in L106-117 of our manuscript. Overall, $\mathcal{L} _ {\text{aux}}$ is designed to **regularize the feature space** of the student with the teacher's extracted features. $\mathcal{L} _ {\text{reg}}$, in PREDATOR's case, involves the Circle Loss. However, it **cannot be removed** because it is the **primary supervision signal** used to supervise the student with the pseudo-labels from the teacher. In Table 3 of our ablation study, we remove $\mathcal{L} _ {\text{aux}}$ and find a performance drop. However, with $\mathcal{L} _ {\text{reg}}$ and our more accurate pseudo-labels compared with existing methods, we are still able to outperform existing SOTAs (our 83.5% mRR vs. EYOC's 83.2%), which demonstrates **the importance** of $\mathcal{L} _ {\text{reg}}$. For the student, indeed, we **need to train** both encoders and decoders, as well as the overlap-attention module in between. As you said, the loss *passes through both encoders and decoders*, so it can adequately supervise the learning of **all components of the student**. During teacher-student training, we prevent gradients from being propagated to the teacher, as is typically done in works involving distillation. We sincerely appreciate your thorough review. Indeed, it is crucial to understand how gradients are back-propagated and their effects on different modules of the network. We will conduct additional experiments and incorporate your suggestions into the final version. Best regards, The authors of submission 1909 --- Rebuttal Comment 5.1: Title: Reply to Authors Comment: Thank you for your clarification. If the paper is accepted, I strongly suggest releasing the code, as it would help readers understand the method more clearly. I understand that \(L_{\text{aux}}\) is used to increase feature similarity to positive anchors from the teacher and decrease feature similarity to negative anchors.
However, it still functions as a circle loss, as stated in PREDATOR and other papers like GeoTransformer. Are the hyperparameters in this loss, such as \(\gamma\), set the same as in PREDATOR? (Perhaps I missed this detail.) Regarding Table 3, which network did you use? Was it not FCGF? I ask this because you mentioned in your response that the loss ablation study cannot remove the regularization loss. In your supplementary material, you provide results for both FCGF and PREDATOR, which has confused me further. --- Reply to Comment 5.1.1: Comment: Dear Reviewer, Thank you for your reply. Following our usual practice, we will release the code if the paper is published. We would also like to clarify a potential misunderstanding in the ablation study. As we stated in our previous reply, we indeed **have removed the auxiliary (regularization) loss** $\mathcal{L} _ {\text{aux}}$ in our ablation study (see row "w/o ABCont" in Table 3), and because the **registration loss** $\mathcal{L} _ {\text{reg}}$ is the primary part of the student's supervision, we cannot remove it. Best regards, The authors of submission 1909
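To make the anchor-based auxiliary objective discussed in this thread concrete, here is a minimal, hypothetical sketch: each student feature is pulled toward its nearest positive anchor from the teacher and pushed away from its nearest negative anchor via a hinge on the distance gap. This is an illustrative stand-in, not the paper's exact formula (which is defined in its Sec. 3.1):

```python
import numpy as np


def anchor_aux_loss(feats, pos_anchors, neg_anchors, margin=0.5):
    """Hinge-style sketch of an anchor-based auxiliary loss: penalize student
    features that are closer to a negative anchor than to a positive anchor
    (by more than `margin`). All inputs are (N, D) / (P, D) / (M, D) arrays."""
    # Normalize features and anchors before computing distances.
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    pos = pos_anchors / np.linalg.norm(pos_anchors, axis=1, keepdims=True)
    neg = neg_anchors / np.linalg.norm(neg_anchors, axis=1, keepdims=True)
    # Distance from each feature to its nearest positive / negative anchor.
    d_pos = np.min(np.linalg.norm(feats[:, None] - pos[None], axis=2), axis=1)
    d_neg = np.min(np.linalg.norm(feats[:, None] - neg[None], axis=2), axis=1)
    # Hinge on the gap: zero loss once d_pos is smaller than d_neg by `margin`.
    return float(np.mean(np.maximum(d_pos - d_neg + margin, 0.0)))
```

A feature that coincides with a positive anchor and sits far from all negative anchors incurs zero loss; a feature near a negative anchor is penalized.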
Summary: This submission proposes an unsupervised framework for point cloud registration. The key contribution is a two-stage training scheme, which first trains a teacher network on synthetic data that extracts features in a density-invariant manner, and then trains a student network with pseudo-labels produced by the teacher network. Strengths: 1. The submission is in general well structured. As is typical for this type of work, it contains quite a few notations and architecture design details, which are presented in a relatively clear manner. 2. The key insight, namely, "points of latent new inlier correspondences tend to cluster around respective positive anchors..." seems interesting and fresh to me (though I am not an expert in this area). 3. The experimental performance looks competitive. Weaknesses: 1. The key insight is presented more as an empirical observation; is there any chance that it can be justified in a more concrete way, even on some toy examples? 2. I didn't find a report on the training cost and inference efficiency in the submission. Technical Quality: 3 Clarity: 3 Questions for Authors: In KITTI and nuScenes, as the data are scanned with a limited degree of viewpoint freedom, can such a method be applied to cases with significant rigid transformations? For example, in robotics, one may have to handle objects in varying poses randomly sampled in SE(3). If it is doable, could you please provide some insight into, or an estimate of, the cost of the pre-training stage? Minor points: L161, missing space. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitation discussion looks good to me. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to thank the reviewer for the valuable comments. **Q1:** *The key insight is presented more as an empirical observation; is there any chance that it can be justified in a more concrete way, even on some toy examples?* **A:** Thanks for your valuable comments. Fig. 1 in our manuscript provides qualitative results concerning our key insight. We also conduct an additional simple quantitative experiment. For a sample pair from KITTI, we report the IR% of pseudo-labels at the first iteration in the FGC-Branch **under different thresholds based on the feature-space distance to positive anchors** in the following table. Features are normalized before calculating distances. ||0.1|0.5|0.8| |-|-|-|-| |IR%|98.2|88.3|85.1| The table shows that most inliers are close to positive anchors in the feature space, and outliers are likely far away from positive anchors. This result conforms to our key insight. Notably, despite the high IR% of correspondences very close to anchors at the first iteration, **there are too few such correspondences to supervise the student**, and some outliers still exist. This fact calls for our iterative design of FCEM to include potential inliers and update the anchors iteratively. We will refine the presentation of our key insight by (1) providing detailed quantitative results that cover more pairs from both the KITTI and nuScenes datasets and (2) adding some toy examples to Fig. 1 in the revised version. --- **Q2:** *Missing report on the training cost and inference efficiency in the submission.* **A:** As an unsupervised registration framework, our method's training cost and inference efficiency are highly dependent on the choice of registration network. For most experimental results with FCGF, the full training procedure takes approximately 76 hours. For the additional results with Predator presented in the Appendix, the training procedure takes about 140 hours.
The inference efficiency is the same as that of the registration network (0.16s for FCGF and 0.30s for Predator per pair) because our method does not introduce any **test-time** modules. We will add these details in the revised version. --- **Q3:** *Can INTEGER be applied to cases with significant rigid transformations, e.g., object-level cases in robotics?* **A:** Although our work focuses on ***outdoor scene-level*** data rather than **object-level** data, we are willing to demonstrate the effectiveness of our method in scenarios involving significant rigid transformations. Specifically, we augment the rotation around a specified axis by an angle randomly sampled from [a1, a2]°. - **Results with Augmented Z-axis Rotation on KITTI** (a transformation present in the training set): In outdoor settings, rotation around the Z-axis is the most common transformation. However, significant rotations as large as 180° are unseen by the model during training, which makes them challenging. Compared to the supervised FCGF, our method shows superior robustness against significant rigid transformations. Note that we follow the common practice of previous works and compute RRE and RTE only for pairs that are successfully registered. Therefore, RRE and RTE are not necessarily large even when RR% is very low. The results are as follows: **Ours**: ||[-15,15]°|[-45,45]°|[-90,90]°|[-180,180]°| |-|-|-|-|-| |RRE(°)|0.25|0.28|0.28|0.33| |RTE(m)|0.13|0.14|0.14|0.16| |RR(%)|99.1|99.1|98.6|79.6| FCGF (supervised) trained on KITTI: ||[-15,15]°|[-45,45]°|[-90,90]°|[-180,180]°| |-|-|-|-|-| |RRE(°)|0.36|0.38|0.39|0.44| |RTE(m)|0.46|0.42|0.43|0.25| |RR(%)|85.9|83.0|80.1|77.2| - **Results with Augmented X-axis Rotation on KITTI** (a transformation **not** present in the training set): In outdoor settings, rotation around the X-axis is almost absent from the dataset.
Such transformations are thus considered challenging for registration models, both in terms of generalization and robustness. In this case, we observe that our method outperforms the supervised FCGF by a very large margin (+18.9 RR% at most), demonstrating the superior generalizability of our method. The results are as follows: **Ours**: ||[-15,15]°|[-45,45]°|[-90,90]°|[-180,180]°| |-|-|-|-|-| |RRE(°)|0.49|0.51|0.84|1.28| |RTE(m)|0.15|0.28|0.25|0.30| |RR(%)|98.5|96.6|89.3|58.7| FCGF (supervised) trained on KITTI: ||[-15,15]°|[-45,45]°|[-90,90]°|[-180,180]°| |-|-|-|-|-| |RRE(°)|0.91|0.93|1.01|1.23| |RTE(m)|0.49|0.52|0.51|0.48| |RR(%)|87.3|80.1|74.8|39.8| Overall, these results indicate that our method **can effectively handle cases with significant rigid transformations**, even when such transformations **are not adequately represented in the training dataset**. We acknowledge the importance of handling **object-level** data, such as ModelNet40. However, we are unable to conduct experiments on object-level datasets due to the lack of point cloud **sequences** in these datasets. Our method relies on progressive training (see Sec. 3) and is designed to train with the point cloud **sequences** provided in outdoor datasets. We will consider this in our future work. --- **Q4:** *Insight/estimation on the cost of the pre-training stage.* **A:** For the KITTI dataset, we train the teacher for 30 epochs in the pretraining stage, which takes roughly the same time as teacher-student training. However, the time cost of a single epoch is longer because there are more point cloud pairs (we generate a synthetic pair for **every** frame in the training set). We will investigate the necessity of this and devise more efficient sampling strategies in our future work. --- **Q5:** *Minor points: L161, missing space.* **A:** Thanks for pointing it out. We will correct it in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for the reply, I do not have further questions.
--- Reply to Comment 1.1.1: Title: Thanks Comment: Thank you for taking the time and effort to read our response. We are more than happy to have addressed your questions! --- Rebuttal 2: Comment: Dear Reviewer FNCk, We sincerely thank you for your precious time and efforts in reviewing our paper. We greatly appreciate your insightful and detailed feedback, and we have carefully addressed the concerns in the rebuttal. Please let us know if any aspects remain unclear, and we are happy to provide further clarification. We are looking forward to your reply. Best regards, The authors of submission 1909
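As a side note on the augmentation protocol described in the rebuttal above (rotating point clouds about a single axis by an angle randomly sampled from [a1, a2]°), a generic sketch follows. This is an illustration of the stated protocol, not the authors' exact implementation:

```python
import numpy as np


def augment_rotation(points, axis="z", angle_range_deg=(-180.0, 180.0), rng=None):
    """Rotate an (N, 3) point cloud about the given axis by an angle drawn
    uniformly from [a1, a2] degrees. Returns the rotated points and the
    rotation matrix used."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.deg2rad(rng.uniform(*angle_range_deg))
    c, s = np.cos(theta), np.sin(theta)
    if axis == "z":
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    elif axis == "x":
        R = np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])
    else:
        raise ValueError(f"unsupported axis: {axis}")
    # Apply R to every point (row vectors, hence R.T on the right).
    return points @ R.T, R
```

A Z-axis rotation leaves the z-coordinate of every point unchanged, matching the "most common transformation" case in the Z-axis experiments above; the X-axis variant produces the out-of-distribution rotations discussed in the second set of results.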
Summary: This paper focuses on unsupervised point cloud registration in 3D computer vision. To tackle the problem, it leverages the observation that, in the feature space, points of latent new inlier correspondences tend to cluster around respective positive anchors. Based on that, this paper proposes a novel unsupervised registration method termed INTEGER to incorporate high-level contextual information for reliable pseudo-label mining. Extensive experiments were conducted on outdoor benchmarks including KITTI and nuScenes, which demonstrates the effectiveness of the proposed method. Strengths: 1. This paper is well-written and well-organized; 2. The observation w.r.t. the new inliers and positive anchors seems interesting and correct; 3. The use of teacher-student networks is rational, and the design of Feature-Geometry Coherence Mining is novel. 4. The experimental results are strong on outdoor scenes. Weaknesses: 1. There are some problems with the figures. For example, Fig. 1 is not very easy to understand. As a teaser, it does not demonstrate the core idea of this paper very clearly. Fig. 2 and Fig. 3 include too much content, which makes it hard to focus. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. In Tab. 1, why are there fewer baseline methods on nuScenes compared to those on KITTI, as well as in the generalization test from KITTI to nuScenes? 2. Although the title indicates that this paper focuses on outdoor scenes, the proposed method could also work on indoor scenes. Why are indoor scenes excluded? Could some experiments be added to further demonstrate the superiority of the proposed method? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Limitations have been discussed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would first like to thank the reviewer for the valuable comments. **W1:** *Problems with the figures. For example, Fig. 1 is not very easy to understand. Fig. 2 and Fig. 3 include too much content and make it hard to focus.* **A:** Thanks for pointing out the figure issues. A simplified version of Fig. 2 is shown in **the PDF attachment of the general response**. In Fig. 1, we visualize the **feature space** by reducing the feature dimension to 2 with t-SNE. Here, we provide an explanation: - **Contour lines and the colors in between** denote **the feature-space distance from a point to its nearest anchor**. - **The "o" and "x" markers** denote points forming inlier and outlier correspondences, respectively. - From Fig. 1, we can observe that **almost all points close to positive anchors in the feature space belong to inlier correspondences**. This observation conforms to our key insight. We will further improve the clarity of these figures in the revised version. Thank you for bringing this to our attention. --- **Q2:** *Reasons for fewer baseline methods presented on nuScenes compared to those on KITTI, as well as in the generalization test from KITTI to nuScenes.* **A:** We excluded some supervised baselines (e.g., GeoTransformer, CoFiNet, and Predator) from the experimental results on nuScenes and the generalization tests for two main reasons: - We aimed to align our experimental setup with previous research, such as EYOC, which similarly omits these baselines. - The omitted baselines are included in KITTI's results to demonstrate that ***state-of-the-art supervised methods fail when the overlap ratio is very low and the density variation is huge.*** Since KITTI and nuScenes are both autonomous driving datasets and share similar challenges, we believe including these baselines may not significantly add to the completeness of our experiments.
We recognize the value of a comprehensive evaluation and have provided additional baselines in **part 1 of the general response**. Overall, these supervised SOTA methods still fail to perform on par with their unsupervised counterparts on distant point cloud pairs. These experimental results will also be added to the revised version of our paper for clarity. --- **Q3:** *Why are indoor scenes excluded? Could additional experiments further demonstrate the superiority of the proposed method?* **A:** We chose to focus on outdoor scenes for two reasons: (1) A rich set of unsupervised indoor registration methods have been proposed, and their performance is already on par with supervised indoor methods. Still, they **fail to tackle outdoor challenges** due to their reliance on RGB-D data, high-overlap assumptions, and LiDAR data's sparsity in outdoor scenes. We aimed to extend the success of unsupervised methods to outdoor scenes, explicitly addressing the unique challenges of these scenarios, such as the varying point density of the same object in different scans, as often encountered in autonomous driving. Given the very distinct applications and sensor characteristics (RGB-D cameras vs. LiDARs), which may hinder a "unified method" from excelling in both indoor and outdoor environments, we only present results on outdoor datasets in our paper. (2) Our method relies on progressive training (see Sec. 3) and is designed to train with the point cloud **sequences** that are commonly provided in outdoor autonomous driving datasets. We acknowledge the significance of indoor scenes and have conducted additional experiments to explore our method's potential in these environments. Specifically, we tested generalization from the outdoor KITTI dataset to the indoor 3DMatch/3DLoMatch dataset, demonstrating that our framework, despite being unsupervised, outperforms the supervised FCGF by a significant margin (26.4% RR vs. FCGF's 19.7% RR).
Please refer to **part 2 of the general response** for details. --- Rebuttal 2: Comment: Dear Reviewer Bwh6, We sincerely thank you for your precious time and efforts in reviewing our paper. We greatly appreciate your insightful and detailed feedback, and we have carefully addressed the concerns in the rebuttal. Please let us know if any aspects remain unclear, and we are happy to provide further clarification. We are looking forward to your reply. Best regards, The authors of submission 1909 --- Rebuttal Comment 2.1: Comment: Thanks for answering my questions! I am still leaning towards accepting this manuscript. --- Rebuttal 3: Title: Thanks Comment: Thank you for taking the time and effort to read our response. We are happy to have addressed your concerns, and we are more than grateful for your recommendation to accept.
Rebuttal 1: Rebuttal: We sincerely thank all reviewers for their careful, valuable, and insightful reviews. The reviewers acknowledge our work for its good writing (Bwh6, FNCk, 2VKD), novel design (Bwh6, SEpR), and competitive performance (Bwh6, FNCk, 2VKD). We are particularly delighted to see that all reviewers recognize the novelty and correctness of the insight and key observation behind our method. We have conducted additional experiments to address the comments, and the results are included in the PDF attachment. Below, we describe the new experiments and discuss the results that most reviewers requested. 1. **More Supervised Baselines.** We excluded some supervised baselines for the **nuScenes dataset** and the **generalizability test** because: - We aimed to align our experimental setup with previous work, such as EYOC, which similarly omits these baselines. - The omitted baselines are included in KITTI's results to show that ***SOTA supervised methods fail with very low overlap and huge density variation***. Since KITTI and nuScenes are both autonomous driving datasets and share similar challenges, we believe including these baselines may not significantly add to the completeness of our experiments. We have **included more** supervised baselines in **Table 1 of the PDF attachment**. Similar to KITTI's results, these supervised SOTA methods still fail to perform on par with their unsupervised counterparts on distant point cloud pairs (within [40,50]m; see Table 1). This highlights the need for an unsupervised outdoor registration method with strong generalizability to handle distant point cloud pairs, which is crucial for real-world applications. We will include these results in the revised version for clarity. --- 2. **Reasons for Outdoor Scenes.
Generalizability to Indoor Scenes.** In our paper, we chose to focus on outdoor scenes because (1) a rich set of unsupervised indoor registration methods have been proposed, and they already perform on par with supervised indoor methods. Still, they **fail to tackle outdoor challenges** due to their reliance on RGB-D data, high-overlap assumptions, and LiDAR data's sparsity in outdoor scenes. In this paper, we aimed to extend the success of unsupervised methods to outdoor scenes, explicitly addressing the unique challenges of these scenarios, such as the varying point density of the same object in different scans, as often encountered in autonomous driving. Given the very distinct applications and sensor characteristics (RGB-D cameras vs. LiDARs), which may hinder a "unified method" from excelling in both indoor and outdoor environments, we only present results on outdoor datasets in our paper. (2) Our method relies on progressive training (see Sec. 3) and is designed to train with the point cloud **sequences** that are commonly provided in outdoor autonomous driving datasets. However, we acknowledge the importance of indoor settings and provide results on **3DMatch/3DLoMatch** in **Table 3 of the PDF attachment**. Following SpinNet (CVPR 2021) and BUFFER (CVPR 2023), we report RR% for generalizability tests using weights trained on KITTI. --- 3. **ETH Dataset: Generalizability to Non-Urban Outdoor Environments.** We **further** demonstrate the scalability of INTEGER by evaluating it on the ETH dataset, an outdoor dataset with **rural and forest scenes**, which are primarily unstructured and complex, differing from the urban environments of the KITTI and nuScenes datasets. Because ETH is too small for training from scratch, we directly evaluate performance using weights trained on the KITTI dataset. The results are shown in **Table 2 of the PDF attachment**. Overall, our method surpasses supervised FCGF in RR% by a large margin (58.06% vs.
39.55%), demonstrating its superior generalizability. Moreover, our method also outperforms unsupervised EYOC by a considerable margin. --- 4. **Overlap w.r.t. Distance in KITTI and nuScenes.** Our evaluation follows **EYOC's existing protocol**, focusing on pairs within distances [d1,d2], consistent with our method's design for autonomous driving data. The differences between frames, **indicated by the distance between them**, are crucial for demonstrating robustness and generalizability in real-world applications like collaborative perception. **Fig. 2 in the PDF attachment** illustrates the overlap ratios w.r.t. the distance between frames in the KITTI and nuScenes datasets, indicating that **increasing distances reduce overlap ratios.** --- 5. **Training Cost and Computational Efficiency.** As an unsupervised training framework, INTEGER's training cost is highly dependent on the choice of registration network. For most experimental results with FCGF, the entire training takes ~76 hours. For the results with Predator in the Appendix, the training takes ~140 hours. The inference efficiency is the same as that of the registration network because our method does not introduce any **test-time** modules. We list the inference efficiency of FCGF and Predator on KITTI here for convenience: |Method|Time(s)| |-|-| |FCGF|0.16| |Predator|0.30| We will add these results to the revised version. --- 6. **Efficiency and Effectiveness of FCEM.** We have demonstrated the effectiveness of FCEM in Sec. 4.3, Table 3. In the ablation study, we validate the effectiveness with experiments on the two branches of FCEM. As mentioned in Limitations (Sec. A.1), our method is slightly slower at obtaining pseudo-labels than EYOC because of the proposed iterative method used in the FGC-Branch of the FCEM module. On average, our method takes ~12 minutes to *train* per epoch on KITTI (i.e., validation time is not included), while EYOC takes about ~9 minutes.
However, with only 33% more time cost on KITTI, our INTEGER produces **much more accurate** pseudo-labels with 28.1% higher IR (81.3% vs. EYOC's 53.2%) and achieves **considerable improvements** in RR on very distant pairs (54.2% vs. EYOC's 52.3%). Therefore, our FCEM is efficient in terms of **the trade-off between time and performance**. Pdf: /pdf/d41ffe0166c81dde782a04fc85834ffd803f048c.pdf
NeurIPS_2024_submissions_huggingface
2024
Honor Among Bandits: No-Regret Learning for Online Fair Division
Accept (spotlight)
Summary: The paper studies online fair division of indivisible items, where items arrive one at a time, and each item must be permanently assigned to a player upon arrival. Player utilities are initially unknown and must be learned via bandit-style feedback: there are a finite number of types of goods, and each player's value for a good of type k is drawn independently from a distribution D_k. The authors aim to maximize utilitarian welfare (the sum of utilities) while satisfying a hard constraint of either envy-freeness in expectation (EFE) or proportionality in expectation (PE). EFE and PE are evaluated with respect to the randomized allocation on each time step. That is, before observing the type of the new item, the learner proposes a randomized allocation X_t that determines how the item will be allocated for each possible type. Each X_t is required to satisfy the fairness constraint (EFE or PE); the constraint is not imposed on the cumulative realized allocation. I initially found this rather unintuitive, but the authors effectively justify this choice by showing that: 1. Any EFE algorithm achieves realized cumulative envy at most $\sqrt{T} \log(T)$, while the optimum is at least $\sqrt{T}$. 2. EFE at each time step does not cause any welfare loss compared to only requiring EFE at the end. The main result is a simple explore-then-commit algorithm which is EFE or PE while achieving regret $\tilde{O}(T^{2/3})$ with respect to welfare. Along the way, the authors prove that the EFE and PE constraints satisfy the following property: either all players receive the same allocation, or the constraint can be satisfied with slack. This novel result formalizes the intuition that the "hardest case" for fair division is when all agents have the same utilities. EFE/PE can trivially be satisfied in that case by the uniform allocation, and in all other cases, EFE/PE can be satisfied with slack. Strengths: The presentation is clear and accessible.
Although some of the modeling choices felt initially unintuitive, the authors did an excellent job of justifying those choices, as discussed above. The main result is satisfying, both in terms of the simple explore-then-commit algorithm and the regret bound. Essentially, the authors successfully solved the problem they chose to study. I found the either-uniform-allocation-or-slack property (and the result that EFE and PE satisfy it) to be quite interesting as well. This result captures an important intuition in a simple way. The authors argue that this property may be of independent interest, and I agree. Weaknesses: Although I think the problem is reasonable and meshes well with prior work, I question whether the mathematical model has any real-world relevance. The authors use a running example of a food bank with multiple branches, with the idea that unpredictable perishable donations must be allocated immediately and fairly among the branches. I have several problems with this analogy: 1. Surely food banks wish to allocate according to need without worrying that one branch might “envy” another if the second branch simply has higher need? 2. The paper assumes additive utility, and each branch’s utility would be extremely non-additive (no one needs infinite cereal). 3. It’s hard to imagine the donations being so unpredictable. To be clear: I’m not claiming that a mathematical model must perfectly match reality in order to provide real-world insights. But this paper’s model seems only tenuously connected to the real-world scenario that is used as motivation. If the authors have further justification for the food bank analogy, I would be interested to hear it. To me, the most compelling story in this paper actually centers on the either-uniform-allocation-or-slack property. The authors establish this intuitive but nontrivial property, and demonstrate how it can be applied in a bandit context. 
I can easily imagine this property being useful for other fair division problems that might have more real-world relevance. This is more minor, but I think the citation below is quite relevant and should be discussed along with Benadè et al. Hakuei Yamada, Junpei Komiyama, Kenshi Abe, and Atsushi Iwasaki. Learning fair division from bandit feedback. International Conference on Artificial Intelligence and Statistics, 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: Do you agree with my criticisms of the food bank analogy? Is there a different real-world situation that you believe is captured by your mathematical model? More broadly, do you believe that your mathematical model provides real-world insights, and if so, what are they? Below is feedback that you're welcome to take or leave. I don't expect responses, even though some are phrased as questions. 1. I would suggest naming Property 2 so that I don’t have to keep calling it the either-uniform-allocation-or-slack property :) 2. “This distribution is independent of both player i’s distributions for other item types and other players’ distributions.” Distributions can’t really be independent; I think this should state that samples from those distributions are independent. 3. “Intuitively, this is equivalent to assuming that players have non-zero values for all item types and that their values have a finite upper bound.” I don’t think this is quite right. Assuming non-zero values would just require mu_{ik}^* > 0. I would describe this assumption as saying that the distribution means are “well-conditioned”. 4. Definitions 1 and 2: mu_i^* is a vector, right? I don’t think this was ever explicitly defined. (Only mu^* and mu_{ik}^* were, I think.) The definitions are also missing “for all i”. 5. “proof” should be capitalized at the start of proofs 6. Why not just use gamma_0 in Property 2? Why have for all gamma < gamma_0? 7. Is it intentional for the “2” in C_{P2} to be a hyperlink to property 2? 8. 
Consider mentioning the actual value of (or a bound on) gamma_0 for each property? 9. For readability in Algorithm 1, consider writing indicator variables as \mathbb{1}(stuff) instead of \mathbb{1}_{stuff}. 10. The notation “\mu in \hat{\mu} \pm \varepsilon” is non-standard to my knowledge. Consider “\mu in [\hat{\mu} - \varepsilon, \hat{\mu} + \varepsilon]” 11. I found lines 154-178 to be quite compelling and also quite important. Consider making this its own subsection? And maybe even including some of the Theorem statements from Appendix B? 12. I really like lines 323-330. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors thoroughly discussed the technical limitations of their work, but did not discuss what I see as the greatest limitation: the weak motivation of the problem and the potential lack of real-world insights. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for all of your helpful feedback! We will make sure to incorporate your suggestions into the final version. Below, we address your questions about the motivating example. > Do you agree with my criticisms of the food bank analogy? Is there a different real-world situation that you believe is captured by your mathematical model? More broadly, do you believe that your mathematical model provides real-world insights, and if so, what are they? While this paper is primarily a theoretical contribution, we do believe that the food bank analogy is a reasonable application, and we would like to provide some further justification for this use case. In online fair division, the food bank redistribution problem is a canonical example that has been studied extensively theoretically and in practice. In fact, a recent work [1] describes a partnership with a program in Indiana that redistributes "rejected truckloads of food". The program, known as "Food Drop", allocates 10,000+ lbs of rejected food per month to food banks. As in our setting, the rejected food is an online and unpredictable process, and therefore this application requires an online algorithm. [1] focuses on practical algorithms that allocate the food fairly among the different food banks, and they use envy-freeness as the fairness metric. [1] does not discuss how to incorporate feedback/learning into the fair algorithms, which is one of our main contributions. > Surely food banks wish to allocate according to need without worrying that one branch might “envy” another if the second branch simply has higher need? In the aforementioned application of online fair division (which explicitly aims for envy-freeness), there are many locations that all have relatively equal need. Similarly, we are assuming that all players (food banks) start with the same amount of need, which can for example be achieved by first normalizing by number of customers. 
With that in mind, envy-freeness can be seen as preventing situations where the needs of a food bank aren't met when it was possible to meet them. For example, if one food bank needs tomatoes and another doesn't, allocating the tomatoes to the former food bank wouldn't create envy, whereas allocating them to the latter (thereby not meeting the needs of the former) would. > The paper assumes additive utility, and each branch’s utility would be extremely non-additive (no one needs infinite cereal). This is an excellent point. Additive utility is a standard assumption for online fair division due to the difficulty of studying non-linear utilities; unfortunately, even in online fair division without learning, there is not much previous work analyzing non-additive utilities. Studying non-additive utilities in online fair division would certainly be an interesting problem. That being said, if donations are delivered at most once a week and all of the donations are used between deliveries, then additive utilities may actually be a reasonable assumption. > It’s hard to imagine the donations being so unpredictable. We expect that donations to food banks are pretty random; for example, grocery store donations depend on what is left at the end of the day, which is highly variable. However, there also exists an equivalent way to phrase our problem that does not involve unpredictable donations. Instead of receiving an item of a random type at each time as in our paper, we instead receive a (non-random) bundle of items at each time that has an equal amount of all item types. For example, instead of there being 1/2 chance of receiving a pound of apples and 1/2 chance of receiving a pound of oranges, we could instead have 100% chance of receiving 1 pound of apples and 1 pound of oranges, which can be allocated to the same or different players. This formulation is actually equivalent to our problem due to the linearity of expectation, and the exact same algorithms and results hold.
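The tomato example above can be made concrete with a small envy-freeness check under additive valuations (an illustrative sketch with made-up numbers, not code from the paper):

```python
# Envy-freeness check for additive valuations (illustrative sketch).
# values[i][k]: player i's value for item type k;
# alloc[i][k]: quantity of type k allocated to player i.

def utility(values_i, bundle):
    # additive utility of a bundle under player i's values
    return sum(v * q for v, q in zip(values_i, bundle))

def is_envy_free(values, alloc):
    # every player weakly prefers their own bundle to everyone else's
    n = len(values)
    return all(utility(values[i], alloc[i]) >= utility(values[i], alloc[j])
               for i in range(n) for j in range(n))

# Two food banks, one item type ("tomatoes"): bank 0 needs them, bank 1 doesn't.
values = [[1.0], [0.0]]
print(is_envy_free(values, [[1.0], [0.0]]))  # True: tomatoes go where needed
print(is_envy_free(values, [[0.0], [1.0]]))  # False: bank 0 envies bank 1
```

Allocating the tomatoes to the bank that does not need them is exactly the situation the rebuttal describes: the needy bank envies the other bank's bundle.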
We hope the above discussion helped clarify your concerns with the food bank motivating example. If you found this discussion helpful, we would also be happy to include more discussion of the application in the final paper. [1] Marios Mertzanidis, Alexandros Psomas, and Paritosh Verma. "Automating Food Drop: The Power of Two Choices for Dynamic and Fair Food Allocation." arXiv preprint arXiv:2406.06363 (2024). --- Rebuttal Comment 1.1: Comment: I mostly find the authors' response compelling, and I will raise my score by 1 point under the assumption that the further justification for their mathematical model is incorporated into the final version of the paper. The most compelling point in my opinion is the existence of a real application that matches this mathematical model, and I would recommend the authors discuss Food Drop explicitly in the final version. I will briefly mention that I find this aspect of their response unconvincing > Additive utility is a standard assumption for online fair division due to the difficulty of studying non-linear utilities The fact that the assumption is standard doesn't make it realistic. And the fact that non-linear utilities are challenging to analyze doesn't make the actual utilities additive. --- Reply to Comment 1.1.1: Comment: Your response is greatly appreciated. We're happy to follow your suggestion and discuss Food Drop. > The fact that the assumption is standard doesn't make it realistic. And the fact that non-linear utilities are challenging to analyze doesn't make the actual utilities additive. This is well taken. Not to belabor the point, but perhaps the argument we briefly mentioned about the frequency of allocations is more compelling. For example, in the food allocation data of Lee et al. (2019), there were a total of 1760 donations from 169 donors over the course of five months, and 277 organizations received donations. 
This means that organizations receive donations every 3-4 weeks on average, suggesting that successive donations are spread out enough for additive utilities to be a reasonable approximation. Min Kyung Lee, Daniel Kusbit, Anson Kahng, Ji Tae Kim, Xinran Yuan, Allissa Chan, Daniel See, Ritesh Noothigattu, Siheon Lee, Alexandros Psomas, Ariel D. Procaccia: WeBuildAI: Participatory Framework for Algorithmic Governance. Proc. ACM Hum. Comput. Interact. 3(CSCW): 181:1-181:35 (2019)
Summary: The authors study an online fair division problem where items arrive online and need to be assigned to agents to maximize welfare while satisfying one of envy-freeness or proportionality. The novel consideration in their model is that the item valuations are drawn from an unknown distribution. They approach this as both a learning problem as well as an allocation problem. They provide a simple algorithm with small regret compared to an algorithm that knew the distributions beforehand. They also prove several structural results regarding their model. Strengths: The model is quite interesting, as are their results. The proofs are involved, and might be of independent interest. Weaknesses: N/A Technical Quality: 3 Clarity: 4 Questions for Authors: Is it possible to achieve non-trivial regret in the setting where the valuations of the agents are not independent? In the motivating example of the food pantry, if a delivered food item is of bad quality, the agents might all lower their valuation of this item together. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: > Is it possible to achieve non-trivial regret in the setting where the valuations of the agents are not independent? In the motivating example of the food pantry, if a delivered food item is of bad quality, the agents might all lower their valuation of this item together. This is definitely an interesting extension. By linearity of expectation, we would expect that correlation between arms does not change the regret of the algorithm. Similarly, because the fairness constraints are linear, correlation between players would probably not affect the envy-freeness in expectation constraints. However, the correlated arms could potentially allow the algorithm to learn faster (have lower regret) if the algorithm can extract information from the correlation.
Summary: In this work, the authors consider the problem of maximising the total utility in online fair allocation settings, where: (i) T items having types in set [m] arrive online in T distinct rounds; (ii) the value $V_{i}(t)$ that each agent $i$ assigns to each item $t\in [T]$, when its type is $k$, is independently drawn from a sub-Gaussian distribution with unknown mean $\mu_{i,k}^*$; (iii) for each item that arrives online, its type is known, and based on this information and previous observations, it must be irrevocably assigned to some agent; (iv) the value $V_{i}(t)$ of each item $t$ can only be observed after it has been assigned, and this information allows the estimation of the unknown probability distributions as $t = 1, \ldots, T$ varies. The authors design an Explore-Then-Commit algorithm that maximises the total utility of the various players over $T$ rounds, with a regret of the order $\tilde{O}(T^{2/3})$, where the maximisation is constrained by achieving fairness properties (envy-freeness or proportionality) in expected value, with high probability. Strengths: -This paper introduces an interesting setting of online fair allocation, where techniques derived from bandit algorithms are applied to the case where valuations are distributed according to unknown distributions. -The approaches considered in Lemmas 1 and 2, which characterise fairness properties robust to small uncertainties in the underlying distribution, are interesting and novel. -The paper certainly required a considerable technical effort. Weaknesses: I believe that the results could be written and presented more effectively (see the major comments below). Another minor criticism is that, although the results provided in Lemmas 1 and 2 are quite novel and there has been a considerable technical effort in writing the complete version, the results obtained are somewhat expected within the field of bandit algorithms and follow standard approaches. 
Major comments: -It should be highlighted from the beginning that the allocations returned by the algorithm are envy-free in expectation, but with high probability (that is, not always). -The matrix notation could sometimes be avoided in favour of explicitly listing the various constraints. -It would be clearer to present the main algorithm right from the start and explain informally how the various properties come into play. -The fact that the presented LP has infinite constraints is mentioned in the conclusion. I suggest providing more details in the technical section. Minor comments: -The acronyms EFE and PE are not defined (though clear from the context). -Remark 1: specify that the rows different from i and i' are composed of zeros only. -Claims of Lemma 1 and Lemma 2: satisfy -> satisfies. Technical Quality: 3 Clarity: 3 Questions for Authors: No questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
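The explore-then-commit pattern described in this review can be sketched generically as follows (an illustrative toy version: the paper's actual algorithm commits to an allocation computed from a fairness-constrained LP, which is not reproduced here; `sample_value` and all numbers are hypothetical):

```python
# Generic explore-then-commit mean estimation (illustrative sketch).
import random

def estimate_means(n_players, n_types, explore_rounds, sample_value):
    # Exploration phase: allocate arriving items round-robin and record
    # the observed values; a commit phase would then use these estimates.
    totals = [[0.0] * n_types for _ in range(n_players)]
    counts = [[0] * n_types for _ in range(n_players)]
    for t in range(explore_rounds):
        k = random.randrange(n_types)   # an item of a random type arrives
        i = t % n_players               # round-robin allocation
        totals[i][k] += sample_value(i, k)
        counts[i][k] += 1
    return [[totals[i][k] / max(counts[i][k], 1) for k in range(n_types)]
            for i in range(n_players)]

random.seed(0)
true_mu = [[2.0, 3.0], [1.0, 1.0]]
mu_hat = estimate_means(
    2, 2, explore_rounds=2000,
    sample_value=lambda i, k: true_mu[i][k] + random.gauss(0, 0.1))
print(mu_hat)  # close to true_mu
```

Balancing the length of the exploration phase against the estimation error is what drives the T^{2/3} regret rate discussed in the reviews.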
Rebuttal 1: Rebuttal: > I believe that the results could be written and presented more effectively (see the major comments below). [...] -It should be highlighted from the beginning that the allocations returned by the algorithm are envy-free in expectation, but with high probability (that is, not always). -The matrix notation could sometimes be avoided in favour of explicitly listing the various constraints. -It would be clearer to present the main algorithm right from the start and explain informally how the various properties come into play. -The fact that the presented LP has infinite constraints is mentioned in the conclusion. I suggest providing more details in the technical section. Thank you for your helpful comments! We will try to incorporate your presentational comments into the final version of the paper to make the exposition clearer. We will make sure to explicitly clarify that the algorithms are envy-free with high probability (in fact, no algorithm can do better than that and still have sub-linear regret). We will also add more discussion of the LP with infinite constraints near the algorithm and make it clear that it is still possible to find the solution in polynomial time. --- Rebuttal Comment 1.1: Title: Rebuttal Comment: I have read the authors' response carefully, and I am satisfied with their reply. I strongly believe that the rationale behind the considered setting will be even more exhaustive in the revised version. Moreover, although the case of additive functions is more restrictive compared to general monotone functions, I do not think this is an issue, as this setting is one of the most studied in the field and is well established in the literature. However, I believe that the interest in this setting should not be justified by the fact that the more general setting is difficult or not possible in online learning. 
Finally, I think that the tight lower bound presented by the authors deserves to be mentioned in the new version (including the complete proof in the appendix), as it further justifies the approach obtained for the upper bound. In light of the above comments, I will increase my score by one point. --- Reply to Comment 1.1.1: Comment: Thank you very much for your comment. We appreciate your take-aways from the entire discussion and are happy to follow your suggestions. > In light of the above comments, I will increase my score by one point. Just in case this somehow fell through the cracks, we believe the change (from 6 to 7) isn't reflected in your current score.
Rebuttal 1: Rebuttal: We would like to thank all the reviewers for their time and comments! We respond to specific comments in the individual rebuttals below. Shortly after submitting our paper, we found a simple lower bound which shows that for our setting, no algorithm can do better than the ones we presented. We plan to include this in the revision and we believe the new result will strengthen the paper. We include a brief discussion here in case the reviewers are interested, but please feel free to ignore this section if not. A weakness of the current paper (as acknowledged in the discussion section) is that perhaps a more sophisticated algorithm such as UCB or Thompson Sampling would be able to achieve a better regret of $\tilde{O}(\sqrt{T})$. However, we are actually able to show that $\tilde{O}(T^{2/3})$ is the best that any algorithm can do with a very simple example, given below. The example and informal proof are as follows. Suppose there are two item types and two players. In this case envy-freeness and proportionality are equivalent, and therefore we will focus on the former. Consider the following two value matrices (with rows = players, columns = item types): $\mu_1 = \begin{bmatrix} 2 & 3 \\ 1 & 1 \end{bmatrix}$, $\mu_2 = \begin{bmatrix} 2 & 3 \\ 1 & 1 + T^{-1/3} \end{bmatrix}$. We can show that no algorithm can with probability $1-1/T$ achieve regret of less than $\tilde{\Omega}(T^{2/3})$ and satisfy envy-freeness in expectation for both of these distributions. First, note that the expected social welfare maximizing envy-free allocation for $\mu_1$ is to give all items of type 1 to player 2 and all items of type 2 to player 1. On the other hand, any envy-free allocation for $\mu_2$ must give a $\tilde{\Omega}(T^{-1/3})$ fraction of items of type 2 to player 2.
This implies that if an algorithm is unable to distinguish between $\mu_1$ and $\mu_2$, then either the regret will be $\tilde{\Omega}(T^{2/3})$ for $\mu_1$ or the algorithm will not be envy-free for $\mu_2$. Therefore, any algorithm that has regret of less than $\tilde{\Omega}(T^{2/3})$ and satisfies envy-freeness for both $\mu_1$ and $\mu_2$ must distinguish between $\mu_1$ and $\mu_2$. The only way to do this is to allocate at least $\tilde{\Omega}(T^{2/3})$ items of type $2$ to player 2. However, this will result in regret under $\mu_1$ of $\tilde{\Omega}(T^{2/3})$. A more rigorous version of this argument, using standard techniques for lower bounds in multi-armed bandit problems, will be included in the final paper.
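The two-matrix example can also be checked numerically. In this illustrative sketch (not the paper's code), `eps` stands in for T^{-1/3} and `x[i][k]` is the fraction of type-k items given to player i:

```python
# Numeric check of the lower-bound example (illustrative sketch).
# Each item type arrives with probability 1/2, so expected utilities are
# (1/2) * sum_k mu[i][k] * x[.][k] per round.

def exp_utility(mu_i, bundle):
    return 0.5 * sum(m * f for m, f in zip(mu_i, bundle))

def envy_free(mu, x, tol=1e-12):
    # every player weakly prefers their own fractional bundle
    n = len(mu)
    return all(exp_utility(mu[i], x[i]) >= exp_utility(mu[i], x[j]) - tol
               for i in range(n) for j in range(n))

eps = 0.1  # stands in for T**(-1/3)
mu1 = [[2, 3], [1, 1]]
mu2 = [[2, 3], [1, 1 + eps]]
x = [[0, 1], [1, 0]]  # type 1 -> player 2, type 2 -> player 1

print(envy_free(mu1, x))  # True: the welfare-maximizing EF allocation for mu1
print(envy_free(mu2, x))  # False: player 2 now strictly prefers player 1's bundle
```

The failure under `mu2` is exactly the point of the example: once player 2's value for type 2 is nudged up by eps, some of that type must be redirected to player 2 to restore envy-freeness.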
NeurIPS_2024_submissions_huggingface
2024
Barely Random Algorithms and Collective Metrical Task Systems
Accept (spotlight)
Summary: This paper introduces and studies the problem of designing and analyzing randomized algorithms for Metrical Task Systems (MTS) using only limited randomness, that is, a number of random bits that depends only on $n$ (the number of states in the MTS), rather than on the length of the sequence, as in the case of the known SOTA randomized algorithms. The main result of the paper shows that $2\log n$ random bits suffice to achieve the same competitive ratio as the best fully randomized algorithms, up to a factor of 2. This is near-optimal, as shown, in the sense that $\log n-O(\log \log n)$ bits are necessary to this end. The barely-random setting with $k$ random bits also directly yields a solution to a {\em collective} MTS setting, in which, informally, $k$ algorithms are run in parallel, and the cost paid is the average of the cost of the $k$ algorithms. This collective setting is motivated by similar-in-spirit works on graph exploration in distributed settings, with $k$ parallel agents. The approach builds on a fractional formulation of the online problem, and techniques from optimal transport theory, such as the Birkhoff-von Neumann theorem, instead of the standard tree embedding techniques. Strengths: The paper is very well written. All results are rigorously proven, but there is also a lot of intuition and high-level guiding of the reader. The solution is non-trivial and builds on fairly complex ideas. Furthermore, even though the work is theoretical, it is connected to concrete applications. The idea of using concepts from optimal transport theory looks novel, although I am not familiar with its application to other online problems via competitive analysis. Weaknesses: There is no experimental analysis. It is understandable that the work is mainly theoretical, but one suggestion would be to expand on the application discussed in Remark 2.3 and include an experimental evaluation of this problem.
Technical Quality: 4 Clarity: 4 Questions for Authors: Is the use of optimal transport theory novel in the competitive analysis of MTS problems? For the setting of Remark 2.3, is this application known in the literature, or do you define it? In Section 3 it would be good to have the statement of the algorithm in rough pseudocode. line 400: Can you expand on these similarities with descent methods? Other comments: lines 141-148: please add appropriate citations throughout. I assume the reason behind the proof of Proposition 2.1 is to give intuition about the more complex setting? If yes, please make this clear. line 203: typo "achieves". line 280: This is the potential function, but it is hidden; please make it more prominent. Confidence: 3 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: No issues with regard to limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
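As a side note on the Birkhoff-von Neumann theorem mentioned in this review: for small n, the decomposition of a doubly stochastic matrix into a convex combination of permutation matrices can be computed by brute force. The sketch below is purely illustrative (it enumerates all n! permutations and is not the paper's method):

```python
# Brute-force Birkhoff-von Neumann decomposition (illustrative, small n only).
from itertools import permutations

def birkhoff_decompose(M, tol=1e-9):
    n = len(M)
    R = [row[:] for row in M]  # residual mass still to be decomposed
    terms = []                 # list of (weight, permutation) pairs
    while True:
        # pick the permutation whose smallest supported entry is largest
        p = max(permutations(range(n)),
                key=lambda q: min(R[i][q[i]] for i in range(n)))
        w = min(R[i][p[i]] for i in range(n))
        if w <= tol:
            break
        terms.append((w, p))
        for i in range(n):
            R[i][p[i]] -= w
    return terms

M = [[0.5, 0.5, 0.0],
     [0.25, 0.25, 0.5],
     [0.25, 0.25, 0.5]]
terms = birkhoff_decompose(M)
print(sum(w for w, _ in terms))  # 1.0: the weights form a convex combination
```

Each iteration zeroes at least one residual entry, so the loop ends after at most n^2 rounds; the residual stays a scaled doubly stochastic matrix, which is why a supported permutation always exists.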
Rebuttal 1: Rebuttal: We thank the Reviewer for their interest and their thoughtful questions. * To the best of our knowledge, the use of the Birkhoff-von Neumann theorem in the context of competitive analysis is new. We build upon this theorem in Proposition 2.2 of Section 2.2. As the reviewer points out, the purpose of Proposition 2.1 (which is well-known in the literature) is just to give more intuition on the more complex setting. We will highlight this in the revised version of the paper. The other key novel technical part of our analysis is in Section 3, where we develop a first-order method that displays a *hysteresis phenomenon* to track a fully fractional configuration with a barely fractional configuration. This shows the existence of $O(\log^2 n)$-competitive barely fractional configurations, which in turn has implications for randomized algorithms, collective algorithms and advice complexity. * The application suggested in Remark 2.3 is quite natural: since our algorithm only moves significant amounts of mass, it is not sensitive to (small) fixed transaction costs; this comes in contrast with fully fractional algorithms, because they move arbitrarily small amounts of mass. We believe that this application is also new and was out of reach for previous methods. * As suggested by the reviewer, we will add a (very short) pseudocode to highlight the simplicity of our method. * Section 3 is reminiscent of offline first order optimization in many ways. Gradient descent of a function $f(\cdot)$ is also known as the explicit Euler method, where one can write the $(t+1)$-th iterate of the method as the minimizer of the sum of the first order approximation of $f(\cdot)$ at the previous iterate and a quadratic cost, i.e. \begin{align} x(t+1) = \arg\min_x \nabla f(x(t))(x-x(t)) + \|x-x(t)\|^2. \end{align} A very closely related method is the implicit Euler method, where the $(t+1)$-th iterate is the minimizer of the sum of $f(\cdot)$ and a quadratic cost, \begin{align} x(t+1) = \arg\min_x f(x) + \|x-x(t)\|^2, \end{align} which is also known as the proximal operator. Our Equation (2) is reminiscent of this proximal operator, which has important stability properties. One key difference is that we replace the squared norm by an optimal transport cost, and that the offline objective function $f(\cdot)$ is replaced by a potential which is defined online. Note that quite recently, the notion of Wasserstein proximal operator has also appeared as a central object of study in the context of diffusion and partial differential equations (see e.g. the review of Santambrogio on Gradient Flows). A key difference with our Equation (2) is that we consider the Wasserstein-1 metric (a.k.a. the earthmover distance) and not the Wasserstein-2 metric. We will add a discussion on the reviewer's question in the final version of the paper. * If the reviewer is satisfied with the novelty of our method, and with the proposed revisions, we kindly encourage them to positively reassess their rating of the submission. * We will correct all typos, put more emphasis on the potential function and add citations in lines 141-148. --- Rebuttal Comment 1.1: Comment: Dear authors, thank you for your helpful response. I would suggest including your comment on Section 3 in the paper, if possible.
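The comparison between the explicit and implicit Euler steps can be illustrated in one dimension, where the proximal step has a closed form. This is a hypothetical toy example for a quadratic f (the paper's operator uses a Wasserstein-1 transport cost instead of the squared norm):

```python
# Explicit vs. implicit Euler step for f(y) = (a/2) * y**2 (toy illustration).

def implicit_step(x, a):
    # argmin_y (a/2)*y**2 + (y - x)**2; first-order condition a*y + 2*(y - x) = 0
    return 2 * x / (a + 2)

def explicit_step(x, a):
    # gradient step with the matching step size 1/2: x - (1/2) * f'(x)
    return x - 0.5 * a * x

x, a = 1.0, 6.0
print(implicit_step(x, a))  # 0.25: contracts toward the minimizer for any a >= 0
print(explicit_step(x, a))  # -2.0: overshoots; the explicit step diverges once a > 4
```

The contraction factor of the implicit step is 2/(a + 2), which lies in (0, 1] for every a >= 0; this is the stability property of the proximal operator alluded to above.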
Summary: The current paper addresses the problem of metrical task systems on a general space with $n$ points and proposes a technique to reduce the amount of randomness used by any random algorithm. More precisely, the authors prove that any "fully random" algorithm, which uses an unlimited number of random bits, can be transformed into a "barely random" algorithm that uses only $\lceil 2 \log n \rceil$ random bits, with a competitive ratio at most twice as large. An immediate consequence of this result, using previous works, is the existence of an $O((\log n)^2)$-competitive algorithm that uses only $\lceil 2 \log n \rceil$ random bits. The authors also show that any $O((\log n)^2)$-competitive algorithm requires at least $\log n - O(\log \log n)$ random bits, essentially proving the tightness of their main result. Strengths: * The results of the paper are interesting and may inspire future work in similar directions. * Viewing the problem as a collaborative MTS with $k$ deterministic agents provides better insights and understanding of the problem. * The paper introduces new analysis techniques that could have broader applications. Weaknesses: I don't see any major weaknesses in the presented results or the proofs. However, I find that the paper is not a good fit for NeurIPS at all. A conference or journal focused on theoretical computer science would be a much better venue for this work and would provide it with greater visibility among researchers who are more likely to benefit from it. Technical Quality: 4 Clarity: 3 Questions for Authors: Do the authors think that some arguments and techniques from the paper can be used to prove similar results on barely random algorithms in other problems in competitive analysis? Confidence: 3 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: The assumptions of the theoretical results are clearly stated. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their interest and their careful reading. We note that the reviewer's appreciation of our results seems very positive, and that their only concern is whether the paper is a good fit for NeurIPS. We attempt to convince the reviewer that this is indeed the case, in the hope that they would increase their rating in a way that reflects their appreciation of the paper's inherent quality. While metrical task systems (MTS) is a problem that originated in the 1990s from the community of theoretical computer science (TCS), we observe that in the last 5 years the problem has gained strong interest in the machine learning community, with many contributions on *learning-augmented algorithms* [1,17,2,18,3] and on *concrete applications* of MTS [37, 25]. Most of these papers were published at NeurIPS. We note that our work has a flavor that is very similar to *learning-augmented algorithms*. Learning-augmented algorithms use predictions coming from a black-box algorithm, in the hope of improving on the performance of competitive algorithms when predictions are accurate. In a similar fashion, our method tracks a fully fractional configuration seen as coming from a black-box algorithm, and makes it barely fractional in a competitive way. We therefore expect our methods could be relevant to researchers working on learning-augmented algorithms, for instance when they need to interpolate smoothly between black-box configurations coming from one or many predictors. Our work is also motivated by *concrete applications*. The method we propose is very easy to implement and can be applied to concrete problems such as dynamic power management [2], distributed systems [30,21] or asset management (Remark 2.3). For a very immediate example of an application of our work to a power management problem, we encourage the reviewer to read the second item of our response to **Reviewer stvh**.
The result on the existence of a barely random algorithm for MTS indeed has a theoretical flavor. But it is only one of the consequences of our analysis. Our results also imply, for example, that a deterministic (finite) team of agents can be competitive in a way no single agent can be – which echoes the study of intelligence in multi-agent systems. A totally different interpretation of our results is in advice complexity (Corollary 1.4), a subject closely connected to learning-augmented algorithms. For more details on this aspect of the paper, we encourage the reviewer to read the first item of our response to **Reviewer stvh**. Before submitting this paper to NeurIPS, we carefully verified the "NeurIPS 2024 Call for Papers", which lists the topics in the scope of the conference. We felt that our paper falls entirely within the scope described by this document (specifically, at the intersection of Theory, Optimization and Online Machine Learning). We encourage the reviewer to check these guidelines when finalizing their rating. Finally, we understand and truly appreciate the intent of the Reviewer to make the results of the paper known to the TCS community. We will not miss other opportunities (workshops, seminars or other submissions) to engage with this community as well, in the hope of developing application-driven and theoretically-grounded works for online settings. * Regarding the reviewer’s question, we believe that our techniques could help to prove new results for other online problems (such as the $k$-server problem or metrical service systems), even though there is no immediate reduction. --- Rebuttal Comment 1.1: Title: I raised my score Comment: I thank the authors for their response. I am very familiar with the fields of advice complexity and learning-augmented algorithms, and am aware of previous works on MTS and online algorithms published in ML venues.
I agree that the paper has strong connections to the field of advice complexity, which is somewhat related to learning-augmented algorithms. However, the core aspect of the latter framework, which I believe makes it suitable for ML venues, is the necessity of dealing with potentially inaccurate predictions. This aspect is not addressed in studies of advice complexity, which are out of scope for NeurIPS. That said, since the other reviewers do not see this as an issue, and because I mostly liked the paper and find its contributions very interesting (as noted in my review), I will change my score and join the other reviewers in recommending acceptance.
Summary: The authors study randomized algorithms for Metrical Task Systems (MTS) which need only a small number of random bits that, in particular, does not depend on the length of the time horizon. They also interpret this in terms of an average performance of a cooperating group of several deterministic algorithms and in terms of advice complexity of reaching the performance of the best randomized algorithms for MTS using a deterministic algorithm. Their main result is a reduction which can turn any alpha-competitive randomized algorithm for MTS into a 2*alpha-competitive algorithm which uses only 2*log n random bits. Strengths: * Proposed results are very interesting. I even find them surprising. * MTS is one of the central problems in online algorithms. It is extensively studied in terms of randomized and deterministic algorithms as well as advice complexity. The paper brings an interesting and novel point of view to the literature on MTS. * The proposed result connects several distinct views of the problem: collective algorithms, barely random algorithms, advice complexity. I appreciate the way authors explain this context. * Presentation of the results is clean and elegant. Weaknesses: * No upper bounds for $k < n^2$. This is a mild weakness; I consider the presented results strong enough. Technical Quality: 3 Clarity: 3 Questions for Authors: * Advice complexity of MTS has already been studied; you may want to add missing references, e.g. Emek et al. It would be helpful for the reader to compare the regime of their results to yours. * At lines 82-83, there is a repeated word "tight". * Acquiring random bits sometimes limits usability of randomized algorithms. Caching would be one example. Can you give an example of a concrete MTS where your method could ease implementation of randomized algorithms? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Explained well in statements of the results.
Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their constructive feedback and suggestions. We answer the reviewer's questions below. * The reference "Online Computation with Advice" by Emek et al. pointed out by the reviewer is very relevant and will be added to the paper: Emek et al. study the situation where $b$ bits of advice are provided at each time-step to a deterministic MTS algorithm, for $b$ of order $\log n$. The total amount of advice in this reference is therefore of order $B = \Theta(T \log n)$, where $T$ is the number of time-steps. One surprising consequence of our results (Corollary 1.4) is that we present a deterministic MTS algorithm that has a competitive ratio of $O(\log^2 n)$ using a *single* piece of advice of total size $B = 2\log n$, i.e., $B$ is now independent of the number of time-steps $T$. As suggested by the reviewer, we will add a brief discussion on this matter in the revised version of the paper. * As the reviewer points out, we also provide a randomized algorithm which requires very few random bits (again, $B = 2\log n$, independently of the input size). The method is therefore helpful in any setting where acquiring random bits is costly. In concrete implementations, we also expect that barely fractional configurations (which are discrete objects) are generally much more practical for computers to handle than fully fractional configurations (which are arbitrary real numbers). But perhaps the most applied consequence of our results is that a deterministic team of $k$ agents can be competitive in a way that no single agent could be (provided that $k\geq n^2$). This has, for instance, a very direct implication in energy management [2]. Consider a computer with three energy modes (*on*, *sleep*, *off*) with a switching cost of $1$ between *on* and *sleep* and another switching cost of (say) $5$ between *sleep* and *off*. 
At each time $t$ the request is either that * the computer is used: in which case $c(t) = (1, +\infty, +\infty)$, so the computer must be in state *on*; or * the computer is not used: in which case $c(t) = (1, 0.5, 0)$, so the computer would rather be *off* than *sleep*, and *sleep* than *on*. This simple setting was presented by S. Bubeck to motivate the relevance of MTS in a series of lectures in 2019 (Five Miracles of Mirror Descent, available on YouTube). Consider the variant of the problem where the computer is in fact composed of $k = 9$ components (e.g. screen, CPU cores, etc.) that can each individually switch between one of the three modes (*on*, *sleep*, *off*). The energy spent by the computer is the sum of the energy spent by its constituents. Since $n = 3$ and $k = 9 \geq n^2$, our results imply that this system can enjoy the randomized competitive ratio (up to a factor 2) against a deterministic adversary! * Typos will be corrected. --- Rebuttal Comment 1.1: Comment: Thank you for your explanation.
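The energy-management example in the rebuttal above can be made concrete with a small simulation. The sketch below is our illustration, not the paper's reduction: it encodes the three states, the switching-cost metric, and the two request cost vectors described, and runs one naive deterministic (greedy) policy; the policy and the workload are illustrative assumptions.

```python
# Sketch of the 3-state energy MTS from the rebuttal: states on/sleep/off,
# a metric of switching costs, and per-step service cost vectors c(t).
# We simulate one naive deterministic (greedy) policy, purely to illustrate
# how costs accumulate in an MTS; this is NOT the paper's algorithm.

INF = float("inf")
STATES = ["on", "sleep", "off"]

# Switching costs: 1 between on<->sleep, 5 between sleep<->off;
# on<->off goes through sleep in the metric, so it costs 6.
SWITCH = {
    ("on", "on"): 0, ("sleep", "sleep"): 0, ("off", "off"): 0,
    ("on", "sleep"): 1, ("sleep", "on"): 1,
    ("sleep", "off"): 5, ("off", "sleep"): 5,
    ("on", "off"): 6, ("off", "on"): 6,
}

used = {"on": 1.0, "sleep": INF, "off": INF}  # computer must be on
idle = {"on": 1.0, "sleep": 0.5, "off": 0.0}  # off preferred when idle

def run(requests, start="on"):
    """Total cost of the greedy policy: at each step, move to the state
    minimizing service cost + switching cost from the current state."""
    state, total = start, 0.0
    for c in requests:
        nxt = min(STATES, key=lambda s: c[s] + SWITCH[(state, s)])
        total += c[nxt] + SWITCH[(state, nxt)]
        state = nxt
    return total

print(run([used, idle] * 3))  # greedy stays "on" throughout: cost 6.0
```

Note that the myopic greedy policy never leaves *on* under this alternating workload; the point of the result discussed above is that a team of $k \geq n^2$ deterministic agents, coordinated appropriately, can match the randomized competitive ratio up to a factor 2.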
Summary: The paper bounds the randomness required to achieve the asymptotically tight randomized competitive ratio for metrical task systems, a fundamental model of online computing (essentially, prediction with expert advice endowed with a metric cost function for switching among experts). It shows that $2\log n$ random bits (where $n$ is the number of states/experts of the system) are sufficient to achieve a competitive ratio that is within a factor of $2$ of the optimal ratio (which is known in general to be $\Theta(\log^2 n)$, though for some metrics $O(\log n)$ is achievable). It also shows a nearly matching lower bound of $\log n$ bits. The main result, the upper bound, is a rather simple and elegant reduction that transforms any randomized competitive algorithm into a "barely fractional" algorithm (which uses a finite set of probabilities that are all multiples of $1/n^2$), which in turn implies a "barely randomized" algorithm. The reduction is not trivial, as simply rounding to the nearest multiple-of-$1/n^2$ probability vector gives very poor performance. Hence, a more sophisticated rounding scheme is employed: instead of optimizing the distance between the fully randomized algorithm's probabilistic position and the rounded position, one factors in a smoothed version of the rounded position (averaging between it and a uniform distribution), and the switching cost of the rounded algorithm. Strengths: Metrical task systems are a widely acknowledged fundamental model of online computing. The question is a fundamental question in online computing. The result answers a fundamental question in online computing. It shows in particular that the amount of randomness needed is independent of the input sequence length. This has implications for advice-assisted online computing and for multi-agent online computing. Weaknesses: It's a simple idea and a simple proof, though that's not necessarily a weakness. Technical Quality: 4 Clarity: 4 Questions for Authors: None. 
Confidence: 5 Soundness: 4 Presentation: 4 Contribution: 4 Limitations: None. Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
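To make the "barely fractional" notion from this review concrete, the sketch below shows the *naive* rounding that, as the review notes, performs poorly in terms of competitive ratio: snapping a probability vector over $n$ states to multiples of $1/n^2$ while conserving total mass (largest-remainder apportionment). The paper's actual reduction uses a more careful, cost-aware scheme; this is only an illustration of what a barely fractional configuration looks like.

```python
# Naive rounding to a "barely fractional" configuration: snap a probability
# vector over n states to multiples of 1/n^2 while keeping total mass 1
# (largest-remainder apportionment). The paper's reduction is more
# sophisticated; this only makes the grid of allowed probabilities concrete.

def round_to_grid(p, n):
    grid = n * n                       # probabilities become integers k/grid
    scaled = [q * grid for q in p]
    floors = [int(s) for s in scaled]  # round everything down first
    deficit = grid - sum(floors)       # units of 1/grid still to hand out
    # Give the remaining units to the largest fractional parts.
    order = sorted(range(len(p)), key=lambda i: scaled[i] - floors[i],
                   reverse=True)
    for i in order[:deficit]:
        floors[i] += 1
    return [f / grid for f in floors]

q = round_to_grid([0.123, 0.456, 0.421], n=3)
print(q)  # entries are 1/9, 4/9, 4/9 -- every entry a multiple of 1/n^2
```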
Rebuttal 1: Rebuttal: We thank the reviewer for their encouraging review!
Rebuttal 1: Rebuttal: We thank the reviewers for their interest, their careful reading, and their feedback to improve the paper. We quote all four reviewers, who unanimously appreciated the content and the presentation of the paper: * **Reviewer Ddub**: "The result answers a fundamental question in online computing." * **Reviewer stvh**: "Proposed results are very interesting. I even find them surprising." "Presentation of the results is clean and elegant." * **Reviewer 1Eyx**: "The paper is very well written." "The solution is non-trivial and builds on fairly complex ideas." * **Reviewer Kc9cy**: "The paper introduces new analysis techniques that could have broader applications." **Reviewer Kc9cy**, whom we thank for their valuable point of view, suggested borderline rejection not on the basis of the paper's merits, but because they question its relevance to the publishing venue. Specifically, the reviewer says that the paper would have more impact if it were published in the computer science community. We reply in detail to **Reviewer Kc9cy**, explaining why the present paper can appeal to many researchers attending NeurIPS 2024 and, more broadly, to the online machine learning community. We also answer the questions of **Reviewer stvh** and **Reviewer 1Eyx** in the hope of alleviating any remaining doubts.
NeurIPS_2024_submissions_huggingface
2024
Abrupt Learning in Transformers: A Case Study on Matrix Completion
Accept (poster)
Summary: The paper explores the behavior of Transformer models in the context of low-rank matrix completion, treated as a masked language modeling (MLM) task. The authors train a BERT model to solve matrix completion and analyze its performance, particularly noting an algorithmic shift characterized by a sudden drop in loss, transitioning from copying input to accurately predicting masked entries. The paper aims to provide insights into the interpretability of Transformer models. Strengths: 1. The paper presents an interesting approach by framing matrix completion as an MLM task. 2. The experiments are detailed, with clear descriptions of data preprocessing, training, and performance metrics. 3. The identification of an algorithmic shift during training is a significant observation, providing insights into the model's learning dynamics. Weaknesses: 1. The paper lacks a deep theoretical or mechanical analysis of why the model undergoes an algorithmic shift and how it learns the task. 2. There is insufficient explanation of the internal mechanisms by which the model learns matrix decomposition, particularly in the second stage. 3. The experiments are conducted on relatively small-scale matrices (up to 15x15), raising concerns about the scalability and generalizability of the findings. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Does the model operate through two distinct pathways for non-MASK and MASK positions, as suggested by the different behaviors observed during training? 2. What would be the model's behavior if the attention matrices' parameters were frozen at the start of training to match those of a fully trained model? Would the model still exhibit the initial copying behavior, or would it directly perform matrix completion? 3. How well does the model generalize to different types of low-rank matrices, especially matrices with different sparsity and rank levels? Can it accomplish the task of finding the lowest rank matrix? 4. 
Can the approach be scaled to larger matrices, and what modifications would be necessary to achieve this? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
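The data setup described in this review (low-rank matrices with a masked fraction of entries, treated as an MLM task) can be sketched as follows; the factor distribution and default sizes are our assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

# Sketch of the matrix-completion-as-MLM data setup described above: sample
# a low-rank matrix and hide a fraction p_mask of its entries. The factor
# distribution and defaults here are illustrative assumptions.

rng = np.random.default_rng(0)

def make_example(n=7, rank=2, p_mask=0.3):
    # Rank-`rank` matrix from a product of random factors.
    U = rng.uniform(-1, 1, (n, rank))
    V = rng.uniform(-1, 1, (rank, n))
    X = U @ V
    mask = rng.random((n, n)) < p_mask  # True = hidden ([MASK] position)
    X_obs = np.where(mask, 0.0, X)      # the model only sees observed entries
    return X, X_obs, mask

X, X_obs, mask = make_example()
print(X.shape, int(np.linalg.matrix_rank(X)))
```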
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, and are excited to know that they found our approach interesting! - *The paper lacks a deep theoretical ...* We would like to point out that, apart from the analysis in our paper, to the best of our knowledge there has been no work on analyzing such an algorithmic shift either mechanistically or theoretically. Ours is the first step in this direction, where we start by formulating a mathematical setup, since this gives us a clearer picture of the mechanisms learnt by the model before and after the algorithmic shift. We provide several detailed experiments on this front, as seen in Sections 4–6 of the paper. To address reviewer concerns, we have also added two additional probing experiments that analyze the hidden states in the model in more detail than our existing probing experiment. To understand why the model undergoes the algorithmic shift, we have now additionally added interventional experiments wherein various combinations of model components are frozen to their final parameter states (i.e., from the final step of training). These results further indicate that rapid structural changes in attention layers drive the sudden loss drops. We have also added several other experiments: e.g., OOD generalization experiments, training on different matrix distributions, and scaling model width / depth. All these results further demonstrate the generalizability and robustness of our results. Please see our global response for more details on these experiments. - *The experiments are conducted ...* Please see our global response on “Larger matrices”, where we discuss possible ways to scale up our approach to larger input matrices. - *Does the model operate …* Our results show that this is most likely the case, since the mechanisms for non-MASK and MASK prediction remain distinct after the model has converged (copying observed entries vs. computation). 
We demonstrate this empirically through intervention experiments on the model weights in Sec 5.1 for observed (non-MASK) entries, and in Sec 5.2 for MASK entries. One of the main takeaways is that prediction at observed entries is not affected significantly by attention layers (since uniformly ablating them does not lead to a drastic change in performance), whereas prediction at masked entries is significantly affected. Please see Sec 5 for more details about these experiments. - *What would be the model's behavior …* Please refer to Fig. K in the attached PDF; we find that freezing only the attention layers to the weights obtained at the last training step and training only MLPs and embeddings (labeled 'mlp+embed' in the plot) still leads to a plateau and then a sudden drop. However, on freezing both attention and embedding layers, and training only MLP layers (labeled 'mlp'), we do not see a plateau or sudden drop in loss, and the training loss decreases continuously to its optimal value. Our hypothesis is that the suddenness of the drop is driven by the attention / embedding layers, due to these components encoding the 'structure' of the matrix completion problem. To corroborate this hypothesis, we also ran the following experiments. Below, training all components is labeled 'all', which is the loss plot in the submitted manuscript – it is a reference for visualizing the suddenness of the drop in loss and the length of the loss plateau. **Train only attention layers (labeled 'att')** – smaller loss plateau than 'all' with a sharp drop. **Train only embeddings (both positional and token, labeled 'embed' in the plot)** – smaller loss plateau than 'all' with a sharp drop. **Train only positional embeddings (labeled 'pos')** – this has the longest plateau of all settings. **Train only token embeddings (labeled 'token')** – there is no observable plateau or sudden drop in loss. 
Please note that this also matches the observation in our submission (Section 6, “Do embeddings change abruptly?”), where we have plotted the projection of token embeddings on the principal components of the final model's token embeddings. We observed that the structure of this 2D projection does not change abruptly at step 15000, and that is further supported by this modified training process, which shows that token embeddings are learnt by the model without any significant observable plateau or sudden drop. Whether this early learning of token embeddings is also a driving force behind the sudden drop is still an open question in our setup. From these observations, a mechanistic hypothesis emerges: positional embedding layers and attention layers are the main contributors to the sudden-drop dynamics of the training loss. Formalizing and rigorously verifying these observations is an important direction for future research. - *How well does the model generalize to different types …* We find that the model performs well on test data with matrix rank smaller than what it was trained on – hence indicating that the model can find the lowest-rank matrix that fits the observed entries. Please also see Fig. A in the attached PDF for results on different (OOD) rank levels, and Fig. 3 in the manuscript for OOD sparsity levels (larger p_mask denotes larger sparsity). We have compared our model in these settings to the nuclear norm minimization solution, to highlight that the BERT model is comparable to, or better than, this solution. Please also see the section “OOD performance” in our global response for more information. - *Can the approach be scaled …* Please see the section “Larger matrices” in our global response for addressing this concern. We hope that our responses have justifiably addressed the reviewer's concerns and they will consider increasing their score to support the acceptance of our work. --- Rebuttal Comment 1.1: Comment: Thank you for your response. 
Your replies, as well as your responses to the other reviewers, have resolved my concerns. While I believe that a thorough understanding of the model's grokking mechanism, such as explaining the phenomenon from a dynamic perspective and understanding why the model tends to learn the identity mapping first, and the implications of this mechanism on real-world tasks like eliminating grokking, still has a long way to go, I also think this is a promising area. For example, from a dynamics perspective, you might be able to identify the gradient direction of the model before grokking (the direction of escaping the fixed point) in a suitably simplified model. Based on your response, I raise my score. Additionally, I recommend that you discuss two contemporaneous related works on understanding the grokking phenomenon and analyzing the transformer's mechanism. [1] Grokked Transformers are Implicit Reasoners: A Mechanistic Journey to the Edge of Generalization. [2] Initialization is Critical to Whether Transformers Fit Composite Functions by Inference or Memorizing.
Summary: The authors study how encoders perform at the task of matrix completion. They generate synthetic data and train encoders of different sizes to predict masked-out tokens. They observe an interesting behaviour where initially the loss appears to be at a plateau but then drops. Their hypothesis – which they investigate – is that in the first stage of training the model learns to copy, while in the second stage it learns to predict the masked-out tokens. Strengths: I find the problem formulation interesting, as well as how the observed phenomenon is further investigated by the authors. Weaknesses: I think the related work could be improved. I think the paper would benefit from a deeper discussion on the experiments and findings of relevant works cited (e.g., 4, 6, 20). The experimental setting is limited. Synthetic data generation is carried out using only the uniform distribution. What about other distributions? What about OOD performance? Technical Quality: 3 Clarity: 4 Questions for Authors: Did you consider other distributions? Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their feedback, and are glad that they found our problem formulation and analysis interesting! - *I think the related work could be improved. I think the paper would benefit from a deeper discussion on the experiments and findings of relevant works cited (e.g., 4, 6, 20)* We are certainly happy to expand on the related work! We will also add the following addendum, which discusses references [4, 6, 20] in more detail, to the manuscript. [4] show that in an in-context learning framework, a transformer-based model can learn to select the optimal statistical method for solving the task in the prompt, without explicitly being provided any information about the optimal method (called 'in-context algorithm selection' in their work). We emphasize that our setup is not in-context learning, and is quite distinct from [4] as far as the task being solved is concerned. However, whether the framework of layer-wise in-context gradient descent, as in this work, can also be shown in our setup is a plausible and open direction for future work (we provide some preliminary investigation in the Probing section in the global response). In [6], the author shows empirically that an encoder-decoder transformer can be trained to solve various linear-algebraic tasks, such as eigendecomposition, SVD, matrix inversion, etc. They support their findings by showing that the model generalizes to matrix distributions outside the training distribution to some extent, and that OOD performance can be improved by training on non-Wigner matrices. While many experiments in [6] also show a sudden jump in accuracy (Fig 1, 2), they do not analyze why such a sudden jump occurs during optimization. In our work, we analyze the sudden drop, and the model before and after it, to derive insights into the sudden drop in loss in our training setup. 
[20] show that even a small transformer model can be trained to perform arithmetic operations like addition, subtraction, and multiplication accurately, through appropriate data selection and formatting, and using Chain-of-Thought prompting. They further show that learning addition is connected to rank-2 matrix completion, and that the sudden jump in accuracy with an increasing number of observed entries of the matrix is recovered when their model is trained on datasets of different sizes. This is because the size of the dataset for addition can be seen as the number of observed entries of the rank-2 matrix representing the addition table. We point out that while the task in this case is related to matrix completion, ours is a completely different setup, where the sudden drop happens with the number of training steps, with each step consisting of 256 low-rank matrices, each with a fixed fraction (p_mask) of observed entries. - *The experimental setting is limited. Synthetic data generation is carried out using only the uniform distribution. What about other distributions? What about OOD performance?* Thanks for this suggestion! We have now conducted new experiments to address your comment. **Changed training distribution.** We now use a standard normal distribution to sample inputs X, i.e., X_ij ~ N(0, 1). We find that a sudden drop in the loss occurs in this setting as well, and the MSE before and after the drop matches the values obtained for U[-1, 1] (Fig. D in the attached PDF). **Changed test distribution.** Replacing the distribution of input matrix entries from U[-1, 1] (on which the model is trained) with OOD entries from a Gaussian (mean=0, stdev=0.5) and a Laplace distribution (location=0, scale=0.5) does not lead to considerable changes in mean-squared error (MSE). Specifically, the Gaussian entries yield an MSE of 4e-3 and the Laplace entries yield an MSE of 2e-3, averaged over 1024 samples of 7x7 rank-2 matrix inputs. 
These results indicate the model has learned a general algorithm to solve the task that is robust to changes in the inputs' distribution at test time. **Other interventions.** While we already analyzed OOD generalization performance when varying p_mask (Fig 3 in the paper), we have now also conducted experiments changing the number of rows / columns and the rank of the input matrix at test time – see Figs. A–C in the attached PDF. We find performance is either comparable to or better than nuclear norm minimization on the same input, except for the case where we modify the number of columns. This is likely because positional embeddings in the model depend on the column index of the element, and hence (expectedly) changing the number of columns adversely affects model performance. - *Did you consider other distributions?* Please see our response to the previous question. We hope that our responses have justifiably addressed the reviewer's concerns and they will consider increasing their score to support the acceptance of our work. --- Rebuttal Comment 1.1: Comment: Thanks for your response.
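The nuclear-norm baseline the rebuttal compares against can be approximated without a convex solver via iterative singular-value thresholding (a soft-impute-style scheme). The sketch below is our stand-in for that baseline; the shrinkage parameter tau and the iteration count are ad hoc illustrative choices, not values from the paper.

```python
import numpy as np

# Approximate nuclear-norm matrix completion via iterative singular-value
# thresholding (soft-impute style). This is a stand-in for the nuclear norm
# minimization baseline discussed above; tau and `steps` are ad hoc choices.

def svt_complete(X_obs, mask, tau=0.2, steps=500):
    """mask is True where the entry is observed; X_obs holds those entries."""
    Y = np.where(mask, X_obs, 0.0)
    for _ in range(steps):
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        Z = (U * np.maximum(s - tau, 0.0)) @ Vt  # shrink singular values
        Y = np.where(mask, X_obs, Z)             # keep observed entries fixed
    return Y

rng = np.random.default_rng(1)
A = rng.uniform(-1, 1, (7, 2)) @ rng.uniform(-1, 1, (2, 7))  # rank-2 target
mask = rng.random((7, 7)) > 0.3                              # ~70% observed
rec = svt_complete(np.where(mask, A, 0.0), mask)
print(np.mean((rec - A) ** 2))  # should be well below the zero-fill error
```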
Summary: The paper explores the behavior of BERT on matrix completion tasks. The authors show that the model's loss shows a phase transition, where the model switches from copying tokens for filling masked tokens to predicting the masked entries accurately. The authors also conduct probing studies to understand the structure of attention layers, and the hidden representations during the two phases. Overall, the paper takes an important step towards a mechanistic understanding of transformer models, and will be an interesting read for the wider community. Strengths: The strength of the paper lies in its simplistic exposition of motivation, experimental setup to showcase changing behavior of BERT during training, and the probing studies to explain internal behavior during phase transition. The authors build their motivation from Chen et al. [2024]'s observations on BERT's phase transitions during pretraining on language data, and show the characteristic of such transitions on synthetic matrix completion data. The authors compare the solution to nuclear norm optimization and show that the model outperforms the candidate algorithm. Finally, with careful probing studies, the authors report the emergence of structure in model's internal representations, which help the model in the mask-token predictions. Overall, the paper takes an important step towards a mechanistic understanding of transformer models, and will be an interesting read for the wider community. Weaknesses: I have a few questions about the experimental setup and possible interesting followups that the authors may pursue. - How does convergence of BERT models change with increasing rank of the underlying model? Furthermore, how do results change if the authors mix in matrices of different sizes (say 3x3, 5x5, 7x7, 9x9) but fix rank of the underlying solution? Will the model perform equally well for each of them? 
- An interesting ablation would be to check the effect of the model's size on the convergence speed of the model. What happens if the number of attention heads is fixed to $1$ but the width and the depth are changed? More such ablations will help strengthen the understanding of the model. - In the experiments section "Attention Heads with Structured Mask", how does the removal of each group affect the behavior of the trained model? Are some groups more important than the rest? - In the "Probing" experiments, "the model tracks information in its intermediate layers and uses it for computation". Is this check only for the row or column number? If so, how does it "only" help the model computation? Is a more fine-grained probing possible, e.g. does the model compute the elements of the low rank decomposition in its embeddings? - Any discussion on how the results might change for auto-regressive training will be interesting. Technical Quality: 4 Clarity: 4 Questions for Authors: Please check my questions above. [Post rebuttal]: I have increased my score. The authors' responses have resolved my primary concerns on architecture and matrix ranks. I think this is a timely analysis on training time phase transitions in transformer models in a synthetic setting. The preliminary observations on GPT-2 are very interesting. I hope the authors can include them in the next version. Confidence: 4 Soundness: 4 Presentation: 4 Contribution: 3 Limitations: The authors discuss limitations of their work in section 8, and also mention few interesting directions that can be explored by the community as future work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their extensive suggestions, and are happy to know that they found our work important and interesting! - *How does convergence of BERT …* Thanks for this suggestion! We have now performed experiments with a 4-layer, 8-head model on rank-1 matrices of size 5x5, 7x7, 9x9 (at each training step, the size is sampled randomly, and then the training data consists of 256 matrices of that size). Interestingly, we find that it matches the intuitive learning order, i.e., the sudden drop for 5x5 matrices occurs slightly before that for 7x7 matrices, which in turn is before that for 9x9 matrices (Fig. L in the attached PDF). Further, the model performs similarly for 5x5 and 7x7, but is slightly worse for 9x9. For the question about the effect of rank on convergence, we train our 4-layer, 8-head model on 7x7 rank-1 inputs (lower than rank-2 in the manuscript), and find that the sudden drop occurs earlier in this case (Fig. M in the attachment). Hence, this is an indication that the problem structure affects the rate of convergence; however, we leave verifying this for larger matrices and ranks to future work. Finally, in our experiments in the submission, the 12-layer, 12-head model was able to converge on matrices of size up to 15x15, rank-4 (please see Fig. 14 in the manuscript for attention maps, and Fig. E in the PDF attachment for the convergence plot). We find that going larger than a certain rank for a specific matrix size does not lead to convergence in the usual training setup we report in the paper. This further reinforces the indication above that problem structure affects convergence. Due to the limited time for rebuttal and limited computational resources, we are unable to extensively check the largest rank for larger matrix sizes for which our method converges. (Note that we have modified the input distribution to U[-1.2, 1.2] in the rank-1 case to match the loss magnitude of the rank-2 case before the sudden drop (~0.06–0.07), for a meaningful comparison.) 
- *An interesting ablation would be to check …* Thanks for this suggestion! We have now carried out ablations on model size / architecture to empirically test how it affects convergence. We change model width and depth while keeping other hyperparameters the same as the model analyzed in the submitted draft (4 layers, 8 heads, width=768). We also analyze a 1-head, 12-layer model to check the convergence rate when attention layers are altered (please note that 1-head models generally do not converge, likely due to insufficient capacity, and do so only in the 12-layer case). Broadly, our observations are as follows. **Width (embedding size)**: we changed the hidden state (embedding) size in the model to 64, 128, 256, 1024, keeping the number of layers fixed at 4 and the number of heads fixed at 8. For each value of width w, the MLP hidden layer width is 4w, as used in the original w=768 case. We observe (Fig. F in the attached PDF) that for embedding size less than 256, the model could not converge to the optimal MSE attained by the larger models. **Depth (number of layers)**: we also train models with depth 2, 6, 8, 12, keeping the number of heads fixed at 8 and the width fixed at 768 (Fig. G in the attachment; L = number of layers). For depth 2, the model did not converge to the optimal MSE and is omitted. We note that for deeper models (6, 8, 12), the sudden drop happens earlier than for depth 4, likely due to larger model capacity in those cases. **1-head model**: Using a 12-layer, width=768, 1-head model, we find the training converges to MSE ~ 4e-3, with a sudden drop in loss at around step 8000 (Fig. J). Interestingly, we find that the attention heads obtained after training (Fig. N; layers 1-12 from left to right) are quite similar to those in 4-layer, 8-head models. Possibly, the model exploits residual connections to simulate parallel attention heads within a single layer! 
- *In the experiments section ...* Please see the “Structured mask” section in our global response for a response to this concern. - *In the "Probing" experiments ...* This check is for the full masked row corresponding to the element being probed, and not only the row or column number. Specifically, for the 768-dimensional hidden state at element (i,j), we map it through a linear model to the 7-dimensional vector X_mask[i, :], where X_mask = X at observed entries, and 0 at masked entries. We hypothesize that this strong correlation indicates that the model stores the masked input in some form in the hidden states, since this information can be probed through a simple linear map. However, we have not claimed that this is the “only” component of the model's computation. For probing results on singular vectors (i.e., the low-rank decomposition), please see the “Probing” section in the global response. - *Any discussion on how …* Great question! We have now carried out preliminary experiments training a GPT model on the matrix completion task. Specifically, the input is now a concatenation of (X_mask, [SEP], X), and the objective is next-token prediction using cross-entropy loss. Due to the changed task structure, we now measure accuracy (instead of MSE) of output tokens at masked, observed, and all entries. Similar to the MLM setting of our main paper, we find that there is an initial plateau in loss (and accuracy), followed by a point of sudden drop in loss! However, the model merely learns to copy the observed entries (Fig. Q in the attachment) at this point, and is still struggling to fill in the missing entries. We observe that even this sudden drop corresponds to the model learning specific structure in attention heads for copying (Fig. R in the attachment; left: step 200, right: step 600). We are confident that with some hyperparameter and model size tuning, the model will be able to completely solve the task. 
Due to the limited timeframe for rebuttal, we were unable to finish these experiments, but we promise to include them in the final version of the paper. --- Rebuttal Comment 1.1: Comment: I thank the authors for their response. I have increased my score. The authors' responses have resolved my primary concerns on architecture and matrix ranks. I think this is a timely analysis on training time phase transitions in transformer models in a synthetic setting. The preliminary observations on GPT-2 are very interesting. I hope the authors can include them in the next version.
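The linear-probing procedure described in the rebuttal above (a linear map from a 768-dimensional hidden state to the 7-dimensional masked row X_mask[i, :]) can be sketched as follows. The "hidden states" below are synthetic stand-ins generated from a known linear model, purely to illustrate fitting and scoring such a probe; real probes would use the trained BERT activations.

```python
import numpy as np

# Illustration of linear probing: fit a linear map from 768-dimensional
# "hidden states" to a 7-dimensional target and score the fit. The hidden
# states are synthetic stand-ins (a known linear model plus noise), only to
# demonstrate the mechanics of the probe described above.

rng = np.random.default_rng(0)
d_hidden, d_target, n = 768, 7, 2000

W_true = rng.normal(size=(d_hidden, d_target)) / np.sqrt(d_hidden)
H = rng.normal(size=(n, d_hidden))                      # stand-in hidden states
Y = H @ W_true + 0.01 * rng.normal(size=(n, d_target))  # probe targets

W_hat, *_ = np.linalg.lstsq(H, Y, rcond=None)           # the linear probe
r2 = 1.0 - np.mean((H @ W_hat - Y) ** 2) / np.var(Y)    # fit quality
print(round(float(r2), 4))
```

A high R² from such a probe is what justifies the claim that the probed information is linearly decodable from the hidden states.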
Summary: The paper applies a BERT-style transformer encoder to do low-rank matrix completion. To frame it as a masked language modeling problem, they restrict the domain of the problem to smaller matrices and discretize the domain. They find the model can solve the problem. The training dynamics are interesting. Initially the model just copies the input, but after a certain point, it solves the problem. They analyze many components before and after this shift: attention heads, activations, and embeddings. They find the behavior of the attention heads changes, via visualization and activation patching. Strengths: It's an easy-to-read paper with good figures and sound methodology. The authors anticipate many of the questions and acknowledge limitations. Weaknesses: The main weakness is how significant the contribution is. They have some interesting results which lead to some hypotheses. In the related work section, they frame the paper as mathematical capabilities of transformers, which is indeed an interesting area. But I don't feel like we've gained much insight into how the transformer does math. The paper feels like preliminary results that could lead to a more interesting paper down the road. Technical Quality: 3 Clarity: 4 Questions for Authors: The authors do a good job of anticipating many questions. Does this behavior generalize to other mathematical tasks? Can we get any insight into the actual algorithm or implicit optimization procedure? What is the generalization to larger matrices? Is there a way to more rigorously verify the hypotheses from the **Attention Heads with Structured Mask** section? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 2 Limitations: The authors acknowledge the limitations in the discussion. This only works on small matrices and is not a generally useful matrix solver. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments, and are glad to know they found our results interesting! - *The main weakness is how significant the contribution is. They have some interesting results … how the transformer does math.* We re-emphasize that our focus is studying the sudden drop in loss from an interpretability perspective, and not solving matrix completion itself. We did cite papers on using transformers for mathematical tasks, since our task is also mathematical in nature and we wanted to give a review of existing work in this area to the reader. We will definitely consider improving this section to better match the focus of our paper. We would like to humbly disagree about the significance of our contribution – we emphasize that there are currently no explanations for the phenomenon of sudden loss drops in language modeling. To address this problem, our simplified but rich abstraction offers two concrete observations: (1) the optimization of the model on this task shows an abrupt jump in the model performance and (2) the sudden jump can be analyzed via interpretable evidence that it has learnt the problem structure. These observations allow us to implicate sudden acquisition of task structure by attention layers as the cause behind sudden loss drops. To the best of our knowledge, both our observations and the proposed hypothesis are novel results. Furthermore, we note that the sudden drop observed in our setting occurs without any changes to the optimization procedure (e.g. step-size, warmup etc) during training – prior work studying such loss drops [1] uses learning rate warmup for 10K steps, while we are able to obtain a sudden drop in loss without warmup, and hence have arguably demonstrated a stronger empirical result. [1] https://arxiv.org/abs/2309.07311 - *Does this behavior generalize to other mathematical tasks?* It does!
To address your question, we have now extended our experiments to another mathematical task: histogram computation. Specifically, we follow the setup proposed by [2] and train a 2-layer, 2-head GPT model on sequences of the form (x_1, x_2, …, x_32, [SEP], c_1, c_2, …, c_32), where (x_i) are integers in range [1, 16] and (c_i) is the total number of occurrences of x_i in the full sequence. We use such sequences in a next-token prediction (online) training setup for our GPT model, using cross-entropy loss. At test time we prompt the model with incomplete sequences of the form (x_1, x_2, …, x_32, [SEP]) and expect it to complete the sequence with the correct counts (c_1, c_2, …, c_32). We find that our observations in matrix completion extend to this setup (Fig. O, P in attached PDF). Specifically, there is a sudden drop in training loss, and a corresponding increase in accuracy of the model in predicting c_i (Fig. O), and attention heads transition from a nearly uniform pattern + attending to [SEP] (step 7000, Fig P - left) to a pattern that attends to various x_i for predicting counts (step 30000, Fig P - right) – as expected for the task. This shows that such a sudden drop in loss, and the corresponding emergence of structure in attention heads, is not limited to matrix completion and the BERT architecture, but generalizes to the different mathematical task of histogram computation and to a different model (GPT) with autoregressive training, rather than masked language modeling. We are optimistic that such behavior would also extend to other mathematical tasks. We will add these results to the final version of the paper. We also remark that some experiments in [3] (Fig 1,2) show a sudden increase in accuracy during training; however, they do not analyze why that sudden increase occurs, and that is the main focus of our work.
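The histogram-task data described above could be mocked up as follows (a minimal sketch: the sequence layout and `[SEP]` token follow the description; everything else, such as the helper's name, is assumed):

```python
import random

def make_histogram_example(seq_len=32, vocab=16, sep="[SEP]", rng=None):
    # x_i are integers in [1, vocab]; c_i is the number of occurrences of x_i
    # in the full sequence, per the task description.
    rng = rng or random.Random()
    xs = [rng.randint(1, vocab) for _ in range(seq_len)]
    counts = [xs.count(x) for x in xs]
    return xs + [sep] + counts
```

At test time one would feed only the prefix `xs + [sep]` and ask the model to generate the counts autoregressively.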
[2] https://arxiv.org/abs/2402.03902 [3] https://openreview.net/forum?id=L2a_bcarHcF - *Can we get any insight into the actual algorithm or implicit optimization procedure?* As we show in Section 5, the algorithm learnt by the model shows many interpretable characteristics, indicating that the model learns (1) the problem structure through token and positional embeddings (Fig 7), and (2) how to combine elements at various positions through attention heads (Fig 4,5). Moreover, in our probing experiments (Fig. 6), we find that the model 'stores' the masked rows of each element in the 3rd and 4th layer, giving insight into the algorithm used by the model. Please also see the section “Probing” in our global response for additional experiments on probing, including a possible hypothesis for implicit optimization in our setup. Overall, we note that despite our best efforts, we could not precisely nail down the algorithm learned by the model. However, we emphasize this characterization was not the main objective of our paper; instead, we aimed to capture and understand sudden loss drops in the model’s learning dynamics. We did investigate whether naively expected hypotheses for how the model might be solving the task check out, e.g., whether the model implicitly learns to perform nuclear norm minimization. However, our results were in the negative, as reported in Fig 3 of the main paper: we found that the model in fact outperforms nuclear norm minimization in terms of MSE. - *What is the generalization to larger matrices?* Please see section “Larger matrices” in global response. - *Is there a way to more rigorously verify the hypotheses from the Attention Heads with Structured Mask section?* Please see section “Structured masks” in our global response – we find that uniform ablation on each category of attention heads leads to an increase in L_mask to different extents, largest being for the observed-only heads, i.e., (2,2–4), (2,6), (3,2), (3,3), (3,5), (3,1), (3,6).
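For context on the nuclear-norm-minimization baseline mentioned above, a soft-impute-style sketch is one standard way to solve it (this is a generic stand-in, not the authors' implementation; the shrinkage threshold and iteration count are assumptions):

```python
import numpy as np

def soft_impute(X_obs, mask, tau=0.02, iters=500):
    # Nuclear-norm-regularized matrix completion: iteratively fill missing
    # entries with the current estimate, then soft-threshold the singular
    # values by tau (which penalizes the nuclear norm).
    # mask: boolean array, True where the entry is observed.
    Y = np.zeros_like(X_obs)
    for _ in range(iters):
        filled = np.where(mask, X_obs, Y)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        Y = (U * np.maximum(s - tau, 0)) @ Vt  # shrink singular values
    return Y
```

On a small low-rank matrix with most entries observed, this recovers the missing entries far better than predicting zero, which is the kind of baseline the rebuttal compares BERT against.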
We hope that our responses have adequately addressed the reviewer’s concerns and that they will consider increasing their score to support the acceptance of our work.
Rebuttal 1: Rebuttal: We thank the reviewers for their detailed feedback and are excited to see that they find our work interesting! To address the raised questions, we have performed several new experiments, as described below. We will add these results to the final version of the paper. **Larger matrices.** We show that our setup extends to matrices of size 15x15, rank-4 with a 12-layer, 12-head model with similar training dynamics (please see Fig. E in the PDF attachment) and final attention head maps (please see Fig. 14 in the manuscript). Extending to very large matrices is limited by the context length of transformers and compute resources, though we are optimistic that such results should hold in general for larger matrices. We would also like to re-emphasize that our main focus is not solving matrix completion through transformers, but to study transformers in a mathematical setup to (i) model the sudden drop in loss and (ii) interpret the model behavior to understand why this drop occurs. **Probing.** We probe for the true matrix element at each position in each layer; i.e., map the 768-dimensional hidden states at position (i,j) for input X_mask, to the real value X_{ij} (Fig. H in attached PDF). Interestingly, we find that the MSE decreases at an exponential rate for masked positions. We conjecture that this might be related to the idea that each layer computation of the transformer implicitly corresponds to a step of some optimization algorithm (e.g. gradient descent [1, 2]); however, we could not empirically verify that this is the case for our setup. We leave it to future work to confirm if this conjecture holds true in our case. We also attempted to probe the singular vectors of the ground truth matrix in the hidden states of the model. Concretely, we map the 768-dimensional hidden states at different layers through a linear model for a given input matrix X, to the 7-dimensional first left singular vector u of X (i.e.
if X = U.S.V^T is the SVD of X, then u = first column of U). We found that the MSE is too large (~0.14), and the cosine similarity of the estimated vector and the true vector is near random (<0.3, i.e., less than the average cosine similarity of 7-dim vectors sampled from N(0, I)); please see Fig. (I) in the PDF attachment for these results for all layers. Hence, it is not immediately clear that the model has some information about the singular vectors of the input matrix stored in its hidden states that is recoverable through linear probes. A similar finding holds for other singular vectors, i.e., the second column of U and the first 2 columns of V. [1] https://arxiv.org/abs/2306.04637 [2] https://proceedings.mlr.press/v202/von-oswald23a.html **Structured mask.** We find that uniform ablation on each category of attention heads in Section "Attention Heads with Structured Mask" leads to an increase in L_mask to different extents. The maximum effect of ablations is in the observed heads category ((2,2–4), (2,6), (3,2), (3,3), (3,5), (3,1), (3,6)), quantified by measuring the ratio of MSE with ablations to MSE without ablations. The first layer attention heads that are claimed to possibly “process positional and token embeddings” in row 5 of the table also affect the model output to a larger extent. This rigorously verifies the hypothesis that these attention head groups causally affect the model output. **Effect of input distribution** - Training distribution. For training the model, we replace the distribution U[-1, 1] by a standard normal distribution (i.e. X_ij ~ N(0, 1) for input X). We can indeed recover the sudden drop in loss, where the MSE before and after the drop matches the values obtained for U[-1, 1]. Please see Fig. D in the attachment for reference. - Test distribution.
For test-time OOD performance, we find that replacing the distribution of input matrix entries from U[-1, 1] (on which the model is trained) with Gaussian (mean=0, stdev=0.5) and Laplace (location=0, scale=0.5) does not lead to any considerable change in mean-squared-error (MSE). Specifically, the Gaussian entries yield an MSE of 4e-3 and the Laplace entries yield an MSE of 2e-3, averaged over 1024 samples of 7x7 rank-2 matrix inputs. Please note that we already check OOD performance when varying p_mask (Fig 3 in manuscript). Further, we also check OOD performance by changing the number of rows / columns and rank of the input matrix – as observed in Fig. (A–C) in the attached PDF, BERT performance is either comparable to or better than nuclear norm minimization on the same input, except for the case when we modify the number of columns. This is because positional embeddings in our model depend on the column index of the element, and hence (expectedly) changing the number of columns adversely affects model performance. The overall idea is that as long as the input matrix entries do not exceed a certain threshold in magnitude, the model can solve matrix completion. This is because the model has learnt token embedding representations for a fixed range of input values (as defined by U[-1, 1]). The model is not expected to generalize to larger entries that it does not see during training, since it has no knowledge of how tokens for larger entries are represented in the embedding space. This also matches the observations in Sec 5 of [3], where the author shows that changing the variance of the test distribution significantly affects OOD performance of the model. [3] https://openreview.net/forum?id=L2a_bcarHcF Pdf: /pdf/83364e2c9094665be25f0158fbfd4cdc2d923dbc.pdf
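The linear probes described under “Probing” above amount to a least-squares fit from hidden states to targets; a minimal sketch (shapes and data are mocked here, and the helper name is hypothetical):

```python
import numpy as np

def fit_linear_probe(H, y):
    # H: (n_samples, d_hidden) hidden states at a fixed layer/position;
    # y: (n_samples,) scalar targets (e.g., the true matrix entry X_ij).
    # Closed-form least-squares linear probe with a bias term.
    Hb = np.hstack([H, np.ones((H.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Hb, y, rcond=None)
    mse = float(np.mean((Hb @ w - y) ** 2))
    return w, mse
```

A low probe MSE (relative to chance) suggests the target is linearly decodable from that layer's representations; for the singular-vector probes, one would additionally compare cosine similarity against random vectors, as done in the response above.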
NeurIPS_2024_submissions_huggingface
2024
Summary: The paper examines the ability of the transformer model to solve a low-rank matrix completion problem. The numerical matrices with missing values are tokenized and flattened into sequences with masked values, and a BERT transformer is trained to predict the original sequence with both masked and unmasked values. The paper makes the following observations through extensive tests: + The learning of the transformer on the matrix completion task exhibits two stages: copying masked values, and a sudden transition to learning the underlying patterns and drastically decreasing loss. + During the second stage, different attention heads exhibit different behaviors corresponding to different parts of the input: masks, observed values, position embeddings. + The learned token embeddings exhibit clusters between elements from the same row in the original matrix, even when no row/column markers are explicitly provided. These findings align with prior studies focusing on masked-language-modeling (MLM) tasks and provide valuable insights into the learning mechanism of transformers. Strengths: + The paper proposes an interesting combination by linking the low-rank matrix completion problem with MLM, potentially enabling the use of many NLP techniques to solve applications involving the former. + The paper performed extensive experiments and examined the learning behaviors of transformers thoroughly across learning stages and important components of the transformer. + The paper presented sufficient technical details for the experiments implemented, such as the loss function and training parameters. Weaknesses: + Though using matrix completion as an "abstraction" of MLM is interesting and novel, the authors gave little concrete explanation or reference on why insight learned from such a setting is transferable to the language domain, and what the benefit is over directly analyzing language sequences in terms of gaining interpretation and insights.
+ Despite adopting a new task, the main patterns the paper found through experiments are similar to prior works. Technical Quality: 3 Clarity: 2 Questions for Authors: + What is the purpose of training the model to predict all values instead of just the masked ones? + What do the two panels in the figure represent? The right panel caption appears incorrect and the loss surprisingly goes down as p_mask increases. + Is there any consideration for tokenizing the matrix entries, besides mimicking language sequences? Since transformers are increasingly used to process multimodal data whose inputs are already embeddings, it would be interesting to see if the same patterns hold for continuous inputs. Confidence: 3 Soundness: 3 Presentation: 2 Contribution: 2 Limitations: The authors have addressed societal impact and limitations in the conclusion section. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their insightful comments, and are happy to know they found our work interesting! - *Though using matrix completion as an "abstraction" of MLM is interesting and novel … interpretation and insights.* We emphasize that our goal while casting matrix completion as an abstraction of MLM was to define a simple, controllable system that captures the phenomenon that we aim to investigate: sudden drops in the loss. While these drops were first reported by Chen et al. [1] in BERT training, the complexity of natural language prevented the authors from developing precise mechanistic hypotheses about what factors lead to them. However, the simplicity of our setting affords us the ability to investigate model internals, showing that the model not only undergoes a change in the algorithm used to solve the given task, but also completely morphs at a mechanistic level (e.g., attention heads specialize to inferring structures relevant to the task). We hypothesize that such rapid structure acquisition, specifically driven by attention, drives sudden loss drops. While defining this hypothesis was the goal of this work, given that natural language and related practical domains such as code and mathematical problems are filled with rich, structured patterns (e.g., syntactical ones), we expect our hypothesis to hold true therein too. We plan to investigate this more concretely in future work. We will make sure to add the discussion above to the paper and better clarify why our setup is a faithful abstraction and why we expect our results to transfer across domains. [1] https://arxiv.org/abs/2309.07311 - *Despite adopting a new task, the main patterns the paper found through experiments are similar to prior works.* We emphasize that there are currently no explanations for the phenomenon of sudden loss drops in language modeling.
To address this problem, our simplified but rich abstraction offers two concrete observations: (1) the optimization of the model on this task shows an abrupt jump in the model performance and (2) the sudden jump can be analyzed via interpretable evidence that it has learnt the problem structure. These observations allow us to implicate sudden acquisition of task structure by attention layers as the cause behind sudden loss drops. To the best of our knowledge, both our observations and the proposed hypothesis are novel results. Furthermore, we note that the sudden drop observed in our setting occurs without any changes to the optimization procedure (e.g. step-size, warmup etc) during training – prior work studying such loss drops [1] uses learning rate warmup for 10K steps, while we are able to obtain a sudden drop in loss without warmup, and hence have arguably demonstrated a stronger empirical result. [1] https://arxiv.org/abs/2309.07311 - *What is the purpose of training the model to predict all values instead of just the masked ones?* We find that it is difficult to train the model only to predict the masked entries, and predicting all entries substantially aids the training process. We hypothesize that learning to copy the observed entries is helpful for learning to complete the masked entries. We will add this observation to the paper. - *What do the two panels in the figure represent? The right panel caption appears incorrect and the loss surprisingly goes down as p_mask increases.* We note that the caption is indeed correct. Specifically, the right panel in Fig. 3 compares the nuclear norm of the solutions obtained by BERT and nuclear-norm minimization – in this case, as p_mask increases, the MSE displayed in the left panel goes up, as expected. We apologize for the lack of clarity though, and promise to make the caption clearer in the final version of the paper. - *Is there any consideration for tokenizing the matrix entries, besides mimicking language sequences?
Since transformers are … if the same patterns hold for continuous inputs.* Indeed, the motivation behind tokenization is to keep the setup similar to language sequences, where BERT has been shown to perform well empirically. Importantly, as we show in Fig. 7, the model can learn the token embeddings representing the real values with expected structure (principal components separated by sign of the input, and continuously varying with magnitude of input). Hence it should not significantly affect the results if we use specialized embeddings instead of our approach; extending our work to other embedding methods is an interesting avenue for future work.
GDeR: Safeguarding Efficiency, Balancing, and Robustness via Prototypical Graph Pruning
Accept (poster)
Summary: This paper introduces a novel training debugging concept aimed at enhancing efficiency, robustness, and balance during the graph training process. It employs trainable prototypes to dynamically select appropriate samples for each training iteration. The concept is innovative and intriguing. However, the experiments are confined to the graph domain, which raises questions about its generalizability. The authors might need to further elaborate on this issue in the text to address potential limitations. Strengths: 1. Topic of large importance in the community given the direction of the field. The authors' proposed training debugging, namely robust and unbiased soft pruning, is indeed novel and significant, especially in the context of research on large models where data collection is increasingly extensive. This approach addresses critical challenges in ensuring that these large-scale models train effectively and efficiently, mitigating issues that might otherwise go unnoticed due to the complexity and size of the data involved. 2. The article is well written and engaging, particularly excelling in the clarity of its experimental tables and diagrams. These elements contribute to a structure that quickly aids readers in understanding the contributions of the paper. Weaknesses: 1. One of my main concerns is whether this concept is only applicable to graph training. If it is limited solely to graphs, the overall contribution of this work may not reach a particularly high standard. Especially relevant is whether its methodology can be transferred to datasets for CV or NLP. 2. Intuitively, the introduction of prototypes may potentially decrease the training speed of the original GNN backbone. Considering there are $K$ prototypes and $|D|$ training samples, each epoch would require the computation of similarity scores between samples and prototypes at $\mathcal{O}(K \times |D|)$ space complexity. This is bound to introduce additional overhead during backpropagation.
The authors need to provide both a complexity analysis and numerical comparisons to quantify the extra computational burden. 3. The authors should report the results at extremely high sparsity levels (e.g. dropping 90% of the samples). Can your method still behave fairly well? 4. The authors should also compare their method with other techniques for data imbalance in Section 4.3. Data imbalance/long-tail distribution is a long-standing issue with many established solutions in CV. To name a few, focal loss [1] and dynamic sampling [2]. [1] Focal loss for dense object detection [2] Dynamic sampling in convolutional neural networks for imbalanced data classification Technical Quality: 3 Clarity: 4 Questions for Authors: 1. Is the proposed method extendable to other data domains? 2. How do the trainable prototypes affect the training speed? 3. Can GDeR retain its performance even when pruning a great proportion of training samples? 4. Can you compare GDeR with other data imbalance baselines? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 8 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank you for your careful comments and thorough understanding of our paper! Here we give point-by-point responses to your comments and describe the revisions we made to address them. --- > **Weakness 1 & Question 1**: Is the proposed method extendable to other data domains? To address your concerns regarding the transferability of GDeR to other domains, we will first explain (1) how GDeR can be adapted with minor modifications to other data domains and (2) the performance of GDeR on CV datasets. **Minor Modification** On Lines 146-148, we mention that in each epoch, any GNN encoder encodes a graph sample $\mathcal{G}\_i$ into an embedding $\mathbf{h}\_i\in\mathbb{R}^E$, which is then projected into hyperspherical space via the projector $g_\phi: \mathbb{R}^E \rightarrow \mathbb{R}^D$. If we apply this to other data scenarios, such as ResNet+ImageNet training, ResNet would map an image $\mathbf{x}_i$ from ImageNet into a one-dimensional vector $\mathcal{F}(x_i)\in\mathbb{R}^E$ after pooling. We then simply project this $\mathcal{F}(x_i)$ into hyperspherical space, enabling the use of GDeR for data soft pruning in the same manner. Therefore, you can see that GDeR can easily extend to classical backbones in other domains (e.g., ViT, Swin Transformer) and datasets (e.g., CIFAR, ImageNet, COCO). **Performance Evaluation** We test GDeR's performance on two classic CV datasets, CIFAR-10 and ImageNet-1k, as shown in Tables A and B. As can be seen, even on the large-scale ImageNet, GDeR demonstrated consistent training speedup without performance loss. *Table A. Accuracy (%) comparison between Random, InfoBatch and GDeR on CIFAR-10+ResNet-18. Remaining ratios are set among {100%, 70%, 30%}.* |Remaining ratio|100|70|30| |-|-|-|-| |Random|95.69|94.88|90.2| |InfoBatch|95.69|95.36|94.92| |GDeR|95.69|**95.74**|**95.12**| *Table B. Accuracy (%) and wall-clock time (hours) comparison between InfoBatch and GDeR on ImageNet-1k+ResNet-50.
The remaining ratio is set to 60%. Our results are tested on eight NVIDIA Tesla A100 (80GB) GPUs.* |Method|Acc (100%)|Wall-clock Time (100%)|Acc (60%)|Wall-clock Time (60%)| |-|-|-|-|-| |InfoBatch|76.43|6.2h|76.49|3.74h| |GDeR|76.43|6.2h|**76.52**|3.89h| --- > **Weakness 2 & Question 2**: The authors need to provide both a complexity analysis and numerical comparisons to quantify the extra computational burden. Following your suggestions, we provide (1) a complexity analysis and (2) numerical comparisons to better evaluate the efficiency of GDeR. **Complexity Analysis** Taking vanilla GCN as an example, its memory complexity is $\mathcal{O}\left(|\mathcal{D}|\times(L\times |\mathcal{N}|\times E + L\times |\mathbf{\Theta}|\times E^2)\right)$, where $|\mathcal{D}|$ denotes the graph sample count, $L$ denotes the GNN layer count, $|\mathcal{N}|$ denotes the (average) number of nodes, and $E$ represents the hidden dimension. GDeR utilizes an MLP-based projector that maps the graph-level embedding into hyperspherical space, introducing an additional $\mathcal{O}(|\mathcal{D}|\times D)$, where $D$ represents the hyperspherical embedding dimension. Furthermore, each sample needs to compute distances to prototypes, which introduces $\mathcal{O}(|\mathcal{D}|\times |\mathbf{P}|)$, where $|\mathbf{P}|$ denotes the number of prototypes. Overall, the additional complexity introduced by GDeR is $\mathcal{O}(|\mathcal{D}|\times (|\mathbf{P}|+D))$, which is significantly smaller than the complexity of GCN itself. Next, we empirically evaluate the extra burden caused by GDeR. **Numerical Comparisons** We provide a comparison of wall-clock time for GDeR, random pruning, and InfoBatch under different pruning rates in the context of MUTAG+PNA, as well as wall-clock time on GraphGPS+Molhiv.
It can be observed that although GDeR incurs additional computational overhead due to prototype-related calculations compared to random pruning, these costs are marginal and are accompanied by significant performance improvements (2.8% on Molhiv). *Table C. Wall-clock time comparison on MUTAG+PNA across sparsity levels.* | Sparsity | 0.2 | 0.3 | 0.5 | 0.7 | |-|-|-|-|-| | Ours | 115.9506 | 118.4320 | 195.7631 | 241.3048 | | Random | 100.6895 | 100.0426 | 183.6579 | 227.6009 | | InfoBatch | 102.5114 | 114.5953 | 185.8063 | 231.3231 | *Table D. Wall-clock time and performance on GraphGPS+Molhiv.* | Method (prune 70%) | Wall-clock time | Performance | |-|-|-| |Random| 1607.29| 74.1| |InfoBatch| 1694.55| 75.6| |GDeR| 1844.56|**76.9**| --- > **Weakness 3 & Question 3**: The authors should report the results at extremely high sparsity levels (e.g. dropping 90% of the samples). Can your method still behave fairly well? To address your concerns, we compare GDeR with several best-performing baselines under extremely sparse scenarios. *Table E. Results on MUTAG+PNA at extreme sparsity.* |Remaining Ratio (%)|10|15|20| |-|-|-|-| |Random|82.71±1.29|85.92±0.94|87.60±0.52| |UCB|83.11±0.87|85.70±0.81|86.52±0.60| |InfoBatch|83.42±0.93|86.33±0.42|86.8±0.83| |GDeR|**83.59**±1.03|**87.15**±0.59|**88.24**±0.72| --- > **Weakness 4 & Question 4**: The authors should also compare their method with other techniques for data imbalance in Section 4.3. Thank you for your insightful suggestions! Following your instructions, we have supplemented our experiments in imbalanced scenarios by comparing GDeR with two classic data-balancing plugins. As shown in Table F, GDeR outperforms existing data balancing techniques across various imbalanced settings. *Table F. F1-Macro (%) comparison of GDeR, DynamicSample and Graph-level SMOTE on MUTAG+GCN.
We fix the pruning ratio of GDeR to 20%.* |Imbalance Ratio|1:9|3:7|7:3|9:1| |-|-|-|-|-| |Original|58.07|76.52|51.99|50.75| |DynamicSample[1]|63.12|75.41|73.07|69.20| |Graph SMOTE[2]|65.26|77.87|77.68|73.48| |GDeR|71.81|79.10|77.96|75.70| [1] Dynamic sampling in convolutional neural networks for imbalanced data classification [2] Imbalanced Graph Classification via Graph-of-Graph Neural Networks --- Rebuttal Comment 1.1: Comment: Thank you for your response. Introducing ImageNet experiments and complexity analysis is persuasive and effectively showcases GDeR's adaptability. Personally, I appreciate the contribution of this work, and with its proper packaging and open-sourcing, it can become a widely applicable training acceleration technique. Therefore, I have adjusted my rating accordingly. --- Reply to Comment 1.1.1: Comment: We extend our heartfelt thanks to Reviewer zhEK for their increased support and recognition of our paper! We are pleased that our revisions and rebuttal have addressed the concerns!
Summary: This paper presents a novel method for graph neural network training called Graph De-Redundancy. This method aims to enhance the efficiency, balance, and robustness of GNN training. It constructs a hyperspherical embedding space using trainable prototypes to maintain a balanced subset of the training data, addressing the computational and performance challenges posed by large-scale, imbalanced, and noisy datasets. The experiments demonstrate that GDeR can achieve or surpass the performance of full datasets with significantly fewer samples, achieving up to a 2.81× speedup and outperforming existing pruning methods in various scenarios. Overall, this approach achieves an effective, balanced, and robust pruning method for graph datasets, showing the potential for efficient training of graph datasets and even general datasets. Strengths: 1. The study gives a powerful solution that meets the multifaceted needs of balance, stability and efficiency, and has great potential for application, especially on large-scale and unbalanced graph datasets. 2. Using a hyperspherical embedding space and trainable prototypes for dataset pruning in graph learning is quite novel. In addition, the authors give sound explanations and theoretical support for these methods. 3. The paper is fluently written, with clear explanations of the problem, methodology and results. The graphs and visualisations in the paper are especially impressive. Weaknesses: 1. How are the prototypes initialized? I didn't see a note about it in Algo. 1. Does the initialization of prototypes have an impact on the regularization of the hyperspherical space? 2. Does changing the weights of different losses such as $\lambda_1$ in Eq.(13) have an impact on the model output? 3. As shown in Eq.
(14), at the beginning of training $\Psi(t)$ will be relatively large, tending to retain the graphs kept in the last iteration; as training proceeds, $\Psi(t)$ will tend to be close to 0, resulting in a greater tendency to completely replace the graphs kept in the last iteration, so that the graphs retained in each iteration vary greatly in the last few epochs. Can you explain this phenomenon? 4. There appear to be some typos in line 245. $\lambda$ misses a subscript. Technical Quality: 2 Clarity: 3 Questions for Authors: This work looks like it could seamlessly migrate to the CV domain, as the authors state in Section 3.5; have you done any extension experiments accordingly? Confidence: 3 Soundness: 2 Presentation: 3 Contribution: 3 Limitations: The authors have adequately discussed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you immensely for your time and efforts, as well as the helpful and constructive feedback! Here, we give point-by-point responses to your comments. --- > **Weakness 1**: How are the prototypes initialized? I didn't see a note about it in Algo. 1. Does the initialization of prototypes have an impact on the regularization of the hyperspherical space? Thank you for your insightful inquiry! GDeR follows classic prototype learning practices like [1], randomly initializing prototype embeddings for each cluster. Although different random initializations may affect the subsequent regularization of the hypersphere, we respectfully note that this impact is minimal. As shown in Table 4, even when the number of prototypes per cluster $K=1$, the performance variance of GDeR is around $1-2\%$; as $K$ increases, the performance variance tends to decrease. [1] Attribute prototype network for zero-shot learning --- > **Weakness 2**: Does changing the weights of different losses such as $\lambda_1$ in Eq.(13) have an impact on the model output? We are happy to answer your question, and conduct a sensitivity analysis of $\lambda_1$ and $\lambda_2$ on MUTAG+PNA, as shown in Table A. We can conclude that (1) too small a $\lambda_1$ or $\lambda_2$ can lead to under-fitting of the hypersphere and less reliable data pruning; (2) GDeR pruning performs best when $\lambda_1$ and $\lambda_2$ are kept within a similar magnitude. Overall, GDeR is not sensitive to these two parameters, and practitioners can achieve desirable training speedup without extensive parameter tuning. *Table A. Sensitivity analysis of $\lambda_1$ and $\lambda_2$ on MUTAG+PNA. The pruning ratio is set to 50%.* |$\lambda_1$\ $\lambda_2$|5e-2|1e-1|5e-1|1e-0| |-|-|-|-|-| |5e-2|88.72|88.54|88.66|88.38| |1e-1|89.11|89.37|89.21|88.56| |5e-1|88.80|88.24|89.17|89.15| |1e-0|88.03|88.79|89.07|89.12| --- > **Weakness 3**: As shown in Eq.
(14), at the beginning of training $\Psi(t)$ will be relatively large, tending to retain the graph kept in the last iteration; as training proceeds, $\Psi(t)$ will tend to be close to 0, resulting in a greater tendency to completely replace the graph kept in the last iteration, making the retained graph vary greatly in the last few epochs. Can you explain this phenomenon? We apologize for any confusion caused! In Eq. (14), $\Psi(t)$ and $\widetilde{\Psi(t)}$ should be swapped, such that as the training proceeds $\Psi(t)$ will tend to be close to 1, indicating that we should retain the sample set more in the last few epochs. This is because, at the early stages of GNN training, the randomly selected sub-dataset is likely to be highly noisy, necessitating more frequent swapping. By the end of the training, GDeR has progressively filtered out truly representative, unbiased, and balanced sub-datasets, thus reducing the need for sample swapping. --- > **Weakness 4**: There appear to be some typos on line 245: $\lambda$ is missing a subscript. Thank you for your detailed comments! On Line 245, we conducted a grid search for $\lambda_1$ and $\lambda_2$, both ranging over {$1e-1, 5e-1$}. --- > **Question 1**: This work looks like it could seamlessly migrate to the CV domain as the authors state in Section 3.5, have you done any extension experiments accordingly? To address your concerns regarding the transferability of GDeR to other domains, we will first explain (1) how GDeR can be adapted with minor modifications to other data domains and (2) the performance of GDeR on CV datasets. **Minor Modification** On Lines 146-148, we mention that in each epoch, any GNN encoder encodes a graph sample $\mathcal{G}\_i$ into an embedding $\mathbf{h}\_i \in \mathbb{R}^{E}$, which is then projected into hyperspherical space via the projector $g_\phi: \mathbb{R}^E \rightarrow \mathbb{R}^D$.
If we apply this to other data scenarios, such as ResNet+ImageNet training, ResNet would map an image $\mathbf{x}_i$ from ImageNet into a one-dimensional vector $\mathcal{F}(x_i)\in\mathbb{R}^E$ after pooling. We then simply project this $\mathcal{F}(x_i)$ into hyperspherical space, enabling the use of GDeR for data soft pruning in the same manner. Therefore, you can see that GDeR can easily extend to classical backbones in other domains (e.g., ViT, Swin Transformer) and datasets (e.g., CIFAR, ImageNet, COCO). **Performance Evaluation** We test GDeR's performance on two classic CV datasets, CIFAR-10 and ImageNet-1k, as shown in Tables B and C. As can be seen, even on the large-scale ImageNet, GDeR demonstrates consistent training speedup without performance loss. *Table B. Accuracy (%) comparison between Random, InfoBatch and GDeR on CIFAR-10+ResNet-18. Remaining ratios are set among {100%, 70%, 30%}. Our results are tested on one NVIDIA Tesla A100 (80GB GPU).* |Remaining ratio|100|70|30| |-|-|-|-| |Random|95.69|94.88|90.2| |InfoBatch|95.69|95.36|94.92| |GDeR|95.69|**95.74**|**95.12**| *Table C. Accuracy (%) and Time (h) comparison between UCB, InfoBatch and GDeR on ImageNet-1k+ResNet-50. The remaining ratio is set to 60%. Our results are tested on eight NVIDIA Tesla A100 (80GB) GPUs.* |Remaining ratio|100||60|| |-|-|-|-|-| |Metric|Acc|Time|Acc|Time| |InfoBatch|76.43|6.2h|76.49|3.74h| |GDeR|76.43|6.2h|**76.52**|3.89h|
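As a concrete illustration of the projection step described above, the following is a minimal sketch, not the paper's exact implementation: the projector architecture (a two-layer MLP with hypothetical weights `W1`, `W2`) and the dimensions are illustrative assumptions. A pooled backbone feature in $\mathbb{R}^E$ is mapped to $\mathbb{R}^D$ and then $\ell_2$-normalized so it lies on the unit hypersphere, after which prototype-based pruning can operate on it in the same manner as for graph embeddings.

```python
import numpy as np

def project_to_hypersphere(h, W1, W2):
    """Project a backbone embedding h in R^E onto the unit hypersphere in R^D
    via a hypothetical two-layer MLP projector followed by l2-normalization."""
    z = np.maximum(W1 @ h, 0.0)   # hidden layer with ReLU
    z = W2 @ z                    # map to R^D
    return z / np.linalg.norm(z)  # constrain to the unit hypersphere

rng = np.random.default_rng(0)
E, H, D = 512, 256, 128           # embedding / hidden / sphere dims (illustrative)
h = rng.normal(size=E)            # e.g., a pooled ResNet feature for one image
W1 = rng.normal(size=(H, E)) * 0.01
W2 = rng.normal(size=(D, H)) * 0.01
u = project_to_hypersphere(h, W1, W2)
print(round(float(np.linalg.norm(u)), 6))  # prints 1.0 -- u lies on the sphere
```

The same wrapper would apply unchanged to any backbone that emits a flat feature vector, which is the sense in which the extension requires only a minor modification.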
Summary: This paper addresses the computational and memory challenges posed by large datasets in the training of graph neural networks (GNNs). The authors propose GDeR, a dynamic soft-pruning method that leverages trainable prototypes to regularize the graph embedding distribution. This approach aims to maintain a balanced and unbiased subset of the data during training, thereby enhancing the efficiency and robustness of the learning process. Strengths: (1) Soft dataset pruning for graph data has remained underexplored in past years, and GDeR makes the first attempt. (2) The paper reads very well, with straightforward motivation and well-organized methodology. I appreciate its visual aids (Figures 2 and 3). (3) The experimental results (particularly on GraphMAE+ZINC) are encouraging. It is demonstrated that GDeR can save over 60% of the training wall-clock time without performance decay. I believe GDeR has the potential to become a general acceleration operator/plugin for the graph learning community. Weaknesses: (1) The methodology is primarily organized around graph classification. Though the authors claim that GDeR can be extended to graph regression, more experiments are needed to validate their claim in Section 4.2. (2) In robust GNN training, the baselines compared are outdated. There are stronger robust GNN baselines [1,2,3]. (3) The authors merely test GDeR with a poisoning attack on node features. Is GDeR capable of addressing evasion attacks? Also, the experiments could be stronger if the authors used more advanced GNN attack methods like Mettack [4] or Nettack [5]. Minor: I recommend the authors move Table 7 to the main text. Pretraining GNNs presents a greater computational burden, so accelerating it is more meaningful. Besides, on Lines 184 and 189, does $D^{t}$ refer to $X_t$? [1] Adversarial robustness in graph neural networks: A Hamiltonian approach, NeurIPS 2023.
[2] A Simple and Yet Fairly Effective Defense for Graph Neural Networks, AAAI 2024 [3] Graphde: A generative framework for debiased learning and out-of-distribution detection on graphs, NeurIPS 2022 [4] Adversarial attacks on node embeddings via graph poisoning, ICML 2019 [5] Adversarial attacks on neural networks for graph data, KDD 2018 Technical Quality: 3 Clarity: 3 Questions for Authors: (1) Figure 1 shows the sample label distribution using InfoBatch, right? If so, what is the distribution like with GDeR? (2) Can you give a case study of the graph samples pruned by GDeR? For example, how many of them contain noise? How many of them are from the majority class? Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The limitations are discussed in the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We sincerely thank Reviewer RKXd for the thoughtful and constructive reviews of our manuscript! Based on your questions and recommendations, we give point-by-point responses to your comments and describe the revisions we made to address them. --- > **Weakness 1**: Though the authors claim that GDeR can be extended to graph regression, more experiments are needed to validate their claim in Section 4.2. Thank you for your insightful comment! We would like to respectfully point out that in Section 3.5 and Appendix D, we provide a detailed explanation of how to extend GDeR to the scenarios of graph regression or unsupervised learning. In Observation 3 of Section 4.2, we analyze the pruning performance and efficiency acceleration of GDeR on GraphMAE+ZINC. We hope this can help you better understand the versatility of GDeR. --- > **Weakness 2**: In robust GNN training, the baselines compared are outdated. There are stronger robust GNN baselines [1,2,3]. We admire your extensive knowledge! As the methods you cited [1,2] are limited to node classification, we supplemented the comparison of GraphDE [3] and the latest SOTA method MRGL [4] against our GDeR. Additionally, we employed a newer graph attack method, GRABNEL [5], for perturbation attacks. As shown in Table A, GDeR outperforms GraphDE and MRGL in high-noise scenarios through its unique soft pruning mechanism. *Table A. F1-Macro (%) comparison among GraphDE, MRGL and GDeR on DAGCN+MUTAG with GRABNEL.
We set the pruning ratio of GDeR to 30%.* |Noise ratio|0%|5%|10%|20%| |-|-|-|-|-| |GraphDE|85.12|82.99|81.45|79.16| |MRGL|85.12|**84.66**|82.07|78.59| |GDeR|85.12|83.08|**83.46**|**80.14**| [3] Graphde: A generative framework for debiased learning and out-of-distribution detection on graphs, NeurIPS 2022 [4] Multi-View Robust Graph Representation Learning for Graph Classification, IJCAI 2023 [5] Adversarial Attacks on Graph Classification via Bayesian Optimisation, NeurIPS 2021 --- > **Weakness 3**: The authors merely test GDeR with a poisoning attack on node features. Is GDeR capable of addressing evasion attacks? Also, the experiments could be stronger if the authors used more advanced GNN attack methods like Mettack [4] or Nettack [5]. Thank you for your feedback! Currently, GDeR only employs defense against adversarial poisoning attacks, as it can dynamically identify and remove noisy samples during training. However, we would like to respectfully state that this does not affect the contribution of our work, as we understand that most current robust GNN methods focus on solving poisoning attacks rather than evasion attacks. In our manuscript, we only used noise perturbation on node features as the attack method. This is because there are currently few attack methods specifically designed for graph-level classification tasks, as noted in [1]. However, we have supplemented our response to Weakness 2 with a performance comparison of different robust GNN methods against the newer graph-level attack GRABNEL, and we hope this will address your concerns. [1] Adversarial Attacks on Graph Classification via Bayesian Optimisation --- > **Minor** Thank you for your insightful suggestions! We promise to include Table 7 in the main text in the camera-ready version. Regarding Line 184/189, we did mistakenly write $\mathcal{X}_t$ as $\mathcal{D}^{(t)}$, and we appreciate your correction! --- > **Question 1**: Figure 1 shows the sample label distribution using InfoBatch, right?
If so, what is the distribution like with GDeR? Thank you for making our work even stronger! We tracked the training trajectory and sample distribution of GDeR on MUTAG+GCN. **The visualizations are available in Figure 1 of the global rebuttal PDF**. As can be seen, unlike InfoBatch, which amplifies sample imbalance, our GDeR can effectively alleviate the problem of imbalance by allocating more samples from minority classes and achieving a stable distribution in the later stages of training. --- > **Question 2**: Can you give a case study of the graph samples pruned by GDeR? For example, how many of them contain noise? How many of them are from the majority class? Thank you for your comment, which has further improved the quality of our article! - Regarding **pruning for noisy data**, we tracked the training trajectory of GDeR and visualized how it balances the distribution of noisy and clean data in noisy scenarios. **The visualizations are available in Figure 2 of the global rebuttal PDF**. It can be observed that as training progresses, GDeR gradually corrects the distribution of the training set, reducing the proportion of noisy/biased data in the maintained training set and thereby reducing the impact of biased samples on the model. - Regarding **pruning of imbalanced data**, please refer to our response to Question 1. --- Rebuttal Comment 1.1: Title: Response Comment: I read the replies, which addressed most of my concerns. I would like to raise my score. --- Reply to Comment 1.1.1: Comment: We are glad that our rebuttal has effectively addressed your concerns! We are still open to any further questions you may have :)
null
null
Rebuttal 1: Rebuttal: Dear Reviewers, We extend our sincere gratitude for your dedication to the review process. We are truly encouraged by the reviewers' recognition of several positive aspects of our paper, including **a novel and significant data pruning method** (Reviewers `RKXd`, `kNzQ`, `zhEK`), **high-quality presentation** (Reviewers `RKXd`, `kNzQ`, `zhEK`), and **encouraging experimental results** (Reviewer `RKXd`). Additionally, we have diligently addressed and responded to each concern raised by every reviewer. Here, we summarize several key points as follows: 1. **Transferability to other data domains** (Reviewers `kNzQ`, `zhEK`) To demonstrate the transferability of GDeR to other data domains, we present its performance on the CIFAR-10 and ImageNet datasets (in Tables 1 and 2 of the global rebuttal PDF). 2. **Updated baselines** (Reviewers `RKXd`, `zhEK`) We compare GDeR with more advanced robust GNN and long-tail techniques (in Tables 3 and 4). 3. **Complexity and efficiency** (Reviewer `zhEK`) We supplement the complexity analysis of GDeR and provide the corresponding wall-clock time evaluation (in Tables 5 and 6). 4. **Case Study** (Reviewer `RKXd`) We have supplemented a case study, demonstrating how GDeR dynamically removes noisy data and alleviates long-tail distribution in practical training processes (in Figures 1 and 2). Thanks again to all the reviewers; your expertise significantly helps us strengthen our manuscript – this might be the most helpful review we have received in years! We are willing to address any further concerns or questions you may have. Warm regards, Authors Pdf: /pdf/3a4329b72f7c82a6c074692b8f2d4d0b4585f24d.pdf
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Shadowheart SGD: Distributed Asynchronous SGD with Optimal Time Complexity Under Arbitrary Computation and Communication Heterogeneity
Accept (poster)
Summary: The paper studied distributed asynchronous SGD with heterogeneous communication and computation times for non-convex stochastic optimization problems. In particular, the paper proposed a new algorithm called Shadowheart SGD and analyzed its time complexity. The time complexity is proven to be optimal in the family of centralized methods with compressed communication by providing a lower bound. This time complexity is also shown to be better than those in the literature. Strengths: 1. The paper considered a general scenario of distributed/federated learning by considering asynchronous SGD, arbitrary computation and communication times, and unbiased compression. 2. The paper proposed a new and non-trivial algorithm called Shadowheart SGD, which is optimal in terms of time complexity. 3. The paper introduced the equilibrium time, which is a key parameter to characterize the number of computations and communications in Shadowheart SGD. 4. The paper also proposed Adaptive Shadowheart SGD, which does not require knowledge of the equilibrium time or of the communication and computation times. Weaknesses: 1. The paper was not well written and is very hard to read. It reads like a summary of the results that are in the appendix. In addition, the intuitive explanations are very few and many notations were not well explained. 2. The paper considered only the IID data distribution. This almost rules out the application of the algorithm in federated learning. The results are nice contributions to the existing results but they seem minor. 3. It lacks experimental results. All the experiments use either logistic regression with the MNIST dataset or quadratic optimization and multiplicative noise. Since the paper proposed a new algorithm, I believe the experiments need to be much more extensive. Technical Quality: 3 Clarity: 2 Questions for Authors: 1. On page 3, why is $D_v$ a distribution on $\mathbb{S}_{\xi}$?
Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: The paper only focuses on the IID dataset. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the comments! Let us respond to the weaknesses: > The paper was not well written and is very hard to read. It reads like a summary of the results that are in the appendix. In addition, the intuitive explanations are very few and many notations were not well explained. We are sorry if some parts of the paper are not clear. We will be happy to clarify the results. Note that we have an extensive list of notations in Section B. The intuitive explanations of our algorithm, the lower bound, and the equilibrium time are presented in Sections 4, 5, and 6. > The paper considered only the IID data distribution. This almost rules out the application of the algorithm in federated learning. The results are nice contributions to the existing results but they seem minor. This is true, we do not hide this (Lines 33-34). However, in this paper, we chose to focus on this homogeneous setting due to the significant and unexpected results it yields, such as the equilibrium time. We believe that we developed enough new results for one paper. Of course, the non-IID setup is equally important, and we are currently considering it, but it will require a separate project. > It lacks experimental results. All the experiments use either logistic regression with the MNIST dataset or quadratic optimization and multiplicative noise. Since the paper proposed a new algorithm, I believe the experiments need to be much more extensive. Notice that our experiments are deliberately and scrupulously designed to capture the dependencies from the theory. We have many plots with different input parameters to understand the behaviors of the algorithms. But in order to run controlled experiments, we have to exclude all external noise and parameters that more complex losses and models introduce. That is why we choose standard optimization problems that have fewer hyperparameters and uncertainties. > On page 3, why is $D_v$ a distribution on $\mathbb{S}_{\xi}$? There is a typo. Thank you, we will fix it!
Summary: The paper proposes a method (Shadowheart SGD) for centralized asynchronous optimization with compression. The lower complexity bounds are proposed, as well. It is shown that Shadowheart SGD achieves the lower bounds, meaning that the method is optimal. Strengths: The main strength is the development of an optimal method. The paper contribution is outlined and clearly presented. The proposed method is compared to others in several regimes of communication delays, showing how Shadowheart SGD can outperform its counterparts. Weaknesses: Personally, I did not understand the discussion of equilibrium time well. I understand that it comes from the analysis; also the examples are illustrative. But still, can equilibrium time have some interpretation like mixing time or first hit time in Markov chains? Or maybe it is possible to interpret equilibrium time using some random process, like the authors do in [1]. Also citation of MARINA on line 76, page 3 is missing. [1] Vogels, T., Hendrikx, H., & Jaggi, M. (2022). Beyond spectral gap: The role of the topology in decentralized learning. Advances in Neural Information Processing Systems, 35, 15039-15050. Technical Quality: 3 Clarity: 4 Questions for Authors: See question on equilibrium time above. Confidence: 3 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The paper is mostly theoretical and does not have negative societal impact. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you very much for the review! Let us clarify the weaknesses and questions: > Personally, I did not understand the discussion of equilibrium time well. I understand that it comes from the analysis; also the examples are illustrative. Indeed, the equilibrium time, in some sense, is a "mixing time": a time when the server can aggregate enough information from stochastic vectors to confidently do the optimization step, since the randomness has stabilized by that time. We believe one of the best possible intuitive explanations of the equilibrium time is presented in Lines 130-138. The larger the time $t$, the larger $b_i$ and $m_i$. Due to the law of large numbers, $g^k$ tends to the exact gradient $\nabla f(x^k)$ as $t \to \infty$. The equilibrium time $t^*$, in some sense, says when $g^k$ is close enough to $\nabla f(x^k)$, and the server can confidently do the next optimization step. > Also citation of MARINA on line 76, page 3 is missing. We will add the relevant citation. Thank you. --- Rebuttal Comment 1.1: Title: Answer to Authors Comment: Dear Authors, Thank you for the reply. I decided to maintain my score.
Summary: This paper considers distributed and centralized smooth non-convex optimization when workers have different computation and communication speeds. These different speeds on the workers (and even possibly the server) characterize the problem's device heterogeneity. The authors provide a new algorithm that uses unbiased compression and, more importantly, assigns a time budget for each communication round to optimize for total time complexity. This time budget is calculated based on the speeds of different workers and is the main contribution of this paper. Adapting classic lower bounds for distributed zero-respecting algorithms, and using the RandK compression algorithm, the authors show their algorithm is optimal. The authors also offer extensions, such as an adaptive variant of their algorithm, and consider the setting where server communication requires some time. Strengths: The paper is well written, and a reader not familiar with the literature gets to a very good understanding of the current literature in the area after reading the first few sections of the paper. The results are also presented in a transparent, easy to understand, and mathematically sound manner. The studied problem is a fundamental problem in distributed optimization, and underlies several more complicated-seeming problems in federated learning. The authors' attempt at showing optimality of their procedure, at least for RandK, also provides context on any scope for improvement in the current result. Finally, the authors consider several important extreme limits as well as baselines, making clear comparisons with them. Overall, it is a good paper with only some flaws that can be corrected using remarks and small additional comments, which can be accommodated in the camera-ready version. I support accepting the paper, and am happy to further increase my score based on author rebuttal and discussion. Weaknesses: I have the following comments in decreasing order of importance: 1.
I am not sure if the lower bound actually holds for all unbiased compression schemes. For instance, if the unbiased compression scheme is to add random noise to the gradient vector before arbitrarily picking a direction to communicate, then it won't be distributed zero-respecting. Of course, with RandK this does not happen, because if it picks a coordinate it wouldn't make it non-zero if it isn't already. This is an important subtlety and should be incorporated in the discussion, either by removing the above compression scheme I mentioned through an additional constraint in the definition of distributed zero-respecting algorithms (with compression), or by stating that the lower bound is only for the RandK compression operator. Otherwise, it is not accurate to say the proposed method is optimal. 2. In the introduction, while specifying the scope of the problem, it would also be useful to discuss the issue of bit communication complexity somewhere. It is true that in some settings the communication time complexity only depends on the total number of communications, but in more optimized channel communication the bit complexity is an important factor as well, and for instance in the general setup $\tau_i$ should be a function of $K$ for, say, the RandK compressor. I of course don't expect the authors to solve all problems in a single paper, but discussing this would make the survey more exhaustive in my opinion. 3. The authors should comment on when (12) is a reasonable approximation, even if the calculations are not exact. 4. Small typos: line 112 should be "is not negligible"; line 134 should have $t/2$ instead of $t$, unless the authors are using some interlacing idea to compute and communicate simultaneously; in line 217 it should be "it is left". Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Do the authors have thoughts on how statistical heterogeneity can be considered in this setting?
I am concerned that the current algorithm might heavily bias the final output towards the databases of agents that are pretty quick. 2. Do the authors have thoughts on the second weakness I raised above? In particular, it is an open question to obtain algorithms with optimal round and bit communication complexity, and understanding the regime with non-trivial bit complexity can help make progress towards that setting. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Nothing big, but please see my comments above in the weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your very positive comments! We now respond to the weaknesses: > 1. I am not sure if the lower bound actually holds for all unbiased compression schemes. In the lower bound (Theorem O.5), we chose a particular compressor, Rand$K$. Thus, indeed, our lower bound works only with one compressor. We do not hide this since we state in Theorem O.5, saying, "there exists ... communication oracles," which indicates that the theorem works with fixed compressors. We also admit this in Line 205 of the main part. We will add this detail more explicitly. We want to note that choosing one particular bad compressor (the RandK compressor in this case) is the same as choosing one particular bad function, which is the standard approach in the literature. For instance, Yurii Nesterov, in his celebrated book [1] (like many other works), chooses one particular bad function (a special quadratic function) to show the lower bounds. As far as we know, nobody has shown that the lower bounds from [1] work for *any* function (because it is a very difficult task, and possibly it is not true). Proving the lower bound for any compressor can also be very difficult and even impossible. > 2. In the introduction while specifying the scope of the problem, it would also be useful to discuss the issue of bit communication complexity somewhere. > 2. Do the authors have thoughts on the second weakness I raised above? In particular, it is an open question to obtain algorithms with optimal round and bit communication complexity, and understanding the regime with non-trivial bit complexity can help make progress towards that setting. Let us clarify this weakness and question. The parameter $\tau_i$ is the number of seconds required to send a compressed vector $C(x)$ to the server (e.g., Line 20). For simplicity, let us consider the Rand$K$ compressor.
Then $\tau_i$ is the number of seconds required to send $K$ non-zero coordinates of the vector $x.$ Now assume that the communication is proportional to the number of coordinates/bits that a worker sends, i.e., it takes $\dot{\tau}_i$ seconds to send **one coordinate/bit** (see Section 7). Then, clearly, we have $$\tau_i = K \times \dot{\tau}_i,$$ and, indeed, $\tau_i$ is a function of $K!$ Hence, $\tau_i$ is proportional to the number of coordinates/bits that a worker sends in one compressed message, which we believe is in line with the expectations of the reviewer. We discuss this in Section 7 for Rand$1$ in detail when comparing the methods. In general, we do not specify the exact dependencies between communication times $\tau_i$ and the number of bits/information that a compressor sends. While it is clear for RandK, for all possible compressors the dependence can be very nontrivial, so we decided to abstract this away. The optimal time complexity ((10) from the paper) captures bit communication using the times $\tau_i.$ We agree that this should be noted and clarified in the main part, thank you! P.S. We also asked ourselves what would be the optimal choice of $K$ in Rand$K$ to get the best possible time complexity. It turns out an optimal choice is $K = 1$ (up to a constant factor). See Property 6.2. > 3. The authors should comment on when... This $\approx$ is tight up to a constant because $$\frac{1}{2 x} \leq \frac{1}{x + y} \leq \frac{1}{x}$$ if $x \geq y,$ and $$\frac{1}{2 y} \leq \frac{1}{x + y} \leq \frac{1}{y}$$ if $y \geq x.$ > 4. Small typos Thank you for your attention. We will fix them. > 1. Do the authors have thoughts on how statistical heterogeneity can be considered in this setting? This is a good future research direction that we are indeed considering right now. But in this paper, we decided to focus on this setting due to the current non-trivial and non-obvious results (e.g., the equilibrium time) that we believe deserve a separate paper.
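To make the Rand$K$ cost model discussed above concrete, here is a minimal, generic textbook-style sketch (not the paper's code): an unbiased Rand$K$ sparsifier keeps $K$ of the $d$ coordinates and rescales by $d/K$, so that $\mathbb{E}[C(x)] = x$; sending $C(x)$ transmits exactly $K$ coordinates, which is why $\tau_i = K \times \dot{\tau}_i$.

```python
import random

def rand_k(x, k, rng):
    """Unbiased Rand-K sparsifier: keep k of the d coordinates, chosen
    uniformly at random, and rescale by d/k so that E[C(x)] = x.
    Sending C(x) transmits k coordinates, so with dot_tau seconds per
    coordinate the communication time is tau = k * dot_tau."""
    d = len(x)
    out = [0.0] * d
    for j in rng.sample(range(d), k):
        out[j] = x[j] * d / k
    return out

rng = random.Random(0)
x = [1.0, -2.0, 3.0, 0.5]
n = 100_000
mean = [0.0] * len(x)
for _ in range(n):
    for j, cj in enumerate(rand_k(x, 1, rng)):
        mean[j] += cj / n
print([round(m, 2) for m in mean])  # empirical mean is close to x (unbiasedness)
```

The choice $k = 1$ in the demo mirrors the observation in Property 6.2 that $K = 1$ is an optimal choice up to a constant factor.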
[1]: Nesterov, Yurii, Lectures on convex optimization, 2018 --- Rebuttal Comment 1.1: Comment: Thanks for the response; I will keep my positive score and recommend accepting the paper with the aforementioned clarifications.
Summary: The paper presents a novel method for non-convex stochastic optimization in an asynchronous centralized distributed setup, focusing on improving time complexity in heterogeneous environments. Additionally, the authors demonstrate that the proposed method achieves theoretically optimal time complexity under compression. Strengths: 1. **Novel Method**: The proposed Shadowheart SGD guarantees finding an $\varepsilon$-stationary point with optimal time complexity in heterogeneous environments. 2. **Theoretical Contributions**: The paper provides rigorous proofs showing that Shadowheart SGD has better or equal time complexity compared to existing centralized methods. 3. **Optimal Time Complexity**: The time complexity analysis shows that Shadowheart SGD can be significantly better than previous methods in many regimes. Weaknesses: 1. The definition and calculation of the equilibrium time are complex and not very intuitive. The implicit nature of this definition might make practical implementation and understanding challenging. 2. While the proposed method improves time complexity, the paper does not fully discuss the impact on final convergence and loss in real-world scenarios with limited training datasets. Additionally, the experiments only compare the loss of different SGD methods under the same wall time, but it would be insightful to know if Shadowheart SGD maintains lower loss when comparing the number of training samples. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Can the process of calculating the equilibrium time be simplified or made more intuitive? Are there any practical algorithms or tools recommended for this calculation? 2. Similar to Weakness 2. Regarding the impact on convergence and loss in scenarios with limited training datasets, how does Shadowheart SGD perform when comparing the loss against the number of training samples?
Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the positive review! Let us address the weaknesses and questions: > The definition and calculation of equilibrium time are complex and not very intuitive. The implicit nature of this definition might make practical implementation and understanding challenging. Unfortunately, with the equilibrium time, we have a situation like the math community has, for instance, with the Gamma function, which also does not have an explicit formula, but that does not make the Gamma function less useful. We devoted the whole of Section 6 to explaining the idea behind the equilibrium time. Note that in Section 6.1, we describe an algorithm for finding the equilibrium time numerically. > Can the process of calculating equilibrium time be simplified or made more intuitive? Are there any practical algorithms or tools recommended for this calculation? Yes, we describe an algorithm in Section 6.1. > While the proposed method improves time complexity, the paper does not fully discuss the impact on final convergence and loss in real-world scenarios with limited training datasets. Additionally, the experiments only compare the loss of different SGD methods under the same wall time, but it would be insightful to know if Shadowheart SGD maintains lower loss when comparing the number of training samples. > Similar to Weakness 2. Regarding the impact on convergence and loss in scenarios with limited training datasets, how does Shadowheart SGD perform when comparing the loss against the number of training samples? It is easy to show that Shadowheart SGD requires fewer training samples than Rennala SGD and Asynchronous SGD.
All three of these algorithms utilize the computational resources almost all the time during the optimization process, meaning that they use $$\approx \sum_{i=1}^n \lfloor\frac{t}{h_i}\rfloor$$ training samples by time $t.$ Since the time complexity $t_{\textnormal{shadowheart}}$ of Shadowheart SGD is smaller than that of the other methods, we can conclude that $$\sum_{i=1}^n \lfloor\frac{t_{\textnormal{shadowheart}}}{h_i}\rfloor < \sum_{i=1}^n \lfloor\frac{t_{\textnormal{rennala}}}{h_i}\rfloor$$ and $$\sum_{i=1}^n \lfloor\frac{t_{\textnormal{shadowheart}}}{h_i}\rfloor < \sum_{i=1}^n \lfloor\frac{t_{\textnormal{asynchronous}}}{h_i}\rfloor.$$ Therefore, Shadowheart SGD uses fewer training samples to find an $\varepsilon$-stationary point. Compared to Minibatch SGD, which requires $\frac{n L \Delta}{\varepsilon} + \frac{\sigma^2 L \Delta}{\varepsilon^2}$ training samples to find an $\varepsilon$-stationary point, our method can indeed *calculate* more stochastic gradients because $b_i = \lfloor \frac{t^*}{h_i} \rfloor$ can be large if $h_i$ is small. However, Minibatch SGD provably requires more time to solve the problem since some workers are idle during the optimization process (see Lines 55-56 and Table 1). Notice that we work with i.i.d. stochastic gradients, and nothing stops us from reusing previously used training samples. For instance, we can organize the stochastic gradient calculation in such a way that workers calculate $\nabla f(x;\xi),$ where $\xi$ is a uniformly random sample *with replacement*. What if we cannot reuse samples? When writing the paper, we tried to use all of the workers' time and utilize them as much as we could to improve performance. 
That is why we take $b_i = \lfloor \frac{t^*}{h_i} \rfloor.$ But if we want to limit the number of samples that Shadowheart SGD uses, then we can slightly modify the algorithm and take $b_i = \min \left[\lfloor \frac{t^*}{h_i} \rfloor, b_{\max}\right],$ where $b_{\max}$ is a parameter that bounds the number of samples used.
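The sample-count comparison in this rebuttal can be made concrete with a small sketch. The worker times $h_i$ and the per-method time complexities below are made-up numbers for illustration only, not values from the paper:

```python
import math

def samples_used(t, worker_times):
    """Total samples consumed by time t when every worker computes
    stochastic gradients back-to-back (one sample takes h_i seconds)."""
    return sum(math.floor(t / h) for h in worker_times)

# Hypothetical per-sample times h_i for four heterogeneous workers.
h = [1.0, 2.0, 5.0, 10.0]

# Made-up time complexities: Shadowheart finishes earlier than the baselines.
t_shadowheart, t_rennala, t_async = 40.0, 60.0, 80.0

# A smaller time complexity directly implies fewer samples consumed.
assert (samples_used(t_shadowheart, h)
        < samples_used(t_rennala, h)
        < samples_used(t_async, h))

def capped_batches(t_star, worker_times, b_max):
    """Capped variant from the rebuttal: b_i = min(floor(t*/h_i), b_max)."""
    return [min(math.floor(t_star / h), b_max) for h in worker_times]

print(capped_batches(40.0, h, b_max=16))  # → [16, 16, 8, 4]
```

The cap `b_max` bounds the total sample budget while still letting fast workers contribute larger batches than slow ones.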
null
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Achieving Near-Optimal Convergence for Distributed Minimax Optimization with Adaptive Stepsizes
Accept (poster)
Summary: The paper presents D-AdaST, a distributed adaptive minimax method designed to address non-convergence issues in nonconvex-strongly-concave (NC-SC) minimax problems caused by inconsistent locally computed adaptive stepsizes in distributed settings. The method incorporates a stepsize tracking mechanism, which ensures consistency across local stepsizes and maintains time-scale separation, achieving near-optimal convergence rates comparable to centralized counterparts. Strengths: The paper addresses a significant challenge in distributed minimax optimization by introducing a method that achieves near-optimal convergence rates. The theoretical contributions are well-founded, and the experimental validation is comprehensive. Weaknesses: Please consider my questions for this part. Technical Quality: 3 Clarity: 3 Questions for Authors: 1- How does D-AdaST compare to other existing adaptive minimax methods in terms of computational time efficiency? 2-The current analysis focuses on nonconvex-strongly-concave minimax problems. How would the theoretical framework extend to other problem classes, such as nonconvex-nonconcave or convex-concave problems? What modifications or additional assumptions would be necessary? 3-In a distributed setting, communication delays and asynchrony can pose significant challenges. How does the theoretical analysis account for these factors? Are there any bounds or guarantees provided for performance in asynchronous or delayed communication settings? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: They discussed the limitations of their work in terms of assumptions and main results. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. Please see below for a detailed point-by-point response. >__Q1:__ How does D-AdaST compare to other existing adaptive minimax methods in terms of computational time efficiency? __Response:__ We note that, as discussed in the Introduction section, there is limited literature on distributed adaptive minimax methods; indeed, the proposed D-AdaST is the first distributed adaptive method that achieves near-optimal convergence without knowledge of problem-dependent parameters for nonconvex minimax problems. The resulting *sample complexity* $\tilde{\mathcal{O}} \left( \epsilon ^{-(4+\delta)} \right)$ with arbitrarily small $\delta>0$ represents the number of stochastic gradient computations needed to find an $\epsilon$-stationary point, which indicates the theoretical computational complexity and is near-optimal compared to the existing lower bound $\varOmega \left( \epsilon ^{- 4} \right)$ for centralized nonconvex minimax optimization [6]. Experimentally, we compared the convergence performance of D-AdaST with the distributed variants of AdaGrad, TiAda, and NeAda in terms of the number of *gradient calls* (see Figures 3 and 4), which properly represents the computational efficiency of the algorithms when all experiments are run on the same server with the same batch size. We believe that these results demonstrate the superiority of the proposed D-AdaST algorithm regarding computational efficiency. >__Q2:__ The current analysis focuses on nonconvex-strongly-concave minimax problems. How would the theoretical framework extend to other problem classes, such as nonconvex-nonconcave or convex-concave problems? What modifications or additional assumptions would be necessary? 
__Response:__ Given the limited existing work on distributed adaptive minimax methods, we believe that it is an interesting and important direction for future work to study other problem settings, such as nonconvex-nonconcave or convex-concave objective functions. For the convex-concave setting, we expect that the convergence analysis techniques from centralized methods [14] can be used to obtain results in our decentralized setting. Going beyond the convex-concave assumption poses a major challenge in the sense that there is no well-formulated suboptimality measure for general nonconvex-nonconcave problems. Specifically, for a general nonconvex-nonconcave minimax problem, a saddle point may not exist even in a centralized setting [15], and determining its existence is known to be NP-hard. Therefore, making additional assumptions to ensure better properties of the inner problem is necessary, such as employing the Polyak-Lojasiewicz condition on $y$ to obtain global convergence for the nonconvex-nonconcave problem [16]. >__Q3:__ In a distributed setting, communication delays and asynchrony can pose significant challenges. How does the theoretical analysis account for these factors? Are there any bounds or guarantees provided for performance in asynchronous or delayed communication settings? __Response:__ We believe that it is an independently interesting and important problem to consider communication delays and asynchronous communication protocols in distributed minimax optimization, which requires major adjustments to the communication model. In particular, we notice that in [17], by constructing an augmented topology graph (c.f., Fig. 2), the delayed and asynchronous system is reduced to a synchronous augmented one with no delays by adding virtual nodes to the graph. 
They obtained linear and sublinear convergence for strongly convex and non-convex minimization, respectively, under the assumptions of a bounded delay and an asynchronous model (see Assumption 6 in [17]). This motivates us to potentially extend this communication-modeling technique to our setting in future work. --- Rebuttal Comment 1.1: Comment: I appreciate the authors' response and clarification. I am satisfied with their response and maintain my score. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer KXvx Comment: We appreciate the reviewer's acknowledgement. Thank you for taking the time to carefully review our paper and respond to our clarifications.
Summary: This paper proposes a decentralized stochastic first-order method for nonconvex minimax optimization. For the nonconvex-strongly-concave setting, the proposed D-AdaST has a convergence rate of $O(\epsilon^{-(4+\delta)})$ for any small $\delta>0$. Strengths: see questions Weaknesses: see questions Technical Quality: 3 Clarity: 3 Questions for Authors: The proposed method does not require knowledge of the parameters $L$, $\mu$, or the total number of iterations, which is nice in practice. I have some questions as follows: 1. This paper states D-AdaST achieves the near-optimal convergence rate, while the definition of optimality is unclear, e.g., the problem setting and algorithm class. I’m not sure whether this statement is correct. Specifically, Assumption 2 requires second-order Lipschitz continuity, while the existing lower bound for solving the nonconvex-strongly-concave minimax problem by first-order methods only considers first-order Lipschitz continuity. The authors should clarify how to fill this gap and explain how the lower bound is achieved in the setting of D-AdaST more clearly. 2. Assumption 3 introduces the upper bound $C$ for the stochastic gradient, which should be included in the final convergence rate. 3. Can you provide some explanation of why the small $\delta$ cannot be avoided in the result? What happens if we try to take $\delta\to0$? 4. The introduction describes the problem setting with constraints on both $x$ and $y$, while the algorithm design and convergence analysis appear to consider only the constraint on $y$. 5. Is it possible to increase the batch size of the stochastic gradient to reduce the total number of communication rounds? Typically, the number of communication rounds in decentralized optimization can be smaller than the number of computation rounds, e.g., Chen et al., 2024. Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: N/A Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your valuable comments and suggestions. Please see below for a detailed point-by-point response. >__Q1:__ This paper states D-AdaST achieves the near-optimal convergence rate, while the definition of optimality is unclear, e.g., the problem setting and algorithm class. I’m not sure whether this statement is correct. Specifically, Assumption 2 requires second-order Lipschitz continuity, while the existing lower bound for solving nonconvex-strongly-concave minimax problems by first-order methods only considers first-order Lipschitz continuity. The authors should clarify how to fill this gap and explain how the lower bound is achieved in the setting of D-AdaST more clearly. __Response:__ We will further clarify the definition of optimality of the convergence in the revision to avoid possible confusion. Specifically, we followed the definitions in [6], where they derived a complexity lower bound $\varOmega \left( \epsilon ^{-4} \right)$ for a class of smooth nonconvex-strongly-concave (NC-SC) functions (c.f., Definition 1) and first-order algorithms that satisfy the zero-respecting condition using only (historical) gradient information (c.f., Definition 4). Under this problem setting and algorithm class, they proved that the dependency on $\epsilon$ is not improvable. Therefore, we claimed that our obtained convergence rate $\tilde{\mathcal{O}} \left( \epsilon ^{-\left( 4+\delta \right)} \right)$ is near-optimal in terms of $\epsilon$ in the sense that the parameter $\delta$ can be made arbitrarily small so as to approach the lower bound. It should be noted that the parameter $\delta$ cannot be 0 due to its key role in ensuring time-scale separation (refer to the response to Q3 and Remark 4 for a more detailed discussion). We note that, to our knowledge, there is no existing parameter-agnostic method that achieves the optimal convergence rate for NC-SC problems as considered in this work, even in a centralized setting. 
Regarding the assumption of second-order Lipschitz continuity for $y$, we note that it is essential for achieving the (near) optimal convergence rate $\tilde{\mathcal{O}} \left( \epsilon ^{- 4} \right)$ for NC-SC problems [7, 8]. Specifically, together with Assumptions 1 and 2, we can show that $y^*\left( \cdot \right)$ is smooth (c.f., Lemma 2). Without this assumption, however, [9] only shows a worse complexity of $\tilde{\mathcal{O}} \left( \epsilon ^{- 5} \right)$ without a large batch size (c.f., Remark 4.6). >__Q2:__ Assumption 3 introduces the upper bound $C$ for the stochastic gradient, which should be included in the final convergence rate. __Response:__ We will provide a more detailed convergence result including the dependence on $C$ in the revision. The explicit convergence result can be found in (75) in the Appendix, which shows the dependence on $C$ and other constant parameters. >__Q3:__ Can you provide some explanation of why the small $\delta$ cannot be avoided in the result? What happens if we try to take $\delta \rightarrow 0$? __Response:__ We note that, for the NC-SC minimax problem, it is necessary to have a time-scale separation in stepsizes between the minimization and maximization processes to ensure the convergence of gradient-descent-ascent-based algorithms [10]. As shown in Theorem 2, the transient time required to reach this state is related to $\left( \cdot \right) ^{\frac{1}{\alpha -\beta}}$, while we need $\alpha -\beta =0$ to reach the convergence rate of $\tilde{\mathcal{O}} \left( \epsilon ^{-4} \right)$ (c.f., Eq. (13)). Therefore, by setting $\alpha -\beta =\mathcal{O} \left( \delta \right), \delta >0$, we found that the smaller $\delta$ is, the faster the convergence and the longer the transient time (c.f., Figure 9 in Appendix A.3). In particular, the transient time becomes infinite when $\delta$ approaches 0, indicating that the algorithm does not converge. 
Therefore, the parameter $\delta$ plays a key role in achieving adaptive time-scale separation as well as in balancing the convergence speed with the transient process. Please refer to Remark 4 for a more detailed discussion. >__Q4:__ The introduction describes the problem setting with constraints on both $x$ and $y$, while the algorithm design and convergence analysis appear to consider only the constraint on $y$. __Response:__ In the introduction, we initially introduced a general minimax problem with $x\in \mathcal{X} \subseteq \mathbb{R} ^p$ and $y\in \mathcal{Y} \subseteq \mathbb{R} ^d$, which subsumes both the constrained and unconstrained scenarios, as $\mathcal{X}$ and $\mathcal{Y}$ can represent the full Euclidean space with appropriate dimensions. For the specific problem (1) considered in this work, we set $\mathcal{X} =\mathbb{R} ^p$, and $\mathcal{Y} \subset \mathbb{R} ^d$ is a closed and convex set, which is widely considered in centralized/distributed NC-SC minimax problems [10, 11]. >__Q5:__ Is it possible to increase the batch size of the stochastic gradient to reduce the total number of communication rounds? Typically, the number of communication rounds in decentralized optimization can be smaller than the number of computation rounds, e.g., Chen et al., 2024. __Response:__ We agree with the reviewer that the communication complexity can be further reduced by incorporating certain techniques that amount to increasing the batch size of the stochastic gradient, such as variance reduction [12] and the multiple local updates used in federated learning [13]. As mentioned by the reviewer, Chen et al. [12] used larger batch sizes or full gradient evaluations (c.f., step 15 in Algorithm 2), together with the FastMix protocol, to reduce the computation and communication complexity. However, it should be noted that the resulting complexity is obtained under a stronger assumption of average smoothness (c.f., Assumption 2.2 in [12]). 
Instead, we obtained a near-optimal rate under a weaker smoothness assumption (c.f., Assumption 2), which is commonly used in machine learning tasks [6]. --- Rebuttal Comment 1.1: Comment: Thanks for your detailed rebuttal. I am satisfied with most of your response, but there are still some questions that should be addressed. My main consideration is the applicability of the lower bound provided by Li et al. [6]. Notice that their Definition 1 does not include second-order Lipschitz continuity. The lower bound of $\Omega(\epsilon^{-4})$ in their Theorem 2 is based on the function $f^{{\rm nc}-{\rm sc}-{\rm sg}}$. The proof of this theorem only verifies that its gradient is Lipschitz continuous. However, the Lipschitz continuity of its Hessian is unclear. Therefore, we should check whether the construction of Li et al. [6] satisfies the second-order Lipschitz continuity shown in Assumption 2. If the authors can address this issue, I will increase my overall rating. Otherwise, the statement on optimality should be avoided in the revision (I will still keep borderline accept). **Some minor comments:** For Q4, I recommend unifying the presentation of the constrained/unconstrained setting on the variable. For Q5, I think reducing the communication complexity is possible even if there is no average smoothness assumption. At least, this can be addressed in nonconvex minimization, e.g., [18] Yucheng Lu and Christopher De Sa. Optimal Complexity in Decentralized Training. ICML 2021. --- Reply to Comment 1.1.1: Title: Response to Reviewer Nhkx Comment: Thank you for your prompt response and valuable comments. We have carefully considered your remaining concerns and respond in detail as follows: >__Q6:__ My main consideration is the applicability of the lower bound provided by Li et al. [6]. Notice that their Definition 1 does not include second-order Lipschitz continuity. 
The lower bound of $\varOmega \left( \epsilon ^{-4} \right)$ in their Theorem 2 is based on the function $f^{\text{nc-sc-sg}}$. The proof of this theorem only verifies that its gradient is Lipschitz continuous. However, the Lipschitz continuity of its Hessian is unclear. Therefore, we should check whether the construction of Li et al. [6] satisfies the second-order Lipschitz continuity shown in Assumption 2. If the authors can address this issue, I will increase my overall rating. Otherwise, the statement on optimality should be avoided in the revision (I will still keep borderline accept). __Response:__ We have carefully checked the construction of the hard examples for obtaining the complexity lower bound in [6] and verified that they satisfy second-order Lipschitz continuity on $y$ as assumed in Assumption 2. Specifically, we recall the hard instance in the stochastic setting in [6] (c.f., Eqs. (10) and (11)) as follows: $$ \bar{f}^{\text{nc-sc-sg}}\left( \boldsymbol{x},\boldsymbol{z};\bar{\boldsymbol{y}} \right) =-\Psi \left( 1 \right) \Phi \left( x_1 \right) +\sum_{i=2}^T{\left[ \Psi \left( -z_i \right) \Phi \left( -x_i \right) -\Psi \left( z_i \right) \Phi \left( x_i \right) \right]}+\sum_{i=1}^{T-1}{\left[ c_1x_{i}^{2}+c_2z_{i+1}^{2} \right]}+\sum_{i=1}^{T-1}{h^{\text{sg}}\left( x_i,z_{i+1};\bar{\boldsymbol{y}}^{\left( i \right)} \right)}, $$ with $$ h^{\text{sg}}\left( x,z;\boldsymbol{y} \right) =\frac{C}{n}\left[ -\frac{1}{2}\boldsymbol{y}^{\text{T}}\left( \frac{1}{n^2}I_n+A \right) \boldsymbol{y}+\boldsymbol{b}_{x,z}^{\text{T}}\boldsymbol{y} \right] $$ and $\boldsymbol{b}_{x,z}=x\boldsymbol{e}_1-\frac{1}{2}z\boldsymbol{e}_n$, where $\boldsymbol{x}$ and $\boldsymbol{z}$ are variables to minimize, $\boldsymbol{y}$ is the variable to maximize, $\Psi \left( \cdot \right)$ and $\Phi \left( \cdot \right)$ are component functions, $c_1$, $c_2$ and $C$ are constants, $A$ is a positive semi-definite matrix (refer to Section 4 in [6] for detailed definitions). 
It should be noted that the second-order Lipschitz continuity assumption of this work is only required to hold for $y$ (c.f., Eq. (11) in Assumption 2), and the terms related to $y$ in the objective function $\bar{f}^{\text{nc-sc-sg}}$ appear only in $h^{\text{sg}}$, which is twice differentiable. It is not difficult to verify that $\nabla _{\boldsymbol{yy}}^{2}h^{\mathrm{sg}}=-\left( \frac{1}{n^2}I_n+A \right)$ and $\nabla _{x,z;\boldsymbol{y}}^{2}h^{\mathrm{sg}}=\mathrm{diag}\left\\{ 1,0,\cdots,0,-1/2 \right\\}$ (up to the constant factor $\frac{C}{n}$) are constant matrices and thus both are Lipschitz continuous. This indicates that $\bar{f}^{\text{nc-sc-sg}}$ is second-order Lipschitz continuous in $y$, satisfying Eq. (11) in Assumption 2. Therefore, we believe that the lower bound given by Li et al. [6] is applicable to the function class considered in our work. Combined with the previous discussion, we will carefully revise our statement about "near-optimal" to avoid any possible confusion. >__Q7:__ For Q4, I recommend unifying the presentation of the constrained/unconstrained setting on the variable. __Response:__ Thank you for your suggestion. We will revise the presentation of the decision-variable domains accordingly. >__Q8:__ For Q5, I think reducing the communication complexity is possible even if there is no average smoothness assumption. At least, this can be addressed in nonconvex minimization, e.g., > * [18] Yucheng Lu and Christopher De Sa. Optimal Complexity in Decentralized Training. ICML 2021. __Response:__ We agree with the reviewer that reducing the communication complexity is possible even without relying on the average smoothness assumption. As mentioned by the reviewer, without this assumption, Lu et al. [18] combined local updates with the Factorized Consensus method (c.f., Lemma 1 in [18]) and the Accelerated Gossip protocol (c.f., Algorithm 3), yielding reduced communication complexity. 
However, in order to obtain a computational complexity of $\mathcal{O} \left( \epsilon ^{-3} \right)$ as in [12], this assumption is required in most of the literature based on variance reduction methods. Without this assumption, the sample complexity obtained by Lu et al. [18] matches the other lower bound $\varOmega \left( \epsilon ^{-4} \right)$. Having noted the characteristics and limitations of these methods, we believe they are all worthy of further investigation to improve the communication efficiency of distributed algorithms, with or without the average smoothness assumption.
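As an aside, the role of the stepsize-exponent gap $\alpha-\beta=\delta$ discussed in Q3 of this thread can be illustrated with a minimal single-node sketch. The TiAda-style stepsizes and all numbers below are hypothetical illustrations, not the exact D-AdaST update:

```python
# Toy illustration of time-scale separation: with exponents alpha > beta,
# the ratio of the max-player stepsize to the min-player stepsize grows as
# the gradient-norm accumulator v grows.  All numbers are made up.

def stepsizes(v, alpha, beta, gamma_x=1.0, gamma_y=1.0):
    """TiAda-style adaptive stepsizes driven by a shared accumulator v."""
    return gamma_x / v**alpha, gamma_y / v**beta

delta = 0.1
alpha, beta = 0.5 + delta / 2, 0.5 - delta / 2   # alpha - beta = delta

ratios = []
for v in (1e2, 1e4, 1e6):            # the accumulator grows over the run
    eta_x, eta_y = stepsizes(v, alpha, beta)
    ratios.append(eta_y / eta_x)     # equals v**delta

# eta_y / eta_x diverges, so the maximization runs on a faster time scale;
# as delta -> 0 the ratio stays near 1 and the separation is lost.
assert ratios[0] < ratios[1] < ratios[2]
print([round(r, 2) for r in ratios])  # → [1.58, 2.51, 3.98]
```

The smaller `delta` is, the more slowly the ratio `v**delta` grows, mirroring the trade-off between convergence speed and transient time discussed in Remark 4.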
Summary: The authors introduced a new distributed adaptive minimax method, D-AdaST, to address the issue of non-convergence in nonconvex-strongly-concave minimax problems caused by inconsistencies in locally computed adaptive stepsizes. D-AdaST employs an efficient adaptive stepsize tracking protocol that ensures time-scale separation and consistency of stepsizes among nodes, thereby eliminating steady-state errors. Extensive experiments on both real-world and synthetic datasets validate their theoretical findings across various scenarios. Strengths: Originality and Significance: The paper introduces a novel Distributed Adaptive Minimax Optimization Algorithm with a Stepsize Tracking Protocol. This approach is significant in addressing the non-convergence issues that arise when transitioning adaptive single-machine algorithms to a distributed environment. The authors provide a thorough theoretical analysis, which could facilitate the development of similar methods or modifications to existing single-machine algorithms. Quality and Clarity: The paper is well-structured and clear, with thoroughly explained assumptions and convergence rates, making it easy to understand. Weaknesses: The experimental section has some deficiencies, as it only includes a GAN experiment on the CIFAR-10 dataset. I believe it would be beneficial to supplement the paper with results from additional datasets or other minimax optimization problems to provide a more comprehensive evaluation. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see weakness. Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The paper employs general assumptions for distributed minimax optimization problems; therefore, it is unnecessary to discuss the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: >__W1:__ The experimental section has some deficiencies, as it only includes a GAN experiment on the CIFAR-10 dataset. I believe it would be beneficial to supplement the paper with results from additional datasets or other minimax optimization problems to provide a more comprehensive evaluation. __Response:__ Thank you for your comments. We provide additional experiments training GANs on the more complicated CIFAR-100 dataset to further illustrate the effectiveness of the proposed D-AdaST, as shown in the __attached PDF file in General Response__. We use the entire training set of CIFAR-100 with coarse labels (20 classes) to train GANs over networks, where each node is assigned four distinct classes of labeled samples. Due to time constraints, we ran one experiment with the same settings as in Figure 4(a). The revision may include other complementary experiments on additional datasets. It can be observed that D-AdaST outperforms the others in terms of the inception score. Together with the other experimental results in the paper, we believe that we have demonstrated the effectiveness of the proposed D-AdaST method and its potential for real-world applications. --- Rebuttal Comment 1.1: Comment: Thank you for the additional information regarding the experiment. I will adjust my score accordingly. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer vU1y Comment: Thanks for the thoughtful acknowledgement and consideration of our responses. We sincerely appreciate the reviewer's valuable time in reviewing our paper.
Summary: The paper introduces a method for distributed minimax optimization, for the scenario in which various "agents" each hold part of a data set locally and aim to coordinate to find the minimax solution of some criterion. Here, the criterion consists of the average of local cost functions, which are assumed to be smooth but non-convex-strongly-concave. Each agent is assumed to be able to compute a stochastic gradient at each iteration, which can be communicated across the neighboring agents, who collectively communicate according to a given, known graph structure. In the non-distributed setting, known gradient-based methods depend on hyper-parameter tuning, in particular in choosing the stepsizes. When such a method is properly adaptive, it finds the right stepsize without a priori knowledge, based on continuous adjustment per iteration. The authors construct counterexamples showing that directly applying adaptive methods designed for centralized problems results in non-convergence in the distributed setting. The authors then propose an adaptive stepsize tracking protocol that involves transmitting two extra scalar variables to ensure consistency among the stepsizes of different nodes. The authors give theoretical guarantees, matching (nearly) the convergence rate in the non-distributed setting. Furthermore, the authors exhibit their method on various real-world datasets to further underline their theoretical findings. Strengths: * The article is well written: the authors explain the problem clearly and the algorithm is clearly outlined. * The result and formulation of Theorem 1 are insightful, and the synthetic example of (6) is simple yet very instructive. Together they outline the problem to be overcome in Section 2.2 nicely. * The authors provide an (almost) rate-optimal theoretical guarantee for their proposed method. * The chosen real-world settings are both interesting and illustrative. 
* The proposed method is attractive in terms of communication complexity, with only a couple of scalars being required on top of the gradients. Weaknesses: * Whilst I think the problem that the authors address is interesting, and within their scope they provide satisfying answers, I also think that this scope is rather limited. There are many things to investigate here in terms of e.g. network topology, communication efficiency, the influence of various noise sources. The authors stay within a set of rather stringent assumptions and do not capture the effect of these problem characteristics fully. * In high-dimensional settings, which the authors aim to address, achieving zero bias for stochastic gradients is not always feasible. Additionally, confirming the absence of bias in the gradient estimator is challenging to verify in practice. Does the method still work without this assumption? What if the bias is very small? * Similarly, the uniform bound on the gradient seems rather strong. Is this really required by the analysis? Technical Quality: 3 Clarity: 3 Questions for Authors: Some of the figures are rather difficult to read when printed on paper as they are rather small, but were readable digitally. * Is there a reason the constant $C$ does not appear in e.g. (13)? * I do not see the point of Corollary 1. As I understand it, this is an upper bound for D-TiAda. The authors say that this result is provided for "proper comparison". How does Corollary 1 provide any comparison if it is only an upper bound? Isn't Theorem 1 to provide the comparison that the authors are after? Confidence: 2 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Within the scope of the article, I think limitations are appropriately addressed. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for the insightful and valuable comments. Please see below for a detailed point-by-point response. >__W1:__ Whilst I think the problem that the authors address is interesting, and within their scope they provide satisfying answers, I also think that this scope is rather limited. There are many things to investigate here in terms of e.g. network topology, communication efficiency, the influence of various noise sources. The authors stay within a set of rather stringent assumptions and do not capture the effect of these problem characteristics fully. __Response:__ First of all, we would like to further clarify the scope and contributions of this work. In this paper, we focus on a nonconvex-strongly-concave distributed minimax problem over networks, which subsumes many applications in machine learning and optimization. To mitigate the challenge of hyper-parameter tuning, we propose the first parameter-agnostic distributed adaptive minimax algorithm, D-AdaST, that efficiently resolves the issue of inconsistent adaptive step sizes with relatively low communication costs, achieving a near-optimal rate. Moreover, the obtained convergence results indeed highlight the dependence on network topology as well as the trade-offs between convergence speed and the length of the transition phase (cf. Remark 5), as demonstrated in the experiments. We concur with the reviewer on the importance of considering communication efficiency and the influence of different noise sources in future work. Regarding the assumptions used in this work, we will provide a more detailed discussion of the rationale and limitations of the assumptions in the revision. Please also refer to the subsequent responses to specific assumptions. >__W2:__ In high-dimensional settings, which the authors aim to address, achieving zero bias for stochastic gradients is not always feasible. 
Additionally, confirming the absence of bias in the gradient estimator is challenging to verify in practice. Does the method still work without this assumption? What if the bias is very small? __Response:__ We agree with the reviewer that obtaining the unbiased stochastic gradient is not always feasible in practice. Nevertheless, this remains one of the most widely used assumptions in optimization and machine learning when uniform sampling is performed in an independent and identically distributed (IID) manner. Without this assumption, the biased term in the gradient may introduce an extra constant steady-state error in the current convergence result, which cannot be mitigated by employing decreasing stepsizes [1]. Nevertheless, the algorithm still functions, and if the bias is small relative to other terms or exhibits a certain structure, e.g., memory-biased (c.f., Definition 11 in [2]), more explicit convergence results can be obtained [2]. We believe that investigating these biased stochastic gradient models represents an interesting direction for future work. >__W3:__ Similarly, the uniform bound on the gradient seems rather strong. Is this really required by the analysis? __Response:__ We remark that the bounded gradient assumption is essential for the adaptive methods to achieve the property of being parameter-agnostic, thus reducing the burden of hyper-parameter tuning. In our proof, without the assumption, the term $S_1$ in Eq. (18) cannot be proved to be vanishing. As a result, one would need to employ the negative term $-\frac{4}{K}\sum_{k=0}^{K-1}{\mathbb{E} \left[ \| \nabla_x f\left( \bar{x}_k,\bar{y}_k \right) \| ^2 \right]}$ to absorb $S_1$ to ensure a sufficient descent. However, this typically imposes specific requirements on stepsize selection that may depend on problem-dependent parameters, as also observed in [3], which contradicts the basic principle of a parameter-agnostic method. 
Indeed, to the best of our knowledge, in the field of distributed nonconvex-strongly-concave minimax optimization, there is no existing parameter-agnostic method that achieves the (near) optimal convergence rate while also eliminating the bounded gradient assumption. On the other hand, as detailed in Remark 3, this assumption is widely used and can be met by imposing constraints on the bounded domain of decision variables in many real-world tasks [4, 5]. >__Q1:__ Some of the figures are rather difficult to read when printed on paper as they are rather small, but were readable digitally. __Response:__ To express the experimental results more clearly, we will adjust the text size and the thickness of the curves in the figure appropriately. >__Q2:__ Is there a reason the constant $C$ does not appear in e.g. (13)? __Response:__ For better readability of the convergence results, we show in the main text only the dependence of the convergence on some key parameters, while hiding other constant parameters such as $C$. The explicit convergence result with dependence on $C$ can be found in (75) in the Appendix. We will present a more detailed convergence result concerning other parameters in the revision. >__Q3:__ I do not see the point of Corollary 1. As I understand it, this is an upper bound for D-TiAda. The authors say that this result is provided for "proper comparison". How does Corollary 1 provide any comparison if it is only an upper bound? Isn't Theorem 1 meant to provide the comparison that the authors are after? __Response:__ Theorem 1 shows that the D-TiAda algorithm does not converge in certain scenarios, while Corollary 1 essentially shows that the underlying reason is the inconsistency of the adaptive stepsizes, i.e., $\zeta_{v}^{2}$ and $\zeta_{u}^{2}$ in (14), compared to the result for D-AdaST in Theorem 2. 
Taken together, these results demonstrate the impact of stepsize inconsistency in distributed adaptive minimax methods, and the effectiveness of our stepsize tracking mechanism in resolving it. We will revise this part of the statements accordingly to avoid any possible confusion. --- Rebuttal Comment 1.1: Comment: I appreciate the detailed response by the authors. I wish to maintain my score. --- Reply to Comment 1.1.1: Title: Thanks to Reviewer rQb7 Comment: We appreciate the reviewer's acknowledgment. Thanks for spending your time carefully reviewing our paper and responding to our clarifications.
Rebuttal 1: Rebuttal: __General Response to All Reviewers__ We would like to express our gratitude to all the reviewers for evaluating our work positively and providing their insightful and valuable comments that have helped us greatly improve the quality of our paper. We have carefully considered each of the reviewers' concerns and provided a detailed point-by-point response under each rebuttal section. Minor comments aside, our response includes the following major aspects: i) we have further clarified the scope of this work and discussed the rationale behind our assumptions; ii) we have conducted an additional experiment on a more complicated dataset, CIFAR-100 (cf. the attached PDF file in this general response block), and will add more experimental results with additional datasets in the final version; iii) we have provided detailed explanations to clarify certain concepts and statements; iv) we have discussed possible extensions to other general settings of the problem as well as the communication model. Unless otherwise noted, the citations in our response correspond to those in the submitted manuscript. Below are the references used throughout the rebuttal, which have been properly cited and compared in the main text. * [1] Hu, B., Seiler, P., and Lessard, L. Analysis of biased stochastic gradient descent using sequential semidefinite programs. Mathematical Programming, 2021. * [2] Driggs, D., Liang, J., and Schönlieb, C. On biased stochastic gradient estimation. JMLR, 2022. * [3] Huang, F., Wu, X., and Hu, Z. AdaGDA: Faster adaptive gradient descent ascent methods for minimax optimization. AISTATS, 2023. * [4] Dinh, L., Pascanu, R., Bengio, S., and Bengio, Y. Sharp minima can generalize for deep nets. ICML, 2017. * [5] Arjovsky, M., Chintala, S., and Bottou, L. Wasserstein generative adversarial networks. ICML, 2017. * [6] Li, H., Tian, Y., Zhang, J., and Jadbabaie, A. Complexity lower bounds for nonconvex-strongly-concave min-max optimization. NeurIPS, 2021. 
* [7] Chen, T., Sun, Y., and Yin, W. Closing the gap: Tighter analysis of alternating stochastic gradient methods for bilevel problems. NeurIPS, 2021. * [8] Li, X., Yang, J., and He, N. TiAda: A time-scale adaptive algorithm for nonconvex minimax optimization. ICLR, 2023. * [9] Lin, T., Jin, C., and Jordan, M. On gradient descent ascent for nonconvex-concave minimax problems. ICML, 2020. * [10] Yang, J., Li, X., and He, N. Nest your adaptive algorithm for parameter-agnostic nonconvex minimax optimization. NeurIPS, 2022. * [11] Tsaknakis, I., Hong, M., and Liu, S. Decentralized min-max optimization: Formulations, algorithms and applications in network poisoning attack. ICASSP, 2020. * [12] Chen, L., Ye, H., and Luo, L. An efficient stochastic algorithm for decentralized nonconvex-strongly-concave minimax optimization. AISTATS, 2024. * [13] Khaled, A., and Jin, C. Faster federated optimization under second-order similarity. ICLR, 2023. * [14] Ene, A., and Le Nguyen, H. Adaptive and universal algorithms for variational inequalities with optimal convergence. AAAI, 2022. * [15] Jin, C., Netrapalli, P., and Jordan, M. What is local optimality in nonconvex-nonconcave minimax optimization? ICML, 2020. * [16] Yang, J., Kiyavash, N., and He, N. Global convergence and variance-reduced optimization for a class of nonconvex-nonconcave minimax problems. NeurIPS, 2020. * [17] Tian, Y., Sun, Y., and Scutari, G. Achieving linear convergence in distributed asynchronous multi-agent optimization. IEEE Transactions on Automatic Control, 2020. Pdf: /pdf/73adb65568e05fbacdca264d419334b023eb83f0.pdf
NeurIPS_2024_submissions_huggingface
2024
null
null
null
null
null
null
null
null
Enhancing the Hierarchical Environment Design via Generative Trajectory Modeling
Reject
Summary: **After rebuttal:** While I think there are still some problems with this paper, e.g. the short training duration and the slight exaggeration of claims (that SHED outperforms state-of-the-art UED methods), I think the idea is nice, and getting RL environment design to work better is a good goal. ----- This paper aims to improve Unsupervised Environment Design in two ways. First, it introduces a hierarchical MDP formulation, where the top level corresponds to the teacher, and the lower level corresponds to the learning agent. Each transition in the top-level MDP involves training the lower-level agent on generated levels. Related to this, they develop a state representation for the adversary, which is the performance of the agent on a fixed set of diverse levels. Separately from this, they use a diffusion model to upsample the number of experiences for the teacher, effectively training on synthetic data, to improve sample efficiency. Strengths: - I think the H-MDP formulation itself is very valuable; it moves away from treating the generation of environments as a black-box, sparse reward, multi-step generation process (as in PAIRED), and towards a more informed process, where the teacher gets feedback in terms of the state (i.e., the performance vector). - The analysis in the appendix investigating the ability of the model to generate good synthetic trajectories is useful. Weaknesses: - Major - The results do not very convincingly demonstrate that SHED is better than current SoTA. Looking at figure 3 particularly, I would say that ACCEL has about the same performance as SHED. However, comparing against the RL baseline, SHED does do better. - The method is limited in the types of environments it can generate. For instance, mazes are generated using an LLM instead of directly placing blocks. This method therefore is not quite as broad in scope as PAIRED or ACCEL, which can generate arbitrary environments. 
- Relatedly, in the minigrid experiments, do all methods generate the levels in the same way using an LLM, providing the difficulty numbers? It would be good to compare this against the standard version of ACCEL that directly places blocks in the maze level, as it does not have the same restriction as SHED. - Minor - The figures can be improved: - Make the alpha value on the error bars a bit less - Keep colours consistent across figures, so the same method has the same colour - Keep capitalisation consistent across the figure labels. - line 80, the period ends on the line after the maths, it should end on the same line. - Footnote 1: Jiang et al. (2021) use (amongst others) the positive value loss, which is not quite the GAE, as it clips it at zero before summing. - equation one, you use $\beta_t$ but $t$ does not seem to be defined? Should this be $\beta_k$? - Line 159, PARIED should be PAIRED - There is no reward scale in figure 4 - Figure 9's caption can be made clearer. I understand it to be the performance of each method in different testing environments. - Line 718 does not link to a figure. - Figure 11's caption: zero-shot and not zeros-shot - Capitalise the first word in the title of appendix C.2 - In terms of notation, in line 96, $\pi^*$ usually has a dependence on $\theta$ (e.g. $\pi^*_\theta$) to indicate it is optimal w.r.t. that particular level. - Line 217, maybe add a citation to the first sentence, as I thought that is what you do, which confused me for a second. - line 237 space after period. - Line 241 "given" instead of giving? - Lines 296 - 297 are a bit confusing, as the word environment is used three times. - The assumption in theorem 1 is pretty strong. Technical Quality: 3 Clarity: 2 Questions for Authors: - It seems to me that the upsampling is very similar to [1], could you comment on that/cite the paper if indeed the technique is similar? - Have you tried/have any intuition of what the effect would be if you set $\eta=0$? 
- Furthermore, it seems like the specific evaluation environments you use can have a large impact. Have you looked at how variable results are when using different (still randomly generated) environment parameters. Relatedly, what happens if you just generate a few DR levels *without* explicitly discretising the environment parameters? - Are you using the high entropy version of PAIRED introduced by [2]? - Do you have an indication of how easy it would be to generate larger environments, with much larger parameter vectors? - Do you think it would be possible to incorporate the multi-step generation of PAIRED into this process? E.g., still generate a maze step-by-step but also have the H-MDP structure and policy performance vector? - Why can't you sample the action in the diffusion process from the teacher's policy? - A reward of 50 in bipedal walker seems very low, could you compare against other papers / explain why it is low? E.g. [2] had rewards around 150 [1] Lu, Cong, et al. "Synthetic experience replay." _Advances in Neural Information Processing Systems_ 36 (2024). [2] Mediratta, Ishita, et al. "Stabilizing unsupervised environment design with a learned adversary." _Conference on Lifelong Learning Agents_. PMLR, 2023. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: I think the authors can list a few more limitations. Primarily, the restriction on the type of environment that can be generated, i.e., it needs numerical parameters, and generating a maze outright is challenging. This is quite a large difference to prior settings. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and feedback. - **W1.** The results. **A1.** It is crucial to note that SHED demonstrates more consistent and superior performance in the more complex BipedalWalker and Maze, as depicted in Figures 3 and 4. This suggests that SHED offers better generalizability and robustness across different environments. Moreover, the lower variability in performance with SHED underscores its stability and reliability, which are essential qualities in real-world applications where consistent performance is critical. - **W2.** The method is limited to the types of environments. **A2.** In the maze domain, learning block placement is challenging due to the sequential decision process involved (long horizon and large action space). Randomly placing blocks (as ACCEL does) may not be viable, as it could result in mazes without solutions—especially under constraints of limited resources. To overcome these challenges, we introduced the use of an LLM to manage the maze generation process on a more granular level. This approach not only mitigates the complexities associated with direct block placement but also introduces a degree of variability in the environment generation process. This innovative approach, which is unprecedented in the field of UED research, not only ensures the generation of solvable mazes but also extends the control in environment design. This environment design method has received positive feedback from reviewer EFDP. - **W3.** Do all methods use an LLM? **A3.** Yes, in the maze domain, all methods generate mazes using ChatGPT, where we provide several key factors—rather than just single difficulty numbers—that define the teacher's action space (environment parameters) for maze generation (see Appendix D). We can adjust the 'temperature' value in ChatGPT to enhance the diversity of the environments produced. 
This capability highlights the flexibility and robustness of our LLM-based method, offering an improvement over ACCEL that directly places blocks, which might generate unsolvable mazes and cannot capture the nuanced intentions for the desired generated mazes. - **Q1.** similar to [1] **A1.** Thank you for pointing out this paper, which shares some similar motivations with our work. However, there are some key differences to note: 1. While [1] also adopts a diffusion model to generate trajectories, it lacks specific technical details on how these trajectories are generated. In contrast, our paper thoroughly describes the generation process. 2. Additionally, [1] focuses on upsampling student trajectories, whereas our proposed hierarchical MDP framework emphasizes the challenge of collecting the teacher's experience. Our method specifically upsamples the teacher's experience to improve training efficiency for the teacher. - **Q2.** intuition of $\eta=0$? **A2.** As indicated in lines 270-274, when $\eta=0$, the teacher might obtain higher rewards by sacrificing student performance in one subset of evaluation environments to enhance it in another. This contradicts our objective of developing a student agent with general capabilities across all environments. We have included experiments in the appendix where $\eta$ is set to higher values, as shown in Fig. 8. We observed that the test performance deteriorates in certain environments. - **Q3.** How variable results are when using different environment parameters. **A3.** In Appendix E, we provided details on how the number of evaluation environments influences the results. We demonstrate that increasing the size of the evaluation environment set leads to improved transfer performances. The variability in results is minimal when using different (randomly generated) environment parameters. 
This stability is due to our explicit discretization of the environment parameters, which ensures that the evaluation environments are sufficiently diverse and thus effectively reflect the student's general capabilities. However, when using just a few DR levels without explicitly discretizing the environment parameters for evaluation, the performance can be enhanced if the parameters are reasonable and do not heavily feature an OOD environment, as shown on the right in Fig. 3 in the attached PDF. Conversely, if similar evaluation environments predominantly include complex and unreasonable parameters (OOD), the results are notably poorer, as shown on the left in Fig. 3 in the PDF. Therefore, introducing diversity in the evaluation environments is essential to ensure stability in the overall performance. - **Q4.** high entropy version of PAIRED introduced by [2]. **A4.** We did not use the high-entropy version of PAIRED; we will include this in the related work. - **Q5.** How easy is it to generate larger environments? **A5.** In our framework, scaling up to generate larger environments with more extensive parameter vectors is feasible. Our framework is designed to be modular, allowing for a straightforward extension of the parameter space. However, the complexity and computational demands will increase with the size of the environment parameters. - **Q6.** Incorporate the multi-step generation of PAIRED? **A6.** It is possible to integrate the multi-step generation approach of PAIRED into our framework; however, as mentioned in W2, learning to generate a maze step-by-step is inherently complex. In PAIRED, this process required hundreds of thousands of updates to learn effectively, which is not feasible under settings with limited resources. - **Q7.** Sample the action in the diffusion process from the teacher's policy? **A7.** Yes, sampling actions from the teacher's policy during the diffusion process is feasible. 
- **Q8.** results **A8.** Please refer to A2 in our response to Reviewer AEHi. - **L1.** limitation **A1.** While we have outlined some limitations in the appendix, we will revise and expand this to provide a more comprehensive analysis of potential weaknesses and areas for future improvement. --- Rebuttal Comment 1.1: Title: Discussion Comment: Thank you for your response and your additional experiments. I have some follow-up questions and points to discuss. > A1. It is crucial to note that SHED demonstrates more consistent and superior performance in the more complex BipedalWalker and Maze, as depicted in Figures 3 and 4. This suggests that SHED offers better generalizability and robustness across different environments. Moreover, the lower variability in performance with SHED underscores its stability and reliability, which are essential qualities in real-world applications where consistent performance is critical. I am not sure I am convinced. SHED performs about the same as ACCEL on lunar lander and bipedalwalker. SHED does slightly better in maze, but that is using a nonstandard way of generating mazes. > A7. Yes, sampling actions from the teacher's policy during the diffusion process is feasible. Then why did you not do this? > A8. Please refer to A2 in our response to Reviewer AEHi. Could you at least run evaluation on the same levels that ACCEL used? What is the computational demand of running Bipedal Walker for 30K PPO updates? (or at least something like 5k-10k, 50 doesn't seem enough to learn much) > A3 Thanks for that, it is interesting. > A1. Thank you for pointing out this paper, which shares some similar motivations with our work. However, there are some key differences to note: > While [1] also adopts a diffusion model to generate trajectories, it lacks specific technical details on how these trajectories are generated. In contrast, our paper thoroughly describes the generation process. To me the paper seems pretty clear; what details are lacking? 
> Additionally, [1] focuses on upsampling student trajectories, whereas our proposed hierarchical MDP framework emphasizes the challenge of collecting the teacher's experience. Our method specifically upsamples the teacher's experience to improve training efficiency for the teacher. The SynthER method is a way to upsample data for RL agents, and your teacher is still an RL agent. So I would still argue that it is very similar to what you are doing? > A2. In the maze domain, learning block placement is challenging due to the sequential decision process involved Could you run a baseline comparing how well normal ACCEL does that places the blocks directly? Also please rectify all of the minor points in your revised manuscript. --- Rebuttal 2: Title: Rebuttal by authors (1) Comment: Thank you for your interest in our work and your prompt response. We have done additional experiments to address your concerns. **Q1.** SHED performs about the same as ACCEL on lunar lander and bipedalwalker. Shed does slightly better in maze, but that is using a nonstandard way of generating mazes. **A1.** We conducted experiments in the LunarLander environment to exclude very challenging test environments when testing zero-shot transfer performance. This is because all approaches perform poorly in these challenging environments, which would narrow the performance gap among them. Because we cannot directly show detailed results, we roughly report the results: After removing the challenging environments and completing 1,000 PPO updates, the average performance of SHED, h-MDP, and ACCEL improved significantly on LunarLander. However, ACCEL remains only slightly weaker than SHED. Here are the results: SHED: -6.019, h-MDP: -12.92, ACCEL: -8.706, Random: -30.65, PAIRED: -126.2. It is important to note that the training curves of ACCEL and Random show significant variance and fluctuations, while the learning curve of SHED is much smoother. 
This indicates that SHED can consistently provide suitable environments, enhancing robustness and stability in training. **Q2.** Sampling actions from the teacher's policy during the diffusion process is feasible. Why did you not do this? **A2.** First, we provide detailed steps for sampling actions from the teacher's policy during the diffusion process in Appendix B.2 to upsample the teacher's experiences. However, we chose to randomly sample actions to generate synthetic trajectories because it is more straightforward, which is the method we used in our work. Second, using a diffusion model directly as a policy to obtain the teacher's actions would require further modifications to train the diffusion model. This approach is explored in works by Wang et al. (citation number 20) and Janner, Michael, et al. [1]. However, incorporating this into our work would significantly increase its complexity, and therefore, we leave this direction for future work. [1] Janner, Michael, et al. "Planning with diffusion for flexible behavior synthesis." International Conference on Machine Learning. PMLR, 2022. **Q3.** Could you at least run evaluation on the same levels that ACCEL used? What is the computational demand of running Bipedal Walker for 30K PPO updates? (or at least something like 5k-10k, 50 doesn't seem enough to learn much) **A3.** As reported in the paper, all models were trained on a single NVIDIA GeForce RTX 3090 GPU and 16 CPUs. Training ACCEL for 30K PPO updates can take approximately 5 days, which is quite long. There seems to be a misunderstanding: we trained on 50 environments, where each environment ran for 4 epochs, and each epoch included 5 PPO minibatches, resulting in a total of 20 PPO updates per environment. Across all 50 environments, there are 1,000 PPO updates. Also, we included some of the levels that ACCEL used, such as SmallCorridor and FourRooms. We will include more levels in the final version. **Q4.** what details are lacking? 
**A4.** Their work is generally well-explained. One difference between their work and ours is that they lack a more detailed technical illustration of how actions and next states are generated using the diffusion model. **Q5.** The SynthER method is a way to upsample data for RL agents, and your teacher is still an RL agent. So I would still argue that it is very similar to what you are doing? **A5.** SynthER utilizes a diffusion model to generate synthetic experiences from a limited dataset, which is similar to our approach. This work shares similar motivations with ours. However, the key difference is that SynthER focuses on directly synthesizing the experience dataset for the student agent and can be used for online/offline training. In contrast, our approach involves simultaneously upsampling the upper-level teacher's experiences to further assist in training the upper-level teacher agent efficiently, addressing drawbacks in our proposed hierarchical framework. Additionally, directly synthesizing student trajectories to help train the student agents would be unfair in our comparison, as it might induce extra training in terms of the number of PPO updates compared to other approaches. Overall, while we acknowledge that their work and ours share similar motivations and technologies, the application details are different, as we use synthetic trajectories in the upper-level teacher's experience within our hierarchical framework. We will include their work in related work and provide a discussion. --- Rebuttal 3: Title: Rebuttal by authors (2) Comment: **Q6.** Could you run a baseline comparing how well normal ACCEL does that places the blocks directly? **A6.** Yes, we conducted an experiment using the maze generation method in ACCEL, which involves placing blocks directly to generate the maze. The maze generation process is as follows: 1. We randomly sample the maze size. 2. We randomly place the start and end positions. 3. We randomly sample the number of blocks. 
4. We randomly place the blocks, ensuring that the blocks, start position, and end position do not overlap. Here are the rough results: - Randomly sample maze width and height from {8, 9, 10, 11, 12, 13, 14, 15}, randomly sample the number of blocks from [20, 40]: Results: ACCEL: -9.8 - Fix maze size at 13x13, randomly sample the number of blocks from [20, 60]: Results: ACCEL: -12.7 - Fix the maze size at 13x13, and fix the number of blocks at 60: Results: ACCEL: -15.3 For comparison, here are the ACCEL performances under our maze generation method: ACCEL: -4.98 Additionally, we added a filter function that excludes mazes without a feasible solution until a feasible maze is generated, and the results are: - Randomly sample maze width and height from {8, 9, 10, 11, 12, 13, 14, 15}, randomly sample the number of blocks from [20, 40]: Results: ACCEL: -5.3 - Fix maze size at 13x13, randomly sample the number of blocks from [20, 60]: Results: ACCEL: -8.5 - Fix maze size at 13x13, fix the number of blocks at 60: Results: ACCEL: -7.8 **Q7.** please rectify all of the minor points in your revised manuscript. **A7.** Thank you for pointing out the minor issues. We will revise the paper to address these points accordingly. --- Rebuttal Comment 3.1: Comment: Thanks for all of your effort in the rebuttal. I have updated my score to 6, on the condition that all of the fixes/explanations are indeed added to the revised paper, and that the paper's claims be slightly softened. I don't think you outperform state of the art UED methods; however, you are competitive with them, and you greatly improve the performance of RL-based methods.
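For concreteness, the block-placement baseline with a feasibility filter described in A6 above can be sketched as follows. This is a minimal illustration with hypothetical helper names (`generate_feasible_maze`, `has_path`), not the authors' actual implementation; cells are 0 (free) or 1 (block):

```python
import random
from collections import deque

def has_path(grid, start, goal):
    """Breadth-first search over 4-connected free cells."""
    height, width = len(grid), len(grid[0])
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < height and 0 <= nc < width
                    and grid[nr][nc] == 0 and (nr, nc) not in seen):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def generate_feasible_maze(width, height, n_blocks, max_tries=100):
    """Randomly place start, goal, and blocks; retry until a path exists.

    Mirrors the procedure in the rebuttal: sample positions, place blocks
    (never on top of start/goal), and filter out unsolvable mazes.
    """
    for _ in range(max_tries):
        grid = [[0] * width for _ in range(height)]
        cells = [(r, c) for r in range(height) for c in range(width)]
        start, goal = random.sample(cells, 2)
        candidates = [cell for cell in cells if cell not in (start, goal)]
        for r, c in random.sample(candidates, min(n_blocks, len(candidates))):
            grid[r][c] = 1
        if has_path(grid, start, goal):
            return grid, start, goal
    return None  # no solvable maze found within the retry budget
```

The retry loop is the "filter function" mentioned above: without it, dense block counts (e.g. 60 blocks on a 13x13 grid) frequently yield unsolvable levels, which matches the performance gap the authors report between the filtered and unfiltered variants.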
Summary: The paper presents a novel approach to Unsupervised Environment Design (UED) that addresses the challenges of efficiency by introducing a hierarchical MDP framework and using synthetic data. This framework involves an upper-level RL teacher agent that generates training environments tailored to a lower-level student agent's capabilities. The paper proposes the Synthetically-enhanced Hierarchical Environment Design (SHED) method, which uses generative modeling to create synthetic trajectory datasets, thereby reducing the resource-intensive interactions between agents and environments. The effectiveness of SHED is demonstrated through empirical experiments across various domains, showing superior performance compared to existing UED methods. Strengths: - The use of diffusion models to generate synthetic trajectories is a novel approach that effectively reduces the computational burden of training the teacher agent. - The paper provides comprehensive experiments across different domains, demonstrating the effectiveness and robustness of the proposed method compared to state-of-the-art UED approaches. Weaknesses: - The proposed method introduces significant complexity, particularly in the implementation of the hierarchical MDP and the generative modeling components. This might limit the accessibility and reproducibility of the approach. - While the empirical results are promising, the evaluation is limited to a few specific domains. It would be beneficial to see broader applicability across more diverse and complex environments. - Figure 4 is not properly formatted (no values on the axes). Technical Quality: 2 Clarity: 2 Questions for Authors: - Why were diffusion models chosen over other generative models (e.g., GANs, VAEs)? Have other models been considered or tested? - How well does the proposed method scale to real-world applications with significantly higher complexity and variability? Have any tests been conducted in such settings? 
Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitations are discussed in Appendix F.1 but I think the authors should discuss the limitations in the main paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their valuable time and feedback, and we kindly request the reviewer to consider our clarifications. - **W1.** The proposed method introduces significant complexity, particularly in the implementation of the hierarchical MDP and the generative modeling components. This might limit the accessibility and reproducibility of the approach. **A1.** We would like to clarify that the two major components introduced in our work, the hierarchical framework and the synthetic trajectories, can be decoupled from each other and can be used independently. Additionally, the algorithm itself is not overly complex; our hierarchical framework can be viewed as a variant of hierarchical reinforcement learning (HRL). The abstractions in SHED are twofold: 1. The performance vector $p(\pi)$ approximates an embedding of the student policy and also serves as the state for the teacher agent, reflecting the current student agent's capability. 2. The teacher's action space is the environment parameters. The teacher's action $a^u$ is an abstract representation of the next generated environment that is used to train the student agent, allowing the teacher to guide the student's learning process by setting tailored training environments. - **W2.** While the empirical results are promising, the evaluation is limited to a few specific domains. It would be beneficial to see broader applicability across more diverse and complex environments. **A2.** We acknowledge that this might be perceived as a shortcoming of our study, given our reliance on parameterized environments. However, we would like to emphasize that our work introduces a novel application of GPT-generated maze environments, a concept that has not been previously explored in UED literature. 
This innovation enables us to extend traditionally non-parameterized environments to be modeled and controlled by large language models or large multi-modal language models, broadening the potential for future research and applications. Our approach opens up opportunities for further exploration of diverse and complex environments in future studies. - **W3.** Figure 4 is not properly formatted (no values on the axes). **A3.** We have modified the figure. Please see Figure 1 in the attached PDF document. - **Q1.** Why were diffusion models chosen over other generative models (e.g., GANs, VAEs)? Have other models been considered or tested? **A1.** We chose diffusion models due to their recent success in generating high-quality synthetic data, and we have demonstrated their performance in generating good synthetic trajectories in Appendix C. While we acknowledge the potential of other generative models, such as GANs and VAEs, we did not test them as our focus was not on comparing the effectiveness of different models for generating synthetic trajectories. Given the strong empirical results achieved by diffusion models, we prioritized exploring their application within our framework. - **Q2.** How well does the proposed method scale to real-world applications with significantly higher complexity and variability? Have any tests been conducted in such settings? **A2.** Our current study focuses on controlled environments to establish foundational principles, and we plan to test the method in real-world applications in future work. It should be noted that we are training across environments, so there is a natural stochasticity/complexity that arises in transitions and rewards. That is to say, in Minigrid, an agent in one environment may see a different transition/reward than in another environment, and it still has to provide a good action that works across both environments. 
- **L1.** The limitations are discussed in Appendix F.1 but I think the authors should discuss the limitations in the main paper. **A1.** We appreciate your suggestion about discussing the limitations. Initially, we moved this section to the appendix due to page constraints. However, we understand the importance of highlighting these limitations in the main paper, and we will ensure they are included in the revised version. --- Rebuttal Comment 1.1: Comment: Thank you for your detailed response to my review. I will update my score accordingly.
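The hierarchical teacher-student loop described in the rebuttal above (performance vector $p(\pi)$ as the teacher's observed state, environment parameters as its action) can be sketched in a minimal toy form. Everything here is an illustrative assumption, not the authors' SHED implementation: the evaluation environments, the teacher rule, and the update sizes are all stand-ins.

```python
EVAL_ENVS = ["eval_env_%d" % i for i in range(5)]

def evaluate(student, eval_envs):
    # Performance vector p(pi): one normalized score per held-out
    # evaluation environment; it doubles as the teacher's observed state.
    return [student.get(env, 0.0) for env in eval_envs]

def teacher_policy(perf_vector):
    # The teacher's action a^u = environment parameters for the next
    # training environment. Toy rule: target the student's weakest skill.
    weakest = min(range(len(perf_vector)), key=lambda i: perf_vector[i])
    return {"target_env": weakest, "difficulty": 1.0 - perf_vector[weakest]}

def train_student(student, env_params, eval_envs):
    # Toy student update: training in the tailored environment improves
    # the corresponding skill by a fixed amount, capped at 1.0.
    env = eval_envs[env_params["target_env"]]
    student[env] = min(1.0, student.get(env, 0.0) + 0.25)
    return student

student = {}
for _ in range(50):  # budget: 50 generated environments
    p = evaluate(student, EVAL_ENVS)   # teacher observes p(pi)
    a_u = teacher_policy(p)            # teacher emits env parameters
    student = train_student(student, a_u, EVAL_ENVS)

print(evaluate(student, EVAL_ENVS))  # [1.0, 1.0, 1.0, 1.0, 1.0]
```

Even with this toy update rule, the weakest-skill targeting mirrors the intent of generating environments tailored to the student's current capability level rather than sampling them at random.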
Summary: This paper considers the Unsupervised Environment Design problem, where a teacher agent seeks to design environments to train a student. Methods such as PLR, PAIRED and ACCEL have recently shown promising performance for random, RL and evolutionary generators. This paper proposes a handful of modifications, using RL with a different objective vs. PAIRED (performance on a held-out set vs. regret), and also proposes to add synthetic data to accelerate the RL process. Strengths: * This is an interesting method in a relevant area of research. UED seems to be one of the most active areas of research with plenty of opportunities for impact. * The use of evaluation environments is sensible and novel. * The idea of combining this with Genie is incredibly exciting. It would be interesting to hear how this could be possible or could work. Is there any way to show a simple proof of concept? Weaknesses: * There appear to be two confounding features of the method, the new objective for PAIRED and then the synthetic data. Why do they make sense to combine in this way? It just feels like the authors tried to do "enough for a paper" rather than contribute something meaningful that people can build on. I say this because it's unclear how these two independent features interact with other existing algorithms. Maybe we should just do ACCEL with synthetic data for instance? Did the authors try that? If it is in the Appendix already and I missed it then I will increase my score. * The performance gains are fairly minor, and presented in an unclear fashion with just a bunch of curves on a single plot. Can we get some more rigorous analysis, for example using the recommendations from Agarwal et al., "Deep Reinforcement Learning at the Edge of the Statistical Precipice"? * The Maze experiment seems to have many inductive biases and seems distinct from the diffusion-based approach for BipedalWalker and LunarLander.
What happens if ACCEL has access to ChatGPT as an editor and then uses replay? This seems like a simpler extension that alone could be a strong paper - although it would resemble ELM (Lehman et al 2022) so it wouldn't be particularly novel. * The related work is very light. This is disappointing since the paper builds on so many related areas, such as synthetic data, diffusion models, UED, language models for evolution, procedural content generation etc. Technical Quality: 2 Clarity: 2 Questions for Authors: * The authors say "The hierarchical framework can incorporate techniques from existing UED works, such as prioritized level replay". Why did you not do this? Then it would be much clearer to see if it is state of the art. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: Covered in the Appendix. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for valuable time and feedback, and we kindly request the reviewer to consider our clarifications. - **S1.** The idea of combining this with Genie is incredibly exciting. **A1.** Genie is the first generative interactive environment trained in an unsupervised manner using unlabelled Internet videos. Genie can generate action-controllable virtual worlds described through text, synthetic images, photographs, and sketches. We propose using a diffusion model to upsample the teacher's experience, which is intuitively compatible, given the diffusion model's proven success in image generation. In our approach, the diffusion model predicts the next student policy representation, similar to how Genie forecasts the next visual frame. Thus, our framework (using a generative model) can be leveraged to learn a simple label that assists Genie in generating more coherent and continuous frames. - **W1.** There appear to be two confounding features of the method, the new objective for PAIRED and then the synthetic data. Why do they make sense to combine in this way? Maybe we should just do ACCEL with synthetic data for instance? **A1.** Our primary motivation is to train a student agent with general capabilities more efficiently under resource constraints, specifically with a limited number of environments generated and a limited training horizon compared to the open-ended training in previous UED papers. This is why we employ an RL-based teacher like PAIRED, which generates suitable environments more effectively compared to random generation methods like ACCEL. However, PAIRED uses a regret-based metric—a scalar reward that quantifies the difference between the performances of an expert agent and the current agent. Such metrics can produce difficult environments that are challenging for an RL agent to learn from because regret represents the best-case (upper bound) learning potential, not the actual learning potential of an environment. 
Consequently, it is hard to fully capture the true general capabilities of student agents. To address this limitation, we propose a hierarchical-MDP framework that better aligns with our objectives and uses the performance vector to approximate the student policy representation. However, the RL-based teacher approach can be costly in terms of collecting the experience required for training, which conflicts with our motivation to reduce interactions between environments and agents due to limited resources. To mitigate this issue, we propose using a generative model to upsample experiences, thus aiding the training of the RL teacher without incurring significant costly interactions between agents and environments. While it is theoretically possible to combine ACCEL with synthetic data, this approach would not align with the other methods. In all our settings, we ensure that student agents maintain the same level of interaction with environments and have the same number of updates. If we were to use synthetic data with ACCEL, it would create an unfair advantage by providing extra training opportunities for the student agent, and ensuring the same number of interactions would make it difficult to demonstrate the superiority of using synthetic data with ACCEL, which is why we did not consider this combination. We want to highlight that synthetic data can be decoupled from all UED methods, and our hierarchical framework also provides insights into the hierarchical RL field. - **W2.** Can we get some more rigorous analysis? **A2.** Yes. We have provided the aggregate results after min-max normalization (with range=[-21, 1]) in the partially observable Maze domain in Figure 2 of the attached PDF document. Notably, our method SHED dominates all the benchmarks in both the IQM and the optimality gap. - **W3.** What happens if ACCEL has access to ChatGPT as an editor and then uses replay? **A3.** Thank you for your insightful suggestions.
In our Maze experiment, we have already integrated ChatGPT as an editor for ACCEL and utilized replay. Our findings show that ACCEL with ChatGPT and replay demonstrates higher variance in performance, highlighting the instability of random teachers in identifying suitable environments for training student agents. - **W4.** The related work is very light. **A4.** Thank you for highlighting this; we will enhance the related work section to more thoroughly discuss the relevant works in our revised version. - **Q1.** The authors say "The hierarchical framework can incorporate techniques from existing UED works, such as prioritized level replay". Why did you not do this? Then it would be much clearer to see if it is state of the art. **A1.** Our primary motivation is to train a student agent with general capabilities more efficiently under resource constraints, specifically with a limited number of generated environments and a limited training horizon. Integrating prioritized level replay is less appealing in our framework because we aim for our RL teacher to directly generate tailored environments for the current student agent. This approach focuses on dynamic adaptation rather than replay, ensuring that each environment is specifically suited to the student's evolving capabilities. --- Rebuttal Comment 1.1: Title: Concerns remain Comment: The issue with this work is what I said in the original review - it is not clear what the motivation is, there are a million different things being compared and no clear takeaway from the work. The rebuttal hasn't changed anything for me. --- Rebuttal 2: Title: Rebuttal by Authors Comment: We thank the reviewer for the feedback. We understand your concerns and appreciate the opportunity to clarify our original motivations.
Our primary motivation, as stated in the abstract and introduction, is to train a student agent with general capabilities more efficiently under resource constraints, specifically with a limited number of generated environments and a limited training horizon. Previous UED papers rely on open-ended training, requiring thousands of generated environments and billions of environment interactions. However, this approach is unrealistic in real-world scenarios due to the limited resources available to construct environments. To address this challenge, we focus on restricting the number of generated environments (50 environments in our settings) and interaction steps (millions of interactions) to train the student agent to achieve general capabilities. This necessitates designing a teacher agent that can generate suitable environments tailored to the current student's capability level, as opposed to previous methods that relied on random environment generation. Our hierarchical framework addresses this by having the teacher take an approximation of the student policy as the observed state and output environment parameters to generate tailored environments. However, training the teacher agent is challenging due to the costly collection of the teacher's experience, which requires extensive environment interactions. This conflicts with our original motivation. To mitigate this issue, we propose using a diffusion model to upsample the real collected teacher's experience, reducing the costly interactions needed to gather the teacher's experience. Additionally, our hierarchical framework can incorporate techniques from existing UED works, such as prioritized level replay. Our proposed module is designed to be decoupled from other techniques. For example, if we have a larger budget for generated environments, we can implement a level buffer to store environments with high learning potential and revisit them in future training. We hope this explanation clarifies our motivation.
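The min-max normalization mentioned in the rebuttal above (score range [-21, 1] for the partially observable Maze domain, used before computing aggregate metrics such as the IQM) is a one-line rescaling. A minimal sketch, with the range taken directly from the rebuttal:

```python
def minmax_normalize(score, low=-21.0, high=1.0):
    # Rescale a raw episode return into [0, 1] using the score range
    # of the domain (here [-21, 1], as stated in the rebuttal above).
    return (score - low) / (high - low)

print(minmax_normalize(-21.0))  # 0.0  (worst possible return)
print(minmax_normalize(1.0))    # 1.0  (best possible return)
print(minmax_normalize(-10.0))  # 0.5
```

After this rescaling, scores from different runs and domains live on a common [0, 1] scale, which is what makes aggregate statistics like the IQM and the optimality gap comparable across methods.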
Summary: The authors of this paper use a hierarchical MDP formulation and a teacher agent trained by RL to perform curriculum learning. To address the sparse data available for the teacher agent, this paper uses diffusion models to synthesize datasets for training. This paper performs experiments on the LunarLander and BipedalWalker environments to validate their claims. Strengths: Data sparsity is one of the main limitations of using a teacher agent in curricular RL. This paper uses diffusion models to synthesize a dataset for the teacher agent. Weaknesses: - This paper designs the teacher agent via a hierarchical MDP to model the learning process of the student agent to perform curricular RL. However, Fingerprint Policy Optimization (Paul et al., 2019) also has a similar idea of modeling the learning process of the student agent. It would be interesting to explain more about how this paper's idea is related and contributes to this line of thought. - A fully trained algorithm on BipedalWalker should approach a cumulative reward of 300. Even the modified version used in the ACCEL paper is measured on a scale of 0 out of 300. However, from Figure 3, it appears all baselines perform at less than 50 on the BipedalWalker benchmark. It is questionable whether all baselines were fully trained with the right settings. Also, the performance of the proposed algorithm and those of the baselines are statistically too similar to see whether SHED improves over the baselines in the LunarLander and BipedalWalker benchmarks. Finally, other than the version of ACCEL in this paper not performing as well as the ACCEL in the original paper, I am curious whether ACCEL can be considered state-of-the-art in the benchmarks as written in line 324. Genetic Curriculum (Song et al., 2022) reports a higher cumulative reward on the BipedalWalkerHardcore environment. - Figure 4 has no scale on timestep and reward.
Technical Quality: 2 Clarity: 2 Questions for Authors: - To improve the scores on the review, I hope to see the authors explain how their algorithm works in relation to the existing works listed above. I also hope to see clarifications regarding the experiment results stated above as well. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The authors have addressed the limitations of this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for valuable time and feedback, and we kindly request the reviewer to consider our clarifications. - **Q1.** Fingerprint Policy Optimization (Paul et al, 2019) also models the learning process of the student agent. It would be interesting to explain more about how this paper's idea is related and contributes to this line of thought. **A1.** Thank you for pointing out the FPO work, which is very interesting. FPO utilizes Bayesian optimization to actively select environment variable distributions, thereby enhancing the efficiency of the policy gradient method. It models the environment's effect on policy updates and optimizes for one-step policy improvement by balancing bias and variance. FPO introduces two low-dimensional policy fingerprints: State Fingerprint: This fingerprint represents the stationary distribution over states induced by the policy. It involves fitting an anisotropic Gaussian to the set of states visited in the trajectories sampled during the estimation of the policy's performance. The size of this fingerprint is equal to the dimensionality of the state space. Action Fingerprint: This fingerprint represents the marginal distribution over actions induced by the policy. It is approximated as a Gaussian distribution derived from the actions taken by the policy across various states. The size of this fingerprint corresponds to the dimensionality of the action space. Both fingerprints serve as low-dimensional proxies for the policy, enabling the Gaussian Process used in Bayesian optimization to efficiently model the relationship between the policy parameters, environment variables, and expected returns. These fingerprints allow FPO to condition its optimization on a compact yet informative representation of the policy, facilitating robust policy learning in environments with significant rare events (SREs). 
In contrast, our approach introduces an RL teacher designed to learn the student's learning process and directly generate suitable actions and environments. While FPO uses state fingerprints to represent policy states, we employ a performance vector over the evaluation environments. This approach provides a more accurate representation of the student's general capabilities. Another advantage of using RL is its consideration of long-term rewards, which can significantly enhance the student's general capabilities. Our method aims to optimize the learning trajectory of the student agent by dynamically generating environments tailored to its evolving learning needs. - **Q2.** It is questionable whether all baselines were fully trained with the right settings. Also, the performance of the proposed algorithm and those of the baselines are statistically too similar to see whether SHED improves over the baselines in the LunarLander and BipedalWalker benchmarks. **A2.** We would like to clarify this point. First, our motivation lies in training a student agent with general capabilities more efficiently under resource constraints. Previous algorithms, such as ACCEL, focus on randomly generated environments for open-ended agent training. For example, they train the agent with about 30k PPO updates (1b step), while in our setting, due to the limited number of generated environments and the limited training horizon, we use only about 1k PPO updates (50 environments) to train the student agent. Therefore, due to resource constraints, there are fewer interactions between the agent and the environment. In this case, the student agent is not the strongest and does not achieve the best possible performance. Note that we have conducted an ablation study in the appendix, increasing the budget of the resource constraints, i.e., more training environments and longer training horizons.
For example, in the left figure of Figure 7, our proposed algorithm SHED can efficiently train the student agent to achieve better general capability in a shorter time horizon, but as the training horizon increases, the performances of ACCEL and SHED converge and tend to be the same, which shows that without considering resource constraints, our algorithm can also match the state of the art (SOTA). Under resource constraints, our proposed environment generation process can better generate suitable environments for the current student policy, thereby enhancing its general capability, especially when there is a constraint on the number of generated environments and the training horizon. Second, Figure 3 reflects the average performance over a set of evaluation environments, rather than a single vanilla environment. We consider improving the general capabilities of student agents rather than focusing on a single environment. Please note that our test environments are randomly generated and are usually much more challenging than the vanilla BipedalWalker environment; thus overall performance is much lower than 300. - **Q3.** Figure 4 has no scale on timestep and reward. **A3.** We have modified the figure. Please see Figure 1 in the attached PDF document. --- Rebuttal Comment 1.1: Comment: Thank you for your response! I believe Q1 and Q3 are now addressed. However, I'm sorry if I missed the points the authors have already made, but I have some follow-up questions regarding the answers to Q2. "30k PPO updates (1b step), while in our setting, due to the limited generated environment and training horizon, for example, we use only about 1k PPO updates (50 environments) to train the student agent." - Would it be OK to rephrase the sentence as follows? : ACCEL was trained with 1 billion timesteps to interact with the environment and the PPO policy was updated 30,000 times.
SHED interacted with 50 environments and had 1,000 PPO updates to the policy. - If the above is true, how many steps did SHED spend interacting with the environment in total for the training? - Why is SHED's interaction measured in the number of environments and not in steps? - When you say 50 environments, are we referring to 50 different obstacle courses in BipedalWalker? - What are the cases where there would be concerns about how many obstacle courses you load for training, instead of how many timesteps the agents spent interacting with the world? - Why does Figure 3 report only 1000 training steps when the baselines were trained with a scale of a billion steps? --- Rebuttal 2: Title: Rebuttal by Authors Comment: Thank you for your prompt response. **Q1.** Would it be OK to rephrase the sentence as follows? : ACCEL was trained with 1 billion timesteps to interact with the environment and the PPO policy was updated 30,000 times. SHED interacted with 50 environments and had 1,000 PPO updates to the policy. **A1.** Yes, that is true. We will revise the sentence accordingly to improve clarity. **Q2.** If the above is true, how many steps did SHED spend interacting with the environment in total for the training? **A2.** SHED interacted with 50 environments. Each environment ran for 4 epochs, and each epoch included 5 PPO minibatches, resulting in a total of 20 PPO updates per environment. Across all 50 environments, this amounted to 1,000 PPO updates. In the LunarLander environment, there were around 1,000,000 total environment steps, while the BipedalWalker environment and the Maze environment involved around 10,000,000 and 400,000 total environment steps across all environments, respectively. **Q3.** Why is SHED's interaction measured in the number of environments and not in steps? **A3.** ACCEL was trained on thousands of environments.
We would like to emphasize that the motivation of our work is to train a student agent with general capabilities more efficiently under resource constraints, i.e., with a limited number of generated environments. Please note that the number of generated environments is crucial for improving the agent's general capabilities, as we only "shallowly" train the agent on each generated environment. Therefore, we use the number of environments as a metric, which provides a clearer understanding of the training conditions. Additionally, we report results in terms of the number of PPO updates to ensure a fair comparison across all approaches with the same number of updates, consistent with other works like PAIRED and ACCEL. **Q4.** When you say 50 environments, are we referring to 50 different obstacle courses in BipedalWalker? **A4.** This refers to 50 different instances of the environment that the student agent will be trained in to acquire general capabilities. **Q5.** What are the cases where there would be concerns about how many obstacle courses you load for training, instead of how many timesteps the agents spent interacting with the world? **A5.** In general, these two quantities are correlated. In this work, we focus on training a student agent with general capabilities more efficiently under resource constraints. The number, diversity, and complexity of the environments are crucial for improving the agent's general capabilities, as we only "shallowly" train the agent on each environment. In such cases, the number of environments would be a concern. **Q6.** Why does Figure 3 report only 1000 training steps when the baselines were trained with a scale of a billion steps? **A6.** There is a misunderstanding: the 1000 training steps refer to the number of PPO updates, not the environment steps. All training algorithms for UED (PAIRED, PLR, ACCEL, and many others) report results in terms of the number of PPO updates. We also do the same in this paper.
--- Rebuttal Comment 2.1: Comment: Thank you for your detailed response! I'll update the scores accordingly. --- Reply to Comment 2.1.1: Title: Rebuttal by authors Comment: Thank you for taking the time to read our response. We’re glad that some of your concerns have been addressed. If there are any remaining issues or anything that still needs clarification, please let us know.
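The PPO-update accounting in A2 of the thread above is simple arithmetic; spelled out with the numbers taken directly from the rebuttal (50 environments, 4 epochs each, 5 minibatches per epoch):

```python
# Numbers from A2 above: PPO updates accumulate across all generated
# environments, with a fixed number of epochs and minibatches per env.
num_environments = 50
epochs_per_environment = 4
minibatches_per_epoch = 5

updates_per_environment = epochs_per_environment * minibatches_per_epoch
total_ppo_updates = num_environments * updates_per_environment

print(updates_per_environment, total_ppo_updates)  # 20 1000
```

This is why the x-axis of the learning curves is reported in PPO updates (1,000 total) even though the underlying environment-step counts differ per domain (roughly 1M for LunarLander, 10M for BipedalWalker, 400k for Maze, per the rebuttal).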
Rebuttal 1: Rebuttal: We thank the reviewers for their time and valuable feedback! We present the new results in the attached PDF. All feedback will be incorporated into the updated manuscript. Pdf: /pdf/02a4e962b8a8d43cbe20b901e8e6f4b7a9d12d1c.pdf
NeurIPS_2024_submissions_huggingface
2024
Hyper-SD: Trajectory Segmented Consistency Model for Efficient Image Synthesis
Accept (poster)
Summary: This paper introduces Hyper-SD, a novel framework designed to mitigate the computational overhead associated with the multi-step inference process of Diffusion Models (DMs). The proposed framework addresses the limitations of existing distillation techniques, which focus on either ODE Trajectory Preservation or ODE Trajectory Reformulation and often suffer from performance degradation or domain shifts. Hyper-SD synergistically combines the advantages of both approaches while maintaining near-lossless performance during step compression. Key innovations include Trajectory Segmented Consistency Distillation for preserving the original ODE trajectory, human feedback learning to enhance low-step performance, and score distillation to improve low-step generation capabilities. The framework also uniquely leverages a unified LoRA for all inference steps. Extensive experiments and user studies demonstrate Hyper-SD's superior performance across 1 to 8 inference steps for both SDXL and SD1.5, surpassing existing methods such as SDXL-Lightning in CLIP and Aes scores. Strengths: 1. The paper comprehensively considers segmented distillation and the incorporation of human feedback scores, potentially outperforming the original diffusion models in certain cases. It even enables one-step image generation. 2. The experiments are thorough and well-executed and the paper is well-written. Weaknesses: 1. The additional introduction of the human feedback model might result in excessive memory usage. Since the evaluation steps require decoding and an additional model, could the actual computational resource consumption be clarified? 2. The inclusion of reward terms in the loss functions of LCM-based methods has been explored in related works, such as Reward Guided Latent Consistency Distillation. A comparison with these methods, if feasible, would be beneficial.
Technical Quality: 2 Clarity: 2 Questions for Authors: refer to weakness Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: refer to weakness Flag For Ethics Review: ['No ethics review needed.'] Rating: 3 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind words about our method, experiments and writing. We would like to answer every question you have in the following. W1: We would like to clarify that our RLHF training is separated from TSCD and we apply RLHF through LoRA merging, so there is no excessive memory usage in our distillation process. During RLHF, the convergence speed of the RLHF LoRA is exceptionally rapid, requiring approximately 8000 iterations for around 96 A100 GPU hours. More importantly, the reward model is not involved in the evaluation process, so no additional model needs to be loaded or run at inference time. W2: Thanks for your suggestion. We have read the provided paper carefully and will include it in our related work and discussion. The most essential difference is that we view feedback learning as a trajectory-reformulating technique parallel to trajectory preservation, and thus the training process is also separated. In Reward Guided Latent Consistency Distillation (RG-LCM), by contrast, the authors view it as a kind of guidance that biases the distillation process through a unified training objective. Our advantage is that we do not need to modify the original model's scheduler (i.e., DDIM) and thus have greater generalizability, while in RG-LCM the model needs to directly predict x_0 and fit the LCM scheduler. Moreover, our approach supports one-step generation with better trajectory-preserving and trajectory-reformulating pieces. --- Rebuttal 2: Title: please respond to authors Comment: Hi, Please read the other reviews and respond to the authors' rebuttal. You should edit your review accordingly. Thanks.
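The LoRA-merging step described in the W1 answer above (training the RLHF LoRA separately from TSCD and folding them together afterwards) can be illustrated with a toy sketch. The shapes, scales, and the additive merge rule below are assumptions for illustration, not the paper's exact configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 6, 2

# Frozen base weight and two independently trained low-rank adapters.
W_base = rng.normal(size=(d_out, d_in))
B_tscd, A_tscd = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))
B_rlhf, A_rlhf = rng.normal(size=(d_out, rank)), rng.normal(size=(rank, d_in))

def merge(w, adapters, scales):
    # Fold each LoRA delta (B @ A) into the weight with a per-adapter scale.
    for (B, A), s in zip(adapters, scales):
        w = w + s * (B @ A)
    return w

W_merged = merge(W_base, [(B_tscd, A_tscd), (B_rlhf, A_rlhf)], scales=(1.0, 1.0))

# Merging is additive: applying both adapters at once equals applying
# them one after the other, which is why the separately trained RLHF
# LoRA can simply be merged on top of the TSCD LoRA.
W_seq = merge(merge(W_base, [(B_tscd, A_tscd)], (1.0,)),
              [(B_rlhf, A_rlhf)], (1.0,))
print(np.allclose(W_merged, W_seq))  # True
```

The additivity is what makes "we apply RLHF through LoRA merging" cheap at inference: after the fold, the merged weight is a single dense matrix and neither adapter incurs extra memory or compute.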
Summary: The paper introduces a novel framework named Hyper-SD, which enhances diffusion models' efficiency by combining consistency distillation, human feedback learning, and distribution matching. Hyper-SD performs consistency distillation in segments to preserve the original ODE trajectory, maintaining high-quality generation while reducing inference steps. Human feedback learning further boosts performance in low-step inference, mitigating performance loss during the distillation process. Experimental results demonstrate that Hyper-SD achieves state-of-the-art performance from 1 to 8 inference steps, significantly surpassing existing methods, particularly in aesthetic scores and CLIP scores. Strengths: 1. The components of this work are clear: a) Based on TCD, the author segments the ODE and then performs distillation on these segments. b) To enhance the model's overall performance, the author utilizes RLHF techniques. c) The author further employs the DMD scheme to optimize the results obtained in a single step. 2. The results are solid, clearly demonstrating the effectiveness of the proposed approach. Weaknesses: The primary issue is the insufficiency of the ablation experiments. The whole Hyper-SD seems to contain two modules, a TSCD LoRA and an RLHF LoRA, and the TSCD LoRA for 1 step needs a further DMD loss. However, the ablation study does not demonstrate the effectiveness of each part. Here are my concerns: 1. In the TSCD part, the distance metric includes GAN and L2. However, in TCD, it only contains L2. Is this metric similar to CTM (L2+GAN) or different? And what is the contribution of the GAN part? 2. The RLHF module needs another LoRA; is this the only difference between Tab. 3 w/ and w/o RLHF? 3. May I know the RECALL of the module w/ (w/o) GAN, w/ (w/o) RLHF? 4. Is the RLHF module trained totally separately from the TSCD module? 5. Does the one-step model follow the same training pipeline as TSCD (i.e., [8,4,2,1] + RLHF) but with a DMD loss based on some base model like SDXL?
Or is it a continually trained model based on a TSCD LoRA? 6. For the one-step generator used for comparison, does it include the RLHF LoRA? 7. A minor question: is $\Delta t$ (or, we can say, $t_n - t_{n-1}$) the same for different segments? My feeling is that it should be the same, but in Fig. 2, the lengths of the ODE solver steps are obviously different. Technical Quality: 3 Clarity: 3 Questions for Authors: Please check Weaknesses. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: the limitations are discussed Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind words about our method and main results. We would like to answer every question you have in the following. W0: We fully understand your primary concerns about the ablation studies. To further validate the effectiveness of our proposed method in multi-step and one-step generation, we would like to draw your attention to Table 1 in the global rebuttal PDF, where we conduct experiments on TCD+RLHF and TCD+DMD. 1. From the results, performance decreases significantly across different training settings when TSCD is replaced with TCD. 2. As for the RLHF component, we can see that vanilla TCD demonstrates a severe drop in aesthetics when no RLHF LoRA is applied. In contrast, TCD+RLHF performs better but still lags behind our full Hyper-SD, thanks to the refined trajectory-preserving capability of TSCD. 3. For one-step generation, the conclusions are similar: only after our TSCD achieves better trajectory preservation does the score estimation become more accurate and the power of DMD to match the teacher distribution become stronger. W1: Our GAN implementation is different from CTM, which adopts a StyleGAN-XL[1] discriminator for unconditional or classifier-guidance image synthesis. Since prompting in text-to-image diffusion is quite important, we turn to a more advanced discriminator from SDXL-Lightning[2] that uses a U-Net backbone and the student model for initialization. Therefore, the inputs to the discriminator are the noisy latents and timesteps together, and it can better capture the subtle variation between the predictions and targets across timesteps. To highlight the importance of introducing the GAN loss, we conduct extra ablation studies in Table 3 of the global rebuttal PDF. The results show that even if we also use an adversarial loss for TCD, its performance is not as good as TSCD's, which verifies our leadership in trajectory preservation. W2: Yes, it is. We apply RLHF through LoRA merging.
It is exciting to see that simply merging the TSCD and RLHF LoRAs brings significant benefits instead of distortion or collapse, which further demonstrates the importance of breaking the training objectives down into trajectory-preserving and trajectory-reformulating pieces. W3: We have provided a more detailed comparison w/ and w/o RLHF (Table 1) and w/ and w/o GAN (Table 3) in the global rebuttal pdf. The results fully demonstrate the effectiveness of our proposed modules. For one, RLHF is effective even on TCD, and the results are even better on TSCD with its better trajectory preservation. For another, our GAN loss outperforms the L2-only objective even on TCD, and TSCD benefits even more from the adversarial loss. W4: Not exactly. The RLHF LoRA is trained with the frozen TSCD LoRA applied, and we merge them into one after RLHF for convenience. W5: It is a continually trained model based on the two-step TSCD LoRA, since we adopt a progressive training strategy. W6: Yes it is. The result for DMD in Table 3 of the manuscript is with RLHF. We apologize for the confusion and have provided a clearer comparison for one-step generation w/o RLHF in Table 1 of the global rebuttal pdf. W7: It is the same within a stage but different between stages. For example, in the 4-segment stage, $\Delta t$ is 1000//16 = 62, while in the 2-segment stage, $\Delta t$ is 1000//8 = 125. > [1] Sauer, Axel, Katja Schwarz, and Andreas Geiger. "StyleGAN-XL: Scaling StyleGAN to large diverse datasets." ACM SIGGRAPH 2022 Conference Proceedings. 2022. > [2] Lin, Shanchuan, Anran Wang, and Xiao Yang. "SDXL-Lightning: Progressive adversarial diffusion distillation." arXiv preprint arXiv:2402.13929 (2024). --- Rebuttal Comment 1.1: Comment: I have reviewed the authors' response. Despite some issues in the experimental section, I believe the paper demonstrates sufficient innovation. The authors' response has nearly resolved my concerns. I consider this paper to be slightly above the acceptance threshold.
However, given that some reviewers have given it overly low scores, I am raising my score to 7 to balance my independent evaluation with the overall average score of the manuscript. --- Rebuttal 2: Title: respond to authors rebuttal Comment: Hi, Please read other reviews and respond to the authors' rebuttal. You should edit your review accordingly. Thanks.
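As an aside, the per-stage $\Delta t$ rule stated in the authors' W7 answer (constant within a stage, e.g. 1000//16 = 62 for the 4-segment stage and 1000//8 = 125 for the 2-segment stage) can be sketched in a few lines. A minimal sketch: the ×4 divisor linking segment count to step count is inferred from those two quoted values and is an assumption, not the authors' stated formula.

```python
def stage_delta_t(num_segments: int, total_timesteps: int = 1000) -> int:
    """Solver step size within one TSCD training stage.

    Assumption: each segment is subdivided into 4 solver sub-intervals,
    inferred from the rebuttal's quoted values (1000//16 and 1000//8).
    """
    return total_timesteps // (num_segments * 4)

print(stage_delta_t(4))  # 4-segment stage -> 1000//16
print(stage_delta_t(2))  # 2-segment stage -> 1000//8
```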
Summary: This paper presents an approach for distilling a diffusion model into a multi-step generator. Previous distillation methods typically fall into two categories: those that preserve the ODE (Ordinary Differential Equation) trajectory and those that match the teacher model at the distribution level. This research offers a unified solution by integrating ideas from both categories. Specifically, it employs a multi-segment consistency distillation objective, akin to ODE trajectory-preserving methods like the Consistency Trajectory Model, and incorporates a combination of GAN (Generative Adversarial Network) and distribution matching distillation losses from the distribution-level matching methods. Additionally, an important contribution of this work is the incorporation of human or reward model feedback in the distillation process. The resulting method demonstrates high-quality text-to-image generation. Strengths: S1. The paper is well-written and easy to follow, providing a thorough introduction to the background and connections to previous works. S2. The final approach is effective, with solid ablation studies of various components. The evaluations are comprehensive and demonstrate strong results. S3. The utilization of reward optimization in diffusion distillation is relatively new, showcasing the potential of unifying distillation with other post-training techniques to enhance final performance in terms of aesthetics, alignment, and efficiency. Weaknesses: W1. The final method combines well-explored modules (consistency trajectory model, GAN, and DMD) into a unified framework. While this effective integration is a contribution, it is not particularly novel and doesn't introduce significantly new knowledge. W2. Regarding the evaluation, diversity comparisons are missing from the current paper. Although FID is not the best metric, the authors should still report it for a comprehensive understanding. Additionally, assessing per-prompt diversity is important.
The authors are encouraged to generate a set of images with the same prompt and measure their variations as a proxy metric. I am particularly interested in the influence of reward optimization on generation diversity. W3. For one-step generation, the improvements appear small compared to vanilla DMD (Table 3). Technical Quality: 3 Clarity: 3 Questions for Authors: My concerns are outlined in the weaknesses section. While the paper demonstrates strong results and well-executed experiments, the overall method lacks significant novelty or performance improvement. Therefore, I will give it a borderline accept as my initial rating. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind words about our writing, experiments and reward optimization approaches. We would like to answer each of your questions below. W1: As for technical novelty, we would like to highlight and summarize as follows: 1. We are the first to split the acceleration objective of diffusion models into trajectory-preserving and trajectory-reformulating parts. While previous approaches focus on only one specific aspect, we break the objective down into pieces and derive different technical solutions for each, bringing new insights and perspectives to this field. 2. For trajectory preservation, we build on TCD[2] and further detach the training objectives, assigning neighboring timesteps and more distant timesteps to different training stages, which reduces training difficulty and mitigates accumulation errors. 3. For trajectory reformulation, we propose human feedback learning as an effective technique to bias the output distribution toward human preference. It is exciting to see that simply merging the TSCD and RLHF LoRAs brings significant benefits instead of distortion or collapse, which further demonstrates the importance of breaking the training objectives down into pieces. W2: We agree that assessing per-prompt diversity is quite important and would like to draw your attention to Table 2 in the global rebuttal pdf, where we picked 100 prompts and generated 4 images per prompt for each method shown. We extract the CLIP image embedding of each image and report the pair-wise average similarities for quantitative evaluation. The results show that our diversity without RLHF is quite similar to that of other acceleration approaches. While biasing the output distribution toward human preference inevitably compromises diversity, our reward optimization does not sacrifice it significantly and achieves an excellent trade-off compared to the SDXL base model, with only a slight similarity increase (+0.0128).
In Figure 2 of the global rebuttal pdf, we also illustrate several examples for qualitative comparison. W3: We apologize for the confusion. The DMD result in Table 3 of the manuscript is not vanilla but is with RLHF and without TSCD. To further validate the effectiveness of our proposed method in one-step generation, we have provided another ablation study in Table 1 of the global rebuttal pdf. The results show that vanilla DMD does not yield competitive results compared to consistency-distilled models in terms of CLIPScore, ImageReward and PickScore, while our TSCD performs better. --- Rebuttal Comment 1.1: Comment: I thank the authors for their reply. I raise my score to 6. I agree with the authors about the innovation of integrating reward modeling in distillation. For diversity, it might be better to use something more pixel-correlated, e.g. the LPIPS distance. --- Reply to Comment 1.1.1: Comment: Thanks again for your generous reply. We have followed your advice and evaluated the generation diversity through the pixel-level LPIPS metric. Using the same 100 prompts and random seeds to generate 4 images each as in Table 3 of the global rebuttal pdf, we list the average LPIPS distances as follows:

- SDXL-Base (25-step UNet): 0.6991
- SDXL-LCM (4-step LoRA): 0.6782
- SDXL-TCD (4-step LoRA): 0.6668
- SDXL-Lightning (4-step LoRA): 0.6895
- TSCD (4-step LoRA): 0.7008
- TSCD+RLHF (4-step LoRA): 0.6993

Since a higher LPIPS distance indicates better diversity, the results are quite a surprise to us: our method surpasses all the other acceleration approaches and even the baseline 25-step base model. It is also reasonable that with RLHF the LPIPS distance degrades slightly. Benefiting from our better trajectory-preserving technique, TSCD gains +0.034 LPIPS distance over vanilla TCD.
Since LPIPS is sensitive to subtle variations in color and detail, and the original 25-step base model tends toward a grayish color style, our RLHF complements it nicely by producing more vivid outputs, so the metric does not degrade significantly (-0.0015). If you have further questions, please feel free to comment here. We welcome more discussion at any time. Thanks.
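The diversity protocols in this thread (average pairwise CLIP similarity, then average pairwise LPIPS distance over 4 images per prompt for 100 prompts) share the same pairwise-averaging skeleton. A minimal sketch with a pluggable distance function: `avg_pairwise` and `per_prompt_diversity` are illustrative names, and a real run would substitute `lpips.LPIPS` or CLIP cosine similarity for the toy scalar distance used below.

```python
from itertools import combinations
from typing import Callable, Dict, List, Sequence

def avg_pairwise(items: Sequence, dist: Callable) -> float:
    """Average distance over all unordered pairs (4 images -> 6 pairs)."""
    pairs = list(combinations(items, 2))
    return sum(dist(a, b) for a, b in pairs) / len(pairs)

def per_prompt_diversity(images_by_prompt: Dict[str, List], dist: Callable) -> float:
    """Mean of per-prompt average pairwise distances; higher = more diverse."""
    scores = [avg_pairwise(imgs, dist) for imgs in images_by_prompt.values()]
    return sum(scores) / len(scores)

# toy usage with scalar "images" and absolute difference as the distance
toy = {"p1": [0.0, 1.0, 2.0, 3.0], "p2": [5.0, 5.0, 5.0, 5.0]}
print(per_prompt_diversity(toy, lambda a, b: abs(a - b)))
```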
Summary: This paper studies the distillation problem of diffusion models. Specifically, it introduces Trajectory Segmented Consistency Distillation to progressively perform consistency distillation within pre-defined time-step segments, which facilitates the preservation of the original ODE trajectory. Besides, the human feedback learning and Distribution Matching Distillation (DMD) techniques are also included in the proposed model. Strengths: + This paper is well written and easy to follow. + The studied problem is interesting, and the proposed technique builds on previous works, extending them to a more general training strategy. + The user study is informative in demonstrating the superior performance of the proposed model. Weaknesses: - The proposed method of TSCD sounds interesting but some details may be questionable. First, intuitively, the randomness of the time segments and the choices of t_end may introduce randomness into the model training process. How can we guarantee the model will always converge to the same results given these different design choices? Second, compared with CTM, how can we guarantee that the multi-step training with different time steps actually brings the claimed advantages? For example, the multi-stage training may also introduce accumulated errors. - It is interesting to add human feedback as guidance. The paper also mentions that human feedback learning may distort the output distribution. This is quite important. How can we know how human feedback learning is changing the distribution? How can that be detected and evaluated? It is not very obvious why employing the LoRA merge technique with TSCD can achieve a flexible balance between generation quality and output domain similarity. Can the authors elaborate more on this? Are there any experiments or evidence to support this claim? - Concern about the technical novelty. The proposed model consists of multiple techniques that have been introduced and commonly used in previous works.
Although the user study shows that performance is promising when all these modules are added together, and the proposed method looks reasonable, there may not be essential out-of-the-box ideas or findings that are very exciting. - A more detailed ablation study would be helpful. It is appreciated that the authors conduct an ablation study to evaluate the effects of different modules. However, a more thorough ablation study beyond Table 3 is desirable, since it is still not very clear which of the three components plays the most important role in the proposed method. For example, without human feedback, the performance of TCD and TSCD is quite close, especially with more steps. How about adding human feedback to TCD and then comparing? Is DMD added in both the TCD and TSCD experiments to ensure a fair comparison? Technical Quality: 3 Clarity: 3 Questions for Authors: Please see the weaknesses for specific questions. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 2 Limitations: The authors discuss the limitations of the proposed method in the appendix, although it focuses more on potential future work and follow-up directions rather than limitations and the potential negative societal impact of their work. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your kind words about our writing, intuition and experiments. We would like to answer each of your questions below. W1: Firstly, the randomness of t_end, which broadened the boundary condition of the original consistency distillation in CM[1] (Theorem 1), has been proven valid and guaranteed to converge in TCD[2] (Theorem 4.1) and CTM[3] (Appendix B.1). Moreover, our time segments within each stage are fixed and identical; they only become progressively smaller between stages. Secondly, compared with CTM, our multi-stage design breaks the training objective down into several pieces. During the early training stages, the model focuses on learning the consistency of nearby neighboring timesteps; in the later stages, the model can easily handle small intervals and extends its capability to the consistency of more distant timesteps or larger intervals. In other words, the training objectives overlap between stages: even with fewer segments in the later stages, there is still a probability that a target timestep with a small interval will be sampled. From the perspective of accumulation error, the error comes from the difficulty of fitting the model from simple tasks (smaller intervals) to difficult tasks (larger intervals), and this is much smaller than the fitting error that comes from learning a mixed task (arbitrary intervals). Similar ideas are evidenced by Progressive Distillation[4], where the promising distilled results indicate a smaller fitting error compared with the original non-distilled diffusion model. In our experiments, we also conduct extensive ablation and user studies to validate our superior performance against TCD across different settings and inference steps. We also conduct extra experiments in Table 1 of the global rebuttal pdf, which will be detailed in the W4 section below.
W2: Exactly. We would like to highlight that one of our main contributions is using human feedback learning as an effective technique to reformulate better trajectories, so that the output distribution is biased toward human preference rather than badly distorted. Since human preference is subjective and hard to express through quantitative metrics, our assessment is also quite straightforward: extensive user studies, as shown in the manuscript. To elaborate, we would like to draw your attention to Figure 1 in the global rebuttal pdf, where we demonstrate the effectiveness of human feedback learning with different mixing ratios for the TSCD and RLHF LoRAs. W3: As for technical contributions, we would like to highlight and summarize as follows: 1. We are the first to split the acceleration objective of diffusion models into trajectory-preserving and trajectory-reformulating parts. While previous approaches focus on only one specific aspect, we break the objective down into pieces and derive different technical solutions for each, bringing new insights and perspectives to this field. 2. For trajectory preservation, we build on TCD[2] and further detach the training objectives, assigning neighboring timesteps and more distant timesteps to different training stages, which reduces training difficulty and mitigates accumulation errors. 3. For trajectory reformulation, we propose human feedback learning as an effective technique to bias the output distribution toward human preference. It is exciting to see that simply merging the TSCD and RLHF LoRAs brings significant benefits instead of distortion or collapse, which further demonstrates the importance of breaking the training objectives down into pieces. W4: We fully understand your concerns about the ablation studies.
To prove the effectiveness of TSCD against TCD, we would like to draw your attention to Table 1 in the global rebuttal pdf, where we conduct experiments on TCD+RLHF and TCD+DMD. The results show that our TSCD demonstrates superior performance consistently across different training settings, which proves the robustness of TSCD with its reduced training difficulty and fewer accumulation errors. > [1] Song, Yang, et al. "Consistency Models." International Conference on Machine Learning. PMLR, 2023. > [2] Zheng, Jianbin, et al. "Trajectory consistency distillation." arXiv preprint arXiv:2402.19159 (2024). > [3] Kim, Dongjun, et al. "Consistency Trajectory Models: Learning Probability Flow ODE Trajectory of Diffusion." The Twelfth International Conference on Learning Representations. > [4] Salimans, Tim, and Jonathan Ho. "Progressive Distillation for Fast Sampling of Diffusion Models." International Conference on Learning Representations. --- Rebuttal Comment 1.1: Comment: Thanks for the authors' efforts in answering my questions. Some of my concerns have been well addressed, and I would like to raise my score to borderline accept. Regarding W2, it is great to see how the RLHF weighting factor influences the generated images. But without a thorough and rigorous study, it is still hard to conclude from these few samples how the distribution manifold is changed, or what kind of bias is introduced. This may raise further concerns when the proposed model is used in subsequent applications. --- Rebuttal 2: Title: Response to rebuttal. Comment: Hi, Please read other reviews and respond to the authors' rebuttal. You should edit your review accordingly. Thanks.
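The LoRA-merging scheme discussed throughout this thread (applying the TSCD and RLHF LoRAs with different mixing ratios) amounts to adding weighted low-rank deltas onto the frozen base weights. A minimal NumPy sketch under that reading; the function name, shapes, and mixing ratios are illustrative, not the authors' implementation.

```python
import numpy as np

def merge_loras(base_w, loras, alphas):
    """Merge LoRA deltas into a frozen base weight:
    W = W0 + sum_i alpha_i * (B_i @ A_i)."""
    w = base_w.copy()
    for (B, A), alpha in zip(loras, alphas):
        w += alpha * (B @ A)
    return w

rng = np.random.default_rng(0)
d, r = 8, 2  # illustrative layer width and LoRA rank
W0 = rng.normal(size=(d, d))
tscd = (rng.normal(size=(d, r)), rng.normal(size=(r, d)))
rlhf = (rng.normal(size=(d, r)), rng.normal(size=(r, d)))
# e.g. full TSCD LoRA plus a down-weighted RLHF LoRA
W = merge_loras(W0, [tscd, rlhf], alphas=[1.0, 0.5])
```

Varying the second alpha corresponds to the mixing-ratio sweep shown in Figure 1 of the authors' global rebuttal pdf.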
Rebuttal 1: Rebuttal: Dear all, For each question from the different reviewers, we have responded individually with a targeted rebuttal below. We put all the figures and tables into the pdf file submitted here. We hope this addresses your concerns, and we welcome more discussion at any time. Regardless of the final decision, thank you all for your hard work. Best, Authors of Submission 2694. Pdf: /pdf/65156c954fadbb1f31e988963d33d38fcc9b8173.pdf
NeurIPS_2024_submissions_huggingface
2024
How to Continually Adapt Text-to-Image Diffusion Models for Flexible Customization?
Accept (poster)
Summary: This paper introduces a novel approach called CIDM to address the challenge of Concept-Incremental Flexible Customization (CIFC) in text-to-image generation. The authors identify two key issues in CIFC: catastrophic forgetting of previously learned concepts and concept neglect during multi-concept composition. The proposed CIDM tackles these problems through several innovative components, including a concept consolidation loss, elastic weight aggregation, and a context-controllable synthesis strategy. The paper provides experiments on various customization tasks to demonstrate the effectiveness of their approach compared to existing methods. Strengths: 1. The paper addresses a practical problem in custom text-to-image generation, namely the ability to continually learn new concepts without forgetting old ones. 2. The ablation studies provide insights into the contribution of each proposed component, strengthening the paper's technical depth. 3. The approach is model-agnostic and has been tested on different backbone architectures (SD-1.5 and SDXL), showing its broad applicability. Weaknesses: 1. The symbol definitions are too complex and hard to follow. 2. The paper introduces several new hyperparameters. How sensitive is the model to these? 3. How well does the model perform across different application domains and with varied types of personalized concepts? Technical Quality: 3 Clarity: 2 Questions for Authors: See weaknesses. Confidence: 4 Soundness: 3 Presentation: 2 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: The symbol definitions are too complex and hard to follow. A1: Thanks for your valuable comment. We will carefully polish the symbol definitions and introduce a table of notation definitions to make the paper easier to follow in the final revision. Q2: This paper introduces several new hyperparameters. How sensitive is the model to these? A2: Thanks for your insightful comment. We employ Stable Diffusion (SD-1.5) [38] as the pretrained model and use image alignment (IA) as the metric to investigate the effect of the hyperparameters {$\gamma_1, \gamma_2, r$} on the performance of our model. Setting the LoRA rank to $r=4$, we analyze the effect of {$\gamma_1, \gamma_2$} over the range {0.01, 0.1, 1, 10}. Moreover, we run hyperparameter experiments for $r$ within the range {2, 4, 6, 8}, with the balancing weights set to $\gamma_1=0.1, \gamma_2=1.0$. From the following results, we observe that our model has stable performance over a wide range of hyperparameters {$\gamma_1, \gamma_2, r$}. Furthermore, our model is not sensitive to the hyperparameter selection, making it effective and robust for learning continuous text-guided concept customization tasks.

| $\gamma_1$ \ $\gamma_2$ | 0.01 | 0.1 | 1.0 | 10 |
|:-:|:-:|:-:|:-:|:-:|
| **0.01** | 77.3 | 77.7 | 77.9 | 77.5 |
| **0.1** | 77.7 | 77.9 | 78.0 | 77.6 |
| **1.0** | 77.5 | 77.8 | 77.9 | 77.9 |
| **10** | 77.8 | 77.6 | 77.4 | 76.6 |

| rank | 2 | 4 | 6 | 8 |
|:-:|:-:|:-:|:-:|:-:|
| **IA (%)** | 76.2 | 78.0 | 77.8 | 77.9 |

Q3: How well does the model perform across different application domains and with varied types of personalized concepts? A3: Thanks for your constructive comments. Inspired by [10][46], we construct a new challenging concept-incremental learning (CIL) dataset including ten continuous text-guided concept customization tasks to verify the effectiveness of our model under the CIFC setting.
This dataset has varied types of personalized concepts, collected from different application domains. Moreover, the concepts differ significantly from one another. Specifically, as shown in Fig. 8, seven tasks have different object concepts (i.e., V2 duck toy, V4 backpack, V5 teddy bear) and pet concepts (i.e., V1 dog, V3 cat, V7 dog and V9 cat) from [40, 22]. Besides, the remaining three tasks have different style concepts (i.e., V6, V8 and V10 styles) collected from websites. Considering the practicality of the CIFC setting, we introduce some semantically similar concepts (e.g., V1 and V7 dogs, V3 and V9 cats), making the CIL dataset more challenging under the CIFC setting. In particular, all comparison experiments of this paper are conducted on this challenging dataset. As shown in Tabs. 1-2 and Figs. 2-5 of the submitted manuscript, our model achieves better performance than other state-of-the-art (SOTA) comparison methods in resolving the practical concept-incremental flexible customization (CIFC) problem, under both qualitative and quantitative evaluations. --- Rebuttal 2: Comment: Thanks for the authors' responses. Some of my concerns have been addressed. --- Rebuttal Comment 2.1: Title: Thank you for your valuable comments! Comment: Dear Reviewer SpKd, We are pleased that we have addressed some of your concerns. We are very grateful to you for dedicating your time and effort to evaluating our paper. Best regards, Authors of Paper #1919
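The $\gamma_1 \times \gamma_2$ sensitivity sweep reported in A2 of the rebuttal above reduces to a small grid search. A minimal sketch, assuming a hypothetical `train_and_eval_ia` callable that trains the model with the given balancing weights and returns image alignment; the toy evaluator below merely peaks at the reported best cell (IA 78.0 at $\gamma_1=0.1, \gamma_2=1.0$) and is not real training.

```python
from itertools import product

def sweep(train_and_eval_ia, grid=(0.01, 0.1, 1.0, 10.0)):
    """Evaluate IA over the gamma_1 x gamma_2 grid and return all
    results plus the best-scoring cell."""
    results = {}
    for g1, g2 in product(grid, grid):
        results[(g1, g2)] = train_and_eval_ia(gamma_1=g1, gamma_2=g2)
    best = max(results, key=results.get)
    return results, best

# toy stand-in evaluator peaking at (0.1, 1.0); a real run would train
# CIDM and measure IA on the CIL dataset instead
toy = lambda gamma_1, gamma_2: 78.0 - abs(gamma_1 - 0.1) - abs(gamma_2 - 1.0)
results, best = sweep(toy)
print(best)
```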
Summary: This paper explores a novel and practically significant problem, namely how custom diffusion models can continuously learn new personalized concepts while avoiding catastrophic forgetting and concept neglect. The authors develop a concept consolidation loss and an elastic weight aggregation module to explore task-specific and task-shared knowledge, aggregating all low-rank weights of old concepts based on the contributions of the old concepts. To address the problem of concept neglect, the authors propose a context-controllable synthesis strategy that can utilize expressive regional features and noise estimation to control the generated contexts. Experimental results confirm the effectiveness of the proposed method. Strengths: 1. The continual learning of generative models is an interesting and meaningful task. How to mitigate catastrophic forgetting and concept neglect in the concept-incremental flexible customization scenario remains insufficiently explored. 2. The method proposed by the authors effectively mitigates the challenges of catastrophic forgetting and concept neglect, with detailed descriptions and comparisons in both methodology and analysis. 3. The writing is good and easy to read. Weaknesses: 1. It would be better to mention the comparison methods used in the experiments in the related work, such as LoRA-M, LoRA-C, CLoRA and L2DM, and emphasize the differences between them and the proposed method. 2. This paper attempts to address two challenges in the concept-incremental text-to-image diffusion model: catastrophic forgetting and concept neglect. However, from the text, I did not understand the relationship between continual learning and concept neglect. I would like to know if concept neglect is a challenge introduced by continual learning, and what impact continual learning has on the original concept neglect. 3. The baseline choices for continual learning such as EWC and LWF are too outdated.
Why not choose newer and more effective continual learning algorithms? Intuitively, more advanced algorithms should be able to better mitigate catastrophic forgetting. 4. Since continual learning has in the past focused mainly on classification tasks, I hope to see more discussion of the background of continual generative models. For example, in this setting, how severe are catastrophic forgetting and concept neglect specifically, and can some more objective evaluation criteria be introduced? 5. How much do different weighting schemes between LoRAs affect the quality of generation, compared to some intuitive weighting baselines? 6. Does the value of the rank in LoRA and its placement in the model (e.g., which layer) have an impact on the results? Technical Quality: 4 Clarity: 3 Questions for Authors: Please check the above concerns in the weakness points. If my concerns are resolved, I will consider increasing my score. Confidence: 4 Soundness: 4 Presentation: 3 Contribution: 3 Limitations: See the weaknesses above. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: It would be better to mention the comparison methods used in the experiments in the related work, such as LoRA-M, LoRA-C, CLoRA and L2DM, and emphasize the differences between them and the proposed method. A1: Thanks for your insightful comment. We will carefully polish the related work in the final revision. The differences between our model and the other methods are as follows: 1) LoRA-M [64] equally amalgamates all LoRA layers to retrain the diffusion model, and LoRA-C [64] explores the contributions of different LoRA layers for multi-concept composition. When learning new concepts continually, they may experience significant loss of attributes on old concepts (i.e., catastrophic forgetting) during versatile customization. Different from them, we devise a concept consolidation loss to mitigate forgetting of old concepts by exploring task-specific/task-shared knowledge. 2) CLoRA [43] proposes a self-regularized low-rank adaptation to continually learn new concepts, while L2DM [46] builds a long-term memory bank to reconstruct old concepts. They cannot control synthesized contexts according to user conditions and suffer from concept neglect in multi-concept composition. In contrast, our model proposes a context-controllable synthesis strategy to tackle concept neglect, and effectively learns new concepts continually for versatile customization. Q2: I did not understand the relationship between continual learning and concept neglect. I would like to know if concept neglect is a challenge introduced by continual learning, and what impact continual learning has on the original concept neglect. A2: As introduced in [46][64], concept neglect is a common challenge when performing multi-concept composition according to user-provided conditions, rather than being directly introduced by continual learning.
However, if latent diffusion models aim to consecutively synthesize a sequence of new concepts under the CIFC setting, the catastrophic forgetting of old concepts can significantly exacerbate the degree of concept neglect when composing multiple personalized concepts. Q3: The baseline choices for continual learning such as EWC and LWF are too outdated. Why not choose newer and more effective continual learning algorithms? Intuitively, more advanced algorithms should be able to better mitigate catastrophic forgetting. A3: To further verify the effectiveness of our model, we use a more advanced continual learning algorithm, InfLoRA Refs[1], published at CVPR 2024, for performance comparison. As shown in the following results, our model surpasses InfLoRA Refs[1] by a large margin. Note that CLoRA [43] and L2DM [46] are the two most advanced continual generation models and are highly relevant to this paper; our model also achieves better performance than these SOTA methods.

| Methods | IA (SD-1.5) |
|-|:-:|
| Finetuning | 73.7 |
| EWC [19] | 75.9 |
| LWF [25] | 74.1 |
| CLoRA [43] | 76.9 |
| L2DM [46] | 76.1 |
| InfLoRA Refs[1] | 76.2 |
| **CIDM (Ours)** | **78.0** |

Refs[1] InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning, CVPR 2024. Q4: How severe are catastrophic forgetting and concept neglect, and can some more objective evaluation criteria be introduced? A4: Following L2DM [46], we introduce a new evaluation criterion, the Task Forgetting Rate of Image Alignment (TFR-IA), to evaluate the degree of catastrophic forgetting and concept neglect. For the $l$-th concept, its TFR-IA is computed by $\frac{1}{g-1}\sum_{i=1}^{g-1} (I_{i,l} - I_{g,l})$, where $I_{i,l}$ and $I_{g,l}$ denote the image alignment (IA) of the $l$-th concept evaluated at the $i$-th and $g$-th tasks. Our model outperforms other SOTA methods in terms of the TFR-IA metric.
It verifies that our model has the least forgetting and concept neglect, making it more effective for continual generation.

| Methods | TFR-IA (%) |
|-|:-:|
| EWC [19] | 2.48 |
| LoRA-M [64] | 4.42 |
| LoRA-C [64] | 4.30 |
| CLoRA [43] | 1.56 |
| L2DM [46] | 1.37 |
| InfLoRA Refs[1] | 1.39 |
| **CIDM (Ours)** | **1.22** |

--- Q5: How much do different weighting schemes between LoRAs affect the quality of generation, compared to some intuitive weighting baselines? A5: As shown in Tab. 1 and Figs. 2-3 of the submitted manuscript, we compare our model against intuitive LoRA weighting baselines such as LoRA-M [64] and LoRA-C [64]. Specifically, LoRA-M [64] equally amalgamates all LoRA layers to retrain the diffusion model, and LoRA-C [64] explores the contributions of different LoRA layers for multi-concept composition. In Tab. 1 and Figs. 2-3, LoRA-M [64] and LoRA-C [64] experience significant loss of attributes on old concepts (i.e., catastrophic forgetting) during versatile customization, and our model significantly outperforms them in effectively resolving catastrophic forgetting. Q6: Does the value of the rank in LoRA and its placement in the model (e.g., which layer) have an impact on the results? A6: Thanks for your comment. 1) Following [10], we set the rank to 4 in this paper, and use Stable Diffusion (SD-1.5) [38] to analyze its effect on performance (IA) by setting r = {2, 4, 6, 8}. As shown in the following results, the value of the rank ($r\leq 8$) has a negligible impact on performance. Note that a larger rank increases parameter counts and memory costs, which violates the practicality of the CIFC setting. 2) For LoRA placement, each cross-attention layer in the diffusion model has its own LoRA layer.

| rank | 2 | 4 | 6 | 8 |
|:-:|:-:|:-:|:-:|:-:|
| IA (%) | 76.2 | 78.0 | 77.8 | 77.9 |

--- Rebuttal Comment 1.1: Comment: Thanks for the authors' response, and I have another question.
In the open source community of diffusion models, the generation of multi-concept images can be achieved through multi-LoRA composition and some regional control techniques. How can we further understand the application of continual learning in the generation community and the advantages over multi-LoRA composition, for example, [1][2]? [1] Multi-LoRA Composition for Image Generation [2] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models --- Rebuttal 2: Title: Thank you for your follow-up comments! Comment: Thank you for your follow-up comments! **1)** As for the application of continual learning in the generation community, our proposed model can help users continually synthesize a series of new personalized concepts for versatile customization based on their provided conditions, while addressing catastrophic forgetting of old personalized concepts. In contrast, existing state-of-the-art (SOTA) diffusion models in the generation community may experience significant loss of individual attributes in old personalized concepts (i.e., catastrophic forgetting) during versatile customization when continually learning new personalized concepts. The experimental results shown in Tabs. 1-2 and Figs. 2-5 of the submitted manuscript also illustrate the effectiveness of our model in addressing catastrophic forgetting and continually learning new concepts, compared to other state-of-the-art (SOTA) methods. **2) Advantages of Our Model over Multi-LoRA Composition:** **a)** Compared with SOTA multi-LoRA composition methods such as LoRA-M [64] and LoRA-C [64], our proposed elastic weight aggregation module utilizes learnable layer-wise concept tokens to merge all low-rank weights of old personalized concepts, based on their contributions to versatile concept customization.
This module ensures that our model handles longer task sequences for versatile concept customization more effectively than other SOTA multi-LoRA composition methods such as LoRA-M [64] and LoRA-C [64]. As shown in the following results, we use a challenging benchmark dataset (i.e., celebrity faces) to construct thirty continuous text-guided concept customization tasks. Compared with our model, current multi-LoRA composition methods (LoRA-M [64] and LoRA-C [64]) suffer from significant performance degradation, due to the catastrophic forgetting caused by the composition of longer task sequences. | Methods | | Parameters (M) | | Memory Costs (M) | | IA (SD-1.5) | |- |-|:-: |-|:-: |-|:-: | | EWC [19] | | 0.80 | | 6.87 | | 72.7$\pm$0.14 | | LoRA-M [64] | | 0.40 | | 1.72 | | 72.0$\pm$0.17 | | LoRA-C [64] | | 0.40 | | 1.72 | | 72.3$\pm$0.08 | | CLoRA [43] | | 0.40 | | 1.72 | | 74.1$\pm$0.13 | | L2DM [46] | | 0.80 | | 3.43 | | 73.4$\pm$0.11 | | **CIDM (Ours)** | | 0.41 | | 1.74 | | **75.7$\pm$0.09** | --- **b)** Mix-of-Show [10] proposes a gradient fusion strategy to train a composed LoRA weight that mimics the predictions of individual LoRAs. However, it requires retraining the entire diffusion model and CLIP when learning new personalized concepts continually, which can substantially increase computational complexity and memory cost. As shown in the following results, compared with our model, Mix-of-Show [10] requires far more training parameters and memory to learn a new concept customization task, making it impractical to handle longer task sequences under the CIFC setting. Moreover, to ensure fair comparisons with existing methods, we apply the regional control techniques proposed in Mix-of-Show [10] to all comparison methods. The qualitative comparisons in Figs. 3 and 12 of the submitted manuscript demonstrate the superior performance of our model in multi-concept customization when continually learning new personalized concepts under the CIFC setting.
| Methods | | Parameters (M) | | Memory Costs (M) | |- |-|:-: |-|:-: | | Mix-of-Show [10] | | 983.65 | | 3930.46 | | **CIDM (Ours)** | | **0.41** | | **1.74** | --- Refs: [10] Mix-of-Show: Decentralized Low-Rank Adaptation for Multi-Concept Customization of Diffusion Models. [64] Multi-LoRA Composition for Image Generation. --- Rebuttal Comment 2.1: Comment: Thank you for your clear response. I have no further questions. --- Reply to Comment 2.1.1: Title: Thank you for your generous support! Comment: Dear Reviewer YpEo, Glad to hear that your concerns have been addressed well. Thank you for your great efforts in reviewing and for the good questions. Best regards, Authors of Paper #1919
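For concreteness, the TFR-IA computation defined in A4 of the thread above can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code; `ia_per_task` is a hypothetical list holding one concept's IA measured after each task:

```python
def tfr_ia(ia_per_task, g):
    """Task Forgetting Rate of Image Alignment for one concept.

    ia_per_task[i-1] = IA of the concept measured after the i-th task
    (1-based, as in the rebuttal); g = index of the current task (g >= 2).
    Returns (1/(g-1)) * sum_{i=1}^{g-1} (I_{i,l} - I_{g,l}).
    """
    final_ia = ia_per_task[g - 1]
    return sum(ia_per_task[i] - final_ia for i in range(g - 1)) / (g - 1)
```

A positive TFR-IA means the concept's alignment has dropped since it was learned, so lower values indicate less forgetting, consistent with the table above.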
Summary: This paper introduces the Concept-Incremental text-to-image Diffusion Model (CIDM), which addresses the Concept-Incremental Flexible Customization (CIFC) problem. This approach represents one of the first explorations into learning new customization tasks incrementally, effectively navigating the dual challenges of catastrophic forgetting and concept neglect. To combat catastrophic forgetting, the paper proposes a new concept consolidation loss coupled with an elastic weight aggregation module, which together helps preserve knowledge of previously learned personalized concepts. Additionally, the development of a novel context-controllable synthesis strategy specifically addresses the challenge of concept neglect, ensuring that new concepts are integrated without overshadowing existing knowledge. The effectiveness of the CIDM is validated through comprehensive experiments across a range of text-to-image generation tasks, demonstrating its capability to learn new customization tasks consecutively with notable efficacy. Strengths: 1. The paper commendably defines the Concept-Incremental Flexible Customization problem, setting a precedent for addressing this emerging challenge within the field. CIFC is a practical issue crucial for continuously synthesizing personalized concepts based on user preferences in real-world applications. 2. The novel model introduced by the authors incorporates a range of innovative functions and algorithms, which are thoroughly explained and convincingly motivated. The concept consolidation loss and the elastic weight aggregation module are noteworthy among these, paired with a context-controllable synthesis strategy. These elements collectively aim to mitigate issues of catastrophic forgetting and concept neglect effectively. 3. The experimental section of the paper is detailed, providing a broad comparison of the model’s capabilities across tasks such as multi-concept generation, style transfer, and image editing. Weaknesses: 1. 
A critical concern arises regarding the scalability of the model, specifically the accumulated memory load after learning each task. This is particularly pertinent in continual learning scenarios where efficiency is crucial. 2. The paper lacks detailed implementation information on critical tasks such as custom image editing and style transfer. Providing these details would significantly enhance the clarity and replicability of the research. 3. The concept of balancing task-specific and task-shared knowledge is intriguing and seems to be central to effectively addressing the CIFC problem. To substantiate the claims of its effectiveness, the authors should include ablation studies that isolate and quantify the impact of these components. Additionally, visualizations comparing the results with and without these strategies would provide a clearer, more direct illustration of their value. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. I recommend the authors include details on memory consumption or the number of stored parameters after each task during the rebuttal process. 2. The authors should incorporate a comprehensive description of custom image editing and style transfer processes, including specific parameters and settings used, to allow for a better understanding and evaluation of the proposed methods' effectiveness in practical applications. Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: There are no potential negative societal impacts. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: I recommend the authors include details on memory consumption or the number of stored parameters after each task during the rebuttal process. A1: Thanks for your constructive suggestion. We conduct comparison experiments to evaluate the memory consumption and training parameters of each task, when utilizing Stable Diffusion (SD-1.5) [38] as the pretrained model to learn ten continuous text-guided concept customization tasks. As shown in the following results, the training parameters and memory requirements of our model are comparable to those of LoRA-M [64], LoRA-C [64] and CLoRA [43], but are substantially lower than those of other SOTA methods. Moreover, our model achieves better performance than existing SOTA diffusion models (see Tabs. 1-2 and Figs. 2-5). It illustrates that our model is efficient and effective at tackling the practical Concept-Incremental Flexible Customization (CIFC) problem. | Methods | | Parameters (M) | | Memory Costs (M) | |- |-|:-: |-|:-: | | Finetuning | | 0.80 | | 3.43 | | EWC [19] | | 0.80 | | 6.87 | | LWF [25] | | 0.80 | | 4.21 | | LoRA-M [64] | | 0.40 | | 1.72 | | LoRA-C [64] | | 0.40 | | 1.72 | | CLoRA [43] | | 0.40 | | 1.72 | | L2DM [46] | | 0.80 | | 3.43 | | **CIDM (Ours)** | | 0.41 | | 1.74 | --- Q2: The paper lacks detailed implementation information on critical tasks such as custom image editing and style transfer. Providing these details would significantly enhance the clarity and replicability of the research. A2: Thanks for your insightful comments. 1) As for custom image editing, we introduce Anydoor [3] as a plug-in for all comparison methods. Specifically, given a text prompt that contains the user's personalized concept, an initial image, and an editable bounding box, we first input the given text prompt into the latent diffusion model to perform custom synthesis. Then we embed the generated image into the editable bounding box of the initial image using Anydoor [3].
2) We introduce T2I-adapter [30] to achieve custom style transfer. Specifically, given an initial image, we utilize BLIP [23] to extract the corresponding caption (e.g., "photo of a city skyline"). Then, we add the custom concept (e.g., [V6] style) to the caption to obtain the style transfer prompt (e.g., "photo of a city skyline in [V6] style"). Finally, we perform Canny edge detection on the initial image to extract a Canny image, and incorporate it with the style transfer prompt to achieve custom synthesis via the T2I-adapter [30]. Refs: [3] AnyDoor: Zero-shot Object-level Image Customization [23] BLIP: Bootstrapping Language-Image Pre-training [30] T2I-Adapter: Learning Adapters to Dig out More Controllable Ability for Text-to-Image Diffusion Models Q3: The concept of balancing task-specific and task-shared knowledge is intriguing and seems to be central to effectively addressing the CIFC problem. To substantiate the claims of its effectiveness, the authors should include ablation studies that isolate and quantify the impact of these components. Additionally, visualizations comparing the results with and without these strategies would provide a clearer, more direct illustration of their value. A3: Many thanks for analyzing our model. 1) Tab. 3 in the submitted manuscript shows ablation studies of single-concept customization to analyze the effectiveness of each module in our model, where TSP and TSH denote task-specific knowledge and task-shared knowledge in the concept consolidation loss, respectively. When we remove the TSP or TSH module, our model's performance drops by 0.5% and 0.6%, respectively, in terms of the image alignment (IA) metric. This verifies the effectiveness of our model in resolving the CIFC problem by exploring both task-specific and task-shared knowledge. 2) Fig. 6 in the submitted manuscript visualizes the ablation analysis of task-specific knowledge (TSP) and task-shared knowledge (TSH) from the perspective of image generation quality.
These visualization results illustrate that our model can capture task-specific information within each customization task and explore task-shared knowledge across different tasks to tackle the CIFC problem by optimizing Eq. (2). --- Rebuttal Comment 1.1: Comment: Thank you for your responses! My concerns are addressed. Hence, I increase my score. --- Rebuttal 2: Title: Thank you for your kind response and support. Comment: Dear Reviewer 5EBF, We are pleased that we have addressed most of your concerns. Thanks for your insightful comments and approval of our work. Best regards, Authors of Paper #1919
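The custom style-transfer recipe in A2 above (BLIP caption → append the style token → Canny condition → T2I-adapter synthesis) can be sketched as follows. This is a hedged illustration, not the authors' code: the three pipeline components are stubbed as hypothetical callables (`blip_caption`, `canny`, `t2i_adapter`), and only the prompt construction follows the rebuttal's concrete example:

```python
def build_style_prompt(caption, style_token="[V6]"):
    """Append the learned style concept to a BLIP-extracted caption,
    e.g. 'photo of a city skyline' -> 'photo of a city skyline in [V6] style'."""
    return f"{caption} in {style_token} style"

def custom_style_transfer(image, blip_caption, canny, t2i_adapter):
    """Sketch of the A2 pipeline; the callables stand in for BLIP [23],
    Canny edge detection, and the T2I-adapter [30] synthesis step."""
    prompt = build_style_prompt(blip_caption(image))    # caption + style token
    edges = canny(image)                                # structure-preserving condition
    return t2i_adapter(prompt=prompt, condition=edges)  # conditioned synthesis
```

The Canny image constrains the layout of the output while the style token in the prompt carries the learned concept.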
Summary: This paper tackles the problem of continually adapting text-to-image diffusion customization models. The proposed method employs a novel concept consolidation loss, elastic weight aggregation module, and context-controllable synthesis strategy. Extensive experiments demonstrate that the proposed method performs strongly compared to SOTA baseline methods. Strengths: 1. The approach is innovative and presented in a principled manner. 2. The problem setting is a strong area of impact for research with real-world applications, which is important for continual learning. 3. The experiments and analysis are very extensive, and the paper presentation is of high quality. Weaknesses: 1. Please include a table that compares the training, parameter, and memory costs of your method against other methods. This comparison is crucial for a comprehensive evaluation of your method. 2. Although the results are clear and detailed, they lack confidence intervals or similar statistical measures in the quantitative metrics. Running the experiments with multiple random seeds would ensure robustness and provide a more reliable assessment. 3. The dataset used is quite limited. To fully justify your setting, you should explore more diverse datasets and longer task sequences. For instance, with only 10 simple concepts, it would be feasible to store a LoRA version of Custom Diffusion or even the Custom Diffusion deltas in memory and merge them on the fly for the target task, achieving zero forgetting in a practical manner. To truly demonstrate the impact of your method and setting, beyond what can be trivially solved by adapters or state-of-the-art LoRA-based merging methods, you should consider long task sequences or challenging benchmarks such as celebrity faces. Technical Quality: 3 Clarity: 3 Questions for Authors: Please see my weaknesses section. I think 1 and 2 are crucial for acceptance (would raise my score to WA if done thoroughly). 
Part 3 is important, too, but more difficult to address in a short period of time. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Q1: Please include a table that compares the training, parameter, and memory costs of your method against other methods. This comparison is crucial for a comprehensive evaluation of your method. A1: Thanks for your insightful comment. As shown in the following results, we use Stable Diffusion (SD-1.5) [38] to conduct comprehensive evaluations in terms of training time, parameters, and memory costs of each task, when the number of continuous concept customization tasks is ten. All comparisons of training time are conducted on two NVIDIA RTX A6000 GPUs. Our model has comparable parameters, memory costs, and training time to LoRA-M [64], LoRA-C [64] and CLoRA [43]. Additionally, the training parameters and memory requirements of our model are substantially lower than those of other SOTA methods. More importantly, our model significantly outperforms existing SOTA comparison models in both the qualitative and quantitative evaluations on versatile generation tasks (see Tabs. 1-2 and Figs. 2-5 in the submitted manuscript). We will consider introducing these comparisons of parameters and memory costs in the final revision. | Methods | | Training Time (h) | | Parameters (M) | | Memory Costs (M) | IA (SD-1.5) | |- |-|:-: |-|:-: |-|:-: |:-: | | Finetuning | | 0.65 | | 0.80 | | 3.43 | 73.5$\pm$0.15 | | EWC [19] | | 0.83 | | 0.80 | | 6.87 | 75.8$\pm$0.22 | | LWF [25] | | 1.18 | | 0.80 | | 4.21 | 74.3$\pm$0.09 | | LoRA-M [64] | | 0.65 | | 0.40 | | 1.72 | 74.6$\pm$0.23 | | LoRA-C [64] | | 0.65 | | 0.40 | | 1.72 | 74.7$\pm$0.06 | | CLoRA [43] | | 0.90 | | 0.40 | | 1.72 | 76.8$\pm$0.08 | | L2DM [46] | | 1.35 | | 0.80 | | 3.43 | 76.3$\pm$0.14 | | **CIDM (Ours)** | | 0.72 | | 0.41 | | 1.74 | **77.9$\pm$0.08** | --- Q2: Although the results are clear and detailed, they lack confidence intervals or similar statistical measures in the quantitative metrics.
Running the experiments with multiple random seeds would ensure robustness and provide a more reliable assessment. A2: Thanks for your constructive comment. We run the experiments with five random seeds (0, 2021, 2022, 2023, 2024) and report the average image alignment (IA) over these seeds for evaluation. As shown in the following results, our model achieves better performance than other SOTA methods, which verifies the robustness and reliable effectiveness of our model in tackling the CIFC problem. Moreover, we will consider presenting averaged comparison results over multiple random seeds in the final revision. | Methods | | SD-1.5 [38] | | SDXL [33] | |- |-|:-: |-|:-: | | Finetuning | | 73.5$\pm$0.15 | | 71.4$\pm$0.13 | | EWC [19] | | 75.8$\pm$0.22 | | 77.5$\pm$0.07 | | LWF [25] | | 74.3$\pm$0.09 | | 76.6$\pm$0.10 | | LoRA-M [64] | | 74.6$\pm$0.23 | | 74.2$\pm$0.09 | | LoRA-C [64] | | 74.7$\pm$0.06 | | 74.6$\pm$0.14 | | CLoRA [43] | | 76.8$\pm$0.08 | | 77.7$\pm$0.06 | | L2DM [46] | | 76.3$\pm$0.14 | | 77.2$\pm$0.11 | | **CIDM (Ours)** | | **77.9$\pm$0.08** | | **79.6$\pm$0.07** | --- Q3: The dataset used is quite limited. To fully justify your setting, you should explore more diverse datasets and longer task sequences. To truly demonstrate the impact of your method and setting, you should consider long task sequences or challenging benchmarks such as celebrity faces. A3: Thanks for your invaluable comment. To further validate the effectiveness of our model in learning longer task sequences, we use a new challenging benchmark dataset (celebrity faces) to construct thirty continuous text-guided concept customization tasks. Compared to existing SOTA models such as LoRA-M [64], LoRA-C [64], CLoRA [43], and L2DM [46], this new setting on the celebrity faces dataset includes longer task sequences (i.e., thirty text-guided customization tasks) and encompasses different object concepts.
Therefore, it is more challenging and better suited to verifying effectiveness in addressing the CIFC problem. We use Stable Diffusion (SD-1.5) [38] to evaluate parameters, memory costs and the image alignment (IA) metric over five random seeds. From the following results, we observe that our model significantly outperforms other SOTA methods when learning longer task sequences. More importantly, the training parameters and memory requirements of our model are comparable to those of LoRA-M [64], LoRA-C [64] and CLoRA [43], but are substantially lower than those of other SOTA methods. This verifies the effectiveness of our model in learning longer task sequences on the new challenging dataset. | Methods | | Parameters (M) | | Memory Costs (M) | | IA (%) | |- |-|:-: |-|:-: |-|:-: | | EWC [19] | | 0.80 | | 6.87 | | 72.7$\pm$0.14 | | LoRA-M [64] | | 0.40 | | 1.72 | | 72.0$\pm$0.17 | | LoRA-C [64] | | 0.40 | | 1.72 | | 72.3$\pm$0.08 | | CLoRA [43] | | 0.40 | | 1.72 | | 74.1$\pm$0.13 | | L2DM [46] | | 0.80 | | 3.43 | | 73.4$\pm$0.11 | | **CIDM (Ours)** | | 0.41 | | 1.74 | | **75.7$\pm$0.09** | --- Rebuttal Comment 1.1: Comment: I appreciate the authors' hard work and have raised my score. I hope this paper will be accepted. --- Reply to Comment 1.1.1: Title: Thanks for your constructive comments. Comment: Dear Reviewer Th4Q: Thank you for supporting our work. We sincerely appreciate your insightful comments that help improve our paper. We will take them into account when making the final revisions. Best regards, Authors of Paper #1919
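The "mean ± std" entries in the tables above can be produced with a small helper; this is a sketch using the standard library, and since the rebuttal does not state whether sample or population standard deviation is reported, the choice of `pstdev` below is an assumption:

```python
import statistics

def mean_std(scores, digits_mean=1, digits_std=2):
    """Format per-seed scores as 'mean ± std', matching the rebuttal tables.
    Uses the population standard deviation (statistics.pstdev) by assumption;
    statistics.stdev (sample std) is another common choice."""
    mean = statistics.mean(scores)
    std = statistics.pstdev(scores)
    return f"{mean:.{digits_mean}f} ± {std:.{digits_std}f}"
```

Running this over the five per-seed IA values of each method yields one table cell per method.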
Rebuttal 1: Rebuttal: Dear reviewers and area chairs: We extend our gratitude to all the reviewers and area chairs for dedicating their time and effort to evaluating our paper. We also thank the reviewers for their positive and insightful comments, which can help us improve our work. We are encouraged that: $\bullet$ Reviewer Th4Q and Reviewer 5EBF agree that our work is **novel** in resolving **a practical problem** named concept-incremental flexible customization (CIFC) and includes **comprehensive evaluation experiments** to validate the effectiveness of the proposed model. $\bullet$ Reviewer YpEo thinks that our paper focuses on tackling **an interesting and meaningful task** with **extensive comparisons** in both methodology and analysis. $\bullet$ Reviewer SpKd believes that our model addresses **a practical problem**, has **in-depth ablation studies**, and demonstrates **broad applicability** across different backbone architectures. All reviewers recognize that our model achieves **state-of-the-art performance under comprehensive experiments**. We have responded to each reviewer individually to address any comments. We would like to give a brief summary: **To Reviewer Th4Q:** 1) We introduce comparison experiments on training times, parameters, and memory costs between our model and other models. 2) We conduct five random evaluation experiments and report their average results over five random seeds (0, 2021, 2022, 2023, 2024) to compare the performances. 3) We introduce longer task sequences on a new challenging dataset (celebrity faces) to further validate the effectiveness of our model. **To Reviewer 5EBF:** 1) We provide comparisons on memory consumption and training parameters. 2) We introduce implementation details of custom image editing and style transfer. 
**To Reviewer YpEo:** 1) We clarify the significant differences between our model and other methods such as LoRA-M, LoRA-C, CLoRA, and L2DM, and explain the relationship between continual learning and concept neglect. 2) We use a more advanced continual learning algorithm, InfLoRA [1], published in CVPR 2024, for performance comparison. 3) We introduce a new evaluation criterion, the Task Forgetting Rate of Image Alignment (TFR-IA), to assess the degree of catastrophic forgetting and concept neglect. 4) We compare our model with several intuitive LoRA weighting baselines, such as LoRA-M [64] and LoRA-C [64], and discuss the impact of the LoRA rank. Refs[1] InfLoRA: Interference-Free Low-Rank Adaptation for Continual Learning, CVPR 2024 **To Reviewer SpKd:** 1) We will carefully polish the symbol definitions and introduce a table of notation definitions in the appendix. 2) We investigate the effect of the hyperparameters {$\gamma_1, \gamma_2, r$} on the performance of our model. 3) We clarify that our model can perform well on the challenging dataset used in this paper, which contains various types of personalized concepts collected from different application domains. We thank all reviewers and area chairs again! Best, Authors of Paper #1919
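To make the contrast with the intuitive LoRA weighting baselines discussed in the rebuttals above concrete, here is a toy sketch of equal merging (LoRA-M-style) versus contribution-aware weighting in the spirit of the elastic weight aggregation module. This is an illustration under stated assumptions: the real module operates layer-wise with learnable concept tokens, and the softmax normalization here is an assumption, not the paper's formulation:

```python
import numpy as np

def merge_equal(deltas):
    """LoRA-M-style merge: average all concepts' low-rank weight deltas equally."""
    return sum(deltas) / len(deltas)

def merge_weighted(deltas, logits):
    """Contribution-aware merge: softmax-normalized learnable weights,
    a toy stand-in for layer-wise concept tokens."""
    w = np.exp(logits - np.max(logits))  # numerically stable softmax
    w = w / w.sum()
    return sum(wi * d for wi, d in zip(w, deltas))
```

Equal merging treats every old concept identically, which is where attribute loss can creep in; learned weights let the merge favor the deltas that matter for the current composition.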
NeurIPS_2024_submissions_huggingface
2,024
null
null
null
null
null
null
null
null
Pseudo-Siamese Blind-spot Transformers for Self-Supervised Real-World Denoising
Accept (poster)
Summary: This paper presents a novel transformer architecture for real-world image denoising under a self-supervised framework. Following blind-spot networks, they introduce a directional self-attention (DSA) module and a Siamese architecture to prevent performance degradation from the masked region. Strengths: Superior denoising performance over existing methods Weaknesses: - Limited novelty, where this method presents a transformer-based approach of [1] and employs Siamese training, which resembles knowledge distillation techniques widely used in recent self-supervised denoising with blind-spot networks. - Computational complexity in training. This method shares the computational weaknesses of [1]. Both [1] and the proposed method require four branches to achieve a blind spot in the center, leading to inefficient training that demands large memory and training time. Table 2 only presents the computational complexity of SelfFormer-F without SelfFormer-D, which is the main factor contributing to inefficiency (I guess SelfFormer-D approximately requires four times the memory and processing time compared to SelfFormer-F). Providing a detailed comparison of computational complexity in terms of training time and memory usage would significantly improve the understanding of the proposed method. References [1] Samuli Laine, Tero Karras, Jaakko Lehtinen, and Timo Aila. High-quality self-supervised deep image denoising. Advances in Neural Information Processing Systems, 32, 2019. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Does the mutual loss in Eq. 6 affect both $M_D$ and $M_F$, or is the output of $M_D$ detached? 2. What is the principle of weight sharing of DSA and grid SA? Does it improve the performance? 3. Did this method adopt pixel-shuffle downsampling for real-world denoising, which is a widely used block in recent works? Minor comments and typos - In Eq. 5, missing ( - In line 170, two commas - In line 262 SSA --> DSA??
- Full text of BID is missing. Blind image denoising? - Line 279, missing space before (a) Confidence: 4 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: The authors adequately addressed the limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your valuable comments. Please find our response below. **[W1]** *Regarding novelty over [27] and knowledge distillation (KD) used in recent self-supervised denoisers with BSNs.* **Novelty and contribution clarification** - As acknowledged by **Reviewer JdHm**, our approach "builds upon the approach used in Laine et al.'s work [27] by replacing CNN layers with attention layers to better capture long-range dependencies, demonstrating an innovative application of transformer architecture in the context of image denoising." It is important to note that the blind spot property in transformers cannot be trivially achieved by directly migrating it from CNNs due to the fundamentally different structures of attention mechanisms (transformer) versus convolutions (CNN). - As highlighted by **Reviewer UEMY**, "while the directional blind-spot scheme and grid-based attention are not novel concepts, the authors have proposed an elegant framework that revitalizes these techniques effectively". While these two concepts are not new individually, their integration to achieve the blind spot property in transformers is novel. This integration results in a cohesive system that enhances the utility and performance of these techniques in image denoising. **Comparison to existing techniques related to KD** We have reviewed two published works that utilize KD for self-supervised denoising: extended version of Neighbor2Neighbor [A1] and SASL [43]. These methods distill multiple CNNs (without weight sharing) into a single model to improve performance. In contrast, our method specifically optimizes the pseudo-Siamese framework for transformer models by employing weight sharing and mutual learning. This optimization effectively addresses the bias caused by the proposed directional self-attention mechanism. This distinction shows our method's unique approach to leveraging Siamese training to exploit the potential of transformer-based denoisers. 
In summary, while our method builds on existing concepts and techniques, it introduces significant innovations in integrating these techniques and concepts in ways not previously explored in the literature. This integration addresses the technical challenges of enabling the blind-spot property in transformer models, leading to noticeable performance gains over existing self-supervised denoisers. *[A1] Neighbor2Neighbor: A Self-Supervised Framework for Deep Image Denoising. TIP 2022.* --- **[W2]** *Regarding computational complexity in training and Table 2 of main paper.* Table 2 of main paper reported the computational complexity of SelfFormer-F w/o SelfFormer-D, because SelfFormer-F is the only model utilized during inference. One of our key motivations for introducing SelfFormer-F was to enhance inference efficiency and reduce testing time. By employing SelfFormer-F exclusively during inference, we achieve significant improvements in computational efficiency, making our method more practical for real-world applications. Regarding the computational cost of training, Table S1 below provides a comparison of training time and related memory usage, as well as inference time and related memory usage. As the reviewer mentioned, the four-branch design requires more memory, but its training time is similar to that of SS-BSN. Moreover, the inference time and related memory usage of our method are less than that of the other two related transformer-based methods. Specifically, the inference time of our method is approximately 34% and 45% of LG-BPN and SS-BSN, respectively. Considering the performance gain and efficient inference of our method over existing transformer-based models, even though there is some trade-off in training time and related memory usage, our method holds practical value. We will include the above analysis in the revision. 
**Table S1: Time \& memory usage in training \& inference.** | Complexity Metric | AP-BSN | MM-BSN | PUCA | LG-BPN | SS-BSN | Ours | | :-----------------: | :----: | :----: | :----: | :----: | :----: | :----------: | | Training Time (h) | 5.086 | 11.291 | 10.227 | 6.313 | 49.483 | 56.266 | | Training Mem. (G) | 0.574 | 1.349 | 2.549 | 1.884 | 0.873 | 9.161 | | Inference Time (ms) | 382 | 539 | 529 | 5208 | 3976 | 1812 | | Inference Mem. (G) | 0.664 | 1.310 | 1.200 | 12.072 | 3.492 | 2.806 | --- **[Q1]** *Regarding the mutual loss.* The mutual loss affects both $\mathcal{M}_D$ and $\mathcal{M}_F$, as the output of $\mathcal{M}_D$ is not detached. This means that both models are influenced by the mutual loss during training, allowing them to learn collaboratively to improve their performance. --- **[Q2]** *Regarding weight sharing of DSA and grid SA.* All four branches in SelfFormer-D and the single branch in SelfFormer-F have the same structure, allowing for weight sharing between DSA and grid SA. To illustrate the impact, Table S2 compares the results w/ and w/o weight sharing, which will be added in the revision. We can see that weight sharing not only reduces the number of training parameters but also contributes to a regularization effect, potentially improving the model's generalization. **Table S2: Ablation study of weight sharing on SIDD-Validation.** | Weight Sharing | PSNR(dB) | SSIM | | :------------: | :------: | :---: | | w/o | 37.45 | 0.880 | | w/ | 37.63 | 0.882 | --- **[Q3]** *Whether to adopt pixel-shuffle downsampling?* Our approach does not adopt pixel-shuffle downsampling. Instead, to address noise correlation, our implementation masks the 4$\times$4 half-plane neighboring locations around the center pixel in the attention window of DSA in SelfFormer-D during training.
By masking out these neighboring pixels, the pixels utilized in the attention windows maintain a distance from the central pixel, and the noise correlation decreases with the distance between two pixels. We will clarify it in revision. --- Rebuttal Comment 1.1: Comment: I’ve reviewed the author’s response and the additional rebuttal PDF. I appreciate the detailed and clear explanations, which addressed most of my concerns. I have increased my score by 1 point.
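The masking strategy described in the [Q3] answer above can be illustrated with a toy mask builder for one directional branch. This is a sketch under stated assumptions, not the paper's implementation: the "up" branch is assumed to attend only to the strict upper half-plane of the window (the blind-spot property), and positions within 4 pixels of the center are additionally excluded to break spatially correlated noise:

```python
import numpy as np

def directional_halfplane_mask(h, w, k=4):
    """Boolean attention mask for a hypothetical 'up' branch over an
    h x w window centered at (h//2, w//2). True = may be attended.
    The lower half-plane (including the center row) is blinded, and
    positions within k pixels of the center are also masked, a toy
    rendering of the 4x4 half-plane masking described in the rebuttal."""
    cr, cc = h // 2, w // 2
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    upper = rows < cr                                         # strict upper half-plane
    near = (np.abs(rows - cr) < k) & (np.abs(cols - cc) < k)  # too close to the center
    return upper & ~near
```

Because attended pixels stay at least k pixels from the center and noise correlation decays with distance, the branch sees almost uncorrelated noise while still never seeing the center pixel itself.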
Summary: This paper proposes a transformer-based model for self-supervised denoising. The model employs two pseudo-Siamese sub-networks during training, but only one is used for inference. One sub-network uses a grid-based DSA for blind-spot learning, while the other utilizes full grid-based attention to mitigate the bias introduced by the blind-spot network. The grid-based scheme is designed to eliminate noise dependence among related pixels. Experimental results demonstrate superior performance on the SIDD and DnD datasets. Strengths: +The challenges of this task lie in three aspects: spatially correlated noise, the blind-spot strategy, and the bias introduced by using a blind-spot network (BSN). While the directional blind-spot scheme and grid-based attention are not novel concepts, the authors have proposed an elegant framework that revitalizes these techniques effectively. +One significant advantage over previous methods is the reduction of bias introduced by the BSN. The pseudo-Siamese scheme is particularly effective, as it addresses the bias problem while simultaneously enhancing inference efficiency. Weaknesses: -In Table 3, 'w/o DSA' and 'only SelfFormer-F' both lack a blind-spot scheme, but why is there a large gap between them? -Why does 'w/o CA' perform so much worse? The CA is not the main component to address the three key difficulties mentioned above. Although previous methods do not design a module to integrate channel information, they can perform better than 'w/o CA'. -Typo: line 281 'selfFormer-S' -There is still significant room for improvement. Currently, supervised learning with real pairs can easily achieve PSNR above 38 on the SIDD dataset. Technical Quality: 3 Clarity: 3 Questions for Authors: My main concern lies in the ablation study, where some results appear to be unreasonable. I appreciate that this work advances self-supervised denoising, and I hope the authors can address my confusion.
I will adjust my score based on their response. Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 4 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation and valuable feedback. Please find our response and answers below. **[W1]** *In Table 3, 'w/o DSA' and 'only SelfFormer-F' both lack a blind-spot scheme, but why is there a large gap between them?* Thank you for pointing this out. We sincerely apologize for the confusion caused by 'w/o DSA' in Section 4.3, whose description was inadvertently omitted. The intended description is that 'w/o DSA' replaces all DSA (Grid SA) modules with BSCA (CA) modules. Because the blind-spot property is retained, its performance is significantly better than 'only SelfFormer-F', which lacks this property. The confusion arises from the mention of '(a) DSA -> Grid SA' in Section 4.3. Its results are not reported in Table 3 because its structure is similar to that of 'only SelfFormer-F', as is its performance: 'DSA -> Grid SA' achieves 23.96 dB/0.336 in PSNR/SSIM. During the preparation of the paper, we decided to exclude this result from Table 3 but inadvertently left it in the description. We sincerely apologize for any misunderstanding caused by this careless error. We will correct it in the revision. --- **[W2]** *Why does 'w/o CA' perform so much worse? The CA is not the main component addressing the three key difficulties mentioned above. Although the previous methods do not design a module to integrate channel information, they can perform better than 'w/o CA'.* Thanks for the comment. We found that the unsatisfactory performance of 'w/o CA' results from its shallower structure. The 'w/o CA' variant was constructed by removing all BSCA (CA) modules and broadening the 1$\times$1 convolutions in the network, which made the network much shallower and reduced its expressive capacity. To better analyze the effectiveness of CA, we constructed a new baseline by replacing all BSCA (CA) modules with DSA (Grid SA) modules. The PSNR/SSIM result of this baseline versus the original model is 37.54dB/0.881 vs. 
37.63dB/0.882 on the SIDD-Validation set. This result shows that using DSA (Grid SA) without BSCA (CA) has already achieved good performance, while BSCA (CA) results in moderate performance gain due to their complementary characteristics to DSA (Grid SA). We will update the baseline and its results in the revision. --- **[W3]** *There is still significant room for improvement. Currently, supervised learning with real pairs can easily achieve PSNR above 38dB on the SIDD dataset.* We acknowledge that the performance of self-supervised learning currently still lags behind supervised learning, primarily due to the absence of GT images for training. However, self-supervised learning is valuable in scenarios where collecting GT images is challenging, and it can also be used together with supervised learning to jointly and fully utilize paired data and GT-less data in real applications. Our work has reduced the performance gap between self-supervised and supervised methods. We believe that trading off some performance for the ability to train without GT images is practical and necessary. Our future work will focus on further closing this gap. --- Rebuttal Comment 1.1: Comment: The authors have addressed my concerns, and I am willing to raise the score to 7.
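For context on the PSNR figures traded in this thread (37.54dB, 38dB, etc.): PSNR is derived directly from the mean squared error. A minimal, framework-free sketch; the function name and flat-list image representation are illustrative assumptions, not taken from the paper's code:

```python
import math

def psnr(clean, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB between two equally sized images,
    given here as flat lists of pixel intensities in [0, max_val]."""
    mse = sum((a - b) ** 2 for a, b in zip(clean, denoised)) / len(clean)
    if mse == 0:
        return float("inf")
    return 10.0 * math.log10(max_val ** 2 / mse)

# A uniform error of 0.1 on every pixel gives MSE = 0.01, i.e. about 20 dB.
example = psnr([0.5] * 100, [0.6] * 100)
```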
Summary: This paper explores self-supervised image denoising using only single-shot noisy images. The authors enhance the blind-spot technique by harnessing the transformer's ability to manage long-range pixel interactions, which is essential for eliminating noise dependencies between pixels. Experimental results verify its effectiveness to some extent. Strengths: The method introduces a directional self-attention (DSA) module that employs a half-plane grid for self-attention, forming an effective blind-spot structure, and a Siamese architecture with mutual learning to counteract the limitations of the restricted attention grid in DSA. Experimental results on benchmark datasets show that the proposed approach is effective. Weaknesses: 1. The novelty of this paper lies in the use of directional self-attention to construct the transformer model, enabling a blind-spot mechanism and fully utilizing pixel information. This approach has partially appeared in both PUCA (mentioned in the paper) and LGBPN, which reduces the novelty of this paper. [1] PUCA: Patch-Unshuffle and Channel Attention for Enhanced Self-Supervised Image Denoising [2] Local and Global Blind-Patch Network for Self-Supervised Real-World Denoising 2. The pseudo-Siamese architecture requires further explanation. Why does using a full-type network improve overall utilization efficiency? How does this approach differ from the method used in "Spatially Adaptive Self-Supervised Learning for Real-World Image Denoising"? 3. Predictably, the introduction of DSA results in multiple branches, which significantly increases the computational load. 4. Experiments on more datasets are needed. For example, the results on the SIDD validation set are missing. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. Could the authors elaborate further on their novelty and explain the differences from existing works? 
2. Could the authors compare the computational complexity and runtime differences between the proposed method and other comparison methods? 3. Could the authors provide denoising results in real noisy environments (high ISO, high resolution)? 4. Blind-spot networks often suffer from issues such as detail blurring and structural distortion, which also seem to be present in this paper. Could the authors provide a more in-depth discussion of these issues? Confidence: 5 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: The limitations have been discussed in this paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thanks for your appreciation and valuable feedback. **[W1]** *Regarding novelty \& difference from PUCA and LGBPN.* Our work focuses on studying the blind-spot mechanism in a transformer model, with an emphasis on spatial self-attention. Both PUCA and LGBPN also study effective blind-spot mechanisms, but their focuses are quite different. 1. PUCA is not a transformer-based method; it does not involve spatial self-attention or channel self-attention. Instead, it uses first-order channel attention and pixel-shuffle downsampling to achieve the blind-spot property. 2. LGBPN employs channel self-attention but not spatial self-attention in its transformer model. It achieves the blind-spot property via masked convolutions. In contrast, our method achieves the blind-spot property via directional self-attention and utilizes a Siamese architecture to address the limitations caused by the use of directional self-attention. These differences highlight the unique aspects and contributions of our approach. --- **[W2]** *Regarding the pseudo-Siamese architecture. Why does using a full-type network improve overall utilization efficiency? How does it differ from the method used in SASL [43]?* Due to the use of directional self-attention in our SelfFormer-D for gaining the blind-spot property, each branch can only utilize information from the pixels restricted within a half-plane. The results from different directions are then aggregated for the final output, which is not efficient in utilizing the information of all image pixels of the entire plane. The SelfFormer-F in our pseudo-Siamese framework employs full-direction self-attention to utilize information from all directions, addressing this limitation and thus improving overall utilization efficiency. We will elaborate on this further in the revision. The main difference between our method and SASL lies in the interaction of different models. 
SASL sequentially trains three CNNs without weight sharing, progressively refining the predictions on flat and textured regions. In contrast, our pseudo-Siamese architecture utilizes simultaneous mutual learning between two transformer models with weight sharing but without identical structures, aiming to address the limitations caused by the use of directional self-attention. --- **[W3]** *Regarding the computational load caused by DSA.* While DSA increases the computational load in training, it does not affect the inference time, since only the single-branch SelfFormer-F with full-grid self-attention is used for testing. Additionally, we leverage the gridding strategy to reduce the computational load from DSA during training. Considering the performance gain of our method and the unaffected inference time, our method still holds practical value even with some computational overhead in training. --- **[W4]** *Experiments on more datasets. E.g., the results on SIDD-Validation.* Following existing works such as PUCA \[34\], we perform evaluations on SIDD-Benchmark and DND. These are two widely recognized datasets for benchmarking real-world image denoising, with readily available results and trained models for a large number of denoising methods. Following your suggestion, we conducted experiments on SIDD-Validation, with results listed in Table S1 below. See also our response to **Reviewer JdHm** for the results on the NIND dataset. All these results, which will be included in the revision, demonstrate that our method maintains excellent performance across these datasets as well. 
**Table S1: Quantitative results of self-supervised methods on SIDD-Validation.**

| Metric | N2V | R2R | CVF-SID | AP-BSN | LG-BPN | SASL | PUCA | C-BSN | Ours |
| :------: | :---: | :---: | :-----: | :----: | :----: | :---: | :---: | :---: | :---: |
| PSNR(dB) | 27.06 | 35.04 | 34.15 | 36.73 | 37.31 | 37.39 | 37.49 | 37.51 | **37.63** |
| SSIM | 0.651 | 0.844 | 0.871 | 0.878 | 0.884 | 0.875 | 0.880 | **0.885** | 0.882 |

--- **[Q1]** *Regarding novelty.* Please see our response to [W1]. --- **[Q2]** *Computational complexity \& runtime.* Please refer to Table 2 of the main text for the results. Our SelfFormer has a smaller size than the second-best performer, PUCA, and is much faster than the other transformer-based self-supervised methods, LG-BPN and SS-BSN. --- **[Q3]** *Results in real noisy environments (high ISO, high resolution)?* Both SIDD and DND are composed of images captured in noisy environments with high ISO and high resolution, as stated in their dataset descriptions. However, they only provide cropped patches of sizes 256$\times$256 or 512$\times$512. Thus, we only show results on these patches in our paper. In Fig. 1 of our attached PDF, we merge the patches into a whole image for visualization. Our method produces better visual quality than the compared ones. --- **[Q4]** *Blind-spot networks (BSNs) often suffer from issues such as detail blurring and structural distortion, which also seem to be present in this paper. A more in-depth discussion on these issues?* During training, BSNs utilize neighboring pixels to estimate the central one for self-supervision, assuming strong correlations among neighboring pixels, which is not accurate at edges. As the central pixel is not used, its estimation introduces correlation with neighboring pixels, leading to detail blurring and structural distortion. 
One approach to mitigate such issues is to exploit additional image pixels distanced from the central pixel, whose true intensities are strongly correlated with the central pixel, but whose noise is weakly correlated. This is a key motivation for studying transformer-based models with blind-spot property. Our approach utilizes transformer blocks and self-attention to exploit long-range dependencies within an image and employs a pseudo-Siamese framework to improve the utilization efficiency of long-range pixels. While we cannot completely resolve the issues, our method provides better results with fewer artifacts. --- Rebuttal Comment 1.1: Title: Comments to the Rebuttal Comment: Thanks for the authors' effort in addressing the concerns. I think the concerns are well-addressed.
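The directional aggregation discussed in this thread (each branch attends only to a half-plane; the branches are combined so that, jointly, every offset except the blind spot is covered) can be illustrated with a toy offset partition. The four-way split below is one conventional choice for a directional scheme and is an assumption for illustration, not the paper's code:

```python
# Hypothetical sketch: four directional offset sets over a (2r+1)^2 window
# whose union covers every offset except the center, mirroring the idea
# that aggregating half-plane branches recovers full-plane coverage while
# each individual branch keeps the blind-spot property.

def directional_offsets(radius):
    """Partition window offsets into four directional branches."""
    up, down, left, right = set(), set(), set(), set()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            if dy < 0:
                up.add((dy, dx))
            elif dy > 0:
                down.add((dy, dx))
            elif dx < 0:
                left.add((dy, dx))
            elif dx > 0:
                right.add((dy, dx))
            # (0, 0) -- the blind spot -- belongs to no branch
    return up, down, left, right
```

Since the four sets are disjoint and their union misses only the center, combining the branch outputs uses every pixel in the window exactly once while the center stays masked.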
Summary: This paper presents a novel approach for real-world image denoising using self-supervised learning. The method leverages the transformer's ability for long-range pixel interactions, combined with a sophisticated blind-spot structure through grid self-attention and directional self-attention (DSA) modules. Additionally, it employs a Siamese architecture with mutual learning to enhance performance. Strengths: The paper is well-written and very easy to understand. The paper builds upon the approach used in Laine et al.'s work [R1] by replacing CNN layers with attention layers to better capture long-range dependencies, demonstrating an innovative application of transformer architecture in the context of image denoising. The experiments convincingly show that the proposed method outperforms existing self-supervised and clean-image-free denoising techniques, providing strong empirical evidence for the efficacy of the approach. The approach demonstrates better improvement in terms of time complexity compared to other transformer-based methods, highlighting its practical advantages in computational efficiency. [R1] Laine, Samuli, et al. "High-quality self-supervised deep image denoising." Advances in Neural Information Processing Systems 32 (2019). Weaknesses: The evaluation is limited to SIDD and DND datasets only. More diverse datasets could further validate the method’s robustness. While some ablation studies are provided, further analysis could help understand the contribution of each component in more detail such as the choice of hyperparameters, window size and loss functions. 
Although the paper claims improvements in time complexity over other transformer-based methods, the computational requirements for training and inference are still likely higher compared to traditional CNN-based approaches, which could limit practical applicability in resource-constrained environments. Technical Quality: 3 Clarity: 4 Questions for Authors: One major assumption of blind-spot networks is that noise across pixels is independent. However, several works, such as [R2], have shown that spatial noise in real-world images is often correlated. This correlation needs to be addressed before applying a BSN. How did your BSN approach handle this noise correlation, as this was not detailed in the paper? [R2] Lee, Wooseok, Sanghyun Son, and Kyoung Mu Lee. "AP-BSN: Self-supervised denoising for real-world images via asymmetric PD and blind-spot network." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022. Confidence: 5 Soundness: 3 Presentation: 4 Contribution: 4 Limitations: Yes Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: Thank you for your appreciation and valuable feedback. Please find our response and answers below. **[W1]** *The evaluation is limited to SIDD and DND datasets only. More diverse datasets could further validate the method’s robustness.* The SIDD and DND are widely recognized datasets for benchmarking real-world image denoising, with readily available results and trained models of a large number of denoising methods. Therefore, many existing self-supervised denoisers including the SOTA method PUCA [34] mainly use these two datasets for evaluation. For other datasets, there is a lack of published results from many methods. We agree with the reviewer that more diverse datasets could further evaluate the robustness of our method. In response to this comment, we conducted supplementary experiments on the NIND dataset [A1], primarily comparing our method against those with available results or codes. For PUCA, the second-best performer in our main paper, we obtained its results by re-training the model using the authors' code. Please refer to Table S1 below for the comparison of the results from different methods. The results, which will be included in the revision, demonstrate that our method maintains superior performance across different datasets as well. *[A1] Natural Image Noise Dataset. 
CVPRW 2019.* **Table S1: Quantitative PSNR(dB)/SSIM results of self-supervised methods on the NIND dataset.**

| Method | NIND ISO3200 | NIND ISO5000 |
| :----: | :-------------: | :-------------: |
| N2V | 28.42/0.766 | 27.04/0.658 |
| AP-BSN | 34.41/0.854 | 33.49/**0.847** |
| LG-BPN | 33.94/0.840 | 33.33/0.831 |
| C-BSN | 34.33/0.855 | 33.52/0.839 |
| PUCA | 34.24/0.854 | 33.49/0.840 |
| Ours | **34.43/0.857** | **33.55/0.847** |

--- **[W2]** *While some ablation studies are provided, further analysis could help understand the contribution of each component in more detail, such as the choice of hyperparameters, window size, and loss functions.* Thank you for the suggestion. In Table S2 below, we supplement the experimental results using varied values of the window (grid) size and the loss function weight $\lambda$. We can see that increasing the grid size may lead to better performance, as more pixels could be utilized in DSA and grid SA. When the loss weight $\lambda$ is set too small or too large, the performance decreases noticeably.

**Table S2: Results of ablation studies using different grid sizes and loss function weights on SIDD-Validation.**

| Grid size | PSNR(dB) | SSIM | Loss weight | PSNR(dB) | SSIM |
| :-------: | :---: | :---: | :---------: | :---: | :---: |
| 8 | 37.43 | 0.874 | 0.01 | 37.35 | 0.879 |
| 12 | 37.51 | 0.879 | 0.1 | 37.57 | 0.881 |
| 16 | 37.63 | 0.882 | 1 | 37.63 | 0.882 |
| 20 | 37.65 | 0.882 | 10 | 37.51 | 0.880 |

--- **[W3]** *Although the paper claims improvements in time complexity over other transformer-based methods, the computational requirements for training and inference are still likely higher compared to traditional CNN-based approaches, which could limit practical applicability in resource-constrained environments.* We agree that transformer-based models are generally more computationally expensive than CNN-based models. 
However, the impressive performance potential of transformer-based methods makes their study worthwhile. In our experiments, the proposed transformer-based method is faster than other transformer-based self-supervised learning methods. The current weakness of transformer-based methods in computational efficiency is an important research topic, and there is an active and ongoing effort to improve this aspect. --- **[Q1]** *One critical assumption of blind-spot networks is that noise across pixels is independent ... How did your BSN approach handle this noise correlation, as this was not detailed in the paper?* The noise correlation is addressed by masking out the 4$\times$4 half-plane neighboring locations around the center pixel in the attention window of the directional self-attention of SelfFormer-D during training. By masking out these neighboring pixels, the pixels utilized in the attention windows maintain a distance from the central pixel, and the noise correlation decreases with the distance between two pixels. We will include this detail in the revision. --- Rebuttal Comment 1.1: Comment: Thank you for the detailed rebuttal. I appreciate the effort the authors put into addressing the concerns I raised. The responses were clear and addressed my questions thoroughly. However, despite the promising result, the computational demands still remain a significant concern compared to CNN-based approaches, particularly in resource-constrained environments. While the proposed method shows improvements over other transformer-based approaches, the broader practical applicability could still be limited by these computational requirements. Given this, I believe that a 'weak accept' rating is still appropriate. The paper offers valuable contributions to the field, but the trade-offs in computational efficiency may restrict its impact in real-world scenarios.
Rebuttal 1: Rebuttal: We sincerely thank the reviewers for their valuable feedback and insightful suggestions. We appreciate the constructive nature of this review process and are committed to thoroughly addressing each aspect. Pdf: /pdf/ca729db0974868fd6a01f78e9db80491c2cf0c82.pdf
NeurIPS_2024_submissions_huggingface
2024
On the Comparison between Multi-modal and Single-modal Contrastive Learning
Accept (poster)
Summary: In this paper, the authors performed a theoretical analysis comparing single-modal and multi-modal contrastive learning. The authors proved theorems indicating that single-modal contrastive learning tends to perform worse on test datasets after converging on the training set, while multi-modal contrastive learning tends to perform better on the test set. They also conducted small-scale simulated experiments that confirmed their theoretical results. Strengths: The paper studies the important topic of contrastive learning, and puts effort into creating a theoretical understanding of the difference between single-modal and multi-modal contrastive learning. They also conducted small-scale simulated experiments that confirmed their theoretical results. Weaknesses: There are many problems with the proof: 1. The starting point of the proof is Assumption 4.1. However, Assumption 4.1 contains many issues that put the soundness and correctness of the entire proof, as well as the usefulness of the results, into question: (a) Incorrect use of big-O-like notations. Big-O notations are formally defined only as sets of functions of variables, but in Assumption 4.1, the authors used big-O-like notation to bound constants, which doesn't make sense. (b) Assumption (1) is extremely unrealistic. It basically requires that $d>n^2$, meaning that the input data's dimension is larger than the square of the number of data points in the dataset. In any real-world scenario, your data's input dimension will not be anywhere close to the square of the size of your dataset. For example, OpenAI CLIP is trained with $n=400M$ data instances, and the instance dimension is on the order of $d<2\times 10^5$, so in this case, $d < \frac{1}{10^{11}}n^2$. (c) Undefined variable $\sigma_0$. Technically, $n$ and $\eta$ are also never formally defined in the text, but we can guess that they mean the size of the dataset and the learning rate, respectively. 
(d) It is also unclear how assumption (6) distinguishes between the single-modal and multi-modal features, as the assumption only applies to the first modality's SNR. 2. Theorems 4.2 and 4.3, the main results of the paper, are not well-defined. $\epsilon$ is supposed to be a vector according to Eq (5), but the main result of Theorem 4.2 says that the train loss (which is scalar) is less than $\epsilon$. 3. Confusing definition of the loss derivatives in Eq (8). Are these definitions for $l_i^{(t)}$ or for $l'^{(t)}_{i}$? These expressions do not look like derivatives. 4. Lemma 5.1 is not proven anywhere, but is then directly used in proving Lemma C.3 and Lemma C.7. 5. The overall proof is organized poorly. It's very difficult to understand what each Lemma is trying to achieve and how the Lemmas are connected to each other, especially between the main text and the Appendix. For example, it is very hard to find where Lemma 5.2 (in the main paper) is proved in the Appendix. Perhaps the authors should add references in the main paper to the appendix (for example, state after Lemma 5.2 "proof in Appendix X.X"). 6. The proof roadmaps in Section 5 do not lead to the conclusions of the Theorems in Section 4. For example, the roadmap in Section 5.1 does not explicitly tell us how we end up with Theorem 4.2. 7. In the proof of Lemma B.2, on Line 560, it is unclear what "RHS" is referring to. Is it the RHS of the last equation on the previous page? Or is it the one immediately before it on line 560? If it is the latter case, you shouldn't be allowed to "set" it to $\delta/m$ since $\delta$ already appears in the expression before. In addition to the problems in the proof, it is also unclear how the "noise memorization" and "signal learning" evaluation numbers are defined in the experiment in Figure 1. Technical Quality: 2 Clarity: 1 Questions for Authors: 1. Please address the problems in the weakness section. 2. 
Do your theorems provide any new insight for future research on contrastive learning? Any actionable suggestions? Confidence: 2 Soundness: 2 Presentation: 1 Contribution: 2 Limitations: Discussion is adequate. Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their detailed and constructive feedback on our paper. Below, we provide detailed responses to the main comments --- **W1(a)** Incorrect use of big-O-like notations. **R1(a)** We appreciate the reviewer's attention to the details of our notation. We have followed standard usage of big-O notation, such as in [1,2,3,4]. Big-O notation is indeed a mathematical tool used to describe the asymptotic behavior of functions. While it is most commonly used to describe functions in terms of variables, it can also be used to describe constant bounds. For instance, the notation $O(1)$ is widely accepted and used to indicate a constant upper bound regardless of the size of the input. --- **W1(b)** $d>n^2$ is unrealistic. **R1(b)** We appreciate the reviewer's concern regarding the assumption $d> n^2$. Our work seeks to provide a theoretical understanding of the difference between multi-modal and single-modal contrastive learning in the over-parameterization regime. The high dimensional setting we adopted ensures sufficient over-parameterization, as the number of trainable parameters is $d \times m$. This setting is common in theoretical studies to simplify analysis and highlight key phenomena. Similar high-dimensional settings have been adopted in relevant prior works [3,4,5,6]. --- **W1(c)** Undefined variable σ_0, n, η. **R1(c)** Thank you for bringing this to our attention. $\sigma_0$ represents the initialization strength of the weights of the neural network, where $\mathbf{w}^{(0)}_r \sim \mathcal{N}(0, \sigma^2_0 I)$. $n$ is the number of training samples and $\eta$ is the learning rate. We will ensure that these variables are explicitly defined and clearly explained in the revised version of the manuscript. --- **W1(d)** Unclear how assumption (6) distinguishes between the single-modal and multi-modal features. **R1(d)** Assumption (6) specifically applies to the first modality's SNR. 
Assumption (7), on the other hand, specifies the relationship between the signal strengths of the two modalities, given that we assume the noise scale is the same across both modalities (as stated on Line 123). Together, assumptions (6) and (7) establish the requirements on the SNR for the second modality, thereby distinguishing the roles of single-modal and multi-modal features. --- **W2** $\epsilon$ is a vector according to Eq (5), but is scalar in Theorem 4.2. **R2** In our paper, we use bold-faced letters to denote vectors and matrices, while non-bold letters represent scalars (Line 106). Therefore, the scalar $\epsilon$ and the vector $\boldsymbol{\epsilon}$ are distinct variables. We will ensure that this notation is clarified in the revised version of the manuscript to avoid any confusion. --- **W3** Confusing definition of the loss derivatives in Eq (8). **R3** We use $\ell'^{(t)}_i$ to denote the 'effective' loss derivative, which is a component of the gradient of $L$ with respect to $\mathbf{w}_r^{(t)}$. The complete form of the gradient is given in Eq. (10) in the main paper. We will clarify this notation in the revised manuscript to avoid any confusion. --- **W4** Lemma 5.1 is not proven. **R4** Lemma 5.1 is derived directly from the weight decomposition (Eq 11) and gradient descent training (Eq 7). Similar results can be found in the literature [3,7]. We will include the proof of Lemma 5.1 in the appendix of the revised manuscript to ensure completeness and clarity. Thank you for pointing this out. --- **W5**: Hard to find where Lemma 5.2 is proved in the Appendix. **R5**: Thank you for the comment. We will restructure the proof to improve clarity and add explanations for each lemma. The proof of Lemma 5.2 can be found in Appendix C.1.3, and we will add an explicit reference to it after Lemma 5.2 in the main text. 
--- **W6**: The roadmap in Section 5.1 does not explicitly tell us how we end up with Theorem 4.2. **R6**: We would like to highlight that the proof roadmaps are intended to present the *major* steps and lemmas (rather than all of them) that lead to a final conclusion, and we leave the remaining parts to the appendix given the limited space in the main paper. In our revised version, we will add more explanation by including the remaining proof sketches that lead to the Theorems in Section 4. Here we give an explanation using single-modal learning (Section 5.1) as an example. After we have shown convergence of the dynamics in the second stage, we can show that the maximum signal learning is of order $\tilde O(1/\sqrt{n})$, while the maximum noise memorization is of constant order (Lemma C.17). This means that for downstream tasks, the embedding of a test sample would be dominated by the noise, overshadowing the signal. Thus the embedding is not linearly separable, which gives a constant test loss. --- **W7**: Line 560, it is unclear what "RHS" is referring to. **R7**: The RHS denotes the RHS of the inequality $ \mathbb{P} \big( |\langle \mathbf{w}^{(0)}_{r} , \boldsymbol{\mu} \rangle| \leq c \big) \leq 2 \sqrt{1 - \exp\big( - \frac{2 c^2}{\sigma_0^2 \| \boldsymbol{\mu} \|_2^2 \pi} \big)}$. The $\delta$ here is defined on Line 558 of Lemma B.2. The proof of Lemma B.2 is the first place $\delta$ is used, so the definition is valid. Similarly, the $\delta$s in Lemmas B.3, B.4, and B.5 are all defined within the scope of the respective lemma. --- **W8**: It is also unclear how the "noise memorization" and "signal learning" are defined in the experiment in Figure 1. **R8**: The signal learning and noise memorization refer to the coefficients $\gamma_r^{(t)}$ and $\rho_{r,i}^{(t)}$, which we have explicitly defined on Line 214. In the experiments, we track these coefficients along the optimization path to plot the results. 
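For readers unfamiliar with this coefficient notation, the signal-noise decomposition behind $\gamma_r^{(t)}$ and $\rho_{r,i}^{(t)}$ typically takes the following form in this line of work (e.g., [3,7]); the exact normalization used in the paper may differ:

```latex
% Each neuron's weight is decomposed along the signal direction \mu and
% the per-sample noise vectors \xi_i, plus the random initialization:
\mathbf{w}_r^{(t)}
  = \mathbf{w}_r^{(0)}
  + \gamma_r^{(t)} \, \|\boldsymbol{\mu}\|_2^{-2} \, \boldsymbol{\mu}
  + \sum_{i=1}^{n} \rho_{r,i}^{(t)} \, \|\boldsymbol{\xi}_i\|_2^{-2} \, \boldsymbol{\xi}_i
```

Here $\gamma_r^{(t)}$ quantifies signal learning and $\rho_{r,i}^{(t)}$ quantifies noise memorization for neuron $r$ and training sample $i$.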
--- **Q**: Do your theorems provide any new insight for future research on contrastive learning? **A**: Please refer to the global rebuttal. --- Rebuttal Comment 1.1: Title: Additional Discussion Comment: Thank you so much for your replies! I still have some remaining concerns about Lemma B.2. Specifically, there still seems to be a double definition of $c$ and $\delta$. You first defined $c = 100 n \|\boldsymbol{\mu}\|_2 \sigma_\xi^{-1} d^{-1} \sqrt{8\log(6n/\delta)}$ on line 560, so $c$ is defined depending on $\delta$; but then, you set the RHS (which is an expression containing $c$) directly to $\frac{\delta}{m}$, which shouldn't be allowed. The version available in the Appendix is too condensed for readers to fully grasp the details. Can you please provide a line-by-line detailed proof of Lemma B.2? This will help me better assess the correctness of the proofs too. Thank you! --- Rebuttal 2: Comment: **References** [1] Allen-Zhu, Zeyuan, et al. A convergence theory for deep learning via over-parameterization. In International Conference on Machine Learning. 2019. [2] Allen-Zhu, Zeyuan. Learning and generalization in overparameterized neural networks, going beyond two layers. In Advances in Neural Information Processing Systems. 2019. [3] Cao, Yuan, et al. Benign overfitting in two-layer convolutional neural networks. In Advances in Neural Information Processing Systems. 2022. [4] Meng, Xuran, et al. Benign overfitting in two-layer ReLU convolutional neural networks for XOR data. In International Conference on Machine Learning. 2024. [5] Wen, Z. and Li, Y. Toward understanding the feature learning process of self-supervised contrastive learning. In International Conference on Machine Learning (pp. 11112-11122). PMLR, 2021. [6] Xu, Z., Wang, Y., Frei, S., Vardi, G., and Hu, W. Benign overfitting and grokking in ReLU networks for XOR cluster data. ICLR 2024. [7] Kou, Yiwen, et al. "Benign overfitting in two-layer ReLU convolutional neural networks." 
International Conference on Machine Learning. PMLR, 2023. --- Rebuttal 3: Title: Detailed Response to Follow-Up Question on Lemma B.2 Comment: Thank you for the follow-up question. Below, we provide a line-by-line proof of Lemma B.2. *Proof of Lemma B.2.* By Lemma B.1 (an anti-concentration result for Gaussian random variables), and because $\langle \mathbf{w}^{(0)}_r, \boldsymbol{\mu} \rangle \sim \mathcal{N}(0, \sigma_0^2 \| \boldsymbol{\mu} \|^2_2)$, we can show $$ \mathbb{P} \big( |\langle \mathbf{w}^{(0)}_{r} , \boldsymbol{\mu} \rangle| \leq c \big) \leq 2 \sqrt{1 - \exp\big( - \frac{2 c^2}{\sigma_0^2 \| \boldsymbol{\mu} \|_2^2 \pi} \big)}. $$ We first set $c = 100 \cdot \text{SNR} \sqrt{\frac{8\log(6n/\delta)}{d}} n$ and plug it into the RHS of the above inequality, which becomes $$\text{RHS} = 2 \sqrt{1 - \exp\big(- \frac{16\times 100^2 \cdot \log(6n/\delta) n^2}{\sigma_0^2 \sigma_\xi^2 d^2 \pi} \big)}.$$ Then, when $d$ satisfies the condition on Line 561, i.e., $d \geq \frac{400n}{\sigma_0 \sigma_\xi} \sqrt{\frac{\log(6n/\delta)}{-\pi \log(1- \delta^2/(4m^2))}}$, we can verify that $$\text{RHS} \leq \frac{\delta}{m}.$$ This suggests that for a single neuron $r \in [m]$, we have $$\mathbb{P}( |\langle \mathbf{w}_r^{(0)}, \boldsymbol{\mu} \rangle| \leq c ) \leq \frac{\delta}{m}.$$ Taking the union bound over all neurons $r \in [m]$, we have $$ \mathbb{P} \big( \cup_{r=1}^m \{ | \langle \mathbf{w}_r^{(0)}, \boldsymbol{\mu} \rangle | \leq c \} \big) \leq \sum_{r=1}^m \mathbb{P}( |\langle \mathbf{w}_r^{(0)}, \boldsymbol{\mu} \rangle| \leq c ) \leq \delta. $$ This then implies that $$\mathbb{P}( \cap_{r=1}^m \{|\langle \mathbf{w}_r^{(0)}, \boldsymbol{\mu} \rangle| \geq c \} ) \geq 1 - \delta.$$ This concludes that with probability at least $1-\delta$, for all neurons $r \in [m]$, we have $|\langle \mathbf{w}_r^{(0)}, \boldsymbol{\mu}\rangle| \geq c = 100 \cdot \text{SNR} \sqrt{\frac{8\log(6n/\delta)}{d}} n$. 
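As an aside, the Gaussian anti-concentration inequality used in the first step of this proof is easy to check numerically. The following Monte Carlo sketch is our own illustration (the values of `sigma` and `c` are arbitrary choices, not taken from the paper): it verifies empirically that $\mathbb{P}(|X| \leq c) \leq 2\sqrt{1 - \exp(-2c^2/(\sigma^2\pi))}$ for $X \sim \mathcal{N}(0, \sigma^2)$.

```python
import math
import random

# Monte Carlo check of the anti-concentration bound
# P(|X| <= c) <= 2 * sqrt(1 - exp(-2 c^2 / (sigma^2 * pi)))
# for X ~ N(0, sigma^2); sigma and c are illustrative values.
random.seed(0)
sigma = 1.0
trials = 200_000
for c in (0.1, 0.5, 1.0):
    hits = sum(abs(random.gauss(0.0, sigma)) <= c for _ in range(trials))
    empirical = hits / trials
    bound = 2.0 * math.sqrt(1.0 - math.exp(-2.0 * c * c / (sigma * sigma * math.pi)))
    assert empirical <= bound  # empirical probability stays below the bound
```

The bound is loose for large $c$ (it can exceed 1), but it is only needed for small $c$, where it controls how many neurons can have a near-zero inner product with the signal at initialization.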
We hope the above line-by-line proof resolves your question; we believe the wording "set the RHS to $\delta/m$" is what caused the confusion. We will remove such wording and make the proof clearer in the revised version. If you have any further questions, we are more than willing to address them. If you find that our responses have sufficiently addressed your concerns and questions, please consider increasing your score. Thank you for your time. --- Rebuttal Comment 3.1: Title: Continued Discussion Comment: Thank you for your response! Your line-by-line proof of Lemma B.2 is much clearer than what was available in the paper, and this allowed me to verify that the proof of Lemma B.2 is correct; I then continued to verify the correctness of the proofs that depend on it. One problem I found is the inconsistency of the scopes of symbols. There are global definitions/assumptions of symbols from the main paper, and then some additional assumptions/definitions are made in the Lemma statements, which makes it confusing as readers are unsure which assumptions still hold within the proofs of each Lemma. For example, the symbol $d$ is bounded within Assumption 4.1, but is then again given a lower bound in Lemmas B.2, B.4, B.5, C.3, C.5, C.6, etc. These lower bounds do not have a clear relative size ordering, and are not necessarily bounded by Assumption 4.1 (such as the assumed lower bound on $d$ in Lemma C.6). Technically, within each lemma's proof, the statements should assume the lower bound on $d$ stated in that Lemma, not Assumption 4.1, unless explicitly stated. Therefore, for each following lemma that depends on a previous lemma, the lower bound assumption on $d$ must be stronger than the one from the previous lemma. 
There are also different assumptions on the sample size $n$ (Lemma C.4, Lemma C.10). Therefore, can you (1) clarify the scopes of the assumptions/definitions of all variables, and (2) explain why the earlier lemmas can be used in the proofs of later lemmas without proof of tighter/equal bound assumptions? Also, regarding R6, it is not trivial to prove Lemma 5.2 from Lemma C.17, and I am unable to find the intermediate steps of this proof. Similarly, there needs to be a proof of Lemma 5.1 from Lemma C.11, and steps of the proof of Theorem 4.2 from Lemmas 5.1 and 5.2. If there is no space in the main paper, you should include them in the appendix at least. (The same situation appears for the Lemmas D.*, Theorem 4.3, and Lemmas 5.3/5.4.) An additional comment/suggestion: the current full proof is very poorly written and poorly organized. It is extremely difficult to read. Some of the variable definitions/assumptions occurred more than 20 pages earlier, and there should be reminders of the definitions/assumptions. The current proof-writing quality would not be admissible to any mathematical journal where reviewers actually check the correctness of your proofs. If everything were written step-by-step and well-defined like the Lemma B.2 proof that you rewrote above, the writing quality would be good enough to allow reviewers to verify the correctness of your proofs. --- Reply to Comment 3.1.1: Comment: Thank you for your detailed feedback. We appreciate the opportunity to clarify the assumptions and structure of our proofs. **Global and Local Assumptions**: Assumption 4.1 is indeed a global assumption that applies to the entire paper. The assumptions made within specific lemmas do not conflict with this global assumption. In other words, the assumptions in specific lemmas can be weaker but are still compatible with Assumption 4.1. If there is any uncertainty about which assumption should be applied within a lemma, Assumption 4.1 should be considered the default. 
**Regarding the Assumption on $d$**: Assumption 4.1 provides a unified lower bound that encompasses the requirements from various lemmas, such as B.2, B.4, B.5, C.3, C.5, and C.6. The lower bound on $d$ in Lemma C.6 is indeed bounded by Assumption 4.1. Specifically, the condition on $d$ in Lemma C.6 can be shown to satisfy Assumption 4.1: $ \sqrt{\frac{300 \log(6n^2/\delta)}{-\log(1-\delta^2/4n^2) \pi \sigma_0^2\sigma_\xi^2}} = \tilde{\Theta}(1/(\sigma_0 \sigma_\xi ) ) $ which is consistent with Assumption 4.1. **Inheritance of Conditions in Lemmas**: When a lemma is used in the proof of a subsequent lemma, the conditions from the earlier lemma are inherited. This ensures that the assumptions remain consistent and valid throughout the proofs. **Proof Details and Organization**: Regarding Lemma 5.2 and Theorem 4.2, Lemma 5.2 ensures that until convergence of the dynamics in the second stage, the maximum signal learning is on the order of $\tilde{O}(n^{-1/2})$, while the maximum noise memorization remains constant (as shown in Lemma C.17). The results in Theorem 4.2 are derived through the arguments provided in Section C.3. In the appendix, we have organized the material using section and subsection titles to provide a clear structure. While some steps in the proofs are streamlined, this approach aligns with standard literature in deep learning theory [1, 2, 3]. We are more than willing to address any specific questions or concerns you may have. We hope this clarification resolves your concerns. If you have any further questions or suggestions, please feel free to ask. [1] Allen-Zhu, Zeyuan, et al. A convergence theory for deep learning via over-parameterization. ICML 2019. [2] Cao, Yuan, et al. Benign overfitting in two-layer convolutional neural networks. NeurIPS 2022. [3] Xu, Z., Wang, Y., Frei, S., Vardi, G., & Hu, W. (2023). Benign overfitting and grokking in relu networks for xor cluster data. ICLR 2024.
Summary: This paper presents a theoretical framework to understand the differences between multi-modal and single-modal contrastive learning approaches. It emphasizes the impact of signal-to-noise ratio (SNR) on the generalizability of these learning methods in downstream tasks. The authors argue that multi-modal learning, through the cooperation of modalities, can achieve better feature learning and performance in downstream tasks compared to single-modal learning. Strengths: - This paper explores an interesting problem. As to why multi-modal contrastive learning might be more effective than its single-modal counterpart, this paper provides a feature learning theory framework for analyzing differences between multi-modal and single-modal contrastive learning, which is valuable for the machine learning community. - This paper delves into the nuanced dynamics of multi-modal and single-modal contrastive learning by employing a data generation model that incorporates signal and noise, thereby enabling an in-depth study of the optimization trajectories under gradient descent. - The authors notably identify the signal-to-noise ratio (SNR) as a pivotal determinant of the generalizability across learning methods, and compellingly demonstrate that the orchestrated interplay between modalities in multi-modal contrastive learning fosters superior generalization in downstream tasks, underscoring the importance of harmonizing diverse data streams for effective feature learning. Weaknesses: - **Strong assumptions made in the theoretical analysis** The analysis relies on some strict assumptions, such as specific signal-to-noise ratios and network initialization conditions, which might not hold in real-world applications. - **Some influential previous works are not introduced**, such as UMT/UME [1], QMF [2], MMPareto [3], and ReconBoost [4]. [1] On Uni-Modal Feature Learning in Supervised Multi-Modal Learning. ICML 2023. [2] Provable Dynamic Fusion for Low-Quality Multimodal Data. 
ICML 2023. [3] MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance. ICML 2024. [4] ReconBoost: Boosting Can Achieve Modality Reconcilement. ICML 2024. Technical Quality: 3 Clarity: 3 Questions for Authors: The theoretical analysis is predicated on rather strong assumptions. If the volume of data and the complexity of the models are significantly increased, would the conclusions still be applicable? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: None Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive feedback and thoughtful comments on our paper. We appreciate the opportunity to address the points raised and clarify our contributions. Below, we provide detailed responses to the main comments and questions. --- **W1**: Strong assumptions made in the theoretical analysis. The analysis relies on some strict assumptions, such as specific signal-to-noise ratios and network initialization conditions, which might not hold in real-world applications. **R1**: The data model adopted in this work aligns with related works in the field. Specifically, we use small weight initialization from a Gaussian distribution, ensuring that the network trained with gradient descent can effectively perform feature learning, as demonstrated in [a,b]. If large initialization were used, the training might fall into the lazy training or neural tangent kernel (NTK) regime, which has different learning dynamics and generalization properties [c,d]. The specific SNR values were chosen to create a scenario where we can make a meaningful comparison between single-modal and multi-modal contrastive learning. Similar strategies have been adopted in prior works [e,f]. --- **W2**: Some influential previous works are not introduced, such as UMT/UME [1], QMF [2], MMPareto [3], and ReconBoost [4]. **R2**: We thank the reviewer for pointing out these valuable references. Indeed, the mentioned works provide significant theoretical and empirical insights into multimodal learning: [1] "On Uni-Modal Feature Learning in Supervised Multi-Modal Learning" (UMT/UME) explores feature learning in multimodal contexts, although its setting and claims differ from ours. [2] "Provable Dynamic Fusion for Low-Quality Multimodal Data" (QMF) focuses on a popular multimodal fusion framework from a generalization perspective. 
[3] "MMPareto: Boosting Multimodal Learning with Innocent Unimodal Assistance" identifies previously ignored gradient conflicts between multimodal and unimodal learning objectives through optimization analysis. [4] "ReconBoost: Boosting Can Achieve Modality Reconcilement" explores a novel multimodal alternating learning paradigm that aims to reconcile the exploitation of unimodal features with the exploration of cross-modal interactions. These studies are relevant to our work, and we will cite them and add a discussion in our revised version to provide a more comprehensive overview of the related literature and to position our contributions in the broader context of multimodal learning research. --- **Q**: The theoretical analysis is predicated on rather strong assumptions. If the volume of data and the complexity of the models are significantly increased, would the conclusions still be applicable? **A**: Our theoretical analysis can indeed cover scenarios with large data sizes (when the sample size $n$ and dimension $d$ increase). This ensures that our conclusions about training dynamics and generalization hold even with an increase in the volume of data. We believe that the main insights regarding the benefits of modality cooperation and the importance of the signal-to-noise ratio (SNR) are likely to extend to more complex data distributions (such as non-linearly separable ones) as well as more complex model architectures (such as transformers). The core principles of multi-modal learning, such as leveraging complementary information from different modalities to enhance signal learning, are not inherently tied to the simplicity of the data model. We acknowledge that more complex scenarios require further investigation, and we consider this an interesting direction for future work. --- **References** [a] Cao, Yuan, et al. "Benign overfitting in two-layer convolutional neural networks." Advances in neural information processing systems 35 (2022): 25237-25250. 
[b] Suzuki, Taiji, et al. "Feature learning via mean-field langevin dynamics: classifying sparse parities and beyond." Advances in Neural Information Processing Systems 36 (2024). [c] Jacot, Arthur, Franck Gabriel, and Clément Hongler. "Neural tangent kernel: Convergence and generalization in neural networks." Advances in neural information processing systems 31 (2018). [d] Chizat, Lenaic, Edouard Oyallon, and Francis Bach. "On lazy training in differentiable programming." Advances in neural information processing systems 32 (2019). [e] Wen, Zixin, and Yuanzhi Li. "Toward understanding the feature learning process of self-supervised contrastive learning." International Conference on Machine Learning. PMLR, 2021. [f] Zou, Difan, et al. "Understanding the generalization of adam in learning neural networks with proper regularization." arXiv preprint arXiv:2108.11371 (2021). --- Rebuttal Comment 1.1: Comment: I appreciate the author's great efforts in the rebuttal phase. I will keep my initial score as a final decision. --- Reply to Comment 1.1.1: Comment: Thank you for your positive support and for taking the time to review our rebuttal. We appreciate your thoughtful consideration throughout the review process.
Summary: This paper provides a theoretical analysis comparing single-modal and multi-modal contrastive learning. The authors develop a unified framework to analyze the optimization dynamics and generalization capabilities of both approaches. Key findings include: - Both single-modal and multi-modal contrastive learning can achieve small training error. - Multi-modal contrastive learning generalizes better to downstream tasks compared to single-modal learning. - The advantage of multi-modal learning comes from cooperation between modalities and higher quality data in the second modality. The analysis is based on a two-stage optimization process and uses a signal-noise data generation model. Theoretical results are supported by synthetic experiments. Strengths: - Develops a unified framework to analyze both single-modal and multi-modal contrastive learning, allowing for direct comparisons. - Provides detailed theoretical analysis, including convergence guarantees and generalization bounds. - Identifies key factors (signal-to-noise ratio, cooperation between modalities) that explain the superior performance of multi-modal learning. - Supports theoretical findings with synthetic experiments that align well with the analysis. Weaknesses: - The paper could benefit from more intuitive explanations of the key theoretical results to make them more accessible to a broader audience. For example, can you provide intuition for why the cooperation between modalities leads to better signal learning in the multi-modal case? How might this insight be leveraged to improve single-modal contrastive learning? - The paper lacks discussion on how the insights could be applied to improve existing contrastive learning methods or guide the development of new approaches. Do your theoretical insights suggest any practical strategies for improving multi-modal contrastive learning, such as ways to select or preprocess data to increase the effective signal-to-noise ratio? 
- The theoretical setup and assumption of the linear data generation model are somewhat simple and restricted. Do you expect the main insights to hold for more complex data distributions? - The experimental validation is limited to synthetic data. Including experiments on real-world datasets, even if simplified, would strengthen the paper's impact. Technical Quality: 3 Clarity: 3 Questions for Authors: Line 205: “On the contrary, augmentation often maintains the same SNR as the original data, so single-modal learning hardly benefits from the augmentation and can only memorize the noise from the data.” This claim significantly contradicts empirical experiments such as SimCLR and MoCo. How would you justify this? Confidence: 3 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: See weaknesses. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We would like to thank the reviewer for their constructive feedback and insightful comments on our paper. We appreciate the opportunity to address the points raised and clarify our contributions. Below, we provide detailed responses to the main comments and questions. --- **W1**: Can you provide intuition for why the cooperation between modalities leads to better signal learning in the multi-modal case? How might this insight be leveraged to improve single-modal contrastive learning? **R1**: Both single-modal and multi-modal contrastive learning aim to maximize the similarity between positive pairs while minimizing the similarity between negative ones. The difference is that single-modal learning forms positive pairs by data augmentation, while multi-modal learning forms positive pairs by the correspondence between the two modalities. Usually, data augmentation does not change the signal-to-noise ratio (SNR), while the two modalities can have different SNRs. Due to the use of the contrastive loss, the learning of signals in one modality depends strongly on the signal learning in the other modality. Thus, when the other modality has a higher SNR, the learning of the first modality is lifted by aligning with the second modality. This cooperative learning leads to better overall signal learning compared to single-modal contrastive learning. To improve single-modal contrastive learning, one could potentially simulate the effect of multi-modal learning by augmenting the single modality with additional synthetic views that enhance the signal-to-noise ratio, thereby improving the overall learning process. --- **W2**: Do your theoretical insights suggest any practical strategies for improving multi-modal contrastive learning? **R2**: Please refer to the global rebuttal --- **W3**: Do you expect the main insights to hold for more complex data distributions? 
**R3**: While our current analysis is based on a linear data generation model, we believe that the main insights regarding the benefits of modality cooperation and the importance of the signal-to-noise ratio (SNR) are likely to extend to more complex data distributions. The core principles of multi-modal learning, such as leveraging complementary information from different modalities to enhance signal learning, are not inherently tied to the simplicity of the data model. To ensure broader applicability, future work will focus on validating these insights with non-linear and more realistic data models. This will involve experimenting with a variety of data distributions and network architectures to confirm that the advantages of multi-modal cooperation and improved SNR hold true in more practical and complex scenarios. --- **W4**: The experimental validation is limited to synthetic data. **R4**: We have extended the comparison to realistic image data, ColoredMNIST [1,2], which is a typical benchmark for studying generalization capability under distribution shifts. The task is a 10-class classification that recognizes the digit in the colored MNIST images. The two modalities are the images and text captions that describe them. We implement the multi-modal learning following the practice of [2], where we consider an ideal language encoder that successfully encodes the caption of each image into one-hot labels of colors and digits. For single-modal learning, we follow the implementation of the SimCLR paper to construct a set of augmentations to learn the representations. Then, under a mild and realistic distribution shift, we verify that CLIP achieves an out-of-distribution test accuracy of 82.13\%, which outperforms SimCLR's 12.68\%. 
| Model | Train Accuracy | Test Accuracy | |-------|----------------|---------------| | SimCLR | 88.43% | 12.68% | | CLIP | 87.77% | 82.13% | --- **Q**: Line 205: “On the contrary, augmentation often maintains the same SNR as the original data, so single-modal learning hardly benefits from the augmentation and can only memorize the noise from the data.” This claim significantly contradicts empirical experiments such as SimCLR and MoCo. How would you justify this? **A**: Our argument does not contradict the success of single-modal contrastive learning methods such as SimCLR and MoCo. The statement is made in a comparative context with multi-modal contrastive learning. In our analysis, when comparing single-modal to multi-modal contrastive learning, the augmentation in the single-modal case often does not improve the signal-to-noise ratio (SNR) as effectively as having a second, complementary modality would. Therefore, in scenarios where the SNR is not significantly enhanced by augmentation, single-modal learning may perform worse than multi-modal learning. Having said that, this does not imply that single-modal methods like SimCLR and MoCo fail in general. Empirical evidence shows that these methods are quite effective, particularly when the number of training samples is large and the SNR is sufficiently high. Our theoretical framework can indeed provide guarantees for the success of single-modal contrastive learning under such conditions. --- **References** [1] Invariant Risk Minimization arXiv 2019. [2] Do CLIPs Always Generalize Better than ImageNet Models? arXiv 2024. --- Rebuttal Comment 1.1: Comment: Thanks for the author's response! Most of my concerns have been addressed. One remaining question: could you elaborate more on the experimental setting for ColoredMNIST experiments? What do the training data & test data look like? --- Reply to Comment 1.1.1: Comment: Thank you for your positive feedback and further questions regarding the ColoredMNIST experiment. 
We are happy to provide additional details: - Image: The ColoredMNIST dataset is a variation of the standard MNIST dataset, where each digit is assigned a specific color based on its label. We consider a 10-class classification task in order to better match the realistic setting. Specifically, we have 10 colors for the 10 digits, and introduce spurious correlations via label noise following the literature. - For the training set, 10% of labels are flipped to a random class. Images with class '0' (or '1') are colored red (or green) with a probability of 77.5%, and another random color with a probability of 22.5%. This coloring scheme introduces a spurious correlation. - For the test set, 10% of labels are flipped to a random class. Images with class '0' (or '1') are colored green (or red) with a probability of 77.5%, and another random color with a probability of 22.5%. This coloring scheme can be considered as reversing the training spurious correlations. Therefore, the evaluation on the test set reflects to what extent the model learns to use the spurious features, i.e., colors, to classify images. - Text: We also consider an ideal language encoder that successfully encodes the captions of the images into one-hot labels of colors and digits. As a result, we can claim that the effective SNR of the invariant feature (the shape of the digit) is degraded by the injected color. Therefore, the performance of SimCLR may be suboptimal, as it cannot effectively utilize the information of the digit's shape. On the other hand, CLIP demonstrates a better capacity for handling this scenario. We hope this additional explanation clarifies the experimental setup for the ColoredMNIST experiments. If you have any further questions, please feel free to ask. If our answer has clarified your concerns, we would appreciate it if you could consider reevaluating our submission.
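To make the construction above concrete, here is a minimal sketch of the ColoredMNIST label/color scheme. It is our own illustration: the 10% label-noise rate and the 77.5%/22.5% coloring split come from the description in this reply, while the concrete color table and the cyclic permutation used to reverse the spurious correlation at test time are hypothetical choices, not taken from the experiment code.

```python
import random

NUM_CLASSES = 10  # 10 digits; color id i is spuriously tied to digit class i

def color_and_label(digit, split, rng):
    """Assign a (possibly noisy) label and a color to one MNIST digit."""
    label = digit
    if rng.random() < 0.10:                     # 10% of labels flipped to a random class
        label = rng.randrange(NUM_CLASSES)
    correlated = label                          # train-time spurious color
    if split == "test":                         # test split reverses the correlation;
        correlated = (label + 1) % NUM_CLASSES  # this cyclic shift is a hypothetical choice
    if rng.random() < 0.775:
        color = correlated                      # correlated color 77.5% of the time
    else:
        color = rng.randrange(NUM_CLASSES)      # random color the remaining 22.5%
    return label, color

rng = random.Random(0)
train = [color_and_label(d % NUM_CLASSES, "train", rng) for d in range(1000)]
```

A classifier that latches onto color alone would score roughly 77.5% on such a training set but far worse under the reversed test-time correlation, which is consistent with the large train/test gap reported for SimCLR.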
Rebuttal 1: Rebuttal: Dear Reviewers and ACs Thank you all for your time and constructive feedback! We are truly encouraged to see many positive comments on our work, such as the *unified framework for an interesting problem* (Reviewer bcfY, Reviewer 7UFF, Reviewer nN5z), *thorough theoretical analysis* (Reviewer bcfY, Reviewer 7UFF), *identification of key factors* (Reviewer bcfY, Reviewer 7UFF), and *supportive simulations* (Reviewer bcfY, Reviewer nN5z). Your insights have greatly helped us strengthen our manuscript. Here, we provide a response to a common question raised by Reviewer bcfY (W2) and Reviewer nN5z (Q): **Q**: Do your theoretical insights suggest any practical strategies for improving multi-modal contrastive learning? / Do your theorems provide any new insight into future research on contrastive learning? Any actionable suggestions? **A**: Our theoretical results highlight that increasing the effective signal-to-noise ratio (SNR) across modalities is crucial for improving multi-modal contrastive learning. This leads to two practical strategies for developing new approaches: - **Selecting or Generating High-Quality Data Pairs** Ensuring that the signal is strong and well-aligned across modalities can significantly improve performance. For instance, improving the quality of aligned image-caption samples by filtering out poorly aligned pairs used for training can enhance the overall learning process [1,2,3]. - **Improving SNR for the Text Modality** Enhancing the descriptiveness of the captions can boost the SNR for the text modality, leading to better performance in multi-modal contrastive learning [4,5,6]. Should the reviewers have any further suggestions or wish to discuss any points in more detail, we would be more than delighted to continue our productive exchange. Once again, we deeply appreciate the reviewers' time and valuable comments. Best regards, Authors of Submission 555 --- **References** [1] Schuhmann, Christoph, et al. 
"Laion-5b: An open large-scale dataset for training next generation image-text models." Advances in Neural Information Processing Systems 35 (2022): 25278-25294. [2] Nguyen, Thao, et al. "Quality not quantity: On the interaction between dataset design and robustness of clip." Advances in Neural Information Processing Systems 35 (2022): 21455-21469. [3] Gadre, S.Y., Ilharco, G., Fang, A., Hayase, J., Smyrnis, G., Nguyen, T., Marten, R., Wortsman, M., Ghosh, D., Zhang, J. and Orgad, E., 2024. Datacomp: In search of the next generation of multimodal datasets. Advances in Neural Information Processing Systems, 36. [4] Santurkar, Shibani, et al. "Is a caption worth a thousand images? a study on representation learning." The Eleventh International Conference on Learning Representations. 2023. [5] Nguyen, Thao, et al. "Improving multimodal datasets with image captioning." Advances in Neural Information Processing Systems 36 (2024). [6] Fan, L., Krishnan, D., Isola, P., Katabi, D. and Tian, Y., 2024. Improving clip training with language rewrites. Advances in Neural Information Processing Systems, 36.
NeurIPS_2024_submissions_huggingface
2024
Graphcode: Learning from multiparameter persistent homology using graph neural networks
Accept (poster)
Summary: This paper introduces graphcodes, biparametric persistence summaries that are empirically efficient to compute yet not topologically invariant. It also provides a dedicated C++ library for calculating graphcodes. Graphcodes are structured as layered graphs; each layer contains vertices representing points in persistence diagrams derived from "horizontal slices" of a bifiltration. Edges connecting these points are induced by the inclusion maps between homology groups across the vertical slices. The authors propose the use of graphcodes as effective representations of data topology for graph neural networks, which facilitates learning from topological features in datasets through deep learning approaches. The performance of graphcodes is empirically tested and benchmarked against other (multi)persistent summaries across three distinct tasks: graph classification, shape classification, and binary classification of samples derived from various random point processes. While graphcodes demonstrate superior performance in shape and binary classification tasks, they do not outperform state-of-the-art methods in graph classification. Strengths: **Significance**: The paper tackles an important problem in applied topological data analysis: the development of topological summaries or representations that "characterize" (or provide critical information about) multiparameter persistence modules in such a way that they are easy to use/study, as persistence diagrams and their vectorizations do in one-dimensional persistent homology. More than that, the proposed summaries, called graphcodes, are not only good representations in the sense that they are as easy to understand (stacked and connected persistence diagrams) as one-dimensional persistence diagrams, but are also easy to represent and manipulate in machine learning tasks, as are diagram vectorizations in the one-dimensional case. 
The graph nature of the representation makes graphcodes ideal for machine learning tasks, where topological information can be combined with graph learning architectures to automatically extract topological information about the input. This last point may be really relevant in topological deep learning as an automatic topological feature extractor from topological domains. **Originality**: Although I am not an expert on the specific topic of multiparameter persistent homology, I know that its integration with machine learning is still in its early stages, and there are not a lot of papers exploring how to create efficient and ML-suitable representations of the topological information of data. This one does, and it does so in a very original and understandable way. Seeing biparametric filtrations as graphs is something clever that I have not seen before and opens the door to automatically extracting topological information from the input data using graph learning methods. **Quality and Clarity**: Regarding quality, I find the main text (and the parts of the appendix I read) to be of very good quality. For an expert in topological data analysis, the paper can be followed smoothly, and the main concepts are illustrated with figures. For a non-expert, the theoretical part will probably be hard, but I cannot think of any way of making it clearer with the limited number of pages. Regarding the experiments, the number of tasks is enough for a theoretical paper. Weaknesses: The main weakness of the paper is that the topological summary is not a topological invariant and depends on a choice of basis. This likely means that, in a real scenario, neural networks fed with graphcodes would require numerous examples to learn how to overcome this limitation. In a context where equivariance and invariance are becoming increasingly important in neural networks to avoid issues like this one, this may prevent graphcodes from being a suitable choice in deep learning pipelines. 
However, something similar occurs with graph eigenvector positional encodings, yet they are still used in state-of-the-art graph neural networks. The suitability of graphcodes may depend on how sensitive the performance of neural networks trained with graphcodes is to the choice of basis. Given that this conference is oriented towards machine learning, I believe that experiments assessing this sensitivity are crucial for the paper's acceptance. Regarding the entire set of experiments, I feel the discussion on the graph classification problem is not fair. For instance, it is mentioned that the performance of their methods is affected by small training sets for GNN architectures, but for the proteins dataset, a GNN achieves an accuracy of 84.91%, which is significantly higher than the methods reported, according to PapersWithCode: https://paperswithcode.com/sota/graph-classification-on-proteins. Although it is a drawback that the results with graphcodes are inferior to their counterparts, I do not think this is a critical weakness, and I believe it is preferable to have a more critical discussion of the method. Similarly, I think that the comparison with other methods in graph classification is unfair because they do not report the accuracies of neural networks trained with the other summaries. Although you mention partial experimentation, it is not specified adequately how these experiments were conducted, how many parameters they used, etc., to dismiss the experiments and present results from other tables. Regarding the text, it is unusual for me to work with half-open intervals [a, a). As stated in Figure 2, some cycles can be born at the same time they die. While it is useful for computations to have these bars, it is not standard to consider persistence diagrams with only some diagonal points. Even the notation is counterintuitive (what does an interval like this signify?). Perhaps it would be worth adding a comment about this fact. 
There is a typo in line 134 (bais -> basis). I believe there is a typo in line 143: I think you meant to say that it "contains consistent bases for all chain vector spaces $Z_p(K_1)$, ..., $Z_p(K_n)$"; subcomplexes are not vector spaces. Finally, regarding the literature review, I feel that the Mapper paper [1], as well as [2] and [3], could be good additions to the section. The first one is usually used to generate graph representations of datasets that "preserve" topology and can also be used as input to graph neural networks. The second one analyzes the expressivity of persistent homology for graph learning (which coincides very well with the topic). The third one is a topological layer based on persistent homology for graphs. [1] Singh, Gurjeet, Facundo Mémoli, and Gunnar E. Carlsson. "Topological methods for the analysis of high dimensional data sets and 3d object recognition." PBG@ Eurographics 2 (2007): 091-100. [2] Rieck, Bastian. "On the expressivity of persistent homology in graph learning." arXiv preprint arXiv:2302.09826 (2023). [3] Horn, Max, et al. "Topological graph neural networks." arXiv preprint arXiv:2102.07835 (2021). PS: Some basic experiments (really basic) on sensitivity and a slight modification of the graph classification discussion to make it more critical would increase my score to weak accept. More thorough experiments on sensitivity, and/or a good discussion of this section, would likely make me increase the score even higher. Technical Quality: 3 Clarity: 4 Questions for Authors: - Vertices of the graph are given by persistence diagrams computed from "horizontal" slices, and edges from vertical morphisms. Does the construction depend on which parameter you select to be "horizontal" and which parameter you select to be "vertical"? - Can computations be parallelized? If so, what is the impact on the computation time? I saw that you compute the whole graph by reducing one big matrix.
However, it seems to me that parallelization, like reducing several matrices at a time, could be worthwhile in this context. - Could this be extended to $n$-parametric persistent homology? - Are there differentiability results for these computations? You compare your method with PersLay. One of the good points of PersLay is precisely that backpropagation can be computed using PersLay as an intermediate layer, thanks to differentiability results for one-dimensional persistence barcodes. Can something be said about this? Confidence: 4 Soundness: 3 Presentation: 4 Contribution: 3 Limitations: The authors have adequately addressed the limitations of their paper throughout the text. Flag For Ethics Review: ['No ethics review needed.'] Rating: 6 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. **Weaknesses:** Please also see "Expressivity and non-invariance of graphcodes" in the overall rebuttal above. Given a fixed-size dataset, one can create multiple graphcodes corresponding to different choices of bases for each instance. This is a way to teach the neural network to learn relevant information independently of the chosen basis, without the need for a bigger initial dataset. We only included the TUDatasets as they seem to be the standard benchmark for multi-parameter topological descriptors. We are aware that GNNs outperform topological descriptors on these datasets, and we don't see any evidence that these datasets are particularly well suited for topological descriptors. But we want to point out that graphcodes are not inferior to other methods on all of these datasets. They are certainly not dominant, but on some of the datasets they outperform some of the other methods. And none of the methods outperforms every other method on every dataset. One can also not directly compare the data required for a GNN taking the graphs directly as input to a GNN taking the graphcodes as input, as the graphcodes can be much bigger than the initial graphs. We actually did experiments testing the sensitivity to different choices of basis. In the uploaded software we implement an option (do-exhaustive-reduction) to compute a basis based on minimal lexicographic cycles and an option (do-random-shuffle) to make random changes to the basis. The minimal lexicographic cycles didn't change the classification accuracy, while the random bases led to slightly worse results. We are happy to include these experiments in a final version. We also propose to change the basis of each graphcode in every iteration of the training process to teach the neural network to extract the relevant information independently of the bases. The problem is that randomly changing the basis is not trivial.
The simple method we implemented (random shuffle) increases the number of edges a lot. Performing this kind of basis change in every training iteration would make the training process slow. But this is certainly something that can be improved in future work. **Question 1:** Graphcodes can be constructed by taking slices vertically or horizontally, which we have also implemented in the software. Both choices of direction lead to different graphs, but both of them contain the same complete information of the two-parameter persistence module. One could also consider taking diagonal slices. **Question 2:** Yes, one could compute the subgraph for any two consecutive slices independently of each other in parallel. For computing graphcodes from arbitrary two-parameter persistence modules, that would probably be the most efficient option. But in the case of persistence modules coming from bifiltered simplicial complexes, every simplicial slice is contained in the following one. What makes the graphcode algorithm so efficient is that we don't have to compute the persistence of all slices successively; we can compute the graphcode of all slices at once in the same asymptotic time as computing the persistence of the top slice. **Question 3:** This could theoretically be extended to persistence modules over any poset of the form $P\times \mathbb{R}$, where the $\mathbb{R}$ part is compressed by taking barcodes and maps between bars over different points in $P$ are connected by edges in a similar fashion. **Question 4:** This is an interesting question, but unfortunately we don't have an answer at the moment. --- Rebuttal Comment 1.1: Comment: Thank you very much for your answers. I'm really glad that you performed experiments on the sensitivity of the method to different choices of basis. I would really love to see a brief discussion of this in the paper. That said, I'm not convinced by your first argument: I need data that corroborate your point.
Although theoretically valid, I would need to see whether networks are really able to extract information independently of the choice of basis. For example, experiments comparing accuracies with respect to the number of "augmented" representations with different bases would be really clarifying. Regarding datasets, I'm not complaining about the results, but about how the discussion goes in the paper. I'm planning to raise my score, but I would need to see a more critical discussion. As you say, you are starting a new research topic! In particular, I'm talking about sentences like this one: *Moreover, we believe that the performance of our pipeline relative to the other methods is worse on these datasets compared to the following experiments because the training sets are rather small, which is disadvantageous for our GNN architecture*. Also, for improved transparency of the results, I would add some results for usual (non-topological) graph neural networks, and discuss the differences between topological and non-topological methods. If you really think that, with more examples, this will work better than other SOTA methods, then I would require you to do experiments with bigger datasets, like ogb-lsc: https://ogb.stanford.edu/docs/lsc/pcqm4mv2/ This said, I'm really excited about this avenue of multiparameter persistent homology and machine learning. I'm going to trust that you will make the changes requested, and I'll raise my score to 6. Also, please address the comments I made that were not reflected in your answer (e.g., the ones about the typos). P.S.: Could you please include a reference for the theorem in the general rebuttal? I'm interested in that theorem and its formal formulation.
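The basis-augmentation idea debated in this thread (show the network several graphcodes of the same instance under different bases) can be illustrated with a small, self-contained sketch. This is my own toy illustration over GF(2), not the authors' `do-random-shuffle` implementation: cycles are sets of simplex ids, and a random sequence of invertible row operations changes the basis while provably preserving the cycle space the basis spans.

```python
import itertools
import random

def random_basis_change(cycles, seed=0):
    """Apply random GF(2) row operations (transvections) to a list of
    cycles, each given as a frozenset of simplex ids.  The cycle space
    they span is unchanged, but the basis -- and hence the edges of the
    resulting graphcode -- can differ."""
    rng = random.Random(seed)
    cycles = [set(c) for c in cycles]
    n = len(cycles)
    for _ in range(2 * n):
        i, j = rng.sample(range(n), 2)
        cycles[i] ^= cycles[j]   # over GF(2): add cycle j to cycle i
    return [frozenset(c) for c in cycles]

def gf2_span(vectors):
    """All GF(2) linear combinations of the given cycles (brute force)."""
    span = set()
    for r in range(len(vectors) + 1):
        for combo in itertools.combinations(vectors, r):
            acc = set()
            for c in combo:
                acc ^= set(c)
            span.add(frozenset(acc))
    return span

# Toy cycle basis of one slice (simplex ids are made up for illustration).
basis = [frozenset({1, 2, 3}), frozenset({3, 4, 5}), frozenset({1, 5, 6})]
shuffled = random_basis_change(basis)

# Different basis, same cycle space:
assert gf2_span(shuffled) == gf2_span(basis)
```

An augmentation loop would call `random_basis_change` with a fresh seed per training iteration and rebuild the graphcode edges from the new representatives; the rebuttal's caveat is that naive shuffles of this kind tend to densify the edge set.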
Summary: This paper introduces "Graphcode", a new representation for summarizing the topological properties of datasets filtered along two parameters. Graphcodes are based on persistent homology but aim to provide a more interpretable and efficient summary than existing multi-parameter topological descriptors. The key idea is to collect one-parameter sliced barcodes with basis-choice-dependent mappings between consecutive persistence diagrams, which results in a layered graph structure (bipartite between consecutive layers), the so-called Graphcode summary. The authors present an efficient algorithm to compute Graphcodes and demonstrate how they can be directly used as input to graph neural networks for machine learning tasks. Experiments on several datasets show that Graphcodes have competitive or superior performance compared to other existing topological descriptors of multi-parameter persistence, especially on tasks with clear topological signals. Strengths: - A new topological representation that captures two-parameter topological information. - An efficient computation algorithm that scales well to large datasets. Weaknesses: - Graphcodes are not topologically invariant, depending on the choice of basis. The impact of this is not fully explored. - Theoretical analysis of the expressiveness/information captured by Graphcodes is limited. Technical Quality: 2 Clarity: 2 Questions for Authors: - How sensitive are the results to the choice of basis used in constructing the Graphcode? Is there a way to make this choice optimal, or at least consistent/stable locally? - Since the Graphcode is not topologically invariant, one concern is whether a GNN based on such a method is still permutation invariant/equivariant. If yes, more discussion or justification should be included in the main text. If not, the potential issues or constraints in application should be discussed. - The theoretical foundation and analysis of what information Graphcodes capture compared to other descriptors is somewhat limited.
Can this be expanded? Confidence: 4 Soundness: 2 Presentation: 2 Contribution: 2 Limitations: As the authors already mention in the paper, one main concern arises from the fact that the Graphcode, as a topological descriptor, is not invariant. It might also break the stability property of persistence modules. A GNN based on Graphcodes might no longer be permutation invariant or equivariant. More discussion of such limitations, in both theory and application, would strengthen the paper. Flag For Ethics Review: ['No ethics review needed.'] Rating: 4 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. **Question 1:** The standard graphcode algorithm chooses a basis based on the reduction performed by the slicewise persistence algorithm. So in some sense the algorithm chooses a specific basis. In the uploaded software we also added an option (do-exhaustive-reduction) to compute a basis based on minimal lexicographic cycles and an option (do-random-shuffle) to make random changes to the basis. The minimal lexicographic cycles didn't change the classification accuracy, while the random bases led to slightly worse results. The problem with the way we perform random basis changes is that it strongly increases the number of edges. This can be explained by the way the reduction in the persistence algorithm works, which leads to a relatively small graph. But we think that there is room for improvement in shuffling the bases without massively increasing the number of edges. Overall, changing the bases did not dramatically affect the results. **Question 2:** The permutation invariance of graph neural networks is a feature of the architecture itself and holds independently of the input graph. But if the question is whether two different graphs representing the same persistence module can lead to different classification results, then theoretically this can happen. One way to deal with this problem is to randomly change the basis of the graphcode in every iteration of the training process to teach the neural network to extract the isomorphism type of the persistence module independently of the basis. As explained in the previous paragraph, the current implementation of this random basis change increases the number of edges too much, which makes the whole process slow. But this is something that can be worked out, and we are optimistic that the expertise of the machine learning community in particular can help a lot to improve the architecture.
**Question 3:** Please see "Expressivity and non-invariance of graphcodes" in the overall rebuttal above. --- Rebuttal Comment 1.1: Comment: Thank you for answering the questions. I will keep my score.
Summary: The paper introduces "graphcodes," a novel multi-scale summary of the topological properties of datasets using graph neural networks. Unlike traditional persistent homology, which uses a single parameter, graphcodes handle datasets filtered along two real-valued scale parameters, resulting in a more informative and interpretable summary. The paper outlines how graphcodes can be efficiently computed and integrated into machine learning pipelines, demonstrating improved classification accuracy over state-of-the-art approaches. Strengths: The paper introduces graphcodes, extending persistent homology to two-parameter scales, offering a novel and efficient method that integrates seamlessly into machine learning pipelines via graph neural networks. It demonstrates superior classification accuracy on various datasets compared to existing methods and provides an interpretable summary of topological features. The approach claims efficient computation comparable to one-parameter summaries, adding value through its innovation and practical performance. Weaknesses: From the experimental results, it is hard to see much practical benefit in using graphcodes. Technical Quality: 2 Clarity: 2 Questions for Authors: 1. How is the graph constructed in graphcode? Is it a directed graph? 2. From Table 1, MP-HSM-C performs much better than GC. Moreover, as stated in Line 251, all approaches terminate within a few seconds, even on the devices mentioned in Line 240. So do we really need a method that runs a few seconds faster at the cost of losing considerable accuracy? 3. As stated in Line 227, GC-NE can be viewed as Perslay. It seems that Perslay is already accurate and efficient enough. From my perspective, it is more practical to use Perslay rather than graphcode, considering the experimental results provided in this paper. Besides, how does GC-NE perform on graph classification tasks? 4. As mentioned in line 172, the graphcode depends on the chosen barcode bases.
I wonder whether there is a risk that different choices of barcode bases could impact the robustness and consistency of GC. Confidence: 3 Soundness: 2 Presentation: 2 Contribution: 1 Limitations: See Questions Flag For Ethics Review: ['No ethics review needed.'] Rating: 5 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. **Question 1:** The construction of the graph is explained in Section 3 and Appendices B and D. The vertices are the points of the persistence diagrams (topological cycles) of the individual slices, where two points in consecutive slices are connected by an edge if the representative cycle of one point maps to the representative cycle of the other point. See Figure 3 for a schematic depiction. We only use undirected edges, but using directed edges pointing in the direction of the filtration is certainly something that can be explored in the future. **Question 2:** Multi-parameter persistent homology, especially in combination with machine learning, is still in its infancy. Only recent advances in algorithms made it possible to apply these methods to larger datasets. Therefore, there is still a lack of good benchmark datasets for multi-parameter topological descriptors. We used the TUDataset for comparison because it was the most common dataset in the related literature on combining multi-parameter persistent homology and machine learning, but we don't see any evidence that topological descriptors are particularly well suited for these datasets. We believe that these datasets were chosen mainly because of their small size. It is true that all approaches terminate within a few seconds, but for example the MUTAG dataset has only 188 samples. As shown in Table 2, on larger datasets other methods took significantly longer than graphcodes. **Question 3:** The original PersLay architecture is constructed for one-parameter persistent homology. So GC-NE is not exactly the same as PersLay. To our knowledge, the adaptation of PersLay to two-parameter persistent homology has also not been considered before. A stronger topological descriptor is only beneficial in the presence of enough topological signal. On some datasets weaker topological descriptors will be sufficient and the edges will not contribute much to the accuracy.
But, for example, on the shape dataset in Table 2 the edges improve the accuracy by 4.1\% at basically the same computational cost. **Question 4:** Please see "Expressivity and non-invariance of graphcodes" in the overall rebuttal above. --- Rebuttal Comment 1.1: Comment: If there is no evidence that topological descriptors are particularly well suited for these datasets, what makes Graphcode better than the compared methods? Besides, I cannot find a clear response to Question 4. --- Reply to Comment 1.1.1: Comment: We do not claim that graphcodes are better than other topological descriptors on the TUDatasets. They outperform most of the other methods on some of the TUDatasets, and they don't do well on others. If you look at this table, you see that there is no method that outperforms all the other methods. But on all the datasets we considered except the TUDatasets, graphcodes outperformed all the other methods in computation time and classification accuracy. Although these might not be "real world" datasets, given their more topological/geometric nature, it is still a strong proof of concept that graphcodes are very good at picking up the topology or geometry of datasets. Of course, one can ask the broader question of how useful topological methods are on real-world datasets in general. But as mentioned before, topological data analysis is a young and developing field, and it can take time for methods to be successfully applied in practice. It also took quite some time for neural networks to be successfully applied in practice. **Regarding Question 4:** Theoretically, there is a risk that the basis dependence can affect the robustness and consistency. But in practice graphcodes still perform well on the geometric datasets. We propose to deal with this dependence on representation by showing the neural network graphcodes with respect to different choices of basis for each instance, to teach it to extract the relevant information independently of the basis.
That such an approach can work is demonstrated, for example, in the latest AlphaFold paper https://www.nature.com/articles/s41586-024-07487-w where they got rid of the equivariance and invariance constraints of the previous version. We implemented options in our graphcode software package to compute an alternative basis based on minimal lexicographic cycles and an option to randomly change the basis. The minimal lexicographic cycle bases did not affect the results at all, while the random bases did slightly worse. There are still problems with the random change of basis, as it strongly increases the number of edges, but there are many avenues for future research to improve the pipeline.
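To make the graph construction discussed in this thread concrete, here is a minimal sketch of the layered-graph data structure: each layer holds the persistence pairs of one slice, and an edge links a point in slice k to a point in slice k+1 when their representative cycles correspond under the vertical inclusion. The class, method names, and diagram values below are my own illustration, not the authors' C++ library; the edge is supplied by hand rather than computed by matrix reduction.

```python
from dataclasses import dataclass, field

@dataclass
class Graphcode:
    """Layered graph: layers[k] holds the persistence pairs (birth, death)
    of slice k; an edge (k, i, j) says that vertex i of layer k maps to
    vertex j of layer k + 1 under the vertical inclusion."""
    layers: list = field(default_factory=list)
    edges: list = field(default_factory=list)

    def add_layer(self, diagram):
        """Append the persistence diagram of the next slice."""
        self.layers.append(list(diagram))
        return len(self.layers) - 1

    def add_edge(self, k, i, j):
        """Connect point i of slice k with point j of slice k + 1."""
        assert 0 <= k < len(self.layers) - 1
        self.edges.append((k, i, j))

gc = Graphcode()
gc.add_layer([(0.1, 0.9), (0.2, 0.4)])  # diagram of slice 0 (toy values)
gc.add_layer([(0.1, 1.2)])              # diagram of slice 1
gc.add_edge(0, 0, 0)  # the long bar of slice 0 persists into slice 1
```

In this form a graphcode maps directly onto GNN input conventions: the (birth, death) pairs become vertex features and the `(k, i, j)` triples become the edge index of a layered graph.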
Summary: This paper proposes a computationally fast method to extract information from a 2-parameter persistence module. The authors consider a 2-parameter persistence module as slices of 1-parameter persistence modules. Each 1-parameter persistence module can be represented as a barcode. The authors consider these barcodes as bases and represent the maps between the barcodes, induced by the bifiltration, as an attributed graph. This attributed graph is used as input to a GNN architecture. The information extracted by this method is not a topological invariant; however, empirical results show that the method gives comparable results on TU datasets and better results on some synthetic datasets as compared to existing multiparameter methods. Strengths: 1. Overall, the paper is well-written and organized. 2. The problem that the authors are trying to tackle is a hard one: capturing relevant information in 2-parameter persistence in the absence of a complete invariant. 3. The method proposed in the paper has substantial theoretical backing and an algorithmic component. 4. The proposed method is computationally fast, as indicated by results in Table 2. Weaknesses: 1. The authors have shown experiments on TU Datasets and some synthetic datasets. GC seems to perform well on the synthetic datasets; however, the performance on TU datasets is not particularly impressive. The authors claim that it might be because there are not enough topological signals to capture in those datasets, which is not fully convincing. 2. The exact experimental setup (hyperparameter choices, number of GAT layers, etc.) is not described in the paper, not even in the appendix. Technical Quality: 3 Clarity: 3 Questions for Authors: 1. Was there a specific reason to choose GAT as the GNN architecture? How does GC perform with other GNN architectures? 2. Was there a specific reason to choose max-pooling? How does the model perform with sum/average pooling? 3.
It is interesting to see from Table 3 and Table 4 that adding edges in GC amounts to a marginal increase in performance. Does this mean that the information carried by the maps between barcodes is not as important? 4. How does the proposed method stand, with respect to expressivity, compared to other multiparameter persistent homology methods? Confidence: 5 Soundness: 3 Presentation: 3 Contribution: 3 Limitations: Yes, the authors have discussed limitations. Flag For Ethics Review: ['No ethics review needed.'] Rating: 7 Code Of Conduct: Yes
Rebuttal 1: Rebuttal: We thank the reviewer for their comments. **Questions 1 and 2:** The idea behind choosing a GAT network is that topological (homological) features that are persistent across both scale parameters are reasonable candidates for the topological signal, and that the network should learn to pay more attention to the features of high persistence in consecutive slices that are connected by an edge. We also tested standard graph convolution networks and average pooling, which both performed worse. We want to point out that the proposed architecture is meant to serve as a proof of concept, and we don't want to claim that it is close to optimal. There is certainly a lot of room for improvement, and we hope that the expertise of the machine learning community in particular will help to advance this architecture. **Question 3:** The graphcodes are very strong topological descriptors, but this expressivity comes at the cost of basis dependence. We believe that this tradeoff is more favourable for graphcodes if there is more topological signal present in the data. The random point process data does not really have a significant topological signal, and many of the Orbit point clouds also look very much like random noise. Therefore, the edges of the graphcode don't contribute as much as they do in the shape dataset. But for the Orbit100k dataset they add another 0.8\%, which might be significant given the already very high classification accuracy. Also on the TUDatasets, given the small size of these datasets and the at least questionable topological signal, we think that the tradeoff between expressivity and (basis) consistency becomes unfavourable for graphcodes. But we want to point out that, although graphcodes are not dominant on the TUDatasets, on some of those datasets graphcodes still outperform some of the other methods. **Question 4:** Please see "Expressivity and non-invariance of graphcodes" in the overall rebuttal above.
--- Rebuttal Comment 1.1: Comment: I would like to thank the authors for their response. I have read the responses and would like to stick to my score.
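The GAT-plus-max-pooling combination discussed in this thread can be illustrated with a small, dependency-free sketch. This is a generic single-head graph-attention pass followed by a max-pooling readout, not the paper's architecture; the graph, features, and attention weights are toy values chosen only for illustration.

```python
import math

def gat_layer(features, edges, att):
    """One single-head graph-attention pass (GAT-style): each vertex
    re-weights its neighbours by softmax-normalised attention scores.
    `features`: list of float vectors, `edges`: undirected pairs,
    `att`: a fixed toy attention vector of length 2 * feature dim."""
    nbrs = {i: [i] for i in range(len(features))}   # self-loops
    for u, v in edges:
        nbrs[u].append(v)
        nbrs[v].append(u)

    def score(i, j):
        cat = features[i] + features[j]             # concatenation
        s = sum(a * x for a, x in zip(att, cat))
        return s if s > 0 else 0.2 * s              # LeakyReLU

    out = []
    for i in range(len(features)):
        exps = [math.exp(score(i, j)) for j in nbrs[i]]
        z = sum(exps)                               # softmax normaliser
        dim = len(features[i])
        out.append([sum(e / z * features[j][d] for e, j in zip(exps, nbrs[i]))
                    for d in range(dim)])
    return out

def max_pool(features):
    """Graph-level readout: coordinatewise maximum over all vertices."""
    return [max(col) for col in zip(*features)]

# Three graphcode vertices on a path graph, with 2-d toy features.
h = gat_layer([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
              edges=[(0, 1), (1, 2)], att=[0.5, -0.5, 0.5, -0.5])
g = max_pool(h)   # a single vector summarising the whole graph
```

The max-pool readout keeps only the strongest activation per coordinate, which matches the intuition in the rebuttal that a few high-persistence features should dominate the graph-level prediction.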
Rebuttal 1: Rebuttal: First, we want to thank all reviewers for their comments. As some points came up in more than one review, we address them here in a general rebuttal and address more specific questions in individual rebuttals below. **Expressivity and non-invariance of graphcodes:** The most important point we want to address is the critique of the non-invariance of graphcodes, which came up in all of the reviews. It is a well-known theorem in the field of topological data analysis that there cannot exist a complete (capturing all the information) and discrete invariant of multi-parameter persistence modules. Some reviewers asked about the expressivity or the information captured by graphcodes. The graphcode of a two-parameter persistence module is a complete descriptor, which means that it captures all the information of the two-parameter persistent homology up to isomorphism. It is easy to reconstruct the initial module up to isomorphism from the graphcode. It is just an alternative compact representation of the module that is suitable as input for graph neural networks. Other established methods construct a discrete invariant from a module, which comes at the cost of significant loss of information. The theorem mentioned before implies that one is forced to make this tradeoff between invariance and loss of information, and the lack of invariance of graphcodes has to be put into this context. From this perspective, on appropriate datasets (like the shape dataset), the lack of invariance is a strength rather than a shortcoming, since it is the only way we can capture all the information. A two-parameter persistence module can be equivalently represented by two graphs with different edges (the vertices and vertex labels are actually invariant), but both graphs contain exactly the same information.
**Graphcodes as inputs for neural networks:** Graphcodes are a novel approach to combining multi-parameter persistent homology and machine learning by giving up on invariance for the benefit of providing complete two-parameter persistence information to a neural network. Since this has not been considered before, there are, as the reviewers have pointed out, a lot of open questions, such as: How well can a graph neural network architecture learn to extract the isomorphism type of the underlying module from the graphcode and use this information to make predictions? What is the best architecture to learn from graphcodes? How can the bases of the graphcodes be changed efficiently during the learning process to teach the neural network to extract the isomorphism type of a module independently of the basis? As acknowledged by all the reviewers, we built the theoretical foundation of graphcodes and an efficient algorithm to compute them, and proposed an architecture that works well on some datasets, which provides a proof of concept. Hence, we think that these open questions are not a reason to reject the paper. Every line of research has to start at some point. Since most of these open questions are on the machine learning side of the pipeline, we believe that it would be very beneficial to bring this work to the attention of the machine learning community.
NeurIPS_2024_submissions_huggingface
2024